https://en.wikipedia.org/wiki/Anarchism
Anarchism is a political philosophy and movement that seeks to abolish all institutions that perpetuate authority, coercion, or hierarchy, primarily targeting the state and capitalism. Anarchism advocates for the replacement of the state with stateless societies and voluntary free associations. A historically left-wing movement, anarchism is usually described as the libertarian wing of the socialist movement (libertarian socialism).
Although traces of anarchist ideas are found all throughout history, modern anarchism emerged from the Enlightenment. During the latter half of the 19th and the first decades of the 20th century, the anarchist movement flourished in most parts of the world and had a significant role in workers' struggles for emancipation. Various anarchist schools of thought formed during this period. Anarchists have taken part in several revolutions, most notably in the Paris Commune, the Russian Civil War and the Spanish Civil War, whose end marked the end of the classical era of anarchism. In the last decades of the 20th and into the 21st century, the anarchist movement has been resurgent once more, growing in popularity and influence within anti-capitalist, anti-war and anti-globalisation movements.
Anarchists employ diverse approaches, which may be generally divided into revolutionary and evolutionary strategies; there is significant overlap between the two. Evolutionary methods seek to prefigure what an anarchist society might be like, while revolutionary tactics, which have historically taken a violent turn, aim to overthrow authority and the state. Many facets of human civilisation have been influenced by anarchist theory, critique, and praxis.
Etymology, terminology, and definition
The etymological origin of anarchism is from the Ancient Greek anarkhia (ἀναρχία), meaning "without a ruler", composed of the prefix an- ("without") and the word arkhos ("leader" or "ruler"). The suffix -ism denotes the ideological current that favours anarchy. Anarchism appears in English from 1642 as anarchisme and anarchy from 1539; early English usages emphasised a sense of disorder. Various factions within the French Revolution labelled their opponents as anarchists, although few such accused shared many views with later anarchists. Many revolutionaries of the 19th century such as William Godwin (1756–1836) and Wilhelm Weitling (1808–1871) would contribute to the anarchist doctrines of the next generation but did not use anarchist or anarchism in describing themselves or their beliefs.
The first political philosopher to call himself an anarchist was Pierre-Joseph Proudhon (1809–1865), marking the formal birth of anarchism in the mid-19th century. Since the 1890s and beginning in France, libertarianism has often been used as a synonym for anarchism; its use as a synonym is still common outside the United States. Some usages of libertarianism refer to individualistic free-market philosophy only, and free-market anarchism in particular is termed libertarian anarchism.
While the term libertarian has been largely synonymous with anarchism, its meaning has more recently been diluted by wider adoption from ideologically disparate groups, including both the New Left and libertarian Marxists, who do not associate themselves with authoritarian socialists or a vanguard party, and extreme cultural liberals, who are primarily concerned with civil liberties. Additionally, some anarchists use libertarian socialist to avoid anarchism's negative connotations and emphasise its connections with socialism. Anarchism is broadly used to describe the anti-authoritarian wing of the socialist movement. Anarchism is contrasted to socialist forms which are state-oriented or from above. Scholars of anarchism generally highlight anarchism's socialist credentials and criticise attempts at creating dichotomies between the two. Some scholars describe anarchism as having many influences from liberalism, and being both liberal and socialist but more so. Many scholars reject anarcho-capitalism as a misunderstanding of anarchist principles.
While opposition to the state is central to anarchist thought, defining anarchism is not an easy task, as there is considerable discussion among scholars and anarchists on the matter and various currents perceive anarchism slightly differently. Major definitional elements include the will for a non-coercive society, the rejection of the state apparatus, the belief that human nature allows humans to exist in or progress toward such a non-coercive society, and a suggestion on how to act to pursue the ideal of anarchy.
History
Pre-modern era
The most notable precursors to anarchism in the ancient world were in China and Greece. In China, philosophical anarchism (the discussion on the legitimacy of the state) was delineated by Taoist philosophers Zhuang Zhou and Laozi. Alongside Stoicism, Taoism has been said to have had "significant anticipations" of anarchism.
Anarchic attitudes were also articulated by tragedians and philosophers in Greece. Aeschylus and Sophocles used the myth of Antigone to illustrate the conflict between laws imposed by the state and personal autonomy. Socrates questioned Athenian authorities constantly and insisted on the right of individual freedom of conscience. Cynics dismissed human law (nomos) and associated authorities while trying to live according to nature (physis). Stoics were supportive of a society based on unofficial and friendly relations among its citizens without the presence of a state.
In medieval Europe, there was no anarchistic activity except among some ascetic religious movements. These, along with similar Muslim movements, later gave birth to religious anarchism. In the Sasanian Empire, Mazdak called for an egalitarian society and the abolition of monarchy, only to be soon executed by Emperor Kavad I. In Basra, religious sects preached against the state. In Europe, various religious sects developed anti-state and libertarian tendencies.
Renewed interest in antiquity during the Renaissance and in private judgment during the Reformation restored elements of anti-authoritarian secularism in Europe, particularly in France. Enlightenment challenges to intellectual authority (secular and religious) and the revolutions of the 1790s and 1848 all spurred the ideological development of what became the era of classical anarchism.
Modern era
During the French Revolution, partisan groups such as the Enragés and the sans-culottes marked a turning point in the fermentation of anti-state and federalist sentiments. The first anarchist currents developed throughout the 19th century as William Godwin espoused philosophical anarchism in England, morally delegitimising the state, Max Stirner's thinking paved the way to individualism, and Pierre-Joseph Proudhon's theory of mutualism found fertile soil in France. By the late 1870s, various anarchist schools of thought had become well-defined, and a wave of then-unprecedented globalisation occurred from 1880 to 1914. This era of classical anarchism lasted until the end of the Spanish Civil War and is considered the golden age of anarchism.
Drawing from mutualism, Mikhail Bakunin founded collectivist anarchism and entered the International Workingmen's Association, a working-class association later known as the First International, which formed in 1864 to unite diverse revolutionary currents. The International became a significant political force, with Karl Marx being a leading figure and a member of its General Council. Bakunin's faction (the Jura Federation) and Proudhon's followers (the mutualists) opposed state socialism, advocating political abstentionism and small property holdings. After bitter disputes, the Bakuninists were expelled from the International by the Marxists at the 1872 Hague Congress. Anarchists were treated similarly in the Second International, being ultimately expelled in 1896. Bakunin predicted that if revolutionaries gained power by Marx's terms, they would end up the new tyrants of workers. In response to their expulsion from the First International, anarchists formed the St. Imier International. Under the influence of Peter Kropotkin, a Russian philosopher and scientist, anarcho-communism overlapped with collectivism. Anarcho-communists, who drew inspiration from the 1871 Paris Commune, advocated for free federation and for the distribution of goods according to one's needs.
During this time, a minority of anarchists adopted tactics of revolutionary political violence, known as propaganda of the deed. The dismemberment of the French socialist movement into many groups and the execution and exile of many Communards to penal colonies following the suppression of the Paris Commune favoured individualist political expression and acts. Even though many anarchists distanced themselves from these terrorist acts, infamy came upon the movement and attempts were made to prevent anarchists from immigrating to the US, including the Immigration Act of 1903, also called the Anarchist Exclusion Act. Illegalism was another strategy which some anarchists adopted during this period.
By the turn of the 20th century, the terrorist movement had died down, giving way to anarchist communism and syndicalism, while anarchism had spread all over the world. In China, small groups of students imported the humanistic pro-science version of anarcho-communism. Tokyo was a hotspot for rebellious youth from East Asian countries, who moved to the Japanese capital to study. In Latin America, Argentina was a stronghold for anarcho-syndicalism, where it became the most prominent left-wing ideology. Anarchists were involved in the Strandzha Commune and the Kruševo Republic, established in Macedonia during the Ilinden–Preobrazhenie Uprising of 1903, and in the Mexican Revolution of 1910. The revolutionary wave of 1917–23 saw varying degrees of active participation by anarchists.
Despite concerns, anarchists enthusiastically participated in the Russian Revolution in opposition to the White movement, especially in the Makhnovshchina. Seeing the victories of the Bolsheviks in the October Revolution and the resulting Russian Civil War, many workers and activists turned to Communist parties, which grew at the expense of anarchism and other socialist movements. In France and the United States, members of major syndicalist movements such as the General Confederation of Labour and the Industrial Workers of the World left their organisations and joined the Communist International. However, anarchists met harsh suppression after the Bolshevik government had stabilised, including during the Kronstadt rebellion. Several anarchists from Petrograd and Moscow fled to Ukraine, before the Bolsheviks crushed the anarchist movement there too. With the anarchists being repressed in Russia, two new antithetical currents emerged, namely platformism and synthesis anarchism. The former sought to create a coherent group that would push for revolution while the latter were against anything that would resemble a political party.
In the Spanish Civil War of 1936–39, anarchists and syndicalists (CNT and FAI) once again allied themselves with various currents of leftists. A long tradition of Spanish anarchism led to anarchists playing a pivotal role in the war, and particularly in the Spanish Revolution of 1936. In response to the army rebellion, an anarchist-inspired movement of peasants and workers, supported by armed militias, took control of Barcelona and of large areas of rural Spain, where they collectivised the land. The Soviet Union provided some limited assistance at the beginning of the war, but the result was a bitter fight between communists and other leftists in a series of events known as the May Days, as Joseph Stalin asserted Soviet control of the Republican government, ending in another defeat of anarchists at the hands of the communists.
Post-WWII
By the end of World War II, the anarchist movement had been severely weakened. The 1960s witnessed a revival of anarchism, likely caused by a perceived failure of Marxism–Leninism and tensions built by the Cold War. During this time, anarchism found a presence in other movements critical towards both capitalism and the state such as the anti-nuclear, environmental, and peace movements, the counterculture of the 1960s, and the New Left. It also saw a transition from its previous revolutionary nature to provocative anti-capitalist reformism. Anarchism became associated with punk subculture as exemplified by bands such as Crass and the Sex Pistols. The established feminist tendencies of anarcha-feminism returned with vigour during the second wave of feminism. Black anarchism began to take form at this time and influenced anarchism's move from a Eurocentric demographic. This coincided with its failure to gain traction in Northern Europe and its unprecedented height in Latin America.
Around the turn of the 21st century, anarchism grew in popularity and influence within anti-capitalist, anti-war and anti-globalisation movements. Interest in the anarchist movement developed alongside momentum in the anti-globalisation movement, whose leading activist networks were anarchist in orientation. Anarchists became known for their involvement in protests against the World Trade Organization (WTO), the Group of Eight and the World Economic Forum. During the protests, ad hoc leaderless anonymous cadres known as black blocs engaged in rioting, property destruction and violent confrontations with the police. Other organisational tactics pioneered at this time include affinity groups, security culture and the use of decentralised technologies such as the Internet. A significant event of this period was the confrontations at the 1999 Seattle WTO conference. As the movement shaped 21st century radicalism, wider embrace of anarchist principles signaled a revival of interest. Contemporary news coverage which emphasizes black bloc demonstrations has reinforced anarchism's historical association with chaos and violence.
While having revolutionary aspirations, many contemporary forms of anarchism are not confrontational. Instead, they are trying to build an alternative way of social organization (following the theories of dual power), based on mutual interdependence and voluntary cooperation, for instance in groups such as Food Not Bombs and in self-managed social centers.
Anarchism's publicity has also led more scholars in fields such as anthropology and history to engage with the anarchist movement, although contemporary anarchism favours actions over academic theory. Anarchist ideas have been influential in the development of the Zapatistas in Mexico and the Democratic Federation of Northern Syria, more commonly known as Rojava, a de facto autonomous region in northern Syria.
Schools of thought
Anarchist schools of thought have been generally grouped into two main historical traditions, social anarchism and individualist anarchism, owing to their different origins, values and evolution. The individualist current emphasises negative liberty in opposing restraints upon the free individual, while the social current emphasises positive liberty in aiming to achieve the free potential of society through equality and social ownership. In a chronological sense, anarchism can be segmented by the classical currents of the late 19th century and the post-classical currents (anarcha-feminism, green anarchism, and post-anarchism) developed thereafter.
Anarchism's emphasis on anti-capitalism, egalitarianism, and the extension of community and individuality sets it apart from anarcho-capitalism and other types of economic libertarianism. Anarchism is usually placed on the far-left of the political spectrum, though some who reject state authority, such as anarcho-capitalists, do so from conservative principles. Much of its economics and legal philosophy reflect anti-authoritarian, anti-statist, libertarian, and radical interpretations of left-wing and socialist politics such as collectivism, communism, individualism, mutualism, and syndicalism, among other libertarian socialist economic theories.
As anarchism does not offer a fixed body of doctrine from a single particular worldview, many anarchist types and traditions exist and varieties of anarchy diverge widely. One reaction against sectarianism within the anarchist milieu was anarchism without adjectives, a call for toleration and unity among anarchists first adopted by Fernando Tarrida del Mármol in 1889 in response to the bitter debates of anarchist theory at the time. Despite separation, the various anarchist schools of thought are not seen as distinct entities but rather as tendencies that intermingle and are connected through a set of shared principles such as autonomy, mutual aid, anti-authoritarianism and decentralisation.
Beyond the specific factions of anarchist political movements which constitute political anarchism lies philosophical anarchism, which holds that the state lacks moral legitimacy, without necessarily accepting the imperative of revolution to eliminate it. A component especially of individualist anarchism, philosophical anarchism may tolerate the existence of a minimal state but claims that citizens have no moral obligation to obey government when it conflicts with individual autonomy. Arguments for philosophical anarchism have drawn on philosophical currents as diverse as Objectivism and Kantianism, including Wolff's defence of anarchism against formal methods of legitimating the state. Anarchism pays significant attention to moral arguments, since ethics have a central role in anarchist philosophy. Some anarchists have also espoused political nihilism.
Classical
Inceptive currents among classical anarchist currents were mutualism and individualism. They were followed by the major currents of social anarchism (collectivist, communist and syndicalist). They differ on organisational and economic aspects of their ideal society.
Mutualism is an 18th-century economic theory that was developed into anarchist theory by Pierre-Joseph Proudhon. Its aims include "abolishing the state", reciprocity, free association, voluntary contract, federation and monetary reform of both credit and currency that would be regulated by a bank of the people. Mutualism has been retrospectively characterised as ideologically situated between individualist and collectivist forms of anarchism. In What Is Property? (1840), Proudhon first characterised his goal as a "third form of society, the synthesis of communism and property." Collectivist anarchism is a revolutionary socialist form of anarchism commonly associated with Mikhail Bakunin. Collectivist anarchists advocate collective ownership of the means of production which is theorised to be achieved through violent revolution and that workers be paid according to time worked, rather than goods being distributed according to need as in communism. Collectivist anarchism arose alongside Marxism but rejected the dictatorship of the proletariat despite the stated Marxist goal of a collectivist stateless society.
Anarcho-communism is a theory of anarchism that advocates a communist society with common ownership of the means of production, held by a federal network of voluntary associations, with production and consumption based on the guiding principle "From each according to his ability, to each according to his need." Anarcho-communism developed from radical socialist currents after the French Revolution but was first formulated as such in the Italian section of the First International. It was later expanded upon in the theoretical work of Peter Kropotkin, whose specific style would go on to become the dominant view among anarchists by the late 19th century. Anarcho-syndicalism is a branch of anarchism that views labour syndicates as a potential force for revolutionary social change, replacing capitalism and the state with a new society democratically self-managed by workers. The basic principles of anarcho-syndicalism are direct action, workers' solidarity and workers' self-management.
Individualist anarchism is a set of several traditions of thought within the anarchist movement that emphasise the individual and their will over any kinds of external determinants. Early influences on individualist forms of anarchism include William Godwin, Max Stirner, and Henry David Thoreau. Through many countries, individualist anarchism attracted a small yet diverse following of Bohemian artists and intellectuals as well as young anarchist outlaws in what became known as illegalism and individual reclamation.
Post-classical and contemporary
Anarchism has continued to generate many philosophies and movements, at times eclectic, drawing upon various sources and combining disparate concepts to create new philosophical approaches. The anti-capitalist tradition of classical anarchism has remained prominent within contemporary currents.
Various anarchist groups, tendencies, and schools of thought exist today, making it difficult to describe the contemporary anarchist movement. While theorists and activists have established "relatively stable constellations of anarchist principles", there is no consensus on which principles are core and commentators describe multiple anarchisms, rather than a singular anarchism, in which common principles are shared between schools of anarchism while each group prioritizes those principles differently. Gender equality can be a common principle, although it ranks as a higher priority to anarcha-feminists than anarcho-communists.
Anarchists are generally committed against coercive authority in all forms, namely "all centralized and hierarchical forms of government (e.g., monarchy, representative democracy, state socialism, etc.), economic class systems (e.g., capitalism, Bolshevism, feudalism, slavery, etc.), autocratic religions (e.g., fundamentalist Islam, Roman Catholicism, etc.), patriarchy, heterosexism, white supremacy, and imperialism." Anarchist schools disagree on the methods by which these forms should be opposed.
Tactics
Anarchists' tactics take various forms but in general serve two major goals, namely, to first oppose the Establishment and secondly to promote anarchist ethics and reflect an anarchist vision of society, illustrating the unity of means and ends. A broad categorisation can be made between aims to destroy oppressive states and institutions by revolutionary means on one hand and aims to change society through evolutionary means on the other. Evolutionary tactics embrace nonviolence and take a gradual approach to anarchist aims, although there is significant overlap between the two.
Anarchist tactics have shifted during the course of the last century. Anarchists during the early 20th century focused more on strikes and militancy while contemporary anarchists use a broader array of approaches.
Classical era
During the classical era, anarchists had a militant tendency. Not only did they confront state armed forces, as in Spain and Ukraine, but some of them also employed terrorism as propaganda of the deed. Assassination attempts were carried out against heads of state, some of which were successful. Anarchists also took part in revolutions. Many anarchists, especially the Galleanists, believed that these attempts would be the impetus for a revolution against capitalism and the state. Many of these attacks were done by individual assailants and the majority took place in the late 1870s, the early 1880s and the 1890s, with some still occurring in the early 1900s. Their decrease in prevalence was the result of expanded judicial powers and of the targeting and cataloguing of anarchists by state institutions.
Anarchist perspectives towards violence have always been controversial. Anarcho-pacifists advocate nonviolent means to achieve their stateless, nonviolent ends. Other anarchist groups advocate direct action, a tactic which can include acts of sabotage or terrorism. This attitude was quite prominent a century ago, when anarchists, seeing the state as a tyrant, believed that they had every right to oppose its oppression by any means possible. Emma Goldman and Errico Malatesta, who were proponents of limited use of violence, argued that violence is merely a reaction to state violence, a necessary evil.
Anarchists took an active role in strike actions, although they tended to be antipathetic to formal syndicalism, seeing it as reformist; they regarded strikes as part of the movement which sought to overthrow the state and capitalism. Anarchists also reinforced their propaganda within the arts; some of them practised naturism and nudism. Those anarchists also built communities which were based on friendship and were involved in the news media.
Revolutionary and insurrectionary
In the current era, Italian anarchist Alfredo Bonanno, a proponent of insurrectionary anarchism, has revived the debate on violence by rejecting the nonviolent tactics adopted since the late 19th century by Kropotkin and other prominent anarchists afterwards. Both Bonanno and the French group The Invisible Committee advocate for small, informal affinity groups, where each member is responsible for their own actions but works together to bring down oppression using sabotage and other violent means against the state, capitalism, and other enemies. Members of The Invisible Committee were arrested in 2008 on various charges, terrorism included.
Overall, contemporary anarchists are much less violent and militant than their ideological ancestors. They mostly engage in confronting the police during demonstrations and riots, especially in countries such as Canada, Greece, and Mexico. Militant black bloc protest groups are known for clashing with the police; however, anarchists not only clash with state operators, they also engage in the struggle against fascists, racists, and other bigots, taking anti-fascist action and mobilizing to prevent hate rallies from happening.
Evolutionary
Anarchists commonly employ direct action. This can take the form of disrupting and protesting against unjust hierarchy, or the form of self-managing their lives through the creation of counter-institutions such as communes and non-hierarchical collectives. Decision-making is often handled in an anti-authoritarian way, with everyone having equal say in each decision, an approach known as horizontalism. Contemporary-era anarchists have been engaging with various grassroots movements that are more or less based on horizontalism, although not explicitly anarchist, respecting personal autonomy and participating in mass activism such as strikes and demonstrations. In contrast with the "big-A Anarchism" of the classical era, the newly coined term "small-a anarchism" signals their tendency not to base their thoughts and actions on classical-era anarchism or to refer to classical anarchists such as Peter Kropotkin and Pierre-Joseph Proudhon to justify their opinions. Those anarchists would rather base their thought and praxis on their own experience, which they will later theorize.
The concept of prefigurative politics is enacted by many contemporary anarchist groups, striving to embody the principles, organization and tactics of the changed social structure they hope to bring about. As part of this the decision-making process of small anarchist affinity groups plays a significant tactical role. Anarchists have employed various methods to build a rough consensus among members of their group without the need of a leader or a leading group. One way is for an individual from the group to play the role of facilitator to help achieve a consensus without taking part in the discussion themselves or promoting a specific point. Minorities usually accept rough consensus, except when they feel the proposal contradicts anarchist ethics, goals and values. Anarchists usually form small groups (5–20 individuals) to enhance autonomy and friendships among their members. These kinds of groups more often than not interconnect with each other, forming larger networks. Anarchists still support and participate in strikes, especially wildcat strikes as these are leaderless strikes not organised centrally by a syndicate.
As in the past, newspapers and journals are used, and anarchists have gone online to spread their message. Given the distributional and other difficulties of print media, anarchists have found it easier to create websites, hosting electronic libraries and other portals. Anarchists have also been involved in developing various free software. The way these hacktivists work to develop and distribute their software resembles anarchist ideals, especially when it comes to preserving users' privacy from state surveillance.
Anarchists organize themselves to squat and reclaim public spaces. During important events such as protests, and when spaces are being occupied, these are often called Temporary Autonomous Zones (TAZ), spaces where art, poetry, and surrealism are blended to display the anarchist ideal. As seen by anarchists, squatting is a way to regain urban space from the capitalist market, serving pragmatic needs and also being an exemplary direct action. Acquiring space enables anarchists to experiment with their ideas and build social bonds. These tactics, towards which not all anarchists share the same attitudes, together with various forms of protesting at highly symbolic events, make up a carnivalesque atmosphere that is part of contemporary anarchist vividity.
Key issues
As anarchism is a philosophy that embodies many diverse attitudes, tendencies, and schools of thought, disagreement over questions of values, ideology, and tactics is common. Its diversity has led to widely different uses of identical terms among different anarchist traditions which has created a number of definitional concerns in anarchist theory. The compatibility of capitalism, nationalism, and religion with anarchism is widely disputed, and anarchism enjoys complex relationships with ideologies such as communism, collectivism, Marxism, and trade unionism. Anarchists may be motivated by humanism, divine authority, enlightened self-interest, veganism, or any number of alternative ethical doctrines. Phenomena such as civilisation, technology (e.g. within anarcho-primitivism), and the democratic process may be sharply criticised within some anarchist tendencies and simultaneously lauded in others.
The state
Objection to the state and its institutions is a sine qua non of anarchism. Anarchists consider the state a tool of domination and believe it to be illegitimate regardless of its political tendencies. Instead of people being able to control the aspects of their lives, major decisions are taken by a small elite. Authority ultimately rests solely on power, regardless of whether that power is open or concealed, as it still has the ability to coerce people. Another anarchist argument against states is that the people constituting a government, even the most altruistic among officials, will unavoidably seek to gain more power, leading to corruption. Anarchists consider the idea that the state is the collective will of the people to be an unachievable fiction, because the ruling class is distinct from the rest of society.
Specific anarchist attitudes towards the state vary. Robert Paul Wolff believed that the tension between authority and autonomy would mean the state could never be legitimate. Bakunin saw the state as meaning "coercion, domination by means of coercion, camouflaged if possible but unceremonious and overt if need be." A. John Simmons and Leslie Green, who leaned toward philosophical anarchism, believed that the state could be legitimate if it is governed by consensus, although they saw this as highly unlikely. Beliefs on how to abolish the state also differ.
Gender, sexuality, and free love
As gender and sexuality carry with them dynamics of hierarchy, many anarchists address, analyse, and oppose the suppression of one's autonomy imposed by gender roles.
Sexuality was not often discussed by classical anarchists, but the few who did felt that an anarchist society would lead to sexuality naturally developing. Sexual violence was a concern for anarchists such as Benjamin Tucker, who opposed age-of-consent laws, believing they would benefit predatory men. A historical current that arose and flourished between 1890 and 1920 within anarchism was free love. In contemporary anarchism, this current survives as a tendency to support polyamory, relationship anarchy, and queer anarchism. Free love advocates were against marriage, which they saw as a way of men imposing authority over women, largely because marriage law greatly favoured the power of men. The notion of free love was much broader and included a critique of the established order that limited women's sexual freedom and pleasure. Those free love movements contributed to the establishment of communal houses, where large groups of travelers, anarchists and other activists slept in beds together. Free love had roots both in Europe and the United States; however, some anarchists struggled with the jealousy that arose from free love. Anarchist feminists were advocates of free love, against marriage, and pro-choice (using a contemporary term), and had a similar agenda. Anarchist and non-anarchist feminists differed on suffrage but were supportive of one another.
During the second half of the 20th century, anarchism intermingled with the second wave of feminism, radicalising some currents of the feminist movement and being influenced as well. By the latest decades of the 20th century, anarchists and feminists were advocating for the rights and autonomy of women, gays, queers and other marginalised groups, with some feminist thinkers suggesting a fusion of the two currents. With the third wave of feminism, sexual identity and compulsory heterosexuality became a subject of study for anarchists, yielding a post-structuralist critique of sexual normality. Some anarchists distanced themselves from this line of thinking, suggesting that it leaned towards an individualism that was dropping the cause of social liberation.
Education
The interest of anarchists in education stretches back to the first emergence of classical anarchism. Anarchists consider proper education, one which sets the foundations of the future autonomy of the individual and the society, to be an act of mutual aid. Anarchist writers such as William Godwin (Political Justice) and Max Stirner ("The False Principle of Our Education") attacked both state education and private education as another means by which the ruling class replicate their privileges.
In 1901, Catalan anarchist and free thinker Francisco Ferrer established the Escuela Moderna in Barcelona as an opposition to the established education system, which was dictated largely by the Catholic Church. Ferrer's approach was secular, rejecting both state and church involvement in the educational process while giving pupils large amounts of autonomy in planning their work and attendance. Ferrer aimed to educate the working class and explicitly sought to foster class consciousness among students. The school closed after constant harassment by the state, and Ferrer was later arrested. Nonetheless, his ideas formed the inspiration for a series of modern schools around the world. Christian anarchist Leo Tolstoy, who published the essay Education and Culture, also established a similar school with its founding principle being that "for education to be effective it had to be free." By the same token, A. S. Neill founded what became the Summerhill School in 1921, also declaring it free from coercion.
Anarchist education is based largely on the idea that a child's right to develop freely and without manipulation ought to be respected and that rationality would lead children to morally good conclusions; however, there has been little consensus among anarchist figures as to what constitutes manipulation. Ferrer believed that moral indoctrination was necessary and explicitly taught pupils that equality, liberty and social justice were not possible under capitalism, along with other critiques of government and nationalism.
Late 20th century and contemporary anarchist writers (Paul Goodman, Herbert Read, and Colin Ward) intensified and expanded the anarchist critique of state education, largely focusing on the need for a system that focuses on children's creativity rather than on their ability to attain a career or participate in consumerism as part of a consumer society. Contemporary anarchists such as Ward claim that state education serves to perpetuate socioeconomic inequality.
While few anarchist education institutions have survived to the modern day, major tenets of anarchist schools, among them respect for child autonomy and relying on reasoning rather than indoctrination as a teaching method, have spread among mainstream educational institutions. Judith Suissa names three schools as explicitly anarchist schools, namely the Free Skool Santa Cruz in the United States, which is part of a wider American-Canadian network of schools, the Self-Managed Learning College in Brighton, England, and the Paideia School in Spain.
The arts
The connection between anarchism and art was quite profound during the classical era of anarchism, especially among artistic currents that were developing during that era such as futurists, surrealists and others. In literature, anarchism was mostly associated with the New Apocalyptics and the neo-romanticism movement. In music, anarchism has been associated with music scenes such as punk. Anarchists such as Leo Tolstoy and Herbert Read stated that the border between the artist and the non-artist, what separates art from a daily act, is a construct produced by the alienation caused by capitalism, and that it prevents humans from living a joyful life.
Other anarchists advocated for or used art as a means to achieve anarchist ends. In his book Breaking the Spell: A History of Anarchist Filmmakers, Videotape Guerrillas, and Digital Ninjas, Chris Robé claims that "anarchist-inflected practices have increasingly structured movement-based video activism." Throughout the 20th century, many prominent anarchists (Peter Kropotkin, Emma Goldman, Gustav Landauer and Camillo Berneri) and publications such as Anarchy wrote about matters pertaining to the arts.
Three overlapping properties made art useful to anarchists. It could depict a critique of existing society and hierarchies, serve as a prefigurative tool to reflect the anarchist ideal society, and even turn into a means of direct action, such as in protests. As it appeals to both emotion and reason, art could appeal to the whole human and have a powerful effect. The 19th-century neo-impressionist movement had an ecological aesthetic and offered an example of an anarchist perception of the road towards socialism. In Les châtaigniers à Osny by anarchist painter Camille Pissarro, the blending of aesthetic and social harmony prefigures an ideal anarchist agrarian community.
Criticism
The most common critique of anarchism is the assertion that humans cannot self-govern and so a state is necessary for human survival. Philosopher Bertrand Russell supported this critique, stating that "[p]eace and war, tariffs, regulations of sanitary conditions and the sale of noxious drugs, the preservation of a just system of distribution: these, among others, are functions which could hardly be performed in a community in which there was no central government." Another common criticism of anarchism is that it fits a world of isolation in which only the small enough entities can be self-governing; a response would be that major anarchist thinkers advocated anarchist federalism.
Another criticism of anarchism is the belief that it is inherently unstable: that an anarchist society would inevitably evolve back into a state. Thomas Hobbes and other early social contract theorists argued that the state emerges in response to natural anarchy to protect the people's interests and keep order. Philosopher Robert Nozick argued that a "night-watchman state", or minarchy, would emerge from anarchy through the process of an invisible hand, in which people would exercise their liberty and buy protection from protection agencies, evolving into a minimal state. Anarchists reject these criticisms by arguing that humans in a state of nature would not just be in a state of war. Anarcho-primitivists in particular argue that humans were better off in a state of nature in small tribes living close to the land, while anarchists in general argue that the negatives of state organization, such as hierarchies, monopolies and inequality, outweigh the benefits.
Philosophy lecturer Andrew G. Fiala composed a list of common arguments against anarchism. The first is that anarchism is innately related to violence and destruction, not only in the pragmatic world, such as at protests, but in the world of ethics as well. Secondly, anarchism is evaluated as unfeasible or utopian, since the state cannot be defeated practically; this line of argument most often calls for political action within the system to reform it. The third argument is that anarchism is self-contradictory as a ruling theory that has no ruling theory. Fourth, anarchism calls for collective action while endorsing the autonomy of the individual, hence, the critique goes, no collective action can be taken. Lastly, Fiala mentions a critique of philosophical anarchism as ineffective (all talk and thought) while capitalism and the bourgeois class remain strong.
One of the earliest criticisms is that anarchism defies and fails to understand the biological inclination to authority. Joseph Raz states that the acceptance of authority implies the belief that following its instructions will afford more success, and holds that this is so whether the authority's instructions are successful or mistaken. Anarchists reject this criticism: challenging or disobeying authority does not entail losing its advantages, such as acknowledging authorities like doctors or lawyers as reliable, nor does it involve a complete surrender of independent judgment. Anarchist perceptions of human nature, rejection of the state, and commitment to social revolution have been criticised by academics as naive, overly simplistic, and unrealistic, respectively. Classical anarchism has been criticised for relying too heavily on the belief that the abolition of the state will lead to human cooperation prospering.
Friedrich Engels, considered to be one of the principal founders of Marxism, criticised anarchism's anti-authoritarianism as inherently counter-revolutionary because in his view a revolution is by itself authoritarian. A Socialist Workers Party pamphlet by John Molyneux, Anarchism: A Marxist Criticism, argues that "anarchism cannot win", believing that it lacks the ability to properly implement its ideas. The Marxist criticism of anarchism is that it has a utopian character because it presupposes that all individuals should hold anarchist views and values: in the Marxist view, the essence of anarchism is the notion that a social ideal would follow directly from this human ideal and from the free will of every individual. Marxists state that this contradiction was responsible for anarchists' inability to act. In the anarchist vision, the conflict between liberty and equality was resolved through coexistence and intertwining.
https://en.wikipedia.org/wiki/Albedo
Albedo is the fraction of sunlight that is diffusely reflected by a body. It is measured on a scale from 0 (corresponding to a black body that absorbs all incident radiation) to 1 (corresponding to a body that reflects all incident radiation). Surface albedo is defined as the ratio of radiosity $J_e$ to the irradiance $E_e$ (flux per unit area) received by a surface. The proportion reflected is not only determined by properties of the surface itself, but also by the spectral and angular distribution of solar radiation reaching the Earth's surface. These factors vary with atmospheric composition, geographic location, and time (see position of the Sun).
While directional-hemispherical reflectance factor is calculated for a single angle of incidence (i.e., for a given position of the Sun), albedo is the directional integration of reflectance over all solar angles in a given period. The temporal resolution may range from seconds (as obtained from flux measurements) to daily, monthly, or annual averages.
Unless given for a specific wavelength (spectral albedo), albedo refers to the entire spectrum of solar radiation. Due to measurement constraints, it is often given for the spectrum in which most solar energy reaches the surface (between 0.3 and 3 μm). This spectrum includes visible light (0.4–0.7 μm), which explains why surfaces with a low albedo appear dark (e.g., trees absorb most radiation), whereas surfaces with a high albedo appear bright (e.g., snow reflects most radiation).
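As a minimal numerical sketch of this definition, the snippet below computes albedo as the ratio of reflected (radiosity) to incident (irradiance) flux; the flux values are hypothetical, chosen only to illustrate the ratio.

```python
# Toy illustration of the definition above: albedo = radiosity J_e / irradiance E_e.
# The flux values are hypothetical and serve only to demonstrate the ratio.

def surface_albedo(radiosity_w_m2: float, irradiance_w_m2: float) -> float:
    """Fraction of incident solar flux that a surface diffusely reflects (0 to 1)."""
    if irradiance_w_m2 <= 0:
        raise ValueError("irradiance must be positive")
    albedo = radiosity_w_m2 / irradiance_w_m2
    if not 0.0 <= albedo <= 1.0:
        raise ValueError("reflected flux must lie between 0 and the incident flux")
    return albedo

print(surface_albedo(150.0, 1000.0))  # 0.15, in the typical range for vegetated land
```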
Ice–albedo feedback is a positive feedback climate process where a change in the area of ice caps, glaciers, and sea ice alters the albedo and surface temperature of a planet. Ice is very reflective, and therefore reflects far more solar energy back to space than other land-cover types or open water. Ice–albedo feedback plays an important role in global climate change. Albedo is an important concept in climate science.
Terrestrial albedo
Any albedo in visible light falls within a range of about 0.9 for fresh snow to about 0.04 for charcoal, one of the darkest substances. Deeply shadowed cavities can achieve an effective albedo approaching the zero of a black body. When seen from a distance, the ocean surface has a low albedo, as do most forests, whereas desert areas have some of the highest albedos among landforms. Most land areas are in an albedo range of 0.1 to 0.4. The average albedo of Earth is about 0.3. This is far higher than for the ocean primarily because of the contribution of clouds.
Earth's surface albedo is regularly estimated via Earth observation satellite sensors such as NASA's MODIS instruments on board the Terra and Aqua satellites, and the CERES instrument on the Suomi NPP and JPSS. As the amount of reflected radiation is only measured for a single direction by satellite, not all directions, a mathematical model is used to translate a sample set of satellite reflectance measurements into estimates of directional-hemispherical reflectance and bi-hemispherical reflectance. These calculations are based on the bidirectional reflectance distribution function (BRDF), which describes how the reflectance of a given surface depends on the view angle of the observer and the solar angle. BRDF can facilitate translations of observations of reflectance into albedo.
Earth's average surface temperature due to its albedo and the greenhouse effect is currently about 15 °C. If Earth were frozen entirely (and hence more reflective), the average temperature of the planet would drop below −40 °C. If only the continental land masses became covered by glaciers, the mean temperature of the planet would drop to about 0 °C. In contrast, if the entire Earth were covered by water – a so-called ocean planet – the average temperature on the planet would rise to almost 27 °C.
In 2021, scientists reported that Earth dimmed by ~0.5% over two decades (1998–2017), as measured by earthshine using modern photometric techniques. This dimming may have been partly caused by climate change, and it implies that Earth absorbed more solar energy, which would itself add substantially to global warming. However, the link to climate change has not been explored to date, and it is unclear whether or not this represents an ongoing trend.
White-sky, black-sky, and blue-sky albedo
For land surfaces, it has been shown that the albedo at a particular solar zenith angle $\theta_i$ can be approximated by the proportionate sum of two terms:
the directional-hemispherical reflectance at that solar zenith angle, $\bar{\alpha}(\theta_i)$, sometimes referred to as black-sky albedo, and
the bi-hemispherical reflectance, $\bar{\bar{\alpha}}$, sometimes referred to as white-sky albedo.
With $(1 - D)$ being the proportion of direct radiation from a given solar angle and $D$ being the proportion of diffuse illumination, the actual albedo $\alpha$ (also called blue-sky albedo) can then be given as:

$$\alpha = (1 - D)\,\bar{\alpha}(\theta_i) + D\,\bar{\bar{\alpha}}$$
This formula is important because it allows the albedo to be calculated for any given illumination conditions from a knowledge of the intrinsic properties of the surface.
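As a minimal sketch of this interpolation, the snippet below implements the blue-sky albedo formula directly; the black-sky and white-sky values are hypothetical placeholders, not retrieved satellite data.

```python
# Blue-sky albedo as the diffuse-fraction-weighted mix of black-sky and
# white-sky albedo, following the formula above. Inputs are illustrative.

def blue_sky_albedo(black_sky: float, white_sky: float, diffuse_fraction: float) -> float:
    """Interpolate between black-sky and white-sky albedo for diffuse fraction D."""
    if not 0.0 <= diffuse_fraction <= 1.0:
        raise ValueError("diffuse fraction D must lie in [0, 1]")
    return (1.0 - diffuse_fraction) * black_sky + diffuse_fraction * white_sky

# Example: black-sky albedo 0.20, white-sky albedo 0.25, 30% diffuse illumination.
print(blue_sky_albedo(0.20, 0.25, 0.30))  # 0.215
```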
Changes to albedo due to human activities
Human activities (e.g., deforestation, farming, and urbanization) change the albedo of various areas around the globe. Human impacts to "the physical properties of the land surface can perturb the climate by altering the Earth’s radiative energy balance" even on a small scale or when undetected by satellites.
Urbanization generally decreases albedo (commonly being 0.01–0.02 lower than adjacent croplands), which contributes to global warming. Deliberately increasing albedo in urban areas can mitigate the urban heat island effect. An estimate in 2022 found that on a global scale, "an albedo increase of 0.1 in worldwide urban areas would result in a cooling effect that is equivalent to absorbing ~44 Gt of CO2 emissions."
Intentionally enhancing the albedo of the Earth's surface, along with its daytime thermal emittance, has been proposed as a solar radiation management strategy, known as passive daytime radiative cooling (PDRC), to mitigate energy crises and global warming. Efforts toward widespread implementation of PDRCs may focus on maximizing the albedo of surfaces from very low to high values, so long as a thermal emittance of at least 90% can be achieved.
The tens of thousands of hectares of greenhouses in Almería, Spain form a large expanse of whitened plastic roofs. A 2008 study found that this anthropogenic change lowered the surface temperature of the high-albedo area, although changes were localized. A follow-up study found that "CO2-eq. emissions associated to changes in surface albedo are a consequence of land transformation" and can reduce surface temperature increases associated with climate change.
Examples of terrestrial albedo effects
Illumination
Albedo is not directly dependent on the illumination because changing the amount of incoming light proportionally changes the amount of reflected light, except in circumstances where a change in illumination induces a change in the Earth's surface at that location (e.g. through melting of reflective ice). However, albedo and illumination both vary by latitude. Albedo is highest near the poles and lowest in the subtropics, with a local maximum in the tropics.
Insolation effects
The intensity of albedo temperature effects depends on the amount of albedo and the level of local insolation (solar irradiance); high albedo areas in the Arctic and Antarctic regions are cold due to low insolation, whereas areas such as the Sahara Desert, which also have a relatively high albedo, will be hotter due to high insolation. Tropical and sub-tropical rainforest areas have low albedo, and are much hotter than their temperate forest counterparts, which have lower insolation. Because insolation plays such a big role in the heating and cooling effects of albedo, high insolation areas like the tropics will tend to show a more pronounced fluctuation in local temperature when local albedo changes.
Arctic regions notably release more heat back into space than they absorb, effectively cooling the Earth. This has been a concern since Arctic ice and snow have been melting at higher rates due to higher temperatures, creating regions in the Arctic that are notably darker (being water or ground, which are darker in colour) and that reflect less heat back into space. This feedback loop results in a reduced albedo effect.
Climate and weather
Albedo affects climate by determining how much radiation a planet absorbs. The uneven heating of Earth from albedo variations between land, ice, or ocean surfaces can drive weather.
The response of the climate system to an initial forcing is modified by feedbacks: increased by "self-reinforcing" or "positive" feedbacks and reduced by "balancing" or "negative" feedbacks. The main reinforcing feedbacks are the water-vapour feedback, the ice–albedo feedback, and the net effect of clouds.
Albedo–temperature feedback
When an area's albedo changes due to snowfall, a snow–temperature feedback results. A layer of snowfall increases local albedo, reflecting away sunlight, leading to local cooling. In principle, if no outside temperature change affects this area (e.g., a warm air mass), the raised albedo and lower temperature would maintain the current snow and invite further snowfall, deepening the snow–temperature feedback. However, because local weather is dynamic due to the change of seasons, eventually warm air masses and a more direct angle of sunlight (higher insolation) cause melting. When the melted area reveals surfaces with lower albedo, such as grass, soil, or ocean, the effect is reversed: the darkening surface lowers albedo, increasing local temperatures, which induces more melting, thus reducing the albedo further and resulting in still more heating.
Snow
Snow albedo is highly variable, ranging from as high as 0.9 for freshly fallen snow, to about 0.4 for melting snow, and as low as 0.2 for dirty snow. Over Antarctica, snow albedo averages a little more than 0.8. If a marginally snow-covered area warms, snow tends to melt, lowering the albedo, and hence leading to more snowmelt because more radiation is being absorbed by the snowpack (referred to as the ice–albedo positive feedback).
In Switzerland, citizens have been protecting their glaciers with large white tarpaulins to slow down the ice melt. These large white sheets help to reflect the sun's rays and deflect the heat. Although this method is very expensive, it has been shown to work, reducing snow and ice melt by 60%.
Just as fresh snow has a higher albedo than does dirty snow, the albedo of snow-covered sea ice is far higher than that of sea water. Sea water absorbs more solar radiation than would the same surface covered with reflective snow. When sea ice melts, either due to a rise in sea temperature or in response to increased solar radiation from above, the snow-covered surface is reduced, and more surface of sea water is exposed, so the rate of energy absorption increases. The extra absorbed energy heats the sea water, which in turn increases the rate at which sea ice melts. As with the preceding example of snowmelt, the process of melting of sea ice is thus another example of a positive feedback. Both positive feedback loops have long been recognized as important for global warming.
Cryoconite, powdery windblown dust containing soot, sometimes reduces albedo on glaciers and ice sheets.
The dynamical nature of albedo in response to positive feedback, together with the effects of small errors in the measurement of albedo, can lead to large errors in energy estimates. Because of this, in order to reduce the error of energy estimates, it is important to measure the albedo of snow-covered areas through remote sensing techniques rather than applying a single value for albedo over broad regions.
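To make this sensitivity concrete, a small worked sketch follows: absorbed shortwave flux is (1 − albedo) × insolation, so even a small albedo error shifts the energy estimate appreciably. The insolation value is an assumed global-mean figure used only for illustration.

```python
# Sensitivity of absorbed solar energy to albedo error (illustrative numbers).
# S approximates a global-mean incoming solar flux of ~340 W/m^2 (assumed).

S = 340.0  # W/m^2

def absorbed_flux(albedo: float, insolation: float = S) -> float:
    """Shortwave flux absorbed by a surface with the given albedo."""
    return (1.0 - albedo) * insolation

# Mis-measuring snow albedo by only 0.02 (0.80 vs 0.82) changes the
# absorbed-energy estimate by 0.02 * 340 = 6.8 W/m^2.
print(absorbed_flux(0.80) - absorbed_flux(0.82))  # 6.8
```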
Small-scale effects
Albedo works on a smaller scale, too. In sunlight, dark clothes absorb more heat and light-coloured clothes reflect it better, thus allowing some control over body temperature by exploiting the albedo effect of the colour of external clothing.
Solar photovoltaic effects
Albedo can affect the electrical energy output of solar photovoltaic devices. For example, the effects of a spectrally responsive albedo are illustrated by the differences between the spectrally weighted albedo of solar photovoltaic technology based on hydrogenated amorphous silicon (a-Si:H) and on crystalline silicon (c-Si), compared to traditional spectrally integrated albedo predictions. Research showed impacts of over 10% for vertically (90°) mounted systems, but such effects were substantially lower for systems with lower surface tilts. Spectral albedo strongly affects the performance of bifacial solar cells, where rear-surface performance gains of over 20% have been observed for c-Si cells installed above healthy vegetation. An analysis of the bias due to the specular reflectivity of 22 commonly occurring surface materials (both human-made and natural) provided effective albedo values for simulating the performance of seven photovoltaic materials mounted on three common photovoltaic system topologies: industrial (solar farms), commercial flat rooftops, and residential pitched-roof applications.
Trees
Forests generally have a low albedo because the majority of the ultraviolet and visible spectrum is absorbed through photosynthesis. For this reason, the greater heat absorption by trees could offset some of the carbon benefits of afforestation (or offset the negative climate impacts of deforestation). In other words: The climate change mitigation effect of carbon sequestration by forests is partially counterbalanced in that reforestation can decrease the reflection of sunlight (albedo).
In the case of evergreen forests with seasonal snow cover, albedo reduction may be significant enough for deforestation to cause a net cooling effect. Trees also impact climate in extremely complicated ways through evapotranspiration. The water vapor causes cooling on the land surface, causes heating where it condenses, acts as strong greenhouse gas, and can increase albedo when it condenses into clouds. Scientists generally treat evapotranspiration as a net cooling impact, and the net climate impact of albedo and evapotranspiration changes from deforestation depends greatly on local climate.
Mid-to-high-latitude forests have a much lower albedo during snow seasons than flat ground, thus contributing to warming. Modeling that compares the effects of albedo differences between forests and grasslands suggests that expanding the land area of forests in temperate zones offers only a temporary mitigation benefit.
In seasonally snow-covered zones, winter albedos of treeless areas are 10% to 50% higher than nearby forested areas because snow does not cover the trees as readily. Deciduous trees have an albedo value of about 0.15 to 0.18 whereas coniferous trees have a value of about 0.09 to 0.15. Variation in summer albedo across both forest types is associated with maximum rates of photosynthesis because plants with high growth capacity display a greater fraction of their foliage for direct interception of incoming radiation in the upper canopy. The result is that wavelengths of light not used in photosynthesis are more likely to be reflected back to space rather than being absorbed by other surfaces lower in the canopy.
Studies by the Hadley Centre have investigated the relative (generally warming) effect of albedo change and (cooling) effect of carbon sequestration on planting forests. They found that new forests in tropical and midlatitude areas tended to cool; new forests in high latitudes (e.g., Siberia) were neutral or perhaps warming.
Research in 2023, drawing from 176 flux stations globally, revealed a climate trade-off: increased carbon uptake from afforestation results in reduced albedo. Initially, this reduction may lead to moderate global warming over a span of approximately 20 years, but it is expected to transition into significant cooling thereafter.
Water
Water reflects light very differently from typical terrestrial materials. The reflectivity of a water surface is calculated using the Fresnel equations.
At the scale of the wavelength of light, even wavy water is always smooth, so the light is reflected in a locally specular manner (not diffusely). The glint of light off water is a commonplace effect of this. At small angles of incident light, waviness results in reduced reflectivity because of the steepness of the reflectivity-vs-incident-angle curve and a locally increased average incident angle.
Although the reflectivity of water is very low at low and medium angles of incident light, it becomes very high at high angles of incident light such as those that occur on the illuminated side of Earth near the terminator (early morning, late afternoon, and near the poles). However, as mentioned above, waviness causes an appreciable reduction. Because light specularly reflected from water does not usually reach the viewer, water is usually considered to have a very low albedo in spite of its high reflectivity at high angles of incident light.
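A minimal sketch of this angular dependence, using the Fresnel equations for a smooth air-water interface; the refractive index 1.33 is the usual value for water, and the code assumes unpolarized light:

```python
import math

def fresnel_reflectance(theta_i_deg, n1=1.0, n2=1.33):
    """Unpolarized Fresnel reflectance at a smooth interface (n1 -> n2),
    e.g. air to water with n2 ~ 1.33."""
    ti = math.radians(theta_i_deg)
    tt = math.asin(n1 * math.sin(ti) / n2)   # Snell's law
    rs = ((n1 * math.cos(ti) - n2 * math.cos(tt)) /
          (n1 * math.cos(ti) + n2 * math.cos(tt))) ** 2
    rp = ((n1 * math.cos(tt) - n2 * math.cos(ti)) /
          (n1 * math.cos(tt) + n2 * math.cos(ti))) ** 2
    return 0.5 * (rs + rp)                   # average of both polarizations

for angle in (0, 30, 60, 85):
    print(angle, round(fresnel_reflectance(angle), 3))
# ~0.02 near normal incidence, rising steeply at grazing angles
```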
Whitecaps on waves look white (and have high albedo) because the water is foamed up, so there are many superimposed bubble surfaces which reflect, adding up their reflectivities. Fresh 'black' ice exhibits Fresnel reflection. Snow on top of sea ice increases the albedo to about 0.9.
Clouds
Cloud albedo has substantial influence over atmospheric temperatures. Different types of clouds exhibit different reflectivity, theoretically ranging in albedo from a minimum of near 0 to a maximum approaching 0.8. "On any given day, about half of Earth is covered by clouds, which reflect more sunlight than land and water. Clouds keep Earth cool by reflecting sunlight, but they can also serve as blankets to trap warmth."
Albedo and climate in some areas are affected by artificial clouds, such as those created by the contrails of heavy commercial airliner traffic. A study following the burning of the Kuwaiti oil fields during Iraqi occupation showed that temperatures under the burning oil fires were as much as colder than temperatures several miles away under clear skies.
Aerosol effects
Aerosols (very fine particles/droplets in the atmosphere) have both direct and indirect effects on Earth's radiative balance. The direct (albedo) effect is generally to cool the planet; the indirect effect (the particles act as cloud condensation nuclei and thereby change cloud properties) is less certain.
Black carbon
Another albedo-related effect on the climate is from black carbon particles. The size of this effect is difficult to quantify: the Intergovernmental Panel on Climate Change estimates that the global mean radiative forcing for black carbon aerosols from fossil fuels is +0.2 W m−2, with a range +0.1 to +0.4 W m−2. Black carbon is a bigger cause of the melting of the polar ice cap in the Arctic than carbon dioxide due to its effect on the albedo.
Astronomical albedo
In astronomy, the term albedo can be defined in several different ways, depending upon the application and the wavelength of electromagnetic radiation involved.
Optical or visual albedo
The albedos of planets, satellites and minor planets such as asteroids can be used to infer much about their properties. The study of albedos, their dependence on wavelength, lighting angle ("phase angle"), and variation in time composes a major part of the astronomical field of photometry. For small and far objects that cannot be resolved by telescopes, much of what we know comes from the study of their albedos. For example, the absolute albedo can indicate the surface ice content of outer Solar System objects, the variation of albedo with phase angle gives information about regolith properties, whereas unusually high radar albedo is indicative of high metal content in asteroids.
Enceladus, a moon of Saturn, has one of the highest known optical albedos of any body in the Solar System, with an albedo of 0.99. Another notable high-albedo body is Eris, with an albedo of 0.96. Many small objects in the outer Solar System and asteroid belt have low albedos down to about 0.05. A typical comet nucleus has an albedo of 0.04. Such a dark surface is thought to be indicative of a primitive and heavily space weathered surface containing some organic compounds.
The overall albedo of the Moon is measured to be around 0.14, but it is strongly directional and non-Lambertian, displaying also a strong opposition effect. Although such reflectance properties are different from those of any terrestrial terrains, they are typical of the regolith surfaces of airless Solar System bodies.
Two common optical albedos that are used in astronomy are the (V-band) geometric albedo (measuring brightness when illumination comes from directly behind the observer) and the Bond albedo (measuring total proportion of electromagnetic energy reflected). Their values can differ significantly, which is a common source of confusion.
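The two quantities are linked through the phase integral. In standard photometric notation (a general relation, not tied to any particular study cited here), the Bond albedo $A$ is the product of the geometric albedo $p$ and the phase integral $q$:

$$A = p\,q, \qquad q = 2\int_{0}^{\pi} \frac{\Phi(\alpha)}{\Phi(0)}\,\sin\alpha \, d\alpha,$$

where $\Phi(\alpha)$ is the disk-integrated brightness at phase angle $\alpha$. A Lambertian sphere has $q = 3/2$, which is one reason the two albedos can differ substantially.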
In detailed studies, the directional reflectance properties of astronomical bodies are often expressed in terms of the five Hapke parameters which semi-empirically describe the variation of albedo with phase angle, including a characterization of the opposition effect of regolith surfaces. One of these five parameters is yet another type of albedo called the single-scattering albedo. It is used to define scattering of electromagnetic waves on small particles. It depends on properties of the material (refractive index), the size of the particle, and the wavelength of the incoming radiation.
An important relationship between an object's astronomical (geometric) albedo, absolute magnitude and diameter is given by:

$$A = \left(\frac{1329 \times 10^{-H/5}}{D}\right)^{2},$$

where $A$ is the astronomical albedo, $D$ is the diameter in kilometers, and $H$ is the absolute magnitude.
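As a quick check of this relation, a minimal sketch; the input values are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

# Invert A = (1329 * 10**(-H/5) / D)**2 to obtain the diameter D (km)
# from absolute magnitude H and geometric albedo A.
def diameter_km(H, A):
    return 1329.0 * 10.0 ** (-H / 5.0) / math.sqrt(A)

print(round(diameter_km(15.0, 0.1), 2))  # ~4.2 km
```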
Radar albedo
In planetary radar astronomy, a microwave (or radar) pulse is transmitted toward a planetary target (e.g. Moon, asteroid, etc.) and the echo from the target is measured. In most instances, the transmitted pulse is circularly polarized and the received pulse is measured in the same sense of polarization as the transmitted pulse (SC) and the opposite sense (OC). The echo power is measured in terms of radar cross-section, $\sigma_{OC}$, $\sigma_{SC}$, or $\sigma_{T}$ (total power, SC + OC), and is equal to the cross-sectional area of a metallic sphere (perfect reflector) at the same distance as the target that would return the same echo power.
Those components of the received echo that return from first-surface reflections (as from a smooth or mirror-like surface) are dominated by the OC component as there is a reversal in polarization upon reflection. If the surface is rough at the wavelength scale or there is significant penetration into the regolith, there will be a significant SC component in the echo caused by multiple scattering.
For most objects in the solar system, the OC echo dominates and the most commonly reported radar albedo parameter is the (normalized) OC radar albedo (often shortened to radar albedo):

$$\hat{\sigma}_{OC} = \frac{\sigma_{OC}}{\pi R^{2}},$$

where the denominator is the effective cross-sectional area of the target object with mean radius $R$. A smooth metallic sphere would have $\hat{\sigma}_{OC} = 1$.
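A minimal sketch of this normalization; the echo cross-section and radius below are hypothetical values for illustration only:

```python
import math

# Normalized OC radar albedo: the OC radar cross-section divided by
# the target's geometric cross-section pi * R**2.
def oc_radar_albedo(sigma_oc_km2, mean_radius_km):
    return sigma_oc_km2 / (math.pi * mean_radius_km ** 2)

print(round(oc_radar_albedo(0.1, 1.0), 3))      # ~0.032
print(round(oc_radar_albedo(math.pi, 1.0), 3))  # smooth metal sphere: 1.0
```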
Radar albedos of Solar System objects
The values reported for the Moon, Mercury, Mars, Venus, and Comet P/2005 JQ5 are derived from the total (OC+SC) radar albedo reported in those references.
Relationship to surface bulk density
In the event that most of the echo is from first-surface reflections, the OC radar albedo is a first-order approximation of the Fresnel reflection coefficient (also known as reflectivity) and can be used to estimate the bulk density of a planetary surface to a depth of a meter or so (a few radar wavelengths, typically at the decimeter scale) using empirical relationships between reflectivity and bulk density.
History
The term albedo was introduced into optics by Johann Heinrich Lambert in his 1760 work Photometria.
See also
Bio-geoengineering
Cool roof
Daisyworld
Emissivity
Exitance
Global dimming
Ice–albedo feedback
Irradiance
Kirchhoff's law of thermal radiation
Opposition surge
Polar see-saw
Radar astronomy
Solar radiation management
References
External links
Albedo Project
Albedo – Encyclopedia of Earth
NASA MODIS BRDF/albedo product site
Ocean surface albedo look-up-table
Surface albedo derived from Meteosat observations
A discussion of Lunar albedos
reflectivity of metals (chart)
|
1760s neologisms;Climate change feedbacks;Climate forcing;Climatology;Electromagnetic radiation;Land surface effects on climate;Meteorological quantities;Radiation;Radiometry;Scattering, absorption and radiative transfer (optics)
|
https://en.wikipedia.org/wiki/International%20Atomic%20Time
|
International Atomic Time (abbreviated TAI, from its French name temps atomique international) is a high-precision atomic coordinate time standard based on the notional passage of proper time on Earth's geoid. TAI is a weighted average of the time kept by over 450 atomic clocks in over 80 national laboratories worldwide. It is a continuous scale of time, without leap seconds, and it is the principal realisation of Terrestrial Time (with a fixed offset of epoch). It is the basis for Coordinated Universal Time (UTC), which is used for civil timekeeping all over the Earth's surface and which has leap seconds.
UTC deviates from TAI by a number of whole seconds. Immediately after the most recent leap second was put into effect, UTC became exactly 37 seconds behind TAI. The 37 seconds result from the initial difference of 10 seconds at the start of 1972, plus 27 leap seconds in UTC since 1972. In 2022, the General Conference on Weights and Measures decided to abandon the leap second by or before 2035, at which point the difference between TAI and UTC will remain fixed.
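The offset arithmetic is simple enough to state directly. A minimal sketch (the constants merely restate the totals given above; a real implementation would consult an authoritative leap-second table):

```python
# TAI - UTC offset, reproducing the arithmetic described above.
initial_offset_1972 = 10        # seconds, TAI - UTC at the start of 1972
leap_seconds_since_1972 = 27    # positive leap seconds through 2016

tai_minus_utc = initial_offset_1972 + leap_seconds_since_1972
print(tai_minus_utc, "seconds")  # 37 seconds

def utc_from_tai_seconds(tai_seconds):
    """Convert a TAI timestamp (in seconds) to UTC, assuming the current
    fixed 37-second offset (valid only between leap-second events)."""
    return tai_seconds - tai_minus_utc
```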
TAI may be reported using traditional means of specifying days, carried over from non-uniform time standards based on the rotation of the Earth. Specifically, both Julian days and the Gregorian calendar are used. TAI in this form was synchronised with Universal Time at the beginning of 1958, and the two have drifted apart ever since, due primarily to the slowing rotation of the Earth.
Operation
TAI is a weighted average of the time kept by over 450 atomic clocks in over 80 national laboratories worldwide. The majority of the clocks involved are caesium clocks; the International System of Units (SI) definition of the second is based on caesium. The clocks are compared using GPS signals and two-way satellite time and frequency transfer. Due to the signal averaging TAI is an order of magnitude more stable than its best constituent clock.
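The stability gain from averaging can be illustrated with a toy ensemble. This sketch assumes equal weights and independent Gaussian clock errors; the actual BIPM algorithm weights clocks by their demonstrated performance, so this captures only the statistical idea:

```python
import random

def ensemble_error(n_clocks=450, clock_sigma=1.0):
    """Error of an equal-weighted average of independent noisy clocks
    (errors in arbitrary units)."""
    errors = [random.gauss(0.0, clock_sigma) for _ in range(n_clocks)]
    return sum(errors) / len(errors)

random.seed(0)
samples = [ensemble_error() for _ in range(2000)]
rms = (sum(e * e for e in samples) / len(samples)) ** 0.5
print(round(rms, 4))  # ~ clock_sigma / sqrt(450) ~ 0.047
```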
The participating institutions each broadcast, in real time, a frequency signal with timecodes, which is their estimate of TAI. Time codes are usually published in the form of UTC, which differs from TAI by a well-known integer number of seconds. These time scales are denoted in the form UTC(NPL), where NPL here identifies the National Physical Laboratory, UK. The TAI form may be denoted TAI(NPL). The latter is not to be confused with TA(NPL), which denotes an independent atomic time scale, not synchronised to TAI or to anything else.
The clocks at different institutions are regularly compared against each other. The International Bureau of Weights and Measures (BIPM, France), combines these measurements to retrospectively calculate the weighted average that forms the most stable time scale possible. This combined time scale is published monthly in "Circular T", and is the canonical TAI. This time scale is expressed in the form of tables of differences UTC − UTC(k) (equal to TAI − TAI(k)) for each participating institution k. The same circular also gives tables of TAI − TA(k), for the various unsynchronised atomic time scales.
Errors in publication may be corrected by issuing a revision of the faulty Circular T or by errata in a subsequent Circular T. Aside from this, once published in Circular T, the TAI scale is not revised. In hindsight, it is possible to discover errors in TAI and to make better estimates of the true proper time scale. Since the published circulars are definitive, better estimates do not create another version of TAI; it is instead considered to be creating a better realisation of Terrestrial Time (TT).
History
Early atomic time scales consisted of quartz clocks with frequencies calibrated by a single atomic clock; the atomic clocks were not operated continuously. Atomic timekeeping services started experimentally in 1955, using the first caesium atomic clock at the National Physical Laboratory, UK (NPL). It was used as a basis for calibrating the quartz clocks at the Royal Greenwich Observatory and to establish a time scale, called Greenwich Atomic (GA). The United States Naval Observatory began the A.1 scale on 13 September 1956, using an Atomichron commercial atomic clock, followed by the NBS-A scale at the National Bureau of Standards, Boulder, Colorado on 9 October 1957.
The International Time Bureau (BIH) began a time scale, Tm or AM, in July 1955, using both local caesium clocks and comparisons to distant clocks using the phase of VLF radio signals. The BIH scale, A.1, and NBS-A were defined by an epoch at the beginning of 1958. The procedures used by the BIH evolved, and the name for the time scale changed: to A3 in 1964 and to TA(BIH) in 1969.
The SI second was defined in terms of the caesium atom in 1967. From 1971 to 1975 the General Conference on Weights and Measures and the International Committee for Weights and Measures made a series of decisions that designated the BIPM time scale International Atomic Time (TAI).
In the 1970s, it became clear that the clocks participating in TAI were ticking at different rates due to gravitational time dilation, and the combined TAI scale, therefore, corresponded to an average of the altitudes of the various clocks. Starting from the Julian Date 2443144.5 (1 January 1977 00:00:00 TAI), corrections were applied to the output of all participating clocks, so that TAI would correspond to proper time at the geoid (mean sea level). Because the clocks were, on average, well above sea level, this meant that TAI slowed by about one part in a trillion. The former uncorrected time scale continues to be published under the name EAL (Échelle Atomique Libre, meaning Free Atomic Scale).
The instant that the gravitational correction started to be applied serves as the epoch for Barycentric Coordinate Time (TCB), Geocentric Coordinate Time (TCG), and Terrestrial Time (TT), which represent three fundamental time scales in the Solar System. All three of these time scales were defined to read JD 2443144.5003725 (1 January 1977 00:00:32.184) exactly at that instant. TAI was henceforth a realisation of TT, with the equation TT(TAI) = TAI + 32.184 s.
The continued existence of TAI was questioned in a 2007 letter from the BIPM to the ITU-R which stated, "In the case of a redefinition of UTC without leap seconds, the CCTF would consider discussing the possibility of suppressing TAI, as it would remain parallel to the continuous UTC."
Relation to UTC
Contrary to TAI, UTC is a discontinuous time scale. It is occasionally adjusted by leap seconds. Between these adjustments, it is composed of segments that are mapped to atomic time by a constant offset. From its beginning in 1961 through December 1971, the adjustments were made regularly in fractional leap seconds so that UTC approximated UT2. Afterwards, these adjustments were made only in whole seconds to approximate UT1. This was a compromise arrangement in order to enable a publicly broadcast time scale. The less frequent whole-second adjustments meant that the time scale would be more stable and easier to synchronize internationally. The fact that it continues to approximate UT1 means that tasks such as navigation which require a source of Universal Time continue to be well served by the public broadcast of UTC.
Footnotes
Bibliography
|
Time scales
|
https://en.wikipedia.org/wiki/Altruism
|
Altruism is the concern for the well-being of others, independently of personal benefit or reciprocity.
The word altruism was popularised (and possibly coined) by the French philosopher Auguste Comte in French, as altruisme, as an antonym of egoism. He derived it from the Italian altrui, which in turn was derived from the Latin alteri, meaning "other people" or "somebody else". Altruism may be considered a synonym of selflessness, the opposite of self-centeredness.
Altruism is an important moral value in many cultures and religions. It can expand beyond care for humans to include other sentient beings and future generations.
Altruism, as observed in populations of organisms, is when an individual performs an action at a cost to itself (in terms of e.g. pleasure and quality of life, time, probability of survival or reproduction) that benefits, directly or indirectly, another individual, without the expectation of reciprocity or compensation for that action.
The theory of psychological egoism suggests that no act of sharing, helping, or sacrificing can be "truly" altruistic, as the actor may receive an intrinsic reward in the form of personal gratification. The validity of this argument depends on whether such intrinsic rewards qualify as "benefits".
The term altruism can also refer to an ethical doctrine that claims that individuals are morally obliged to benefit others. Used in this sense, it is usually contrasted with egoism, which claims individuals are morally obligated to serve themselves first.
Effective altruism is the use of evidence and reason to determine the most effective ways to benefit others.
The notion of altruism
The concept of altruism has a history in philosophical and ethical thought. The term was coined in the 19th century by the founding sociologist and philosopher of science Auguste Comte, and has become a major topic for psychologists (especially evolutionary psychology researchers), evolutionary biologists, and ethologists. Whilst ideas about altruism from one field can affect the other fields, the different methods and focuses of these fields always lead to different perspectives on altruism. In simple terms, altruism is caring about the welfare of other people and acting to help them, above oneself.
Cross-cultural perspectives on altruism
Cross-cultural perspectives on altruism show that how we view and experience helping others depends heavily on where we come from. In individualistic cultures, like many Western countries, acts of altruism often bring personal joy and satisfaction, as they align with values that emphasize individual achievement and self-fulfillment. On the other hand, in collectivist cultures, common in many Eastern societies, altruism is often seen as a responsibility to the group rather than a personal choice. This difference means that people in collectivist cultures might not feel the same personal happiness from helping others, as the act is more about fulfilling social obligations. Ultimately, these variations highlight how deeply cultural norms shape the way we approach and experience altruism.
Scientific viewpoints
Anthropology
Marcel Mauss's essay The Gift contains a passage called "Note on alms". This note describes the evolution of the notion of alms (and by extension of altruism) from the notion of sacrifice.
Evolutionary explanations
In ethology (the scientific study of animal behaviour), and more generally in the study of social evolution, altruism refers to behavior by an individual that increases the fitness of another individual while decreasing the fitness of the actor. In evolutionary psychology this term may be applied to a wide range of human behaviors such as charity, emergency aid, help to coalition partners, tipping, courtship gifts, production of public goods, and environmentalism.
The need for an explanation of altruistic behavior that is compatible with evolutionary origins has driven the development of new theories. Two related strands of research on altruism have emerged from traditional evolutionary analyses and evolutionary game theory: a mathematical model and analysis of behavioral strategies.
Some of the proposed mechanisms are:
Kin selection. That animals and humans are more altruistic towards close kin than to distant kin and non-kin has been confirmed in numerous studies across many different cultures. Even subtle cues indicating kinship may unconsciously increase altruistic behavior. One kinship cue is facial resemblance. One study found that slightly altering photographs to resemble the faces of study participants more closely increased the trust the participants expressed regarding depicted persons. Another cue is having the same family name, especially if rare, which has been found to increase helpful behavior. Another study found more cooperative behavior, the greater the number of perceived kin in a group. Using kinship terms in political speeches increased audience agreement with the speaker in one study. This effect was powerful for firstborns, who are typically close to their families.
Vested interests. People are likely to suffer if their friends, allies and those from similar social ingroups suffer or disappear. Helping such group members may, therefore, also benefit the altruist. Making ingroup membership more noticeable increases cooperativeness. Extreme self-sacrifice towards the ingroup may be adaptive if a hostile outgroup threatens the entire ingroup.
Reciprocal altruism. See also Reciprocity (evolution).
Direct reciprocity. Research shows that it can be beneficial to help others if there is a chance that they will reciprocate the help. The effective tit-for-tat strategy is one game-theoretic example (a minimal simulation appears after this list). Many people seem to follow a similar strategy by cooperating if and only if others cooperate in return.
One consequence is that people are more cooperative with one another if they are more likely to interact again in the future. People tend to be less cooperative if they perceive that the frequency of helpers in the population is lower. They tend to help less if they see non-cooperativeness by others, and this effect tends to be stronger than the opposite effect of seeing cooperative behaviors. Simply changing the cooperative framing of a proposal may increase cooperativeness, such as calling it a "Community Game" instead of a "Wall Street Game".
A tendency towards reciprocity implies that people feel obligated to respond if someone helps them. This has been used by charities that give small gifts to potential donors hoping to induce reciprocity. Another method is to announce publicly that someone has given a large donation. The tendency to reciprocate can even generalize, so people become more helpful toward others after being helped. On the other hand, people will avoid or even retaliate against those perceived not to be cooperating. People sometimes mistakenly fail to help when they intended to, or their helping may not be noticed, which may cause unintended conflicts. As such, it may be an optimal strategy to be slightly forgiving of and have a slightly generous interpretation of non-cooperation.
People are more likely to cooperate on a task if they can communicate with one another first. This may be due to better cooperativeness assessments or promises exchange. They are more cooperative if they can gradually build trust instead of being asked to give extensive help immediately. Direct reciprocity and cooperation in a group can be increased by changing the focus and incentives from intra-group competition to larger-scale competitions, such as between groups or against the general population. Thus, giving grades and promotions based only on an individual's performance relative to a small local group, as is common, may reduce cooperative behaviors in the group.
Indirect reciprocity. Because people avoid poor reciprocators and cheaters, a person's reputation is important. A person esteemed for their reciprocity is more likely to receive assistance, even from individuals they have not directly interacted with before.
Strong reciprocity. This form of reciprocity is expressed by people who invest more resources in cooperation and punishment than what is deemed optimal based on established theories of altruism.
Pseudo-reciprocity. An organism behaves altruistically and the recipient does not reciprocate but has an increased chance of acting in a way that is selfish but also as a byproduct benefits the altruist.
Costly signaling and the handicap principle. Altruism, by diverting resources from the altruist, can act as an "honest signal" of available resources and the skills to acquire them. This may signal to others that the altruist is a valuable potential partner. It may also signal interactive and cooperative intentions, since someone who does not expect to interact further in the future gains nothing from such costly signaling. While it's uncertain if costly signaling can predict long-term cooperative traits, people tend to trust helpers more. Costly signaling loses its value when everyone shares identical traits, resources, and cooperative intentions, but it gains significance as population variability in these aspects increases.
Hunters who share meat display a costly signal of ability. The research found that good hunters have higher reproductive success and more adulterous relations even if they receive no more of the hunted meat than anyone else. Similarly, holding large feasts and giving large donations are ways of demonstrating one's resources. Heroic risk-taking has also been interpreted as a costly signal of ability.
Both indirect reciprocity and costly signaling depend on reputation value and tend to make similar predictions. One is that people will be more helpful when they know that their helping behavior will be communicated to people they will interact with later, publicly announced, discussed, or observed by someone else. This has been documented in many studies. The effect is sensitive to subtle cues, such as people being more helpful when there were stylized eyespots instead of a logo on a computer screen. Weak reputational cues such as eyespots may become unimportant if there are stronger cues present and may lose their effect with continued exposure unless reinforced with real reputational effects. Public displays such as public weeping for dead celebrities and participation in demonstrations may be influenced by a desire to be seen as generous. People who know that they are publicly monitored sometimes even wastefully donate the money they know is not needed by the recipient because of reputational concerns.
Typically, women find altruistic men to be attractive partners. When women look for a long-term partner, altruism may be a trait they prefer as it may indicate that the prospective partner is also willing to share resources with her and her children. Men perform charitable acts in the early stages of a romantic relationship or simply when in the presence of an attractive woman. While both sexes state that kindness is the most preferable trait in a partner, there is some evidence that men place less value on this than women and that women may not be more altruistic in the presence of an attractive man. Men may even avoid altruistic women in short-term relationships, which may be because they expect less success.
People may compete for the social benefit of a burnished reputation, which may cause competitive altruism. On the other hand, in some experiments, a proportion of people do not seem to care about reputation and do not help more, even if this is conspicuous. This may be due to reasons such as psychopathy or that they are so attractive that they need not be seen as altruistic. The reputational benefits of altruism occur in the future compared to the immediate costs of altruism. While humans and other organisms generally place less value on future costs/benefits as compared to those in the present, some have shorter time horizons than others, and these people tend to be less cooperative.
Explicit extrinsic rewards and punishments have sometimes been found to have a counterintuitively inverse effect on behaviors when compared to intrinsic rewards. This may be because such extrinsic incentives may replace (partially or in whole) intrinsic and reputational incentives, motivating the person to focus on obtaining the extrinsic rewards, which may make the thus-incentivized behaviors less desirable. People prefer altruism in others when it appears to be due to a personality characteristic rather than overt reputational concerns; simply pointing out that there are reputational benefits of action may reduce them. This may be used as a derogatory tactic against altruists ("you're just virtue signalling"), especially by those who are non-cooperators. A counterargument is that doing good due to reputational concerns is better than doing no good.
Group selection. It has controversially been argued by some evolutionary scientists such as David Sloan Wilson that natural selection can act at the level of non-kin groups to produce adaptations that benefit a non-kin group, even if these adaptations are detrimental at the individual level. Thus, while altruistic persons may under some circumstances be outcompeted by less altruistic persons at the individual level, according to group selection theory, the opposite may occur at the group level where groups consisting of the more altruistic persons may outcompete groups consisting of the less altruistic persons. Such altruism may only extend to ingroup members while directing prejudice and antagonism against outgroup members (see also in-group favoritism). Many other evolutionary scientists have criticized group selection theory.
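The tit-for-tat strategy mentioned under direct reciprocity above is easy to demonstrate in code. The sketch below implements an iterated prisoner's dilemma with the standard payoff ordering; the round count and opponent pairing are arbitrary illustrative assumptions:

```python
# Iterated prisoner's dilemma with the standard payoffs
# (T=5, R=3, P=1, S=0); "C" cooperates, "D" defects.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the partner's previous move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): exploited only once
```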
Such explanations do not imply that humans consciously calculate how to increase their inclusive fitness when doing altruistic acts. Instead, evolution has shaped psychological mechanisms, such as emotions, that promote certain altruistic behaviors.
The benefits for the altruist may be increased and the costs reduced by being more altruistic towards certain groups. Research has found that people are more altruistic to kin than to non-kin, to friends than to strangers, to the attractive than to the unattractive, to non-competitors than to competitors, and to members of in-groups than to members of out-groups.
The study of altruism was the initial impetus behind George R. Price's development of the Price equation, a mathematical equation used to study genetic evolution. An interesting example of altruism is found in the cellular slime moulds, such as Dictyostelium mucoroides. These protists live as individual amoebae until starved, at which point they aggregate and form a multicellular fruiting body in which some cells sacrifice themselves to promote the survival of other cells in the fruiting body.
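For reference, the Price equation in its usual form (standard notation, not taken from a specific source in this article) splits the change in the population mean of a trait $z$ across generations into a selection term and a transmission term:

$$\bar{w}\,\Delta\bar{z} = \operatorname{cov}(w_i, z_i) + \operatorname{E}\left(w_i\,\Delta z_i\right),$$

where $w_i$ is the fitness of individual (or group) $i$ and $\bar{w}$ is the mean fitness. Altruism, which lowers individual fitness while raising group fitness, enters through the covariance term evaluated at different levels of selection.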
Selective investment theory proposes that close social bonds, and associated emotional, cognitive, and neurohormonal mechanisms, evolved to facilitate long-term, high-cost altruism between those closely depending on one another for survival and reproductive success.
Such cooperative behaviors have sometimes been seen as arguments for left-wing politics, for example, by the Russian zoologist and anarchist Peter Kropotkin in his 1902 book Mutual Aid: A Factor of Evolution and moral philosopher Peter Singer in his book A Darwinian Left.
Neurobiology
Jorge Moll and Jordan Grafman, neuroscientists at the National Institutes of Health and LABS-D'Or Hospital Network, provided the first evidence for the neural bases of altruistic giving in normal healthy volunteers, using functional magnetic resonance imaging. In their research, they showed that both pure monetary rewards and charitable donations activated the mesolimbic reward pathway, a primitive part of the brain that usually responds to food and sex. However, when volunteers generously placed the interests of others before their own by making charitable donations, another brain circuit was also selectively activated: the subgenual cortex/septal region. These structures are related to social attachment and bonding in other species. The experiment suggested that altruism is not a higher moral faculty overpowering innate selfish desires, but a fundamental, ingrained, and enjoyable trait in the brain. One brain region, the subgenual anterior cingulate cortex/basal forebrain, contributes to learning altruistic behavior, especially in people with a propensity for empathy.
Bill Harbaugh, a University of Oregon economist, in an fMRI scanner test conducted with his psychologist colleague Dr. Ulrich Mayr, reached the same conclusions as Jorge Moll and Jordan Grafman about giving to charity, although they were able to divide the study group into two groups: "egoists" and "altruists". One of their discoveries was that, though rarely, even some of those considered "egoists" sometimes gave more than expected because it would help others, leading to the conclusion that there are other factors in charity, such as a person's environment and values.
A recent meta-analysis of fMRI studies conducted by Shawn Rhoads, Jo Cutler, and Abigail Marsh analyzed the results of prior studies of generosity in which participants could freely choose to give or not give resources to someone else. The results of this study confirmed that altruism is supported by distinct mechanisms from giving motivated by reciprocity or by fairness. This study also confirmed that the right ventral striatum is recruited during altruistic giving, as well as the ventromedial prefrontal cortex, bilateral anterior cingulate cortex, and bilateral anterior insula, which are regions previously implicated in empathy.
Abigail Marsh has conducted studies of real-world altruists that have also identified an important role for the amygdala in human altruism. In real-world altruists, such as people who have donated kidneys to strangers, the amygdala is larger than in typical adults. Altruists' amygdalas are also more responsive than those of typical adults to the sight of others' distress, which is thought to reflect an empathic response to distress. This structure may also be involved in altruistic choices due to its role in encoding the value of outcomes for others. This is consistent with the findings of research in non-human animals, which has identified neurons within the amygdala that specifically encode the value of others' outcomes, activity in which appears to drive altruistic choices in monkeys.
Psychology
The International Encyclopedia of the Social Sciences defines psychological altruism as "a motivational state to increase another's welfare". Psychological altruism is contrasted with psychological egoism, which refers to the motivation to increase one's own welfare. In keeping with this, research on real-world altruists, including altruistic kidney donors, bone marrow donors, humanitarian aid workers, and heroic rescuers, finds that these altruists are primarily distinguished from other adults by unselfish traits and decision-making patterns. This suggests that human altruism reflects genuinely high valuation of others' outcomes.
There has been some debate on whether humans are capable of psychological altruism. Some definitions specify a self-sacrificial nature to altruism and a lack of external rewards for altruistic behaviors. However, because altruism ultimately benefits the self in many cases, the selflessness of altruistic acts is difficult to prove. The social exchange theory postulates that altruism only exists when the benefits outweigh the costs to the self.
Daniel Batson, a psychologist, examined this question and argued against the social exchange theory. He identified four significant motives: to ultimately benefit the self (egoism), to ultimately benefit the other person (altruism), to benefit a group (collectivism), or to uphold a moral principle (principlism). Altruism that ultimately serves selfish gains is thus differentiated from selfless altruism, but the general conclusion has been that empathy-induced altruism can be genuinely selfless. The empathy-altruism hypothesis states that psychological altruism exists and is evoked by the empathic desire to help someone suffering. Feelings of empathic concern are contrasted with personal distress, which compels people to reduce their unpleasant emotions and increase their positive ones by helping someone in need. Empathy is thus not selfless, since altruism works either as a way to avoid those negative, unpleasant feelings and have positive, pleasant feelings when triggered by others' need for help, or as a way to gain social reward or avoid social punishment by helping. People with empathic concern help others in distress even when exposure to the situation could be easily avoided, whereas those lacking in empathic concern avoid helping unless it is difficult or impossible to avoid exposure to another's suffering.
Helping behavior is seen in humans from about two years old when a toddler can understand subtle emotional cues.
In psychological research on altruism, studies often observe altruism as demonstrated through prosocial behaviors such as helping, comforting, sharing, cooperation, philanthropy, and community service. People are most likely to help if they recognize that a person is in need and feel personal responsibility for reducing the person's distress. The number of bystanders witnessing pain or suffering affects the likelihood of helping (the Bystander effect). More significant numbers of bystanders decrease individual feelings of responsibility. However, a witness with a high level of empathic concern is likely to assume personal responsibility entirely regardless of the number of bystanders.
Many studies have observed the effects of volunteerism (as a form of altruism) on happiness and health and have consistently found that those who exhibit volunteerism also have better current and future health and well-being. In a study of older adults, those who volunteered had higher life satisfaction and will to live, and less depression, anxiety, and somatization. Volunteerism and helping behavior have not only been shown to improve mental health but physical health and longevity as well, attributable to the activity and social integration it encourages. One study examined the physical health of mothers who volunteered over 30 years and found that 52% of those who did not belong to a volunteer organization experienced a major illness while only 36% of those who did volunteer experienced one. A study on adults aged 55 and older found that during the four-year study period, people who volunteered for two or more organizations had a 63% lower likelihood of dying. After controlling for prior health status, it was determined that volunteerism accounted for a 44% reduction in mortality. Merely being aware of kindness in oneself and others is also associated with greater well-being. A study that asked participants to count each act of kindness they performed for one week significantly enhanced their subjective happiness. Happier people are kinder and more grateful, kinder people are happier and more grateful and more grateful people are happier and kinder, the study suggests.
While research supports the idea that altruistic acts bring about happiness, it has also been found to work in the opposite direction—that happier people are also kinder. The relationship between altruistic behavior and happiness is bidirectional. Studies found that generosity increases linearly from sad to happy affective states.
Feeling over-taxed by the needs of others has negative effects on health and happiness. For example, one study on volunteerism found that feeling overwhelmed by others' demands had an even stronger negative effect on mental health than helping had a positive one (although positive effects were still significant).
Older adults have been found to show higher levels of altruism.
Genetics and environment
Both genetics and environment have been implicated in influencing pro-social or altruistic behavior. Candidate genes include OXTR (polymorphisms in the oxytocin receptor), CD38, COMT, DRD4, DRD5, IGF2, AVPR1A and GABRB2. It is theorized that some of these genes influence altruistic behavior by modulating levels of neurotransmitters such as serotonin and dopamine.
According to Christopher Boehm, altruistic behaviour evolved as a way of surviving within a group.
Sociology
"Sociologists have long been concerned with how to build the good society". The structure of our societies and how individuals come to exhibit charitable, philanthropic, and other pro-social, altruistic actions for the common good is a commonly researched topic within the field. The American Sociology Association (ASA) acknowledges public sociology saying, "The intrinsic scientific, policy, and public relevance of this field of investigation in helping to construct 'good societies' is unquestionable". This type of sociology seeks contributions that aid popular and theoretical understandings of what motivates altruism and how it is organized, and promotes an altruistic focus in order to benefit the world and people it studies.
How altruism is framed, organized, carried out, and what motivates it at the group level is an area of focus that sociologists investigate in order to contribute back to the groups it studies and "build the good society". The motivation of altruism is also the focus of study; for example, one study links the occurrence of moral outrage to altruistic compensation of victims. Studies show that generosity in laboratory and in online experiments is contagious – people imitate the generosity they observe in others.
Religious viewpoints
Most, if not all, of the world's religions promote altruism as a very important moral value. Buddhism, Christianity, Hinduism, Islam, Jainism, Judaism, and Sikhism, etc., place particular emphasis on altruistic morality.
Buddhism
Altruism figures prominently in Buddhism. Love and compassion are components of all forms of Buddhism, and are focused on all beings equally: love is the wish that all beings be happy, and compassion is the wish that all beings be free from suffering. "Many illnesses can be cured by the one medicine of love and compassion. These qualities are the ultimate source of human happiness, and the need for them lies at the very core of our being" (Dalai Lama).
The notion of altruism is modified in such a world-view, since the belief is that such a practice promotes the practitioner's own happiness: "The more we care for the happiness of others, the greater our own sense of well-being becomes" (Dalai Lama).
In Buddhism, a person's actions cause karma, which consists of consequences proportional to the moral implications of their actions. Deeds considered to be bad are punished, while those considered to be good are rewarded.
Jainism
The fundamental principles of Jainism revolve around altruism, not only for other humans but for all sentient beings. Jainism preaches ahimsa – to live and let live, not harming sentient beings, i.e. uncompromising reverence for all life. The first tirthankara, Rishabhdev, introduced the concept of altruism for all living beings, from extending knowledge and experience to others to donation, giving oneself up for others, non-violence, and compassion for all living things.
The principle of nonviolence seeks to minimize karmas which limit the capabilities of the soul. Jainism views every soul as worthy of respect because it has the potential to become Paramatma (God in Jainism). Because all living beings possess a soul, great care and awareness is essential in one's actions. Jainism emphasizes the equality of all life, advocating harmlessness towards all, whether the creatures are great or small. This policy extends even to microscopic organisms. Jainism acknowledges that every person has different capabilities and capacities to practice and therefore accepts different levels of compliance for ascetics and householders.
Christianity
Thomas Aquinas interprets the biblical phrase "You should love your neighbour as yourself" as meaning that love for ourselves is the exemplar of love for others. Considering that "the love with which a man loves himself is the form and root of friendship", he quotes Aristotle's observation that "the origin of friendly relations with others lies in our relations to ourselves". Aquinas concluded that though we are not bound to love others more than ourselves, we naturally seek the common good, the good of the whole, more than any private good, the good of a part. However, he thought we should love God more than ourselves and our neighbours, and more than our bodily life, since the ultimate purpose of loving our neighbour is to share in eternal beatitude: a more desirable thing than bodily well-being. In coining the word "altruism", as stated above, Comte was probably opposing this Thomistic doctrine, which is present in some theological schools within Catholicism. The aim and focus of Christian life is a life that glorifies God, while obeying Christ's command to treat others equally, caring for them and understanding that eternity in heaven is what Jesus' Resurrection at Calvary was all about.
Many biblical authors draw a strong connection between love of others and love of God. 1 John 4 states that for one to love God one must love his fellow man, and that hatred of one's fellow man is the same as hatred of God. Thomas Jay Oord has argued in several books that altruism is but one possible form of love. An altruistic action is not always a loving action. Oord defines altruism as acting for the other's good, and he agrees with feminists who note that sometimes love requires acting for one's own good when the other's demands undermine overall well-being.
German philosopher Max Scheler distinguishes two ways in which the strong can help the weak. One way is a sincere expression of Christian love, "motivated by a powerful feeling of security, strength, and inner salvation, of the invincible fullness of one's own life and existence". Another way is merely "one of the many modern substitutes for love,... nothing but the urge to turn away from oneself and to lose oneself in other people's business". At its worst, Scheler says, "love for the small, the poor, the weak, and the oppressed is really disguised hatred, repressed envy, an impulse to detract, etc., directed against the opposite phenomena: wealth, strength, power, largesse."
Islam
In the Arabic language, "īthār" (إيثار) means "preferring others to oneself".
On the topic of donating blood to non-Muslims (a controversial topic within the faith), the Shia religious professor, Fadhil al-Milani has provided theological evidence that makes it positively justifiable. In fact, he considers it a form of religious sacrifice and ithar (altruism).
For Sufis, īthār means devotion to others through complete forgetfulness of one's own concerns, where concern for others is deemed a demand made by God on the human body, considered to be the property of God alone. The importance of īthār lies in sacrifice for the sake of the greater good; Islam considers those practicing it as abiding by the highest degree of nobility.
This is similar to the notion of chivalry. A constant concern for God results in a careful attitude towards people, animals, and other things in this world.
Judaism
Judaism defines altruism as the desired goal of creation. Rabbi Abraham Isaac Kook stated that love is the most important attribute in humanity. Love is defined as bestowal, or giving, which is the intention of altruism. This can be altruism towards humanity that leads to altruism towards the creator or God. Kabbalah defines God as the force of giving in existence. Rabbi Moshe Chaim Luzzatto focused on the "purpose of creation" and how the will of God was to bring creation into perfection and adhesion with this force of giving.
Modern Kabbalah developed by Rabbi Yehuda Ashlag, in his writings about the future generation, focuses on how society could achieve an altruistic social framework. Ashlag proposed that such a framework is the purpose of creation, and everything that happens is to raise humanity to the level of altruism, love for one another. Ashlag focused on society and its relation to divinity.
Sikhism
Altruism is essential to the Sikh religion. The central faith in Sikhism is that the greatest deed anyone can do is to imbibe and live the godly qualities such as love, affection, sacrifice, patience, harmony, and truthfulness. Seva, or selfless service to the community for its own sake, is an important concept in Sikhism.
The fifth Guru, Guru Arjun, sacrificed his life to uphold "22 carats of pure truth, the greatest gift to humanity", according to the Guru Granth Sahib. The ninth Guru, Tegh Bahadur, sacrificed his life to protect weak and defenseless people against atrocity.
In the late seventeenth century, Guru Gobind Singh (the tenth Guru in Sikhism), was at war with the Mughal rulers to protect the people of different faiths when a fellow Sikh, Bhai Kanhaiya, attended the troops of the enemy. He gave water to both friends and foes who were wounded on the battlefield. Some of the enemy began to fight again and some Sikh warriors were annoyed by Bhai Kanhaiya as he was helping their enemy. Sikh soldiers brought Bhai Kanhaiya before Guru Gobind Singh, and complained of his action that they considered counterproductive to their struggle on the battlefield. "What were you doing, and why?" asked the Guru. "I was giving water to the wounded because I saw your face in all of them", replied Bhai Kanhaiya. The Guru responded, "Then you should also give them ointment to heal their wounds. You were practicing what you were coached in the house of the Guru."
Under the tutelage of the Guru, Bhai Kanhaiya subsequently founded a volunteer corps for altruism, which is still engaged today in doing good to others and in training new recruits for this service.
Hinduism
In Hinduism, selflessness, love, kindness, and forgiveness are considered the highest acts of humanity. Giving alms to beggars or the poor is considered a divine act, and Hindus believe it will free their souls from guilt and lead them to heaven in the afterlife. Altruism is also central to various Hindu mythological stories and religious poems and songs. Mass donation of clothes to the poor, blood donation camps, and mass food donations for the poor are common in various Hindu religious ceremonies.
The Bhagavad Gita supports the doctrine of karma yoga (achieving oneness with God through action) and Nishkama Karma or action without expectation or desire for personal gain which can be said to encompass altruism. Altruistic acts are generally celebrated and well received in Hindu literature and are central to Hindu morality.
Philosophy
There is a wide range of philosophical views on humans' obligations or motivations to act altruistically. Proponents of ethical altruism maintain that individuals are morally obligated to act altruistically. The opposing view is ethical egoism, which maintains that moral agents should always act in their own self-interest. Both ethical altruism and ethical egoism contrast with utilitarianism, which maintains that each agent should act to maximise overall benefit, weighing their own good and that of others alike.
A related concept in descriptive ethics is psychological egoism, the thesis that humans always act in their own self-interest and that true altruism is impossible. Rational egoism is the view that rationality consists in acting in one's self-interest (without specifying how this affects one's moral obligations).
In his book I am You: The Metaphysical Foundations for Global Ethics, Daniel Kolak argues that open individualism provides a rational basis for altruism. According to Kolak, egoism is incoherent because the concept of a future self is incoherent, similar to the idea of anattā in Buddhist philosophy, and everyone is in reality the same being. Derek Parfit made similar arguments in the book Reasons and Persons, using thought experiments such as the teletransportation paradox to illustrate the philosophical problems with personal identity.
Effective altruism
Effective altruism is a philosophy and social movement that uses evidence and reasoning to determine the most effective ways to benefit others. Effective altruism encourages individuals to consider all causes and actions and to act in the way that brings about the greatest positive impact, based upon their values. It is the broad, evidence-based, and cause-neutral approach that distinguishes effective altruism from traditional altruism or charity. Effective altruism is part of the larger movement towards evidence-based practices.
While a substantial proportion of effective altruists have focused on the nonprofit sector, the philosophy of effective altruism applies more broadly to prioritizing the scientific projects, companies, and policy initiatives which can be estimated to save lives, help people, or otherwise have the biggest benefit. People associated with the movement include philosopher Peter Singer, Facebook co-founder Dustin Moskovitz, Cari Tuna, Oxford-based researchers William MacAskill and Toby Ord, and professional poker player Liv Boeree.
Extreme altruism
Pathological altruism
Pathological altruism is altruism taken to an unhealthy extreme, such that it either harms the altruistic person or the person's well-intentioned actions cause more harm than good.
The term "pathological altruism" was popularised by the book Pathological Altruism.
Examples include depression and burnout seen in healthcare professionals, an unhealthy focus on others to the detriment of one's own needs, animal hoarding, and ineffective philanthropic and social programs that ultimately worsen the situations they are meant to aid.
Extreme altruism, also known as costly altruism, extraordinary altruism, or heroic behaviour (to be distinguished from heroism), refers to selfless acts directed at a stranger that significantly exceed ordinary altruistic behaviour, often involving risks or great cost to the altruists themselves. Since acts of extreme altruism are often directed towards strangers, many commonly accepted models of simple altruism appear inadequate in explaining this phenomenon.
One of the earliest concepts was introduced by Wilson in 1976, who referred to it as "hard-core" altruism: a form characterised by impulsive actions directed towards others, typically strangers, and lacking any incentive of reward. Since then, several papers have mentioned the possibility of such altruism.
In the 21st century, progress in the field slowed owing to the adoption of ethical guidelines that restrict exposing research participants to costly or risky decisions (see the Declaration of Helsinki). Consequently, much research has been based on living organ donations and on the actions of Carnegie Hero Medal recipients: acts involving high risk and high cost that occur infrequently. A typical example of extreme altruism would be non-directed kidney donation, in which a living person donates one of their kidneys to a stranger without any benefit and without knowing the recipient.
However, such research can only be carried out on the small population that meets the criteria for extreme altruism, and it typically relies on self-report, which can introduce self-report biases. Owing to these limitations, the gap between high-stakes and normal altruism remains poorly understood.
Characteristics of extreme altruists
Norms
In 1970, Schwartz hypothesised that extreme altruism is positively related to a person's moral norms and is not influenced by the cost associated with the action. This hypothesis was supported in the same study examining bone marrow donors. Schwartz discovered that individuals with strong personal norms and those who attribute more responsibility to themselves are more inclined to participate in bone marrow donation. Similar findings were observed in a 1986 study by Piliavin and Libby focusing on blood donors. These studies suggest that personal norms lead to the activation of moral norms, leading individuals to feel compelled to help others.
Enhanced Fear Recognition
Abigail Marsh has described psychopaths as the "opposite" of extreme altruists and has conducted several studies comparing the two groups. Utilising techniques such as brain imaging and behavioural experiments, Marsh's team observed that kidney donors tend to have larger amygdalae and are better at recognising fearful expressions than psychopathic individuals. Furthermore, an improved ability to recognise fear has been associated with an increase in prosocial behaviours, including greater charitable contributions.
Fast decisions when performing acts of extreme altruism
Rand and Epstein explored the behaviours of 51 Carnegie Hero Medal recipients, demonstrating that extreme altruistic behaviours often stem from System 1 of dual process theory, the system that produces rapid and intuitive responses. Additionally, a separate study by Carlson et al. indicated that such prosocial behaviours are prevalent in emergencies where immediate action is required.
This discovery has led to ethical debates, particularly in the context of living organ donation, where laws differ by country. As observed in extreme altruists, these decisions are made intuitively, which may reflect insufficient deliberation. Critics question whether such rapid decisions encompass a thorough cost-benefit analysis, and whether it is appropriate to expose donors to such risk.
Social discounting
One finding suggests that extreme altruists exhibit lower levels of social discounting than others; that is, extreme altruists place a higher value on the welfare of strangers than a typical person does.
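Social discounting is often modelled in this literature with a hyperbolic function of social distance (as in Jones and Rachlin's work): the amount a person will forgo for someone at social distance N falls off as v = V / (1 + kN), where k is that person's discount rate. The sketch below uses invented k values purely to illustrate how a lower k leaves a stranger's welfare weighted almost as heavily as a friend's.

    # Hedged sketch of hyperbolic social discounting; the k values are hypothetical.
    def discounted_value(V, k, N):
        # Value of a reward V to the giver when the recipient sits at
        # social distance N (1 = closest friend, 100 = a stranger).
        return V / (1 + k * N)

    typical_k, extreme_altruist_k = 0.05, 0.005  # illustrative discount rates

    for N in (1, 10, 100):
        print(N,
              round(discounted_value(100, typical_k, N), 1),
              round(discounted_value(100, extreme_altruist_k, N), 1))

With the lower discount rate, the value assigned to a stranger's reward (N = 100) remains close to that assigned to a close friend's, which is what a shallower social-discounting curve means in practice.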
Low socioeconomic status
Analysis of 676 Carnegie Hero Award recipients and another study of 243 rescuing acts reveal that a significant proportion of rescuers come from lower socioeconomic backgrounds. Johnson attributes this distribution to the high-risk occupations that are more prevalent among lower socioeconomic groups. Another hypothesis, proposed by Lyons, is that individuals from these groups may perceive that they have less to lose when engaging in high-risk extreme altruistic behaviours.
Possible explanations
Evolutionary theories such as kin selection, reciprocity, vested interest, and punishment either contradict or do not fully explain the concept of extreme altruism. As a result, considerable research has sought a separate explanation for this behaviour.
Costly Signalling Theory for Extreme Behaviours
Research suggests that males are more likely to engage in heroic and risk-taking behaviours because females prefer such traits. These extreme altruistic behaviours may serve as an unconscious "signal" showcasing power and ability superior to that of ordinary individuals. When an extreme altruist survives a high-risk situation, they send an "honest signal" of quality. Three qualities hypothesised to be exhibited by extreme altruists, which could be interpreted as "signals", are: (1) traits that are difficult to fake, (2) a willingness to help, and (3) generous behaviours.
Empathy-Altruism Hypothesis
The empathy-altruism hypothesis appears to align with the concept of extreme altruism without contradiction. The hypothesis is supported by brain-imaging research indicating that extreme altruists demonstrate higher levels of empathic concern. This empathic concern triggers activation in specific brain regions, urging the individual to engage in heroic behaviours.
Mistakes and Outliers
While most altruistic behaviours offer some form of benefit, extreme altruism may sometimes result from a mistake, in which the victim does not reciprocate. Considering the impulsive character of extreme altruists, some researchers suggest that these individuals make erroneous judgements in their cost-benefit analysis. Furthermore, extreme altruism might be a rare variant of altruism lying towards the ends of a normal distribution. In the US, the annual prevalence rate per capita is less than 0.00005%, which shows the rarity of such behaviours.
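Taking the reported figure at face value, the implied count is small: 0.00005% = 5 × 10⁻⁷, so for a US population of roughly 330 million (an assumed round figure used only for illustration), 5 × 10⁻⁷ × 3.3 × 10⁸ ≈ 165 such acts per year nationwide.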
Digital altruism
Digital altruism is the notion that some are willing to freely share information based on the principle of reciprocity and in the belief that in the end, everyone benefits from sharing information via the Internet.
There are three types of digital altruism: (1) "everyday digital altruism", involving expedience, ease, moral engagement, and conformity; (2) "creative digital altruism", involving creativity, heightened moral engagement, and cooperation; and (3) "co-creative digital altruism", involving creativity, moral engagement, and meta-cooperative efforts.
See also
Further reading
Cappelen, Alexander W.; Enke, Benjamin; Tungodden, Bertil (2025). "Universalism: Global Evidence". American Economic Review. 115 (1): 43–76.
Notes
References
External links
|
;Auguste Comte;Defence mechanisms;Interpersonal relationships;Moral psychology;Morality;Philanthropy;Social philosophy
|
https://en.wikipedia.org/wiki/Astronomer
|
An astronomer is a scientist in the field of astronomy who focuses on a specific question or field outside the scope of Earth. Astronomers observe astronomical objects, such as stars, planets, moons, comets and galaxies – in either observational (by analyzing the data) or theoretical astronomy. Examples of topics or fields astronomers study include planetary science, solar astronomy, the origin or evolution of stars, or the formation of galaxies. A related but distinct subject is physical cosmology, which studies the Universe as a whole.
Types
Astronomers typically fall under either of two main types: observational and theoretical. Observational astronomers make direct observations of celestial objects and analyze the data. In contrast, theoretical astronomers create and investigate models of things that cannot be observed. Because it takes millions to billions of years for a system of stars or a galaxy to complete a life cycle, astronomers must observe snapshots of different systems at unique points in their evolution to determine how they form, evolve, and die. They use this data to create models or simulations to theorize how different celestial objects work.
Further subcategories under these two main branches of astronomy include planetary astronomy, astrobiology, stellar astronomy, astrometry, galactic astronomy, extragalactic astronomy, and physical cosmology. Astronomers can also specialize in particular areas of observational astronomy, such as infrared astronomy, neutrino astronomy, X-ray astronomy, and gravitational-wave astronomy.
Academic
History
Historically, astronomy was more concerned with the classification and description of phenomena in the sky, while astrophysics attempted to explain these phenomena and the differences between them using physical laws. Today, that distinction has mostly disappeared and the terms "astronomer" and "astrophysicist" are interchangeable. Professional astronomers are highly educated individuals who typically have a PhD in physics or astronomy and are employed by research institutions or universities. They spend the majority of their time working on research, although they quite often have other duties such as teaching, building instruments, or aiding in the operation of an observatory.
The American Astronomical Society, which is the major organization of professional astronomers in North America, has approximately 8,200 members (as of 2024). This number includes scientists from other fields such as physics, geology, and engineering, whose research interests are closely related to astronomy. The International Astronomical Union comprises about 12,700 members from 92 countries who are involved in astronomical research at the PhD level and beyond (as of 2024).
Contrary to the classical image of an old astronomer peering through a telescope during the dark hours of the night, it is far more common to use a charge-coupled device (CCD) camera to record a long, deep exposure, allowing a more sensitive image to be created because the light is added over time. Before CCDs, photographic plates were a common method of observation. Modern astronomers spend relatively little time at telescopes, usually just a few weeks per year. Analysis of observed phenomena, along with making predictions as to the causes of what they observe, takes up the majority of observational astronomers' time.
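The sensitivity gain from long or stacked exposures follows from simple statistics: the astronomical signal accumulates linearly with the number of frames, while uncorrelated noise grows only as the square root, so the signal-to-noise ratio improves roughly as the square root of N. A minimal simulation of this idea (illustrative numbers, not from any real instrument):

    # Hedged sketch: why co-adding N CCD frames improves SNR by about sqrt(N).
    import numpy as np

    rng = np.random.default_rng(0)
    signal_per_frame, noise_sigma, n_frames = 5.0, 20.0, 100

    # Each frame = a faint constant signal plus independent Gaussian noise.
    frames = signal_per_frame + rng.normal(0.0, noise_sigma, size=n_frames)

    single_snr = signal_per_frame / noise_sigma                     # ~0.25
    stacked_snr = frames.sum() / (noise_sigma * np.sqrt(n_frames))  # ~2.5

    print(f"single-frame SNR ~ {single_snr:.2f}")
    print(f"{n_frames}-frame stacked SNR ~ {stacked_snr:.2f}")  # ~sqrt(100) = 10x gain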
Activities and graduate degree training
Astronomers who serve as faculty spend much of their time teaching undergraduate and graduate classes. Most universities also have outreach programs, including public telescope time and sometimes planetariums, as a public service to encourage interest in the field.
Those who become astronomers usually have a broad background in physics, mathematics, sciences, and computing in high school. Taking courses that teach how to research, write, and present papers is part of the higher education of an astronomer; most astronomers attain both a master's degree and eventually a PhD in astronomy, physics, or astrophysics.
PhD training typically involves 5–6 years of study, including completion of upper-level courses in the core sciences, a competency examination, experience with teaching undergraduates and participating in outreach programs, work on research projects under the student's supervising professor, completion of a PhD thesis, and passing a final oral exam. Throughout the PhD training, a successful student is financially supported with a stipend.
Amateur astronomers
While there is a relatively low number of professional astronomers, the field is popular among amateurs. Most cities have amateur astronomy clubs that meet on a regular basis and often host star parties. The Astronomical Society of the Pacific is the largest general astronomical society in the world, comprising both professional and amateur astronomers as well as educators from 70 different nations.
As with any hobby, most people who practice amateur astronomy may devote a few hours a month to stargazing and reading the latest developments in research. However, amateurs span the range from so-called "armchair astronomers" to people who own science-grade telescopes and instruments with which they are able to make their own discoveries, create astrophotographs, and assist professional astronomers in research.
Sources
|
;Astronomy;Science occupations
|
https://en.wikipedia.org/wiki/Apollo
|
Apollo is one of the Olympian deities in ancient Greek and Roman religion and Greek and Roman mythology. Apollo has been recognized as a god of archery, music and dance, truth and prophecy, healing and diseases, the Sun and light, poetry, and more. One of the most important and complex of the Greek gods, he is the son of Zeus and Leto, and the twin brother of Artemis, goddess of the hunt. He is considered to be the most beautiful god and is represented as the ideal of the kouros (ephebe, or a beardless, athletic youth). Apollo is known in Greek-influenced Etruscan mythology as Apulu.
As the patron deity of Delphi (Apollo Pythios), Apollo is an oracular god—the prophetic deity of the Delphic Oracle and also the deity of ritual purification. His oracles were often consulted for guidance in various matters. He was in general seen as the god who affords help and wards off evil, and is referred to as alexikakos, the "averter of evil". Medicine and healing are associated with Apollo, whether through the god himself or mediated through his son Asclepius. Apollo delivered people from epidemics, yet he is also a god who could bring ill health and deadly plague with his arrows. The invention of archery itself is credited to Apollo and his sister Artemis. Apollo is usually described as carrying a silver or golden bow and a quiver of arrows.
As the god of mousike, Apollo presides over all music, songs, dance, and poetry. He is the inventor of string-music and the frequent companion of the Muses, functioning as their chorus leader in celebrations. The lyre is a common attribute of Apollo. Protection of the young is one of the best attested facets of his panhellenic cult persona. As a kourotrophos, Apollo is concerned with the health and education of children, and he presided over their passage into adulthood. Long hair, which was the prerogative of boys, was cut at the coming of age (ephebeia) and dedicated to Apollo. The god himself is depicted with long, uncut hair to symbolise his eternal youth.
Apollo is an important pastoral deity, and he was the patron of herdsmen and shepherds. Protection of herds, flocks and crops from diseases, pests and predators were his primary rustic duties. On the other hand, Apollo also encouraged the founding of new towns and the establishment of civil constitutions, is associated with dominion over colonists, and was the giver of laws. His oracles were often consulted before setting laws in a city. Apollo Agyieus was the protector of the streets, public places and home entrances.
In Hellenistic times, especially during the 3rd century BCE, as Apollo Helios he became identified among Greeks with Helios, the personification of the Sun. Although Latin theological works from at least the 1st century BCE identified Apollo with Sol, there was no conflation between the two among the classical Latin poets until the 1st century CE.
Etymology
Apollo (Attic, Ionic, and Homeric Greek: Ἀπόλλων, Apollōn; Doric: Ἀπέλλων, Apellōn; Arcadocypriot: Ἀπείλων, Apeilōn; Aeolic: Ἄπλουν, Aploun; Latin: Apollō)
The name Apollo—unlike the related older name Paean—is generally not found in the Linear B (Mycenaean Greek) texts, although there is a possible attestation in the lacunose form ]pe-rjo-[ on the KN E 842 tablet, though it has also been suggested that the name might actually read "Hyperion" ([u]pe-rjo-[ne]).
The etymology of the name is uncertain. The spelling Ἀπόλλων (Apollōn) had almost superseded all other forms by the beginning of the common era, but the Doric form, Ἀπέλλων (Apellōn), is more archaic. It probably is a cognate of the Doric month name Apellaios (Ἀπελλαῖος) and the offerings apellaia (ἀπελλαῖα) at the initiation of the young men during the family festival apellai (ἀπέλλαι). According to some scholars, the words are derived from the Doric word apella (ἀπέλλα), which originally meant "wall", "fence for animals" and later "assembly within the limits of the square". Apella (Ἀπέλλα) is the name of the popular assembly in Sparta, corresponding to the ecclesia (ἐκκλησία). R. S. P. Beekes rejected the connection of the theonym with the noun apellai and suggested a Pre-Greek proto-form *Apalyun.
Several instances of popular etymology are attested by ancient authors. Thus, the Greeks most often associated Apollo's name with the Greek verb apollymi (ἀπόλλυμι), "to destroy". Plato in Cratylus connects the name with apolysis (ἀπόλυσις), "redemption", with apolousis (ἀπόλουσις), "purification", and with haploun (ἁπλοῦν), "simple", in particular in reference to the Thessalian form of the name, Aploun, and finally with aeiballon (ἀειβάλλων), "ever-shooting". Hesychius connects the name Apollo with the Doric apella (ἀπέλλα), which means "assembly", so that Apollo would be the god of political life, and he also gives the explanation sekos (σηκός), "fold", in which case Apollo would be the god of flocks and herds. In the ancient Macedonian language pella (πέλλα) means "stone", and some toponyms may be derived from this word: Pella (Πέλλα, the capital of ancient Macedonia) and Pellēnē (Πελλήνη/Pellene).
The Hittite form Apaliunas is attested in the Manapa-Tarhunta letter. The Hittite testimony reflects an early form *Apeljōn, which may also be surmised from the comparison of Cypriot Ἀπείλων with Doric Ἀπέλλων. The name of the Lydian god Qλdãns may reflect an earlier form of the name before palatalization, syncope, and a pre-Lydian sound change. Note the labiovelar in place of the labial /p/ found in the pre-Doric and Hittite forms. A Luwian etymology suggested for Apaliunas makes Apollo "The One of Entrapment", perhaps in the sense of "Hunter".
Greco-Roman epithets
Apollo's chief epithet was Phoebus (Φοῖβος, Phoibos), literally "bright". It was very commonly used by both the Greeks and Romans for Apollo's role as the god of light. Like other Greek deities, he had a number of other epithets applied to him, reflecting the variety of roles, duties, and aspects ascribed to the god. However, while Apollo has a great number of appellations in Greek myth, only a few occur in Latin literature.
Sun
Aegletes (Aiglētēs), from aiglē, "light of the Sun"
Helius (Helios), literally "Sun"
Lyceus (Lykeios, from Proto-Greek *lúkē), "light". The meaning of the epithet "Lyceus" later became associated with Apollo's mother Leto, who was the patron goddess of Lycia and who was identified with the wolf (lykos).
Phanaeus (Phanaios), literally "giving or bringing light"
Phoebus (Phoibos), literally "bright", his most commonly used epithet by both the Greeks and Romans
Sol (Roman), "Sun" in Latin
Wolf
Lycegenes (Lukēgenēs), literally "born of a wolf" or "born of Lycia"
Lycoctonus (Lykoktonos), from lykos, "wolf", and kteinein, "to kill"
Origin and birth
Apollo's birthplace was Mount Cynthus on the island of Delos.
Cynthius (Kunthios), literally "Cynthian"
Cynthogenes (Kynthogenēs), literally "born of Cynthus"
Delius (Delios), literally "Delian"
Didymaeus (Didymaios), from didymos (δίδυμος), "twin", as the twin of Artemis
Place of worship
Delphi and Actium were his primary places of worship.
Acraephius (Akraiphios), literally "Acraephian", or Acraephiaeus (Akraiphiaios), "Acraephian", from the Boeotian town of Acraephia, reputedly founded by his son Acraepheus.
Actiacus (Aktiakos), literally "Actian", after Actium
Delphinius (Delphinios), literally "Delphic", after Delphi (Δελφοί). An etiology in the Homeric Hymns associated this with dolphins.
Epactaeus, meaning "god worshipped on the coast", in Samos.
Pythius (Puthios, from Πυθώ, Pythō), from the region around Delphi
Smintheus (Smintheus), "Sminthian"—that is, "of the town of Sminthos or Sminthe" near the Troad town of Hamaxitus
Napaian Apollo, from the city of Nape on the island of Lesbos
Eutresites, from the city of Eutresis.
Ixios (Ἴξιος), derived from a district in Rhodes called Ixiae or Ixia.
Healing and disease
Acesius (Akesios), from akesis, "healing". Acesius was the epithet of Apollo worshipped in Elis, where he had a temple in the agora.
Acestor (Akestōr), literally "healer"
Culicarius (Roman), from Latin culicārius, "of midges"
Iatrus (Iātros), literally "physician"
Medicus (Roman), "physician" in Latin. A temple was dedicated to Apollo Medicus in Rome, probably next to the temple of Bellona.
Paean (Paiān), physician, healer
Parnopius (Parnopios), from parnops, "locust"
Founder and protector
Agyieus (Aguīeus), from aguia, "street", for his role in protecting roads and homes
Alexicacus (Alexikakos), literally "warding off evil"
Apotropaeus (Apotropaios), from apotrepein, "to avert"
Archegetes (Arkhēgetēs), literally "founder"
Averruncus (Roman), from Latin āverruncare, "to avert"
Clarius (Klārios), from Doric klāros, "allotted lot"
Epicurius (Epikourios), from epikourein, "to aid"
Genetor (Genetōr), literally "ancestor"
Nomius (Nomios), literally "pastoral"
Nymphegetes (Numphēgetēs), from Numphē, "Nymph", and hēgetēs, "leader", for his role as a protector of shepherds and pastoral life
Patroos (Patrōios), from patrōios, "related to one's father", for his role as father of Ion and founder of the Ionians, as worshipped at the Temple of Apollo Patroos in Athens
Sauroctonus (Sauroctonos), "lizard-killer", possibly a reference to his killing of Python
Prophecy and truth
Coelispex (Roman), from Latin coelum, "sky", and specere, "to look at"
Iatromantis (Iātromantis), from iatros, "physician", and mantis, "prophet", referring to his role as a god both of healing and of prophecy
Leschenorius (Leskhēnorios), "converser"
Loxias (Loxias), from legein, "to say", historically associated with loxos, "ambiguous"
Manticus (Mantikos), literally "prophetic"
Proopsios, meaning "foreseer" or "first seen"
Music and arts
Musagetes (Doric Mousāgetās), from Mousa, "Muse", and hēgetēs, "leader"
Musegetes (Mousēgetēs), as the preceding
Archery
Aphetor (Aphētōr), from aphiēmi, "to let loose"
Aphetorus (Aphētoros), as the preceding
Arcitenens (Roman), literally "bow-carrying"
Argyrotoxus (Argyrotoxos), literally "with silver bow"
Clytotoxus (Klytótoxos), "he who is famous for his bow", the renowned archer.
Hecaërgus (Hekaergos), literally "far-shooting"
Hecebolus (Hekēbolos), "far-shooting"
Ismenius (Ismēnios), literally "of Ismenus", after Ismenus, the son of Amphion and Niobe, whom he struck with an arrow
Appearance
Acersecomes (Akersekómēs), "he who has unshorn hair", the eternal ephebe.
Chrysocomes (Khrusokómēs), literally "he who has golden hair".
Amazons
Amazonius (Amazonios): Pausanias, in his Description of Greece, writes that near Pyrrhichus there was a sanctuary of Apollo called Amazonius, with an image of the god said to have been dedicated by the Amazons.
Other
Boedromius (Boēdromios) was a surname of Apollo in Athens, with varying explanations for its origin. Some claim that the god was given this name because he helped the Athenians overcome the Amazons in their battle, which took place on the seventh of Boedromion, the day on which the Boedromia were later commemorated. Others claim that the name originated because, in the battle between Eumolpus and Erechtheus and Ion, Apollo had counselled the Athenians to charge the enemy with a war cry (boē, Βοή) if they were going to win.
Celtic epithets and cult titles
Apollo was worshipped throughout the Roman Empire. In the traditionally Celtic lands, he was most often seen as a healing and sun god. He was often equated with Celtic gods of similar character.
Apollo Atepomarus ("the great horseman" or "possessing a great horse"). Apollo was worshipped at Mauvières (Indre). Horses were, in the Celtic world, closely linked to the Sun.
Apollo Belenus ("bright" or "brilliant"). This epithet was given to Apollo in parts of Gaul, Northern Italy and Noricum (part of modern Austria). Apollo Belenus was a healing and sun god.
Apollo Cunomaglus ("hound lord"). A title given to Apollo at a shrine at Nettleton Shrub, Wiltshire. May have been a god of healing. Cunomaglus himself may originally have been an independent healing god.
Apollo Grannus. Grannus was a healing spring god, later equated with Apollo.
Apollo Maponus. A god known from inscriptions in Britain. This may be a local fusion of Apollo and Maponus.
Apollo Moritasgus ("masses of sea water"). An epithet for Apollo at Alesia, where he was worshipped as the god of healing and, possibly, of physicians.
Apollo Vindonnus ("clear light"). Apollo Vindonnus had a temple at Essarois, near Châtillon-sur-Seine in present-day Burgundy. He was a god of healing, especially of the eyes.
Apollo Virotutis ("benefactor of mankind"). Apollo Virotutis was worshipped, among other places, at Fins d'Annecy (Haute-Savoie) and at Jublains (Maine-et-Loire).
Origins
Apollo is considered the most Hellenic (Greek) of the Olympian gods.
The cult centers of Apollo in Greece, Delphi and Delos, date from the 8th century BCE. The Delos sanctuary was primarily dedicated to Artemis, Apollo's twin sister. At Delphi, Apollo was venerated as the slayer of the monstrous serpent Python. For the Greeks, Apollo was the most Greek of all the gods, and through the centuries he acquired different functions. In Archaic Greece he was the prophet, the oracular god who in older times was connected with "healing". In Classical Greece he was the god of light and of music, but in popular religion he had a strong function to keep away evil. Walter Burkert discerned three components in the prehistory of Apollo worship, which he termed "a Dorian-northwest Greek component, a Cretan-Minoan component, and a Syro-Hittite component."
Healer and god-protector from evil
In classical times, his major function in popular religion was to keep away evil, and he was therefore called "apotropaios" (ἀποτρόπαιος, "averting evil") and "alexikakos" (ἀλεξίκακος, "keeping off ill"; from the verb ἀλέξω, "to ward off", and the noun κακόν, "evil"). Apollo also had many epithets relating to his function as a healer. Some commonly used examples are "paion" (παιών, literally "healer" or "helper"), "epikourios" (ἐπικούριος, "succouring"), "oulios" (οὔλιος, "healer, baleful") and "loimios" (λοίμιος, "of the plague"). In later writers, the word "paion", usually spelled "Paean", becomes a mere epithet of Apollo in his capacity as a god of healing.
Apollo in his aspect of "healer" has a connection to the primitive god Paean (Παιάν), who did not have a cult of his own. Paean serves as the healer of the gods in the Iliad, and seems to have originated in a pre-Greek religion. It is suggested, though unconfirmed, that he is connected to the Mycenaean figure pa-ja-wo-ne, attested in Linear B. Paean was the personification of holy songs sung by "seer-doctors" (iatromanteis), which were supposed to cure disease.
Homer uses the noun Paeon to designate both a god and that god's characteristic song of apotropaic thanksgiving and triumph. Such songs were originally addressed to Apollo and afterwards to other gods: to Dionysus, to Apollo Helios, to Apollo's son Asclepius the healer. About the 4th century BCE, the paean became merely a formula of adulation; its object was either to implore protection against disease and misfortune or to offer thanks after such protection had been rendered. It was in this way that Apollo had become recognized as the god of music. Apollo's role as the slayer of the Python led to his association with battle and victory; hence it became the Roman custom for a paean to be sung by an army on the march and before entering into battle, when a fleet left the harbour, and also after a victory had been won.
In the Iliad, Apollo is the healer among the gods, but he is also the bringer of disease and death with his arrows, similar to the function of the Vedic god of disease Rudra. He sends a plague (loimos) to the Achaeans. Knowing that Apollo can prevent a recurrence of the plague he sent, they purify themselves in a ritual and offer him a large sacrifice of cows, called a hecatomb.
Dorian origin
The Homeric Hymn to Apollo depicts Apollo as an intruder from the north. The connection with the northern-dwelling Dorians and their initiation festival apellai is reinforced by the month Apellaios in northwest Greek calendars. The family festival was dedicated to Apollo (Doric: Apellōn). Apellaios is the month of these rites, and Apellon is the "megistos kouros" (the great Kouros). However, this can explain only the Doric type of the name, which is connected with the Ancient Macedonian word "pella" (Pella), "stone". Stones played an important part in the cult of the god, especially in the oracular shrine of Delphi (Omphalos).
Minoan origin
George Huxley considered the identification of Apollo with the Minoan deity Paiawon, worshipped in Crete, to have originated at Delphi. In the Homeric Hymn, Apollo appears as a dolphin carrying Cretan priests to Delphi, to which site they evidently transfer their religious practices. Apollo Delphinios or Delphidios was a sea-god worshipped especially in Crete and in the islands. Apollo's sister Artemis, who was the Greek goddess of hunting, is identified with the Minoan goddess Britomartis (Diktynna), and with Laphria the Pre-Greek "mistress of the animals" who was specially worshipped at Delphi. In her earliest depictions she was accompanied by the "Master of the animals", a bow-wielding god of hunting whose name has been lost; aspects of this figure may have been absorbed into the more popular Apollo. A family of priests at Delphi was named "Lab(r)yaden". The name may derive from Laphria.
Anatolian origin
A non-Greek origin of Apollo has long been assumed in scholarship. The name of Apollo's mother Leto has Lydian origin, and she was worshipped on the coasts of Asia Minor. The oracular cult of inspiration was probably introduced into Greece from Anatolia, which is the origin of the Sibyl and where some of the oldest oracular shrines originated. Omens, symbols, purifications, and exorcisms appear in old Assyro-Babylonian texts. These rituals spread into the empire of the Hittites, and from there into Greece.
Homer pictures Apollo on the side of the Trojans, fighting against the Achaeans, during the Trojan War. He is pictured as a terrible god, less trusted by the Greeks than other gods. The god seems to be related to Appaliunas, a tutelary god of Wilusa (Troy) in Asia Minor, but the word is not complete. The stones found in front of the gates of Homeric Troy were the symbols of Apollo. A western Anatolian origin may also be bolstered by references to the parallel worship of Artimus (Artemis) and Qλdãns, whose name may be cognate with the Hittite and Doric forms, in surviving Lydian texts. However, recent scholars have cast doubt on the identification of Qλdãns with Apollo.
The Greeks gave him the name agyieus as the protector god of public places and houses who wards off evil, and his symbol was a tapered stone or column. However, while Greek festivals were usually celebrated at the full moon, all the feasts of Apollo were celebrated on the seventh day of the month, and the emphasis given to that day (sibutu) indicates a Babylonian origin.
Proto-Indo-European
The Vedic Rudra has some functions similar to those of Apollo. The terrible god is called "the archer", and the bow is also an attribute of Shiva. Rudra could bring diseases with his arrows, but he was able to free people of them, and his alternative form Shiva is a healer physician god. However, the Indo-European component of Apollo does not explain his strong association with omens, exorcisms, and an oracular cult.
Oracular cult
Unusually among the Olympic deities, Apollo had two cult sites that had widespread influence: Delos and Delphi. In cult practice, Delian Apollo and Pythian Apollo (the Apollo of Delphi) were so distinct that they might both have shrines in the same locality. Lycia was sacred to the god, and for this Apollo was also called Lycian. Apollo's cult was already fully established when written sources commenced, about 650 BCE. Apollo became extremely important to the Greek world as an oracular deity in the archaic period, and the frequency of theophoric names such as Apollodorus or Apollonios and of cities named Apollonia testifies to his popularity. Oracular sanctuaries to Apollo were established at other sites. In the 2nd and 3rd century CE, those at Didyma and Claros pronounced the so-called "theological oracles", in which Apollo confirms that all deities are aspects or servants of an all-encompassing, highest deity. "In the 3rd century, Apollo fell silent. Julian the Apostate (359–361) tried to revive the Delphic oracle, but failed."
Oracular shrines
Apollo had a famous oracle in Delphi, and other notable ones in Claros and Didyma. His oracular shrine in Abae in Phocis, where he bore the toponymic epithet Abaeus (, Apollon Abaios), was important enough to be consulted by Croesus. His oracular shrines include:
Abae in Phocis.
Bassae in the Peloponnese.
At Clarus, on the west coast of Asia Minor, there was, as at Delphi, a holy spring which gave off a pneuma, from which the priests drank.
In Corinth, the Oracle of Corinth came from the town of Tenea, from prisoners supposedly taken in the Trojan War.
At Khyrse, in Troad, the temple was built for Apollo Smintheus.
In Delos, there was an oracle to the Delian Apollo, during summer. The Hieron (Sanctuary) of Apollo adjacent to the Sacred Lake, was the place where the god was said to have been born.
In Delphi, the Pythia became filled with the pneuma of Apollo, said to come from a spring inside the Adyton.
In Didyma, an oracle on the coast of Anatolia, south west of Lydian (Luwian) Sardis, priests from the lineage of the Branchidae received inspiration by drinking from a healing spring located in the temple. It was believed to have been founded by Branchus, son or lover of Apollo.
In Hierapolis Bambyce, Syria (modern Manbij), according to the treatise De Dea Syria, the sanctuary of the Syrian Goddess contained a robed and bearded image of Apollo. Divination was based on spontaneous movements of this image.
At Patara, in Lycia, there was a seasonal winter oracle of Apollo, said to have been the place where the god went from Delos. As at Delphi the oracle at Patara was a woman.
In Segesta in Sicily.
Oracles were also given by sons of Apollo.
In Oropus, north of Athens, the oracle Amphiaraus was said to be the son of Apollo; Oropus also had a sacred spring.
In Labadea, east of Delphi, Trophonius, another son of Apollo, killed his brother and fled to a cave, where he was afterwards also consulted as an oracle.
Temples of Apollo
Many temples were dedicated to Apollo in Greece and the Greek colonies. They show the spread of the cult of Apollo and the evolution of Greek architecture, which was mostly based on the rightness of form and on mathematical relations. Some of the earliest temples, especially in Crete, do not belong to any Greek order. It seems that the first peripteral temples were rectangular wooden structures. The different wooden elements were considered divine, and their forms were preserved in the marble or stone elements of the temples of Doric order. The Greeks used standard types because they believed that the world of objects was a series of typical forms which could be represented in several instances. The temples should be canonic, and the architects were trying to achieve this esthetic perfection. From the earliest times there were certain rules strictly observed in rectangular peripteral and prostyle buildings. The first buildings were built narrowly in order to hold the roof, and when the dimensions changed some mathematical relations became necessary in order to keep the original forms. This probably influenced the theory of numbers of Pythagoras, who believed that behind the appearance of things there was the permanent principle of mathematics.
The Doric order dominated during the 6th and the 5th century BC but there was a mathematical problem regarding the position of the triglyphs, which could not be solved without changing the original forms. The order was almost abandoned for the Ionic order, but the Ionic capital also posed an insoluble problem at the corner of a temple. Both orders were abandoned for the Corinthian order gradually during the Hellenistic age and under Rome.
The most important temples are:
Greek temples
Thebes, Greece: The oldest temple probably dedicated to Apollo Ismenius was built in the 9th century BC. It seems that it was a curvilinear building. The Doric temple was built in the early 7th century BC, but only some small parts have been found. A festival called Daphnephoria was celebrated every ninth year in honour of Apollo Ismenius (or Galaxius). The people held laurel branches (daphnai), and at the head of the procession walked a youth (chosen priest of Apollo), who was called "daphnephoros".
Eretria: According to the Homeric hymn to Apollo, the god arrived at the plain, seeking a location to establish his oracle. The first temple of Apollo Daphnephoros, "Apollo, laurel-bearer", or "carrying off Daphne", is dated to 800 BC. The temple was a curvilinear hecatompedon (a hundred feet long). In a smaller building were kept the bases of the laurel branches which were used for the first building. Another temple, probably peripteral, was built in the 7th century BC, with an inner row of wooden columns over its Geometric predecessor. It was rebuilt as a peripteral temple around 510 BC, with the stylobate measuring 21.00 x 43.00 m. The number of pteron columns was 6 x 14.
Dreros (Crete). The temple of Apollo Delphinios dates from the 7th century BC, or probably from the middle of the 8th century BC. According to the legend, Apollo appeared as a dolphin, and carried Cretan priests to the port of Delphi. The dimensions of the plan are 10.70 x 24.00 m and the building was not peripteral. It contains column-bases of the Minoan type, which may be considered as the predecessors of the Doric columns.
Gortyn (Crete). A temple of Pythian Apollo, was built in the 7th century BC. The plan measured 19.00 x 16.70 m and it was not peripteral. The walls were solid, made from limestone, and there was a single door on the east side.
Thermon (West Greece): The Doric temple of Apollo Thermios, was built in the middle of the 7th century BC. It was built on an older curvilinear building dating perhaps from the 10th century, on which a peristyle was added. The temple was narrow, and the number of pteron columns (probably wooden) was 5 x 15. There was a single row of inner columns. It measures 12.13 x 38.23 m at the stylobate, which was made from stones.
Corinth: A Doric temple was built in the 6th century BC. The temple's stylobate measures 21.36 x 53.30 m, and the number of pteron columns was 6 x 15. There was a double row of inner columns. The style is similar to the Temple of Alcmeonidae at Delphi. The Corinthians were considered to be the inventors of the Doric order.
Napes (Lesbos): An Aeolic temple probably of Apollo Napaios was built in the 7th century BC. Some special capitals with floral ornament have been found, which are called Aeolic, and it seems that they were borrowed from the East.
Cyrene, Libya: The oldest Doric temple of Apollo was built in . The number of pteron columns was 6 x 11, and it measures 16.75 x 30.05 m at the stylobate. There was a double row of sixteen inner columns on stylobates. The capitals were made from stone.
Naukratis: An Ionic temple was built in the early 6th century BC. Only some fragments have been found and the earlier ones, made from limestone, are identified among the oldest of the Ionic order.
Syracuse, Sicily: A Doric temple was built at the beginning of the 6th century BC. The temple's stylobate measures 21.47 x 55.36 m and the number of pteron columns was 6 x 17. It was the first temple in the Greek west built completely out of stone. A second row of columns was added, obtaining the effect of an inner porch.
Selinus (Sicily): The Doric Temple C dates from 550 BC, and it was probably dedicated to Apollo. The temple's stylobate measures 10.48 x 41.63 m and the number of pteron columns was 6 x 17. There was a portico with a second row of columns, which is also attested for the temple at Syracuse.
Delphi: The first temple dedicated to Apollo was built in the 7th century BC. According to the legend, it was wooden, made of laurel branches. The "Temple of Alcmeonidae" is the oldest Doric temple with significant marble elements. The temple's stylobate measures 21.65 x 58.00 m, and the number of pteron columns was 6 x 15. A festival similar to Apollo's festival at Thebes, Greece, was celebrated every ninth year. A boy was sent to the temple, who walked on the sacred road and returned carrying a laurel branch (daphnephoros). The maidens participated with joyful songs.
Chios: An Ionic temple of Apollo Phanaios was built at the end of the 6th century BC. Only some small parts have been found and the capitals had floral ornament.
Abae (Phocis). The temple was destroyed by the Persians in the invasion of Xerxes in 480 BC, and later by the Boeotians. It was rebuilt by Hadrian. The oracle was in use from early Mycenaean times to the Roman period, and shows the continuity of Mycenaean and Classical Greek religion.
Bassae (Peloponnesus): A temple dedicated to Apollo Epikourios ("Apollo the helper"), was built in 430 BC, designed by Iktinos. It combined Doric and Ionic elements, and the earliest use of a column with a Corinthian capital in the middle. The temple is of a relatively modest size, with the stylobate measuring 14.5 x 38.3 metres containing a Doric peristyle of 6 x 15 columns. The roof left a central space open to admit light and air.
Delos: A temple, probably dedicated to Apollo and not peripteral, was built in the late 7th century BC, with a plan measuring 10.00 x 15.60 m. The Doric Great Temple of Apollo was built later. The temple's stylobate measures 13.72 x 29.78 m, and the number of pteron columns was 6 x 13. Marble was extensively used.
Ambracia: A Doric peripteral temple dedicated to Apollo Pythios Sotir was built in 500 BC, at the centre of the Greek city (modern Arta). Only some parts have been found, and it seems that the temple was built on earlier sanctuaries dedicated to Apollo. The temple measures 20.75 x 44.00 m at the stylobate. The foundation which supported the statue of the god still exists.
Didyma (near Miletus): The gigantic Ionic temple of Apollo Didymaios started around 540 BC. The construction ceased and then it was restarted in 330 BC. The temple is dipteral, with an outer row of 10 x 21 columns, and it measures 28.90 x 80.75 m at the stylobate.
Clarus (near ancient Colophon): According to the legend, the famous seer Calchas, on his return from Troy, came to Clarus. He challenged the seer Mopsus and died when he lost. The Doric temple of Apollo Clarius was probably built in the 3rd century BC, and it was peripteral with 6 x 11 columns. It was reconstructed at the end of the Hellenistic period, and later by the emperor Hadrian, but Pausanias claims that it was still incomplete in the 2nd century AD.
Hamaxitus (Troad): In the Iliad, Chryses, the priest of Apollo, addresses the god with the epithet Smintheus (Lord of Mice), related to the god's ancient role as the bringer of disease (plague). Recent excavations indicate that the Hellenistic temple of Apollo Smintheus was constructed in 150–125 BC, but the symbol of the mouse god was used on coinage probably from the 4th century BC. The temple measures 40.00 x 23.00 m at the stylobate, and the number of pteron columns was 8 x 14.
Pythion (), this was the name of a shrine of Apollo at Athens near the Ilisos river. It was created by Peisistratos, and tripods were placed there by those who had won in the cyclic chorus at the Thargelia.
Setae (Lydia): The temple of Apollo Aksyros located in the city.
Apollonia Pontica: There were two temples of Apollo Healer in the city. One from the Late Archaic period and the other from the Early Classical period.
Ikaros island in the Persian Gulf (modern Failaka Island): There was a temple of Apollo on the island.
Argos in Cyprus: there was a temple of Apollo Erithios (Ἐριθίου Ἀπόλλωνος ἱερῷ).
The temple and oracle of Apollo at Eutresis.
An altar of Apollo Acritas stood at Lacedaemon. In addition, the Maleatian Apollo was set up above a sanctuary of Earth surnamed Gasepton in Lacedaemon.
Etruscan and Roman temples
Veii (Etruria): The temple of Apollo was built in the late 6th century BC, indicating the spread of Apollo's cult (as Aplu) in Etruria. There was a prostyle porch, which is called Tuscan, and a triple cella 18.50 m wide.
Falerii Veteres (Etruria): A temple of Apollo was built probably in the 4th–3rd century BC. Parts of a terracotta capital, and a terracotta base have been found. It seems that the Etruscan columns were derived from the archaic Doric. A cult of Apollo Soranus is attested by one inscription found near Falerii.
Pompeii (Italy): The cult of Apollo was widespread in the region of Campania since the 6th century BC. The temple was built in 120 BC, but its beginnings lie in the 6th century BC. It was reconstructed after an earthquake in AD 63. It demonstrates a mixing of styles which formed the basis of Roman architecture. The columns in front of the cella formed a Tuscan prostyle porch, and the cella is situated unusually far back. The peripteral colonnade of 48 Ionic columns was placed in such a way that the emphasis was given to the front side.
Rome: The temple of Apollo Sosianus and the temple of Apollo Medicus. The first temple building dates to 431 BC, and was dedicated to Apollo Medicus (the doctor), after a plague of 433 BC. It was rebuilt by Gaius Sosius, probably in 34 BC. Only three columns with Corinthian capitals exist today. It seems that the cult of Apollo had existed in this area since at least the mid-5th century BC.
Rome: The temple of Apollo Palatinus was located on the Palatine hill within the sacred boundary of the city. It was dedicated by Augustus in 28 BC. The façade of the original temple was Ionic and it was constructed from solid blocks of marble. Many famous statues by Greek masters were on display in and around the temple, including a marble statue of the god at the entrance and a statue of Apollo in the cella.
Melite (modern Mdina, Malta): A Temple of Apollo was built in the city in the 2nd century AD. Its remains were discovered in the 18th century, and many of its architectural fragments were dispersed among private collections or reworked into new sculptures. Parts of the temple's podium were rediscovered in 2002.
Mythology
In the myths, Apollo is the son of Zeus, the king of the gods, and Leto, his previous wife or one of his mistresses. Apollo often appears in the myths, plays and hymns either directly or indirectly through his oracles. As Zeus' favorite son, he had direct access to the mind of Zeus and was willing to reveal this knowledge to humans. A divinity beyond human comprehension, he appears both as a beneficial and a wrathful god.
Birth
Homeric Hymn to Apollo
Pregnant with the offspring of Zeus, Leto wandered through many lands wanting to give birth to Apollo. However, all the lands rejected her out of fear. Upon reaching Delos, Leto requested the island to shelter her, promising that in return her son would bring fame and prosperity to the island. Delos then revealed to Leto that Apollo was rumoured to be the god who would "greatly lord it among gods and men all over the fruitful earth". For this reason, all the lands were fearful, and Delos feared that Apollo would cast her aside once he was born. Hearing this, Leto swore on the river Styx that if she were allowed to give birth on the island, her son would honour Delos above all other lands. Assured by this, Delos agreed to assist Leto. All goddesses except Hera also came to aid Leto.
However, Hera had tricked Eileithyia, the goddess of childbirth, into staying on Olympus, so Leto was unable to give birth. The goddesses then convinced Iris to go and bring Eileithyia, offering her a necklace of amber 9 yards (8.2 m) long. Iris did so and persuaded Eileithyia to step onto the island. Thus, clutching a palm tree, Leto finally gave birth after labouring for nine days and nine nights, with Apollo "leaping forth" from his mother's womb. The goddesses washed the newborn, covered him in a white garment and fastened golden bands around him. As Leto was unable to feed him, Themis, the goddess of divine law, fed him nectar and ambrosia. Upon tasting the divine food, the child broke free of the bands fastened onto him and declared that he would be the master of the lyre and of archery, and would interpret the will of Zeus to humankind. He then started to walk, which caused the island to be filled with gold.
Callimachus' Hymn to Delos
The island Delos used to be Asteria, a goddess who jumped into the waters to escape the advances of Zeus and became a free-floating island of the same name. When Leto got pregnant, Hera was told that Leto's son would become more dear to Zeus than Ares. Enraged by this, Hera watched over the heavens and sent out Ares and Iris to prevent Leto from giving birth on the earth. Ares, stationed over the mainland, and Iris, over the islands, threatened all the lands and prevented them from helping Leto.
When Leto arrived at Thebes, fetal Apollo prophesied from his mother's womb that in the future he would punish a slanderous woman in Thebes (Niobe), so he did not want to be born there. Leto then went to Thessaly and sought the help of the river nymphs who were the daughters of the river Peneus. Though he was initially fearful and reluctant, Peneus later decided to let Leto give birth in his waters. He did not change his mind even when Ares produced a terrifying sound and threatened to hurl mountain peaks into the river. But Leto herself declined his help and departed, as she did not want him to suffer for her sake.
After being turned away from various lands, Apollo spoke again from the womb, asking his mother to look at the floating island in front of her and expressing his wish to be born there. When Leto approached Asteria, all the other islands fled. But Asteria welcomed Leto without any fear of Hera. Walking on the island, Leto sat down against a palm tree and asked Apollo to be born. During the childbirth, swans circled the island seven times, a sign that Apollo would later play the seven-stringed lyre. When Apollo finally "leapt forth" from his mother's womb, the nymphs of the island sang a hymn to Eileithyia that was heard up to the heavens. The moment Apollo was born, the entire island, including the trees and the waters, became gold. Asteria bathed the newborn, swaddled him and fed him with her breast milk. The island became rooted and was later called Delos.
Hera was no longer angry, as Zeus had managed to calm her down; and she held no grudge against Asteria, since Asteria had rejected Zeus in the past.
Pindar
Pindar is the earliest source who explicitly calls Apollo and Artemis twins. Here, Asteria is also stated to be Leto's sister. Wanting to escape Zeus' advances, she flung herself into the sea and became a floating rock called Ortygia until the twins were born. When Leto stepped on the rock, four pillars with adamantine bases rose from the earth and held up the rock. When Apollo and Artemis were born, their bodies shone radiantly, and a chant was sung by Eileithyia and Lachesis, one of the three Moirai.
Hyginus
Scorning the advances of Zeus, Asteria transformed herself into a bird and jumped into the sea. From her, an island rose, which was called Ortygia. When Hera discovered that Leto was pregnant with Zeus' child, she decreed that Leto could give birth only in a place where the sun did not shine. During this time, the monster Python also started hounding Leto with the intent of killing her, because he had foreseen that his death would come at the hands of Leto's offspring. However, on Zeus' orders, Boreas carried Leto away and entrusted her to Poseidon. To protect her, Poseidon took her to the island Ortygia and covered it with waves so that the sun would not shine on it. Leto gave birth clinging to an olive tree, and henceforth the island was called Delos.
Other variations
Aside from those mentioned above, more variations on the story of Apollo's birth include:
Aelian states that it took Leto twelve days and twelve nights to travel from Hyperborea to Delos. Leto changed herself into a she-wolf before giving birth. This is given as the reason why Homer describes Apollo as the "wolf-born god".
Libanius wrote that neither land nor visible islands would receive Leto, but by the will of Zeus Delos then became visible, and thus received Leto and the children.
According to Strabo, the Curetes helped Leto by creating loud noises with their weapons; by thus frightening Hera, they concealed Leto's childbirth.
Theognis wrote that the island was filled with ambrosial fragrance when Apollo was born, and the Earth laughed with joy.
In some versions, Artemis was born first and subsequently assisted with the birth of Apollo.
While in some accounts Apollo's birth itself fixed the floating Delos to the earth, there are accounts of Apollo securing Delos to the bottom of the ocean a little while later.
This island became sacred to Apollo and was one of the major cult centres of the god.
Apollo was born on the seventh day (ἑβδομαγενής, hebdomagenes) of the month Thargelion—according to Delian tradition—or of the month Bysios—according to Delphian tradition. The seventh and twentieth, the days of the new and full moon, were ever afterwards held sacred to him.
Hyperborea
Hyperborea, the mystical land of eternal spring, venerated Apollo above all the gods. The Hyperboreans always sang and danced in his honor and hosted Pythian games. There, a vast forest of beautiful trees was called "the garden of Apollo". Apollo spent the winter months among the Hyperboreans, leaving his shrine in Delphi under the care of Dionysus. His absence from the world caused coldness and this was marked as his annual death. No prophecies were issued during this time. He returned to the world during the beginning of the spring. The Theophania festival was held in Delphi to celebrate his return.
However, Diodorus Siculus states that Apollo visited Hyperborea every nineteen years. This nineteen-year period was called by the Greeks the "year of Meton", the period in which the stars return to their initial positions. Visiting Hyperborea at that time, Apollo played on the cithara and danced continuously from the vernal equinox until the rising of the Pleiades.
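The "year of Meton" corresponds to the Metonic cycle, in which 19 solar years contain almost exactly 235 lunar (synodic) months, so that the Sun and Moon return to nearly the same relative positions. With standard modern values, given here for illustration:

    19 tropical years  ≈ 19 × 365.2422 days ≈ 6,939.60 days
    235 synodic months ≈ 235 × 29.5306 days ≈ 6,939.69 days

The two periods agree to within about two hours, which is why the cycle marks the heavens "returning to their initial positions".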
Hyperborea was also Leto's birthplace. It is said that Leto came to Delos from Hyperborea accompanied by a pack of wolves. Henceforth, Hyperborea became Apollo's winter home and wolves became sacred to him. His intimate connection to wolves is evident from his epithet Lyceus, meaning wolf-like. But Apollo was also the wolf-slayer in his role as the god who protected flocks from predators. The Hyperborean worship of Apollo bears the strongest marks of Apollo being worshipped as the sun god. Shamanistic elements in Apollo's cult are often linked to his Hyperborean origin, and he is likewise speculated to have originated as a solar shaman. Shamans like Abaris and Aristeas, who hailed from Hyperborea, were also followers of Apollo.
In myths, the tears of amber Apollo shed when his son Asclepius died mixed with the waters of the river Eridanos, which surrounded Hyperborea. Apollo also buried in Hyperborea the arrow which he had used to kill the Cyclopes. He later gave this arrow to Abaris.
Childhood and youth
Growing up, Apollo was nursed by the nymphs Korythalia and Aletheia, the personification of truth. Phoebe, his grandmother, gave the oracular shrine of Delphi to Apollo as a birthday gift.
As a four-year-old child, Apollo built a foundation and an altar on Delos using the horns of the goats that his sister Artemis hunted. Since he learnt the art of building when young, he came to be known as Archegetes (the founder of towns) and guided men in building new cities. To keep the child amused, the Delian nymphs ran around the altar beating it, and then, with their hands tied behind their backs, bit an olive branch. It later became a custom for all sailors who passed the island to do the same.
From his father Zeus, Apollo received a golden headband and a chariot driven by swans.
In his early years when Apollo spent his time herding cows, he was reared by the Thriae, who trained him and enhanced his prophetic skills. The god Pan was also said to have mentored him in the prophetic art. Apollo is also said to have invented the lyre, and along with Artemis, the art of archery. He then taught the humans the art of healing and archery.
Lycian peasants
Soon after giving birth to her twins, Leto fled from Delos fearing Hera. By the time she reached Lycia, her infants had drained all of her milk and cried for more to satisfy their hunger. The exhausted mother tried to drink from a nearby lake but was stopped by some Lycian peasants. When she begged them to let her quench her thirst, the haughty peasants not only threatened her but also stirred the mud in the lake to dirty the waters. Angered by this, Leto turned them into frogs.
In a slightly varied version, Leto took her infants and crossed over to Lycia where she attempted to bathe her children in a spring she found there. But the local herdsmen drove her away. After that, some wolves found Leto and guided her to the river Xanthos, where Leto was able to bathe her children and quench her thirst. She then returned to the spring and turned the herdsmen into frogs.
Slaying of Python
Python, a chthonic serpent-dragon, was a child of Gaia and the guardian of the Delphic Oracle.
In Callimachus' hymn to Delos, the yet unborn Apollo foresees the death of Python at his hands.
In the Homeric hymn to Apollo, Python was a female drakon and the nurse of the giant Typhon, whom Hera had created to overthrow Zeus. She was described as a terrifying monster and a "bloody plague". Apollo, in his pursuit to establish his worship, came across Python and killed her with a single arrow shot from his bow. He let the corpse rot under the sun and declared himself the oracular deity of Delphi. Other authors have Apollo kill the monster with a hundred or a thousand arrows.
According to Euripides, Leto had brought her twins to the cliffs of Parnassus shortly after giving birth to them. Upon seeing the monster there, Apollo, still a child being carried in his mother's arms, leapt forth and killed Python. Some authors also mention that Python was killed for displaying lustful affections towards Leto.
In another account, Python chased the pregnant Leto intending to kill her, because his death was fated to come at the hands of Leto's child. However, he had to stop the chase when Leto came under the protection of Poseidon. After his birth, the four-day-old Apollo killed the serpent with the bow and arrows gifted to him by Hephaestus, avenging the trouble given to his mother. The god then put the bones of the slain monster in a cauldron and deposited it in his temple.
This legend is also narrated as the origin of the cry "Hië paian". According to Athenaeus, Python attacked Leto and her twins during their visit to Delphi. Taking Artemis into her arms, Leto climbed upon a rock and cried to Apollo to shoot the monster. Her cry, "ιε, παῖ" ("Shoot, boy"), was later slightly altered into "ἰὴ παιών" (Hië paian), an exclamation to avert evils. Callimachus attributes the origin of this phrase to the Delphians, who let out the cry to encourage Apollo when the young god battled Python.
Strabo has recorded a slightly different version where Python was actually a cruel and lawless man who was also known by the name "Drakon". When Apollo was teaching the humans to cultivate fruits and civilise themselves, the residents of Parnassus complained to the god about Python. In response to their pleas, Apollo killed the man with his arrows. During the fight, the Parnassians shouted "Hië paian" to encourage the god.
Establishment of worship in Delphi
Continuing from his victory over Python, the Homeric hymn describes how the young god established his worship among the humans. As Apollo was pondering what kind of men he should recruit to serve him, he spotted a ship full of Cretan merchants or pirates. He took the form of a dolphin and sprang aboard the ship. Whenever the oblivious crew members tried to throw the dolphin overboard, the god shook the ship until the crew was awed into submission. Apollo then created a breeze that directed the ship to Delphi. Upon reaching the land, he revealed himself as a god and initiated them as his priests. He instructed them to guard his temple and always keep righteousness in their hearts.
Alcaeus narrates the following account: Zeus, who had adorned his newborn son with a golden headband, also provided him with a chariot driven by swans and instructed Apollo to visit Delphi to establish his laws among the people. But Apollo disobeyed his father and went to the land of Hyperborea. The Delphians continuously sang paeans in his honour and pleaded with him to come back to them. The god returned only after a year and then carried out Zeus' orders.
In other variations, the shrine at Delphi was simply handed over to Apollo by his grandmother Phoebe as a gift, or Themis herself inspired him to be the oracular voice of Delphi.
However, in many other accounts, Apollo had to overcome certain obstacles before he was able to establish himself at Delphi. Gaea came into conflict with Apollo for killing Python and claiming the Delphic oracle for himself. According to Pindar, she sought to banish Apollo to Tartarus as a punishment. According to Euripides, soon after Apollo took ownership of the oracle, Gaea started sending prophetic dreams to humans. As a result, people stopped visiting Delphi to obtain prophecies. Troubled by this, Apollo went to Olympus and appealed to Zeus. Zeus, admiring the ambition of his young son, granted his request by putting an end to the dream visions. This sealed Apollo's role as the oracular deity of Delphi.
Since Apollo had committed a blood crime, he also had to be purified. Pausanias has recorded two of the many variations of this purification. In one of them, both Apollo and Artemis fled to Sicyon and were purified there. In the other tradition that had been prevalent among the Cretans, Apollo alone travelled to Crete and was purified by Carmanor. In another account, the Argive king Crotopus was the one who performed the purification rites on Apollo alone.
According to Aristonous and Aelian, Apollo was purified by the will of Zeus in the Vale of Tempe. Aristonous continues the tale, saying that Apollo was escorted back to Delphi by Athena. As a token of gratitude, he later built a temple for Athena at Delphi, which served as a threshold for his own temple. Upon reaching Delphi, Apollo convinced Gaea and Themis to hand over the seat of the oracle to him. To celebrate this event, the other immortals graced Apollo with gifts – Poseidon gave him the land of Delphi, the Delphian nymphs gifted him the Corycian cave, and Artemis set her dogs to patrol and safeguard the land.
Some others have also said that Apollo was exiled and subjected to servitude under king Admetus as punishment for the murder he had committed. It was while he was serving as a cowherd under Admetus that Hermes stole his cattle. The servitude was said to have lasted for either one year, or one great year (a cycle of eight years), or nine years.
Plutarch, however, has mentioned a variation where Apollo was neither purified in Tempe nor banished to Earth as a servant for nine years, but was driven out to another world for nine great years. The god who returned was cleansed and purified, thus becoming a "true Phoebus – that is to say, clear and bright". He then took over the Delphic oracle, which had been under the care of Themis in his absence. Henceforth, Apollo became the god who cleansed himself from the sin of murder, made men aware of their guilt and purified them.
The Pythian games were also established by Apollo, either as funeral games to honor Python or to celebrate his own victory. The Pythia was Apollo's high priestess and his mouthpiece through whom he gave prophecies.
Tityus
Tityus was another giant who tried to rape Leto, either of his own accord when she was on her way to Delphi or at the order of Hera. Leto called upon her children, who instantly slew the giant. Apollo, still a young boy, shot him with his arrows. In some accounts, Artemis also joined him in protecting their mother by attacking Tityus with her arrows. For this act, Tityus was banished to Tartarus, where he was pegged to the rock floor and stretched out while a pair of vultures feasted daily on his liver or his heart.
Another account recorded by Strabo says that Tityus was not a giant but a lawless man whom Apollo killed at the request of the residents.
Admetus
Admetus was the king of Pherae, who was known for his hospitality. When Apollo was exiled from Olympus for killing Python, he served as a herdsman under Admetus, who was then young and unmarried. Apollo is said to have shared a romantic relationship with Admetus during his stay. After completing his years of servitude, Apollo went back to Olympus as a god.
Because Admetus had treated Apollo well, the god conferred great benefits on him in return. Apollo's mere presence is said to have made the cattle give birth to twins. Apollo helped Admetus win the hand of Alcestis, the daughter of King Pelias, by taming a lion and a boar to draw Admetus' chariot. He was present during their wedding to give his blessings. When Admetus angered the goddess Artemis by forgetting to give her the due offerings, Apollo came to the rescue and calmed his sister. When Apollo learnt that Admetus' untimely death was approaching, he convinced or tricked the Fates into letting Admetus live past his time.
According to another version, or perhaps some years later, when Zeus struck down Apollo's son Asclepius with a lightning bolt for resurrecting the dead, Apollo in revenge killed the Cyclopes, who had fashioned the bolt for Zeus. Apollo would have been banished to Tartarus for this, but his mother Leto intervened, and reminding Zeus of their old love, pleaded with him not to kill their son. Zeus obliged and sentenced Apollo to one year of hard labor once again under Admetus.
The love between Apollo and Admetus was a favored topic of Roman writers such as Ovid and Servius.
Niobe
The fate of Niobe was prophesied by Apollo while he was still in Leto's womb. Niobe was the queen of Thebes and wife of Amphion. She displayed hubris when she boasted that she was superior to Leto because she had fourteen children (Niobids), seven male and seven female, while Leto had only two. She further mocked Apollo's effeminate appearance and Artemis' manly appearance. Leto, insulted by this, told her children to punish Niobe. Accordingly, Apollo killed Niobe's sons, and Artemis her daughters. According to some versions of the myth, among the Niobids, Chloris and her brother Amyclas were not killed because they prayed to Leto. Amphion, at the sight of his dead sons, either killed himself or was killed by Apollo after swearing revenge.
A devastated Niobe fled to Mount Sipylos in Asia Minor and turned into stone as she wept. Her tears formed the river Achelous. Zeus had turned all the people of Thebes to stone and so no one buried the Niobids until the ninth day after their death, when the gods themselves entombed them.
When Chloris married and had children, Apollo granted her son Nestor the years he had taken away from the Niobids. Hence, Nestor was able to live for three generations.
Building the walls of Troy
Once Apollo and Poseidon served under the Trojan king Laomedon in accordance with Zeus' words. Apollodorus states that the gods willingly went to the king disguised as humans in order to check his hubris. Apollo guarded the cattle of Laomedon in the valleys of Mount Ida, while Poseidon built the walls of Troy. Other versions make both Apollo and Poseidon the builders of the wall. In Ovid's account, Apollo completes his task by playing his tunes on his lyre.
In Pindar's odes, the gods took a mortal named Aeacus as their assistant. When the work was completed, three snakes rushed against the wall; the two that attacked the sections built by the gods fell down dead, but the third forced its way into the city through the portion built by Aeacus. Apollo immediately prophesied that Troy would fall at the hands of Aeacus's descendants, the Aeacidae: his son Telamon joined Heracles when he besieged the city during Laomedon's rule, and later his great-grandson Neoptolemus was present in the wooden horse that led to the downfall of Troy.
However, the king not only refused to give the gods the wages he had promised, but also threatened to bind their feet and hands, and sell them as slaves. Angered by the unpaid labour and the insults, Apollo infected the city with a pestilence and Poseidon sent the sea monster Cetus. To deliver the city from it, Laomedon had to sacrifice his daughter Hesione (who would later be saved by Heracles).
During his stay in Troy, Apollo had a lover named Ourea, who was a nymph and daughter of Poseidon. Together they had a son named Ileus, whom Apollo loved dearly.
Trojan War
Apollo sided with the Trojans in the war the Greeks waged against Troy.
During the war, the Greek king Agamemnon captured Chryseis, the daughter of Apollo's priest Chryses, and refused to return her. Angered by this, Apollo shot arrows infected with the plague into the Greek encampment. He demanded that they return the girl, and the Achaeans (Greeks) complied, indirectly causing the anger of Achilles, which is the theme of the Iliad.
Receiving the aegis from Zeus, Apollo entered the battlefield as per his father's command, causing great terror to the enemy with his war cry. He pushed the Greeks back and destroyed many of the soldiers. He is described as "the rouser of armies" because he rallied the Trojan army when they were falling apart.
When Zeus allowed the other gods to get involved in the war, Apollo was provoked by Poseidon to a duel. However, Apollo declined to fight him, saying that he would not fight his uncle for the sake of mortals.
When the Greek hero Diomedes injured the Trojan hero Aeneas, Aphrodite tried to rescue him, but Diomedes injured her as well. Apollo then enveloped Aeneas in a cloud to protect him. He repelled the attacks Diomedes made on him and gave the hero a stern warning to abstain from attacking a god. Aeneas was then taken to Pergamos, a sacred spot in Troy, where he was healed.
After the death of Sarpedon, a son of Zeus, Apollo rescued the corpse from the battlefield as per his father's wish and cleaned it. He then gave it to Sleep (Hypnos) and Death (Thanatos). Apollo had also once convinced Athena to stop the war for that day, so that the warriors could rest for a while.
The Trojan hero Hector (who, according to some, was the god's own son by Hecuba) was favored by Apollo. When Hector was severely injured, Apollo healed him and encouraged him to take up his arms. During a duel with Achilles, when Hector was about to lose, Apollo hid Hector in a cloud of mist to save him. When the Greek warrior Patroclus tried to get into the fort of Troy, he was stopped by Apollo. Encouraging Hector to attack Patroclus, Apollo stripped the armour of the Greek warrior and broke his weapons. Patroclus was eventually killed by Hector. At last, after Hector's fated death, Apollo protected his corpse from Achilles' attempt to mutilate it by creating a magical cloud over the body, shielding it from the rays of the sun.
Apollo held a grudge against Achilles throughout the war because Achilles had murdered his son Tenes before the war began and brutally killed his son Troilus in his own temple. Not only did Apollo save Hector from Achilles, he also tricked Achilles by disguising himself as a Trojan warrior and drawing him away from the gates.
Finally, Apollo caused Achilles' death by guiding an arrow shot by Paris into Achilles' heel. In some versions, Apollo himself killed Achilles by taking the disguise of Paris.
Apollo helped many Trojan warriors—including Agenor, Polydamas, and Glaucus—in the battlefield. Though he greatly favored the Trojans, Apollo was bound to follow the orders of Zeus and served his father loyally during the war.
Nurturer of the young
Apollo Kourotrophos is the god who nurtures and protects children and the young, especially boys. He oversees their education and their passage into adulthood. Education is said to have originated from Apollo and the Muses. Many myths have him train his children. It was a custom for boys to cut and dedicate their long hair to Apollo after reaching adulthood.
Chiron, the abandoned centaur, was fostered by Apollo, who instructed him in medicine, prophecy, archery and more. Chiron would later become a great teacher himself.
Asclepius in his childhood gained much knowledge pertaining to medicinal arts from his father. However, he was later entrusted to Chiron for further education.
Anius, Apollo's son by Rhoeo, was abandoned by his mother soon after his birth. Apollo brought him up and educated him in mantic arts. Anius later became the priest of Apollo and the king of Delos.
Iamus was the son of Apollo and Evadne. When Evadne went into labour, Apollo sent the Moirai to assist his lover. After the child was born, Apollo sent snakes to feed the child some honey. When Iamus reached the age of education, Apollo took him to Olympia and taught him many arts, including the ability to understand and explain the languages of birds.
Idmon was educated by Apollo to be a seer. Even though he foresaw his death that would happen in his journey with the Argonauts, he embraced his destiny and died a brave death. To commemorate his son's bravery, Apollo commanded Boeotians to build a town around the tomb of the hero, and to honor him.
Apollo adopted Carnus, the abandoned son of Zeus and Europa. He reared the child with the help of his mother Leto and educated him to be a seer.
When his son Melaneus reached the age of marriage, Apollo asked the princess Stratonice to be his son's bride and carried her away from her home when she agreed.
Apollo saved a shepherd boy (name unknown) from death in a large deep cave, by means of vultures. To thank him, the shepherd built Apollo a temple under the name Vulturius.
God of music
Immediately after his birth, Apollo demanded a lyre and invented the paean, thus becoming the god of music. As the divine singer, he is the patron of poets, singers and musicians. The invention of string music is attributed to him. Plato said that the innate ability of humans to take delight in music, rhythm and harmony is the gift of Apollo and the Muses. According to Socrates, ancient Greeks believed that Apollo is the god who directs the harmony and makes all things move together, both for the gods and the humans. For this reason, he was called Homopolon before the Homo was replaced by A, yielding the name Apollon. Apollo's harmonious music delivered people from their pain, and hence, like Dionysus, he is also called the liberator. The swans, which were considered to be the most musical among the birds, were believed to be the "singers of Apollo". They are Apollo's sacred birds and acted as his vehicle during his travel to Hyperborea. Aelian says that when singers would sing hymns to Apollo, the swans would join the chant in unison.
Among the Pythagoreans, the study of mathematics and music were connected to the worship of Apollo, their principal deity. Their belief was that music purifies the soul, just as medicine purifies the body. They also believed that music was subject to the same mathematical laws of harmony as the mechanics of the cosmos, an idea that evolved into what is known as the music of the spheres.
Apollo appears as the companion of the Muses, and as Musagetes ("leader of Muses") he leads them in dance. They spend their time on Parnassus, which is one of their sacred places. Apollo is also the lover of the Muses and by them he became the father of famous musicians like Orpheus and Linus.
Apollo is often found delighting the immortal gods with his songs and music on the lyre. In his role as the god of banquets, he was always present to play music at the weddings of the gods, such as the marriages of Eros and Psyche, and of Peleus and Thetis. He is a frequent guest of the Bacchanalia, and many ancient ceramics depict him at ease amidst the maenads and satyrs. Apollo also participated in musical contests when challenged by others. He was the victor in all those contests, but he tended to punish his opponents severely for their hubris.
Apollo's lyre
The invention of the lyre is attributed either to Hermes or to Apollo himself. A distinction is sometimes made: the lyre Hermes invented was made of a tortoise shell, whereas the one Apollo invented was a regular lyre.
Myths tell that the infant Hermes stole a number of Apollo's cows and took them to a cave in the woods near Pylos, covering their tracks. In the cave, he found a tortoise and killed it, then removed the insides. He used one of the cow's intestines and the tortoise shell and made his lyre.
Upon discovering the theft, Apollo confronted Hermes and asked him to return his cattle. When Hermes acted innocent, Apollo took the matter to Zeus. Zeus, who had seen the events, sided with Apollo and ordered Hermes to return the cattle. Hermes then began to play music on the lyre he had invented. Apollo fell in love with the instrument and offered to exchange the cattle for it. Apollo thus became the master of the lyre.
According to other versions, Apollo had invented the lyre himself, and tore its strings in repentance for the excessive punishment he had given to Marsyas. Hermes' lyre, therefore, would be a reinvention.
Contest with Pan
Once Pan had the audacity to compare his music with that of Apollo and to challenge the god of music to a contest. The mountain-god Tmolus was chosen to umpire. Pan blew on his pipes, and with his rustic melody gave great satisfaction to himself and his faithful follower Midas, who happened to be present. Then Apollo struck the strings of his lyre. His music was so beautiful that Tmolus at once awarded the victory to Apollo, and everyone was pleased with the judgement. Only Midas dissented and questioned the justice of the award. Apollo did not want to suffer such a depraved pair of ears any longer and turned them into the ears of a donkey.
Contest with Marsyas
Marsyas was a satyr who was punished by Apollo for his hubris. He had found an aulos on the ground, tossed away after being invented by Athena because it made her cheeks puffy. Athena had also placed a curse upon the instrument, that whoever would pick it up would be severely punished. When Marsyas played the flute, everyone became frenzied with joy. This led Marsyas to think that he was better than Apollo, and he challenged the god to a musical contest. The contest was judged by the Muses, or the nymphs of Nysa. Athena was also present to witness the contest.
Marsyas taunted Apollo for "wearing his hair long, for having a fair face and smooth body, for his skill in so many arts". The Muses and Athena sniggered at his comments. The contestants agreed to take turns displaying their skills, and the rule was that the victor could "do whatever he wanted" to the loser.
According to one account, after the first round, they both were deemed equal by the Nysiads. But in the next round, Apollo decided to play on his lyre and add his melodious voice to his performance. Marsyas argued against this, saying that Apollo would have an advantage and accused Apollo of cheating. But Apollo replied that since Marsyas played the flute, which needed air blown from the throat, it was similar to singing, and that either they both should get an equal chance to combine their skills or none of them should use their mouths at all. The nymphs decided that Apollo's argument was just. Apollo then played his lyre and sang at the same time, mesmerising the audience. Marsyas could not do this. Apollo was declared the winner and, angered with Marsyas' haughtiness and his accusations, decided to flay the satyr.
According to another account, Marsyas played his flute out of tune at one point and accepted his defeat. Out of shame, he assigned to himself the punishment of being skinned for a wine sack. Another variation is that Apollo played his instrument upside down. Marsyas could not do this with his instrument. So the Muses who were the judges declared Apollo the winner. Apollo hung Marsyas from a tree to flay him.
Apollo flayed Marsyas alive in a cave near Celaenae in Phrygia for his hubris in challenging a god. He then gave the rest of his body for proper burial and nailed Marsyas' flayed skin to a nearby pine tree as a lesson to others. Marsyas' blood turned into the river Marsyas. But Apollo soon repented: distressed at what he had done, he tore the strings of his lyre and threw it away. The lyre was later discovered by the Muses and Apollo's sons Linus and Orpheus. The Muses fixed the middle string, Linus the string struck with the forefinger, and Orpheus the lowest string and the one next to it. They took it back to Apollo, but the god, who had decided to stay away from music for a while, laid away both the lyre and the pipes at Delphi and joined Cybele in her wanderings as far as Hyperborea.
Contest with Cinyras
Cinyras was a ruler of Cyprus and a friend of Agamemnon. He promised to assist Agamemnon in the Trojan War but did not keep his promise. Agamemnon cursed Cinyras and invoked Apollo, asking the god to avenge the broken promise. Apollo then had a lyre-playing contest with Cinyras and defeated him. Cinyras either committed suicide when he lost or was killed by Apollo.
Patron of sailors
Apollo functions as the patron and protector of sailors, one of the duties he shares with Poseidon. In the myths, he is seen helping heroes who pray to him for a safe journey.
When Apollo spotted a ship of Cretan sailors that were caught in a storm, he quickly assumed the shape of a dolphin and guided their ship safely to Delphi.
When the Argonauts faced a terrible storm, Jason prayed to his patron, Apollo, to help them. Apollo used his bow and golden arrow to shed light upon an island, where the Argonauts soon took shelter. This island was renamed "Anaphe", which means "He revealed it".
Apollo helped the Greek hero Diomedes to escape from a great tempest during his journey homeward. As a token of gratitude, Diomedes built a temple in honor of Apollo under the epithet Epibaterius ("the embarker").
During the Trojan War, Odysseus came to the Trojan camp to return Chryseis, the daughter of Apollo's priest Chryses, and brought many offerings to Apollo. Pleased with this, Apollo sent gentle breezes that helped Odysseus return safely to the Greek camp.
Arion was a poet who was kidnapped by some sailors for the rich prizes he possessed. Arion requested that they let him sing one last time, and the sailors consented. Arion began singing a song in praise of Apollo, seeking the god's help. Consequently, numerous dolphins surrounded the ship, and when Arion jumped into the water, the dolphins carried him away safely.
Wars
Trojan War
Apollo played a pivotal role in the entire Trojan War. He sided with the Trojans, and sent a terrible plague to the Greek camp, which indirectly led to the conflict between Achilles and Agamemnon. He killed the Greek heroes Patroclus, Achilles, and numerous Greek soldiers. He also helped many Trojan heroes, the most important one being Hector. After the end of the war, Apollo and Poseidon together cleaned the remains of the city and the camps.
Telegony war
A war broke out between the Brygoi and the Thesprotians, who had the support of Odysseus. The gods Athena and Ares came to the battlefield and took sides. Athena helped the hero Odysseus while Ares fought alongside the Brygoi. When Odysseus lost, Athena and Ares came into a direct duel. To end the terror created by the battling gods, Apollo intervened and stopped the duel between them.
Indian war
When Zeus suggested that Dionysus defeat the Indians in order to earn a place among the gods, Dionysus declared war against the Indians and travelled to India along with his army of Bacchantes and satyrs. Among the warriors was Aristaeus, Apollo's son. Apollo armed his son with his own hands and gave him a bow and arrows and fitted a strong shield to his arm. After Zeus urged Apollo to join the war, he went to the battlefield. Seeing several of his nymphs and Aristaeus drowning in a river, he took them to safety and healed them. He taught Aristaeus more useful healing arts and sent him back to help the army of Dionysus.
Theban war
During the war between the sons of Oedipus, Apollo favored Amphiaraus, a seer and one of the leaders in the war. Though saddened that the seer was fated to be doomed in the war, Apollo made Amphiaraus' last hours glorious by "lighting his shield and his helm with starry gleam". When Hypseus tried to kill the hero with a spear, Apollo directed the spear towards the charioteer of Amphiaraus instead. Then Apollo himself replaced the charioteer and took the reins in his hands. He deflected many spears and arrows away from them. He also killed many of the enemy warriors like Melaneus, Antiphus, Aetion, Polites and Lampus. At last, when the moment of departure came, Apollo expressed his grief with tears in his eyes and bid farewell to Amphiaraus, who was soon engulfed by the Earth.
Slaying of giants
Apollo killed the giants Python and Tityos, who had assaulted his mother Leto.
Gigantomachy
During the gigantomachy, Apollo and Heracles blinded the giant Ephialtes by shooting him in his eyes, Apollo shooting his left and Heracles his right. He also killed Porphyrion, the king of giants, using his bow and arrows.
Aloadae
The Aloadae, namely Otis and Ephialtes, were twin giants who decided to wage war upon the gods. They attempted to storm Mt. Olympus by piling up mountains, and threatened to fill the sea with mountains and inundate dry land. They even dared to seek the hand of Hera and Artemis in marriage. Angered by this, Apollo killed them by shooting them with arrows. According to another tale, Apollo killed them by sending a deer between them; as they tried to kill it with their javelins, they accidentally stabbed each other and died.
Phorbas
Phorbas was a savage giant king of the Phlegyans, described as having swine-like features. He wished to plunder Delphi for its wealth. He seized the roads to Delphi and started harassing the pilgrims. He captured the old people and children and sent them to his army to hold for ransom. He challenged the young and sturdy men to boxing matches, only to cut off their heads when they were defeated, and hung the severed heads on an oak tree. Finally, Apollo came to put an end to this cruelty. He entered a boxing contest with Phorbas and killed him with a single blow.
Other stories
In the first Olympic games, Apollo defeated Ares and became the victor in wrestling. He outran Hermes in the race and won first place.
Apollo divides months into summer and winter. He rides on the back of a swan to the land of the Hyperboreans during the winter months, and the absence of warmth in winter is due to his departure. During his absence, Delphi was under the care of Dionysus, and no prophecies were given during winters.
Periphas
Periphas was an Attic king and a priest of Apollo. He was noble, just and rich, and did all his duties justly. Because of this, people were very fond of him and began honouring him to the same extent as Zeus. At one point, they worshipped Periphas in place of Zeus and set up shrines and temples to him. This annoyed Zeus, who decided to annihilate the entire family of Periphas. But because Periphas was a just king and a good devotee, Apollo intervened and requested his father to spare him. Zeus considered Apollo's words and agreed to let him live, but metamorphosed Periphas into an eagle and made the eagle the king of birds. When Periphas' wife requested Zeus to let her stay with her husband, Zeus turned her into a vulture, fulfilling her wish.
Molpadia and Parthenos
Molpadia and Parthenos were the sisters of Rhoeo, a former lover of Apollo. One day, they were put in charge of watching their father's ancestral wine jar but they fell asleep while performing this duty. While they were asleep, the wine jar was broken by the swine their family kept. When the sisters woke up and saw what had happened, they threw themselves off a cliff in fear of their father's wrath. Apollo, who was passing by, caught them and carried them to two different cities in Chersonesus, Molpadia to Castabus and Parthenos to Bubastus. He turned them into goddesses and they both received divine honors. Molpadia's name was changed to Hemithea upon her deification.
Prometheus
Prometheus was the titan who was punished by Zeus for stealing fire. He was bound to a rock, where each day an eagle was sent to eat Prometheus' liver, which would then grow back overnight to be eaten again the next day. Seeing his plight, Apollo pleaded with Zeus to release the kind Titan, while Artemis and Leto stood behind him with tears in their eyes. Zeus, moved by Apollo's words and the tears of the goddesses, finally sent Heracles to free Prometheus.
Heracles
After Heracles (then named Alcides) was struck with madness and killed his family, he sought to purify himself and consulted the oracle of Apollo. Apollo, through the Pythia, commanded him to serve king Eurystheus for twelve years and complete the ten tasks the king would give him. Only then would Alcides be absolved of his sin. Apollo also renamed him Heracles.
To complete his third task, Heracles had to capture the Ceryneian Hind, a hind sacred to Artemis, and bring it back alive. After Heracles had chased the hind for one year, the animal eventually tired, and when it tried to cross the river Ladon, he captured it. While he was taking it back, he was confronted by Apollo and Artemis, who were angered at Heracles for this act. However, Heracles soothed the goddess and explained his situation to her. After much pleading, Artemis permitted him to take the hind and told him to return it later.
After he was freed from his servitude to Eurystheus, Heracles came into conflict with Iphytus, a prince of Oechalia, and murdered him. Soon after, he contracted a terrible disease. He consulted the oracle of Apollo once again, in the hope of ridding himself of the disease. The Pythia, however, refused to give any prophecy. In anger, Heracles snatched the sacred tripod and started walking away, intending to start his own oracle. Apollo did not tolerate this and stopped Heracles; a duel ensued between them. Artemis rushed to support Apollo, while Athena supported Heracles. Soon, Zeus threw his thunderbolt between the fighting brothers and separated them. He reprimanded Heracles for this act of violation and asked Apollo to give a solution to Heracles. Apollo then ordered the hero to serve under Omphale, queen of Lydia, for one year in order to purify himself.
After their reconciliation, Apollo and Heracles together founded the city of Gythion.
Plato's concept of soulmates
A long time ago, there were three kinds of human beings: male, descended from the sun; female, descended from the earth; and androgynous, descended from the moon. Each human being was completely round, with four arms and four legs, two identical faces on opposite sides of a head with four ears, and all else to match. They were powerful and unruly. Otis and Ephialtes even dared to scale Mount Olympus.
To check their insolence, Zeus devised a plan to humble them and improve their manners instead of completely destroying them. He cut them all in two and asked Apollo to make the necessary repairs, giving humans the individual shape they still have now. Apollo turned their heads and necks around towards their wounds, pulled together their skin at the abdomen, and sewed the skin together at the middle of it. This is what we call the navel today. He smoothed out the wrinkles and shaped the chest. But he made sure to leave a few wrinkles on the abdomen and around the navel so that they might be reminded of their punishment.
The rock of Leukas
Leukatas was believed to be a white-colored rock jutting out from the island of Leukas into the sea. It was present in the sanctuary of Apollo Leukates. A leap from this rock was believed to have put an end to the longings of love.
Once, Aphrodite fell deeply in love with Adonis, a young man of great beauty who was later accidentally killed by a boar. Heartbroken, Aphrodite wandered looking for the rock of Leukas. When she reached the sanctuary of Apollo in Argos, she confided in him her love and sorrow. Apollo then brought her to the rock of Leukas and asked her to throw herself from the top of the rock. She did so and was freed from her love. When she sought the reason behind this, Apollo told her that Zeus, before taking another lover, would sit on this rock to free himself from his love for Hera.
Another tale relates that a man named Nireus, who fell in love with the cult statue of Athena, came to the rock and leapt in order to rid himself of his passion. After jumping, he fell into a fisherman's net and, when he was pulled out, found a box filled with gold. He fought with the fisherman and took the gold, but Apollo appeared to him in a dream at night and warned him not to appropriate gold which belonged to others.
It was an ancestral custom among the Leukadians to fling a criminal from this rock every year, at the sacrifice performed in honor of Apollo, for the sake of averting evils. However, a number of men would be stationed all around below the rock to catch the criminal and take him out of the borders in order to exile him from the island. This was the same rock from which, according to legend, Sappho took her suicidal leap.
Slaying of Titans
Once Hera, out of spite, incited the Titans to make war on Zeus and seize his throne. When the Titans tried to climb Mount Olympus, Zeus, with the help of Apollo, Artemis and Athena, defeated them and cast them into Tartarus.
Female lovers
Apollo is said to have been the lover of all nine Muses, and not being able to choose one of them, he decided to remain unwed. He fathered the Corybantes by the Muse Thalia. By Calliope, he had Hymenaios, Ialemus, Orpheus and Linus. Alternatively, Linus was said to be the son of Apollo and either Urania or Terpsichore.
In the Great Eoiae that is attributed to Hesiod, Scylla is the daughter of Apollo and Hecate.
Cyrene was a Thessalian princess whom Apollo loved. In her honor, he built the city Cyrene and made her its ruler. She was later granted longevity by Apollo, who turned her into a nymph. The couple had two sons, Aristaeus and Idmon.
Evadne was a nymph daughter of Poseidon and a lover of Apollo. They had a son, Iamus. At the time of the childbirth, Apollo sent Eileithyia, the goddess of childbirth, to assist her.
Rhoeo, a princess of the island of Naxos, was loved by Apollo. Out of affection for her, Apollo turned her sisters into goddesses. On the island of Delos she bore Apollo a son named Anius. Not wanting to keep the child, she entrusted the infant to Apollo and left. Apollo raised and educated the child on his own.
Ourea, a daughter of Poseidon, fell in love with Apollo when he and Poseidon were serving the Trojan king Laomedon. They both united on the day the walls of Troy were built. She bore to Apollo a son, whom Apollo named Ileus, after the city of his birth, Ilion (Troy). Ileus was very dear to Apollo.
Thero, daughter of Phylas, a maiden as beautiful as the moonbeams, was loved by the radiant Apollo, and she loved him in return. Through their union, she became the mother of Chaeron, who was famed as "the tamer of horses". He later built the city Chaeronea.
Hyrie or Thyrie was the mother of Cycnus. Apollo turned both the mother and son into swans when they jumped into a lake and tried to kill themselves.
Hecuba was the wife of King Priam of Troy, and Apollo had a son with her named Troilus. An oracle prophesied that Troy would not be defeated if Troilus reached the age of twenty alive. He was ambushed and killed by Achilles, and Apollo avenged his death by killing Achilles. After the sack of Troy, Hecuba was taken to Lycia by Apollo.
Coronis was the daughter of Phlegyas, King of the Lapiths. While pregnant with Asclepius, Coronis fell in love with Ischys, son of Elatus, and slept with him. When Apollo found out about her infidelity through his prophetic powers, or thanks to his raven who informed him, he sent his sister Artemis to kill Coronis. Apollo rescued the baby by cutting open Coronis' belly and gave it to the centaur Chiron to raise.
Dryope, the daughter of Dryops, was impregnated by Apollo in the form of a snake. She gave birth to a son named Amphissus.
In Euripides' play Ion, Apollo fathered Ion by Creusa, wife of Xuthus. He used his powers to conceal her pregnancy from her father. Later, when Creusa left Ion to die in the wild, Apollo asked Hermes to save the child and bring him to the oracle at Delphi, where he was raised by a priestess.
Apollo loved and kidnapped an Oceanid nymph, Melia. Her father Oceanus sent one of his sons, Caanthus, to find her, but Caanthus could not take her back from Apollo, so he burned Apollo's sanctuary. In retaliation, Apollo shot and killed Caanthus.
Male lovers
Hyacinth (or Hyacinthus), a beautiful and athletic Spartan prince, was one of Apollo's favourite lovers. The pair was practicing throwing the discus when a discus thrown by Apollo was blown off course by the jealous Zephyrus and struck Hyacinthus in the head, killing him instantly. Apollo is said to have been filled with grief. Out of Hyacinthus' blood, Apollo created a flower named after him as a memorial to his death, and his tears stained the flower petals with the interjection αἰαῖ, meaning alas. He was later resurrected and taken to heaven. The festival Hyacinthia was a national celebration of Sparta, which commemorated the death and rebirth of Hyacinthus.
Another male lover was Cyparissus, a descendant of Heracles. Apollo gave him a tame deer as a companion, but Cyparissus accidentally killed it with a javelin as it lay asleep in the undergrowth. Cyparissus was so saddened by its death that he asked Apollo to let his tears fall forever. Apollo granted the request by turning him into the cypress tree named after him, which was said to be a sad tree because its sap forms droplets like tears on the trunk.
Admetus, the king of Pherae, was also Apollo's lover. During his exile, which lasted either for one year or nine years, Apollo served Admetus as a herdsman. The romantic nature of their relationship was first described by Callimachus of Alexandria, who wrote that Apollo was "fired with love" for Admetus. Plutarch lists Admetus as one of Apollo's lovers and says that Apollo served Admetus because he doted upon him. The Latin poet Ovid said that even though he was a god, Apollo forsook his pride and stayed in Pherae as a servant for the sake of Admetus. Tibullus describes Apollo's love for the king as servitium amoris (slavery of love) and asserts that Apollo became his servant not by force but by choice. He would also make cheese and serve it to Admetus. His domestic actions caused embarrassment to his family.
When Admetus wanted to marry princess Alcestis, Apollo provided a chariot pulled by a lion and a boar he had tamed. This satisfied Alcestis' father and he let Admetus marry his daughter. Further, Apollo saved the king from Artemis' wrath and also convinced the Moirai to postpone Admetus' death once.
Branchus, a shepherd, one day came across Apollo in the woods. Captivated by the god's beauty, he kissed Apollo. Apollo requited his affections and wanting to reward him, bestowed prophetic skills on him. His descendants, the Branchides, were an influential clan of prophets.
Other male lovers of Apollo include:
Adonis, who is said to have been the lover of both Apollo and Aphrodite. He behaved as a man with Aphrodite and as a woman with Apollo.
Atymnius, otherwise known as a beloved of Sarpedon
Boreas, the god of North winds
Cinyras, king of Cyprus and the priest of Aphrodite
Helenus, a Trojan prince (son of Priam and Hecuba). He received from Apollo an ivory bow with which he later wounded Achilles in the hand.
Hippolytus of Sicyon (not the same as Hippolytus, the son of Theseus)
Hymenaios, the son of Magnes
Iapis, to whom Apollo taught the art of healing
Phorbas, the dragon slayer (probably the son of Triopas)
Children
Apollo sired many children, from mortal women and nymphs as well as the goddesses. His children grew up to be physicians, musicians, poets, seers or archers. Many of his sons founded new cities and became kings.
Asclepius is the most famous son of Apollo. His skills as a physician surpassed those of Apollo. Zeus killed him for bringing back the dead, but upon Apollo's request, he was resurrected as a god. Aristaeus was placed under the care of Chiron after his birth. He became the god of beekeeping, cheese-making, animal husbandry and more. He was ultimately given immortality for the benefits he bestowed upon humanity. The Corybantes were spear-clashing, dancing demigods.
The sons of Apollo who participated in the Trojan War include the Trojan princes Hector and Troilus, as well as Tenes, the king of Tenedos, all three of whom were killed by Achilles over the course of the war.
Apollo's children who became musicians and bards include Orpheus, Linus, Ialemus, Hymenaeus, Philammon, Eumolpus and Eleuther. Apollo fathered three daughters, Apollonis, Borysthenis and Cephisso, who formed a group of minor Muses, the "Musa Apollonides". Plutarch recounts that the Delphians believed the three Muses to be Nete, Mese, and Hypate, named after the highest, middle, and lowest strings of the lyre. Phemonoe was a seer and poet who was the inventor of the hexameter.
Apis, Idmon, Iamus, Tenerus, Mopsus, Galeus, Telmessus and others were gifted seers. Anius, Pythaeus and Ismenus lived as high priests. Most of them were trained by Apollo himself.
Arabus, Delphos, Dryops, Miletos, Tenes, Epidaurus, Ceos, Lycoras, Syrus, Pisus, Marathus, Megarus, Patarus, Acraepheus, Cicon, Chaeron and many other sons of Apollo, under the guidance of his words, founded eponymous cities.
He also had a son by Agathippe named Chrysorrhoas, who was a mechanic artist. His other daughters include Eurynome; Chariclo, wife of Chiron; Eurydice, the wife of Orpheus; Eriopis, famous for her beautiful hair; Melite the heroine; Pamphile the silk weaver; Parthenos; and, by some accounts, Phoebe, Hilyra and Scylla. Apollo turned Parthenos into a constellation after her early death.
Additionally, Apollo fostered and educated Chiron, the centaur who later became the greatest teacher and educated many demigods, including Apollo's sons. Apollo also fostered Carnus, the son of Zeus and Europa.
Failed love attempts
Love affairs ascribed to Apollo are a late development in Greek mythology. Their vivid anecdotal qualities have made some of them favorites of painters since the Renaissance, the result being that they stand out more prominently in the modern imagination.
Daphne was a nymph who scorned Apollo's advances and ran away from him. When Apollo chased her in order to persuade her, she changed herself into a laurel tree. According to other versions, she cried for help during the chase, and Gaia helped her by taking her in and placing a laurel tree in her place. According to Roman poet Ovid, the chase was brought about by Cupid, who hit Apollo with a golden arrow of love and Daphne with a leaden arrow of hatred. The myth explains the origin of the laurel and the connection of Apollo with the laurel and its leaves, which his priestess employed at Delphi. The leaves became the symbol of victory and laurel wreaths were given to the victors of the Pythian games.
Marpessa was kidnapped by Idas but was loved by Apollo as well. Zeus made her choose between them, and she chose Idas on the grounds that Apollo, being immortal, would tire of her when she grew old.
Sinope, a nymph, was approached by the amorous Apollo. She made him promise that he would grant to her whatever she would ask for, and then cleverly asked him to let her stay a virgin. Apollo kept his promise and went back.
Bolina was admired by Apollo, but she refused him and jumped into the sea. To save her from death, Apollo turned her into a nymph.
Castalia was a nymph whom Apollo loved. She fled from him and dove into the spring at Delphi, at the base of Mt. Parnassos, which was then named after her. Water from this spring was sacred; it was used to clean the Delphian temples and inspire the priestesses.
Cassandra was a daughter of Hecuba and Priam. Apollo wished to court her. Cassandra promised to return his love on one condition – he should give her the power to see the future. Apollo fulfilled her wish, but she went back on her word and rejected him soon after. Angered that she broke her promise, Apollo cursed her that even though she would see the future, no one would ever believe her prophecies.
The Sibyl of Cumae, like Cassandra, promised Apollo her love in exchange for a boon, asking for as many years of life as the grains of sand in her hand. Apollo granted her wish, but she broke her word. Though she lived long, Apollo did not grant her agelessness, and she withered away until only her voice remained.
Hestia, the goddess of the hearth, rejected both Apollo's and Poseidon's marriage proposals and swore that she would always stay unmarried.
In one version of the prophet Tiresias's origins, he was originally a woman who promised Apollo to sleep with him if he would give her music lessons. Apollo granted her wish, but she then went back on her word and refused him. In anger, Apollo turned her into a man.
Female counterparts
Artemis
Artemis, as the sister of Apollo, is thea apollousa; that is, as a female divinity she represented the same idea that Apollo did as a male divinity. In the pre-Hellenic period, their relationship was described as one between husband and wife, and there seems to have been a tradition which actually described Artemis as the wife of Apollo. However, this relationship was never sexual but spiritual, which is why both are depicted as unmarried in the Hellenic period.
Artemis, like her brother, is armed with a bow and arrows. She is the cause of sudden deaths of women. She is also the protector of the young, especially girls. Though she has nothing to do with oracles, music or poetry, she sometimes led the female chorus on Olympus while Apollo sang. The laurel (daphne) was sacred to both. Artemis Daphnaia had her temple among the Lacedaemonians, at a place called Hypsoi. Apollo Daphnephoros had a temple in Eretria, a "place where the citizens are to take the oaths". In later times, when Apollo was regarded as identical with the sun or Helios, Artemis was naturally regarded as Selene or the moon.
Hecate
Hecate, the goddess of witchcraft and magic, is the chthonic counterpart of Apollo. The two are cousins, since their mothers – Leto and Asteria – are sisters. One of Apollo's epithets, Hecatos, is the masculine form of Hecate, and both names mean "working from afar". While Apollo presided over the prophetic powers and magic of light and heaven, Hecate presided over the prophetic powers and magic of night and chthonian darkness. If Hecate is the "gate-keeper", Apollo Agyieus is the "door-keeper". Hecate is the goddess of crossroads and Apollo is the god and protector of streets.
The oldest evidence found for Hecate's worship is at Apollo's temple in Miletos. There, Hecate was taken to be Apollo's sister counterpart in the absence of Artemis. Hecate's lunar nature makes her the goddess of the waning moon and contrasts and complements, at the same time, Apollo's solar nature.
Athena
As a deity of knowledge and great power, Apollo was seen as the male counterpart of Athena. As Zeus' favorite children, they were given more powers and duties. Apollo and Athena often took up the role of protectors of cities, and were patrons of some of the important cities: Athena was the principal goddess of Athens, and Apollo the principal god of Sparta.
As patrons of arts, Apollo and Athena were companions of the Muses, the former a much more frequent companion than the latter. Apollo was sometimes called the son of Athena and Hephaestus.
In the Trojan War, as Zeus' executive, Apollo is seen holding the aegis like Athena usually does. Apollo's decisions were usually approved by his sister Athena, and they both worked to establish the law and order set forth by Zeus.
Apollo in the Oresteia
In Aeschylus' Oresteia trilogy, Clytemnestra kills her husband, King Agamemnon, because he had sacrificed their daughter Iphigenia to proceed with the Trojan War. Apollo gives an order through the Oracle at Delphi that Agamemnon's son, Orestes, is to kill Clytemnestra and Aegisthus, her lover. Orestes and Pylades carry out the revenge, and consequently Orestes is pursued by the Erinyes, or Furies (female personifications of vengeance).
Apollo and the Furies argue about whether the matricide was justified; Apollo holds that the bond of marriage is sacred and Orestes was avenging his father, whereas the Erinyes say that the bond of blood between mother and son is more meaningful than the bond of marriage. They invade his temple, and he drives them away. He says that the matter should be brought before Athena. Apollo promises to protect Orestes, as Orestes has become Apollo's supplicant. Apollo advocates Orestes at the trial, and ultimately Athena rules in favor of Apollo.
Roman Apollo
The Roman worship of Apollo was adopted from the Greeks. As a quintessentially Greek god, Apollo had no direct Roman equivalent, although later Roman poets often referred to him as Phoebus. There was a tradition that the Delphic oracle was consulted as early as the period of the kings of Rome during the reign of Tarquinius Superbus.
On the occasion of a pestilence in the 430s BCE, Apollo's first temple at Rome was established in the Flaminian fields, replacing an older cult site there known as the "Apollinare". During the Second Punic War in 212 BCE, the Ludi Apollinares ("Apollonian Games") were instituted in his honor, on the instructions of a prophecy attributed to one Marcius. In the time of Augustus, who considered himself under the special protection of Apollo and was even said to be his son, Apollo's worship developed and he became one of the chief gods of Rome.
After the Battle of Actium, which was fought near a sanctuary of Apollo, Augustus enlarged Apollo's temple, dedicated a portion of the spoils to him, and instituted quinquennial games in his honour. He also erected a new temple to the god on the Palatine hill. Sacrifices and prayers on the Palatine to Apollo and Diana formed the culmination of the Secular Games, held in 17 BCE to celebrate the dawn of a new era.
Festivals
The chief Apollonian festival was the Pythian Games held every four years at Delphi and was one of the four great Panhellenic Games. Also of major importance was the Delia held every four years on Delos. Athenian annual festivals included the Boedromia, Metageitnia, Pyanepsia, and Thargelia.
Spartan annual festivals were the Carneia and the Hyacinthia.
Every nine years, Thebes held the Daphnephoria.
Attributes and symbols
Apollo's most common attributes were the bow and arrow. Other attributes of his included the kithara (an advanced version of the common lyre), the plectrum and the sword. Another common emblem was the sacrificial tripod, representing his prophetic powers. The Pythian Games were held in Apollo's honor every four years at Delphi. The bay laurel plant was used in expiatory sacrifices and in making the crown of victory at these games.
The palm tree was also sacred to Apollo because he had been born under one in Delos. Animals sacred to Apollo included wolves, dolphins, roe deer, swans, cicadas (symbolizing music and song), ravens, hawks, crows (Apollo had hawks and crows as his messengers), snakes (referencing Apollo's function as the god of prophecy), mice and griffins, mythical eagle–lion hybrids of Eastern origin.
Homer and Porphyry wrote that Apollo had a hawk as his messenger. In many myths Apollo is transformed into a hawk. In addition, Claudius Aelianus wrote that in Ancient Egypt people believed that hawks were sacred to the god and that according to the ministers of Apollo in Egypt there were certain men called "hawk-keepers" (ἱερακοβοσκοί) who fed and tended the hawks belonging to the god. Eusebius wrote that the second appearance of the moon is held sacred in the city of Apollo in Egypt and that the city's symbol is a man with a hawklike face (Horus). Claudius Aelianus wrote that Egyptians called Apollo Horus in their own language.
As god of colonization, Apollo gave oracular guidance on colonies, especially during the height of colonization, 750–550 BCE. According to Greek tradition, he helped Cretan or Arcadian colonists found the city of Troy. However, this story may reflect a cultural influence running in the reverse direction: Hittite cuneiform texts mention an Asia Minor god called Appaliunas or Apalunas in connection with the city of Wilusa, which most scholars now regard as identical with the Greek Ilion. In this interpretation, Apollo's title of Lykegenes can simply be read as "born in Lycia", which effectively severs the god's supposed link with wolves (possibly a folk etymology).
In literary contexts, Apollo represents harmony, order, and reason—characteristics contrasted with those of Dionysus, god of wine, who represents ecstasy and disorder. The contrast between the roles of these gods is reflected in the adjectives Apollonian and Dionysian. However, the Greeks thought of the two qualities as complementary: the two gods are brothers, and when Apollo at winter left for Hyperborea, he would leave the Delphic oracle to Dionysus. This contrast appears to be shown on the two sides of the Borghese Vase.
Apollo is often associated with the Golden Mean. This is the Greek ideal of moderation and a virtue that opposes gluttony.
In antiquity, Apollo was associated with the planet Mercury. The ancient Greeks at first believed that Mercury as observed in the morning sky and Mercury as observed in the evening sky were two different objects. The morning planet was called Apollo and the evening one Hermes; once it was realised that the two were the same planet, the name Hermes/Mercury was kept and Apollo was dropped.
Apollo in the arts
Apollo is a common theme in Greek and Roman art and also in the art of the Renaissance. The earliest Greek word for a statue is ἄγαλμα (agalma), "delight", and the sculptors tried to create forms which would inspire such guiding vision. Maurice Bowra notes that the Greek artist puts into a god the highest degree of power and beauty that can be imagined. The sculptors derived this from observations of human beings, but they also embodied in concrete form issues beyond the reach of ordinary thought.
The naked bodies of the statues are associated with the cult of the body, which was essentially a religious activity. The muscular frames and limbs combined with slim waists indicate the Greek desire for health and the physical capacity which was necessary in the hard Greek environment. The statues of Apollo and the other gods present them in their full youth and strength. "In the balance and relation of their limbs, such figures express their whole character, mental and physical, and reveal their central being, the radiant reality of youth in its heyday".
Archaic sculpture
Numerous statues of male youths from Archaic Greece exist, and were once thought to be representations of Apollo, though later discoveries indicated that many represented mortals. In 1895, V. I. Leonardos proposed the term kouros ("male youth") to refer to those from Keratea; this usage was later expanded by Henri Lechat in 1904 to cover all statues of this format.
The earliest examples of life-sized statues of Apollo may be two figures from the Ionic sanctuary on the island of Delos. Such statues were found across the Greek-speaking world; the preponderance were found at sanctuaries of Apollo, with more than one hundred from the sanctuary of Apollo Ptoios in Boeotia alone. Significantly rarer are life-sized bronze statues. One of the few originals which survived into the present day, so rare that its discovery in 1959 was described as "a miracle" by Ernst Homann-Wedeking, is the masterpiece bronze Piraeus Apollo. It was found in Piraeus, a port city close to Athens, and is believed to have come from north-eastern Peloponnesus. It is the only surviving large-scale Peloponnesian statue.
Classical sculpture
The famous Apollo of Mantua and its variants are early forms of the Apollo Citharoedus statue type, in which the god holds the cithara, a sophisticated seven-stringed variant of the lyre, in his left arm. While none of the Greek originals have survived, several Roman copies from approximately the late 1st or early 2nd century CE exist, of which an example is the Apollo Barberini.
Hellenistic Greece and Rome
Apollo, as a handsome beardless young man, is often depicted with a cithara (as Apollo Citharoedus) or bow in his hand, or leaning against a tree (the Apollo Lykeios and Apollo Sauroctonos types). The Apollo Belvedere is a marble sculpture that was rediscovered in the late 15th century; for centuries it epitomized the ideals of Classical Antiquity for Europeans, from the Renaissance through the 19th century. The marble is a Hellenistic or Roman copy of a bronze original by the Greek sculptor Leochares, made between 330 and 320 BCE.
A haloed Apollo in mosaic from Hadrumetum is in the museum at Sousse. The conventions of this representation, head tilted, lips slightly parted, large-eyed, curling hair cut in locks grazing the neck, were developed in the 3rd century BCE to depict Alexander the Great. Some time after this mosaic was executed, the earliest depictions of Christ would also be beardless and haloed.
Modern reception
Apollo often appears in modern and popular culture due to his status as the god of music, dance and poetry.
Postclassical art and literature
Dance and music
Apollo has featured in dance and music in modern culture. Percy Bysshe Shelley composed a "Hymn of Apollo" (1820), and the god's instruction of the Muses formed the subject of Igor Stravinsky's Apollon musagète (1927–1928). In 1978, the Canadian band Rush released the album Hemispheres, featuring the paired songs "Apollo: Bringer of Wisdom" and "Dionysus: Bringer of Love".
Books
Apollo has been portrayed in modern literature, such as when Charles Handy in Gods of Management (1978) uses Greek gods as a metaphor to portray various types of organizational culture. Apollo represents a "role" culture where order, reason, and bureaucracy prevail.
Psychology and philosophy
In the philosophical discussion of the arts, a distinction is sometimes made between the Apollonian and Dionysian impulses, where the former is concerned with imposing intellectual order and the latter with chaotic creativity. Friedrich Nietzsche argued that a fusion of the two was most desirable. Psychologist Carl Jung's Apollo archetype represents what he saw as the disposition in people to over-intellectualise and maintain emotional distance.
Spaceflight
In spaceflight, the 1960s and 1970s NASA program for orbiting and landing astronauts on the Moon was named after Apollo by NASA manager Abe Silverstein.
Sources
Primary sources
Aelian, On Animals, Volume II: Books 6–11. Translated by A. F. Scholfield. Loeb Classical Library 447. Cambridge, MA: Harvard University Press, 1958.
Aeschylus, The Eumenides in Aeschylus, with an English translation by Herbert Weir Smyth, Ph. D. in two volumes, Vol 2, Cambridge, Massachusetts, Harvard University Press, 1926, Online version at the Perseus Digital Library.
Antoninus Liberalis, The Metamorphoses of Antoninus Liberalis translated by Francis Celoria (Routledge 1992). Online version at the Topos Text Project.
Apollodorus, Apollodorus, The Library, with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. Online version at the Perseus Digital Library.
Apollonius of Rhodes, Apollonius Rhodius: the Argonautica, translated by Robert Cooper Seaton, W. Heinemann, 1912. Internet Archive.
Callimachus, Callimachus and Lycophron with an English Translation by A. W. Mair; Aratus, with an English Translation by G. R. Mair, London: W. Heinemann, New York: G. P. Putnam 1921. Online version at Harvard University Press. Internet Archive.
Campbell, David A., Greek Lyric, Volume III: Stesichorus, Ibycus, Simonides, and Others, Loeb Classical Library No. 476, Cambridge, Massachusetts, Harvard University Press, 1991. Online version at Harvard University Press.
Cicero, Marcus Tullius, De Natura Deorum in Cicero in Twenty-eight Volumes, XIX De Natura Deorum; Academica, with an English translation by H. Rackham, Cambridge, Massachusetts: Harvard University Press; London: William Heinemann, Ltd, 1967. Internet Archive.
Diodorus Siculus, Library of History, Volume III: Books 4.59–8, translated by C. H. Oldfather, Loeb Classical Library No. 340. Cambridge, Massachusetts, Harvard University Press, 1939. Online version at Harvard University Press. Online version by Bill Thayer.
Etymologicum Magnum, edited by Thomas Gaisford, Oxford, E. Typographeo Academico, 1848. Online version at the Munich Digitization Center.
Herodotus, Herodotus, with an English translation by A. D. Godley. Cambridge. Harvard University Press. 1920. Online version available at The Perseus Digital Library.
Hesiod, Catalogue of Women, in Hesiod: The Shield, Catalogue of Women, Other Fragments, edited and translated by Glenn W. Most, Loeb Classical Library No. 503, Cambridge, Massachusetts, Harvard University Press, 2007, 2018. Online version at Harvard University Press.
Hesiod, Theogony, in The Homeric Hymns and Homerica with an English Translation by Hugh G. Evelyn-White, Cambridge, MA., Harvard University Press; London, William Heinemann Ltd. 1914. Online version at the Perseus Digital Library.
Homeric Hymn 3 to Apollo in The Homeric Hymns and Homerica with an English Translation by Hugh G. Evelyn-White, Cambridge, MA., Harvard University Press; London, William Heinemann Ltd. 1914. Online version at the Perseus Digital Library.
Homeric Hymn 4 to Hermes, in The Homeric Hymns and Homerica with an English Translation by Hugh G. Evelyn-White, Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1914. Online version at the Perseus Digital Library.
Homer, The Iliad with an English Translation by A.T. Murray, PhD in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library.
Homer; The Odyssey with an English Translation by A.T. Murray, PH.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1919. Online version at the Perseus Digital Library.
Hyginus, Gaius Julius, De astronomia, in The Myths of Hyginus, edited and translated by Mary A. Grant, Lawrence: University of Kansas Press, 1960. Online version at ToposText.
Hyginus, Gaius Julius, Fabulae, in The Myths of Hyginus, edited and translated by Mary A. Grant, Lawrence: University of Kansas Press, 1960. Online version at ToposText.
Livy, The History of Rome, Books I and II, with an English translation. Cambridge, Mass., Harvard University Press; London, William Heinemann, Ltd. 1919.
Nonnus, Dionysiaca; translated by Rouse, W H D, I Books I-XV. Loeb Classical Library No. 344, Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1940. Internet Archive
Nonnus, Dionysiaca; translated by Rouse, W H D, II Books XVI-XXXV. Loeb Classical Library No. 345, Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1940. Internet Archive
Page, Denys Lionel, Sir, Poetae Melici Graeci, Oxford University Press, 1962.
Statius, Thebaid. Translated by Mozley, J H. Loeb Classical Library Volumes. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1928.
Strabo, The Geography of Strabo. Edition by H.L. Jones. Cambridge, Mass.: Harvard University Press; London: William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library.
Sophocles, Oedipus Rex
Palaephatus, On Unbelievable Tales 46. Hyacinthus (330 BCE)
Ovid, Metamorphoses, Brookes More, Boston, Cornhill Publishing Co. 1922. Online version at the Perseus Digital Library. 10. 162–219 (1–8 CE)
Pausanias, Pausanias Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1918. Online version at the Perseus Digital Library.
Philostratus the Elder, Imagines, in Philostratus the Elder, Imagines. Philostratus the Younger, Imagines. Callistratus, Descriptions. Translated by Arthur Fairbanks. Loeb Classical Library No. 256. Cambridge, Massachusetts: Harvard University Press, 1931. Online version at Harvard University Press. Internet Archive 1926 edition. i.24 Hyacinthus (170–245 CE)
Philostratus the Younger, Imagines, in Philostratus the Elder, Imagines. Philostratus the Younger, Imagines. Callistratus, Descriptions. Translated by Arthur Fairbanks. Loeb Classical Library No. 256. Cambridge, Massachusetts: Harvard University Press, 1931. Online version at Harvard University Press. Internet Archive 1926 edition. 14. Hyacinthus (170–245 CE)
Pindar, Odes, translated by Diane Arnson Svarlien, 1990. Online version at the Perseus Digital Library.
Pliny, Natural History, Volume I: Books 1–2, translated by H. Rackham, Loeb Classical Library No. 330, Cambridge, Massachusetts, Harvard University Press, 1938. Online version at Harvard University Press.
Plutarch. Lives, Volume I: Theseus and Romulus. Lycurgus and Numa. Solon and Publicola. Translated by Bernadotte Perrin. Loeb Classical Library No. 46. Cambridge, Massachusetts: Harvard University Press, 1914. Online version at Harvard University Press. Numa at the Perseus Digital Library.
Pseudo-Plutarch, De fluviis, in Plutarch's morals, Volume V, edited and translated by William Watson Goodwin, Boston: Little, Brown & Co., 1874. Online version at the Perseus Digital Library.
Lucian, Dialogues of the Dead. Dialogues of the Sea-Gods. Dialogues of the Gods. Dialogues of the Courtesans, translated by M. D. MacLeod, Loeb Classical Library No. 431, Cambridge, Massachusetts, Harvard University Press, 1961. Online version at Harvard University Press. Internet Archive.
First Vatican Mythographer, 197. Thamyris et Musae
Servius, Servii grammatici qui feruntur in Vergilii carmina commentarii, Volume I, edited by Georgius Thilo and Hermannus Hagen, Bibliotheca Teubneriana, Leipzig, Teubner, 1881. Internet Archive. Online version at the Perseus Digital Library.
Stephanus of Byzantium, Stephani Byzantii Ethnicorum quae supersunt, edited by August Meineke, Berlin, Impensis G. Reimeri, 1849. Internet Archive. Google Books. Online version at ToposText.
Tzetzes, John, Chiliades, editor Gottlieb Kiessling, F.C.G. Vogel, 1826. Google Books. (English translation: Book I by Ana Untila; Books II–IV, by Gary Berkowitz; Books V–VI by Konstantino Ramiotis; Books VII–VIII by Vasiliki Dogani; Books IX–X by Jonathan Alexander; Books XII–XIII by Nikolaos Giallousis. Internet Archive).
Valerius Flaccus, Argonautica, translated by J. H. Mozley, Loeb Classical Library No. 286. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1928. Online version at Harvard University Press. Online translated text available at theoi.com.
Vergil, Aeneid. Theodore C. Williams. trans. Boston. Houghton Mifflin Co. 1910. Online version at the Perseus Digital Library.
Secondary sources
Athanassakis, Apostolos N., and Benjamin M. Wolkow, The Orphic Hymns, Johns Hopkins University Press; First Printing edition (29 May 2013). Google Books.
M. Bieber, 1964. Alexander the Great in Greek and Roman Art. Chicago.
Hugh Bowden, 2005. Classical Athens and the Delphic Oracle: Divination and Democracy. Cambridge University Press.
Walter Burkert, 1985. Greek Religion (Harvard University Press) III.2.5 passim
Fontenrose, Joseph Eddy, Python: A Study of Delphic Myth and Its Origins, University of California Press, 1959.
Gantz, Timothy, Early Greek Myth: A Guide to Literary and Artistic Sources, Johns Hopkins University Press, 1996, Two volumes: (Vol. 1), (Vol. 2).
Miranda J. Green, 1997. Dictionary of Celtic Myth and Legend, Thames and Hudson.
Grimal, Pierre, The Dictionary of Classical Mythology, Wiley-Blackwell, 1996.
Hard, Robin, The Routledge Handbook of Greek Mythology: Based on H.J. Rose's "Handbook of Greek Mythology", Psychology Press, 2004. Google Books.
Karl Kerenyi, 1953. Apollon: Studien über Antiken Religion und Humanität revised edition.
Kerényi, Karl 1951, The Gods of the Greeks, Thames and Hudson, London.
Mertens, Dieter; Schutzenberger, Margareta, Città e monumenti dei Greci d'Occidente: dalla colonizzazione alla crisi di fine V secolo a.C., Rome: L'Erma di Bretschneider, 2006.
Martin Nilsson, 1955. Die Geschichte der Griechische Religion, vol. I. C.H. Beck.
Parada, Carlos, Genealogical Guide to Greek Mythology, Jonsered, Paul Åströms Förlag, 1993.
Pauly–Wissowa, Realencyclopädie der klassischen Altertumswissenschaft: II, "Apollon". The best repertory of cult sites (Burkert).
Peck, Harry Thurston, Harpers Dictionary of Classical Antiquities, New York. Harper and Brothers. 1898. Online version at the Perseus Digital Library.
Pfeiff, K.A., 1943. Apollon: Wandlung seines Bildes in der griechischen Kunst. Traces the changing iconography of Apollo.
Robertson, D. S. (1945), A Handbook of Greek and Roman Architecture, Cambridge University Press.
West, M. L. (2003), Greek Epic Fragments: From the Seventh to the Fifth Centuries BC, Loeb Classical Library No. 497, Cambridge, Massachusetts, Harvard University Press. Online version at Harvard University Press. Internet Archive.
Smith, William, Dictionary of Greek and Roman Biography and Mythology, London (1873). Online version at the Perseus Digital Library.
Smith, William, A Dictionary of Greek and Roman Antiquities. William Smith, LLD. William Wayte. G. E. Marindin. Albemarle Street, London. John Murray. 1890. Online version at the Perseus Digital Library.
Spivey, Nigel (1997), Greek Art, Phaidon Press Ltd.
Tripp, Edward, Crowell's Handbook of Classical Mythology, Thomas Y. Crowell Co; First edition (June 1970). Internet Archive.
External links
Apollo at the Greek Mythology Link, by Carlos Parada
The Warburg Institute Iconographic Database (c. 1650 images of Apollo)
|
;Arts gods;Beauty gods;Characters in the Argonautica;Characters in the Odyssey;Childhood gods;Children of Zeus;Classical oracles;Dance gods;Deities in the Iliad;Delian mythology;Dii Consentes;Divine twins;Dragonslayers;Greek gods;Health deities;Health gods;Knowledge gods;Kourotrophoi;Light gods;Mercurian deities;Metamorphoses characters;Music and singing gods;Musicians in Greek mythology;Mythological Greek archers;Mythological Greek physicians;Mythological rapists;Oracular gods;Plague gods;Raven deities;Roman gods;Shapeshifters in Greek mythology;Solar gods;Supernatural healing;Twelve Olympians;Wolf deities
|
https://en.wikipedia.org/wiki/Abacus
|
An abacus (plural abaci or abacuses), also called a counting frame, is a hand-operated calculating tool used since ancient times in the ancient Near East, Europe, China, and Russia, until the adoption of the Hindu–Arabic numeral system. An abacus consists of a two-dimensional array of slidable beads (or similar objects). In their earliest designs, the beads could be loose on a flat surface or sliding in grooves. Later the beads were made to slide on rods and built into a frame, allowing faster manipulation.
Each rod typically represents one digit of a multi-digit number laid out using a positional numeral system such as base ten (though some cultures used different numerical bases). Roman and East Asian abacuses use a system resembling bi-quinary coded decimal, with a top deck (containing one or two beads) representing fives and a bottom deck (containing four or five beads) representing ones. Natural numbers are normally used, but some designs allow simple fractional components (e.g. 1/2, 1/4, and 1/12 in the Roman abacus), and a decimal point can be imagined for fixed-point arithmetic.
Any particular abacus design supports multiple methods to perform calculations, including addition, subtraction, multiplication, division, and square and cube roots. The beads are first arranged to represent a number, then are manipulated to perform a mathematical operation with another number, and their final position can be read as the result (or can be used as the starting number for subsequent operations).
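To make the bi-quinary representation concrete, the following minimal Python sketch (an illustration, not a description of any particular historical device) maps a base-ten integer onto per-rod bead counts, assuming the usual reading in which each upper bead moved toward the beam counts as five and each lower bead as one; the spare upper and lower beads of a real 5+2 suanpan, used for carrying and hexadecimal weights, are not modeled.

```python
# Bi-quinary digit encoding: each decimal digit d on a rod is shown as
# d = 5 * upper + lower, with 0 <= upper <= 1 and 0 <= lower <= 4.

def to_beads(number: int) -> list[tuple[int, int]]:
    """Return (upper, lower) active-bead counts per rod, most significant first."""
    return [(int(d) // 5, int(d) % 5) for d in str(number)]

def read_beads(rods: list[tuple[int, int]]) -> int:
    """Read the rods back as a base-ten integer (ordinary positional notation)."""
    value = 0
    for upper, lower in rods:
        value = value * 10 + 5 * upper + lower
    return value

beads = to_beads(1998)            # [(0, 1), (1, 4), (1, 4), (1, 3)]
assert read_beads(beads) == 1998  # round trip
```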
In the ancient world, the abacus was a practical calculating tool. It was widely used in Europe as late as the 17th century, but fell out of use with the rise of decimal notation and algorismic methods. Although calculators and computers are commonly used today instead of abacuses, abacuses remain in everyday use in some countries. The abacus has an advantage of not requiring a writing implement and paper (needed for algorism) or an electric power source. Merchants, traders, and clerks in some parts of Eastern Europe, Russia, China, and Africa use abacuses. The abacus remains in common use as a scoring system in non-electronic table games. Others may use an abacus due to visual impairment that prevents the use of a calculator. The abacus is still used to teach the fundamentals of mathematics to children in many countries such as Japan and China.
Etymology
The word abacus dates to at least 1387 AD when a Middle English work borrowed the word from Latin that described a sandboard abacus. The Latin word is derived from ancient Greek ἄβαξ (abax), which means something without a base, and colloquially, any piece of rectangular material. Alternatively, without reference to ancient texts on etymology, it has been suggested that it means "a square tablet strewn with dust", or "drawing-board covered with dust (for the use of mathematics)" (the exact shape of the Latin perhaps reflects the genitive form of the Greek word, ἄβακος (abakos)). While the table strewn with dust definition is popular, some argue evidence is insufficient for that conclusion. Greek probably borrowed from a Northwest Semitic language like Phoenician, evidenced by a cognate with the Hebrew word ʾābāq (אבק), or "dust" (in the post-Biblical sense "sand used as a writing surface").
Both abacuses and abaci are used as plurals. The user of an abacus is called an abacist.
History
Mesopotamia
The Sumerian abacus appeared between 2700 and 2300 BC. It consisted of a table of successive columns which delimited the successive orders of magnitude of the Sumerian sexagesimal (base-60) number system.
Some scholars point to a character in Babylonian cuneiform that may have been derived from a representation of the abacus. Historians of mathematics such as Ettore Carruccio believe that Old Babylonians "seem to have used the abacus for the operations of addition and subtraction; however, this primitive device proved difficult to use for more complex calculations".
Egypt
Greek historian Herodotus mentioned the abacus in Ancient Egypt. He wrote that the Egyptians manipulated the pebbles from right to left, opposite in direction to the Greek left-to-right method. Archaeologists have found ancient disks of various sizes that are thought to have been used as counters. However, wall depictions of this instrument are yet to be discovered.
Persia
Around 600 BC, Persians first began to use the abacus, during the Achaemenid Empire. Under the Parthian, Sassanian, and Iranian empires, scholars concentrated on exchanging knowledge and inventions with the countries around them – India, China, and the Roman Empire – which is how the abacus may have been exported to other countries.
Greece
The earliest archaeological evidence for the use of the Greek abacus dates to the 5th century BC. Demosthenes (384–322 BC) complained that the need to use pebbles for calculations was too difficult. A play by Alexis from the 4th century BC mentions an abacus and pebbles for accounting, and both Diogenes and Polybius use the abacus as a metaphor for human behavior, stating "that men that sometimes stood for more and sometimes for less" like the pebbles on an abacus. The Greek abacus was a table of wood or marble, pre-set with small counters in wood or metal for mathematical calculations. This Greek abacus was used in Achaemenid Persia, the Etruscan civilization, Ancient Rome, and the Western Christian world until the French Revolution.
The Salamis Tablet, found on the Greek island Salamis in 1846 AD, dates to 300 BC, making it the oldest counting board discovered so far. It is a slab of white marble on which are 5 groups of markings. In the tablet's center is a set of 5 parallel lines equally divided by a vertical line, capped with a semicircle at the intersection of the bottom-most horizontal line and the single vertical line. Below these lines is a wide space with a horizontal crack dividing it. Below this crack is another group of eleven parallel lines, again divided into two sections by a line perpendicular to them, but with the semicircle at the top of the intersection; the third, sixth and ninth of these lines are marked with a cross where they intersect with the vertical line. Also from this time frame, the Darius Vase was unearthed in 1851. It was covered with pictures, including a "treasurer" holding a wax tablet in one hand while manipulating counters on a table with the other.
Rome
The normal method of calculation in ancient Rome, as in Greece, was by moving counters on a smooth table. Originally pebbles () were used. Marked lines indicated units, fives, tens, etc. as in the Roman numeral system.
Writing in the 1st century BC, Horace refers to the wax abacus, a board covered with a thin layer of black wax on which columns and figures were inscribed using a stylus.
One example of archaeological evidence of the Roman abacus, shown nearby in reconstruction, dates to the 1st century AD. It has eight long grooves containing up to five beads in each and eight shorter grooves having either one or no beads in each. The groove marked I indicates units, X tens, and so on up to millions. The beads in the shorter grooves denote fives (five units, five tens, etc.) resembling a bi-quinary coded decimal system related to the Roman numerals. The short grooves on the right may have been used for marking Roman "ounces" (i.e. fractions).
Medieval Europe
The Roman system of 'counter casting' was used widely in medieval Europe, and persisted in limited use into the nineteenth century. Wealthy abacists used decorative minted counters, called jetons.
Pope Sylvester II reintroduced the abacus with modifications, and it became widely used in Europe again during the 11th century. This abacus used beads on wires, unlike the traditional Roman counting boards, which meant it could be used much faster and was more easily moved.
China
The earliest known written documentation of the Chinese abacus dates to the 2nd century BC.
The Chinese abacus, also known as the suanpan (算盤/算盘, lit. "calculating tray"), comes in various lengths and widths, depending on the operator. It usually has more than seven rods. There are two beads on each rod in the upper deck and five beads each in the bottom one, to represent numbers in a bi-quinary coded decimal-like system. The beads are usually rounded and made of hardwood. The beads are counted by moving them up or down towards the beam; beads moved toward the beam are counted, while those moved away from it are not. Each bead in the upper deck has a value of 5, and each bead in the bottom deck a value of 1. Each rod has a number under it, showing the place value. The suanpan can be reset to the starting position instantly by a quick movement along the horizontal axis to spin all the beads away from the horizontal beam at the center.
The prototype of the Chinese abacus appeared during the Han dynasty, with oval beads. The Song dynasty and earlier used the 1:4 type, a four-bead abacus similar in form, including the shape of the beads, to the modern abacus commonly known as the Japanese-style abacus.
In the early Ming dynasty, the abacus began to appear in a 1:5 ratio. The upper deck had one bead and the bottom had five beads. In the late Ming dynasty, the abacus styles appeared in a 2:5 ratio. The upper deck had two beads, and the bottom had five.
Various calculation techniques were devised for the suanpan, enabling efficient calculation. Some schools teach students how to use it.
In the long scroll Along the River During the Qingming Festival painted by Zhang Zeduan during the Song dynasty (960–1279), a suanpan is clearly visible beside an account book and doctor's prescriptions on the counter of an apothecary's (Feibao).
The similarity of the Roman abacus to the Chinese one suggests that one could have inspired the other, given evidence of a trade relationship between the Roman Empire and China. However, no direct connection has been demonstrated, and the similarity of the abacuses may be coincidental, both ultimately arising from counting with five fingers per hand. Where the Roman model (like most modern Korean and Japanese) has 4 plus 1 bead per decimal place, the standard suanpan has 5 plus 2. Incidentally, the 5 plus 2 arrangement allows use with a hexadecimal numeral system (or any base up to 18), which was used in the traditional Chinese measure-of-weight system 市用制 (shì yòng zhì) for units such as the jīn (斤) and liǎng (兩). (Instead of running on wires as in the Chinese, Korean, and Japanese models, the Roman model used grooves, presumably making arithmetic calculations much slower.)
Another possible source of the suanpan is Chinese counting rods, which operated with a decimal system but lacked the concept of zero as a placeholder. The zero was probably introduced to the Chinese in the Tang dynasty (618–907) when travel in the Indian Ocean and the Middle East would have provided direct contact with India, allowing them to acquire the concept of zero and the decimal point from Indian merchants and mathematicians.
India
The Abhidharmakośabhāṣya of Vasubandhu (316–396), a Sanskrit work on Buddhist philosophy, says that the second-century CE philosopher Vasumitra said that "placing a wick (Sanskrit vartikā) on the number one (ekāṅka) means it is a one while placing the wick on the number hundred means it is called a hundred, and on the number one thousand means it is a thousand". It is unclear exactly what this arrangement may have been. Around the 5th century, Indian clerks were already finding new ways of recording the contents of the abacus. Hindu texts used the term śūnya (zero) to indicate the empty column on the abacus.
Japan
In Japan, the abacus is called soroban (, lit. "counting tray"). It was imported from China in the 14th century. It was probably in use by the working class a century or more before the ruling class adopted it, as the class structure obstructed such changes. The 1:4 abacus, which removes the seldom-used second and fifth bead, became popular in the 1940s.
Today's Japanese abacus is a 1:4 type, four-bead abacus, introduced from China in the Muromachi era. It adopts the form of one bead on the upper deck and four beads on the bottom deck. The top bead on the upper deck is equal to five, as in the Chinese or Korean abacus, and any decimal number can be expressed, so the abacus is designed as a 1:4 device. The beads are always in the shape of a diamond. Quotient division is generally used instead of the traditional division method, so that multiplication and division are performed with consistent digit handling. Later, Japan had a 3:5 abacus called 天三算盤, which is now in the Ize Rongji collection of Shansi Village in Yamagata City. Japan also used a 2:5 type abacus.
The four-bead abacus spread and became common around the world. Improvements to the Japanese abacus arose in various places. In China, an abacus with an aluminium frame and plastic beads has been used, with a clearing mechanism next to the beads: pressing the "clearing" button puts the upper bead in the upper position and the lower beads in the lower position.
The abacus is still manufactured in Japan, despite the proliferation, practicality, and affordability of pocket electronic calculators. The use of the soroban is still taught in Japanese primary schools as part of mathematics, primarily as an aid to faster mental calculation. Using visual imagery, one can complete a calculation as quickly as with a physical instrument.
Korea
The Chinese abacus migrated from China to Korea around 1400 AD. Koreans call it jupan (주판), supan (수판) or jusan (주산). The four-beads abacus (1:4) was introduced during the Goryeo Dynasty. The 5:1 abacus was introduced to Korea from China during the Ming Dynasty.
Native America
Some sources mention the use of an abacus called a nepohualtzintzin in ancient Aztec culture. This Mesoamerican abacus used a 5-digit base-20 system. The word Nepōhualtzintzin comes from Nahuatl, formed from the roots ne (personal), pōhual or pōhualli (the account), and tzintzin (small similar elements); its complete meaning was taken as "counting with small similar elements". Its use was taught in the Calmecac to the temalpouhqueh, students dedicated from childhood to taking the accounts of the skies.
The Nepōhualtzintzin was divided into two main parts separated by a bar or intermediate cord. In the left part were four beads; those in the first row have unitary values (1, 2, 3, and 4). On the right side were three beads, with values of 5, 10, and 15 respectively. To find the value of a bead in one of the upper rows, it is enough to multiply the value of the corresponding bead in the first row by 20 for each row above it.
The device featured 13 rows with 7 beads each, 91 in total. This was a basic number for this culture, with close relations to natural phenomena, the underworld, and the cycles of the heavens. One Nepōhualtzintzin (91) represented the number of days that a season of the year lasts, two Nepōhualtzintzin (182) the number of days of the corn's cycle from sowing to harvest, three Nepōhualtzintzin (273) the number of days of a baby's gestation, and four Nepōhualtzintzin (364) completed a cycle and approximated one year. When translated into modern computer arithmetic, the Nepōhualtzintzin spanned the range from 10 to the 18th power in floating point, which precisely calculated large and small amounts, although rounding off was not allowed.
The rediscovery of the Nepōhualtzintzin was due to the Mexican engineer David Esparza Hidalgo, who in his travels throughout Mexico found diverse engravings and paintings of this instrument and reconstructed several of them in gold, jade, encrustations of shell, etc. Very old Nepōhualtzintzin are attributed to the Olmec culture, and some bracelets of Mayan origin, as well as a diversity of forms and materials in other cultures.
Sanchez wrote in Arithmetic in Maya that another base 5, base 4 abacus had been found in the Yucatán Peninsula that also computed calendar data. This was a finger abacus: on one hand the digits 0, 1, 2, 3, and 4 were used, and on the other hand 0, 1, 2, and 3 were used. Note the use of zero at the beginning and end of the two cycles.
The quipu of the Incas was a system of colored knotted cords used to record numerical data, like advanced tally sticks, but not used to perform calculations. Calculations were carried out using a yupana (Quechua for "counting tool"), which was still in use after the conquest of Peru. The working principle of a yupana is unknown, but in 2001 the Italian mathematician De Pasquale proposed an explanation. By comparing the form of several yupanas, researchers found that calculations were based on the Fibonacci sequence 1, 1, 2, 3, 5 and on powers of 10, 20, and 40 as place values for the different fields in the instrument. Using the Fibonacci sequence would keep the number of grains within any one field at a minimum.
Russia
The Russian abacus, the schoty (счёты, plural from счёт, "counting"), usually has a single slanted deck, with ten beads on each wire (except one wire, with four beads, for quarter-ruble fractions). This 4-bead wire was introduced for quarter-kopeks, which were minted until 1916. The Russian abacus is used vertically, with each wire running horizontally. The wires are usually bowed upward in the center, to keep the beads pinned to either side. It is cleared when all the beads are moved to the right. During manipulation, beads are moved to the left. For easy viewing, the middle 2 beads on each wire (the 5th and 6th bead) usually are of a different color from the other eight. Likewise, the left bead of the thousands wire (and the million wire, if present) may have a different color.
The Russian abacus was in use in shops and markets throughout the former Soviet Union, and its usage was taught in most schools until the 1990s. Even the 1874 invention of the mechanical calculator, the Odhner arithmometer, did not replace it in Russia. According to Yakov Perelman, some businessmen attempting to import calculators into the Russian Empire were known to leave in despair after watching a skilled abacus operator. Likewise, the mass production of Felix arithmometers from 1924 did not significantly reduce abacus use in the Soviet Union. The Russian abacus began to lose popularity only after the mass production of domestic microcalculators began in 1974.
The Russian abacus was brought to France around 1820 by mathematician Jean-Victor Poncelet, who had served in Napoleon's army and had been a prisoner of war in Russia. To Poncelet's French contemporaries, it was something new. Poncelet used it, not for any applied purpose, but as a teaching and demonstration aid. The Turks and the Armenian people used abacuses similar to the Russian schoty. It was named a coulba by the Turks and a choreb by the Armenians.
School abacus
Around the world, abacuses have been used in pre-schools and elementary schools as an aid in teaching the numeral system and arithmetic.
In Western countries, a bead frame similar to the Russian abacus but with straight wires and a vertical frame is common.
Each bead represents one unit (e.g. 74 can be represented by shifting all beads on 7 wires and 4 beads on the 8th wire, so numbers up to 100 may be represented). In such bead frames, a gap between the 5th and 6th wires, corresponding to a color change between the 5th and the 6th bead on each wire, aids the eye in grouping. Multiplication, e.g. 6 times 7, may be taught by shifting 7 beads on each of 6 wires.
The red-and-white abacus is used in contemporary primary schools for a wide range of number-related lessons. The twenty bead version, referred to by its Dutch name rekenrek ("calculating frame"), is often used, either on a string of beads or on a rigid framework.
Neurological analysis
Learning how to calculate with the abacus may improve capacity for mental calculation. Abacus-based mental calculation (AMC), which was derived from the abacus, is the act of performing calculations, including addition, subtraction, multiplication, and division, in the mind by manipulating an imagined abacus. It is a high-level cognitive skill that runs calculations with an effective algorithm. People doing long-term AMC training show higher numerical memory capacity and more effectively connected neural pathways, and are able to retrieve this memory to deal with complex processes. AMC involves both visuospatial and visuomotor processing that generate the visual abacus and move the imaginary beads. Since it only requires that the final position of beads be remembered, it takes less memory and less computation time.
Binary abacus
The binary abacus is used to explain how computers manipulate numbers. The abacus shows how numbers, letters, and signs can be stored in a binary system on a computer, or via ASCII. The device consists of beads on parallel wires arranged in three rows; each bead represents a switch which can be either "on" or "off".
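As a sketch of that idea (the physical layout of binary abacuses varies, so the eight-bead row below is an assumption made for illustration), one row of on/off beads can hold one ASCII character:

```python
# One row of eight two-state beads holds one ASCII code:
# '#' marks a bead switched "on" (1), '.' a bead switched "off" (0).

def char_to_beads(ch: str) -> str:
    bits = format(ord(ch), "08b")  # the character's ASCII code in binary
    return "".join("#" if b == "1" else "." for b in bits)

for ch in "ABC":
    print(ch, char_to_beads(ch))
# A .#.....#   (65 = 01000001)
# B .#....#.   (66 = 01000010)
# C .#....##   (67 = 01000011)
```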
Visually impaired users
An adapted abacus, invented by Tim Cranmer, and called a Cranmer abacus is commonly used by visually impaired users. A piece of soft fabric or rubber is placed behind the beads, keeping them in place while the users manipulate them. The device is then used to perform the mathematical functions of multiplication, division, addition, subtraction, square root, and cube root.
Although blind students have benefited from talking calculators, the abacus is often taught to these students in early grades. Blind students can also complete mathematical assignments using a braille-writer and Nemeth code (a type of braille code for mathematics) but large multiplication and long division problems are tedious. The abacus gives these students a tool to compute mathematical problems that equals the speed and mathematical knowledge required by their sighted peers using pencil and paper. Many blind people find this number machine a useful tool throughout life.
See also
Chinese Zhusuan
Chisanbop
Logical abacus
Napier's bones
Sand table
Slide rule
External links
Tutorials
Min Multimedia
History
Curiosities
Abacus in Various Number Systems at cut-the-knot
Java applet of Chinese, Japanese and Russian abaci
An atomic-scale abacus
Examples of Abaci
Aztec Abacus
Indian Abacus
Abacus Course
|
;Ancient Roman mathematics;Chinese mathematics;Egyptian mathematics;Greek mathematics;Indian mathematics;Japanese mathematics;Korean mathematics;Mathematical tools
|
https://en.wikipedia.org/wiki/Acid
|
An acid is a molecule or ion capable of either donating a proton (i.e. hydrogen ion, H+), known as a Brønsted–Lowry acid, or forming a covalent bond with an electron pair, known as a Lewis acid.
The first category of acids are the proton donors, or Brønsted–Lowry acids. In the special case of aqueous solutions, proton donors form the hydronium ion H3O+ and are known as Arrhenius acids. Brønsted and Lowry generalized the Arrhenius theory to include non-aqueous solvents. A Brønsted–Lowry or Arrhenius acid usually contains a hydrogen atom bonded to a chemical structure that is still energetically favorable after loss of H+.
Aqueous Arrhenius acids have characteristic properties that provide a practical description of an acid. Acids form aqueous solutions with a sour taste, can turn blue litmus red, and react with bases and certain metals (like calcium) to form salts. The word acid is derived from the Latin acidus, meaning 'sour'. An aqueous solution of an acid has a pH less than 7 and is colloquially also referred to as "acid" (as in "dissolved in acid"), while the strict definition refers only to the solute. A lower pH means a higher acidity, and thus a higher concentration of positive hydrogen ions in the solution. Chemicals or substances having the property of an acid are said to be acidic.
Common aqueous acids include hydrochloric acid (a solution of hydrogen chloride that is found in gastric acid in the stomach and activates digestive enzymes), acetic acid (vinegar is a dilute aqueous solution of this liquid), sulfuric acid (used in car batteries), and citric acid (found in citrus fruits). As these examples show, acids (in the colloquial sense) can be solutions or pure substances, and can be derived from acids (in the strict sense) that are solids, liquids, or gases. Strong acids and some concentrated weak acids are corrosive, but there are exceptions such as carboranes and boric acid.
The second category of acids are Lewis acids, which form a covalent bond with an electron pair. An example is boron trifluoride (BF3), whose boron atom has a vacant orbital that can form a covalent bond by sharing a lone pair of electrons on an atom in a base, for example the nitrogen atom in ammonia (NH3). Lewis considered this as a generalization of the Brønsted definition, so that an acid is a chemical species that accepts electron pairs either directly or by releasing protons (H+) into the solution, which then accept electron pairs. Hydrogen chloride, acetic acid, and most other Brønsted–Lowry acids cannot form a covalent bond with an electron pair, however, and are therefore not Lewis acids. Conversely, many Lewis acids are not Arrhenius or Brønsted–Lowry acids. In modern terminology, an acid is implicitly a Brønsted acid and not a Lewis acid, since chemists almost always refer to a Lewis acid explicitly as such.
Definitions and concepts
Modern definitions are concerned with the fundamental chemical reactions common to all acids.
Most acids encountered in everyday life are aqueous solutions, or can be dissolved in water, so the Arrhenius and Brønsted–Lowry definitions are the most relevant.
The Brønsted–Lowry definition is the most widely used definition; unless otherwise specified, acid–base reactions are assumed to involve the transfer of a proton (H+) from an acid to a base.
Hydronium ions are acids according to all three definitions. Although alcohols and amines can be Brønsted–Lowry acids, they can also function as Lewis bases due to the lone pairs of electrons on their oxygen and nitrogen atoms.
Arrhenius acids
In 1884, Svante Arrhenius attributed the properties of acidity to hydrogen ions (H+), later described as protons or hydrons. An Arrhenius acid is a substance that, when added to water, increases the concentration of H+ ions in the water. Chemists often write H+(aq) and refer to the hydrogen ion when describing acid–base reactions, but the free hydrogen nucleus, a proton, does not exist alone in water; it exists as the hydronium ion (H3O+) or other solvated forms (H5O2+, H9O4+). Thus, an Arrhenius acid can also be described as a substance that increases the concentration of hydronium ions when added to water. Examples include molecular substances such as hydrogen chloride and acetic acid.
An Arrhenius base, on the other hand, is a substance that increases the concentration of hydroxide (OH−) ions when dissolved in water. This decreases the concentration of hydronium because the ions react to form H2O molecules:
H3O+(aq) + OH−(aq) ⇌ H2O(liq) + H2O(liq)
Due to this equilibrium, any increase in the concentration of hydronium is accompanied by a decrease in the concentration of hydroxide. Thus, an Arrhenius acid could also be said to be one that decreases hydroxide concentration, while an Arrhenius base increases it.
In an acidic solution, the concentration of hydronium ions is greater than 10−7 moles per liter. Since pH is defined as the negative logarithm of the concentration of hydronium ions, acidic solutions thus have a pH of less than 7.
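The relationship can be checked numerically; the short Python sketch below (illustrative only) computes pH from a hydronium concentration:

```python
import math

def pH(hydronium_molarity: float) -> float:
    """pH is the negative base-10 logarithm of the hydronium concentration."""
    return -math.log10(hydronium_molarity)

assert abs(pH(1.0e-4) - 4.0) < 1e-9   # acidic: 1.0e-4 mol/L > 1.0e-7 mol/L
assert abs(pH(1.0e-7) - 7.0) < 1e-9   # neutral water at 25 °C
```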
Brønsted–Lowry acids
While the Arrhenius concept is useful for describing many reactions, it is also quite limited in its scope. In 1923, chemists Johannes Nicolaus Brønsted and Thomas Martin Lowry independently recognized that acid–base reactions involve the transfer of a proton. A Brønsted–Lowry acid (or simply Brønsted acid) is a species that donates a proton to a Brønsted–Lowry base. Brønsted–Lowry acid–base theory has several advantages over Arrhenius theory. Consider the following reactions of acetic acid (CH3COOH), the organic acid that gives vinegar its characteristic taste:

CH3COOH + H2O ⇌ CH3COO− + H3O+

CH3COOH + NH3 ⇌ CH3COO− + NH4+
Both theories easily describe the first reaction: CH3COOH acts as an Arrhenius acid because it acts as a source of H3O+ when dissolved in water, and it acts as a Brønsted acid by donating a proton to water. In the second example CH3COOH undergoes the same transformation, in this case donating a proton to ammonia (NH3), but does not relate to the Arrhenius definition of an acid because the reaction does not produce hydronium. Nevertheless, CH3COOH is both an Arrhenius and a Brønsted–Lowry acid.
Brønsted–Lowry theory can be used to describe reactions of molecular compounds in nonaqueous solution or the gas phase. Hydrogen chloride (HCl) and ammonia combine under several different conditions to form ammonium chloride, NH4Cl. In aqueous solution HCl behaves as hydrochloric acid and exists as hydronium and chloride ions. The following reactions illustrate the limitations of Arrhenius's definition:
H3O+(aq) + Cl−(aq) + NH3 → Cl−(aq) + NH4+(aq) + H2O
HCl(benzene) + NH3(benzene) → NH4Cl(s)
HCl(g) + NH3(g) → NH4Cl(s)
As with the acetic acid reactions, both definitions work for the first example, where water is the solvent and hydronium ion is formed by the HCl solute. The next two reactions do not involve the formation of ions but are still proton-transfer reactions. In the second reaction hydrogen chloride and ammonia (dissolved in benzene) react to form solid ammonium chloride in a benzene solvent and in the third gaseous HCl and NH3 combine to form the solid.
Lewis acids
A third, only marginally related concept was proposed in 1923 by Gilbert N. Lewis, which includes reactions with acid–base characteristics that do not involve a proton transfer. A Lewis acid is a species that accepts a pair of electrons from another species; in other words, it is an electron pair acceptor. Brønsted acid–base reactions are proton transfer reactions while Lewis acid–base reactions are electron pair transfers. Many Lewis acids are not Brønsted–Lowry acids. Contrast how the following reactions are described in terms of acid–base chemistry:

F− + BF3 → BF4−

NH3 + H+ → NH4+
In the first reaction a fluoride ion, F−, gives up an electron pair to boron trifluoride to form the product tetrafluoroborate. Fluoride "loses" a pair of valence electrons because the electrons shared in the B—F bond are located in the region of space between the two atomic nuclei and are therefore more distant from the fluoride nucleus than they are in the lone fluoride ion. BF3 is a Lewis acid because it accepts the electron pair from fluoride. This reaction cannot be described in terms of Brønsted theory because there is no proton transfer.
The second reaction can be described using either theory. A proton is transferred from an unspecified Brønsted acid to ammonia, a Brønsted base; alternatively, ammonia acts as a Lewis base and transfers a lone pair of electrons to form a bond with a hydrogen ion. The species that gains the electron pair is the Lewis acid; for example, the oxygen atom in H3O+ gains a pair of electrons when one of the H—O bonds is broken and the electrons shared in the bond become localized on oxygen.
Depending on the context, a Lewis acid may also be described as an oxidizer or an electrophile. Organic Brønsted acids, such as acetic, citric, or oxalic acid, are not Lewis acids. They dissociate in water to produce a Lewis acid, H+, but at the same time, they also yield an equal amount of a Lewis base (acetate, citrate, or oxalate, respectively, for the acids mentioned). This article deals mostly with Brønsted acids rather than Lewis acids.
Dissociation and equilibrium
Reactions of acids are often generalized in the form HA ⇌ H+ + A−, where HA represents the acid and A− is the conjugate base. This reaction is referred to as protolysis. The protonated form (HA) of an acid is also sometimes referred to as the free acid.
Acid–base conjugate pairs differ by one proton, and can be interconverted by the addition or removal of a proton (protonation and deprotonation, respectively). The acid can be the charged species and the conjugate base can be neutral, in which case the generalized reaction scheme could be written as HA+ ⇌ H+ + A. In solution there exists an equilibrium between the acid and its conjugate base. The equilibrium constant K is an expression of the equilibrium concentrations of the molecules or the ions in solution. Brackets indicate concentration, such that [H2O] means the concentration of H2O. The acid dissociation constant Ka is generally used in the context of acid–base reactions. The numerical value of Ka is equal to the product of the concentrations of the products divided by the concentration of the reactant, where the reactant is the acid (HA) and the products are the conjugate base and H+: Ka = [A−][H+] / [HA].
The stronger of two acids will have a higher Ka than the weaker acid; the ratio of hydrogen ions to acid will be higher for the stronger acid as the stronger acid has a greater tendency to lose its proton. Because the range of possible values for Ka spans many orders of magnitude, a more manageable constant, pKa is more frequently used, where pKa = −log10 Ka. Stronger acids have a smaller pKa than weaker acids. Experimentally determined pKa at 25 °C in aqueous solution are often quoted in textbooks and reference material.
Nomenclature
Arrhenius acids are named according to their anions. In the classical naming system, the ionic suffix is dropped and replaced with a new suffix, according to the table following. The prefix "hydro-" is used when the acid is made up of just hydrogen and one other element. For example, HCl has chloride as its anion, so the hydro- prefix is used, and the -ide suffix makes the name take the form hydrochloric acid.
Classical naming system:
In the IUPAC naming system, "aqueous" is simply added to the name of the ionic compound. Thus, for hydrogen chloride, as an acid solution, the IUPAC name is aqueous hydrogen chloride.
Acid strength
The strength of an acid refers to its ability or tendency to lose a proton. A strong acid is one that completely dissociates in water; in other words, one mole of a strong acid HA dissolves in water yielding one mole of H+ and one mole of the conjugate base, A−, and none of the protonated acid HA. In contrast, a weak acid only partially dissociates and at equilibrium both the acid and the conjugate base are in solution. Examples of strong acids are hydrochloric acid (HCl), hydroiodic acid (HI), hydrobromic acid (HBr), perchloric acid (HClO4), nitric acid (HNO3) and sulfuric acid (H2SO4). In water each of these essentially ionizes 100%. The stronger an acid is, the more easily it loses a proton, H+. Two key factors that contribute to the ease of deprotonation are the polarity of the H—A bond and the size of atom A, which determines the strength of the H—A bond. Acid strengths are also often discussed in terms of the stability of the conjugate base.
Stronger acids have a larger acid dissociation constant, Ka and a lower pKa than weaker acids.
Sulfonic acids, which are organic oxyacids, are a class of strong acids. A common example is toluenesulfonic acid (tosylic acid). Unlike sulfuric acid itself, sulfonic acids can be solids. In fact, polystyrene functionalized into polystyrene sulfonate is a solid strongly acidic plastic that is filterable.
Superacids are acids stronger than 100% sulfuric acid. Examples of superacids are fluoroantimonic acid, magic acid and perchloric acid. The strongest known acid is the helium hydride ion; its conjugate base, helium, has a proton affinity of only 177.8 kJ/mol. Superacids can permanently protonate water to give ionic, crystalline hydronium "salts". They can also quantitatively stabilize carbocations.
While Ka measures the strength of an acid compound, the strength of an aqueous acid solution is measured by pH, which is an indication of the concentration of hydronium in the solution. The pH of a simple solution of an acid compound in water is determined by the dilution of the compound and the compound's Ka.
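That dependence can be made concrete. For a weak monoprotic acid of formal concentration C, the equilibrium condition Ka = [H+][A−]/[HA] = x^2/(C − x), with x = [H+], gives a quadratic in x. The Python sketch below solves it directly; it neglects water's own autoionization (a reasonable assumption except for extremely dilute or extremely weak acids), and the acetic acid Ka used in the example (about 1.8 × 10−5) is a typical textbook value.

```python
import math

def weak_acid_pH(Ka: float, C: float) -> float:
    """pH of a weak monoprotic acid: positive root of x**2 + Ka*x - Ka*C = 0."""
    x = (-Ka + math.sqrt(Ka * Ka + 4.0 * Ka * C)) / 2.0   # x = [H+]
    return -math.log10(x)

# 0.10 M acetic acid (Ka ~ 1.8e-5, pKa ~ 4.74) comes out near pH 2.9:
print(round(weak_acid_pH(1.8e-5, 0.10), 2))  # 2.88
```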
Lewis acid strength in non-aqueous solutions
Lewis acids have been classified in the ECW model and it has been shown that there is no one order of acid strengths. The relative acceptor strength of Lewis acids toward a series of bases, versus other Lewis acids, can be illustrated by C-B plots. It has been shown that to define the order of Lewis acid strength at least two properties must be considered. For Pearson's qualitative HSAB theory the two properties are hardness and strength while for Drago's quantitative ECW model the two properties are electrostatic and covalent.
Chemical characteristics
Monoprotic acids
Monoprotic acids, also known as monobasic acids, are those acids that are able to donate one proton per molecule during the process of dissociation (sometimes called ionization) as shown below (symbolized by HA):
HA(aq) ⇌ H+(aq) + A−(aq)        Ka
Common examples of monoprotic acids in mineral acids include hydrochloric acid (HCl) and nitric acid (HNO3). On the other hand, for organic acids the term mainly indicates the presence of one carboxylic acid group and sometimes these acids are known as monocarboxylic acid. Examples in organic acids include formic acid (HCOOH), acetic acid (CH3COOH) and benzoic acid (C6H5COOH).
Polyprotic acids
Polyprotic acids, also known as polybasic acids, are able to donate more than one proton per acid molecule, in contrast to monoprotic acids that only donate one proton per molecule. Specific types of polyprotic acids have more specific names, such as diprotic (or dibasic) acid (two potential protons to donate), and triprotic (or tribasic) acid (three potential protons to donate). Some macromolecules such as proteins and nucleic acids can have a very large number of acidic protons.
A diprotic acid (here symbolized by H2A) can undergo one or two dissociations depending on the pH. Each dissociation has its own dissociation constant, Ka1 and Ka2.
H2A(aq) ⇌ H+(aq) + HA−(aq)        Ka1 = [H+][HA−] / [H2A]
HA−(aq) ⇌ H+(aq) + A2−(aq)        Ka2 = [H+][A2−] / [HA−]
The first dissociation constant is typically greater than the second (i.e., Ka1 > Ka2). For example, sulfuric acid (H2SO4) can donate one proton to form the bisulfate anion (HSO4−), for which Ka1 is very large; it can then donate a second proton to form the sulfate anion (SO42−), for which Ka2 is of intermediate magnitude. The large Ka1 for the first dissociation makes sulfuric acid a strong acid. In a similar manner, the weak, unstable carbonic acid can lose one proton to form the bicarbonate anion (HCO3−) and lose a second to form the carbonate anion (CO32−). Both Ka values are small, but Ka1 > Ka2.
A triprotic acid (H3A) can undergo one, two, or three dissociations and has three dissociation constants, where Ka1 > Ka2 > Ka3.
H3A(aq) ⇌ H+(aq) + H2A−(aq)        Ka1 = [H+][H2A−] / [H3A]
H2A−(aq) ⇌ H+(aq) + HA2−(aq)        Ka2 = [H+][HA2−] / [H2A−]
HA2−(aq) ⇌ H+(aq) + A3−(aq)        Ka3 = [H+][A3−] / [HA2−]
An inorganic example of a triprotic acid is orthophosphoric acid (H3PO4), usually just called phosphoric acid. All three protons can be successively lost to yield H2PO4−, then HPO42−, and finally PO43−, the orthophosphate ion, usually just called phosphate. Even though the positions of the three protons on the original phosphoric acid molecule are equivalent, the successive Ka values differ since it is energetically less favorable to lose a proton if the conjugate base is more negatively charged. An organic example of a triprotic acid is citric acid, which can successively lose three protons to finally form the citrate ion.
Although the subsequent loss of each hydrogen ion is less favorable, all of the conjugate bases are present in solution. The fractional concentration, α (alpha), for each species can be calculated. For example, a generic diprotic acid will generate 3 species in solution: H2A, HA−, and A2−. The fractional concentrations can be calculated as below when given either the pH (which can be converted to the [H+]) or the concentrations of the acid with all its conjugate bases:

α(H2A) = [H+]² / ([H+]² + [H+]K1 + K1K2)
α(HA−) = [H+]K1 / ([H+]² + [H+]K1 + K1K2)
α(A2−) = K1K2 / ([H+]² + [H+]K1 + K1K2)
A plot of these fractional concentrations against pH, for given K1 and K2, is known as a Bjerrum plot. A pattern is observed in the above equations and can be expanded to the general n-protic acid that has been deprotonated i times:

α(H(n−i)A(i−)) = ([H+]^(n−i) · K0K1⋯Ki) / (Σ(N=0 to n) [H+]^(n−N) · K0K1⋯KN)
where K0 = 1 and the other K-terms are the dissociation constants for the acid.
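A direct Python transcription of this general formula, offered as a minimal sketch under the stated convention K0 = 1, makes the pattern explicit:

```python
def fractions(h: float, ks: list[float]) -> list[float]:
    """Fractional concentrations alpha_i for an n-protic acid.

    h  -- hydronium concentration [H+] in mol/L
    ks -- successive dissociation constants [Ka1, ..., Kan]

    Returns [alpha_0, ..., alpha_n], where alpha_i is the fraction of
    the acid that has lost i protons (with K0 = 1, as above).
    """
    n = len(ks)
    terms = []
    prod = 1.0  # running product K0*K1*...*Ki
    for i in range(n + 1):
        if i > 0:
            prod *= ks[i - 1]
        terms.append(h ** (n - i) * prod)
    total = sum(terms)
    return [t / total for t in terms]

# Diprotic example with assumed Ka1 = 1e-3 and Ka2 = 1e-7 at pH 5
# (h = 1e-5): HA- dominates, as expected between pKa1 and pKa2.
print([round(a, 3) for a in fractions(1e-5, [1e-3, 1e-7])])
```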
Neutralization
Neutralization is the reaction between an acid and a base, producing a salt and water; for example, hydrochloric acid and sodium hydroxide form sodium chloride and water:
HCl(aq) + NaOH(aq) → H2O(l) + NaCl(aq)
Neutralization is the basis of titration, where a pH indicator shows the equivalence point when an equivalent number of moles of base has been added to the acid. It is often wrongly assumed that neutralization must result in a solution with pH 7.0; this is only the case when the acid and base have similar strengths.
Neutralization with a base weaker than the acid results in a weakly acidic salt. An example is the weakly acidic ammonium chloride, which is produced from the strong acid hydrogen chloride and the weak base ammonia. Conversely, neutralizing a weak acid with a strong base gives a weakly basic salt (e.g., sodium fluoride from hydrogen fluoride and sodium hydroxide).
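The pH of such a salt solution can be estimated from the hydrolysis of the conjugate base, for which Kb = Kw/Ka. Below is a minimal Python sketch assuming an illustrative Ka for hydrofluoric acid of 6.3e-4 and the usual approximation [OH−] ≈ √(Kb·c):

```python
import math

def salt_of_weak_acid_ph(c: float, ka: float, kw: float = 1e-14) -> float:
    """Approximate pH of a solution of the sodium salt of a weak acid.

    The conjugate base hydrolyzes: A- + H2O <=> HA + OH-, with
    Kb = Kw/Ka. Uses [OH-] ~ sqrt(Kb*c), valid when hydrolysis is slight.
    """
    kb = kw / ka
    oh = math.sqrt(kb * c)
    return 14 - (-math.log10(oh))

# 0.10 M sodium fluoride, taking Ka(HF) = 6.3e-4 as an assumed value:
# gives a mildly basic pH of about 8.1.
print(round(salt_of_weak_acid_ph(0.10, 6.3e-4), 1))
```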
Weak acid–weak base equilibrium
In order for a protonated acid to lose a proton, the pH of the system must rise above the pKa of the acid. The decreased concentration of H+ in that basic solution shifts the equilibrium towards the conjugate base form (the deprotonated form of the acid). In lower-pH (more acidic) solutions, there is a high enough H+ concentration in the solution to cause the acid to remain in its protonated form.
Solutions of weak acids and salts of their conjugate bases form buffer solutions.
Titration
To determine the concentration of an acid in an aqueous solution, an acid–base titration is commonly performed. A strong base solution of known concentration, usually NaOH or KOH, is added to neutralize the acid solution, with the color change of an indicator signalling how much base has reacted. The titration curve of an acid titrated by a base has two axes: the base volume on the x-axis and the solution's pH value on the y-axis. The pH of the solution always rises as base is added.
Example: Diprotic acid
For each diprotic acid titration curve, from left to right, there are two midpoints, two equivalence points, and two buffer regions.
Equivalence points
Due to the successive dissociation processes, there are two equivalence points in the titration curve of a diprotic acid. The first equivalence point occurs when all the hydrogen ions from the first ionization have been titrated; in other words, the amount of OH− added equals the original amount of H2A. The second equivalence point occurs when all hydrogen ions have been titrated, so the amount of OH− added equals twice the amount of H2A. For a weak diprotic acid titrated by a strong base, the second equivalence point must occur at pH above 7 due to the hydrolysis of the resulting salts in the solution. At either equivalence point, adding a drop of base causes the steepest rise of the pH value in the system.
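As a quick check of this stoichiometry, the following Python sketch, with made-up illustrative concentrations, computes the base volumes at the two equivalence points:

```python
def equivalence_volumes(c_acid: float, v_acid: float, c_base: float):
    """Base volumes (L) at the two equivalence points when a diprotic
    acid H2A is titrated with a strong monoprotic base such as NaOH.

    First equivalence: moles OH- = moles H2A; second: twice that.
    """
    moles_acid = c_acid * v_acid
    v1 = moles_acid / c_base
    return v1, 2 * v1

# 25.0 mL of 0.100 M H2A titrated with 0.200 M NaOH (illustrative numbers):
v1, v2 = equivalence_volumes(0.100, 0.0250, 0.200)
print(f"first: {v1*1000:.1f} mL, second: {v2*1000:.1f} mL")  # 12.5 / 25.0 mL
```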
Buffer regions and midpoints
A titration curve for a diprotic acid contains two midpoints where pH = pKa. Since there are two different Ka values, the first midpoint occurs at pH = pKa1 and the second at pH = pKa2. Each segment of the curve that contains a midpoint at its center is called a buffer region. Because a buffer region contains comparable amounts of the acid and its conjugate base, it resists pH changes as base is added, until the next equivalence point.
Applications of acids
In industry
Acids are fundamental reagents in almost all processes in modern industry. Sulfuric acid, a diprotic acid, is the most widely used acid in industry and is also the most-produced industrial chemical in the world. It is mainly used in producing fertilizer, detergent, batteries and dyes, as well as in processing many products, for example to remove impurities. As of 2011, annual world production of sulfuric acid was around 200 million tonnes. For example, phosphate minerals react with sulfuric acid to produce phosphoric acid for the production of phosphate fertilizers, and zinc is produced by dissolving zinc oxide in sulfuric acid, purifying the solution and electrowinning.
In the chemical industry, acids react in neutralization reactions to produce salts. For example, nitric acid reacts with ammonia to produce ammonium nitrate, a fertilizer. Additionally, carboxylic acids can be esterified with alcohols, to produce esters.
Acids are often used to remove rust and other corrosion from metals in a process known as pickling. They may be used as an electrolyte in a wet cell battery, such as sulfuric acid in a car battery.
In food
Tartaric acid is an important component of some commonly used foods like unripened mangoes and tamarind. Natural fruits and vegetables also contain acids. Citric acid is present in oranges, lemon and other citrus fruits. Oxalic acid is present in tomatoes, spinach, and especially in carambola and rhubarb; rhubarb leaves and unripe carambolas are toxic because of high concentrations of oxalic acid. Ascorbic acid (Vitamin C) is an essential vitamin for the human body and is present in such foods as amla (Indian gooseberry), lemon, citrus fruits, and guava.
Many acids can be found in various kinds of food as additives, as they alter their taste and serve as preservatives. Phosphoric acid, for example, is a component of cola drinks. Acetic acid is used in day-to-day life as vinegar. Citric acid is used as a preservative in sauces and pickles.
Carbonic acid is one of the most common acid additives in soft drinks. During the manufacturing process, CO2 is usually pressurized to dissolve in these drinks to generate carbonic acid. Carbonic acid is very unstable and tends to decompose into water and CO2 at room temperature and pressure. Therefore, when bottles or cans of these kinds of soft drinks are opened, the soft drinks fizz and effervesce as CO2 bubbles come out.
Certain acids are used as drugs. Acetylsalicylic acid (aspirin) is used as a painkiller and for bringing down fevers.
In human bodies
Acids play important roles in the human body. The hydrochloric acid present in the stomach aids digestion by breaking down large and complex food molecules. Amino acids are required for the synthesis of proteins needed for the growth and repair of body tissues. Fatty acids are likewise required for the growth and repair of body tissues. Nucleic acids are important for the manufacture of DNA and RNA and for transmitting traits to offspring through genes. Carbonic acid is important for the maintenance of pH equilibrium in the body.
Human bodies contain a variety of organic and inorganic compounds; among them, carboxylic acids play an essential role in many biological processes. Many of these acids are amino acids, which mainly serve as building blocks for the synthesis of proteins. Other weak acids serve as buffers with their conjugate bases to keep the body's pH from undergoing large-scale changes that would be harmful to cells. Dicarboxylic acids also participate in the synthesis of various biologically important compounds in human bodies.
Acid catalysis
Acids are used as catalysts in industrial and organic chemistry; for example, sulfuric acid is used in very large quantities in the alkylation process to produce gasoline. Some acids, such as sulfuric, phosphoric, and hydrochloric acids, also effect dehydration and condensation reactions. In biochemistry, many enzymes employ acid catalysis.
Biological occurrence
Many biologically important molecules are acids. Nucleic acids, which contain acidic phosphate groups, include DNA and RNA. Nucleic acids contain the genetic code that determines many of an organism's characteristics, which is passed from parents to offspring. DNA contains the chemical blueprint for the synthesis of proteins, which are made up of amino acid subunits. Cell membranes contain fatty acid esters such as phospholipids.
An α-amino acid has a central carbon (the α or alpha carbon) that is covalently bonded to a carboxyl group (thus they are carboxylic acids), an amino group, a hydrogen atom and a variable group. The variable group, also called the R group or side chain, determines the identity and many of the properties of a specific amino acid. In glycine, the simplest amino acid, the R group is a hydrogen atom, but in all other amino acids it contains one or more carbon atoms bonded to hydrogens, and may contain other elements such as sulfur, oxygen or nitrogen. With the exception of glycine, naturally occurring amino acids are chiral and almost invariably occur in the L-configuration. Peptidoglycan, found in some bacterial cell walls, contains some D-amino acids. At physiological pH, typically around 7, free amino acids exist in a charged form, where the acidic carboxyl group (-COOH) loses a proton (-COO−) and the basic amine group (-NH2) gains a proton (-NH3+). The entire molecule has a net neutral charge and is a zwitterion, with the exception of amino acids with basic or acidic side chains. Aspartic acid, for example, possesses one protonated amine and two deprotonated carboxyl groups, for a net charge of −1 at physiological pH.
Fatty acids and fatty acid derivatives are another group of carboxylic acids that play a significant role in biology. These contain long hydrocarbon chains and a carboxylic acid group on one end. The cell membrane of nearly all organisms is primarily made up of a phospholipid bilayer, a micelle of hydrophobic fatty acid esters with polar, hydrophilic phosphate "head" groups. Membranes contain additional components, some of which can participate in acid–base reactions.
In humans and many other animals, hydrochloric acid is a part of the gastric acid secreted within the stomach to help hydrolyze proteins and polysaccharides, as well as converting the inactive pro-enzyme, pepsinogen into the enzyme, pepsin. Some organisms produce acids for defense; for example, ants produce formic acid.
Acid–base equilibrium plays a critical role in regulating mammalian breathing. Oxygen gas (O2) drives cellular respiration, the process by which animals release the chemical potential energy stored in food, producing carbon dioxide (CO2) as a byproduct. Oxygen and carbon dioxide are exchanged in the lungs, and the body responds to changing energy demands by adjusting the rate of ventilation. For example, during periods of exertion the body rapidly breaks down stored carbohydrates and fat, releasing CO2 into the blood stream. In aqueous solutions such as blood, CO2 exists in equilibrium with carbonic acid and the bicarbonate ion:

CO2 + H2O ⇌ H2CO3 ⇌ H+ + HCO3−
It is the decrease in pH that signals the brain to breathe faster and deeper, expelling the excess CO2 and resupplying the cells with O2.
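The quantitative form of this relationship is the Henderson–Hasselbalch equation for the bicarbonate buffer, pH = 6.1 + log10([HCO3−] / (0.03 · pCO2)). A minimal Python sketch, using the conventional clinical constants (pKa′ = 6.1 and a CO2 solubility of 0.03 mmol/(L·mmHg), quoted here as textbook assumptions), shows how retained CO2 lowers blood pH:

```python
import math

def blood_ph(hco3: float, pco2: float) -> float:
    """Henderson-Hasselbalch estimate of blood pH.

    hco3 -- bicarbonate concentration in mmol/L
    pco2 -- partial pressure of CO2 in mmHg
    Uses pKa' = 6.1 and CO2 solubility 0.03 mmol/(L*mmHg).
    """
    return 6.1 + math.log10(hco3 / (0.03 * pco2))

# Typical arterial values (~24 mmol/L HCO3-, ~40 mmHg pCO2) give pH ~ 7.40;
# retaining CO2 (say 60 mmHg) pushes the pH down toward ~7.22,
# triggering faster, deeper breathing.
print(round(blood_ph(24, 40), 2), round(blood_ph(24, 60), 2))
```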
Cell membranes are generally impermeable to charged or large, polar molecules because of the lipophilic fatty acyl chains comprising their interior. Many biologically important molecules, including a number of pharmaceutical agents, are organic weak acids that can cross the membrane in their protonated, uncharged form but not in their charged form (i.e., as the conjugate base). For this reason the activity of many drugs can be enhanced or inhibited by the use of antacids or acidic foods. The charged form, however, is often more soluble in blood and cytosol, both aqueous environments. When the extracellular environment is more acidic than the neutral pH within the cell, certain acids will exist in their neutral form and will be membrane soluble, allowing them to cross the phospholipid bilayer. Acids that lose a proton at the intracellular pH will exist in their soluble, charged form and are thus able to diffuse through the cytosol to their target. Ibuprofen, aspirin and penicillin are examples of drugs that are weak acids.
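As a rough illustration of this pH-partition effect, the following Python sketch uses the Henderson–Hasselbalch relation to estimate the neutral, membrane-permeable fraction of a weak-acid drug; the aspirin pKa of about 3.5 is an assumed illustrative value:

```python
def fraction_neutral(ph: float, pka: float) -> float:
    """Fraction of a weak acid HA remaining in its neutral
    (membrane-permeable) protonated form at a given pH, from the
    Henderson-Hasselbalch relation: [A-]/[HA] = 10**(pH - pKa)."""
    return 1 / (1 + 10 ** (ph - pka))

# Aspirin (pKa assumed ~3.5): mostly neutral in the acidic stomach
# (pH ~2) but almost fully ionized in blood (pH 7.4).
print(f"stomach: {fraction_neutral(2.0, 3.5):.2f}")   # ~0.97
print(f"blood:   {fraction_neutral(7.4, 3.5):.6f}")   # ~0.000126
```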
Common acids
Mineral acids (inorganic acids)
Hydrogen halides and their solutions: hydrofluoric acid (HF), hydrochloric acid (HCl), hydrobromic acid (HBr), hydroiodic acid (HI)
Halogen oxoacids: hypochlorous acid (HClO), chlorous acid (HClO2), chloric acid (HClO3), perchloric acid (HClO4), and corresponding analogs for bromine and iodine
Hypofluorous acid (HFO), the only known oxoacid for fluorine.
Sulfuric acid (H2SO4)
Fluorosulfuric acid (HSO3F)
Nitric acid (HNO3)
Phosphoric acid (H3PO4)
Fluoroantimonic acid (HSbF6)
Fluoroboric acid (HBF4)
Hexafluorophosphoric acid (HPF6)
Chromic acid (H2CrO4)
Boric acid (H3BO3)
Sulfonic acids
A sulfonic acid has the general formula RS(=O)2–OH, where R is an organic radical.
Methanesulfonic acid (or mesylic acid, CH3SO3H)
Ethanesulfonic acid (or esylic acid, CH3CH2SO3H)
Benzenesulfonic acid (or besylic acid, C6H5SO3H)
p-Toluenesulfonic acid (or tosylic acid, CH3C6H4SO3H)
Trifluoromethanesulfonic acid (or triflic acid, CF3SO3H)
Polystyrene sulfonic acid (sulfonated polystyrene, [CH2CH(C6H4)SO3H]n)
Carboxylic acids
A carboxylic acid has the general formula R-C(O)OH, where R is an organic radical. The carboxyl group -C(O)OH contains a carbonyl group, C=O, and a hydroxyl group, O-H.
Acetic acid (CH3COOH)
Citric acid (C6H8O7)
Formic acid (HCOOH)
Gluconic acid HOCH2-(CHOH)4-COOH
Lactic acid (CH3-CHOH-COOH)
Oxalic acid (HOOC-COOH)
Tartaric acid (HOOC-CHOH-CHOH-COOH)
Halogenated carboxylic acids
Halogenation at alpha position increases acid strength, so that the following acids are all stronger than acetic acid.
Fluoroacetic acid
Trifluoroacetic acid
Chloroacetic acid
Dichloroacetic acid
Trichloroacetic acid
Vinylogous carboxylic acids
Normal carboxylic acids are the direct union of a carbonyl group and a hydroxyl group. In vinylogous carboxylic acids, a carbon-carbon double bond separates the carbonyl and hydroxyl groups.
Ascorbic acid
Nucleic acids
Deoxyribonucleic acid (DNA)
Ribonucleic acid (RNA)
External links
Curtipot: Acid–Base equilibria diagrams, pH calculation and titration curves simulation and analysis – freeware
|
;Acid–base chemistry
|
https://en.wikipedia.org/wiki/Bitumen
|
Bitumen is an immensely viscous constituent of petroleum. Depending on its exact composition, it can be a sticky, black liquid or an apparently solid mass that behaves as a liquid over very large time scales. In American English, the material is commonly referred to as asphalt or tar. Whether found in natural deposits or refined from petroleum, the substance is classed as a pitch. Prior to the 20th century, the term asphaltum was in general use. The word derives from the Ancient Greek ásphaltos, which referred to natural bitumen or pitch. The largest natural deposit of bitumen in the world is the Pitch Lake of southwest Trinidad, which is estimated to contain 10 million tons.
About 70% of annual bitumen production is destined for road construction, its primary use. In this application, bitumen is used to bind aggregate particles like gravel and forms a substance referred to as asphalt concrete, which is colloquially termed asphalt. Its other main uses lie in bituminous waterproofing products, such as roofing felt and roof sealant.
In material sciences and engineering, the terms asphalt and bitumen are often used interchangeably and refer both to natural and manufactured forms of the substance, although there is regional variation as to which term is most common. Worldwide, geologists tend to favor the term bitumen for the naturally occurring material. For the manufactured material, which is a refined residue from the distillation process of selected crude oils, bitumen is the prevalent term in much of the world; however, in American English, asphalt is more commonly used. To help avoid confusion, the terms "liquid asphalt", "asphalt binder", or "asphalt cement" are used in the U.S. to distinguish it from asphalt concrete. Colloquially, various forms of bitumen are sometimes referred to as "tar", as in the name of the La Brea Tar Pits.
Naturally occurring bitumen is sometimes specified by the term crude bitumen; its viscosity is similar to that of cold molasses. The material obtained from the fractional distillation of crude oil is sometimes referred to as "refined bitumen". The Canadian province of Alberta has most of the world's reserves of natural bitumen in the Athabasca oil sands, which cover an area larger than England.
Terminology
Etymology
The Latin word bitumen traces to the Proto-Indo-European root *gʷet- ("pitch").
The word "asphalt" is derived from the late Middle English, in turn from French asphalte, based on Late Latin asphaltum, which is the latinisation of the Greek (ásphaltos), a word meaning "asphalt/bitumen/pitch", which perhaps derives from , "not, without", i.e. the alpha privative, and (sphallein), "to cause to fall, baffle, (in passive) err, (in passive) be balked of".
The first use of asphalt by the ancients was as a cement to secure or join various objects, and it thus seems likely that the name itself was expressive of this application. Specifically, Herodotus mentioned that bitumen was brought to Babylon to build its gigantic fortification wall.
From the Greek, the word passed into late Latin, and thence into French (asphalte) and English ("asphaltum" and "asphalt"). In French, the term asphalte is used for naturally occurring asphalt-soaked limestone deposits, and for specialised manufactured products with fewer voids or greater bitumen content than the "asphaltic concrete" used to pave roads.
Modern terminology
Bitumen mixed with clay was usually called "asphaltum", but the term is less commonly used today.
In American English, "asphalt" is equivalent to the British "bitumen". However, "asphalt" is also commonly used as a shortened form of "asphalt concrete" (therefore equivalent to the British "asphalt" or "tarmac").
In Canadian English, the word "bitumen" is used to refer to the vast Canadian deposits of extremely heavy crude oil, while "asphalt" is used for the oil refinery product. Diluted bitumen (diluted with naphtha to make it flow in pipelines) is known as "dilbit" in the Canadian petroleum industry, while bitumen "upgraded" to synthetic crude oil is known as "syncrude", and syncrude blended with bitumen is called "synbit".
"Bitumen" is still the preferred geological term for naturally occurring deposits of the solid or semi-solid form of petroleum. "Bituminous rock" is a form of sandstone impregnated with bitumen. The oil sands of Alberta, Canada are a similar material.
Neither "asphalt" nor "bitumen" should be confused with tar or coal tar. Tar is the thick liquid product of the dry distillation and pyrolysis of organic hydrocarbons, primarily sourced from vegetation masses, whether fossilized (as with coal) or freshly harvested. Most bitumen, by contrast, was formed naturally when vast quantities of organic animal material were deposited by water and buried hundreds of metres deep at the diagenetic point, where the disorganized fatty hydrocarbon molecules joined into long chains in the absence of oxygen. Bitumen occurs as a solid or highly viscous liquid. It may even be mixed in with coal deposits. Bitumen, and coal via the Bergius process, can be refined into petrols such as gasoline, and bitumen may be distilled into tar, not the other way around.
Composition
Normal composition
The components of bitumen include four main classes of compounds:
Naphthene aromatics (naphthalene), consisting of partially hydrogenated polycyclic aromatic compounds
Polar aromatics, consisting of high molecular weight phenols and carboxylic acids produced by partial oxidation of the material
Saturated hydrocarbons; the percentage of saturated compounds in asphalt correlates with its softening point
Asphaltenes, consisting of high molecular weight phenols and heterocyclic compounds
Bitumen typically contains, elementally, 80% by weight of carbon, 10% hydrogen and up to 6% sulfur, and, molecularly, between 5 and 25% by weight of asphaltenes dispersed in 90% to 65% maltenes. Most natural bitumens also contain organosulfur compounds; nickel and vanadium are found at <10 parts per million, as is typical of some petroleum. The substance is soluble in carbon disulfide. It is commonly modelled as a colloid, with asphaltenes as the dispersed phase and maltenes as the continuous phase. "It is almost impossible to separate and identify all the different molecules of bitumen, because the number of molecules with different chemical structure is extremely large".
Asphalt may be confused with coal tar, which is a visually similar black, thermoplastic material produced by the destructive distillation of coal. During the early and mid-20th century, when town gas was produced, coal tar was a readily available byproduct and extensively used as the binder for road aggregates. The addition of coal tar to macadam roads led to the word "tarmac", which is now used in common parlance to refer to road-making materials. However, since the 1970s, when natural gas succeeded town gas, bitumen has completely overtaken the use of coal tar in these applications. Other examples of this confusion include La Brea Tar Pits and the Canadian tar sands, both of which actually contain natural bitumen rather than tar. "Pitch" is another term sometimes informally used to refer to asphalt, as in Pitch Lake.
Additives, mixtures and contaminants
For economic and other reasons, bitumen is sometimes sold combined with other materials, often without being labeled as anything other than simply "bitumen".
Of particular note is the use of re-refined engine oil bottoms ("REOB" or "REOBs"), the residue of recycled automotive engine oil collected from the bottoms of re-refining vacuum distillation towers, in the manufacture of asphalt. REOB contains various elements and compounds found in recycled engine oil: additives to the original oil and materials accumulating from its circulation in the engine (typically iron and copper). Some research has indicated a correlation between this adulteration of bitumen and poorer-performing pavement.
Occurrence
The majority of bitumen used commercially is obtained from petroleum. Nonetheless, large amounts of bitumen occur in concentrated form in nature. Naturally occurring deposits of bitumen are formed from the remains of ancient, microscopic algae (diatoms) and other once-living things. These natural deposits were formed during the Carboniferous period, when giant swamp forests dominated many parts of the Earth. The remains were deposited in the mud on the bottom of the ocean or lake where the organisms lived. Under the heat (above 50 °C) and pressure of burial deep in the earth, the remains were transformed into materials such as bitumen, kerogen, or petroleum.
Natural deposits of bitumen include lakes such as the Pitch Lake in Trinidad and Tobago and Lake Bermudez in Venezuela. Natural seeps occur in the La Brea Tar Pits and the McKittrick Tar Pits in California, as well as in the Dead Sea.
Bitumen also occurs in unconsolidated sandstones known as "oil sands" in Alberta, Canada, and the similar "tar sands" in Utah, US.
The Canadian province of Alberta has most of the world's reserves, in three huge deposits covering an area larger than England or New York state. These bituminous sands contain commercially established oil reserves large enough to give Canada the third-largest oil reserves in the world. Although historically it was used without refining to pave roads, nearly all of the output is now used as raw material for oil refineries in Canada and the United States.
The world's largest deposit of natural bitumen, known as the Athabasca oil sands, is located in the McMurray Formation of Northern Alberta. This formation is from the early Cretaceous, and is composed of numerous lenses of oil-bearing sand with up to 20% oil. Isotopic studies show the oil deposits to be about 110 million years old. Two smaller but still very large formations occur in the Peace River oil sands and the Cold Lake oil sands, to the west and southeast of the Athabasca oil sands, respectively. Of the Alberta deposits, only parts of the Athabasca oil sands are shallow enough to be suitable for surface mining. The other 80% has to be produced by oil wells using enhanced oil recovery techniques like steam-assisted gravity drainage.
Much smaller heavy oil or bitumen deposits also occur in the Uinta Basin in Utah, US. The Tar Sand Triangle deposit, for example, is roughly 6% bitumen.
Bitumen may occur in hydrothermal veins. An example of this is within the Uinta Basin of Utah, in the US, where there is a swarm of laterally and vertically extensive veins composed of a solid hydrocarbon termed Gilsonite. These veins formed by the polymerization and solidification of hydrocarbons that were mobilized from the deeper oil shales of the Green River Formation during burial and diagenesis.
Bitumen is similar to the organic matter in carbonaceous meteorites. However, detailed studies have shown these materials to be distinct. The vast Alberta bitumen resources are considered to have started out as living material from marine plants and animals, mainly algae, that died millions of years ago when an ancient ocean covered Alberta. They were covered by mud, buried deeply over time, and gently cooked into oil by geothermal heat. Due to pressure from the rising of the Rocky Mountains in southwestern Alberta, 80 to 55 million years ago, the oil was driven northeast hundreds of kilometres and trapped in underground sand deposits left behind by ancient river beds and ocean beaches, thus forming the oil sands.
History
Paleolithic times
Bitumen use goes back to the Middle Paleolithic, where it was shaped into tool handles or used as an adhesive for attaching stone tools to hafts.
The earliest evidence of bitumen use was discovered when archeologists identified bitumen material on Levallois flint artefacts that date to about 71,000 years BP at the Umm el Tlel open-air site, located on the northern slope of the Qdeir Plateau in el Kowm Basin in Central Syria. Microscopic analyses found bituminous residue on two-thirds of the stone artefacts, suggesting that bitumen was an important and frequently-used component of tool making for people in that region at that time. Geochemical analyses of the asphaltic residues place their source at localized natural bitumen outcroppings in the Bichri Massif, about 40 km northeast of the Umm el Tlel archeological site.
A re-examination of artifacts uncovered in 1908 at Le Moustier rock shelters in France has identified Mousterian stone tools that were attached to grips made of ochre and bitumen. The grips were formulated with 55% ground goethite ochre and 45% cooked liquid bitumen to create a moldable putty that hardened into handles. Earlier, less-careful excavations at Le Moustier prevent conclusive identification of the archaeological culture and age, but the European Mousterian style of these tools suggests they are associated with Neanderthals during the late Middle Paleolithic into the early Upper Paleolithic between 60,000 and 35,000 years before present. It is the earliest evidence of multicomponent adhesive in Europe.
Ancient times
The use of natural bitumen for waterproofing and as an adhesive dates at least to the fifth millennium BC, with a crop storage basket discovered in Mehrgarh, of the Indus Valley civilization, lined with it. By the 3rd millennium BC refined rock asphalt was in use in the region, and was used to waterproof the Great Bath in Mohenjo-daro.
In the ancient Near East, the Sumerians used natural bitumen deposits for mortar between bricks and stones, to cement parts of carvings, such as eyes, into place, for ship caulking, and for waterproofing. The Greek historian Herodotus said hot bitumen was used as mortar in the walls of Babylon.
The Euphrates Tunnel beneath the river Euphrates at Babylon in the time of Queen Semiramis was reportedly constructed of burnt bricks covered with bitumen as a waterproofing agent.
Bitumen was used by ancient Egyptians to embalm mummies. The Persian word for asphalt is moom, which is related to the English word mummy. The Egyptians' primary source of bitumen was the Dead Sea, which the Romans knew as Palus Asphaltites (Asphalt Lake).
In approximately 40 AD, Dioscorides described the Dead Sea material as Judaicum bitumen, and noted other places in the region where it could be found. The Sidon bitumen is thought to refer to material found at Hasbeya in Lebanon. Pliny also refers to bitumen being found in Epirus. Bitumen was a valuable strategic resource. It was the object of the first known battle for a hydrocarbon deposit – between the Seleucids and the Nabateans in 312 BC.
In the ancient Far East, natural bitumen was slowly boiled to get rid of the higher fractions, leaving a thermoplastic material of higher molecular weight that, when layered on objects, became hard upon cooling. This was used to cover objects that needed waterproofing, such as scabbards and other items. Statuettes of household deities were also cast with this type of material in Japan, and probably also in China.
In North America, archaeological recovery has indicated that bitumen was sometimes used to adhere stone projectile points to wooden shafts. In Canada, aboriginal people used bitumen seeping out of the banks of the Athabasca and other rivers to waterproof birch bark canoes, and also heated it in smudge pots to ward off mosquitoes in the summer. Bitumen was also used to waterproof plank canoes used by indigenous peoples in pre-colonial southern California.
Continental Europe
In 1553, Pierre Belon described in his work Observations that pissasphalto, a mixture of pitch and bitumen, was used in the Republic of Ragusa (now Dubrovnik, Croatia) for tarring of ships.
An 1838 edition of Mechanics Magazine cites an early use of asphalt in France. A pamphlet dated 1621, by "a certain Monsieur d'Eyrinys, states that he had discovered the existence (of asphaltum) in large quantities in the vicinity of Neufchatel", and that he proposed to use it in a variety of ways – "principally in the construction of air-proof granaries, and in protecting, by means of the arches, the water-courses in the city of Paris from the intrusion of dirt and filth", which at that time made the water unusable. "He expatiates also on the excellence of this material for forming level and durable terraces" in palaces, "the notion of forming such terraces in the streets not one likely to cross the brain of a Parisian of that generation".
But the substance was generally neglected in France until the revolution of 1830. In the 1830s there was a surge of interest, and asphalt became widely used "for pavements, flat roofs, and the lining of cisterns, and in England, some use of it had been made of it for similar purposes". Its rise in Europe was "a sudden phenomenon", after natural deposits were found "in France at Osbann (Bas-Rhin), the Parc (Ain) and the Puy-de-la-Poix (Puy-de-Dôme)", although it could also be made artificially. One of the earliest uses in France was the laying of about 24,000 square yards of Seyssel asphalt at the Place de la Concorde in 1835.
United Kingdom
Among the earlier uses of bitumen in the United Kingdom was for etching. William Salmon's Polygraphice (1673) provides a recipe for varnish used in etching, consisting of three ounces of virgin wax, two ounces of mastic, and one ounce of asphaltum. By the fifth edition in 1685, he had included more asphaltum recipes from other sources.
The first British patent for the use of asphalt was "Cassell's patent asphalte or bitumen" in 1834. Then on 25 November 1837, Richard Tappin Claridge patented the use of Seyssel asphalt (patent #7849), for use in asphalte pavement, having seen it employed in France and Belgium when visiting with Frederick Walter Simms, who worked with him on the introduction of asphalt to Britain. Dr T. Lamb Phipson writes that his father, Samuel Ryland Phipson, a friend of Claridge, was also "instrumental in introducing the asphalte pavement (in 1836)".
Claridge obtained a patent in Scotland on 27 March 1838, and obtained a patent in Ireland on 23 April 1838. In 1851, extensions for the 1837 patent and for both 1838 patents were sought by the trustees of a company previously formed by Claridge. Claridge's Patent Asphalte Company, formed in 1838 for the purpose of introducing to Britain "Asphalte in its natural state from the mine at Pyrimont Seysell in France", "laid one of the first asphalt pavements in Whitehall". Trials were made of the pavement in 1838 on the footway in Whitehall, the stable at Knightsbridge Barracks, "and subsequently on the space at the bottom of the steps leading from Waterloo Place to St. James Park". "The formation in 1838 of Claridge's Patent Asphalte Company (with a distinguished list of aristocratic patrons, and Marc and Isambard Brunel as, respectively, a trustee and consulting engineer), gave an enormous impetus to the development of a British asphalt industry". "By the end of 1838, at least two other companies, Robinson's and the Bastenne company, were in production", with asphalt being laid as paving at Brighton, Herne Bay, Canterbury, Kensington, the Strand, and a large floor area in Bunhill-row, while meantime Claridge's Whitehall paving "continue(d) in good order". The Bonnington Chemical Works manufactured asphalt using coal tar and by 1839 had installed it in Bonnington.
In 1838, there was a flurry of entrepreneurial activity involving bitumen, which had uses beyond paving. For example, bitumen could also be used for flooring, damp proofing in buildings, and for waterproofing of various types of pools and baths, both of which were also proliferating in the 19th century. One of the earliest surviving examples of its use can be seen at Highgate Cemetery where it was used in 1839 to seal the roof of the terrace catacombs. On the London stockmarket, there were various claims as to the exclusivity of bitumen quality from France, Germany and England. And numerous patents were granted in France, with similar numbers of patent applications being denied in England due to their similarity to each other. In England, "Claridge's was the type most used in the 1840s and 50s".
In 1914, Claridge's Company entered into a joint venture to produce tar-bound macadam, with materials manufactured through a subsidiary company called Clarmac Roads Ltd. Two products resulted, namely Clarmac, and Clarphalte, with the former being manufactured by Clarmac Roads and the latter by Claridge's Patent Asphalte Co., although Clarmac was more widely used. However, the First World War ruined the Clarmac Company, which entered into liquidation in 1915. The failure of Clarmac Roads Ltd had a flow-on effect to Claridge's Company, which was itself compulsorily wound up, ceasing operations in 1917, having invested a substantial amount of funds into the new venture, both at the outset and in a subsequent attempt to save the Clarmac Company.
Bitumen was thought in 19th century Britain to contain chemicals with medicinal properties. Extracts from bitumen were used to treat catarrh and some forms of asthma and as a remedy against worms, especially the tapeworm.
United States
The first use of bitumen in the New World was by aboriginal peoples. On the west coast, as early as the 13th century, the Tongva, Luiseño and Chumash peoples collected the naturally occurring bitumen that seeped to the surface above underlying petroleum deposits. All three groups used the substance as an adhesive. It is found on many different artifacts, including tools and ceremonial items. For example, it was used on rattles to adhere gourds or turtle shells to rattle handles. It was also used in decorations: small round shell beads were often set in asphaltum. It was used as a sealant on baskets to make them watertight for carrying water, possibly poisoning those who drank the water. Asphalt was also used to seal the planks on ocean-going canoes.
Asphalt was first used to pave streets in the 1870s. At first naturally occurring "bituminous rock" was used, such as at Ritchie Mines in Macfarlan in Ritchie County, West Virginia from 1852 to 1873. In 1876, asphalt-based paving was used to pave Pennsylvania Avenue in Washington DC, in time for the celebration of the national centennial.
In the horse-drawn era, US streets were mostly unpaved and covered with dirt or gravel. Especially where mud or trenching often made streets difficult to pass, pavements were sometimes made of diverse materials including wooden planks, cobble stones or other stone blocks, or bricks. Unpaved roads produced uneven wear and hazards for pedestrians. In the late 19th century with the rise of the popular bicycle, bicycle clubs were important in pushing for more general pavement of streets. Advocacy for pavement increased in the early 20th century with the rise of the automobile. Asphalt gradually became an ever more common method of paving. St. Charles Avenue in New Orleans was paved its whole length with asphalt by 1889.
In 1900, Manhattan alone had 130,000 horses, pulling streetcars, wagons, and carriages, and leaving their waste behind. They were not fast, and pedestrians could dodge and scramble their way across the crowded streets. Small towns continued to rely on dirt and gravel, but larger cities wanted much better streets. They looked to wood or granite blocks by the 1850s. In 1890, a third of Chicago's 2000 miles of streets were paved, chiefly with wooden blocks, which gave better traction than mud. Brick surfacing was a good compromise, but even better was asphalt paving, which was easy to install and to cut through to get at sewers. With London and Paris serving as models, Washington laid 400,000 square yards of asphalt paving by 1882; it became the model for Buffalo, Philadelphia and elsewhere. By the end of the century, American cities boasted 30 million square yards of asphalt paving, well ahead of brick. Traffic became faster and more dangerous, so electric traffic lights were installed. Electric trolleys (at 12 miles per hour) became the main transportation service for middle class shoppers and office workers until they bought automobiles after 1945 and commuted from more distant suburbs in privacy and comfort on asphalt highways.
Canada
Canada has the world's largest deposit of natural bitumen in the Athabasca oil sands, and Canadian First Nations along the Athabasca River had long used it to waterproof their canoes. In 1719, a Cree named Wa-Pa-Su brought a sample for trade to Henry Kelsey of the Hudson's Bay Company, who was the first recorded European to see it. However, it wasn't until 1787 that fur trader and explorer Alexander MacKenzie saw the Athabasca oil sands and said, "At about 24 miles from the fork (of the Athabasca and Clearwater Rivers) are some bituminous fountains into which a pole of 20 feet long may be inserted without the least resistance."
The value of the deposit was obvious from the start, but the means of extracting the bitumen was not. The nearest town, Fort McMurray, Alberta, was a small fur trading post, other markets were far away, and transportation costs were too high to ship the raw bituminous sand for paving. In 1915, Sidney Ells of the Federal Mines Branch experimented with separation techniques and used the product to pave 600 feet of road in Edmonton, Alberta. Other roads in Alberta were paved with material extracted from oil sands, but it was generally not economic. During the 1920s Dr. Karl A. Clark of the Alberta Research Council patented a hot water oil separation process, and entrepreneur Robert C. Fitzsimmons built the Bitumount oil separation plant, which between 1925 and 1958 produced bitumen using Dr. Clark's method. Most of the bitumen was used for waterproofing roofs, but other uses included fuels, lubrication oils, printers' ink, medicines, rust- and acid-proof paints, fireproof roofing, street paving, patent leather, and fence post preservatives. Eventually Fitzsimmons ran out of money and the plant was taken over by the Alberta government. Today the Bitumount plant is a Provincial Historic Site.
Photography and art
Bitumen was used in early photographic technology. In 1826, or 1827, it was used by French scientist Joseph Nicéphore Niépce to make the oldest surviving photograph from nature. The bitumen was thinly coated onto a pewter plate which was then exposed in a camera. Exposure to light hardened the bitumen and made it insoluble, so that when it was subsequently rinsed with a solvent only the sufficiently light-struck areas remained. Many hours of exposure in the camera were required, making bitumen impractical for ordinary photography, but from the 1850s to the 1920s it was in common use as a photoresist in the production of printing plates for various photomechanical printing processes.
Bitumen was the nemesis of many artists during the 19th century. Although widely used for a time, it ultimately proved unstable for use in oil painting, especially when mixed with the most common diluents, such as linseed oil, varnish and turpentine. Unless thoroughly diluted, bitumen never fully solidifies and will in time corrupt the other pigments with which it comes into contact. The use of bitumen as a glaze to set in shadow or mixed with other colors to render a darker tone resulted in the eventual deterioration of many paintings, for instance those of Delacroix. Perhaps the most famous example of the destructiveness of bitumen is Théodore Géricault's Raft of the Medusa (1818–1819), where his use of bitumen caused the brilliant colors to degenerate into dark greens and blacks and the paint and canvas to buckle.
Modern use
Global use
The vast majority of refined bitumen is used in construction: primarily as a constituent of products used in paving and roofing applications. According to the requirements of the end use, bitumen is produced to specification. This is achieved either by refining or blending. It is estimated that the current world use of bitumen is approximately 102 million tonnes per year. Approximately 85% of all the bitumen produced is used as the binder in asphalt concrete for roads. It is also used in other paved areas such as airport runways, car parks and footways. Typically, the production of asphalt concrete involves mixing fine and coarse aggregates such as sand, gravel and crushed rock with asphalt, which acts as the binding agent. Other materials, such as recycled polymers (e.g., rubber tyres), may be added to the bitumen to modify its properties according to the application for which the bitumen is ultimately intended.
A further 10% of global bitumen production is used in roofing applications, where its waterproofing qualities are invaluable.
The remaining 5% of bitumen is used mainly for sealing and insulating purposes in a variety of building materials, such as pipe coatings, carpet tile backing and paint. Bitumen is applied in the construction and maintenance of many structures, systems, and components, such as:
Highways
Airport runways
Footways and pedestrian ways
Car parks
Racetracks
Tennis courts
Roofing
Damp proofing
Dams
Reservoir and pool linings
Soundproofing
Pipe coatings
Cable coatings
Paints
Building water proofing
Tile underlying waterproofing
Newspaper ink production
Rolled asphalt concrete
The largest use of bitumen is for making asphalt concrete for road surfaces; this accounts for approximately 85% of the bitumen consumed in the United States. There are about 4,000 asphalt concrete mixing plants in the US, and a similar number in Europe.
Asphalt concrete pavement mixes are typically composed of 5% bitumen (known as asphalt cement in the US) and 95% aggregates (stone, sand, and gravel). Due to its highly viscous nature, bitumen must be heated so it can be mixed with the aggregates at the asphalt mixing facility. The temperature required varies depending upon characteristics of the bitumen and the aggregates, but warm-mix asphalt technologies allow producers to reduce the temperature required.
The weight of an asphalt pavement depends upon the aggregate type, the bitumen, and the air void content. An average example in the United States is about 112 pounds per square yard, per inch of pavement thickness.
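As a worked example of this rule of thumb, here is a small Python sketch; the 112 lb figure is the average quoted above, and the 5% binder share comes from the typical mix composition mentioned earlier:

```python
def pavement_weight_lb(area_sq_yd: float, thickness_in: float,
                       unit_weight: float = 112.0) -> float:
    """Approximate asphalt pavement weight in pounds, using the
    ~112 lb per square yard per inch of thickness rule of thumb."""
    return area_sq_yd * thickness_in * unit_weight

# A 1,000-square-yard lot paved 3 inches thick: roughly 336,000 lb,
# of which about 5% (~16,800 lb) is the bitumen binder.
total = pavement_weight_lb(1000, 3)
print(total, 0.05 * total)
```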
When maintenance is performed on asphalt pavements, such as milling to remove a worn or damaged surface, the removed material can be returned to a facility for processing into new pavement mixtures. The bitumen in the removed material can be reactivated and put back to use in new pavement mixes. With some 95% of paved roads being constructed of or surfaced with asphalt, a substantial amount of asphalt pavement material is reclaimed each year. According to industry surveys conducted annually by the Federal Highway Administration and the National Asphalt Pavement Association, more than 99% of the bitumen removed each year from road surfaces during widening and resurfacing projects is reused as part of new pavements, roadbeds, shoulders and embankments or stockpiled for future use.
Asphalt concrete paving is widely used in airports around the world. Due to the sturdiness and ability to be repaired quickly, it is widely used for runways.
Mastic asphalt
Mastic asphalt is a type of asphalt that differs from dense graded asphalt (asphalt concrete) in that it has a higher bitumen (binder) content, usually around 7–10% of the whole aggregate mix, as opposed to rolled asphalt concrete, which has only around 5% asphalt. This thermoplastic substance is widely used in the building industry for waterproofing flat roofs and tanking underground. Mastic asphalt is heated and spread in layers to form a thin impervious barrier.
Bitumen emulsion
Bitumen emulsions are colloidal mixtures of bitumen and water. Due to the different surface tensions of the two liquids, stable emulsions cannot be created simply by mixing. Therefore, various emulsifiers and stabilizers are added. Emulsifiers are amphiphilic molecules that differ in the charge of their polar head group. They reduce the surface tension of the emulsion and thus prevent bitumen particles from fusing. The emulsifier charge defines the type of emulsion: anionic (negatively charged) or cationic (positively charged). The concentration of the emulsifier is a critical parameter affecting the size of the bitumen particles: higher concentrations lead to smaller particles. Thus, emulsifiers have a great impact on the stability, viscosity, breaking strength, and adhesion of the bitumen emulsion. The size of bitumen particles is usually between 0.1 and 50 μm, with a main fraction between 1 μm and 10 μm. Laser diffraction techniques can be used to determine the particle size distribution quickly and easily. Cationic emulsifiers primarily include long-chain amines such as imidazolines, amido-amines, and diamines, which acquire a positive charge when an acid is added. Anionic emulsifiers are often fatty acids extracted from lignin, tall oil, or tree resin, saponified with bases such as NaOH, which creates a negative charge.
During the storage of bitumen emulsions, bitumen particles sediment, agglomerate (flocculation), or fuse (coagulation), which leads to a certain instability of the bitumen emulsion. How fast this process occurs depends on the formulation of the bitumen emulsion but also on storage conditions such as temperature and humidity. When emulsified bitumen comes into contact with aggregates, the emulsifiers lose their effectiveness, the emulsion breaks down, and an adhering bitumen film is formed; this is referred to as "breaking". The bitumen particles almost instantly create a continuous bitumen film by coagulating and separating from the water, which evaporates. Not every asphalt emulsion breaks at the same rate on contact with aggregates. This enables a classification into rapid-setting (R), slow-setting (SS), and medium-setting (MS) emulsions, but also an individual, application-specific optimization of the formulation and a wide field of application. For example, slow-breaking emulsions ensure a longer processing time, which is particularly advantageous for fine aggregates.
Adhesion problems are reported for anionic emulsions in contact with quartz-rich aggregates, for which cationic emulsions, which achieve better adhesion, are substituted. The extensive range of bitumen emulsions is covered only insufficiently by standardization. DIN EN 13808 for cationic asphalt emulsions has existed since July 2005; it describes a classification of bitumen emulsions based on letters and numbers, considering charge, viscosity, and the type of bitumen. The production process of bitumen emulsions is complex. Two methods are commonly used: the colloid mill method and the high internal phase ratio (HIPR) method. In the colloid mill method, a rotor moves at high speed within a stator while bitumen and a water–emulsifier mixture are added. The resulting shear forces generate bitumen particles between 5 μm and 10 μm coated with emulsifiers. The HIPR method is used for creating smaller bitumen particles, monomodal narrow particle size distributions, and very high bitumen concentrations. Here, a highly concentrated bitumen emulsion is produced first by moderate stirring and diluted afterward. In contrast to the colloid mill method, the aqueous phase is introduced into hot bitumen, enabling very high bitumen concentrations.
Bitumen emulsions are used in a wide variety of applications. They are used in road construction and building protection, primarily in cold recycling mixtures, adhesive coatings, and surface treatments. Due to the lower viscosity in comparison to hot bitumen, processing requires less energy and is associated with significantly less risk of fire and burns. Chipseal involves spraying the road surface with bitumen emulsion followed by a layer of crushed rock, gravel or crushed slag. Slurry seal is a mixture of bitumen emulsion and fine crushed aggregate that is spread on the surface of a road. Cold-mixed asphalt can also be made from bitumen emulsion to create pavements similar to hot-mixed asphalt, several inches in depth, and bitumen emulsions are also blended into recycled hot-mix asphalt to create low-cost pavements. Bitumen emulsion based techniques are known to be useful for all classes of roads; their use may also be possible in the following applications:
asphalts for heavily trafficked roads (based on the use of polymer-modified emulsions)
warm emulsion-based mixtures, to improve both their maturation time and mechanical properties
half-warm technology, in which aggregates are heated up to 100 °C, producing mixtures with similar properties to those of hot asphalts
high-performance surface dressing
Synthetic crude oil
Synthetic crude oil, also known as syncrude, is the output from a bitumen upgrader facility used in connection with oil sand production in Canada. Bituminous sands are mined using enormous (100-ton capacity) power shovels and loaded into even larger (400-ton capacity) dump trucks for movement to an upgrading facility. The process used to extract the bitumen from the sand is a hot water process originally developed by Dr. Karl Clark of the University of Alberta during the 1920s. After extraction from the sand, the bitumen is fed into a bitumen upgrader, which converts it into a light crude oil equivalent. This synthetic substance is fluid enough to be transferred through conventional oil pipelines and can be fed into conventional oil refineries without any further treatment. By 2015, Canadian bitumen upgraders were producing large volumes of synthetic crude oil, of which 75% was exported to oil refineries in the United States.
In Alberta, five bitumen upgraders produce synthetic crude oil and a variety of other products: The Suncor Energy upgrader near Fort McMurray, Alberta produces synthetic crude oil plus diesel fuel; the Syncrude Canada, Canadian Natural Resources, and Nexen upgraders near Fort McMurray produce synthetic crude oil; and the Shell Scotford Upgrader near Edmonton produces synthetic crude oil plus an intermediate feedstock for the nearby Shell Oil Refinery. A sixth upgrader, under construction in 2015 near Redwater, Alberta, will upgrade half of its crude bitumen directly to diesel fuel, with the remainder of the output being sold as feedstock to nearby oil refineries and petrochemical plants.
Non-upgraded crude bitumen
Canadian bitumen does not differ substantially from oils such as Venezuelan extra-heavy and Mexican heavy oil in chemical composition, and the real difficulty is moving the extremely viscous bitumen through oil pipelines to the refinery. Many modern oil refineries are extremely sophisticated and can process non-upgraded bitumen directly into products such as gasoline, diesel fuel, and refined asphalt without any preprocessing. This is particularly common in areas such as the US Gulf coast, where refineries were designed to process Venezuelan and Mexican oil, and in areas such as the US Midwest where refineries were rebuilt to process heavy oil as domestic light oil production declined. Given the choice, such heavy oil refineries usually prefer to buy bitumen rather than synthetic oil because the cost is lower, and in some cases because they prefer to produce more diesel fuel and less gasoline. By 2015, Canadian production and exports of non-upgraded bitumen exceeded those of synthetic crude oil, with about 65% exported to the United States.
Because of the difficulty of moving crude bitumen through pipelines, non-upgraded bitumen is usually diluted with natural-gas condensate in a form called dilbit or with synthetic crude oil, called synbit. However, to meet international competition, much non-upgraded bitumen is now sold as a blend of multiple grades of bitumen, conventional crude oil, synthetic crude oil, and condensate in a standardized benchmark product such as Western Canadian Select. This sour, heavy crude oil blend is designed to have uniform refining characteristics to compete with internationally marketed heavy oils such as Mexican Mayan or Arabian Dubai Crude.
Radioactive waste encapsulation matrix
Bitumen was used starting in the 1960s as a hydrophobic matrix to encapsulate radioactive waste such as medium-activity salts (mainly soluble sodium nitrate and sodium sulfate) produced by the reprocessing of spent nuclear fuels, or radioactive sludges from sedimentation ponds. Bituminised radioactive waste containing highly radiotoxic alpha-emitting transuranic elements from nuclear reprocessing plants has been produced at industrial scale in France, Belgium and Japan, but this type of waste conditioning has been abandoned because of operational safety issues (risks of fire, as occurred in a bituminisation plant at Tokai Works in Japan) and long-term stability problems related to its geological disposal in deep rock formations.
One of the main problems is the swelling of bitumen exposed to radiation and to water. Bitumen swelling is first induced by radiation because of the presence of hydrogen gas bubbles generated by alpha and gamma radiolysis. A second mechanism is the swelling of the matrix when the encapsulated hygroscopic salts, exposed to water or moisture, start to rehydrate and dissolve. The high concentration of salt in the pore solution inside the bituminised matrix is then responsible for osmotic effects: the water moves in the direction of the concentrated salts, the bitumen acting as a semi-permeable membrane, and this also causes the matrix to swell. The swelling pressure due to the osmotic effect under constant volume can be as high as 200 bar. If not properly managed, this high pressure can cause fractures in the near field of a disposal gallery of bituminised medium-level waste. When the bituminised matrix has been altered by swelling, encapsulated radionuclides are easily leached by contact with ground water and released into the geosphere. The high ionic strength of the concentrated saline solution also favours the migration of radionuclides in clay host rocks. The presence of chemically reactive nitrate can also affect the redox conditions prevailing in the host rock by establishing oxidizing conditions, preventing the reduction of redox-sensitive radionuclides. In their higher valences, radionuclides of elements such as selenium, technetium, uranium, neptunium and plutonium have a higher solubility and are also often present in water as non-retarded anions. This makes the disposal of medium-level bituminised waste very challenging.
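The swelling pressure quoted above can be rationalized to first order with the van 't Hoff relation for osmotic pressure, a textbook approximation not given in the source text; the concentration and temperature used below are illustrative assumptions for a near-saturated salt pore solution:

```latex
% First-order osmotic pressure estimate (illustrative values assumed):
\Pi = i\,c\,R\,T
\approx 2 \times \left(5 \times 10^{3}\ \mathrm{mol\,m^{-3}}\right)
\times 8.314\ \mathrm{J\,mol^{-1}\,K^{-1}} \times 298\ \mathrm{K}
\approx 2.5 \times 10^{7}\ \mathrm{Pa} \approx 250\ \mathrm{bar}
```

which is the same order of magnitude as the roughly 200 bar figure cited above.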
Different types of bitumen have been used: blown bitumen (partly oxidized with air oxygen at high temperature after distillation, and harder) and direct distillation bitumen (softer). Blown bitumens like Mexphalte, with a high content of saturated hydrocarbons, are more easily biodegraded by microorganisms than direct distillation bitumen, with a low content of saturated hydrocarbons and a high content of aromatic hydrocarbons.
Concrete encapsulation of radwaste is presently considered a safer alternative by the nuclear industry and the waste management organisations.
Other uses
Roofing shingles and roll roofing account for most of the remaining bitumen consumption. Other uses include cattle sprays, fence-post treatments, and waterproofing for fabrics. Bitumen is used to make Japan black, a lacquer known especially for its use on iron and steel, and some exterior paint suppliers add it to paints and marker inks to increase weather resistance and permanence and to darken the color. It is also used to seal some alkaline batteries during the manufacturing process and as a ground in the etching process of intaglio printmaking.
Production
About 164,000,000 tons of bitumen were produced in 2019. Bitumen is obtained by vacuum distillation as the "heavy" (i.e., difficult to distill) fraction of crude oil: material with a boiling point greater than around 500°C is considered asphalt, while lighter components (such as naphtha, gasoline and diesel) are separated off. The resulting material is typically further treated to extract small but valuable amounts of lubricants and to adjust its properties to suit applications. In a de-asphalting unit, the crude bitumen is treated with either propane or butane in a supercritical phase to extract the lighter molecules, which are then separated. Further processing is possible by "blowing" the product, namely reacting it with oxygen; this step makes the product harder and more viscous.
Bitumen is typically stored and transported at temperatures around . Sometimes diesel oil or kerosene is mixed in before shipping to retain liquidity; upon delivery, these lighter materials are separated out of the mixture. This mixture is often called "bitumen feedstock", or BFS. Some dump trucks route the hot engine exhaust through pipes in the dump body to keep the material warm. The backs of tippers carrying asphalt, as well as some handling equipment, are also commonly sprayed with a release agent before filling to aid release, although diesel oil is no longer used for this purpose due to environmental concerns.
Oil sands
Naturally occurring crude bitumen impregnated in sedimentary rock is the prime feedstock for petroleum production from "oil sands", currently under development in Alberta, Canada. Canada has most of the world's supply of natural bitumen, covering 140,000 square kilometres (an area larger than England), giving it the second-largest proven oil reserves in the world. The Athabasca oil sands are the largest bitumen deposit in Canada and the only one accessible to surface mining, although recent technological breakthroughs have resulted in deeper deposits becoming producible by in situ methods. Because of oil price increases after 2003, producing bitumen became highly profitable, but as a result of the price decline after 2014 it became uneconomic to build new plants. By 2014, Canadian crude bitumen production averaged about per day and was projected to rise to per day by 2020. The total amount of crude bitumen in Alberta that could be extracted is estimated to be about , which at a rate of would last about 200 years.
Alternatives and bioasphalt
Although uncompetitive economically, bitumen can be made from nonpetroleum-based renewable resources such as sugar, molasses and rice, corn and potato starches. Bitumen can also be made from waste material by fractional distillation of used motor oil, which is sometimes otherwise disposed of by burning or dumping into landfills. Use of motor oil may cause premature cracking in colder climates, resulting in roads that need to be repaved more frequently.
Nonpetroleum-based asphalt binders can be made light-colored. Lighter-colored roads absorb less heat from solar radiation, reducing their contribution to the urban heat island effect. Parking lots that use bitumen alternatives are called green parking lots.
Albanian deposits
Selenizza is a naturally occurring solid hydrocarbon bitumen found in native deposits in Selenice, in Albania, the only European asphalt mine still in use. The bitumen is found in the form of veins, filling cracks in a more or less horizontal direction. The bitumen content varies from 83% to 92% (soluble in carbon disulphide), with a penetration value near to zero and a softening point (ring and ball) around 120°C. The insoluble matter, consisting mainly of silica ore, ranges from 8% to 17%.
Albanian bitumen extraction has a long history and was practiced in an organized way by the Romans. After centuries of silence, the first mentions of Albanian bitumen appeared only in 1868, when the Frenchman Coquand published the first geological description of the deposits of Albanian bitumen. In 1875, the exploitation rights were granted to the Ottoman government and in 1912, they were transferred to the Italian company Simsa. From 1945 the mine was exploited by the Albanian government and, since 2001, it has been managed by a French company, which organized the mining process for the manufacture of natural bitumen on an industrial scale.
Today the mine is predominantly exploited in an open pit quarry but several of the many underground mines (deep and extending over several km) still remain viable. Selenizza is produced primarily in granular form, after melting the bitumen pieces selected in the mine.
Selenizza is mainly used as an additive in the road construction sector. It is mixed with traditional bitumen to improve both the viscoelastic properties and the resistance to ageing. It may be blended with the hot bitumen in tanks, but its granular form allows it to be fed into the mixer or into the recycling ring of normal asphalt plants. Other typical applications include the production of mastic asphalts for sidewalks, bridges, car-parks and urban roads as well as drilling fluid additives for the oil and gas industry. Selenizza is available in powder or in granular material of various particle sizes and is packaged in sacks or in thermally fusible polyethylene bags.
A life-cycle assessment study comparing natural Selenizza with petroleum bitumen has shown that the environmental impact of Selenizza, in terms of carbon dioxide emissions, is about half that of road asphalt produced in oil refineries.
Recycling
Bitumen is a commonly recycled material in the construction industry. The two most common recycled materials that contain bitumen are reclaimed asphalt pavement (RAP) and reclaimed asphalt shingles (RAS). RAP is recycled at a greater rate than any other material in the United States, and typically contains approximately 5–6% bitumen binder. Asphalt shingles typically contain 20–40% bitumen binder.
Bitumen naturally becomes stiffer over time due to oxidation, evaporation, exudation, and physical hardening. For this reason, recycled asphalt is typically combined with virgin asphalt, softening agents, and/or rejuvenating additives to restore its physical and chemical properties.
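As a sketch of the blending arithmetic implied above, assuming a hypothetical mix design; only the binder contents (roughly 5–6% for RAP, 20–40% for shingles) come from the figures above, and the blend proportions are invented for illustration:

```python
# Estimate the total binder content of an asphalt mix containing recycled material.
# Binder contents for RAP (~5-6%) and RAS (~20-40%) are from the text above;
# the blend proportions themselves are hypothetical examples.

def total_binder(components):
    """components: list of (mass_fraction_of_mix, binder_fraction) pairs."""
    return sum(mass_frac * binder_frac for mass_frac, binder_frac in components)

# Hypothetical mix: 25% RAP (5.5% binder), 5% RAS (30% binder),
# 70% virgin aggregate plus virgin binder (5% binder overall).
mix = [(0.25, 0.055), (0.05, 0.30), (0.70, 0.05)]
print(f"Total binder: {total_binder(mix) * 100:.2f}%")  # -> roughly 6.4% total binder
```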
Economics
Although bitumen typically makes up only 4 to 5 percent (by weight) of the pavement mixture, as the pavement's binder it is the most expensive component of the road-paving material.
During bitumen's early use in modern paving, oil refiners gave it away. However, bitumen is a highly traded commodity today. Its prices increased substantially in the early 21st century. A U.S. government report states:
"In 2002, asphalt sold for approximately $160 per ton. By the end of 2006, the cost had doubled to approximately $320 per ton, and then it almost doubled again in 2012 to approximately $610 per ton."
The report indicates that an "average" 1-mile (1.6-kilometer)-long, four-lane highway would include "300 tons of asphalt," which, "in 2002 would have cost around $48,000. By 2006 this would have increased to $96,000 and by 2012 to $183,000... an increase of about $135,000 for every mile of highway in just 10 years."
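A minimal check of the report's arithmetic, using only the figures quoted above:

```python
# Reproduce the U.S. government report's per-mile figures quoted above:
# an "average" 1-mile, four-lane highway uses about 300 tons of asphalt.
TONS_PER_MILE = 300
price_per_ton = {2002: 160, 2006: 320, 2012: 610}  # USD per ton, from the report

for year, price in price_per_ton.items():
    print(f"{year}: ${TONS_PER_MILE * price:,} per mile")
# 2002: $48,000 per mile
# 2006: $96,000 per mile
# 2012: $183,000 per mile  (an increase of $135,000 per mile over ten years)
```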
The Middle East is a significant exporter of bitumen, particularly to India and China. According to the Argus Bitumen Report (2024/07/12), India is the largest importer, driven by extensive infrastructure projects. The report projects a CAGR of 4.5% for India's bitumen imports over the next five years, while China's imports are expected to grow at a CAGR of 3.8%. The current export price to India is approximately $350 per metric ton, and for China, it is around $360 per metric ton. The Middle East's strategic advantage in crude oil production underpins its capacity to meet these demands.
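Since the report excerpt gives growth rates rather than import volumes, only the implied cumulative growth can be computed; a minimal sketch of the compound-growth arithmetic:

```python
# Five-year growth multipliers implied by the CAGR figures cited above.
# No base volumes are given in the excerpt, so only relative growth is shown.
def growth_multiplier(cagr, years):
    return (1 + cagr) ** years

print(f"India: x{growth_multiplier(0.045, 5):.3f}")  # 4.5% CAGR -> x1.246
print(f"China: x{growth_multiplier(0.038, 5):.3f}")  # 3.8% CAGR -> x1.205
```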
Health and safety
People can be exposed to bitumen in the workplace by breathing in fumes or by skin absorption. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit of 5 mg/m³ over a 15-minute period.
Bitumen is a largely inert material that must be heated or diluted to a point where it becomes workable for the production of materials for paving, roofing, and other applications. In examining the potential health hazards associated with bitumen, the International Agency for Research on Cancer (IARC) determined that it is the application parameters, predominantly temperature, that affect occupational exposure and the potential bioavailable carcinogenic hazard/risk of the bitumen emissions. In particular, temperatures greater than 199°C (390°F) were shown to produce a greater exposure risk than when bitumen was heated to lower temperatures, such as those typically used in asphalt pavement mix production and placement. IARC has classified paving asphalt fumes as a Group 2B possible carcinogen, indicating inadequate evidence of carcinogenicity in humans.
In 2020, scientists reported that bitumen is a significant and largely overlooked source of air pollution in urban areas, especially during hot and sunny periods.
A bitumen-like substance found in the Himalayas, known as shilajit, is sometimes used as an Ayurvedic medicine, but it is not in fact a tar, resin or bitumen.
|
;Amorphous solids;Building materials;Chemical mixtures;IARC Group 2B carcinogens;Pavements;Petroleum products;Road construction materials
|
https://en.wikipedia.org/wiki/Atomic%20number
|
The atomic number or nuclear charge number (symbol Z) of a chemical element is the charge number of its atomic nucleus. For ordinary nuclei composed of protons and neutrons, this is equal to the proton number (np) or the number of protons found in the nucleus of every atom of that element. The atomic number can be used to uniquely identify ordinary chemical elements. In an ordinary uncharged atom, the atomic number is also equal to the number of electrons.
For an ordinary atom which contains protons, neutrons and electrons, the sum of the atomic number Z and the neutron number N gives the atom's atomic mass number A. Since protons and neutrons have approximately the same mass (and the mass of the electrons is negligible for many purposes) and the mass defect of the nucleon binding is always small compared to the nucleon mass, the atomic mass of any atom, when expressed in daltons (making a quantity called the "relative isotopic mass"), is within 1% of the whole number A.
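Restating these relations in symbols (nothing new here, just the two statements above in compact form):

```latex
A = Z + N, \qquad
\left|\, \frac{m_{\mathrm{atom}}}{1\ \mathrm{Da}} - A \,\right| < 0.01\,A
```

where the second relation says that the relative isotopic mass differs from the whole number A by less than 1%.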
Atoms with the same atomic number but different neutron numbers, and hence different mass numbers, are known as isotopes. A little more than three-quarters of naturally occurring elements exist as a mixture of isotopes (see monoisotopic elements), and the average isotopic mass of an isotopic mixture for an element (called the relative atomic mass) in a defined environment on Earth determines the element's standard atomic weight. Historically, it was these atomic weights of elements (in comparison to hydrogen) that were the quantities measurable by chemists in the 19th century.
The conventional symbol Z comes from the German word Zahl ("number"), which, before the modern synthesis of ideas from chemistry and physics, merely denoted an element's numerical place in the periodic table, whose order was then approximately, but not completely, consistent with the order of the elements by atomic weights. Only after 1915, with the suggestion and evidence that this Z number was also the nuclear charge and a physical characteristic of atoms, did the word Atomzahl (and its English equivalent atomic number) come into common use in this context.
The rules above do not always apply to exotic atoms which contain short-lived elementary particles other than protons, neutrons and electrons.
Notation
The atomic number is used in AZE notation (with A as the mass number, Z the atomic number, and E for element) to denote an isotope. When a chemical symbol is used, e.g. "C" for carbon, standard notation places the mass number as a superscript at the upper left of the chemical symbol and the atomic number as a subscript at the lower left. Because the atomic number is implied by the element symbol, it is common to state only the mass number in the superscript and leave out the atomic number subscript.
The common pronunciation of the AZE notation is different from how it is written: helium-4 is commonly pronounced as helium-four instead of four-two-helium, and uranium-235 as uranium two-thirty-five (American English) or uranium-two-three-five (British) instead of 235-92-uranium.
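In notation, the two isotopes used as pronunciation examples above are written:

```latex
{}^{4}_{2}\mathrm{He} \quad \text{(helium-4)}, \qquad
{}^{235}_{92}\mathrm{U} \quad \text{(uranium-235)}
```

or simply as ⁴He and ²³⁵U when the atomic number subscript is omitted.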
Various notations were used in older sources, such as Ne(22) in 1934, Ne22 for neon-22 (1935), or Pb210 for lead-210 (1933).
History
In the 19th century, the term "atomic number" typically meant the number of atoms in a given volume. Modern chemists prefer to use the concept of molar concentration.
In 1913, Antonius van den Broek proposed that the electric charge of an atomic nucleus, expressed as a multiplier of the elementary charge, was equal to the element's sequential position on the periodic table. Ernest Rutherford, in various articles in which he discussed van den Broek's idea, used the term "atomic number" to refer to an element's position on the periodic table. No writer before Rutherford is known to have used the term "atomic number" in this way, so it was probably he who established this definition.
After Rutherford deduced the existence of the proton in 1920, "atomic number" customarily referred to the proton number of an atom. In 1921, the German Atomic Weight Commission based its new periodic table on the nuclear charge number and in 1923 the International Committee on Chemical Elements followed suit.
The periodic table and a natural number for each element
The periodic table of elements creates an ordering of the elements, and so they can be numbered in order.
Dmitri Mendeleev arranged his first periodic tables (first published on March 6, 1869) in order of atomic weight ("Atomgewicht"). However, in consideration of the elements' observed chemical properties, he changed the order slightly and placed tellurium (atomic weight 127.6) ahead of iodine (atomic weight 126.9). This placement is consistent with the modern practice of ordering the elements by proton number, Z, but that number was not known or suspected at the time.
A simple numbering based on atomic weight position was never entirely satisfactory. In addition to the case of iodine and tellurium, several other pairs of elements (such as argon and potassium, cobalt and nickel) were later shown to have nearly identical or reversed atomic weights, thus requiring their placement in the periodic table to be determined by their chemical properties. However the gradual identification of more and more chemically similar lanthanide elements, whose atomic number was not obvious, led to inconsistency and uncertainty in the periodic numbering of elements at least from lutetium (element 71) onward (hafnium was not known at this time).
The Rutherford-Bohr model and van den Broek
In 1911, Ernest Rutherford gave a model of the atom in which a central nucleus held most of the atom's mass and a positive charge which, in units of the electron's charge, was to be approximately equal to half of the atom's atomic weight, expressed in numbers of hydrogen atoms. This central charge would thus be approximately half the atomic weight (though it was almost 25% different from the atomic number of gold (Z = 79, A ≈ 197), the single element from which Rutherford made his guess). Nevertheless, in spite of Rutherford's estimation that gold had a central charge of about 100 (whereas it is element Z = 79 on the periodic table), a month after Rutherford's paper appeared, Antonius van den Broek first formally suggested that the central charge and number of electrons in an atom were exactly equal to its place in the periodic table (also known as element number, atomic number, and symbolized Z). This eventually proved to be the case.
Moseley's 1913 experiment
The experimental position improved dramatically after research by Henry Moseley in 1913. Moseley, after discussions with Bohr who was at the same lab (and who had used Van den Broek's hypothesis in his Bohr model of the atom), decided to test Van den Broek's and Bohr's hypothesis directly, by seeing if spectral lines emitted from excited atoms fitted the Bohr theory's postulation that the frequency of the spectral lines be proportional to the square of Z.
To do this, Moseley measured the wavelengths of the innermost photon transitions (K and L lines) produced by the elements from aluminium (Z = 13) to gold (Z = 79) used as a series of movable anodic targets inside an x-ray tube. The square root of the frequency of these photons increased from one target to the next in an arithmetic progression. This led to the conclusion (Moseley's law) that the atomic number does closely correspond (with an offset of one unit for K-lines, in Moseley's work) to the calculated electric charge of the nucleus, i.e. the element number Z. Among other things, Moseley demonstrated that the lanthanide series (from lanthanum to lutetium inclusive) must have 15 members—no fewer and no more—which was far from obvious from known chemistry at that time.
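Moseley's law, summarizing the arithmetic progression just described (the constant k₂ below is the one-unit offset he found for K-lines):

```latex
\sqrt{\nu} = k_1 \left( Z - k_2 \right), \qquad k_2 \approx 1 \ \text{for K-lines}
```

where ν is the frequency of the emitted X-ray line and k₁ is a proportionality constant.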
Missing elements
After Moseley's death in 1915, the atomic numbers of all known elements from hydrogen to uranium (Z = 92) were examined by his method. There were seven elements (with Z < 92) which were not found and therefore identified as still undiscovered, corresponding to atomic numbers 43, 61, 72, 75, 85, 87 and 91. From 1918 to 1947, all seven of these missing elements were discovered. By this time, the first four transuranium elements had also been discovered, so that the periodic table was complete with no gaps as far as curium (Z = 96).
The proton and the idea of nuclear electrons
In 1915, the reason for nuclear charge being quantized in units of Z, which were now recognized to be the same as the element number, was not understood. An old idea called Prout's hypothesis had postulated that the elements were all made of residues (or "protyles") of the lightest element hydrogen, which in the Bohr-Rutherford model had a single electron and a nuclear charge of one. However, as early as 1907, Rutherford and Thomas Royds had shown that alpha particles, which had a charge of +2, were the nuclei of helium atoms, which had a mass four times that of hydrogen, not two times. If Prout's hypothesis were true, something had to be neutralizing some of the charge of the hydrogen nuclei present in the nuclei of heavier atoms.
In 1917, Rutherford succeeded in generating hydrogen nuclei from a nuclear reaction between alpha particles and nitrogen gas, and believed he had proven Prout's law. He called the new heavy nuclear particles protons in 1920 (alternate names being proutons and protyles). It had been immediately apparent from the work of Moseley that the nuclei of heavy atoms have more than twice as much mass as would be expected from their being made of hydrogen nuclei, and thus a hypothesis was required for the neutralization of the extra protons presumed present in all heavy nuclei. A helium nucleus was presumed to have four protons plus two "nuclear electrons" (electrons bound inside the nucleus) to cancel two charges. At the other end of the periodic table, a nucleus of gold with a mass 197 times that of hydrogen was thought to contain 118 nuclear electrons to give it a residual charge of +79, consistent with its atomic number.
Discovery of the neutron makes Z the proton number
All consideration of nuclear electrons ended with James Chadwick's discovery of the neutron in 1932. An atom of gold now was seen as containing 118 neutrons rather than 118 nuclear electrons, and its positive nuclear charge now was realized to come entirely from a content of 79 protons. Since Moseley had previously shown that the atomic number Z of an element equals this positive charge, it was now clear that Z is identical to the number of protons of its nuclei.
Chemical properties
Each element has a specific set of chemical properties as a consequence of the number of electrons present in the neutral atom, which is Z (the atomic number). The configuration of these electrons follows from the principles of quantum mechanics. The number of electrons in each element's electron shells, particularly the outermost valence shell, is the primary factor in determining its chemical bonding behavior. Hence, it is the atomic number alone that determines the chemical properties of an element; and it is for this reason that an element can be defined as consisting of any mixture of atoms with a given atomic number.
New elements
The quest for new elements is usually described using atomic numbers. All elements with atomic numbers 1 to 118 have been observed. The most recent element discovered was number 117 (tennessine) in 2009. Synthesis of new elements is accomplished by bombarding target atoms of heavy elements with ions, such that the sum of the atomic numbers of the target and ion elements equals the atomic number of the element being created. In general, the half-life of a nuclide becomes shorter as atomic number increases, though undiscovered nuclides with certain "magic" numbers of protons and neutrons may have relatively longer half-lives and comprise an island of stability.
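A sketch of the bookkeeping described above; the target/projectile pairing shown is the one historically used for tennessine (berkelium-249 bombarded with calcium-48), stated here from general knowledge rather than from the text:

```python
# Atomic numbers add in fusion-based element synthesis:
# Z(target) + Z(projectile) = Z(new element).
Z = {"Bk": 97, "Ca": 20, "Ts": 117}  # berkelium, calcium, tennessine

assert Z["Bk"] + Z["Ca"] == Z["Ts"]  # 97 + 20 = 117
print(f'{Z["Bk"]} + {Z["Ca"]} = {Z["Bk"] + Z["Ca"]}')
```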
A hypothetical element composed only of neutrons, neutronium, has also been proposed and would have atomic number 0, but has never been observed.
|
Atoms;Chemical properties;Dimensionless numbers of chemistry;Nuclear physics;Numbers
|
https://en.wikipedia.org/wiki/Anatomy
|
Anatomy () is the branch of morphology concerned with the study of the internal structure of organisms and their parts. Anatomy is a branch of natural science that deals with the structural organization of living things. It is an old science, having its beginnings in prehistoric times. Anatomy is inherently tied to developmental biology, embryology, comparative anatomy, evolutionary biology, and phylogeny, as these are the processes by which anatomy is generated, both over immediate and long-term timescales. Anatomy and physiology, which study the structure and function of organisms and their parts respectively, make a natural pair of related disciplines, and are often studied together. Human anatomy is one of the essential basic sciences that are applied in medicine, and is often studied alongside physiology.
Anatomy is a complex and dynamic field that is constantly evolving as discoveries are made. In recent years, there has been a significant increase in the use of advanced imaging techniques, such as MRI and CT scans, which allow for more detailed and accurate visualizations of the body's structures.
The discipline of anatomy is divided into macroscopic and microscopic parts. Macroscopic anatomy, or gross anatomy, is the examination of an animal's body parts using unaided eyesight. Gross anatomy also includes the branch of superficial anatomy. Microscopic anatomy involves the use of optical instruments in the study of the tissues of various structures, known as histology, and also in the study of cells.
The history of anatomy is characterized by a progressive understanding of the functions of the organs and structures of the human body. Methods have also improved dramatically, advancing from the examination of animals by dissection of carcasses and cadavers (corpses) to 20th-century medical imaging techniques, including X-ray, ultrasound, and magnetic resonance imaging.
Etymology and definition
Derived from the Greek anatomē "dissection" (from anatémnō "I cut up, cut open" from ἀνά aná "up", and τέμνω témnō "I cut"), anatomy is the scientific study of the structure of organisms including their systems, organs and tissues. It includes the appearance and position of the various parts, the materials from which they are composed, and their relationships with other parts. Anatomy is quite distinct from physiology and biochemistry, which deal respectively with the functions of those parts and the chemical processes involved. For example, an anatomist is concerned with the shape, size, position, structure, blood supply and innervation of an organ such as the liver; while a physiologist is interested in the production of bile, the role of the liver in nutrition and the regulation of bodily functions.
The discipline of anatomy can be subdivided into a number of branches, including gross or macroscopic anatomy and microscopic anatomy. Gross anatomy is the study of structures large enough to be seen with the naked eye, and also includes superficial anatomy or surface anatomy, the study by sight of the external body features. Microscopic anatomy is the study of structures on a microscopic scale, along with histology (the study of tissues), and embryology (the study of an organism in its immature condition). Regional anatomy is the study of the interrelationships of all of the structures in a specific body region, such as the abdomen. In contrast, systemic anatomy is the study of the structures that make up a discrete body system—that is, a group of structures that work together to perform a unique body function, such as the digestive system.
Anatomy can be studied using both invasive and non-invasive methods with the goal of obtaining information about the structure and organization of organs and systems. Methods used include dissection, in which a body is opened and its organs studied, and endoscopy, in which a video camera-equipped instrument is inserted through a small incision in the body wall and used to explore the internal organs and other structures. Angiography using X-rays or magnetic resonance angiography are methods to visualize blood vessels.
The term "anatomy" is commonly taken to refer to human anatomy. However, substantially similar structures and tissues are found throughout the rest of the animal kingdom, and the term also includes the anatomy of other animals. The term zootomy is also sometimes used to specifically refer to non-human animals. The structure and tissues of plants are of a dissimilar nature and they are studied in plant anatomy.
Animal tissues
The kingdom Animalia contains multicellular organisms that are heterotrophic and motile (although some have secondarily adopted a sessile lifestyle). Most animals have bodies differentiated into separate tissues and these animals are also known as eumetazoans. They have an internal digestive chamber, with one or two openings; the gametes are produced in multicellular sex organs, and the zygotes include a blastula stage in their embryonic development. Unlike these eumetazoans, the sponges have undifferentiated cells.
Unlike plant cells, animal cells have neither a cell wall nor chloroplasts. Vacuoles, when present, are more in number and much smaller than those in the plant cell. The body tissues are composed of numerous types of cells, including those found in muscles, nerves and skin. Each typically has a cell membrane formed of phospholipids, cytoplasm and a nucleus. All of the different cells of an animal are derived from the embryonic germ layers. Those simpler invertebrates which are formed from two germ layers of ectoderm and endoderm are called diploblastic and the more developed animals whose structures and organs are formed from three germ layers are called triploblastic. All of a triploblastic animal's tissues and organs are derived from the three germ layers of the embryo, the ectoderm, mesoderm and endoderm.
Animal tissues can be grouped into four basic types: connective, epithelial, muscle and nervous tissue.
Connective tissue
Connective tissues are fibrous and made up of cells scattered among inorganic material called the extracellular matrix. Often called fascia (from the Latin "fascia," meaning "band" or "bandage"), connective tissues give shape to organs and hold them in place. The main types are loose connective tissue, adipose tissue, fibrous connective tissue, cartilage and bone. The extracellular matrix contains proteins, the chief and most abundant of which is collagen. Collagen plays a major part in organizing and maintaining tissues. The matrix can be modified to form a skeleton to support or protect the body. An exoskeleton is a thickened, rigid cuticle which is stiffened by mineralization, as in crustaceans, or by the cross-linking of its proteins, as in insects. An endoskeleton is internal and present in all developed animals, as well as in many of those less developed.
Epithelium
Epithelial tissue is composed of closely packed cells, bound to each other by cell adhesion molecules, with little intercellular space. Epithelial cells can be squamous (flat), cuboidal or columnar and rest on a basal lamina, the upper layer of the basement membrane; the lower layer is the reticular lamina, lying next to the connective tissue in the extracellular matrix secreted by the epithelial cells. There are many different types of epithelium, modified to suit a particular function. In the respiratory tract there is a type of ciliated epithelial lining; in the small intestine there are microvilli on the epithelial lining and in the large intestine there are intestinal villi. Skin consists of an outer layer of keratinized stratified squamous epithelium that covers the exterior of the vertebrate body. Keratinocytes make up to 95% of the cells in the skin. The epithelial cells on the external surface of the body typically secrete an extracellular matrix in the form of a cuticle. In simple animals this may just be a coat of glycoproteins. In more advanced animals, many glands are formed of epithelial cells.
Muscle tissue
Muscle cells (myocytes) form the active contractile tissue of the body. Muscle tissue functions to produce force and cause motion, either locomotion or movement within internal organs. Muscle is formed of contractile filaments and is separated into three main types: smooth muscle, skeletal muscle and cardiac muscle. Smooth muscle has no striations when examined microscopically. It contracts slowly but maintains contractility over a wide range of stretch lengths. It is found in such organs as sea anemone tentacles and the body wall of sea cucumbers. Skeletal muscle contracts rapidly but has a limited range of extension. It is found in the movement of appendages and jaws. Obliquely striated muscle is intermediate between the other two. The filaments are staggered and this is the type of muscle found in earthworms that can extend slowly or make rapid contractions. In higher animals striated muscles occur in bundles attached to bone to provide movement and are often arranged in antagonistic sets. Smooth muscle is found in the walls of the uterus, bladder, intestines, stomach, oesophagus, respiratory airways, and blood vessels. Cardiac muscle is found only in the heart, allowing it to contract and pump blood round the body.
Nervous tissue
Nervous tissue is composed of many nerve cells known as neurons which transmit information. In some slow-moving radially symmetrical marine animals such as ctenophores and cnidarians (including sea anemones and jellyfish), the nerves form a nerve net, but in most animals they are organized longitudinally into bundles. In simple animals, receptor neurons in the body wall cause a local reaction to a stimulus. In more complex animals, specialized receptor cells such as chemoreceptors and photoreceptors are found in groups and send messages along neural networks to other parts of the organism. Neurons can be connected together in ganglia. In higher animals, specialized receptors are the basis of sense organs and there is a central nervous system (brain and spinal cord) and a peripheral nervous system. The latter consists of sensory nerves that transmit information from sense organs and motor nerves that influence target organs. The peripheral nervous system is divided into the somatic nervous system which conveys sensation and controls voluntary muscle, and the autonomic nervous system which involuntarily controls smooth muscle, certain glands and internal organs, including the stomach.
Vertebrate anatomy
All vertebrates have a similar basic body plan and at some point in their lives, mostly in the embryonic stage, share the major chordate characteristics: a stiffening rod, the notochord; a dorsal hollow tube of nervous material, the neural tube; pharyngeal arches; and a tail posterior to the anus. The spinal cord is protected by the vertebral column and is above the notochord, and the gastrointestinal tract is below it. Nervous tissue is derived from the ectoderm, connective tissues are derived from mesoderm, and gut is derived from the endoderm. At the posterior end is a tail which continues the spinal cord and vertebrae but not the gut. The mouth is found at the anterior end of the animal, and the anus at the base of the tail. The defining characteristic of a vertebrate is the vertebral column, formed in the development of the segmented series of vertebrae. In most vertebrates the notochord becomes the nucleus pulposus of the intervertebral discs. However, a few vertebrates, such as the sturgeon and the coelacanth, retain the notochord into adulthood. Jawed vertebrates are typified by paired appendages, fins or legs, which may be secondarily lost. The limbs of vertebrates are considered to be homologous because the same underlying skeletal structure was inherited from their last common ancestor. This is one of the arguments put forward by Charles Darwin to support his theory of evolution.
Fish anatomy
The body of a fish is divided into a head, trunk and tail, although the divisions between the three are not always externally visible. The skeleton, which forms the support structure inside the fish, is either made of cartilage, in cartilaginous fish, or bone in bony fish. The main skeletal element is the vertebral column, composed of articulating vertebrae which are lightweight yet strong. The ribs attach to the spine and there are no limbs or limb girdles. The main external features of the fish, the fins, are composed of either bony or soft spines called rays, which with the exception of the caudal fins, have no direct connection with the spine. They are supported by the muscles which compose the main part of the trunk. The heart has two chambers and pumps the blood through the respiratory surfaces of the gills and on round the body in a single circulatory loop. The eyes are adapted for seeing underwater and have only local vision. There is an inner ear but no external or middle ear. Low frequency vibrations are detected by the lateral line system of sense organs that run along the length of the sides of fish, and these respond to nearby movements and to changes in water pressure.
Sharks and rays are basal fish with numerous primitive anatomical features similar to those of ancient fish, including skeletons composed of cartilage. Their bodies tend to be dorso-ventrally flattened, they usually have five pairs of gill slits and a large mouth set on the underside of the head. The dermis is covered with separate dermal placoid scales. They have a cloaca into which the urinary and genital passages open, but not a swim bladder. Cartilaginous fish produce a small number of large, yolky eggs. Some species are ovoviviparous and the young develop internally but others are oviparous and the larvae develop externally in egg cases.
The bony fish lineage shows more derived anatomical traits, often with major evolutionary changes from the features of ancient fish. They have a bony skeleton, are generally laterally flattened, have five pairs of gills protected by an operculum, and a mouth at or near the tip of the snout. The dermis is covered with overlapping scales. Bony fish have a swim bladder which helps them maintain a constant depth in the water column, but not a cloaca. They mostly spawn a large number of small eggs with little yolk which they broadcast into the water column.
Amphibian anatomy
Amphibians are a class of animals comprising frogs, salamanders and caecilians. They are tetrapods, but the caecilians and a few species of salamander have either no limbs or their limbs are much reduced in size. Their main bones are hollow and lightweight and are fully ossified and the vertebrae interlock with each other and have articular processes. Their ribs are usually short and may be fused to the vertebrae. Their skulls are mostly broad and short, and are often incompletely ossified. Their skin contains little keratin and lacks scales, but contains many mucous glands and in some species, poison glands. The hearts of amphibians have three chambers, two atria and one ventricle. They have a urinary bladder and nitrogenous waste products are excreted primarily as urea. Amphibians breathe by means of buccal pumping, a pump action in which air is first drawn into the buccopharyngeal region through the nostrils. These are then closed and the air is forced into the lungs by contraction of the throat. They supplement this with gas exchange through the skin which needs to be kept moist.
In frogs the pelvic girdle is robust and the hind legs are much longer and stronger than the forelimbs. The feet have four or five digits and the toes are often webbed for swimming or have suction pads for climbing. Frogs have large eyes and no tail. Salamanders resemble lizards in appearance; their short legs project sideways, the belly is close to or in contact with the ground and they have a long tail. Caecilians superficially resemble earthworms and are limbless. They burrow by means of zones of muscle contractions which move along the body and they swim by undulating their body from side to side.
Reptile anatomy
Reptiles are a class of animals comprising turtles, tuataras, lizards, snakes and crocodiles. They are tetrapods, but the snakes and a few species of lizard either have no limbs or their limbs are much reduced in size. Their bones are better ossified and their skeletons stronger than those of amphibians. The teeth are conical and mostly uniform in size. The surface cells of the epidermis are modified into horny scales which create a waterproof layer. Reptiles are unable to use their skin for respiration as do amphibians and have a more efficient respiratory system drawing air into their lungs by expanding their chest walls. The heart resembles that of the amphibian but there is a septum which more completely separates the oxygenated and deoxygenated bloodstreams. The reproductive system has evolved for internal fertilization, with a copulatory organ present in most species. The eggs are surrounded by amniotic membranes, which prevent them from drying out, and are laid on land, or develop internally in some species. The bladder is small as nitrogenous waste is excreted as uric acid.
Turtles are notable for their protective shells. They have an inflexible trunk encased in a horny carapace above and a plastron below. These are formed from bony plates embedded in the dermis which are overlain by horny ones and are partially fused with the ribs and spine. The neck is long and flexible and the head and the legs can be drawn back inside the shell. Turtles are vegetarians and the typical reptile teeth have been replaced by sharp, horny plates. In aquatic species, the front legs are modified into flippers.
Tuataras superficially resemble lizards but the lineages diverged in the Triassic period. There is one living species, Sphenodon punctatus. The skull has two openings (fenestrae) on either side and the jaw is rigidly attached to the skull. There is one row of teeth in the lower jaw and this fits between the two rows in the upper jaw when the animal chews. The teeth are merely projections of bony material from the jaw and eventually wear down. The brain and heart are more primitive than those of other reptiles, and the lungs have a single chamber and lack bronchi. The tuatara has a well-developed parietal eye on its forehead.
Lizards have skulls with only one fenestra on each side, the lower bar of bone below the second fenestra having been lost. This results in the jaws being less rigidly attached which allows the mouth to open wider. Lizards are mostly quadrupeds, with the trunk held off the ground by short, sideways-facing legs, but a few species have no limbs and resemble snakes. Lizards have moveable eyelids, eardrums are present and some species have a central parietal eye.
Snakes are closely related to lizards, having branched off from a common ancestral lineage during the Cretaceous period, and they share many of the same features. The skeleton consists of a skull, a hyoid bone, spine and ribs though a few species retain a vestige of the pelvis and rear limbs in the form of pelvic spurs. The bar under the second fenestra has also been lost and the jaws have extreme flexibility allowing the snake to swallow its prey whole. Snakes lack moveable eyelids, the eyes being covered by transparent "spectacle" scales. They do not have eardrums but can detect ground vibrations through the bones of their skull. Their forked tongues are used as organs of taste and smell and some species have sensory pits on their heads enabling them to locate warm-blooded prey.
Crocodilians are large, low-slung aquatic reptiles with long snouts and large numbers of teeth. The head and trunk are dorso-ventrally flattened and the tail is laterally compressed. It undulates from side to side to force the animal through the water when swimming. The tough keratinized scales provide body armour and some are fused to the skull. The nostrils, eyes and ears are elevated above the top of the flat head enabling them to remain above the surface of the water when the animal is floating. Valves seal the nostrils and ears when it is submerged. Unlike other reptiles, crocodilians have hearts with four chambers allowing complete separation of oxygenated and deoxygenated blood.
Bird anatomy
Birds are tetrapods but though their hind limbs are used for walking or hopping, their front limbs are wings covered with feathers and adapted for flight. Birds are endothermic, have a high metabolic rate, a light skeletal system and powerful muscles. The long bones are thin, hollow and very light. Air sac extensions from the lungs occupy the centre of some bones. The sternum is wide and usually has a keel and the caudal vertebrae are fused. There are no teeth and the narrow jaws are adapted into a horn-covered beak. The eyes are relatively large, particularly in nocturnal species such as owls. They face forwards in predators and sideways in ducks.
The feathers are outgrowths of the epidermis and are found in localized bands from where they fan out over the skin. Large flight feathers are found on the wings and tail, contour feathers cover the bird's surface and fine down occurs on young birds and under the contour feathers of water birds. The only cutaneous gland is the single uropygial gland near the base of the tail. This produces an oily secretion that waterproofs the feathers when the bird preens. There are scales on the legs, feet and claws on the tips of the toes.
Mammal anatomy
Mammals are a diverse class of animals, mostly terrestrial but some are aquatic and others have evolved flapping or gliding flight. They mostly have four limbs, but some aquatic mammals have no limbs or limbs modified into fins, and the forelimbs of bats are modified into wings. The legs of most mammals are situated below the trunk, which is held well clear of the ground. The bones of mammals are well ossified and their teeth, which are usually differentiated, are coated in a layer of prismatic enamel. The teeth are shed once (milk teeth) during the animal's lifetime or not at all, as is the case in cetaceans. Mammals have three bones in the middle ear and a cochlea in the inner ear. They are clothed in hair and their skin contains glands which secrete sweat. Some of these glands are specialized as mammary glands, producing milk to feed the young. Mammals breathe with lungs and have a muscular diaphragm separating the thorax from the abdomen which helps them draw air into the lungs. The mammalian heart has four chambers, and oxygenated and deoxygenated blood are kept entirely separate. Nitrogenous waste is excreted primarily as urea.
Mammals are amniotes, and most are viviparous, giving birth to live young. Exceptions to this are the egg-laying monotremes, the platypus and the echidnas of Australia. Most other mammals have a placenta through which the developing foetus obtains nourishment, but in marsupials, the foetal stage is very short and the immature young is born and finds its way to its mother's pouch where it latches on to a teat and completes its development.
Human anatomy
Humans have the overall body plan of a mammal. Humans have a head, neck, trunk (which includes the thorax and abdomen), two arms and hands, and two legs and feet.
Generally, students of certain biological sciences, paramedics, prosthetists and orthotists, physiotherapists, occupational therapists, nurses, podiatrists, and medical students learn gross anatomy and microscopic anatomy from anatomical models, skeletons, textbooks, diagrams, photographs, lectures and tutorials and in addition, medical students generally also learn gross anatomy through practical experience of dissection and inspection of cadavers. The study of microscopic anatomy (or histology) can be aided by practical experience examining histological preparations (or slides) under a microscope.
Human anatomy, physiology and biochemistry are complementary basic medical sciences, which are generally taught to medical students in their first year at medical school. Human anatomy can be taught regionally or systemically; that is, respectively, studying anatomy by bodily regions such as the head and chest, or studying by specific systems, such as the nervous or respiratory systems. The major anatomy textbook, Gray's Anatomy, has been reorganized from a systems format to a regional format, in line with modern teaching methods. A thorough working knowledge of anatomy is required by physicians, especially surgeons and doctors working in some diagnostic specialties, such as histopathology and radiology.
Academic anatomists are usually employed by universities, medical schools or teaching hospitals. They are often involved in teaching anatomy, and research into certain systems, organs, tissues or cells.
Invertebrate anatomy
Invertebrates constitute a vast array of living organisms ranging from the simplest unicellular eukaryotes such as Paramecium to such complex multicellular animals as the octopus, lobster and dragonfly. They constitute about 95% of the animal species. By definition, none of these creatures has a backbone. The cells of single-cell protozoans have the same basic structure as those of multicellular animals but some parts are specialized into the equivalent of tissues and organs. Locomotion is often provided by cilia or flagella or may proceed via the advance of pseudopodia, food may be gathered by phagocytosis, energy needs may be supplied by photosynthesis and the cell may be supported by an endoskeleton or an exoskeleton. Some protozoans can form multicellular colonies.
Metazoans are multicellular organisms, with different groups of cells serving different functions. The most basic types of metazoan tissues are epithelium and connective tissue, both of which are present in nearly all invertebrates. The outer surface of the epidermis is normally formed of epithelial cells and secretes an extracellular matrix which provides support to the organism. An endoskeleton derived from the mesoderm is present in echinoderms, sponges and some cephalopods. Exoskeletons are derived from the epidermis and are composed of chitin in arthropods (insects, spiders, ticks, shrimps, crabs, lobsters). Calcium carbonate constitutes the shells of molluscs, brachiopods and some tube-building polychaete worms, and silica forms the exoskeleton of the microscopic diatoms and radiolaria. Other invertebrates may have no rigid structures but the epidermis may secrete a variety of surface coatings such as the pinacoderm of sponges, the gelatinous cuticle of cnidarians (polyps, sea anemones, jellyfish) and the collagenous cuticle of annelids. The outer epithelial layer may include cells of several types including sensory cells, gland cells and stinging cells. There may also be protrusions such as microvilli, cilia, bristles, spines and tubercles.
Marcello Malpighi, the father of microscopical anatomy, discovered that plants had tubules similar to those he saw in insects like the silk worm. He observed that when a ring-like portion of bark was removed on a trunk a swelling occurred in the tissues above the ring, and he unmistakably interpreted this as growth stimulated by food coming down from the leaves, and being captured above the ring.
Arthropod anatomy
Arthropods comprise the largest phylum of invertebrates in the animal kingdom with over a million known species.
Insects possess segmented bodies supported by a hard-jointed outer covering, the exoskeleton, made mostly of chitin. The segments of the body are organized into three distinct parts, a head, a thorax and an abdomen. The head typically bears a pair of sensory antennae, a pair of compound eyes, one to three simple eyes (ocelli) and three sets of modified appendages that form the mouthparts. The thorax has three pairs of segmented legs, one pair each for the three segments that compose the thorax and one or two pairs of wings. The abdomen is composed of eleven segments, some of which may be fused and houses the digestive, respiratory, excretory and reproductive systems. There is considerable variation between species and many adaptations to the body parts, especially wings, legs, antennae and mouthparts.
Spiders, a class of arachnids, have four pairs of legs and a body of two segments—a cephalothorax and an abdomen. Spiders have no wings and no antennae. They have mouthparts called chelicerae which are often connected to venom glands, as most spiders are venomous. They have a second pair of appendages called pedipalps attached to the cephalothorax. These have similar segmentation to the legs and function as taste and smell organs. At the end of each male pedipalp is a spoon-shaped cymbium that acts to support the copulatory organ.
Other branches of anatomy
Surface anatomy is important as the study of anatomical landmarks that can be readily seen from the exterior contours of the body. It enables medics and veterinarians to gauge the position and anatomy of the associated deeper structures. Superficial is a directional term that indicates that structures are located relatively close to the surface of the body.
Comparative anatomy relates to the comparison of anatomical structures (both gross and microscopic) in different animals.
Artistic anatomy relates to anatomic studies of body proportions for artistic reasons.
History
Ancient
In 1600 BCE, the Edwin Smith Papyrus, an Ancient Egyptian medical text, described the heart and its vessels, as well as the brain and its meninges and cerebrospinal fluid, and the liver, spleen, kidneys, uterus and bladder. It showed the blood vessels diverging from the heart. The Ebers Papyrus () features a "treatise on the heart", with vessels carrying all the body's fluids to or from every member of the body.
Ancient Greek anatomy and physiology underwent great changes and advances throughout the early medieval world. Over time, this medical practice expanded due to a continually developing understanding of the functions of organs and structures in the body. Phenomenal anatomical observations of the human body were made, which contributed to the understanding of the brain, eye, liver, reproductive organs, and nervous system.
The Hellenistic Egyptian city of Alexandria was the stepping-stone for Greek anatomy and physiology. Alexandria not only housed the biggest library for medical records and books of the liberal arts in the world during the time of the Greeks but was also home to many medical practitioners and philosophers. Great patronage of the arts and sciences from the Ptolemaic dynasty of Egypt helped raise Alexandria up, further rivalling other Greek states' cultural and scientific achievements.
Some of the most striking advances in early anatomy and physiology took place in Hellenistic Alexandria. Two of the most famous anatomists and physiologists of the third century BCE were Herophilus and Erasistratus. These two physicians helped pioneer human dissection for medical research, using the cadavers of condemned criminals, which was considered taboo until the Renaissance—Herophilus was recognized as the first person to perform systematic dissections. Herophilus became known for his anatomical works, making impressive contributions to many branches of anatomy and many other aspects of medicine. Some of the works included classifying the system of the pulse, the discovery that human arteries had thicker walls than veins, and that the atria were parts of the heart. Herophilus's knowledge of the human body has provided vital input towards understanding the brain, eye, liver, reproductive organs, and nervous system and characterizing the course of the disease. Erasistratus accurately described the structure of the brain, including the cavities and membranes, and made a distinction between its cerebrum and cerebellum. During his study in Alexandria, Erasistratus was particularly concerned with studies of the circulatory and nervous systems. He could distinguish the human body's sensory and motor nerves and believed air entered the lungs and heart, which was then carried throughout the body. His distinction between the arteries and veins, with the arteries carrying the air through the body while the veins carried the blood from the heart, was a great anatomical discovery. Erasistratus was also responsible for naming and describing the function of the epiglottis and the heart's valves, including the tricuspid. During the third century, Greek physicians were able to differentiate nerves from blood vessels and tendons and to realize that the nerves convey neural impulses. It was Herophilus who made the point that damage to motor nerves induced paralysis. Herophilus named the meninges and ventricles in the brain, appreciated the division between cerebellum and cerebrum and recognized that the brain was the "seat of intellect" and not a "cooling chamber" as propounded by Aristotle. Herophilus is also credited with describing the optic, oculomotor, motor division of the trigeminal, facial, vestibulocochlear and hypoglossal nerves.
Remarkable advances were also made during the third century BCE in the anatomy of the digestive and reproductive systems. Herophilus discovered and described not only the salivary glands but also the small intestine and liver. He showed that the uterus is a hollow organ and described the ovaries and uterine tubes. He recognized that spermatozoa were produced by the testes and was the first to identify the prostate gland.
The anatomy of the muscles and skeleton is described in the Hippocratic Corpus, an Ancient Greek medical work written by unknown authors. Aristotle described vertebrate anatomy based on animal dissection, and in the 4th century BCE Praxagoras identified the difference between arteries and veins. In the late 4th and early 3rd centuries BCE, Herophilos and Erasistratus produced more accurate anatomical descriptions, based on the vivisection of criminals in Alexandria during the Ptolemaic period.
In the 2nd century, Galen of Pergamum, an anatomist, clinician, writer, and philosopher, wrote the final and highly influential anatomy treatise of ancient times. He compiled existing knowledge and studied anatomy through the dissection of animals. He was one of the first experimental physiologists through his vivisection experiments on animals. Galen's drawings, based mostly on dog anatomy, became effectively the only anatomical textbook for the next thousand years. His work was known to Renaissance doctors only through Islamic Golden Age medicine until it was translated from Greek sometime in the 15th century.
Medieval to early modern
Anatomy developed little from classical times until the sixteenth century; as the historian Marie Boas writes, "Progress in anatomy before the sixteenth century is as mysteriously slow as its development after 1500 is startlingly rapid". Between 1275 and 1326, the anatomists Mondino de Luzzi, Alessandro Achillini and Antonio Benivieni at Bologna carried out the first systematic human dissections since ancient times. Mondino's Anatomy of 1316 was the first textbook in the medieval rediscovery of human anatomy. It describes the body in the order followed in Mondino's dissections, starting with the abdomen, thorax, head, and limbs. It was the standard anatomy textbook for the next century.
Leonardo da Vinci (1452–1519) was trained in anatomy by Andrea del Verrocchio. He made use of his anatomical knowledge in his artwork, making many sketches of skeletal structures, muscles and organs of humans and other vertebrates that he dissected.
Andreas Vesalius (1514–1564), professor of anatomy at the University of Padua, is considered the founder of modern human anatomy. Originally from Brabant, Vesalius published the influential book De humani corporis fabrica ("the structure of the human body"), a large format book in seven volumes, in 1543. The accurate and intricately detailed illustrations, often in allegorical poses against Italianate landscapes, are thought to have been made by the artist Jan van Calcar, a pupil of Titian.
In England, anatomy was the subject of the first public lectures given in any science; these were provided by the Company of Barbers and Surgeons in the 16th century, joined in 1583 by the Lumleian lectures in surgery at the Royal College of Physicians.
Late modern
Medical schools began to be set up in the United States towards the end of the 18th century. Classes in anatomy needed a continual stream of cadavers for dissection, and these were difficult to obtain. Philadelphia, Baltimore, and New York were all renowned for body snatching activity as criminals raided graveyards at night, removing newly buried corpses from their coffins. A similar problem existed in Britain where demand for bodies became so great that grave-raiding and even anatomy murder were practised to obtain cadavers. Some graveyards were, in consequence, protected with watchtowers. The practice was halted in Britain by the Anatomy Act of 1832, while in the United States, similar legislation was enacted after the physician William S. Forbes of Jefferson Medical College was found guilty in 1882 of "complicity with resurrectionists in the despoliation of graves in Lebanon Cemetery".
The teaching of anatomy in Britain was transformed by Sir John Struthers, Regius Professor of Anatomy at the University of Aberdeen from 1863 to 1889. He was responsible for setting up the system of three years of "pre-clinical" academic teaching in the sciences underlying medicine, including especially anatomy. This system lasted until the reform of medical training in 1993 and 2003. As well as teaching, he collected many vertebrate skeletons for his museum of comparative anatomy, published over 70 research papers, and became famous for his public dissection of the Tay Whale. From 1822 the Royal College of Surgeons regulated the teaching of anatomy in medical schools. Medical museums provided examples in comparative anatomy, and were often used in teaching. Ignaz Semmelweis investigated puerperal fever and discovered its cause. He noticed that the frequently fatal fever occurred more often in mothers examined by medical students than by midwives. The students went from the dissecting room to the hospital ward and examined women in childbirth. Semmelweis showed that when the trainees washed their hands in chlorinated lime before each clinical examination, the incidence of puerperal fever among the mothers could be reduced dramatically.
Before the modern medical era, the primary means for studying the internal structures of the body were dissection of the dead and inspection, palpation, and auscultation of the living. The advent of microscopy opened up an understanding of the building blocks that constituted living tissues. Technical advances in the development of achromatic lenses increased the resolving power of the microscope, and around 1839, Matthias Jakob Schleiden and Theodor Schwann identified that cells were the fundamental unit of organization of all living things. The study of small structures involved passing light through them, and the microtome was invented to provide sufficiently thin slices of tissue to examine. Staining techniques using artificial dyes were established to help distinguish between different tissue types. Advances in the fields of histology and cytology began in the late 19th century along with advances in surgical techniques allowing for the painless and safe removal of biopsy specimens. The invention of the electron microscope brought a significant advance in resolution power and allowed research into the ultrastructure of cells and the organelles and other structures within them. About the same time, in the 1950s, the use of X-ray diffraction for studying the crystal structures of proteins, nucleic acids, and other biological molecules gave rise to a new field of molecular anatomy.
Equally important advances have occurred in non-invasive techniques for examining the body's interior structures. X-rays can be passed through the body and used in medical radiography and fluoroscopy to differentiate interior structures that have varying degrees of opaqueness. Magnetic resonance imaging, computed tomography, and ultrasound imaging have all enabled the examination of internal structures in unprecedented detail, far beyond the imagination of earlier generations.
|
;Anatomical terminology;Branches of biology;Morphology (biology)
|
https://en.wikipedia.org/wiki/Ambiguity
|
Ambiguity is the type of meaning in which a phrase, statement, or resolution is not explicitly defined, making for several interpretations; others describe it as a concept or statement that has no real reference. A common aspect of ambiguity is uncertainty. It is thus an attribute of any idea or statement whose intended meaning cannot be definitively resolved, according to a rule or process with a finite number of steps. (The prefix ambi- reflects the idea of "two", as in "two meanings").
The concept of ambiguity is generally contrasted with vagueness. In ambiguity, specific and distinct interpretations are permitted (although some may not be immediately obvious), whereas with vague information it is difficult to form any interpretation at the desired level of specificity.
Linguistic forms
Lexical ambiguity is contrasted with semantic ambiguity. The former represents a choice between a finite number of known and meaningful context-dependent interpretations. The latter represents a choice between any number of possible interpretations, none of which may have a standard agreed-upon meaning. This form of ambiguity is closely related to vagueness.
Ambiguity in human language is argued to reflect principles of efficient communication. Languages that communicate efficiently will avoid sending information that is redundant with information provided in the context. This can be shown mathematically to result in a system that is ambiguous when context is neglected. In this way, ambiguity is viewed as a generally useful feature of a linguistic system.
Linguistic ambiguity can be a problem in law, because the interpretation of written documents and oral agreements is often of paramount importance.
Lexical ambiguity
The lexical ambiguity of a word or phrase applies to it having more than one meaning in the language to which the word belongs. "Meaning" here refers to whatever should be represented by a good dictionary. For instance, the word "bank" has several distinct lexical definitions, including "financial institution" and "edge of a river". Or consider "apothecary". One could say "I bought herbs from the apothecary". This could mean one actually spoke to the apothecary (pharmacist) or went to the apothecary (pharmacy).
The context in which an ambiguous word is used often makes it clearer which of the meanings is intended. If, for instance, someone says "I put $100 in the bank", most people would not think someone used a shovel to dig in the mud. However, some linguistic contexts do not provide sufficient information to make a used word clearer.
Lexical ambiguity can be addressed by algorithmic methods that automatically associate the appropriate meaning with a word in context, a task referred to as word-sense disambiguation.
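For instance, a minimal sketch of one such method, the classic Lesk algorithm, which picks the dictionary sense whose gloss shares the most words with the surrounding context; this assumes NLTK and its WordNet corpus are installed, and the example sentence is an illustrative assumption:

from nltk.wsd import lesk

# Context words for the ambiguous word "bank"; the sentence is illustrative.
context = "I deposited my paycheck at the bank on Monday".split()
sense = lesk(context, "bank")  # WordNet synset whose gloss best overlaps the context
if sense is not None:
    print(sense.name(), "-", sense.definition())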
The use of multi-defined words requires the author or speaker to clarify their context, and sometimes to elaborate on their specific intended meaning (in which case, a less ambiguous term should have been used). The goal of clear concise communication is that the receivers have no misunderstanding about what was meant to be conveyed. An exception could be a politician whose "weasel words" and obfuscation are necessary to gain support from multiple constituents with mutually exclusive conflicting desires. Ambiguity is a powerful political tool.
More problematic are words whose multiple meanings express closely related concepts. "Good", for example, can mean "useful" or "functional" (That's a good hammer), "exemplary" (She's a good student), "pleasing" (This is good soup), "moral" (a good person versus the lesson to be learned from a story), "righteous", etc. "I have a good daughter" is not clear about which sense is intended. The various ways to apply prefixes and suffixes can also create ambiguity ("unlockable" can mean "capable of being opened" or "impossible to lock").
Semantic and syntactic ambiguity
Semantic ambiguity occurs when a word, phrase or sentence, taken out of context, has more than one interpretation. In "We saw her duck" (example due to Richard Nordquist), the words "her duck" can refer either
to the person's bird (the noun "duck", modified by the possessive pronoun "her"), or
to a motion she made (the verb "duck", the subject of which is the objective pronoun "her", object of the verb "saw").
Syntactic ambiguity arises when a sentence can have two (or more) different meanings because of the structure of the sentence—its syntax. This is often due to a modifying expression, such as a prepositional phrase, the application of which is unclear. "He ate the cookies on the couch", for example, could mean that he ate those cookies that were on the couch (as opposed to those that were on the table), or it could mean that he was sitting on the couch when he ate the cookies. "To get in, you will need an entrance fee of $10 or your voucher and your driver's license." This could mean that you need EITHER ten dollars OR BOTH your voucher and your license. Or it could mean that you need your license AND you need EITHER ten dollars OR a voucher. Only rewriting the sentence or adding appropriate punctuation can resolve a syntactic ambiguity.
For the notion of, and theoretic results about, syntactic ambiguity in artificial, formal languages (such as computer programming languages), see Ambiguous grammar.
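As a concrete sketch, assuming NLTK is available, the toy grammar below is ambiguous in exactly this way: the parser finds two trees for "he ate the cookies on the couch", one attaching the prepositional phrase to the verb phrase and one to the noun phrase.

import nltk

# An ambiguous grammar: NP -> NP PP and VP -> VP PP both allow the PP to attach.
grammar = nltk.CFG.fromstring("""
S -> NP VP
VP -> V NP | VP PP
NP -> Pro | Det N | NP PP
PP -> P NP
Pro -> 'he'
Det -> 'the'
N -> 'cookies' | 'couch'
V -> 'ate'
P -> 'on'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("he ate the cookies on the couch".split()):
    print(tree)  # two distinct trees are printed, one per reading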
Usually, semantic and syntactic ambiguity go hand in hand. The sentence "We saw her duck" is also syntactically ambiguous. Conversely, a sentence like "He ate the cookies on the couch" is also semantically ambiguous. Rarely, but occasionally, the different parsings of a syntactically ambiguous phrase result in the same meaning. For example, the command "Cook, cook!" can be parsed as "Cook (noun used as vocative), cook (imperative verb form)!", but also as "Cook (imperative verb form), cook (noun used as vocative)!". It is more common that a syntactically unambiguous phrase has a semantic ambiguity; for example, the lexical ambiguity in "Your boss is a funny man" is purely semantic, leading to the response "Funny ha-ha or funny peculiar?"
Spoken language can contain many more types of ambiguities that are called phonological ambiguities, where there is more than one way to compose a set of sounds into words. For example, "ice cream" and "I scream". Such ambiguity is generally resolved according to the context. A mishearing of such, based on incorrectly resolved ambiguity, is called a mondegreen.
Philosophy
Philosophers (and other users of logic) spend a lot of time and effort searching for and removing (or intentionally adding) ambiguity in arguments because it can lead to incorrect conclusions and can be used to deliberately conceal bad arguments. For example, a politician might say, "I oppose taxes which hinder economic growth", an example of a glittering generality. Some will think they oppose taxes in general because they hinder economic growth. Others may think they oppose only those taxes that they believe will hinder economic growth. In writing, the sentence can be rewritten to reduce possible misinterpretation, either by adding a comma after "taxes" (to convey the first sense) or by changing "which" to "that" (to convey the second sense) or by rewriting it in other ways. The devious politician hopes that each constituent will interpret the statement in the most desirable way, and think the politician supports everyone's opinion. However, the opposite can also be true—an opponent can turn a positive statement into a bad one if the speaker uses ambiguity (intentionally or not). The logical fallacies of amphiboly and equivocation rely heavily on the use of ambiguous words and phrases.
In continental philosophy (particularly phenomenology and existentialism), there is much greater tolerance of ambiguity, as it is generally seen as an integral part of the human condition. Martin Heidegger argued that the relation between the subject and object is ambiguous, as is the relation of mind and body, and part and whole. In Heidegger's phenomenology, Dasein is always in a meaningful world, but there is always an underlying background for every instance of signification. Thus, although some things may be certain, they have little to do with Dasein's sense of care and existential anxiety, e.g., in the face of death. In calling his work Being and Nothingness an "essay in phenomenological ontology" Jean-Paul Sartre follows Heidegger in defining the human essence as ambiguous, or relating fundamentally to such ambiguity. Simone de Beauvoir tries to base an ethics on Heidegger's and Sartre's writings (The Ethics of Ambiguity), where she highlights the need to grapple with ambiguity: "as long as there have been philosophers and they have thought, most of them have tried to mask it ... And the ethics which they have proposed to their disciples has always pursued the same goal. It has been a matter of eliminating the ambiguity by making oneself pure inwardness or pure externality, by escaping from the sensible world or being engulfed by it, by yielding to eternity or enclosing oneself in the pure moment." Ethics cannot be based on the authoritative certainty given by mathematics and logic, or prescribed directly from the empirical findings of science. She states: "Since we do not succeed in fleeing it, let us, therefore, try to look the truth in the face. Let us try to assume our fundamental ambiguity. It is in the knowledge of the genuine conditions of our life that we must draw our strength to live and our reason for acting". Other continental philosophers suggest that concepts such as life, nature, and sex are ambiguous. Corey Anton has argued that we cannot be certain what is separate from or unified with something else: language, he asserts, divides what is not, in fact, separate. Following Ernest Becker, he argues that the desire to 'authoritatively disambiguate' the world and existence has led to numerous ideologies and historical events such as genocide. On this basis, he argues that ethics must focus on 'dialectically integrating opposites' and balancing tension, rather than seeking a priori validation or certainty. Like the existentialists and phenomenologists, he sees the ambiguity of life as the basis of creativity.
Literature and rhetoric
In literature and rhetoric, ambiguity can be a useful tool. Groucho Marx's classic joke depends on a grammatical ambiguity for its humor, for example: "Last night I shot an elephant in my pajamas. How he got in my pajamas, I'll never know". Songs and poetry often rely on ambiguous words for artistic effect, as in the song title "Don't It Make My Brown Eyes Blue" (where "blue" can refer to the color, or to sadness).
In the narrative, ambiguity can be introduced in several ways: motive, plot, character. F. Scott Fitzgerald uses the latter type of ambiguity with notable effect in his novel The Great Gatsby.
Mathematical notation
Mathematical notation is a helpful tool that eliminates a lot of misunderstandings associated with natural language in physics and other sciences. Nonetheless, some inherent ambiguities of lexical, syntactic, and semantic origin persist in mathematical notation.
Names of functions
The ambiguity in the style of writing a function should not be confused with a multivalued function, which can (and should) be defined in a deterministic and unambiguous way. Several special functions still do not have established notations. Usually, the conversion to another notation requires scaling the argument or the resulting value; sometimes, the same name of the function is used, causing confusion. Examples of such under-established functions:
Sinc function
Elliptic integral of the third kind; when translating an elliptic integral from MAPLE to Mathematica, one should replace the second argument by its square; when dealing with complex values, this may cause problems.
Exponential integral
Hermite polynomial
Expressions
Ambiguous expressions often appear in physical and mathematical texts.
It is common practice to omit multiplication signs in mathematical expressions. Also, it is common to give the same name to a variable and a function, for example, f = f(x). Then, if one sees f = f(y + 1), there is no way to distinguish whether it means f multiplied by (y + 1) or the function f evaluated at an argument equal to y + 1. In each case of use of such notations, the reader is supposed to be able to perform the deduction and reveal the true meaning.
Creators of algorithmic languages try to avoid ambiguities. Many algorithmic languages (such as C++ and Fortran) require the character * as a symbol of multiplication. The Wolfram Language used in Mathematica allows the user to omit the multiplication symbol, but requires square brackets to indicate the argument of a function; square brackets are not allowed for grouping of expressions. Fortran, in addition, does not allow use of the same name (identifier) for different objects, for example, a function and a variable; in particular, the expression f = f(x) is qualified as an error.
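The hazard can be made visible in a language that, unlike Fortran, permits such rebinding; a minimal Python sketch:

# Illustration of the f = f(x) ambiguity in a language that allows rebinding.
def f(x):
    return x + 1

f = f(2)        # legal in Python, but the name f is now the integer 3
print(f)        # 3
try:
    f(4)        # the function meaning of f no longer exists
except TypeError as err:
    print(err)  # 'int' object is not callable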
The order of operations may depend on the context. In most programming languages, the operations of division and multiplication have equal priority and are executed from left to right. Until the last century, many publications assumed that multiplication is performed first, for example, a/bc was interpreted as a/(bc); in this case, the insertion of parentheses is required when translating the formulas to an algorithmic language. In addition, it is common to write an argument of a function without parentheses, which also may lead to ambiguity.
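A brief Python sketch of the left-to-right convention; the older "multiplication first" reading must be parenthesized explicitly:

# Division and multiplication have equal priority and associate left to right,
# so a / b * c means (a / b) * c, not a / (b * c).
a, b, c = 12.0, 3.0, 2.0
print(a / b * c)    # 8.0
print(a / (b * c))  # 2.0 -- the older "multiplication first" reading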
In the scientific journal style, one uses roman letters to denote elementary functions, whereas variables are written using italics.
For example, in mathematical journals the expression sin, written in italic letters, does not denote the sine function but the product of the three variables s, i and n, although in the informal notation of a slide presentation it may stand for the sine function.
Commas in multi-component subscripts and superscripts are sometimes omitted; this is also potentially ambiguous notation.
For example, in the notation T_mnk, the reader can only infer from the context whether it means a single-index object taken with the subscript equal to the product of the variables m, n and k, or an indication of a trivalent tensor.
Examples of potentially confusing ambiguous mathematical expressions
An expression such as sin²α/2 can be understood to mean either (sin(α/2))² or (sin α)²/2. Often the author's intention can be understood from the context, in cases where only one of the two makes sense, but an ambiguity like this should be avoided, for example by writing sin²(α/2) or (1/2)sin²α.
The expression sin⁻¹α means arcsin(α) in several texts, though it might be thought to mean (sin α)⁻¹, since sinⁿα commonly means (sin α)ⁿ. Conversely, sin²α might seem to mean sin(sin α), as this exponentiation notation usually denotes function iteration: in general, f²(x) means f(f(x)). However, for trigonometric and hyperbolic functions, this notation conventionally means exponentiation of the result of function application.
The expression a/2b can be interpreted as meaning (a/2)b; however, it is more commonly understood to mean a/(2b).
Notations in quantum optics and quantum mechanics
It is common to define the coherent states in quantum optics with |α⟩ and states with a fixed number of photons with |n⟩. Then, there is an "unwritten rule": the state is coherent if there are more Greek characters than Latin characters in the argument, and an n-photon state if the Latin characters dominate. The ambiguity becomes even worse if |x⟩ is used for the states with a certain value of the coordinate, and |p⟩ means the state with a certain value of the momentum, as may be the case in books on quantum mechanics. Such ambiguities easily lead to confusion, especially if some normalized dimensionless variables are used. The expression |1⟩ may mean a state with a single photon, or the coherent state with mean amplitude equal to 1, or a state with momentum equal to unity, and so on. The reader is supposed to guess from the context.
Ambiguous terms in physics and mathematics
Some physical quantities do not yet have established notations; their value (and sometimes even their dimension, as in the case of the Einstein coefficients) depends on the system of notations. Many terms are ambiguous. Each use of an ambiguous term should be preceded by a definition suitable for the specific case. Just as Ludwig Wittgenstein states in Tractatus Logico-Philosophicus: "... Only in the context of a proposition has a name meaning."
A highly confusing term is gain. For example, the sentence "the gain of a system should be doubled", without context, means close to nothing.
It may mean that the ratio of the output voltage of an electric circuit to the input voltage should be doubled.
It may mean that the ratio of the output power of an electric or optical circuit to the input power should be doubled.
It may mean that the gain of the laser medium should be doubled, for example, doubling the population of the upper laser level in a quasi-two level system (assuming negligible absorption of the ground-state).
The term intensity is ambiguous when applied to light. The term can refer to any of irradiance, luminous intensity, radiant intensity, or radiance, depending on the background of the person using the term.
Confusion may also arise from the use of atomic percent as a measure of the concentration of a dopant, or of resolution as a measure of the size of the smallest detail of an imaging system that can still be resolved against the background of statistical noise. See also Accuracy and precision.
The Berry paradox arises as a result of systematic ambiguity in the meaning of terms such as "definable" or "nameable". Terms of this kind give rise to vicious circle fallacies. Other terms with this type of ambiguity are: satisfiable, true, false, function, property, class, relation, cardinal, and ordinal.
Mathematical interpretation of ambiguity
In mathematics and logic, ambiguity can be considered to be an instance of the logical concept of underdetermination—for example, x = y leaves open what the value of x is—while overdetermination, except for redundant repetitions like x = 1, x = 1, is a self-contradiction, also called inconsistency, paradoxicalness, or oxymoron, or in mathematics an inconsistent system—such as x = 2, x = 3, which has no solution.
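A minimal sketch of both notions, assuming SymPy is available:

# Underdetermination vs. overdetermination, sketched with SymPy.
from sympy import Eq, solve, symbols

x, y = symbols("x y")
print(solve(Eq(x, y), x))              # [y]: underdetermined, x is only fixed relative to y
print(solve([Eq(x, 2), Eq(x, 3)], x))  # []: inconsistent (overdetermined) system, no solution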
Logical ambiguity and self-contradiction is analogous to visual ambiguity and impossible objects, such as the Necker cube and impossible cube, or many of the drawings of M. C. Escher.
Constructed language
Some languages have been created with the intention of avoiding ambiguity, especially lexical ambiguity. Lojban and Loglan are two related languages that have been created for this purpose, addressing syntactic ambiguity as well. The languages can be both spoken and written. These languages are intended to provide greater technical precision than large natural languages, although historically, such attempts at language improvement have been criticized. Languages composed from many diverse sources contain much ambiguity and inconsistency. The many exceptions to syntax and semantic rules are time-consuming and difficult to learn.
Biology
In structural biology, ambiguity has been recognized as a problem for studying protein conformations. The analysis of a protein three-dimensional structure consists in dividing the macromolecule into subunits called domains. The difficulty of this task arises from the fact that different definitions of what a domain is can be used (e.g. folding autonomy, function, thermodynamic stability, or domain motions), which sometimes results in a single protein having different—yet equally valid—domain assignments.
Christianity and Judaism
Christianity and Judaism employ the concept of paradox synonymously with "ambiguity". Many Christians and Jews endorse Rudolf Otto's description of the sacred as 'mysterium tremendum et fascinans', the awe-inspiring mystery that fascinates humans. The apocryphal Book of Judith is noted for the "ingenious ambiguity" expressed by its heroine; for example, she says to the villain of the story, Holofernes, "my lord will not fail to achieve his purposes", without specifying whether my lord refers to the villain or to God.
The orthodox Catholic writer G. K. Chesterton regularly employed paradox to tease out the meanings in common concepts that he found ambiguous or to reveal meaning often overlooked or forgotten in common phrases: the title of one of his most famous books, Orthodoxy (1908), itself employed such a paradox.
Music
In music, pieces or sections that confound expectations and may be or are interpreted simultaneously in different ways are ambiguous, such as some polytonality, polymeter, other ambiguous meters or rhythms, and ambiguous phrasing, or (Stein 2005, p. 79) any aspect of music. The music of Africa is often purposely ambiguous. To quote Sir Donald Francis Tovey (1935, p. 195), "Theorists are apt to vex themselves with vain efforts to remove uncertainty just where it has a high aesthetic value."
Visual art
In visual art, certain images are visually ambiguous, such as the Necker cube, which can be interpreted in two ways. Perceptions of such objects remain stable for a time, then may flip, a phenomenon called multistable perception.
The opposite of such ambiguous images are impossible objects.
Pictures or photographs may also be ambiguous at the semantic level: the visual image is unambiguous, but the meaning and narrative may be ambiguous: is a certain facial expression one of excitement or fear, for instance?
Social psychology and the bystander effect
In social psychology, ambiguity is a factor used in determining people's responses to various situations. High levels of ambiguity in an emergency (e.g. an unconscious man lying on a park bench) make witnesses less likely to offer any sort of assistance, due to the fear that they may have misinterpreted the situation and acted unnecessarily. Conversely, non-ambiguous emergencies (e.g. an injured person verbally asking for help) elicit more consistent intervention and assistance. With regard to the bystander effect, studies have shown that emergencies deemed ambiguous trigger the appearance of the classic bystander effect (wherein more witnesses decrease the likelihood of any of them helping) far more than non-ambiguous emergencies.
Computer science
In computer science, the SI prefixes kilo-, mega- and giga- were historically used in certain contexts to mean the first three powers of 1024 (1024, 1024² and 1024³), contrary to the metric system, in which these prefixes unambiguously mean one thousand, one million, and one billion. This usage is particularly prevalent with electronic memory devices (e.g. DRAM) addressed directly by a binary machine register, where a decimal interpretation makes no practical sense.
Subsequently, the Ki, Mi, and Gi prefixes were introduced so that binary prefixes could be written explicitly, also rendering k, M, and G unambiguous in texts conforming to the new standard—this led to a new ambiguity in engineering documents lacking outward trace of the binary prefixes (necessarily indicating the new style) as to whether the usage of k, M, and G remains ambiguous (old style) or not (new style). 1 M (where M is ambiguously 1,000,000 or 1,048,576) is less uncertain than the engineering value 1.0 × 10⁶ (defined to designate the interval 950,000 to 1,050,000). As non-volatile storage devices begin to exceed 1 GB in capacity (where the ambiguity begins to routinely impact the second significant digit), GB and TB almost always mean 10⁹ and 10¹² bytes.
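A short Python sketch of how the two readings diverge; the nominal "512 GB" device is an illustrative assumption:

# The same byte count under decimal (SI) and binary (IEC) prefixes.
size = 512 * 10**9                    # a drive marketed as "512 GB"
print(size / 10**9, "GB (SI)")        # 512.0
print(round(size / 2**30, 1), "GiB")  # 476.8 -- as a binary-reporting OS shows it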
|
;Concepts in epistemology;Formal semantics (natural language);Mathematical notation;Semantics
|
https://en.wikipedia.org/wiki/Aardvark
|
Aardvarks (Orycteropus afer) are medium-sized, burrowing, nocturnal mammals native to Africa. Aardvarks are the only living species of the family Orycteropodidae and the order Tubulidentata. They have a long proboscis, similar to a pig's snout, which is used to sniff out food.
They are afrotheres, a clade that also includes elephants, manatees, and hyraxes.
They are found over much of the southern two-thirds of the African continent, avoiding areas that are mainly rocky. Nocturnal feeders, aardvarks subsist on ants and termites by using their sharp claws and powerful legs to dig the insects out of their hills. Aardvarks also dig to create burrows in which to live and rear their young.
Name and taxonomy
Name
The aardvark is sometimes colloquially called the "African ant bear", "anteater" (not to be confused with the South American anteaters), or the "Cape anteater" after the Cape of Good Hope.
The name "aardvark" is Afrikaans () and comes from earlier Afrikaans . It means "earth pig" or "ground pig" (: , : ), because of its burrowing habits.
The name Orycteropus means "burrowing foot", and the name afer refers to Africa. The name of the aardvark's order, Tubulidentata, comes from the tubule-style teeth.
Taxonomy
The aardvark is not closely related to the pig; rather, it is the sole extant representative of the obscure mammalian order Tubulidentata, in which it is usually considered to form one variable species of the genus Orycteropus, the sole surviving genus in the family Orycteropodidae. The aardvark is not closely related to the South American anteater, despite sharing some characteristics and a superficial resemblance. The similarities are the outcome of convergent evolution. The closest living relatives of the aardvark are the elephant shrews, Tenrecidae, and golden moles. Along with sirenians, hyraxes, elephants, and their extinct relatives, these animals form the superorder Afrotheria. Studies of the brain have shown similarities with Condylarthra.
Evolutionary history
Based on his study of fossils, Bryan Patterson has concluded that early relatives of the aardvark appeared in Africa around the end of the Paleocene. The ptolemaiidans, a mysterious clade of mammals with uncertain affinities, may actually be stem-aardvarks, either as a sister clade to Tubulidentata or as a grade leading to true tubulidentates.
The first unambiguous tubulidentate was probably Myorycteropus africanus from Kenyan Miocene deposits. The earliest example from the genus Orycteropus was Orycteropus mauritanicus, found in Algeria in deposits from the middle Miocene, with an equally old version found in Kenya. Fossils of the aardvark have been dated to 5 million years ago and have been located throughout Europe and the Near East.
The mysterious Pleistocene Plesiorycteropus from Madagascar was originally thought to be a tubulidentate that was descended from ancestors that entered the island during the Eocene. However, a number of subtle anatomical differences coupled with recent molecular evidence now lead researchers to believe that Plesiorycteropus is a relative of golden moles and tenrecs that achieved an aardvark-like appearance and ecological niche through convergent evolution.
Subspecies
Seventeen poorly defined subspecies of the aardvark have been listed:
Orycteropus afer afer (Southern aardvark)
O. a. adametzi Grote, 1921 (Western aardvark)
O. a. aethiopicus Sundevall, 1843
O. a. angolensis Zukowsky & Haltenorth, 1957
O. a. erikssoni Lönnberg, 1906
O. a. faradjius Hatt, 1932
O. a. haussanus Matschie, 1900
O. a. kordofanicus Rothschild, 1927
O. a. lademanni Grote, 1911
O. a. leptodon Hirst, 1906
O. a. matschiei Grote, 1921
O. a. observandus Grote, 1921
O. a. ruvanensis Grote, 1921
O. a. senegalensis Lesson, 1840
O. a. somalicus Lydekker, 1908
O. a. wardi Lydekker, 1908
O. a. wertheri Matschie, 1898 (Eastern aardvark)
The 1911 Encyclopædia Britannica also mentions O. a. capensis or Cape ant-bear from South Africa.
Description
The aardvark is vaguely pig-like in appearance. Its body is stout with a prominently arched back and is sparsely covered with coarse hairs. The limbs are of moderate length, with the rear legs being longer than the forelegs. The front feet have lost the pollex (or 'thumb'), resulting in four toes, while the rear feet have all five toes. Each toe bears a large, robust nail which is somewhat flattened and shovel-like, and appears to be intermediate between a claw and a hoof. Whereas the aardvark is considered digitigrade, it appears at times to be plantigrade. This confusion happens because when it squats it stands on its soles. A contributing characteristic to the burrow digging capabilities of aardvarks is an endosteal tissue called compacted coarse cancellous bone (CCCB). The stress and strain resistance provided by CCCB allows aardvarks to create their burrows, ultimately leading to a favourable environment for plants and a variety of animals. Digging is also facilitated by its forearm's unusually stout ulna and radius. An aardvark's weight is typically between . An aardvark's length is usually between , and can reach lengths of when its tail (which can be up to ) is taken into account. It is tall at the shoulder, and has a girth of about . It does not exhibit sexual dimorphism.
It is the largest member of the proposed clade Afroinsectiphilia. The aardvark is pale yellowish-grey in colour and often stained reddish-brown by soil. The aardvark's coat is thin, and the animal's primary protection is its tough skin. Its hair is short on its head and tail; however its legs tend to have longer hair. The hair on the majority of its body is grouped in clusters of three to four hairs. The hair surrounding its nostrils is dense to help filter particulate matter out as it digs. Its tail is very thick at the base and gradually tapers.
Head
The greatly elongated head is set on a short, thick neck, and the end of the snout bears a disc, which houses the nostrils. It contains a thin but complete zygomatic arch. The head of the aardvark contains many unique and different features. One of the most distinctive characteristics of the Tubulidentata is their teeth. Instead of having a pulp cavity, each tooth has a cluster of thin, hexagonal, upright, parallel tubes of vasodentin (a modified form of dentine), with individual pulp canals, held together by cementum. The number of columns is dependent on the size of the tooth, with the largest having about 1,500. The teeth have no enamel coating and are worn away and regrow continuously. The aardvark is born with conventional incisors and canines at the front of the jaw, which fall out and are not replaced. Adult aardvarks have only cheek teeth at the back of the jaw, and have a dental formula of: These remaining teeth are peg-like and rootless and are of unique composition. The teeth consist of 14 upper and 12 lower jaw molars. The nasal area of the aardvark is another unique area, as it contains ten nasal conchae, more than any other placental mammal.
The sides of the nostrils are thick with hair. The tip of the snout is highly mobile and is moved by modified mimetic muscles. The fleshy dividing tissue between its nostrils probably has sensory functions, but it is uncertain whether they are olfactory or vibratory in nature. Its nose is made up of more turbinate bones than any other mammal, with between nine and 11, compared to dogs with four to five. With a large quantity of turbinate bones, the aardvark has more space for the moist epithelium, which is the location of the olfactory bulb. The nose contains nine olfactory bulbs, more than any other mammal. Its keen sense of smell is not just from the quantity of bulbs in the nose but also in the development of the brain, as its olfactory lobe is very developed. The snout resembles an elongated pig snout. The mouth is small and tubular, typical of species that feed on ants and termites. The aardvark has a long, thin, snakelike, protruding tongue (as much as long) and elaborate structures supporting a keen sense of smell. The ears, which are very effective, are disproportionately long, about long. The eyes are small for its head, and consist only of rods.
Digestive system
The aardvark's stomach has a muscular pyloric area that acts as a gizzard to grind swallowed food up, thereby rendering chewing unnecessary. Its cecum is large. Both sexes emit a strong smelling secretion from an anal gland. Its salivary glands are highly developed and almost completely ring the neck; their output is what causes the tongue to maintain its tackiness. The female has two pairs of teats in the inguinal region.
Genetically speaking, the aardvark is a living fossil, as its chromosomes are highly conserved, reflecting much of the early eutherian arrangement before the divergence of the major modern taxa.
Habitat and range
Aardvarks are found in sub-Saharan Africa, where suitable habitat (savannas, grasslands, woodlands and bushland) and food (i.e., ants and termites) are available. They spend the daylight hours in dark burrows to avoid the heat of the day. The only major habitat that they are not present in is swamp forest, as the high water table precludes digging to a sufficient depth. They also avoid terrain rocky enough to cause problems with digging. They have been documented as high as in Ethiopia. They can be found throughout sub-Saharan Africa from Ethiopia all the way to the Cape of Good Hope in South Africa, with few exceptions including the coastal areas of Namibia, Ivory Coast, and Ghana. They are not found in Madagascar.
Ecology and behaviour
Aardvarks live for up to 23 years in captivity. Their keen hearing warns them of predators: lions, leopards, cheetahs, African wild dogs, hyenas, and pythons. Some humans also hunt aardvarks for meat. Aardvarks can dig fast or run in zigzag fashion to elude enemies, but if all else fails, they will strike with their claws, tail and shoulders, sometimes flipping onto their backs and lying motionless except to lash out with all four feet. They are capable of causing substantial damage to unprotected areas of an attacker. They will also dig to escape when they can. Sometimes, when pressed, aardvarks can dig extremely quickly.
Feeding
The aardvark is nocturnal and is a solitary creature that feeds almost exclusively on ants and termites (myrmecophagy); studies in the Nama Karoo revealed that ants, especially Anoplolepis custodiens, were the predominant prey year-round, followed by termites like Trinervitermes trinervoides. In winter, when ant numbers declined, aardvarks relied more on termites, often feeding on epigeal mounds coinciding with the presence of alates, possibly to meet their nutritional needs. They avoid eating the African driver ant and red ants. Due to their stringent diet requirements, they require a large range to survive.
The only fruit eaten by aardvarks is the aardvark cucumber. In fact, the cucumber and the aardvark have a symbiotic relationship as they eat the subterranean fruit, then defecate the seeds near their burrows, which then grow rapidly due to the loose soil and fertile nature of the area. The time spent in the intestine of the aardvark helps the fertility of the seed, and the fruit provides needed moisture for the aardvark.
An aardvark emerges from its burrow in the late afternoon or shortly after sunset, and forages over a considerable home range encompassing . While foraging for food, the aardvark will keep its nose to the ground and its ears pointed forward, which indicates that both smell and hearing are involved in the search for food. They zig-zag as they forage and will usually not repeat a route for five to eight days as they appear to allow time for the termite nests to recover before feeding on it again.
During a foraging period, they will stop to dig a V-shaped trench with their forefeet and then sniff it profusely as a means to explore their location. When a concentration of ants or termites is detected, the aardvark digs into it with its powerful front legs, keeping its long ears upright to listen for predators, and takes up an astonishing number of insects with its long, sticky tongue—as many as 50,000 in one night have been recorded. Its claws enable it to dig through the extremely hard crust of a termite or ant mound quickly. It avoids inhaling the dust by sealing the nostrils. When successful, the aardvark's long (up to ) tongue licks up the insects; the termites' biting, or the ants' stinging attacks are rendered futile by the tough skin. After an aardvark visit at a termite mound, other animals will visit to pick up all the leftovers. Termite mounds alone do not provide enough food for the aardvark, so they look for termites that are on the move. When these insects move, they can form columns long and these tend to provide easy pickings with little effort exerted by the aardvark. These columns are more common in areas of livestock or other hoofed animals. The trampled grass and dung attract termites from the Odontotermes, Microtermes, and Pseudacanthotermes genera.
On a nightly basis they tend to be more active during the first portion of the night (roughly the four hours between 8:00 p.m. and 12:00 a.m.); however, they do not seem to prefer bright or dark nights over the other. During adverse weather or if disturbed they will retreat to their burrow systems. They cover between per night; however, some studies have shown that they may traverse as far as in a night.
Aardvarks shift their circadian rhythms to more diurnal activity patterns in response to a reduced food supply. This survival tactic may signify an increased risk of imminent mortality.
Vocalisation
The aardvark is a rather quiet animal. However, it does make soft grunting sounds as it forages and loud grunts as it makes for its tunnel entrance. It makes a bleating sound if frightened. When it is threatened it will make for one of its burrows. If one is not close it will dig a new one rapidly. This new one will be short and require the aardvark to back out when the coast is clear.
Movement
The aardvark is known to be a good swimmer and has been witnessed successfully swimming in strong currents. It can dig a yard of tunnel in about five minutes, but otherwise moves fairly slowly.
When leaving the burrow at night, the aardvark pauses at the entrance for about ten minutes, sniffing and listening. After this period of watchfulness, it will bound out and within seconds it will be away. It will then pause, prick its ears, twisting its head to listen, then jump and move off to start foraging.
Aside from digging out ants and termites, the aardvark also excavates burrows in which to live, which generally fall into one of three categories: burrows made while foraging, refuge and resting locations, and permanent homes. Temporary sites are scattered around the home range and are used as refuges, while the main burrow is also used for breeding. Main burrows can be deep and extensive, have several entrances and can be as long as . These burrows can be large enough for a person to enter. The aardvark changes the layout of its home burrow regularly, and periodically moves on and makes a new one. The old burrows are an important part of the African wildlife scene. Once vacated, they are inhabited by smaller animals like the African wild dog, ant-eating chat, Nycteris thebaica and warthogs. Other animals that use them are hares, mongooses, hyenas, owls, pythons, and lizards. Without these refuges many animals would die during wildfire season. Only mothers and young share burrows; however, the aardvark is known to live in small family groups or as a solitary creature. If attacked in the tunnel, it will escape by digging out of the tunnel, thereby placing the fresh fill between it and its predator, or if it decides to fight, it will roll onto its back and attack with its claws. The aardvark has been known to sleep in a recently excavated ant nest, which also serves as protection from its predators.
Reproduction
It is believed to exhibit polygamous breeding behavior. During mating, the male secures himself to the female's back using his claws, which can occasionally result in noticeable scratches. Males play no role in parental care.
Aardvarks pair only during the breeding season; after a gestation period of seven months, one cub weighing around is born during May–July. When born, the young has flaccid ears and many wrinkles. When nursing, it will nurse off each teat in succession. After two weeks, the folds of skin disappear and after three, the ears can be held upright. After 5–6 weeks, body hair starts growing. It is able to leave the burrow to accompany its mother after only two weeks and eats termites at nine weeks, and is weaned between three months and 16 weeks. At six months of age, it is able to dig its own burrows, but it will often remain with the mother until the next mating season, and is sexually mature from approximately two years of age.
Conservation
Aardvarks were thought to have declining numbers; however, this is possibly because they are not readily seen. There are no definitive counts because of their nocturnal and secretive habits; however, their numbers seem to be stable overall. They are not considered common anywhere in Africa, but due to their large range, they maintain sufficient numbers. There may be a slight decrease in numbers in eastern, northern, and western Africa. Southern African numbers are not decreasing. It has received an official designation from the IUCN as least concern. However, they are a species in a precarious situation, as they are so dependent on such specific food; therefore, if a problem arises with the abundance of termites, the species as a whole would be affected drastically.
Recent research suggests that aardvarks may be particularly vulnerable to alterations in temperature caused by climate change. Droughts negatively impact the availability of termites and ants, which comprise the bulk of an aardvark's diet. Nocturnal species faced with resource scarcity may increase their diurnal activity to spare the energy costs of staying warm at night, but this comes at the cost of withstanding high temperatures during the day. A study on aardvarks in the Kalahari Desert saw that five out of six aardvarks being studied perished following a drought. Aardvarks that survive droughts can take long periods of time to regain health and optimal thermoregulatory physiology, reducing the reproductive potential of the species.
Aardvarks adapt well to captivity. The first recorded instance was at London Zoo in 1869, which housed an individual from South Africa.
Mythology and popular culture
In African folklore, the aardvark is much admired because of its diligent quest for food and its fearless response to soldier ants. Hausa magicians make a charm from the heart, skin, forehead, and nails of the aardvark, which they then proceed to pound together with the root of a certain tree. Wrapped in a piece of skin and worn on the chest, the charm is said to give the owner the ability to pass through walls or roofs at night. The charm is said to be used by burglars and those seeking to visit young girls without their parents' permission. Also, some tribes, such as the Margbetu, Ayanda, and Logo, will use aardvark teeth to make bracelets, which are regarded as good luck charms. The meat, which has a resemblance to pork, is eaten in certain cultures. In the mythology of the Dagbon people of Ghana, the aardvark is believed to possess superpowers. The Dagombas believe this animal can transfigure into and interact with humans.
The ancient Egyptian god Set is usually depicted with the head of an unidentified animal, whose similarity to an aardvark has been noted in scholarship.
The titular character of Arthur, an animated television series for children based on a book series and produced by WGBH, shown in more than 180 countries, is an aardvark. In the first book of the series, Arthur's Nose (1976), he has a long, aardvark-like nose, but in later books, his face becomes more rounded.
Otis the Aardvark was a puppet character used on Children's BBC programming.
An aardvark features as the antagonist in the cartoon The Ant and the Aardvark as well as in the Canadian animated series The Raccoons.
The supersonic fighter-bomber F-111/FB-111 was nicknamed the Aardvark because its long nose resembled the animal's. The name also suited its nocturnal missions, flown at very low level, employing ordnance that could penetrate deep into the ground. In the US Navy, the squadron VF-114 was nicknamed the Aardvarks, flying F-4s and then F-14s. The squadron mascot was adapted from the animal in the comic strip B.C., which the F-4 was said to resemble.
Cerebus the Aardvark is a 300-issue comic book series by Dave Sim.
External links
IUCN/SSC Afrotheria Specialist Group
A YouTube video introducing the Bronx Zoo's aardvarks
"The Biology of the Aardvark (Orycteropus afer)" a diploma thesis (without images)
"The Biology of the Aardvark" (Orycteropus afer)" the thesis with images
|
Afrikaans words and phrases;Extant Zanclean first appearances;Mammals described in 1766;Mammals of Africa;Myrmecophagous mammals;Orycteropus;Taxa named by Peter Simon Pallas;Xerophiles
|
https://en.wikipedia.org/wiki/Aardwolf
|
The aardwolf (Proteles cristatus) is an insectivorous hyaenid species, native to East and Southern Africa. Its name means "earth-wolf" in Afrikaans and Dutch. It is also called the maanhaar-jackal (Afrikaans for "mane-jackal"), termite-eating hyena and civet hyena, based on its habit of secreting substances from its anal gland, a characteristic shared with the African civet.
Unlike many of its relatives in the order Carnivora, the aardwolf does not hunt large animals. It eats insects and their larvae, mainly termites; one aardwolf can lap up as many as 300,000 termites during a single night using its long, sticky tongue. The aardwolf's tongue has adapted to be tough enough to withstand the strong bite of termites.
The aardwolf lives in the shrublands of eastern and southern Africa – open lands covered with stunted trees and shrubs. It is nocturnal, resting in burrows during the day and emerging at night to seek food.
Taxonomy
The aardwolf is generally classified as part of the hyena family Hyaenidae. However, it was formerly placed in its own family Protelidae. Early on, scientists felt that it was merely mimicking the striped hyena, which subsequently led to the creation of Protelidae. Recent studies have suggested that the aardwolf probably diverged from other hyaenids early on; how early is still unclear, as the fossil record and genetic studies disagree by 10 million years.
The aardwolf is the only surviving species in the subfamily Protelinae. There is disagreement as to whether the species is monotypic, or can be divided into subspecies. A 2021 study found the genetic differences in eastern and southern aardwolves may be pronounced enough to categorize them as species.
A 2006 molecular analysis indicates it is phylogenetically the most basal of the four extant hyaenidae species.
Etymology
The generic name Proteles is derived from two words of Greek origin: prōtos and téleios, which combined mean "complete in front", referring to the aardwolf's five toes on the front paws and four on the hind paws.
The specific name cristatus is derived from Latin and means "provided with a comb or tuft", relating to its mane.
Description
The aardwolf resembles a much smaller and thinner striped hyena, with a more slender muzzle, black vertical stripes on a coat of yellowish fur, and a long, distinct mane down the midline of the neck and back. It also has one or two diagonal stripes down the fore and hindquarters and several stripes on its legs. The mane is raised during confrontations to make the aardwolf appear larger. It is missing the throat spot that others in the family have. Its lower leg (from the knee down) is all black, and its tail is bushy with a black tip.
The aardwolf is about long, excluding its bushy tail, which is about long, and stands about tall at the shoulders. An adult aardwolf weighs approximately , sometimes reaching . The aardwolves in the south of the continent tend to be slightly smaller (about ) than the eastern version (around ). This makes the aardwolf the smallest extant member of the Hyaenidae family. The front feet have five toes each, unlike the four-toed hyena. The skull is similar in shape to those of other hyenas, though much smaller, and its cheek teeth are specialised for eating insects. It still has canines, but unlike other hyenas, these teeth are used primarily for fighting and defense. Its ears, which are large, are very similar to those of the striped hyena.
As an aardwolf ages, it will typically lose some of its teeth, though this has little impact on its feeding habits due to the softness of the insects that it eats.
Distribution and habitat
Aardwolves live in open, dry plains and bushland, avoiding mountainous areas. Due to their specific food requirements, they are found only in regions where termites of the family Hodotermitidae occur. Termites of this family depend on dead and withered grass and are most populous in heavily grazed grasslands and savannahs, including farmland. For most of the year, aardwolves spend time in shared territories consisting of up to a dozen dens, which are occupied for six weeks at a time.
There are two distinct populations: one in Southern Africa, and another in East and Northeast Africa. The species does not occur in the intermediary miombo forests.
An adult pair, along with their most-recent offspring, occupies a territory of .
Behavior and ecology
Aardwolves are shy and nocturnal, sleeping in burrows by day. They will, on occasion during the winter, become diurnal feeders. This happens during the coldest periods as they then stay in at night to conserve heat.
They are primarily solitary animals, though during mating season they form monogamous pairs which occupy a territory with their young. If their territory is infringed upon by another aardwolf, they will chase the intruder away for up to or to the border. If the intruder is caught, which rarely happens, a fight will occur, which is accompanied by soft clucking, hoarse barking, and a type of roar. The majority of incursions occur during mating season, when they can occur once or twice per week. When food is scarce, the stringent territorial system may be abandoned and as many as three pairs may occupy a single territory.
The territory is marked by both sexes, as both have developed anal glands from which they extrude a black substance that is smeared on rocks or grass stalks in -long streaks. Aardwolves also have scent glands on the forefoot and penile pad. They often mark near termite mounds within their territory every 20 minutes or so. When patrolling their territorial boundaries, the marking frequency increases drastically, to once every . At this rate, an individual may deposit 60 marks per hour, and upwards of 200 per night.
An aardwolf pair's territory may have up to 10 dens, and numerous middens where they dig small holes and bury their feces with sand. Their dens are usually abandoned aardvark, springhare, or porcupine dens, or on occasion they are crevices in rocks. They will also dig their own dens, or enlarge dens started by springhares. They typically will only use one or two dens at a time, rotating through all of their dens every six months. During the summer, they may rest outside their den during the night and sleep underground during the heat of the day.
Aardwolves are not fast runners nor are they particularly adept at fighting off predators. Therefore, when threatened, the aardwolf may attempt to mislead its foe by doubling back on its tracks. If confronted, it may raise its mane in an attempt to appear more menacing. It also emits a foul-smelling liquid from its anal glands.
Feeding
The aardwolf feeds primarily on termites and more specifically on Trinervitermes. This genus of termites has different species throughout the aardwolf's range. In East Africa, they eat Trinervitermes bettonianus, in central Africa, they eat Trinervitermes rhodesiensis, and in southern Africa, they eat T. trinervoides. Their technique consists of licking them off the ground as opposed to the aardvark, which digs into the mound. They locate their food by sound and also from the scent secreted by the soldier termites. An aardwolf may consume up to 250,000 termites per night using its long, broad, sticky tongue.
They do not destroy the termite mound or consume the entire colony, thus ensuring that the termites can rebuild and provide a continuous supply of food. They often memorize the location of such nests and return to them every few months. During certain seasonal events, such as the onset of the rainy season and the cold of midwinter, the primary termites become scarce, and the need for other foods becomes pronounced. During these times, the southern aardwolf seeks out Hodotermes mossambicus, a type of harvester termite active in the afternoon, which explains some of its diurnal behavior in the winter. The eastern aardwolf, during the rainy season, subsists on termites from the genera Odontotermes and Macrotermes. Aardwolves are also known to feed on other insects and larvae and, according to some sources, very occasionally on small mammals and birds, but these constitute a very small percentage of their total diet. They use their wide tongues to lap surface-foraging termites off the ground, consuming large quantities of sand in the process, which aids in digestion in the absence of teeth capable of breaking down their food.
Unlike other hyenas, aardwolves do not scavenge or kill larger animals. Contrary to popular myths, aardwolves do not eat carrion, and if they are seen eating while hunched over a dead carcass, they are actually eating larvae and beetles. Also, contrary to some sources, they do not like meat, unless it is finely ground or cooked for them. The adult aardwolf was formerly assumed to forage in small groups, but more recent research has shown that they are primarily solitary foragers, necessary because of the scarcity of their insect prey. Their primary source, Trinervitermes, forages in small but dense patches of . While foraging, the aardwolf can cover about per hour, which translates to per summer night and per winter night.
Breeding
The breeding season varies by location, but normally takes place during autumn or spring. In South Africa, breeding occurs in early July. During the breeding season, unpaired males search their own territory, as well as neighboring ones, for a female to mate with. Dominant males also mate opportunistically with the females of less dominant neighboring pairs, which can result in conflict between rival males. As the breeding season approaches, dominant males make increasingly frequent incursions onto weaker males' territories, and as females come into oestrus, the males also paste (scent-mark) inside those territories, sometimes doing so more in rivals' territories than in their own. When given the opportunity, females will also mate with the dominant male, which increases the chance that the dominant male will guard "his" cubs with her. Copulation lasts between 1 and 4.5 hours.
Gestation lasts between 89 and 92 days, producing two to five cubs (most often two or three) during the rainy season (October–December), when termites are more active. They are born with their eyes open, but initially are helpless, and weigh around . The first six to eight weeks are spent in the den with their parents. The male may spend up to six hours a night watching over the cubs while the mother is out looking for food. After three months, they begin supervised foraging, and by four months are normally independent, though they often share a den with their mother until the next breeding season. By the time the next set of cubs is born, the older cubs have moved on. Aardwolves generally achieve sexual maturity at one and a half to two years of age.
Conservation
Aardwolf numbers have not declined, and the species is relatively widespread throughout eastern Africa. They are nowhere common, however, maintaining a density of no more than one individual per square kilometer even where food is abundant. Because of these factors, the IUCN has rated the aardwolf as least concern. In some areas, they are persecuted in the mistaken belief that they prey on livestock, even though they actually benefit farmers by eating termites that are themselves pests. In other areas, farmers have recognized this, but aardwolves are still killed on occasion for their fur. Dogs and insecticides are also common killers of the aardwolf.
In captivity
Frankfurt Zoo in Germany was home to the oldest recorded aardwolf in captivity at 18 years and 11 months.
Notes
References
Sources
Further reading
External links
Animal Diversity Web
IUCN Hyaenidae Specialist Group Aardwolf pages on hyaenidae.org
Cam footage from the Namib desert https://m.youtube.com/watch?v=lRevqS6Pxgg
|
Carnivorans of Africa;Fauna of East Africa;Hyenas;Mammals described in 1783;Mammals of Africa;Mammals of Southern Africa;Myrmecophagous mammals;Nocturnal animals;Taxa named by Anders Sparrman
|
https://en.wikipedia.org/wiki/Adobe
|
Adobe is a building material made from earth and organic materials; the word is Spanish for mudbrick. In some English-speaking regions of Spanish heritage, such as the Southwestern United States, the term is used to refer to any kind of earthen construction, or to various architectural styles such as Pueblo Revival or Territorial Revival. Most adobe buildings are similar in appearance to cob and rammed earth buildings. Adobe is among the earliest building materials and is used throughout the world.
Adobe architecture has been dated to before 5,100 BP.
Description
Adobe bricks are rectangular prisms small enough that they can quickly air dry individually without cracking. They can be subsequently assembled, with the application of adobe mud to bond the individual bricks into a structure. There is no standard size, with substantial variations over the years and in different regions. In some areas a popular size measured weighing about ; in other contexts the size is weighing about . The maximum sizes can reach up to ; above this weight it becomes difficult to move the pieces, and it is preferred to ram the mud in situ, resulting in a different typology known as rammed earth.
Strength
In dry climates, adobe structures are extremely durable, and account for some of the oldest existing buildings in the world. Adobe buildings offer significant advantages due to their greater thermal mass, but they are known to be particularly susceptible to earthquake damage if they are not reinforced. Cases where adobe structures were widely damaged during earthquakes include the 1976 Guatemala earthquake, the 2003 Bam earthquake, and the 2010 Chile earthquake.
Distribution
Buildings made of sun-dried earth are common throughout the world (the Middle East, Western Asia, North Africa, West Africa, South America, southwestern North America, and Southwestern and Eastern Europe). Adobe had been in use by indigenous peoples of the Americas in the Southwestern United States, Mesoamerica, and the Andes for several thousand years. Puebloan peoples built their adobe structures with handfuls or basketfuls of adobe until the Spanish introduced them to brickmaking. Adobe bricks were used in Spain from the Late Bronze and Iron Ages (eighth century BCE onwards). Adobe's wide use can be attributed to its simplicity of design and manufacture and its economy.
Etymology
The word adobe has existed for around 4,000 years with relatively little change in either pronunciation or meaning. The word can be traced from the Middle Egyptian () word ḏbt "mud brick" (with vowels unwritten). Middle Egyptian evolved into Late Egyptian and finally to Coptic (), where it appeared as ⲧⲱⲃⲉ tōbə. This was adopted into Arabic as aṭ-ṭawbu or aṭ-ṭūbu, with the definite article al- attached to the root tuba. This was assimilated into the Old Spanish language as adobe , probably via Mozarabic. English borrowed the word from Spanish in the early 18th century, still referring to mudbrick construction.
In more modern English usage, the term adobe has come to include a style of architecture popular in the desert climates of North America, especially in New Mexico, regardless of the construction method.
Composition
An adobe brick is a composite material made of earth mixed with water and an organic material such as straw or dung. The soil composition typically contains sand, silt and clay. Straw is useful in binding the brick together and allowing the brick to dry evenly, thereby preventing cracking due to uneven shrinkage rates through the brick. Dung offers the same advantage. The most desirable soil texture for producing the mud of adobe is 15% clay, 10–30% silt, and 55–75% fine sand. Another source quotes 15–25% clay and the remainder sand and coarser particles up to cobbles, with no deleterious effect. Modern adobe is stabilized with either emulsified asphalt or Portland cement up to 10% by weight.
No more than half the clay content should be expansive clays, with the remainder non-expansive illite or kaolinite. Too much expansive clay results in uneven drying through the brick, resulting in cracking, while too much kaolinite will make a weak brick. Typically the soils of the Southwest United States, where such construction has been widely used, are an adequate composition.
Material properties
Adobe walls are load-bearing, i.e. they carry their own weight into the foundation rather than relying on another structure, hence the adobe must have sufficient compressive strength. In the United States, most building codes call for a minimum compressive strength of for the adobe block. Adobe construction should be designed so as to avoid lateral structural loads that would cause bending. The building codes require that the building sustain a lateral acceleration earthquake load. Such an acceleration will cause lateral loads on the walls, resulting in shear and bending and inducing tensile stresses. To withstand such loads, the codes typically call for a tensile modulus of rupture strength of at least for the finished block.
In addition to being an inexpensive material with a small resource cost, adobe can serve as a significant heat reservoir due to the thermal properties inherent in the massive walls typical in adobe construction. In climates typified by hot days and cool nights, the high thermal mass of adobe mediates the high and low temperatures of the day, moderating the temperature of the living space. The massive walls require a large and relatively long input of heat from the sun (radiation) and from the surrounding air (convection) before they warm through to the interior. After the sun sets and the temperature drops, the warm wall will continue to transfer heat to the interior for several hours due to the time-lag effect. Thus, a well-planned adobe wall of the appropriate thickness is very effective at controlling inside temperature through the wide daily fluctuations typical of desert climates, a factor which has contributed to its longevity as a building material.
Thermodynamic material properties vary significantly in the literature. Some experiments suggest that the standard consideration of conductivity is not adequate for this material, as its main thermodynamic property is inertia, and conclude that experimental tests should be performed over a longer period of time than usual, preferably with changing thermal jumps. One source gives an effective R-value for a 10 in thick north-facing wall of R0 = 2.5 hr ft² °F/Btu, which corresponds to a thermal conductivity k = (10 in × 1 ft/12 in)/R0 = 0.33 Btu/(hr ft °F), or 0.57 W/(m K), in agreement with the thermal conductivity reported from another source. To determine the R-value of a wall of a different thickness, scale R0 in proportion to the thickness in inches. The thermal resistance of adobe is also stated as an R-value for a wall of R0 = 4.1 hr ft² °F/Btu. Another source provides the following properties: conductivity 0.30 Btu/(hr ft °F) or 0.52 W/(m K); specific heat capacity 0.24 Btu/(lb °F) or 1 kJ/(kg K); and density , giving a volumetric heat capacity of 25.4 Btu/(ft³ °F) or 1700 kJ/(m³ K). Using the average thermal conductivity of k = 0.32 Btu/(hr ft °F) or 0.55 W/(m K), the thermal diffusivity α = k/(ρc) can then be calculated.
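As a rough illustration of how the figures above combine, the following minimal Python sketch computes a wall's steady-state R-value and the thermal diffusivity α = k/(ρc); the constants are the mid-range values quoted in this section, not authoritative material properties, and the function names are illustrative.

```python
# Rough thermal estimates for an adobe wall, using the mid-range figures
# quoted above (literature values vary substantially; nothing here is an
# authoritative material constant).

K_BTU = 0.32      # thermal conductivity, Btu/(hr ft degF)
HEAT_CAP = 25.4   # volumetric heat capacity, Btu/(ft^3 degF)

def r_value(thickness_in, k=K_BTU):
    """Steady-state R-value (hr ft^2 degF/Btu) of a wall:
    R = L / k, with the thickness L converted from inches to feet."""
    return (thickness_in / 12.0) / k

def diffusivity(k=K_BTU, c_vol=HEAT_CAP):
    """Thermal diffusivity alpha = k / (rho * c), in ft^2/hr."""
    return k / c_vol

print(f"10 in wall: R = {r_value(10):.2f} hr ft^2 degF/Btu")  # ~2.6
print(f"diffusivity = {diffusivity():.4f} ft^2/hr")           # ~0.0126
```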
Uses
Poured and puddled adobe walls
Poured and puddled adobe (puddled clay, piled earth), today called cob, is made by placing soft adobe in layers, rather than by making individual dried bricks or using a form. "Puddle" is a general term for a clay or clay and sand-based material worked into a dense, plastic state. These are the oldest methods of building with adobe in the Americas until holes in the ground were used as forms, and later wooden forms used to make individual bricks were introduced by the Spanish.
Adobe bricks
Bricks made from adobe are usually made by pressing the mud mixture into an open timber frame. In North America, the brick is typically about in size. The mixture is molded into the frame, which is removed after initial setting. After drying for a few hours, the bricks are turned on edge to finish drying. Slow drying in shade reduces cracking.
The same mixture, without straw, is used to make mortar and often plaster on interior and exterior walls. Some cultures used lime-based cement for the plaster to protect against rain damage.
Depending on the form into which the mixture is pressed, adobe can encompass nearly any shape or size, provided drying is even and the mixture includes reinforcement for larger bricks. Reinforcement can include manure, straw, cement, rebar, or wooden posts. Straw, cement, or manure added to a standard adobe mixture can produce a stronger, more crack-resistant brick. A test of the soil content is done first: a sample of the soil is mixed in a clear container with some water, creating an almost completely saturated liquid. The container is shaken vigorously for one minute, then allowed to settle for a day until the soil has separated into layers. Heavier particles settle out first, with sand above them and silt above that, while very fine clay and organic matter stays in suspension for days. After the water has cleared, the percentages of the various particles can be determined. Fifty to 60 percent sand and 35 to 40 percent clay will yield strong bricks. The Cooperative State Research, Education, and Extension Service at New Mexico State University recommends a mix of not more than clay, not less than sand, and never more than silt.
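A hypothetical helper for the jar test just described might look like the sketch below; the function name and the acceptance windows (50–60% sand, 35–40% clay, taken from the figures above) are illustrative assumptions, not a building standard.

```python
# Hypothetical checker for the jar sedimentation test described above:
# given measured layer thicknesses, report the fractions and test them
# against the quoted guideline of 50-60% sand and 35-40% clay.

def check_soil(sand_mm, silt_mm, clay_mm):
    total = sand_mm + silt_mm + clay_mm
    sand, clay = sand_mm / total, clay_mm / total
    print(f"sand {sand:.0%}, silt {silt_mm / total:.0%}, clay {clay:.0%}")
    return 0.50 <= sand <= 0.60 and 0.35 <= clay <= 0.40

print(check_soil(sand_mm=55, silt_mm=5, clay_mm=38))  # True: suitable mix
```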
During the Great Depression, designer and builder Hugh W. Comstock used cheaper materials and made a specialized adobe brick called "Bitudobe." His first adobe house was built in 1936. In 1948, he published the book Post-Adobe; Simplified Adobe Construction Combining A Rugged Timber Frame And Modern Stabilized Adobe, which described his method of construction, including how to make "Bitudobe." In 1938, he served as an adviser to the architects Franklin & Kump Associates, who built the Carmel High School, which used his Post-adobe system.
Adobe wall construction
The ground supporting an adobe structure should be compressed, as the weight of an adobe wall is significant and foundation settling may cause cracking of the wall. The footing is dug below the ground frost level. The footing and stem wall are commonly thick, respectively. Modern construction codes call for the use of reinforcing steel in the footing and stem wall. Adobe bricks are laid by course. Adobe walls rarely rise above two stories, as they are load-bearing and adobe has low structural strength. When creating window and door openings, a lintel is placed on top of the opening to support the bricks above. Atop the last courses of brick, bond beams made of heavy wood beams or modern reinforced concrete are laid to provide a horizontal bearing plate for the roof beams and to redistribute lateral earthquake loads to shear walls more able to carry the forces. To protect the interior and exterior adobe walls, finishes such as mud plaster, whitewash or stucco can be applied; these protect the adobe wall from water damage but need to be reapplied periodically. Alternatively, the walls can be finished with other nontraditional plasters that provide longer protection. Bricks made with stabilized adobe generally do not need protective plasters.
Adobe roof
The traditional adobe roof has been constructed using a mixture of soil/clay, water, sand and organic materials. The mixture was then formed and pressed into wood forms, producing rows of dried earth bricks that would then be laid across a support structure of wood and plastered into place with more adobe.
Depending on the materials available, a roof may be assembled using wood or metal beams to create a framework on which to begin layering adobe bricks. Depending on the thickness of the adobe bricks, the framework may be prefabricated from steel framing with metal fencing or wire layered over it, so that the masses of adobe spread across the mesh, cob-style, impose an even load as they air dry. This method has been demonstrated with an adobe blend heavily impregnated with cement to allow even drying and prevent cracking.
The more traditional flat adobe roofs are functional only in dry climates that are not exposed to snow loads. The heaviest wooden beams, called vigas, lie atop the wall. Across the vigas lie smaller members called latillas and upon those brush is then laid. Finally, the adobe layer is applied.
To construct a flat adobe roof, beams of wood were laid to span the building, with their ends attached to the tops of the walls. Once the vigas, latillas and brush are laid, adobe bricks are placed. An adobe roof is often laid with bricks slightly larger in width to ensure that a greater expanse is covered when placing the bricks onto the roof. Each brick is followed by a layer of adobe mortar, recommended to be at least thick, to ensure ample strength between the bricks' edges and to provide a relative moisture barrier during rain.
Roof design evolved around 1850 in the American Southwest. of adobe mud was applied on top of the latillas, then of dry adobe dirt applied to the roof. The dirt was contoured into a low slope leading to a downspout, also known as a canal. When moisture was applied to the roof, the clay particles expanded to create a waterproof membrane. Once a year it was necessary to pull the weeds from the roof and re-slope the dirt as needed.
Depending on the materials, adobe roofs can be inherently fire-proof. The construction of a chimney can greatly influence the construction of the roof supports, creating an extra need for care in choosing the materials. The builders can make an adobe chimney by stacking simple adobe bricks in a similar fashion as the surrounding walls.
In 1927, the Uniform Building Code (UBC) was adopted in the United States. Local ordinances referencing the UBC added requirements to building with adobe, including restriction of adobe structures to one story, requirements for the adobe mix (compressive and shear strength), and a new requirement that every building be designed to withstand seismic activity, specifically lateral forces. By the 1980s, however, seismic-related changes in the California Building Code effectively ended solid-wall adobe construction in California, although post-and-beam adobe and veneers are still being used.
Adobe around the world
The largest structure ever made from adobe is the Arg-é Bam, built by the Achaemenid Empire. Other large adobe structures are the Huaca del Sol in Peru, with 100 million signed bricks, and the ciudadelas of Chan Chan and Tambo Colorado, also in Peru.
See also
used adobe walls
(waterproofing plaster)
Taq Kasra (also known as Ctesiphon Arch) in Iraq is the largest mud brick arch in the world, built beginning in 540 AD
References
External links
|
Adobe buildings and structures;Appropriate technology;Masonry;Soil-based building materials;Sustainable building;Vernacular architecture
|
https://en.wikipedia.org/wiki/Android%20%28robot%29
|
An android is a humanoid robot or other artificial being, often made from a flesh-like material. Historically, androids existed only in the domain of science fiction and were frequently seen in film and television, but advances in robot technology have allowed the design of functional and realistic humanoid robots.
Terminology
The Oxford English Dictionary traces the earliest use (as "Androides") to Ephraim Chambers' 1728 Cyclopaedia, in reference to an automaton that St. Albertus Magnus allegedly created. By the late 1700s, "androides", elaborate mechanical devices resembling humans performing human activities, were displayed in exhibit halls.
The term "android" appears in US patents as early as 1863 in reference to miniature human-like toy automatons. The term android was used in a more modern sense by the French author Auguste Villiers de l'Isle-Adam in his work Tomorrow's Eve (1886), featuring an artificial humanoid robot named Hadaly. The term made an impact into English pulp science fiction starting from Jack Williamson's The Cometeers (1936) and the distinction between mechanical robots and fleshy androids was popularized by Edmond Hamilton's Captain Future stories (1940–1944).
Although Karel Čapek's robots in R.U.R. (Rossum's Universal Robots) (1921)—the play that introduced the word robot to the world—were organic artificial humans, the word "robot" has come to primarily refer to mechanical humans, animals, and other beings. The term "android" can mean either one of these, while a cyborg ("cybernetic organism" or "bionic man") would be a creature that is a combination of organic and mechanical parts.
The term "droid", popularized by George Lucas in the original Star Wars film and now used widely within science fiction, originated as an abridgment of "android", but has been used by Lucas and others to mean any robot, including distinctly non-human form machines like R2-D2. The word "android" was used in Star Trek: The Original Series episode "What Are Little Girls Made Of?" The abbreviation "andy", coined as a pejorative by writer Philip K. Dick in his novel Do Androids Dream of Electric Sheep?, has seen some further usage, such as within the TV series Total Recall 2070.
While the term "android" is used in reference to human-looking robots in general (not necessarily male-looking humanoid robots), a robot with a female appearance can also be referred to as a gynoid. One can also refer to robots without alluding to their sexual appearance by calling them anthrobots (a portmanteau of anthrōpos and robot; see anthrobotics) or anthropoids (short for anthropoid robots; the term humanoids is not appropriate because it is already commonly used to refer to human-like organic species in the context of science fiction, futurism and speculative astrobiology).
Authors have used the term android in more diverse ways than robot or cyborg. In some fictional works, the difference between a robot and android is only superficial, with androids being made to look like humans on the outside but with robot-like internal mechanics. In other stories, authors have used the word "android" to mean a wholly organic, yet artificial, creation. Other fictional depictions of androids fall somewhere in between.
Eric G. Wilson, who defines an android as a "synthetic human being", distinguishes between three types of android, based on their body's composition:
the mummy type – made of "dead things" or "stiff, inanimate, natural material", such as mummies, puppets, dolls and statues
the golem type – made from flexible, possibly organic material, including golems and homunculi
the automaton type – made from a mix of dead and living parts, including automatons and robots
Although human morphology is not necessarily the ideal form for working robots, the fascination in developing robots that can mimic it can be found historically in the assimilation of two concepts: simulacra (devices that exhibit likeness) and automata (devices that have independence).
Projects
Several projects aiming to create androids that look, and, to a certain degree, speak or act like a human being have been launched or are underway.
Japan
Japanese robotics have been leading the field since the 1970s. Waseda University initiated the WABOT project in 1967, and in 1972 completed the WABOT-1, the first android, a full-scale humanoid intelligent robot. Its limb control system allowed it to walk with the lower limbs, and to grip and transport objects with hands, using tactile sensors. Its vision system allowed it to measure distances and directions to objects using external receptors, artificial eyes and ears. And its conversation system allowed it to communicate with a person in Japanese, with an artificial mouth.
In 1984, WABOT-2 was revealed, and made a number of improvements. It was capable of playing the organ. Wabot-2 had ten fingers and two feet, and was able to read a score of music. It was also able to accompany a person. In 1986, Honda began its humanoid research and development program, to create humanoid robots capable of interacting successfully with humans.
The Intelligent Robotics Lab, directed by Hiroshi Ishiguro at Osaka University, and the Kokoro company demonstrated the Actroid at Expo 2005 in Aichi Prefecture, Japan and released the Telenoid R1 in 2010. In 2006, Kokoro developed a new DER 2 android. The human body part of DER2 is 165 cm tall and has 47 points of articulation. DER2 can not only change its expression but also move its hands and feet and twist its body. The "air servosystem" that Kokoro originally developed is used for the actuator; because the actuator is controlled precisely with air pressure via a servosystem, the movement is very fluid and produces very little noise. By using a smaller cylinder, DER2 achieved a slimmer body than the former version and outwardly has better proportions. Compared to the previous model, DER2 has thinner arms and a wider repertoire of expressions. Once programmed, it is able to choreograph its motions and gestures with its voice.
The Intelligent Mechatronics Lab, directed by Hiroshi Kobayashi at the Tokyo University of Science, has developed an android head called Saya, which was exhibited at Robodex 2002 in Yokohama, Japan. Several other initiatives around the world involve humanoid research and development, which may introduce a broader spectrum of realized technology in the near future. Saya now works as a guide at the Tokyo University of Science.
Researchers at Waseda University and NTT Docomo have created a shape-shifting robot, WD-2, which is capable of changing its face. The creators first decide the positions of the points needed to express the outline, eyes, nose, and other features of a given person; the robot then expresses that face by moving all of its points to the decided positions. The first version of the robot was developed in 2003, and a couple of major improvements to the design followed a year later. The robot features an elastic mask made from an average head dummy and uses a driving system with a 3DOF unit. The WD-2 can change its facial features by actuating specific facial points on the mask, with each point possessing three degrees of freedom; it has 17 facial points, for a total of 56 degrees of freedom. The mask is fabricated from Septom, a highly elastic material, with bits of steel wool mixed in for added strength. Each facial point is driven by a shaft behind the mask, powered by a DC motor with a simple pulley and a slide screw. The researchers can also modify the shape of the mask based on actual human faces: to "copy" a face, they need only a 3D scanner to determine the locations of an individual's 17 facial points, which are then driven into position using a laptop and 56 motor control boards. The researchers also note that the WD-2 can display an individual's hair style and skin color if a photo of their face is projected onto the 3D mask.
Singapore
Prof Nadia Thalmann, a Nanyang Technological University scientist, directed efforts of the Institute for Media Innovation along with the School of Computer Engineering in the development of a social robot, Nadine. Nadine is powered by software similar to Apple's Siri or Microsoft's Cortana. Nadine may become a personal assistant in offices and homes in future, or she may become a companion for the young and the elderly.
Assoc Prof Gerald Seet from the School of Mechanical & Aerospace Engineering and the BeingThere Centre led a three-year R&D development in tele-presence robotics, creating EDGAR. A remote user can control EDGAR with the user's face and expressions displayed on the robot's face in real time. The robot also mimics their upper body movements.
South Korea
KITECH researched and developed EveR-1, an android interpersonal communications model capable of emulating human emotional expression via facial "musculature" and capable of rudimentary conversation, having a vocabulary of around 400 words. She is tall and weighs , matching the average figure of a Korean woman in her twenties. EveR-1's name derives from the Biblical Eve, plus the letter r for robot. EveR-1's advanced computing processing power enables speech recognition and vocal synthesis, at the same time processing lip synchronization and visual recognition by 90-degree micro-CCD cameras with face recognition technology. An independent microchip inside her artificial brain handles gesture expression, body coordination, and emotion expression. Her whole body is made of highly advanced synthetic jelly silicon and with 60 artificial joints in her face, neck, and lower body; she is able to demonstrate realistic facial expressions and sing while simultaneously dancing. In South Korea, the Ministry of Information and Communication had an ambitious plan to put a robot in every household by 2020. Several robot cities have been planned for the country: the first will be built in 2016 at a cost of 500 billion won (US$440 million), of which 50 billion is direct government investment. The new robot city will feature research and development centers for manufacturers and part suppliers, as well as exhibition halls and a stadium for robot competitions. The country's new Robotics Ethics Charter will establish ground rules and laws for human interaction with robots in the future, setting standards for robotics users and manufacturers, as well as guidelines on ethical standards to be programmed into robots to prevent human abuse of robots and vice versa.
United States
Walt Disney and a staff of Imagineers created Great Moments with Mr. Lincoln that debuted at the 1964 New York World's Fair.
Dr. William Barry, an education futurist and former visiting professor of philosophy and ethical reasoning at the United States Military Academy at West Point, created an AI android character named "Maria Bot". This interface AI android was named after the fictional robot Maria in the 1927 film Metropolis, as a well-behaved distant relative. Maria Bot is the first AI android teaching assistant at the university level. Maria Bot appeared as a keynote speaker, as a duo with Barry, for a TEDx talk in Everett, Washington in February 2020.
Resembling a human from the shoulders up, Maria Bot is a virtual being android that has complex facial expressions and head movement and engages in conversation about a variety of subjects. She uses AI to process and synthesize information to make her own decisions on how to talk and engage. She collects data through conversations, direct data inputs such as books or articles, and through internet sources.
Maria Bot was built by an international high-tech company for Barry to help improve education quality and eliminate education poverty. Maria Bot is designed to create new ways for students to engage and discuss ethical issues raised by the increasing presence of robots and artificial intelligence. Barry also uses Maria Bot to demonstrate that programming a robot with life-affirming, ethical framework makes them more likely to help humans to do the same.
Maria Bot is an ambassador robot for good and ethical AI technology.
Hanson Robotics, Inc., of Texas and KAIST produced an android portrait of Albert Einstein, using Hanson's facial android technology mounted on KAIST's life-size walking bipedal robot body. This Einstein android, also called "Albert Hubo", thus represents the first full-body walking android in history. Hanson Robotics, the FedEx Institute of Technology, and the University of Texas at Arlington also developed the android portrait of sci-fi author Philip K. Dick (creator of Do Androids Dream of Electric Sheep?, the basis for the film Blade Runner), with full conversational capabilities that incorporated thousands of pages of the author's works. In 2005, the PKD android won a first-place artificial intelligence award from AAAI.
China
On April 19, 2025, 21 humanoid robots participated along with 12,000 human runners in a half-marathon in Beijing. While almost every robot fell down and had overheating problems, and the robots were continuously being controlled by human handlers accompanying them, six of the robots did reach the finish line. Two of them, Tiangong Ultra by Chinese robotics company UBTech, and N2 by Chinese company Noetix Robotics, which took first and second place respectively among robots in the race, stood out for their consistent (albeit slow) pace.
Use in fiction
Androids are a staple of science fiction. Isaac Asimov pioneered the fictionalization of the science of robotics and artificial intelligence, notably in his 1950 collection I, Robot. One thing common to most fictional androids is that the real-life technological challenges associated with creating thoroughly human-like robots, such as the creation of strong artificial intelligence, are assumed to have been solved. Fictional androids are often depicted as mentally and physically equal or superior to humans, moving, thinking and speaking as fluidly as them.
The tension between the nonhuman substance and the human appearance—or even human ambitions—of androids is the dramatic impetus behind most of their fictional depictions. Some android heroes seek, like Pinocchio, to become human, as in the film Bicentennial Man, or Data in Star Trek: The Next Generation. Others, as in the film Westworld, rebel against abuse by careless humans. Android hunter Deckard in Do Androids Dream of Electric Sheep? and its film adaptation Blade Runner discovers that his targets appear to be, in some ways, more "human" than he is. The sequel Blade Runner 2049 involves android hunter K, himself an android, discovering the same thing. Android stories, therefore, are not essentially stories "about" androids; they are stories about the human condition and what it means to be human.
One aspect of writing about the meaning of humanity is to use discrimination against androids as a mechanism for exploring racism in society, as in Blade Runner. Perhaps the clearest example of this is John Brunner's 1968 novel Into the Slave Nebula, where the blue-skinned android slaves are explicitly shown to be fully human. More recently, the androids Bishop and Annalee Call in the films Aliens and Alien Resurrection are used as vehicles for exploring how humans deal with the presence of an "Other". The 2018 video game Detroit: Become Human also explores how androids are treated as second class citizens in a near future society.
Female androids, or "gynoids", are often seen in science fiction, and can be viewed as a continuation of the long tradition of men attempting to create the stereotypical "perfect woman". Examples include the Greek myth of Pygmalion and the female robot Maria in Fritz Lang's Metropolis. Some gynoids, like Pris in Blade Runner, are designed as sex-objects, with the intent of "pleasing men's violent sexual desires", or as submissive, servile companions, such as in The Stepford Wives. Fiction about gynoids has therefore been described as reinforcing "essentialist ideas of femininity", although others have suggested that the treatment of androids is a way of exploring racism and misogyny in society.
The 2015 Japanese film Sayonara, starring Geminoid F, was promoted as "the first movie to feature an android performing opposite a human actor".
The 2023 Dutch film I'm Not a Robot won the Academy Award for Best Live Action Short Film in 2025.
See also
References
Further reading
Kerman, Judith B. (1991). Retrofitting Blade Runner: Issues in Ridley Scott's Blade Runner and Philip K. Dick's Do Androids Dream of Electric Sheep? Bowling Green, OH: Bowling Green State University Popular Press. .
Perkowitz, Sidney (2004). Digital People: From Bionic Humans to Androids. Joseph Henry Press. .
Shelde, Per (1993). Androids, Humanoids, and Other Science Fiction Monsters: Science and Soul in Science Fiction Films. New York: New York University Press. .
Ishiguro, Hiroshi. "Android science." Cognitive Science Society. 2005.
Glaser, Horst Albert and Rossbach, Sabine: The Artificial Human, Frankfurt/M., Bern, New York 2011 "The Artificial Human"
TechCast Article Series, Jason Rupinski and Richard Mix, "Public Attitudes to Androids: Robot Gender, Tasks, & Pricing"
Carpenter, J. (2009). Why send the Terminator to do R2D2s job?: Designing androids as rhetorical phenomena. Proceedings of HCI 2009: Beyond Gray Droids: Domestic Robot Design for the 21st Century. Cambridge, UK. 1 September.
Telotte, J.P. Replications: A Robotic History of the Science Fiction Film. University of Illinois Press, 1995.
External links
|
;Human–machine interaction;Japanese inventions;Osaka University research;Robots;Science fiction themes;South Korean inventions
|
https://en.wikipedia.org/wiki/Algorithm
|
In mathematics and computer science, an algorithm () is a finite sequence of mathematically rigorous instructions, typically used to solve a class of specific problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can use conditionals to divert the code execution through various routes (referred to as automated decision-making) and deduce valid inferences (referred to as automated reasoning).
In contrast, a heuristic is an approach to solving problems without well-defined correct or optimal results. For example, although social media recommender systems are commonly called "algorithms", they actually rely on heuristics as there is no truly "correct" recommendation.
As an effective method, an algorithm can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.
Etymology
Around 825 AD, Persian scientist and polymath Muḥammad ibn Mūsā al-Khwārizmī wrote kitāb al-ḥisāb al-hindī ("Book of Indian computation") and kitab al-jam' wa'l-tafriq al-ḥisāb al-hindī ("Addition and subtraction in Indian arithmetic"). In the early 12th century, Latin translations of these texts involving the Hindu–Arabic numeral system and arithmetic appeared, for example Liber Alghoarismi de practica arismetrice, attributed to John of Seville, and Liber Algorismi de numero Indorum, attributed to Adelard of Bath. Here, alghoarismi or algorismi is the Latinization of Al-Khwarizmi's name; the text starts with the phrase Dixit Algorismi, or "Thus spoke Al-Khwarizmi".
The word algorism in English came to mean the use of place-value notation in calculations; it occurs in the Ancrene Wisse from circa 1225. By the time Geoffrey Chaucer wrote The Canterbury Tales in the late 14th century, he used a variant of the same word in describing augrym stones, stones used for place-value calculation. In the 15th century, under the influence of the Greek word ἀριθμός (arithmos, "number"; cf. "arithmetic"), the Latin word was altered to algorithmus. By 1596, this form of the word was used in English, as algorithm, by Thomas Hood.
Definition
One informal definition is "a set of rules that precisely defines a sequence of operations", which would include all computer programs (including programs that do not perform numeric calculations), any prescribed bureaucratic procedure, or a cook-book recipe. In general, a program is an algorithm only if it stops eventually, even though infinite loops may sometimes prove desirable. A stricter definition holds an algorithm to be an explicit set of instructions for determining an output that can be followed by a computing machine or by a human who can only carry out specific elementary operations on symbols.
Most algorithms are intended to be implemented as computer programs. However, algorithms are also implemented by other means, such as in a biological neural network (for example, the human brain performing arithmetic or an insect looking for food), in an electrical circuit, or a mechanical device.
History
Ancient algorithms
Step-by-step procedures for solving mathematical problems have been recorded since antiquity. This includes in Babylonian mathematics (around 2500 BC), Egyptian mathematics (around 1550 BC), Indian mathematics (around 800 BC and later), the Ifa Oracle (around 500 BC), Greek mathematics (around 240 BC), Chinese mathematics (around 200 BC and later), and Arabic mathematics (around 800 AD).
The earliest evidence of algorithms is found in ancient Mesopotamian mathematics. A Sumerian clay tablet found in Shuruppak near Baghdad and dated to describes the earliest division algorithm. During the Hammurabi dynasty, Babylonian clay tablets described algorithms for computing formulas. Algorithms were also used in Babylonian astronomy: clay tablets describe and employ algorithmic procedures to compute the time and place of significant astronomical events.
Algorithms for arithmetic are also found in ancient Egyptian mathematics, dating back to the Rhind Mathematical Papyrus. Algorithms were later used in ancient Hellenistic mathematics. Two examples are the Sieve of Eratosthenes, which was described in the Introduction to Arithmetic by Nicomachus, and the Euclidean algorithm, which was first described in Euclid's Elements. Examples of ancient Indian mathematics include the Shulba Sutras, the Kerala School, and the Brāhmasphuṭasiddhānta.
The first cryptographic algorithm for deciphering encrypted code was developed by Al-Kindi, a 9th-century Arab mathematician, in A Manuscript On Deciphering Cryptographic Messages. He gave the first description of cryptanalysis by frequency analysis, the earliest codebreaking algorithm.
Computers
Weight-driven clocks
Bolter credits the invention of the weight-driven clock as "the key invention [of Europe in the Middle Ages]," specifically the verge escapement mechanism producing the tick and tock of a mechanical clock. "The accurate automatic machine" led immediately to "mechanical automata" in the 13th century and "computational machines"—the difference and analytical engines of Charles Babbage and Ada Lovelace in the mid-19th century. Lovelace designed the first algorithm intended for processing on a computer, Babbage's analytical engine, which is the first device considered a real Turing-complete computer instead of just a calculator. Although the full implementation of Babbage's second device was not realized for decades after her lifetime, Lovelace has been called "history's first programmer".
Electromechanical relay
Bell and Newell (1971) write that the Jacquard loom, a precursor to Hollerith cards (punch cards), and "telephone switching technologies" led to the development of the first computers. By the mid-19th century, the telegraph, the precursor of the telephone, was in use throughout the world. By the late 19th century, the ticker tape was in use, as were Hollerith cards (c. 1890). Then came the teleprinter with its punched-paper use of Baudot code on tape.
Telephone-switching networks of electromechanical relays were invented in 1835. These led to the invention of the digital adding device by George Stibitz in 1937. While working in Bell Laboratories, he observed the "burdensome" use of mechanical calculators with gears. "He went home one evening in 1937 intending to test his idea... When the tinkering was over, Stibitz had constructed a binary adding device".
Formalization
In 1928, a partial formalization of the modern concept of algorithms began with attempts to solve the Entscheidungsproblem (decision problem) posed by David Hilbert. Later formalizations were framed as attempts to define "effective calculability" or "effective method". Those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's Formulation 1 of 1936, and Alan Turing's Turing machines of 1936–37 and 1939.
Representations
Algorithms can be expressed in many kinds of notation, including natural languages, pseudocode, flowcharts, drakon-charts, programming languages or control tables (processed by interpreters). Natural language expressions of algorithms tend to be verbose and ambiguous and are rarely used for complex or technical algorithms. Pseudocode, flowcharts, drakon-charts, and control tables are structured expressions of algorithms that avoid common ambiguities of natural language. Programming languages are primarily for expressing algorithms in a computer-executable form but are also used to define or document algorithms.
Turing machines
There are many possible representations and Turing machine programs can be expressed as a sequence of machine tables (see finite-state machine, state-transition table, and control table for more), as flowcharts and drakon-charts (see state diagram for more), as a form of rudimentary machine code or assembly code called "sets of quadruples", and more. Algorithm representations can also be classified into three accepted levels of Turing machine description: high-level description, implementation description, and formal description. A high-level description describes the qualities of the algorithm itself, ignoring how it is implemented on the Turing machine. An implementation description describes the general manner in which the machine moves its head and stores data to carry out the algorithm, but does not give exact states. In the most detail, a formal description gives the exact state table and list of transitions of the Turing machine.
Flowchart representation
The graphical aid called a flowchart offers a way to describe and document an algorithm (and a computer program corresponding to it). It has four primary symbols: arrows showing program flow, rectangles (SEQUENCE, GOTO), diamonds (IF-THEN-ELSE), and dots (OR-tie). Sub-structures can "nest" in rectangles, but only if a single exit occurs from the superstructure.
Algorithmic analysis
It is often important to know how much time, storage, or other cost an algorithm may require. Methods have been developed for the analysis of algorithms to obtain such quantitative answers (estimates); for example, an algorithm that adds up the elements of a list of n numbers would have a time requirement of O(n), using big O notation. The algorithm only needs to remember two values: the sum of all the elements so far, and its current position in the input list. If the space required to store the input numbers is not counted, it has a space requirement of O(1); otherwise O(n) is required.
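A minimal Python sketch of the summing algorithm just described; it keeps only the running total (the loop itself supplies the list position), matching the O(n) time and O(1) auxiliary space stated above.

```python
def running_sum(numbers):
    """Add up a list in one pass: O(n) time and O(1) extra space,
    since only the running total is stored between steps."""
    total = 0
    for x in numbers:
        total += x
    return total

assert running_sum([3, 1, 4, 1, 5]) == 14
```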
Different algorithms may complete the same task with a different set of instructions in less or more time, space, or 'effort' than others. For example, a binary search algorithm (with cost O(log n)) outperforms a sequential search (cost O(n)) when used for table lookups on sorted lists or arrays.
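As a sketch of that lookup comparison, the following uses Python's standard bisect module for the O(log n) binary search; the helper name is illustrative.

```python
from bisect import bisect_left

def binary_search(sorted_items, target):
    """O(log n) lookup in a sorted list, versus O(n) for scanning."""
    i = bisect_left(sorted_items, target)
    if i < len(sorted_items) and sorted_items[i] == target:
        return i
    return -1

data = list(range(0, 1_000_000, 2))          # sorted even numbers
assert binary_search(data, 123456) == 61728  # ~20 comparisons, not ~62,000
```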
Formal versus empirical
The analysis and study of algorithms is a discipline of computer science. Algorithms are often studied abstractly, without referencing any specific programming language or implementation. Algorithm analysis resembles other mathematical disciplines as it focuses on the algorithm's properties, not implementation. Pseudocode is typical for analysis as it is a simple and general representation. Most algorithms are implemented on particular hardware/software platforms and their algorithmic efficiency is tested using real code. The efficiency of a particular algorithm may be insignificant for many "one-off" problems but it may be critical for algorithms designed for fast interactive, commercial, or long-life scientific usage. Scaling from small n to large n frequently exposes inefficient algorithms that are otherwise benign.
Empirical testing is useful for uncovering unexpected interactions that affect performance. Benchmarks may be used to compare before/after potential improvements to an algorithm after program optimization.
Empirical tests cannot replace formal analysis, though, and are non-trivial to perform fairly.
Execution efficiency
To illustrate the potential improvements possible even in well-established algorithms, a recent significant innovation, relating to FFT algorithms (used heavily in the field of image processing), can decrease processing time up to 1,000 times for applications like medical imaging. In general, speed improvements depend on special properties of the problem, which are very common in practical applications. Speedups of this magnitude enable computing devices that make extensive use of image processing (like digital cameras and medical equipment) to consume less power.
Design
Algorithm design is a method or mathematical process for problem-solving and engineering algorithms. The design of algorithms is part of many solution theories, such as divide-and-conquer or dynamic programming within operation research. Techniques for designing and implementing algorithm designs are also called algorithm design patterns, with examples including the template method pattern and the decorator pattern. One of the most important aspects of algorithm design is resource (run-time, memory usage) efficiency; the big O notation is used to describe e.g., an algorithm's run-time growth as the size of its input increases.
Structured programming
Per the Church–Turing thesis, any algorithm can be computed by any Turing complete model. Turing completeness only requires four instruction types—conditional GOTO, unconditional GOTO, assignment, HALT. However, Kemeny and Kurtz observe that, while "undisciplined" use of unconditional GOTOs and conditional IF-THEN GOTOs can result in "spaghetti code", a programmer can write structured programs using only these instructions; on the other hand "it is also possible, and not too hard, to write badly structured programs in a structured language". Tausworthe augments the three Böhm-Jacopini canonical structures: SEQUENCE, IF-THEN-ELSE, and WHILE-DO, with two more: DO-WHILE and CASE. An additional benefit of a structured program is that it lends itself to proofs of correctness using mathematical induction.
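As a small illustration of the point, Euclid's algorithm can be written using only the canonical structures named above; this sketch sticks to subtraction to stay close to elementary operations.

```python
def gcd(a, b):
    """Euclid's algorithm using only the canonical structures:
    SEQUENCE, IF-THEN-ELSE and WHILE-DO -- no unstructured jumps."""
    while b != 0:          # WHILE-DO
        if a >= b:         # IF-THEN-ELSE
            a = a - b
        else:
            a, b = b, a    # SEQUENCE of assignments
    return a

assert gcd(48, 18) == 6
```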
Legal status
By themselves, algorithms are not usually patentable. In the United States, a claim consisting solely of simple manipulations of abstract concepts, numbers, or signals does not constitute "processes" (USPTO 2006), so algorithms are not patentable (as in Gottschalk v. Benson). However practical applications of algorithms are sometimes patentable. For example, in Diamond v. Diehr, the application of a simple feedback algorithm to aid in the curing of synthetic rubber was deemed patentable. The patenting of software is controversial, and there are criticized patents involving algorithms, especially data compression algorithms, such as Unisys's LZW patent. Additionally, some cryptographic algorithms have export restrictions (see export of cryptography).
Classification
By implementation
Recursion
A recursive algorithm invokes itself repeatedly until meeting a termination condition and is a common functional programming method. Iterative algorithms use repetitions such as loops or data structures like stacks to solve problems. Problems may be suited for one implementation or the other. The Tower of Hanoi is a puzzle commonly solved using recursive implementation. Every recursive version has an equivalent (but possibly more or less complex) iterative version, and vice versa.
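A minimal recursive Tower of Hanoi in Python, illustrating the self-invocation and termination condition described above; an equivalent iterative version could manage an explicit stack instead.

```python
def hanoi(n, src, dst, via):
    """Move n disks from peg src to peg dst. The function invokes
    itself on n - 1 disks until the n == 0 termination condition."""
    if n == 0:
        return
    hanoi(n - 1, src, via, dst)          # clear the way
    print(f"move disk {n}: {src} -> {dst}")
    hanoi(n - 1, via, dst, src)          # restack on top

hanoi(3, "A", "C", "B")                  # prints the 7 required moves
```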
Serial, parallel or distributed
Algorithms are usually discussed with the assumption that computers execute one instruction of an algorithm at a time on serial computers. Serial algorithms are designed for these environments, unlike parallel or distributed algorithms. Parallel algorithms take advantage of computer architectures where multiple processors can work on a problem at the same time. Distributed algorithms use multiple machines connected via a computer network. Parallel and distributed algorithms divide the problem into subproblems and collect the results back together. Resource consumption in these algorithms is not only processor cycles on each processor but also the communication overhead between the processors. Some sorting algorithms can be parallelized efficiently, but their communication overhead is expensive. Iterative algorithms are generally parallelizable, but some problems have no parallel algorithms and are called inherently serial problems.
Deterministic or non-deterministic
Deterministic algorithms solve the problem with exact decisions at every step; whereas non-deterministic algorithms solve problems via guessing. Guesses are typically made more accurate through the use of heuristics.
Exact or approximate
While many algorithms reach an exact solution, approximation algorithms seek an approximation that is close to the true solution. Such algorithms have practical value for many hard problems. For example, the Knapsack problem, where there is a set of items, and the goal is to pack the knapsack to get the maximum total value. Each item has some weight and some value. The total weight that can be carried is no more than some fixed number X. So, the solution must consider the weights of items as well as their value.
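A sketch of one simple knapsack approximation, assuming the 0/1 variant: take items greedily by value per unit weight, then compare against the single most valuable item that fits, a classic 1/2-approximation (an exact optimum would require dynamic programming or search).

```python
def knapsack_approx(items, capacity):
    """items: list of (value, weight) pairs. Greedy by value density,
    then compare with the best single item -- a 1/2-approximation."""
    greedy_total, remaining = 0, capacity
    for value, weight in sorted(items, key=lambda it: it[0] / it[1],
                                reverse=True):
        if weight <= remaining:
            greedy_total += value
            remaining -= weight
    best_single = max((v for v, w in items if w <= capacity), default=0)
    return max(greedy_total, best_single)

# The optimum here is 220 (the last two items); the heuristic returns
# 160, within its factor-of-two guarantee.
print(knapsack_approx([(60, 10), (100, 20), (120, 30)], capacity=50))
```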
Quantum algorithm
Quantum algorithms run on a realistic model of quantum computation. The term is usually used for those algorithms that seem inherently quantum or use some essential feature of Quantum computing such as quantum superposition or quantum entanglement.
By design paradigm
Another way of classifying algorithms is by their design methodology or paradigm. Some common paradigms are:
Brute-force or exhaustive search
Brute force is a problem-solving method of systematically trying every possible option until the optimal solution is found. This approach can be very time-consuming, testing every possible combination of variables. It is often used when other methods are unavailable or too complex. Brute force can solve a variety of problems, including finding the shortest path between two points and cracking passwords.
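Brute force in miniature, as a hedged sketch: the "password" and the three-letter search space are hypothetical, but the pattern of exhaustively enumerating every candidate is the technique described above.

```python
from itertools import product

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def crack(check, length=3):
    """Try every combination (26**3 = 17,576 candidates for length 3)
    until the check function accepts one."""
    for attempt in product(ALPHABET, repeat=length):
        candidate = "".join(attempt)
        if check(candidate):
            return candidate
    return None

secret = "cab"                                  # hypothetical target
print(crack(lambda guess: guess == secret))     # -> "cab"
```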
Divide and conquer
A divide-and-conquer algorithm repeatedly reduces a problem to one or more smaller instances of itself (usually recursively) until the instances are small enough to solve easily. Merge sorting is an example of divide and conquer, where an unordered list is repeatedly split into smaller lists, which are sorted in the same way and then merged. A simpler variant, called prune and search or decrease and conquer, solves a single smaller instance of itself and does not require a merge step. An example of a prune-and-search algorithm is the binary search algorithm.
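A short Python version of binary search, the prune-and-search example named above; each comparison discards half of the remaining range, and no merge step is needed:

def binary_search(sorted_list, target):
    # Precondition: sorted_list is in ascending order.
    low, high = 0, len(sorted_list) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_list[mid] == target:
            return mid
        if sorted_list[mid] < target:
            low = mid + 1   # prune the lower half
        else:
            high = mid - 1  # prune the upper half
    return -1  # not found

print(binary_search([2, 3, 5, 7, 11, 13], 7))  # 3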
Search and enumeration
Many problems (such as playing chess) can be modelled as problems on graphs. A graph exploration algorithm specifies rules for moving around a graph and is useful for such problems. This category also includes search algorithms, branch and bound enumeration, and backtracking.
Randomized algorithm
Such algorithms make some choices randomly (or pseudo-randomly). They can find approximate solutions when finding exact solutions is impractical (see heuristic method below). For some problems, the fastest known algorithms involve randomness. Whether every randomized polynomial-time algorithm can be matched by a deterministic one with only polynomial slowdown is an open question, known as the P versus BPP problem. There are two large classes of such algorithms:
Monte Carlo algorithms return a correct answer with high probability. For example, RP is the subclass of these that run in polynomial time.
Las Vegas algorithms always return the correct answer, but their running time is only probabilistically bounded, e.g. ZPP.
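Two toy Python sketches of the distinction (both functions and parameters are ours): the Fermat primality test is Monte Carlo—it can wrongly declare certain composites "probably prime", with a probability that shrinks with more trials—while the random probing search is Las Vegas—its answer is always correct, but its running time varies from run to run:

import random

def fermat_probably_prime(n, trials=20):
    # Monte Carlo: "False" is always correct; "True" may rarely be wrong
    # (e.g. for Carmichael numbers), with error shrinking per trial.
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False  # witness found: definitely composite
    return True  # probably prime

def las_vegas_index(seq, target):
    # Las Vegas: assumes target occurs in seq; always returns a correct
    # index, but the number of random probes is itself random.
    while True:
        i = random.randrange(len(seq))
        if seq[i] == target:
            return i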
Reduction of complexity
This technique transforms difficult problems into better-known problems solvable with (hopefully) asymptotically optimal algorithms. The goal is to find a reducing algorithm whose complexity is not dominated by the resulting reduced algorithms. For example, one selection algorithm finds the median of an unsorted list by first sorting the list (the expensive portion), and then pulling out the middle element in the sorted list (the cheap portion). This technique is also known as transform and conquer.
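The selection-to-sorting reduction described above fits in a few lines of Python (the function name is ours):

def median_by_sorting(values):
    # Reduce selection to sorting: the sort is the expensive part,
    # indexing the middle element is the cheap part.
    ordered = sorted(values)           # O(n log n)
    return ordered[len(ordered) // 2]  # O(1): upper median for even lengths

print(median_by_sorting([7, 1, 9, 3, 5]))  # 5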
Backtracking
In this approach, multiple solutions are built incrementally and abandoned when it is determined that they cannot lead to a valid full solution.
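A compact Python illustration using the classic n-queens puzzle; partial placements are extended one column at a time and abandoned as soon as two queens attack each other (the representation, one row index per column, is our choice):

def n_queens(n, partial=()):
    # partial[c] is the row of the queen already placed in column c.
    if len(partial) == n:
        return [partial]  # a valid full solution
    solutions = []
    for row in range(n):
        # Keep this row only if it shares no row or diagonal with any
        # queen placed so far; otherwise abandon the branch.
        if all(row != r and abs(row - r) != len(partial) - c
               for c, r in enumerate(partial)):
            solutions.extend(n_queens(n, partial + (row,)))
    return solutions

print(len(n_queens(6)))  # 4 solutions on a 6x6 board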
Optimization problems
For optimization problems there is a more specific classification of algorithms; an algorithm for such problems may fall into one or more of the general categories described above as well as into one of the following:
Linear programming
When searching for optimal solutions of a linear function subject to linear equality and inequality constraints, the constraints can be used directly to produce optimal solutions. There are algorithms that can solve any problem in this category, such as the popular simplex algorithm. Problems that can be solved with linear programming include the maximum flow problem for directed graphs. If a problem also requires that any of the unknowns be integers, then it is classified in integer programming. A linear programming algorithm can solve such a problem if it can be proved that all restrictions for integer values are superficial, i.e., the solutions satisfy these restrictions anyway. In the general case, a specialized algorithm or an algorithm that finds approximate solutions is used, depending on the difficulty of the problem.
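As a sketch of linear programming in practice, the following uses SciPy's linprog routine (assuming SciPy is installed; the tiny two-variable problem is ours). Because linprog minimizes, the objective is negated to maximize:

from scipy.optimize import linprog

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
result = linprog(c=[-3, -2],                     # negated objective
                 A_ub=[[1, 1], [1, 3]],          # inequality constraint matrix
                 b_ub=[4, 6],                    # inequality bounds
                 bounds=[(0, None), (0, None)])  # x, y non-negative
print(result.x, -result.fun)  # optimum at (4, 0) with value 12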
Dynamic programming
When a problem shows optimal substructure—meaning the optimal solution can be constructed from optimal solutions to subproblems—and overlapping subproblems, meaning the same subproblems are used to solve many different problem instances, a quicker approach called dynamic programming avoids recomputing solutions. For example, in the Floyd–Warshall algorithm, the shortest path between a start and goal vertex in a weighted graph can be found using the shortest paths to the goal from all adjacent vertices. Dynamic programming and memoization go together. Unlike divide and conquer, dynamic programming subproblems often overlap. The difference between dynamic programming and simple recursion is the caching or memoization of recursive calls. When subproblems are independent and do not repeat, memoization does not help; hence dynamic programming is not applicable to all complex problems. Using memoization, dynamic programming reduces the complexity of many problems from exponential to polynomial.
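A direct Python rendering of the Floyd–Warshall idea: the shortest-path subproblems overlap heavily, and each is solved once in place rather than recomputed:

def floyd_warshall(dist):
    # dist: n x n matrix; dist[i][j] is the edge weight from i to j,
    # float("inf") where there is no edge, 0 on the diagonal.
    n = len(dist)
    for k in range(n):  # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

INF = float("inf")
print(floyd_warshall([[0, 3, INF], [INF, 0, 1], [2, INF, 0]]))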
The greedy method
Greedy algorithms, similarly to dynamic programming, work by examining substructures, in this case not of the problem but of a given solution. Such algorithms start with some solution and improve it by making small modifications. For some problems they always find the optimal solution, but for others they may stop at local optima. The most popular use of greedy algorithms is finding minimum spanning trees of weighted graphs, for which Kruskal's, Prim's, and Borůvka's (Sollin's) algorithms are classic examples; Huffman coding is another well-known greedy algorithm.
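A short Python sketch of Kruskal's algorithm, a canonical greedy method for minimum spanning trees; the union-find helper detects edges that would close a cycle (the data layout is our choice):

def kruskal(n, edges):
    # edges: (weight, u, v) triples over vertices 0..n-1.
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for weight, u, v in sorted(edges):  # greedy: lightest edge first
        ru, rv = find(u), find(v)
        if ru != rv:                    # skip edges that would form a cycle
            parent[ru] = rv
            tree.append((weight, u, v))
    return tree

print(kruskal(4, [(1, 0, 1), (4, 1, 2), (3, 0, 2), (2, 2, 3)]))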
The heuristic method
In optimization problems, heuristic algorithms find solutions close to the optimal solution when finding the optimal solution is impractical. These algorithms get closer and closer to the optimal solution as they progress. In principle, if run for an infinite amount of time, they will find the optimal solution. They can ideally find a solution very close to the optimal solution in a relatively short time. These algorithms include local search, tabu search, simulated annealing, and genetic algorithms. Some, like simulated annealing, are non-deterministic algorithms while others, like tabu search, are deterministic. When a bound on the error of the non-optimal solution is known, the algorithm is further categorized as an approximation algorithm.
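A minimal Python sketch of simulated annealing, one of the heuristics listed above, minimizing a bumpy one-dimensional function; the cooling schedule, step size, and test function are arbitrary choices of ours:

import math
import random

def simulated_annealing(cost, start, neighbour, steps=10_000, temp=1.0):
    best = current = start
    for step in range(1, steps + 1):
        t = temp / step                 # simple cooling schedule
        candidate = neighbour(current)
        delta = cost(candidate) - cost(current)
        # Always accept improvements; accept worse moves with a
        # probability that shrinks as the temperature cools.
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
        if cost(current) < cost(best):
            best = current
    return best

f = lambda x: x * x + 10 * math.sin(x)  # bumpy function with local optima
print(simulated_annealing(f, 5.0, lambda x: x + random.uniform(-0.5, 0.5)))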
Examples
One of the simplest algorithms finds the largest number in a list of numbers in arbitrary order. Finding the solution requires looking at every number in the list. From this follows a simple algorithm, which can be described in plain English as:
High-level description:
If a set of numbers is empty, then there is no highest number.
Assume the first number in the set is the largest.
For each remaining number in the set: if this number is greater than the current largest, it becomes the new largest.
When there are no unchecked numbers left in the set, consider the current largest number to be the largest in the set.
(Quasi-)formal description:
Written in prose but much closer to the high-level language of a computer program, the following is the more formal coding of the algorithm in pseudocode or pidgin code:
Input: A list of numbers L.
Output: The largest number in the list L.
if L.size = 0 return null
largest ← L[0]
for each item in L, do
if item > largest, then
largest ← item
return largest
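For comparison, a direct (unofficial) Python translation of the pseudocode above:

def largest_number(L):
    if len(L) == 0:
        return None        # empty list: no largest number
    largest = L[0]         # assume the first number is the largest
    for item in L:
        if item > largest:
            largest = item
    return largest

print(largest_number([3, 9, 2, 8]))  # 9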
Bibliography
Bell, C. Gordon and Newell, Allen (1971), Computer Structures: Readings and Examples, McGraw–Hill Book Company, New York.
Includes a bibliography of 56 references.
Cf. Chapter 3, Turing machines, where they discuss "certain enumerable sets not effectively (mechanically) enumerable".
Campagnolo, M.L., Moore, C., and Costa, J.F. (2000) An analog characterization of the subrecursive functions. In Proc. of the 4th Conference on Real Numbers and Computers, Odense University, pp. 91–109
Reprinted in The Undecidable, p. 89ff. The first expression of "Church's Thesis". See in particular page 100 (The Undecidable) where he defines the notion of "effective calculability" in terms of "an algorithm", and he uses the word "terminates", etc.
Reprinted in The Undecidable, p. 110ff. Church shows that the Entscheidungsproblem is unsolvable in about 3 pages of text and 3 pages of footnotes.
Davis gives commentary before each article. Papers of Gödel, Alonzo Church, Turing, Rosser, Kleene, and Emil Post are included; those cited in the article are listed here by author's name.
Davis offers concise biographies of Leibniz, Boole, Frege, Cantor, Hilbert, Gödel and Turing with von Neumann as the show-stealing villain. Very brief bios of Joseph-Marie Jacquard, Babbage, Ada Lovelace, Claude Shannon, Howard Aiken, etc.
Yuri Gurevich, Sequential Abstract State Machines Capture Sequential Algorithms, ACM Transactions on Computational Logic, Vol 1, no 1 (July 2000), pp. 77–111. Includes bibliography of 33 sources.
Cf. the chapter "The Spirit of Truth" for a history leading to, and a discussion of, his proof.
Presented to the American Mathematical Society, September 1935. Reprinted in The Undecidable, p. 237ff. Kleene's definition of "general recursion" (known now as mu-recursion) was used by Church in his 1935 paper An Unsolvable Problem of Elementary Number Theory that proved the "decision problem" to be "undecidable" (i.e., a negative result).
Reprinted in The Undecidable, p. 255ff. Kleene refined his definition of "general recursion" and proceeded in his chapter "12. Algorithmic theories" to posit "Thesis I" (p. 274); he would later repeat this thesis (in Kleene 1952:300) and name it "Church's Thesis"(Kleene 1952:317) (i.e., the Church thesis).
Kosovsky, N.K. Elements of Mathematical Logic and its Application to the theory of Subrecursive Algorithms, LSU Publ., Leningrad, 1981
A.A. Markov (1954) Theory of algorithms. [Translated by Jacques J. Schorr-Kon and PST staff] Imprint Moscow, Academy of Sciences of the USSR, 1954 [i.e., Jerusalem, Israel Program for Scientific Translations, 1961; available from the Office of Technical Services, U.S. Dept. of Commerce, Washington] Description 444 p. 28 cm. Added t.p. in Russian Translation of Works of the Mathematical Institute, Academy of Sciences of the USSR, v. 42. Original title: Teoriya algorifmov. [QA248.M2943 Dartmouth College library. U.S. Dept. of Commerce, Office of Technical Services, number OTS .]
Minsky expands his "...idea of an algorithm – an effective procedure..." in chapter 5.1 Computability, Effective Procedures and Algorithms. Infinite machines.
Reprinted in The Undecidable, pp. 289ff. Post defines a simple algorithmic-like process of a man writing marks or erasing marks and going from box to box and eventually halting, as he follows a list of simple instructions. This is cited by Kleene as one source of his "Thesis I", the so-called Church–Turing thesis.
Reprinted in The Undecidable, p. 223ff. Herein is Rosser's famous definition of "effective method": "...a method each step of which is precisely predetermined and which is certain to produce the answer in a finite number of steps... a machine which will then solve any problem of the set with no human intervention beyond inserting the question and (later) reading the answer" (p. 225–226, The Undecidable)
Cf. in particular the first chapter titled: Algorithms, Turing Machines, and Programs. His succinct informal definition: "...any sequence of instructions that can be obeyed by a robot, is called an algorithm" (p. 4).
Corrections, ibid., vol. 43 (1937), pp. 544–546. Reprinted in The Undecidable, p. 116ff. Turing's famous paper, completed while at King's College, Cambridge, UK.
Reprinted in The Undecidable, pp. 155ff. Turing's paper that defined "the oracle" was his PhD thesis while at Princeton.
United States Patent and Trademark Office (2006), 2106.02 **>Mathematical Algorithms: 2100 Patentability, Manual of Patent Examining Procedure (MPEP). Latest revision August 2006
Zaslavsky, C. (1970). Mathematics of the Yoruba People and of Their Neighbors in Southern Nigeria. The Two-Year College Mathematics Journal, 1(2), 76–99. https://doi.org/10.2307/3027363
Further reading
Jon Kleinberg, Éva Tardos (2006): Algorithm Design, Pearson/Addison-Wesley, ISBN 978-0-32129535-4
Knuth, Donald E. (2000). Selected Papers on Analysis of Algorithms . Stanford, California: Center for the Study of Language and Information.
Knuth, Donald E. (2010). Selected Papers on Design of Algorithms . Stanford, California: Center for the Study of Language and Information.
External links
Dictionary of Algorithms and Data Structures – National Institute of Standards and Technology
Algorithm repositories
The Stony Brook Algorithm Repository – State University of New York at Stony Brook
Collected Algorithms of the ACM – Association for Computing Machinery
The Stanford GraphBase – Stanford University
|
;Articles with example pseudocode;Mathematical logic;Theoretical computer science
|
https://en.wikipedia.org/wiki/Asteroid
|
An asteroid is a minor planet—an object larger than a meteoroid that is neither a planet nor an identified comet—that orbits within the inner Solar System or is co-orbital with Jupiter (Trojan asteroids). Asteroids are rocky, metallic, or icy bodies with no atmosphere, and are broadly classified into C-type (carbonaceous), M-type (metallic), or S-type (silicaceous). The size and shape of asteroids vary significantly, ranging from small rubble piles under a kilometer across to Ceres, a dwarf planet almost 1000 km in diameter. A body is classified as a comet, not an asteroid, if it shows a coma (tail) when warmed by solar radiation, although recent observations suggest a continuum between these types of bodies.
Of the roughly one million known asteroids, the greatest number are located between the orbits of Mars and Jupiter, approximately 2 to 4 AU from the Sun, in a region known as the main asteroid belt. The total mass of all the asteroids combined is only 3% that of Earth's Moon. The majority of main belt asteroids follow slightly elliptical, stable orbits, revolving in the same direction as the Earth and taking from three to six years to complete a full circuit of the Sun.
Asteroids have historically been observed from Earth. The first close-up observation of an asteroid was made by the Galileo spacecraft. Several dedicated missions to asteroids were subsequently launched by NASA and JAXA, with plans for other missions in progress. NASA's NEAR Shoemaker studied Eros, and Dawn observed Vesta and Ceres. JAXA's missions Hayabusa and Hayabusa2 studied and returned samples of Itokawa and Ryugu, respectively. OSIRIS-REx studied Bennu, collecting a sample in 2020 which was delivered back to Earth in 2023. NASA's Lucy, launched in 2021, is tasked with studying ten different asteroids, two from the main belt and eight Jupiter trojans. Psyche, launched October 2023, aims to study the metallic asteroid Psyche.
Near-Earth asteroids have the potential for catastrophic consequences if they strike Earth, with a notable example being the Chicxulub impact, widely thought to have induced the Cretaceous–Paleogene mass extinction. As an experiment to meet this danger, in September 2022 the Double Asteroid Redirection Test spacecraft successfully altered the orbit of the non-threatening asteroid Dimorphos by crashing into it.
Terminology
In 2006, the International Astronomical Union (IAU) introduced the currently preferred broad term small Solar System body, defined as an object in the Solar System that is neither a planet, a dwarf planet, nor a natural satellite; this includes asteroids, comets, and more recently discovered classes. According to IAU, "the term 'minor planet' may still be used, but generally, 'Small Solar System Body' will be preferred."
Historically, the first discovered asteroid, Ceres, was at first considered a new planet. It was followed by the discovery of other similar bodies, which with the equipment of the time appeared to be points of light like stars, showing little or no planetary disc, though readily distinguishable from stars due to their apparent motions. This prompted the astronomer Sir William Herschel to propose the term asteroid, coined in Greek as ἀστεροειδής, or asteroeidēs, meaning 'star-like, star-shaped', and derived from the Ancient Greek astēr 'star, planet'. In the early second half of the 19th century, the terms asteroid and planet (not always qualified as "minor") were still used interchangeably.
Traditionally, small bodies orbiting the Sun were classified as comets, asteroids, or meteoroids, with anything smaller than one meter across being called a meteoroid. The term asteroid, never officially defined, can be informally used to mean "an irregularly shaped rocky body orbiting the Sun that does not qualify as a planet or a dwarf planet under the IAU definitions". The main difference between an asteroid and a comet is that a comet shows a coma (tail) due to sublimation of its near-surface ices by solar radiation. A few objects were first classified as minor planets but later showed evidence of cometary activity. Conversely, some (perhaps all) comets are eventually depleted of their surface volatile ices and become asteroid-like. A further distinction is that comets typically have more eccentric orbits than most asteroids; highly eccentric asteroids are probably dormant or extinct comets.
The minor planets beyond Jupiter's orbit are sometimes also called "asteroids", especially in popular presentations. However, it is becoming increasingly common for the term asteroid to be restricted to minor planets of the inner Solar System. Therefore, this article will restrict itself for the most part to the classical asteroids: objects of the asteroid belt, Jupiter trojans, and near-Earth objects.
For almost two centuries after the discovery of Ceres in 1801, all known asteroids spent most of their time at or within the orbit of Jupiter, though a few, such as 944 Hidalgo, ventured farther for part of their orbit. Starting in 1977 with 2060 Chiron, astronomers discovered small bodies that permanently resided further out than Jupiter, now called centaurs. In 1992, 15760 Albion was discovered, the first object beyond the orbit of Neptune (other than Pluto); soon large numbers of similar objects were observed, now called trans-Neptunian objects. Further out are Kuiper-belt objects, scattered-disc objects, and the much more distant Oort cloud, hypothesized to be the main reservoir of dormant comets. They inhabit the cold outer reaches of the Solar System where ices remain solid and comet-like bodies exhibit little cometary activity; if centaurs or trans-Neptunian objects were to venture close to the Sun, their volatile ices would sublimate, and traditional approaches would classify them as comets.
The Kuiper-belt bodies are called "objects" partly to avoid the need to classify them as asteroids or comets. They are thought to be predominantly comet-like in composition, though some may be more akin to asteroids. Most do not have the highly eccentric orbits associated with comets, and the ones so far discovered are larger than traditional comet nuclei. Other recent observations, such as the analysis of the cometary dust collected by the Stardust probe, are increasingly blurring the distinction between comets and asteroids, suggesting "a continuum between asteroids and comets" rather than a sharp dividing line.
In 2006, the IAU created the class of dwarf planets for the largest minor planets—those massive enough to have become ellipsoidal under their own gravity. Only the largest object in the asteroid belt has been placed in this category: Ceres, at about 940 km across.
History of observations
Despite their large numbers, asteroids are a relatively recent discovery, with the first one—Ceres—only being identified in 1801. Only one asteroid, 4 Vesta, which has a relatively reflective surface, is normally visible to the naked eye in dark skies when it is favorably positioned. Rarely, small asteroids passing close to Earth may be briefly visible to the naked eye. As of the early 2020s, the Minor Planet Center had data on 1,199,224 minor planets in the inner and outer Solar System, of which about 614,690 had enough information to be given numbered designations.
Discovery of Ceres
In 1772, German astronomer Johann Elert Bode, citing Johann Daniel Titius, published a numerical procession known as the Titius–Bode law (now discredited). Except for an unexplained gap between Mars and Jupiter, Bode's formula seemed to predict the orbits of the known planets. He wrote the following explanation for the existence of a "missing planet":
This latter point seems in particular to follow from the astonishing relation which the known six planets observe in their distances from the Sun. Let the distance from the Sun to Saturn be taken as 100, then Mercury is separated by 4 such parts from the Sun. Venus is 4 + 3 = 7. The Earth 4 + 6 = 10. Mars 4 + 12 = 16. Now comes a gap in this so orderly progression. After Mars there follows a space of 4 + 24 = 28 parts, in which no planet has yet been seen. Can one believe that the Founder of the universe had left this space empty? Certainly not. From here we come to the distance of Jupiter by 4 + 48 = 52 parts, and finally to that of Saturn by 4 + 96 = 100 parts.
Bode's formula predicted another planet would be found with an orbital radius near 2.8 astronomical units (AU), or 420 million km, from the Sun. The Titius–Bode law got a boost with William Herschel's discovery of Uranus near the predicted distance for a planet beyond Saturn. In 1800, a group headed by Franz Xaver von Zach, editor of the German astronomical journal Monatliche Correspondenz (Monthly Correspondence), sent requests to 24 experienced astronomers (whom he dubbed the "celestial police"), asking that they combine their efforts and begin a methodical search for the expected planet. Although they did not discover Ceres, they later found the asteroids 2 Pallas, 3 Juno and 4 Vesta.
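Bode's progression is easy to reproduce: in his units (Saturn = 100) the added term doubles at each step after Mercury. A small Python sketch (the body labels are ours):

# Titius-Bode progression: 4, then 4 + 3 * 2**n for n = 0, 1, 2, ...
distances = [4] + [4 + 3 * 2 ** n for n in range(6)]
bodies = ["Mercury", "Venus", "Earth", "Mars", "(gap)", "Jupiter", "Saturn"]
for body, d in zip(bodies, distances):
    print(body, d, d / 10, "AU")  # the gap entry, 28 (2.8 AU), is where Ceres was found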
One of the astronomers selected for the search was Giuseppe Piazzi, a Catholic priest at the Academy of Palermo, Sicily. Before receiving his invitation to join the group, Piazzi discovered Ceres on 1 January 1801. He was searching for "the 87th [star] of the Catalogue of the Zodiacal stars of Mr la Caille", but found that "it was preceded by another". Instead of a star, Piazzi had found a moving star-like object, which he first thought was a comet:
The light was a little faint, and of the colour of Jupiter, but similar to many others which generally are reckoned of the eighth magnitude. Therefore I had no doubt of its being any other than a fixed star. [...] The evening of the third, my suspicion was converted into certainty, being assured it was not a fixed star. Nevertheless before I made it known, I waited till the evening of the fourth, when I had the satisfaction to see it had moved at the same rate as on the preceding days.
Piazzi observed Ceres a total of 24 times, the final time on 11 February 1801, when illness interrupted his work. He announced his discovery on 24 January 1801 in letters to only two fellow astronomers, his compatriot Barnaba Oriani of Milan and Bode in Berlin. He reported it as a comet but "since its movement is so slow and rather uniform, it has occurred to me several times that it might be something better than a comet". In April, Piazzi sent his complete observations to Oriani, Bode, and French astronomer Jérôme Lalande. The information was published in the September 1801 issue of the Monatliche Correspondenz.
By this time, the apparent position of Ceres had changed (mostly due to Earth's motion around the Sun), and was too close to the Sun's glare for other astronomers to confirm Piazzi's observations. Toward the end of the year, Ceres should have been visible again, but after such a long time it was difficult to predict its exact position. To recover Ceres, mathematician Carl Friedrich Gauss, then 24 years old, developed an efficient method of orbit determination. In a few weeks, he predicted the path of Ceres and sent his results to von Zach. On 31 December 1801, von Zach and fellow celestial policeman Heinrich W. M. Olbers found Ceres near the predicted position and thus recovered it. At 2.8 AU from the Sun, Ceres appeared to fit the Titius–Bode law almost perfectly; however, when Neptune was discovered in 1846, it was 8 AU closer than the law predicted, leading most astronomers to conclude that the law was a coincidence. Piazzi named the newly discovered object Ceres Ferdinandea, "in honor of the patron goddess of Sicily and of King Ferdinand of Bourbon".
Further search
Three other asteroids (2 Pallas, 3 Juno, and 4 Vesta) were discovered by von Zach's group over the next few years, with Vesta found in 1807. No new asteroids were discovered until 1845. Amateur astronomer Karl Ludwig Hencke started his search for new asteroids in 1830, and fifteen years later, while looking for Vesta, he found the asteroid later named 5 Astraea. It was the first new asteroid discovery in 38 years. Carl Friedrich Gauss was given the honor of naming the asteroid. After this, other astronomers joined; 15 asteroids were found by the end of 1851. In 1868, when James Craig Watson discovered the 100th asteroid, the French Academy of Sciences engraved the faces of Karl Theodor Robert Luther, John Russell Hind, and Hermann Goldschmidt, the three most successful asteroid-hunters at that time, on a commemorative medallion marking the event.
In 1891, Max Wolf pioneered the use of astrophotography to detect asteroids, which appeared as short streaks on long-exposure photographic plates. This dramatically increased the rate of detection compared with earlier visual methods: Wolf alone discovered 248 asteroids, beginning with 323 Brucia, whereas only slightly more than 300 had been discovered up to that point. It was known that there were many more, but most astronomers did not bother with them, some calling them "vermin of the skies", a phrase variously attributed to Eduard Suess and Edmund Weiss. Even a century later, only a few thousand asteroids were identified, numbered and named.
19th and 20th centuries
In the past, asteroids were discovered by a four-step process. First, a region of the sky was photographed by a wide-field telescope or astrograph. Pairs of photographs were taken, typically one hour apart. Multiple pairs could be taken over a series of days. Second, the two films or plates of the same region were viewed under a stereoscope. A body in orbit around the Sun would move slightly between the pair of films. Under the stereoscope, the image of the body would seem to float slightly above the background of stars. Third, once a moving body was identified, its location would be measured precisely using a digitizing microscope. The location would be measured relative to known star locations.
These first three steps do not constitute asteroid discovery: the observer has only found an apparition, which receives a provisional designation made up of the year of discovery, a letter representing the half-month of discovery, and finally a letter and a number indicating the discovery's sequential number. The last step is sending the locations and time of observations to the Minor Planet Center, where computer programs determine whether an apparition ties together earlier apparitions into a single orbit. If so, the object receives a catalogue number and the observer of the first apparition with a calculated orbit is declared the discoverer, and granted the honor of naming the object subject to the approval of the International Astronomical Union.
Naming
By 1851, the Royal Astronomical Society decided that asteroids were being discovered at such a rapid rate that a different system was needed to categorize or name asteroids. In 1852, when de Gasparis discovered the twentieth asteroid, Benjamin Valz gave it a name and a number designating its rank among asteroid discoveries, 20 Massalia. Sometimes asteroids were discovered and not seen again. So, starting in 1892, new asteroids were listed by the year and a capital letter indicating the order in which the asteroid's orbit was calculated and registered within that specific year. For example, the first two asteroids discovered in 1892 were labeled 1892A and 1892B. However, there were not enough letters in the alphabet for all of the asteroids discovered in 1893, so 1893Z was followed by 1893AA. A number of variations of these methods were tried, including designations that included year plus a Greek letter in 1914. A simple chronological numbering system was established in 1925.
Currently all newly discovered asteroids receive a provisional designation consisting of the year of discovery and an alphanumeric code indicating the half-month of discovery and the sequence within that half-month. Once an asteroid's orbit has been confirmed, it is given a number, and later may also be given a name. The formal naming convention uses parentheses around the number—e.g. (433) Eros—but dropping the parentheses is quite common. Informally, it is also common to drop the number altogether, or to drop it after the first mention when a name is repeated in running text. In addition, names can be proposed by the asteroid's discoverer, within guidelines established by the International Astronomical Union.
Symbols
The first asteroids to be discovered were assigned iconic symbols like the ones traditionally used to designate the planets. By 1852 there were two dozen asteroid symbols, which often occurred in multiple variants.
In 1851, after the fifteenth asteroid, Eunomia, had been discovered, Johann Franz Encke made a major change in the upcoming 1854 edition of the Berliner Astronomisches Jahrbuch (BAJ, Berlin Astronomical Yearbook). He introduced a disk (circle), a traditional symbol for a star, as the generic symbol for an asteroid. The circle was then numbered in order of discovery to indicate a specific asteroid. The numbered-circle convention was quickly adopted by astronomers, and the next asteroid to be discovered (16 Psyche, in 1852) was the first to be designated in that way at the time of its discovery. However, Psyche was given an iconic symbol as well, as were a few other asteroids discovered over the next few years. 20 Massalia was the first asteroid that was not assigned an iconic symbol, and no iconic symbols were created after the 1855 discovery of 37 Fides.
Formation
Many asteroids are the shattered remnants of planetesimals, bodies within the young Sun's solar nebula that never grew large enough to become planets. It is thought that planetesimals in the asteroid belt evolved much like the rest of objects in the solar nebula until Jupiter neared its current mass, at which point excitation from orbital resonances with Jupiter ejected over 99% of planetesimals in the belt. Simulations and a discontinuity in spin rate and spectral properties suggest that asteroids larger than approximately 120 km in diameter accreted during that early era, whereas smaller bodies are fragments from collisions between asteroids during or after the Jovian disruption. Ceres and Vesta grew large enough to melt and differentiate, with heavy metallic elements sinking to the core, leaving rocky minerals in the crust.
In the Nice model, many Kuiper-belt objects are captured in the outer asteroid belt, at distances greater than 2.6 AU. Most were later ejected by Jupiter, but those that remained may be the D-type asteroids, and possibly include Ceres.
Distribution within the Solar System
Various dynamical groups of asteroids have been discovered orbiting in the inner Solar System. Their orbits are perturbed by the gravity of other bodies in the Solar System and by the Yarkovsky effect. Significant populations include:
Asteroid belt
The majority of known asteroids orbit within the asteroid belt between the orbits of Mars and Jupiter, generally in relatively low-eccentricity (i.e. not very elongated) orbits. This belt is estimated to contain between 1.1 and 1.9 million asteroids larger than 1 km in diameter, and millions of smaller ones. These asteroids may be remnants of the protoplanetary disk, and in this region the accretion of planetesimals into planets during the formative period of the Solar System was prevented by large gravitational perturbations by Jupiter.
Contrary to popular imagery, the asteroid belt is mostly empty. The asteroids are spread over such a large volume that reaching an asteroid without aiming carefully would be improbable. Nonetheless, hundreds of thousands of asteroids are currently known, and the total number ranges in the millions or more, depending on the lower size cutoff. Over 200 asteroids are known to be larger than 100 km, and a survey in the infrared wavelengths has shown that the asteroid belt has between 700,000 and 1.7 million asteroids with a diameter of 1 km or more. The absolute magnitudes of most of the known asteroids are between 11 and 19, with the median at about 16.
The total mass of the asteroid belt is estimated to be about 2.4×10^21 kg, which is just 3% of the mass of the Moon; the mass of the Kuiper Belt and Scattered Disk is over 100 times as large. The four largest objects, Ceres, Vesta, Pallas, and Hygiea, account for perhaps 62% of the belt's total mass, with 39% accounted for by Ceres alone.
Trojans
Trojans are populations that share an orbit with a larger planet or moon, but do not collide with it because they orbit in one of the two Lagrangian points of stability, L4 and L5, which lie 60° ahead of and behind the larger body.
In the Solar System, most known trojans share the orbit of Jupiter. They are divided into the Greek camp at L4 (ahead of Jupiter) and the Trojan camp at L5 (trailing Jupiter). More than a million Jupiter trojans larger than one kilometer are thought to exist, of which more than 7,000 are currently catalogued. In other planetary orbits only nine Mars trojans, 28 Neptune trojans, two Uranus trojans, and two Earth trojans have been found to date. A temporary Venus trojan is also known. Numerical orbital dynamics stability simulations indicate that Saturn and Uranus probably do not have any primordial trojans.
Near-Earth asteroids
Near-Earth asteroids, or NEAs, are asteroids that have orbits that pass close to that of Earth. Asteroids that actually cross Earth's orbital path are known as Earth-crossers. As of the early 2020s, a total of 28,772 near-Earth asteroids were known; 878 have a diameter of one kilometer or larger.
A small number of NEAs are extinct comets that have lost their volatile surface materials, although having a faint or intermittent comet-like tail does not necessarily result in a classification as a near-Earth comet, making the boundaries somewhat fuzzy. The rest of the near-Earth asteroids are driven out of the asteroid belt by gravitational interactions with Jupiter.
Many asteroids have natural satellites (minor-planet moons). As of the early 2020s, 85 NEAs were known to have at least one moon, including three known to have two moons. The asteroid 3122 Florence, one of the largest potentially hazardous asteroids, has two moons, which were discovered by radar imaging during the asteroid's 2017 approach to Earth.
Near-Earth asteroids are divided into groups based on their semi-major axis (a), perihelion distance (q), and aphelion distance (Q); a short classifier sketch follows the list:
The Atiras or Apoheles have orbits strictly inside Earth's orbit: an Atira asteroid's aphelion distance (Q) is smaller than Earth's perihelion distance (0.983 AU). That is, Q < 0.983 AU, which implies that the asteroid's semi-major axis is also less than 0.983 AU.
The Atens have a semi-major axis of less than 1 AU and cross Earth's orbit. Mathematically, a < 1.0 AU and Q > 0.983 AU. (0.983 AU is Earth's perihelion distance.)
The Apollos have a semi-major axis of more than 1 AU and cross Earth's orbit. Mathematically, a > 1.0 AU and q < 1.017 AU. (1.017 AU is Earth's aphelion distance.)
The Amors have orbits strictly outside Earth's orbit: an Amor asteroid's perihelion distance (q) is greater than Earth's aphelion distance (1.017 AU). Amor asteroids are also near-Earth objects, so q < 1.3 AU. In summary, 1.017 AU < q < 1.3 AU. (This implies that the asteroid's semi-major axis (a) is also larger than 1.017 AU.) Some Amor asteroid orbits cross the orbit of Mars.
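A minimal Python classifier implementing the thresholds above (the function name and the sample orbit are ours; the q < 1.3 AU near-Earth cutoff is restated from the Amor entry):

def nea_group(a, q, Q):
    # a: semi-major axis, q: perihelion, Q: aphelion, all in AU.
    # Earth's perihelion is 0.983 AU and its aphelion 1.017 AU.
    if Q < 0.983:
        return "Atira"   # orbit entirely inside Earth's
    if a < 1.0:
        return "Aten"    # a < 1 AU, but Q > 0.983 AU: Earth-crossing
    if q < 1.017:
        return "Apollo"  # a > 1 AU and dips inside Earth's aphelion
    return "Amor"        # 1.017 AU < q < 1.3 AU: approaches but never crosses

print(nea_group(a=0.90, q=0.75, Q=0.97))  # hypothetical orbit: Atira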
Martian moons
It is unclear whether the Martian moons Phobos and Deimos are captured asteroids or formed by an impact event on Mars. Phobos and Deimos both have much in common with carbonaceous C-type asteroids, with spectra, albedo, and density very similar to those of C- or D-type asteroids. Based on this similarity, one hypothesis is that both moons may be captured main-belt asteroids. Both moons have very circular orbits lying almost exactly in Mars's equatorial plane; a capture origin therefore requires a mechanism for circularizing the initially highly eccentric orbit and adjusting its inclination into the equatorial plane, most probably by a combination of atmospheric drag and tidal forces, although it is not clear whether sufficient time was available for this to occur for Deimos. Capture also requires dissipation of energy; the current Martian atmosphere is too thin to capture a Phobos-sized object by atmospheric braking. Geoffrey A. Landis has pointed out that capture could have occurred if the original body was a binary asteroid that separated under tidal forces.
Phobos could be a second-generation Solar System object that coalesced in orbit after Mars formed, rather than forming concurrently out of the same birth cloud as Mars.
Another hypothesis is that Mars was once surrounded by many Phobos- and Deimos-sized bodies, perhaps ejected into orbit around it by a collision with a large planetesimal. The high porosity of the interior of Phobos (based on the density of 1.88 g/cm3, voids are estimated to comprise 25 to 35 percent of Phobos's volume) is inconsistent with an asteroidal origin. Observations of Phobos in the thermal infrared suggest a composition containing mainly phyllosilicates, which are well known from the surface of Mars. The spectra are distinct from those of all classes of chondrite meteorites, again pointing away from an asteroidal origin. Both sets of findings support an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's moon.
Characteristics
Size distribution
Asteroids vary greatly in size, from almost 1,000 km for the largest down to rocks just 1 meter across, below which an object is classified as a meteoroid. The three largest are very much like miniature planets: they are roughly spherical, have at least partly differentiated interiors, and are thought to be surviving protoplanets. The vast majority, however, are much smaller and are irregularly shaped; they are thought to be either battered planetesimals or fragments of larger bodies.
The dwarf planet Ceres is by far the largest asteroid, with a diameter of about 940 km. The next largest are 4 Vesta and 2 Pallas, both with diameters of just over 500 km. Vesta is the brightest of the four main-belt asteroids that can, on occasion, be visible to the naked eye. On some rare occasions, a near-Earth asteroid may briefly become visible without technical aid; see 99942 Apophis.
The mass of all the objects of the asteroid belt, lying between the orbits of Mars and Jupiter, is estimated to be about 2.4×10^21 kg, ≈ 3.25% of the mass of the Moon. Of this, Ceres comprises about 9.4×10^20 kg, some 40% of the total. Adding in the next three most massive objects, Vesta (11%), Pallas (8.5%), and Hygiea (3–4%), brings this figure up to a bit over 60%, whereas the next seven most-massive asteroids bring the total up to 70%. The number of asteroids increases rapidly as their individual masses decrease.
The number of asteroids decreases markedly with increasing size. Although the size distribution generally follows a power law, there are 'bumps' at about 5 km and 100 km, where more asteroids than expected from such a curve are found. Most asteroids larger than approximately 120 km in diameter are primordial (surviving from the accretion epoch), whereas most smaller asteroids are products of fragmentation of primordial asteroids. The primordial population of the main belt was probably 200 times what it is today.
Largest asteroids
The three largest objects in the asteroid belt, Ceres, Vesta, and Pallas, are intact protoplanets that share many characteristics common to planets and are atypical compared with the majority of irregularly shaped asteroids. The fourth-largest asteroid, Hygiea, appears nearly spherical although it may have an undifferentiated interior, like the majority of asteroids. Together, the four largest asteroids constitute half the mass of the asteroid belt.
Ceres is the only asteroid that appears to have a plastic shape under its own gravity and hence the only one that is a dwarf planet. It has a much brighter absolute magnitude than the other asteroids, of around 3.32, and may possess a surface layer of ice. Like the planets, Ceres is differentiated: it has a crust, a mantle, and a core. No meteorites from Ceres have been found on Earth.
Vesta, too, has a differentiated interior, though it formed inside the Solar System's frost line, and so is devoid of water; its composition is mainly of basaltic rock with minerals such as olivine. Aside from the large crater at its southern pole, Rheasilvia, Vesta also has an ellipsoidal shape. Vesta is the parent body of the Vestian family and other V-type asteroids, and is the source of the HED meteorites, which constitute 5% of all meteorites on Earth.
Pallas is unusual in that, like Uranus, it rotates on its side, with its axis of rotation tilted at high angles to its orbital plane. Its composition is similar to that of Ceres: high in carbon and silicon, and perhaps partially differentiated. Pallas is the parent body of the Palladian family of asteroids.
Hygiea is the largest carbonaceous asteroid and, unlike the other largest asteroids, lies relatively close to the plane of the ecliptic. It is the largest member and presumed parent body of the Hygiean family of asteroids. Because there is no sufficiently large crater on the surface to be the source of that family, as there is on Vesta, it is thought that Hygiea may have been completely disrupted in the collision that formed the Hygiean family and recoalesced after losing a bit less than 2% of its mass. Observations taken with the Very Large Telescope's SPHERE imager in 2017 and 2018 revealed that Hygiea has a nearly spherical shape, consistent either with its being in (or formerly having been in) hydrostatic equilibrium, or with its having been disrupted and recoalesced.
Internal differentiation of large asteroids is possibly related to their lack of natural satellites, as satellites of main belt asteroids are mostly believed to form from collisional disruption, creating a rubble pile structure.
Rotation
Measurements of the rotation rates of large asteroids in the asteroid belt show that there is an upper limit. Very few asteroids with a diameter larger than 100 meters have a rotation period less than 2.2 hours. For asteroids rotating faster than approximately this rate, the inertial force at the surface is greater than the gravitational force, so any loose surface material would be flung out. However, a solid object should be able to rotate much more rapidly. This suggests that most asteroids with a diameter over 100 meters are rubble piles formed through the accumulation of debris after collisions between asteroids.
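The 2.2-hour limit follows from a simple balance: a rubble pile sheds material once the centrifugal acceleration at its equator exceeds its self-gravity, which for a sphere of density ρ happens below the critical period P = sqrt(3π / (Gρ)). A quick Python check, assuming a typical density of 2000 kg/m³ (our choice):

import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
rho = 2000     # assumed bulk density, kg/m^3
period = math.sqrt(3 * math.pi / (G * rho))  # critical spin period, seconds
print(period / 3600)  # about 2.3 hours, close to the observed 2.2-hour limit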
Color
Asteroids become darker and redder with age due to space weathering. However evidence suggests most of the color change occurs rapidly, in the first hundred thousand years, limiting the usefulness of spectral measurement for determining the age of asteroids.
Surface features
Except for the "big four" (Ceres, Pallas, Vesta, and Hygiea), asteroids are likely to be broadly similar in appearance, if irregular in shape. 253 Mathilde is a rubble pile saturated with craters with diameters the size of the asteroid's radius. Earth-based observations of 511 Davida, one of the largest asteroids after the big four, reveal a similarly angular profile, suggesting it is also saturated with radius-size craters. Medium-sized asteroids such as Mathilde and 243 Ida, that have been observed up close, also reveal a deep regolith covering the surface. Of the big four, Pallas and Hygiea are practically unknown. Vesta has compression fractures encircling a radius-size crater at its south pole but is otherwise a spheroid.
The Dawn spacecraft revealed that Ceres has a heavily cratered surface, but with fewer large craters than expected. Models based on the formation of the current asteroid belt had suggested Ceres should possess 10 to 15 craters larger than about 400 km in diameter. The largest confirmed crater on Ceres, the Kerwan Basin, is about 280 km across. The most likely reason for this is viscous relaxation of the crust slowly flattening out larger impacts.
Composition
Asteroids are classified by their characteristic reflectance spectra, with the majority falling into three main groups: C-type, M-type, and S-type. These describe carbonaceous (carbon-rich), metallic, and silicaceous (stony) compositions, respectively. The physical composition of asteroids is varied and in most cases poorly understood. Ceres appears to be composed of a rocky core covered by an icy mantle; Vesta is thought to have a nickel-iron core, olivine mantle, and basaltic crust. Thought to be the largest undifferentiated asteroid, 10 Hygiea seems to have a uniformly primitive composition of carbonaceous chondrite, but it may actually be a differentiated asteroid that was globally disrupted by an impact and then reassembled. Other asteroids appear to be the remnant cores or mantles of proto-planets, high in rock and metal. Most small asteroids are believed to be piles of rubble held together loosely by gravity, although the largest are probably solid. Some asteroids have moons or are co-orbiting binaries: rubble piles, moons, binaries, and scattered asteroid families are thought to be the results of collisions that disrupted a parent asteroid, or possibly a planet.
In the main asteroid belt, there appear to be two primary populations of asteroid: a dark, volatile-rich population, consisting of the C-type and P-type asteroids, with albedos less than 0.10 and densities under about 2.2 g/cm³, and a dense, volatile-poor population, consisting of the S-type and M-type asteroids, with albedos over 0.15 and densities greater than 2.7 g/cm³. Within these populations, larger asteroids are denser, presumably due to compression. There appears to be minimal macro-porosity (interstitial vacuum) in the twenty or so most massive asteroids.
Composition is calculated from three primary sources: albedo, surface spectrum, and density. The last can only be determined accurately by observing the orbits of moons the asteroid might have. So far, every asteroid with moons has turned out to be a rubble pile, a loose conglomeration of rock and metal that may be half empty space by volume. The investigated asteroids are as large as 280 km in diameter, and include 121 Hermione (268×186×183 km) and 87 Sylvia (384×262×232 km). Few asteroids are larger than 87 Sylvia, and none of those have moons. The fact that such large asteroids as Sylvia may be rubble piles, presumably due to disruptive impacts, has important consequences for the formation of the Solar System: computer simulations of collisions involving solid bodies show them destroying each other as often as merging, but colliding rubble piles are more likely to merge. This means that the cores of the planets could have formed relatively quickly.
Water
Scientists hypothesize that some of the first water brought to Earth was delivered by asteroid impacts after the collision that produced the Moon. In 2009, the presence of water ice was confirmed on the surface of 24 Themis using NASA's Infrared Telescope Facility. The surface of the asteroid appears completely covered in ice. As this ice layer is sublimating, it may be getting replenished by a reservoir of ice under the surface. Organic compounds were also detected on the surface. The presence of ice on 24 Themis makes the initial theory plausible.
In October 2013, water was detected on an extrasolar body for the first time, on an asteroid orbiting the white dwarf GD 61. On 22 January 2014, European Space Agency (ESA) scientists reported the detection, for the first definitive time, of water vapor on Ceres, the largest object in the asteroid belt. The detection was made by using the far-infrared abilities of the Herschel Space Observatory. The finding is unexpected because comets, not asteroids, are typically considered to "sprout jets and plumes". According to one of the scientists, "The lines are becoming more and more blurred between comets and asteroids."
Findings have shown that solar winds can react with the oxygen in the upper layer of asteroids and create water. It has been estimated that "every cubic metre of irradiated rock could contain up to 20 litres"; the study, conducted using atom probe tomography, gives these figures for the S-type asteroid Itokawa.
Acfer 049, a meteorite discovered in Algeria in 1990, was shown in 2019 to have an ultraporous lithology (UPL): a porous texture that could have formed through the removal of ice that once filled the pores, suggesting that UPLs "represent fossils of primordial ice".
Organic compounds
Asteroids contain traces of amino acids and other organic compounds, and some speculate that asteroid impacts may have seeded the early Earth with the chemicals necessary to initiate life, or may have even brought life itself to Earth (an event called "panspermia"). In August 2011, a report, based on NASA studies with meteorites found on Earth, was published suggesting DNA and RNA components (adenine, guanine and related organic molecules) may have been formed on asteroids and comets in outer space.
In November 2019, scientists reported detecting, for the first time, sugar molecules, including ribose, in meteorites, suggesting that chemical processes on asteroids can produce some fundamentally essential bio-ingredients important to life, and supporting the notion of an RNA world prior to a DNA-based origin of life on Earth, and possibly, as well, the notion of panspermia.
Classification
Asteroids are commonly categorized according to two criteria: the characteristics of their orbits, and features of their reflectance spectrum.
Orbital classification
Many asteroids have been placed in groups and families based on their orbital characteristics. Apart from the broadest divisions, it is customary to name a group of asteroids after the first member of that group to be discovered. Groups are relatively loose dynamical associations, whereas families are tighter and result from the catastrophic break-up of a large parent asteroid sometime in the past. Families are more common and easier to identify within the main asteroid belt, but several small families have been reported among the Jupiter trojans. Main belt families were first recognized by Kiyotsugu Hirayama in 1918 and are often called Hirayama families in his honor.
About 30–35% of the bodies in the asteroid belt belong to dynamical families, each thought to have a common origin in a past collision between asteroids. A family has also been associated with the plutoid dwarf planet Haumea.
Some asteroids have unusual horseshoe orbits that are co-orbital with Earth or another planet. An example is 3753 Cruithne. The first instance of this type of orbital arrangement was discovered between Saturn's moons Epimetheus and Janus. Sometimes these horseshoe objects temporarily become quasi-satellites for a few decades or a few hundred years, before returning to their earlier status. Both Earth and Venus are known to have quasi-satellites.
Such objects, if associated with Earth or Venus or even hypothetically Mercury, are a special class of Aten asteroids. However, such objects could be associated with the outer planets as well.
Spectral classification
In 1975, an asteroid taxonomic system based on color, albedo, and spectral shape was developed by Chapman, Morrison, and Zellner. These properties are thought to correspond to the composition of the asteroid's surface material. The original classification system had three categories: C-types for dark carbonaceous objects (75% of known asteroids), S-types for stony (silicaceous) objects (17% of known asteroids) and U for those that did not fit into either C or S. This classification has since been expanded to include many other asteroid types. The number of types continues to grow as more asteroids are studied.
The two taxonomies now in widest use are the Tholen classification and the SMASS classification. The former was proposed in 1984 by David J. Tholen and was based on data collected from an eight-color asteroid survey performed in the 1980s, resulting in 14 asteroid categories. In 2002, the Small Main-Belt Asteroid Spectroscopic Survey produced a modified version of the Tholen taxonomy with 24 different types. Both systems have three broad categories of C, S, and X asteroids, where X consists mostly of metallic asteroids, such as the M-type. There are also several smaller classes.
The proportion of known asteroids falling into the various spectral types does not necessarily reflect the proportion of all asteroids that are of that type; some types are easier to detect than others, biasing the totals.
Problems
Originally, spectral designations were based on inferences of an asteroid's composition. However, the correspondence between spectral class and composition is not always very good, and a variety of classifications are in use. This has led to significant confusion. Although asteroids of different spectral classifications are likely to be composed of different materials, there are no assurances that asteroids within the same taxonomic class are composed of the same (or similar) materials.
Active asteroids
Active asteroids are objects that have asteroid-like orbits but show comet-like visual characteristics. That is, they show comae, tails, or other visual evidence of mass-loss (like a comet), but their orbit remains within Jupiter's orbit (like an asteroid). These bodies were originally designated main-belt comets (MBCs) in 2006 by astronomers David Jewitt and Henry Hsieh, but this name implies they are necessarily icy in composition like a comet and that they only exist within the main-belt, whereas the growing population of active asteroids shows that this is not always the case.
The first active asteroid discovered is 7968 Elst–Pizarro. It was discovered (as an asteroid) in 1979 but then was found to have a tail by Eric Elst and Guido Pizarro in 1996 and given the cometary designation 133P/Elst-Pizarro. Another notable object is 311P/PanSTARRS: observations made by the Hubble Space Telescope revealed that it had six comet-like tails. The tails are suspected to be streams of material ejected by the asteroid as a result of a rubble pile asteroid spinning fast enough to remove material from it.
By smashing into the asteroid Dimorphos, NASA's Double Asteroid Redirection Test spacecraft made it an active asteroid. Scientists had proposed that some active asteroids are the result of impact events, but no one had ever observed the activation of an asteroid. The DART mission activated Dimorphos under precisely known and carefully observed impact conditions, enabling the detailed study of the formation of an active asteroid for the first time. Observations show that Dimorphos lost approximately 1 million kilograms after the collision. The impact produced a dust plume that temporarily brightened the Didymos system, as well as an extended dust tail that persisted for several months.
Observation and exploration
Until the age of space travel, objects in the asteroid belt could only be observed with large telescopes, their shapes and terrain remaining a mystery. The best modern ground-based telescopes and the Earth-orbiting Hubble Space Telescope can only resolve a small amount of detail on the surfaces of the largest asteroids. Limited information about the shapes and compositions of asteroids can be inferred from their light curves (variation in brightness during rotation) and their spectral properties. Sizes can be estimated by timing the lengths of star occultations (when an asteroid passes directly in front of a star). Radar imaging can yield good information about asteroid shapes and orbital and rotational parameters, especially for near-Earth asteroids. Spacecraft flybys can provide much more data than any ground- or space-based observations, and sample-return missions give insight into regolith composition.
Ground-based observations
As asteroids are rather small and faint objects, the data that can be obtained from ground-based observations (GBO) are limited. By means of ground-based optical telescopes the visual magnitude can be obtained; when converted into the absolute magnitude, it gives a rough estimate of the asteroid's size. Light-curve measurements can also be made by GBO; when collected over a long period of time, they allow an estimate of the rotational period, the pole orientation (sometimes), and a rough estimate of the asteroid's shape. Spectral data (both visible-light and near-infrared spectroscopy) gives information about the object's composition, used to classify the observed asteroids. Such observations are limited as they provide information about only the thin surface layer (up to several micrometers). As planetologist Patrick Michel writes:
Mid- to thermal-infrared observations, along with polarimetry measurements, are probably the only data that give some indication of actual physical properties. Measuring the heat flux of an asteroid at a single wavelength gives an estimate of the dimensions of the object; these measurements have lower uncertainty than measurements of the reflected sunlight in the visible-light spectral region. If the two measurements can be combined, both the effective diameter and the geometric albedo—the latter being a measure of the brightness at zero phase angle, that is, when illumination comes from directly behind the observer—can be derived. In addition, thermal measurements at two or more wavelengths, plus the brightness in the visible-light region, give information on the thermal properties. The thermal inertia, which is a measure of how fast a material heats up or cools off, of most observed asteroids is lower than the bare-rock reference value but greater than that of the lunar regolith; this observation indicates the presence of an insulating layer of granular material on their surface. Moreover, there seems to be a trend, perhaps related to the gravitational environment, that smaller objects (with lower gravity) have a small regolith layer consisting of coarse grains, while larger objects have a thicker regolith layer consisting of fine grains. However, the detailed properties of this regolith layer are poorly known from remote observations. Moreover, the relation between thermal inertia and surface roughness is not straightforward, so one needs to interpret the thermal inertia with caution.
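To make the magnitude-to-size conversion mentioned above concrete, the following Python sketch applies the standard relation D = 1329 km / sqrt(p_V) x 10^(-H/5), where H is the absolute magnitude and p_V the geometric albedo. The relation is standard; the example values are illustrative, not measurements of any particular object.

import math

def diameter_km(h_magnitude: float, geometric_albedo: float) -> float:
    """Rough asteroid diameter from absolute magnitude H and geometric
    albedo p_V, via the standard relation D = 1329 / sqrt(p_V) * 10**(-H/5)."""
    return 1329.0 / math.sqrt(geometric_albedo) * 10 ** (-h_magnitude / 5.0)

# The unknown albedo dominates the uncertainty: the same H = 15 object
# is ~2.7 km across if bright (p_V ~ 0.25) but ~5.9 km if dark (p_V ~ 0.05),
# which is why optical magnitudes alone give only a rough size.
for p_v in (0.25, 0.05):
    print(f"H = 15, p_V = {p_v}: D ~ {diameter_km(15.0, p_v):.1f} km")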
Near-Earth asteroids that come into close vicinity of the planet can be studied in more detail with radar, which provides information about the asteroid's surface (for example, it can show the presence of craters and boulders). Such observations were conducted by the Arecibo Observatory in Puerto Rico (305-meter dish) and the Goldstone Observatory in California (70-meter dish). Radar observations can also be used for accurate determination of the orbital and rotational dynamics of observed objects.
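The rotation periods mentioned earlier are usually extracted from light curves with a periodogram that tolerates the uneven time sampling of ground-based runs. Below is a minimal illustration on synthetic data, assuming the astropy package is available; the 6-hour period, double-peaked shape, and noise level are hypothetical.

import numpy as np
from astropy.timeseries import LombScargle

# Hypothetical light curve: an elongated asteroid typically shows two
# brightness minima per rotation, so model it as a sinusoid with two
# cycles per 6.0-hour rotation, sampled unevenly over a 48-hour span.
rng = np.random.default_rng(42)
t_hours = np.sort(rng.uniform(0.0, 48.0, 200))
rotation_period_h = 6.0
mag = 0.15 * np.sin(2 * np.pi * 2 * t_hours / rotation_period_h)
mag += rng.normal(0.0, 0.02, t_hours.size)  # photometric noise

# Lomb-Scargle handles the irregular sampling that a plain FFT cannot.
frequency, power = LombScargle(t_hours, mag).autopower()
best_period = 1.0 / frequency[np.argmax(power)]

# For a double-peaked light curve the periodogram peaks at half the
# rotation period, so the physical rotation period is twice the peak.
print(f"periodogram peak: {best_period:.2f} h; rotation: {2 * best_period:.2f} h")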
Space-based observations
Both space- and ground-based observatories have conducted asteroid search programs; the space-based searches are expected to detect more objects because there is no atmosphere to interfere and because they can observe larger portions of the sky. NEOWISE observed more than 100,000 main-belt asteroids, and the Spitzer Space Telescope observed more than 700 near-Earth asteroids. These observations determined rough sizes for the majority of observed objects but provided limited detail about surface properties (such as regolith depth and composition, angle of repose, cohesion, and porosity).
Asteroids have also been studied with the Hubble Space Telescope, which has tracked colliding asteroids in the main belt, observed the break-up of an asteroid, watched an active asteroid with six comet-like tails, and characterized asteroids chosen as targets of dedicated missions.
Space probe missions
According to Patrick Michel:
The internal structure of asteroids is inferred only from indirect evidence: bulk densities measured by spacecraft, the orbits of natural satellites in the case of asteroid binaries, and the drift of an asteroid's orbit due to the Yarkovsky thermal effect. A spacecraft near an asteroid is perturbed enough by the asteroid's gravity to allow an estimate of the asteroid's mass. The volume is then estimated using a model of the asteroid's shape. Mass and volume allow the derivation of the bulk density, whose uncertainty is usually dominated by the errors made on the volume estimate. The internal porosity of asteroids can be inferred by comparing their bulk density with that of their assumed meteorite analogues; dark asteroids seem to be more porous (>40%) than bright ones. The nature of this porosity is unclear.
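A minimal sketch of that bookkeeping, using rounded values reported for 101955 Bennu (mass about 7.33 x 10^10 kg, volume about 6.16 x 10^7 m^3) and a CM-chondrite grain density of roughly 2,920 kg/m^3 as the assumed meteorite analogue; all three inputs are approximations used for illustration.

def bulk_density(mass_kg: float, volume_m3: float) -> float:
    """Bulk density from a spacecraft-derived mass and shape-model volume."""
    return mass_kg / volume_m3

def macroporosity(rho_bulk: float, rho_analogue: float) -> float:
    """Fraction of empty space implied by comparing the bulk density
    with the grain density of an assumed meteorite analogue."""
    return 1.0 - rho_bulk / rho_analogue

rho = bulk_density(7.33e10, 6.16e7)
print(f"bulk density ~ {rho:.0f} kg/m^3")                   # ~1190 kg/m^3
print(f"macroporosity ~ {macroporosity(rho, 2920.0):.0%}")  # ~59%: a dark, porous body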
Dedicated missions
The first asteroid to be photographed in close-up was 951 Gaspra in 1991, followed in 1993 by 243 Ida and its moon Dactyl, all of which were imaged by the Galileo probe en route to Jupiter. Other asteroids briefly visited by spacecraft en route to other destinations include 9969 Braille (by Deep Space 1 in 1999), 5535 Annefrank (by Stardust in 2002), 2867 Šteins (by the Rosetta probe in 2008), 21 Lutetia (by Rosetta in 2010), and 4179 Toutatis (by China's lunar orbiter Chang'e 2, which flew within in 2012).
The first dedicated asteroid probe was NASA's NEAR Shoemaker, which photographed 253 Mathilde in 1997 before entering orbit around 433 Eros and finally landing on its surface in 2001. It was the first spacecraft to successfully orbit and land on an asteroid. From September to November 2005, the Japanese Hayabusa probe studied 25143 Itokawa in detail and returned samples of its surface to Earth on 13 June 2010, the first asteroid sample-return mission. In 2007, NASA launched the Dawn spacecraft, which orbited 4 Vesta for a year and then observed the dwarf planet Ceres for three years.
Hayabusa2, a probe launched by JAXA in 2014, orbited its target asteroid 162173 Ryugu for more than a year and took samples that were delivered to Earth in 2020. The spacecraft is now on an extended mission and is expected to arrive at a new target in 2031.
NASA launched OSIRIS-REx in 2016, a sample-return mission to the asteroid 101955 Bennu. In 2021, the probe departed the asteroid with a sample from its surface, which was delivered to Earth in September 2023. The spacecraft continues on an extended mission, designated OSIRIS-APEX, to explore the near-Earth asteroid Apophis in 2029.
In 2021, NASA launched the Double Asteroid Redirection Test (DART), a mission to test technology for defending Earth against potentially hazardous objects. DART deliberately crashed into Dimorphos, the minor-planet moon of the double asteroid Didymos, in September 2022 to assess the potential of a spacecraft impact to deflect an asteroid from a collision course with Earth. In October, NASA declared DART a success, confirming it had shortened Dimorphos's orbital period around Didymos by about 32 minutes.
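The 32-minute period change corresponds to only a millimetre-per-second-scale change in Dimorphos's orbital speed. The first-order estimate below assumes a near-circular orbit and rounded published values for the pre-impact period (~11.92 h) and orbital speed (~0.17 m/s); it is a sketch, not the mission team's analysis.

# For a near-circular orbit, P ~ a**1.5 and v ~ a**-0.5, so a small
# along-track speed change dv changes the period by dP/P = 3 * dv / v.
P_hours = 11.92          # pre-impact orbital period (approximate)
v_orbit_m_s = 0.17       # orbital speed of Dimorphos (approximate)
dP_hours = 32.0 / 60.0   # observed period change, from the text above

dv = v_orbit_m_s * dP_hours / (3.0 * P_hours)
print(f"implied along-track speed change ~ {dv * 1000:.1f} mm/s")  # ~2.5 mm/s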
NASA's Lucy, launched in 2021, is a multiple-asteroid flyby probe focused on flying by 7 Jupiter trojans of varying types. Though it is not set to reach its first main target, 3548 Eurybates, until 2027, it has already made flybys of the main-belt asteroids 152830 Dinkinesh and 52246 Donaldjohanson.
Planned missions
NASA's Psyche, launched in October 2023, is intended to study the large metallic asteroid of the same name, and is on track to arrive there in 2029.
ESA's Hera, launched in October 2024, is intended to study the results of the DART impact. It is expected to measure the size and morphology of the crater and the momentum transmitted by the impact, to determine the efficiency of the deflection produced by DART.
JAXA's DESTINY+ is a mission for a flyby of the Geminids meteor shower parent body 3200 Phaethon, as well as various minor bodies. Its launch is planned for 2024.
CNSA's Tianwen-2 is planned to launch in 2025. If all goes as planned, it will use solar electric propulsion to explore the co-orbital near-Earth asteroid 469219 Kamoʻoalewa and the active asteroid 311P/PanSTARRS. The spacecraft is tasked with collecting samples of the regolith of Kamoʻoalewa.
Asteroid mining
The concept of asteroid mining was proposed in the 1970s. Matt Anderson defines successful asteroid mining as "the development of a mining program that is both financially self-sustaining and profitable to its investors". It has been suggested that asteroids might be used as a source of materials that may be rare or exhausted on Earth, or as materials for constructing space habitats. Materials that are heavy and expensive to launch from Earth may someday be mined from asteroids and used for space manufacturing and construction.
As resource depletion on Earth becomes a more serious prospect, the idea of extracting valuable elements from asteroids and returning them to Earth for profit, or using space-based resources to build solar-power satellites and space habitats, becomes more attractive. Hypothetically, water processed from ice could refuel orbiting propellant depots.
From the astrobiological perspective, asteroid prospecting could provide scientific data for the search for extraterrestrial intelligence (SETI). Some astrophysicists have suggested that if advanced extraterrestrial civilizations employed asteroid mining long ago, the hallmarks of these activities might be detectable.
Threats to Earth
There is increasing interest in identifying asteroids whose orbits cross Earth's, and that could, given enough time, collide with Earth. The three most important groups of near-Earth asteroids are the Apollos, Amors, and Atens.
The near-Earth asteroid 433 Eros had been discovered as long ago as 1898, and the 1930s brought a flurry of similar discoveries. In order of discovery, these were 1221 Amor, 1862 Apollo, 2101 Adonis, and finally 69230 Hermes, which approached within 0.005 AU of Earth in 1937. Astronomers began to realize the possibilities of Earth impact.
Two events in later decades increased the alarm: the increasing acceptance of the Alvarez hypothesis that an impact event resulted in the Cretaceous–Paleogene extinction, and the 1994 observation of Comet Shoemaker-Levy 9 crashing into Jupiter. The U.S. military also declassified the information that its military satellites, built to detect nuclear explosions, had detected hundreds of upper-atmosphere impacts by objects ranging from one to ten meters across.
All of these considerations helped spur the launch of highly efficient surveys, consisting of charge-coupled device (CCD) cameras and computers directly connected to telescopes. , it was estimated that 89% to 96% of near-Earth asteroids one kilometer or larger in diameter had been discovered. , the LINEAR system alone had discovered 147,132 asteroids. Among the surveys, 19,266 near-Earth asteroids have been discovered including almost 900 more than in diameter.
In June 2018, the National Science and Technology Council warned that the United States is unprepared for an asteroid impact event, and has developed and released the "National Near-Earth Object Preparedness Strategy Action Plan" to better prepare. According to expert testimony in the United States Congress in 2013, NASA would require at least five years of preparation before a mission to intercept an asteroid could be launched.
Asteroid deflection strategies
Various collision avoidance techniques have different trade-offs with respect to metrics such as overall performance, cost, failure risks, operations, and technology readiness. There are various methods for changing the course of an asteroid or comet. These can be differentiated by attributes such as the type of mitigation (deflection or fragmentation), energy source (kinetic, electromagnetic, gravitational, solar/thermal, or nuclear), and approach strategy (interception, rendezvous, or remote station).
Strategies fall into two basic sets: fragmentation and delay. Fragmentation concentrates on rendering the impactor harmless by fragmenting it and scattering the fragments so that they miss the Earth or are small enough to burn up in the atmosphere. Delay exploits the fact that both the Earth and the impactor are in orbit. An impact occurs when both reach the same point in space at the same time, or more correctly, when some point on Earth's surface intersects the impactor's orbit when the impactor arrives. Since the Earth is approximately 12,750 km in diameter and moves at approximately 30 km per second in its orbit, it travels a distance of one planetary diameter in about 425 seconds, or slightly over seven minutes. Delaying or advancing the impactor's arrival by a time of this magnitude can, depending on the exact geometry of the impact, cause it to miss the Earth (see the sketch below).
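A back-of-the-envelope sketch of that timing argument: the script below reproduces the roughly 425-second figure and, under a deliberately simplified straight-line model in which a speed change dv shifts the arrival time by about (dv / v) x lead time, estimates the speed change needed for a given warning time. Real orbital mechanics can amplify the effect severalfold over full orbits, so these numbers are illustrative.

EARTH_DIAMETER_KM = 12_750
ORBITAL_SPEED_KM_S = 30.0
SECONDS_PER_YEAR = 3.156e7

# Time for the Earth to move one diameter: the required arrival shift.
required_shift_s = EARTH_DIAMETER_KM / ORBITAL_SPEED_KM_S
print(f"required arrival shift ~ {required_shift_s:.0f} s")  # ~425 s

# Straight-line model: dt ~ (dv / v) * lead_time, so dv = v * dt / lead_time.
for lead_years in (1, 10, 25):
    lead_s = lead_years * SECONDS_PER_YEAR
    dv_cm_s = ORBITAL_SPEED_KM_S * 1e5 * required_shift_s / lead_s
    print(f"lead time {lead_years:>2} yr -> dv ~ {dv_cm_s:.1f} cm/s")

The takeaway matches the usual rule of thumb: with a decade of warning, a speed change of only a few centimeters per second suffices, which is why early detection matters more than raw deflection power.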
"Project Icarus" was one of the first projects designed in 1967 as a contingency plan in case of collision with 1566 Icarus. The plan relied on the new Saturn V rocket, which did not make its first flight until after the report had been completed. Six Saturn V rockets would be used, each launched at variable intervals from months to hours away from impact. Each rocket was to be fitted with a single 100-megaton nuclear warhead as well as a modified Apollo Service Module and uncrewed Apollo Command Module for guidance to the target. The warheads would be detonated 30 meters from the surface, deflecting or partially destroying the asteroid. Depending on the subsequent impacts on the course or the destruction of the asteroid, later missions would be modified or cancelled as needed. The "last-ditch" launch of the sixth rocket would be 18 hours prior to impact.
Fiction
Asteroids and the asteroid belt are a staple of science fiction stories. Asteroids play several potential roles in science fiction: as places human beings might colonize, resources for extracting minerals, hazards encountered by spacecraft traveling between two other points, and as a threat to life on Earth or other inhabited planets, dwarf planets, and natural satellites by potential impact.
|
;Minor planets;Solar System
|
https://en.wikipedia.org/wiki/Apple%20Inc.
|
Apple Inc. is an American multinational corporation and technology company headquartered in Cupertino, California, in Silicon Valley. It is best known for its consumer electronics, software, and services. Founded in 1976 as Apple Computer Company by Steve Jobs, Steve Wozniak and Ronald Wayne, the company was incorporated by Jobs and Wozniak as Apple Computer, Inc. the following year. It was renamed Apple Inc. in 2007 as the company had expanded its focus from computers to consumer electronics. Apple is the largest technology company by revenue, with billion in the 2024 fiscal year.
The company was founded to produce and market Wozniak's Apple I personal computer. Its second computer, the Apple II, became a best seller as one of the first mass-produced microcomputers. Apple introduced the Lisa in 1983 and the Macintosh in 1984, among the first computers to use a graphical user interface and a mouse. By 1985, internal company problems led to Jobs leaving to form NeXT, Inc., and Wozniak withdrawing to other ventures; John Sculley served as CEO for the next decade. In the 1990s, Apple lost considerable market share in the personal computer industry to the lower-priced Wintel duopoly of the Microsoft Windows operating system on Intel-powered PC clones. In 1997, Apple was weeks away from bankruptcy. To resolve its failed operating system strategy, it bought NeXT, effectively bringing Jobs back to the company; he guided Apple back to profitability over the next decade with the introduction of the critically acclaimed iMac, iPod, iPhone, and iPad devices, as well as the iTunes Store, the "Think different" advertising campaign, and the Apple Store retail chain. These moves elevated Apple to be consistently one of the world's most valuable brands since about 2010. Jobs resigned in 2011 for health reasons and died two months later; he was succeeded as CEO by Tim Cook.
Apple's product lineup includes portable and home hardware such as the iPhone, iPad, Apple Watch, Mac, and Apple TV; operating systems such as iOS, iPadOS, and macOS; and various software and services including Apple Pay, iCloud, and multimedia streaming services like Apple Music and Apple TV+. Apple is one of the Big Five American information technology companies; for the most part since 2011, Apple has been the world's largest company by market capitalization, and, , is the largest manufacturing company by revenue, the fourth-largest personal computer vendor by unit sales, the largest vendor of tablet computers, and the largest vendor of mobile phones in the world. Apple became the first publicly traded U.S. company to be valued at over $1 trillion in 2018, and, , is valued at just over $3.74 trillion. Apple is the largest company on the Nasdaq, where it trades under the ticker symbol "AAPL".
Apple has received criticism regarding its contractors' labor practices, its relationship with trade unions, its environmental practices, and its business ethics, including anti-competitive practices and materials sourcing. Nevertheless, the company has a large following and enjoys a high level of brand loyalty.
History
1976–1980: Founding and incorporation
Apple Computer Company was founded on April 1, 1976, by Steve Jobs, Steve Wozniak, and Ronald Wayne as a partnership. The company's first product was the Apple I, a computer designed and hand-built entirely by Wozniak. To finance its creation, Jobs sold his Volkswagen Bus and Wozniak sold his HP-65 calculator. Neither received the full selling price, but in total they earned . Wozniak debuted the first prototype at the Homebrew Computer Club in July 1976. The Apple I was sold as a motherboard with CPU, RAM, and basic textual-video chips—a base kit concept not yet marketed as a complete personal computer. It was priced soon after debut at . Wozniak later said he was unaware of the coincidental mark of the beast in the number 666, and that he came up with the price because he liked "repeating digits".
Apple Computer, Inc. was incorporated in Cupertino, California, on January 3, 1977, without Wayne, who had left and sold his share of the company back to Jobs and Wozniak for $800 only twelve days after having co-founded it. Multimillionaire Mike Markkula provided essential business expertise and funding of to Jobs and Wozniak during the incorporation of Apple. During the first five years of operations, revenue grew exponentially, doubling about every four months. Between September 1977 and September 1980, yearly sales grew from $775,000 to million, an average annual growth rate of 533%.
The Apple II, also designed by Wozniak, was introduced on April 16, 1977, at the first West Coast Computer Faire. It differed from its major rivals, the TRS-80 and Commodore PET, in its character cell-based color graphics and open architecture. The Apple I and early Apple II models used ordinary audio cassette tapes as storage devices, which were superseded in 1978 by the -inch floppy disk drive and interface called the Disk II.
The Apple II was chosen to be the desktop platform for the first killer application of the business world: VisiCalc, a spreadsheet program released in 1979. VisiCalc created a business market for the Apple II and gave home users an additional reason to buy one: compatibility with the office. Even so, Apple II market share remained behind home computers made by competitors such as Atari, Commodore, and Tandy.
On December 12, 1980, Apple went public with an initial public offering (IPO) on the fully electronic NASDAQ Stock Market, selling 4.6 million shares at $22 per share ($0.10 per share when adjusted for stock splits), generating over $100 million, more capital than any IPO since Ford Motor Company in 1956. By the end of the day, around 300 millionaires had been created, including Jobs and Wozniak, from a stock price of $29 per share and a market cap of $1.778 billion.
1980–1990: Success with Macintosh
In December 1979, Steve Jobs and Apple employees, including Jef Raskin, visited Xerox PARC, where they observed the Xerox Alto, featuring a graphical user interface (GUI). Apple had negotiated access to PARC's technology in exchange for Xerox's option to buy Apple shares at a preferential rate. The visit influenced Jobs to implement a GUI in Apple's products, starting with the Apple Lisa. Despite being a pioneering mass-marketed GUI computer, the Lisa suffered from high costs and limited software options, leading to commercial failure.
Jobs, angered by being pushed off the Lisa team, took over the company's Macintosh division. Wozniak and Raskin had envisioned the Macintosh as a low-cost computer with a text-based interface like the Apple II, but a plane crash in 1981 forced Wozniak to step back from the project. Jobs quickly redefined the Macintosh as a graphical system that would be cheaper than the Lisa, undercutting his former division. Jobs was also hostile to the Apple II division, which at the time, generated most of the company's revenue.
In 1984, Apple launched the Macintosh, the first personal computer without a bundled programming language. Its debut was marked by "1984", a million television advertisement directed by Ridley Scott that aired during the third quarter of Super Bowl XVIII on January 22, 1984. The ad was hailed as a watershed event for Apple's success and was called a "masterpiece" by CNN and one of the greatest TV advertisements of all time by TV Guide.
The advertisement created great interest in the Macintosh, and sales were initially good, but they began to taper off dramatically after the first three months as reviews started to come in. Jobs had required of RAM, which limited the machine's speed and software, in pursuit of a projected price point of . The Macintosh shipped for , a price panned by critics given its slow performance. In early 1985, this sales slump triggered a power struggle between Steve Jobs and CEO John Sculley, who had been hired away from Pepsi two years earlier by Jobs's pitch: "Do you want to sell sugar water for the rest of your life or come with me and change the world?" Sculley removed Jobs as the head of the Macintosh division, with unanimous support from the Apple board of directors.
The board of directors instructed Sculley to contain Jobs and his ability to launch expensive forays into untested products. Rather than submit to Sculley's direction, Jobs attempted to oust him from leadership. Jean-Louis Gassée informed Sculley that Jobs had been attempting to organize a boardroom coup, and called an emergency meeting at which Apple's executive staff sided with Sculley, and stripped Jobs of all operational duties. Jobs resigned from Apple in September 1985 and took several Apple employees with him to found NeXT. Wozniak had also quit his active employment at Apple earlier in 1985 to pursue other ventures, expressing his frustration with Apple's treatment of the Apple II division and stating that the company had "been going in the wrong direction for the last five years". Wozniak remained employed by Apple as a representative, receiving a stipend estimated to be $120,000 per year. Jobs and Wozniak remained Apple shareholders following their departures.
After the departures of Jobs and Wozniak in 1985, Sculley launched the Macintosh 512K that year with quadruple the RAM, and introduced the LaserWriter, the first reasonably priced PostScript laser printer. PageMaker, an early desktop publishing application taking advantage of the PostScript language, was also released by Aldus Corporation in July 1985. It has been suggested that the combination of Macintosh, LaserWriter, and PageMaker was responsible for the creation of the desktop publishing market.
This dominant position in the desktop publishing market allowed the company to focus on higher price points, the so-called "high-right policy" named for the position on a chart of price vs. profits. Newer models selling at higher price points offered higher profit margin, and appeared to have no effect on total sales as power users snapped up every increase in speed. Although some worried about pricing themselves out of the market, the high-right policy was in full force by the mid-1980s, due to Jean-Louis Gassée's slogan of "fifty-five or die", referring to the 55% profit margins of the Macintosh II.
This policy began to backfire late in the decade as desktop publishing programs appeared on IBM PC compatibles with some of the same functionality of the Macintosh at far lower price points. The company lost its dominant position in the desktop publishing market and estranged many of its original consumer customer base who could no longer afford Apple products. The Christmas season of 1989 was the first in the company's history to have declining sales, which led to a 20% drop in Apple's stock price. During this period, the relationship between Sculley and Gassée deteriorated, leading Sculley to effectively demote Gassée in January 1990 by appointing Michael Spindler as the chief operating officer. Gassée left the company later that year to set up a rival, Be Inc.
1990–1997: Decline and restructuring
The company pivoted strategy and, in October 1990, introduced three lower-cost models: the Macintosh Classic, the Macintosh LC, and the Macintosh IIsi, all of which generated significant sales due to pent-up demand. In 1991, Apple introduced the hugely successful PowerBook with a design that set the current shape for almost all modern laptops. The same year, Apple introduced System 7, a major upgrade to the Macintosh operating system, adding color to the interface and introducing new networking capabilities.
The success of the lower-cost Macs and the PowerBook brought increasing revenue. For some time, Apple was doing very well, introducing fresh new products at increasing profits. The magazine MacAddict named the period between 1989 and 1991 as the "first golden age" of the Macintosh.
The success of lower-cost consumer Macs, especially the LC, cannibalized higher-priced machines. To address this, management introduced several new brands, selling largely identical machines at different price points for different markets: the high-end Quadra series, the mid-range Centris series, and the consumer-marketed Performa series. The proliferation of models led to significant consumer confusion.
The Apple II series was discontinued in 1993, the Apple IIe being the last model to go. The line was expensive to produce, and the company decided it was still absorbing sales from lower-cost Macintosh models. After the launch of the LC, Apple had encouraged developers to create applications for the Macintosh rather than the Apple II, and authorized salespersons to redirect consumers from the Apple II toward the Macintosh.
Apple experimented with several other unsuccessful consumer targeted products during the 1990s, including QuickTake digital cameras, PowerCD portable CD audio players, speakers, the Pippin video game console, the eWorld online service, and Apple Interactive Television Box. Enormous resources were invested in the problematic Newton tablet division, based on John Sculley's unrealistic market forecasts.
Throughout this period, Microsoft continued to gain market share with Windows by focusing on delivering software to inexpensive personal computers, while Apple was delivering a richly engineered but expensive experience. Apple relied on high profit margins and never developed a clear response; instead it sued Microsoft for making a GUI similar to the Lisa in Apple Computer, Inc. v. Microsoft Corp. The lawsuit dragged on for years before it was finally dismissed. The major product flops and the rapid loss of market share to Windows sullied Apple's reputation, and in 1993 Sculley was replaced as CEO by Michael Spindler.
Under Spindler, Apple, IBM, and Motorola formed the AIM alliance in 1994 to create a new computing platform (the PowerPC Reference Platform or PReP), with IBM and Motorola hardware coupled with Apple software. The AIM alliance hoped that PReP's performance and Apple's software would leave the PC far behind and thus counter the dominance of Windows. That year, Apple introduced the Power Macintosh, the first of many computers with Motorola's PowerPC processor.
In the wake of the alliance, Apple opened up to the idea of allowing Motorola and other companies to build Macintosh clones. Over the next two years, 75 distinct Macintosh clone models were introduced. However, by 1996, Apple executives were worried that the clones were cannibalizing sales of its own high-end computers, where profit margins were highest.
In 1996, Spindler was replaced as CEO by Gil Amelio, who was hired for his reputation as a corporate rehabilitator. Amelio made deep changes, including extensive layoffs and cost-cutting.
This period was also marked by numerous failed attempts to modernize the Macintosh operating system. The original Macintosh operating system (System 1) was not built for multitasking (running several applications at once). The company attempted to correct this by introducing cooperative multitasking in System 5, but still decided it needed a more modern approach. This led to the Pink project in 1988, A/UX that same year, Copland in 1994, and an evaluation of the purchase of BeOS in 1996. Talks with Be stalled when its CEO, former Apple executive Jean-Louis Gassée, demanded $300 million against Apple's $125 million offer. Only weeks away from bankruptcy, Apple's board preferred NeXTSTEP and purchased NeXT in late 1996 for $400 million, retaining Steve Jobs.
1997–2007: Return to profitability
The NeXT acquisition was finalized on February 9, 1997, and the board brought Jobs back to Apple as an advisor. On July 9, 1997, Jobs staged a boardroom coup that resulted in Amelio's resignation after overseeing a three-year record-low stock price and crippling financial losses. The board named Jobs interim CEO, and he immediately reviewed the product lineup, canceling 70% of the company's models, eliminating 3,000 jobs, and paring the lineup down to the core of its computer offerings.
The next month, in August 1997, Steve Jobs convinced Microsoft to make a $150 million investment in Apple and a commitment to continue developing Mac software. This was seen as an "antitrust insurance policy" for Microsoft, which had recently settled with the Department of Justice over anti-competitive practices in the United States v. Microsoft Corp. case. Around then, Jobs donated Apple's internal library and archives to Stanford University, to focus more on the present and the future than the past. He ended the Mac clone deals and, in September 1997, purchased the largest clone maker, Power Computing. On November 10, 1997, the Apple Store website launched, tied to a new build-to-order manufacturing model similar to the one behind PC manufacturer Dell's success. The moves paid off for Jobs; at the end of his first year as CEO, the company had a $309 million profit.
On May 6, 1998, Apple introduced a new all-in-one computer reminiscent of the original Macintosh: the iMac. The iMac was a huge success, with 800,000 units sold in its first five months, and ushered in major shifts in the industry by abandoning legacy technologies like the -inch diskette, being an early adopter of the USB connector, and coming pre-installed with Internet connectivity (the "i" in iMac) via Ethernet and a dial-up modem. Its striking teardrop shape and translucent materials were designed by Jonathan Ive, who had been hired by Amelio, and who collaborated with Jobs for more than a decade to reshape Apple's product design.
A little more than a year later on July 21, 1999, Apple introduced the iBook consumer laptop. It culminated Jobs's strategy to produce only four products: refined versions of the Power Macintosh G3 desktop and PowerBook G3 laptop for professionals, and the iMac desktop and iBook laptop for consumers. Jobs said the small product line allowed for a greater focus on quality and innovation.
Around then, Apple also completed numerous acquisitions to create a portfolio of digital media production software for both professionals and consumers. Apple acquired Macromedia's Key Grip digital video editing software project, which was launched as Final Cut Pro in April 1999. Key Grip's development also led to Apple's release of the consumer video-editing product iMovie in October 1999. Apple acquired the German company Astarte in April 2000, which had developed the DVD authoring software DVDirector, which Apple repackaged as the professional-oriented DVD Studio Pro, and reused its technology to create iDVD for the consumer market. In 2000, Apple purchased the SoundJam MP audio player software from Casady & Greene. Apple renamed the program iTunes, and simplified the user interface and added CD burning.
In 2001, Apple changed course with three announcements. First, on March 24, 2001, Apple announced the release of a new modern operating system, Mac OS X. This was after numerous failed attempts in the early 1990s, and several years of development. Mac OS X is based on NeXTSTEP, OpenStep, and BSD Unix, to combine the stability, reliability, and security of Unix with the ease of use of an overhauled user interface. Second, in May 2001, the first two Apple Store retail locations opened in Virginia and California, offering an improved presentation of the company's products. At the time, many speculated that the stores would fail, but they became highly successful, and the first of more than 500 stores around the world. Third, on October 23, 2001, the iPod portable digital audio player debuted. The product was first sold on November 10, 2001, and was extremely successful, with over 100 million units sold within six years.
In 2003, the iTunes Store was introduced with music downloads for 99¢ a song and iPod integration. It quickly became the market leader in online music services, with over 5 billion downloads by June 19, 2008. Two years later, the iTunes Store was the world's largest music retailer.
In 2002, Apple purchased Nothing Real for its advanced digital compositing application Shake, and Emagic for the music productivity application Logic. The purchase of Emagic made Apple the first computer manufacturer to own a music software company. The acquisition was followed by the development of Apple's consumer-level GarageBand application. The release of iPhoto that year completed the iLife suite.
At the Worldwide Developers Conference keynote address on June 6, 2005, Jobs announced that Apple would move away from PowerPC processors, and the Mac would transition to Intel processors in 2006. On January 10, 2006, the new MacBook Pro and iMac became the first Apple computers to use Intel's Core Duo CPU. By August 7, 2006, Apple made the transition to Intel chips for the entire Mac product line—over one year sooner than announced. The Power Mac, iBook, and PowerBook brands were retired during the transition; the Mac Pro, MacBook, and MacBook Pro became their respective successors. Apple also introduced Boot Camp in 2006 to help users install Windows XP or Windows Vista on their Intel Macs alongside Mac OS X.
Apple's success during this period was evident in its stock price. Between early 2003 and 2006, the price of Apple's stock increased more than tenfold, from around $6 per share (split-adjusted) to over $80. When Apple surpassed Dell's market cap in January 2006, Jobs sent an email to Apple employees saying Dell's CEO Michael Dell should eat his words. Nine years prior, Dell had said that if he ran Apple he would "shut it down and give the money back to the shareholders".
2007–2011: Success with mobile devices
During his keynote speech at the Macworld Expo on January 9, 2007, Jobs announced the renaming of Apple Computer, Inc. to Apple Inc., because the company had broadened its focus from computers to consumer electronics. This event also saw the announcement of the iPhone and the Apple TV. The company sold 270,000 first-generation iPhones during the first 30 hours of sales, and the device was called "a game changer for the industry".
In an article posted on Apple's website on February 6, 2007, Jobs wrote that Apple would be willing to sell music on the iTunes Store without digital rights management, thereby allowing tracks to be played on third-party players if record labels would agree to drop the technology. On April 2, 2007, Apple and EMI jointly announced the removal of DRM technology from EMI's catalog in the iTunes Store, effective in May 2007. Other record labels eventually followed suit and Apple published a press release in January 2009 to announce that all songs on the iTunes Store are available without their FairPlay DRM.
In July 2008, Apple launched the App Store to sell third-party applications for the iPhone and iPod Touch. Within a month, the store sold 60 million applications and registered an average daily revenue of $1 million, with Jobs speculating in August 2008 that the App Store could become a billion-dollar business for Apple. By October 2008, Apple was the third-largest mobile handset supplier in the world due to the popularity of the iPhone.
On January 14, 2009, Jobs announced in an internal memo that he would be taking a six-month medical leave of absence from Apple until the end of June 2009 and would spend the time focusing on his health. In the email, Jobs stated that "the curiosity over my personal health continues to be a distraction not only for me and my family, but everyone else at Apple as well", and explained that the break would allow the company "to focus on delivering extraordinary products". Though Jobs was absent, Apple recorded its best non-holiday quarter (Q1 FY 2009) during the recession, with revenue of $8.16 billion and profit of $1.21 billion.
After years of speculation and multiple rumored "leaks", Apple unveiled a large-screen, tablet-like media device known as the iPad on January 27, 2010. The iPad ran the same touch-based operating system as the iPhone, and all iPhone apps were compatible with the iPad, giving the iPad a large app catalog at launch despite the very little development time before release. On April 3, 2010, the iPad was launched in the U.S. It sold more than 300,000 units on its first day and 500,000 by the end of the first week. In May 2010, Apple's market cap exceeded that of competitor Microsoft for the first time since 1989.
In June 2010, Apple released the iPhone 4, which introduced video calling using FaceTime, multitasking, and a new design with an exposed stainless steel frame as the phone's antenna system. Later that year, Apple again refreshed the iPod line by introducing a multi-touch iPod Nano, an iPod Touch with FaceTime, and an iPod Shuffle that brought back the clickwheel buttons of earlier generations. It also introduced the smaller, cheaper second-generation Apple TV which allowed the rental of movies and shows.
On January 17, 2011, Jobs announced in an internal Apple memo that he would take another medical leave of absence for an indefinite period to allow him to focus on his health. Chief operating officer Tim Cook assumed Jobs's day-to-day operations at Apple, although Jobs would still remain "involved in major strategic decisions". Apple became the most valuable consumer-facing brand in the world. In June 2011, Jobs made a surprise appearance on stage and unveiled iCloud, an online storage and syncing service for music, photos, files, and software, which replaced MobileMe, Apple's previous attempt at content syncing. This was the last product launch Jobs attended before his death.
On August 24, 2011, Jobs resigned his position as CEO of Apple. He was replaced by Cook and Jobs became Apple's chairman. Apple did not have a chairman at the time and instead had two co-lead directors—Andrea Jung and Arthur D. Levinson—who continued with those titles until Levinson replaced Jobs as chairman of the board in November after Jobs's death.
2011–present: Post-Jobs era, Tim Cook
On October 5, 2011, Steve Jobs died, marking the end of an era for Apple. The next major product announcement came on January 19, 2012, when Apple's Phil Schiller introduced iBooks Textbooks for iOS and iBooks Author for Mac OS X in New York City. Jobs had stated in the biography Steve Jobs that he wanted to reinvent the textbook industry and education.
From 2011 to 2012, Apple released the iPhone 4s and iPhone 5, which featured improved cameras, an intelligent software assistant named Siri, and cloud-synced data with iCloud; the third- and fourth-generation iPads, which featured Retina displays; and the iPad Mini, which featured a 7.9-inch screen in contrast to the iPad's 9.7-inch screen. These launches were successful, with the iPhone 5 (released September 21, 2012) becoming Apple's biggest iPhone launch with over two million pre-orders and sales of three million iPads in three days following the launch of the iPad Mini and fourth-generation iPad (released November 3, 2012). Apple also released a third-generation 13-inch MacBook Pro with a Retina display and new iMac and Mac Mini computers.
On August 20, 2012, Apple's rising stock price increased the company's market capitalization to a then-record $624 billion, beating the non-inflation-adjusted record previously set by Microsoft in 1999. On August 24, 2012, a US jury ruled that Samsung should pay Apple $1.05 billion (£665m) in damages in an intellectual property lawsuit; Samsung appealed, the award was reduced by $450 million, and Samsung's request for a new trial was granted. On November 10, 2012, Apple confirmed a global settlement that dismissed all existing lawsuits between Apple and HTC up to that date, in favor of a ten-year license agreement for current and future patents between the two companies. It was predicted that Apple would make million per year from the deal with HTC.
In May 2014, Apple confirmed its intent to acquire Dr. Dre and Jimmy Iovine's audio company Beats Electronics—producer of the "Beats by Dr. Dre" line of headphones and speaker products, and operator of the music streaming service Beats Music—for billion, and to sell their products through Apple's retail outlets and resellers. Iovine believed that Beats had always "belonged" with Apple, as the company modeled itself after Apple's "unmatched ability to marry culture and technology". The acquisition was the largest purchase in Apple's history.
During a press event on September 9, 2014, Apple introduced a smartwatch called the Apple Watch. Initially, Apple marketed the device as a fashion accessory and a complement to the iPhone, that would allow people to look at their smartphones less. Over time, the company has focused on developing health and fitness-oriented features on the watch, in an effort to compete with dedicated activity trackers. In January 2016, Apple announced that over one billion Apple devices were in active use worldwide.
On June 6, 2016, Fortune released its Fortune 500 list of companies ranked by revenue. Apple was listed as the top tech company for the trailing 2015 fiscal year, ranking third overall with billion in revenue, a movement upward of two spots from the previous year's list.
In June 2017, Apple announced the HomePod, its smart speaker aimed to compete against Sonos, Google Home, and Amazon Echo. Toward the end of the year, TechCrunch reported that Apple was acquiring Shazam, a company that had introduced its products at WWDC and specializes in music, TV, film, and advertising recognition. The acquisition was confirmed a few days later, reportedly costing Apple million, with media reports that the purchase looked like a move to acquire data and tools to bolster the Apple Music streaming service. The purchase was approved by the European Union in September 2018.
Also in June 2017, Apple appointed Jamie Erlicht and Zack Van Amburg to head the newly formed worldwide video unit. In November 2017, Apple announced it was branching out into original scripted programming: a drama series starring Jennifer Aniston and Reese Witherspoon, and a reboot of the anthology series Amazing Stories with Steven Spielberg. In June 2018, Apple signed the Writers Guild of America's minimum basic agreement and Oprah Winfrey to a multi-year content partnership. Additional partnerships for original series include Sesame Workshop and DHX Media and its subsidiary Peanuts Worldwide, and a partnership with A24 to create original films.
During the Apple Special Event in September 2017, the AirPower wireless charger was announced alongside the iPhone X, iPhone 8, and Watch Series 3. The AirPower was intended to wirelessly charge multiple devices, simultaneously. Though initially set to release in early 2018, the AirPower would be canceled in March 2019, marking the first cancellation of a device under Cook's leadership. On August 19, 2020, Apple's share price briefly topped $467.77, making it the first US company with a market capitalization of trillion.
During its annual WWDC keynote speech on June 22, 2020, Apple announced it would move away from Intel processors, and the Mac would transition to processors developed in-house. The announcement was expected by industry analysts, and it has been noted that Macs featuring Apple's processors would allow for big increases in performance over current Intel-based models. On November 10, 2020, the MacBook Air, MacBook Pro, and the Mac Mini became the first Macs powered by an Apple-designed processor, the Apple M1.
In April 2022, it was reported that Samsung Electro-Mechanics would be collaborating with Apple on its M2 chip instead of LG Innotek. Developer logs showed that at least nine Mac models with four different M2 chips were being tested.
The Wall Street Journal reported that Apple's effort to develop its own chips left it better prepared to deal with the semiconductor shortage that emerged during the COVID-19 pandemic, which led to increased profitability, with sales of M1-based Mac computers rising sharply in 2020 and 2021. It also inspired other companies like Tesla, Amazon, and Meta Platforms to pursue a similar path.
In April 2022, Apple opened an online store that allowed anyone in the U.S. to view repair manuals and order replacement parts for specific recent iPhones, although the difference in cost between this method and official repair is anticipated to be minimal.
In May 2022, a trademark was filed for RealityOS, an operating system reportedly intended for virtual and augmented reality headsets, first mentioned in 2017. According to Bloomberg, the headset may come out in 2023. Further insider reports state that the device uses iris scanning for payment confirmation and signing into accounts.
On June 18, 2022, the Apple Store in Towson, Maryland, became the first to unionize in the U.S., with the employees voting to join the International Association of Machinists and Aerospace Workers.
On July 7, 2022, Apple added Lockdown Mode to macOS 13 and iOS 16, as a response to the earlier Pegasus revelations; the mode increases security protections for high-risk users against targeted zero-day malware.
Apple launched a buy now, pay later service called 'Apple Pay Later' for its Apple Wallet users in March 2023. The program allows users to apply for loans between $50 and $1,000 for online or in-app purchases, repaying them in four installments spread over six weeks without any interest or fees.
In November 2023, Apple agreed to a $25 million settlement in a U.S. Department of Justice case alleging that Apple discriminated against U.S. citizens in hiring: the company created jobs that were not listed online and required paper applications, while advertising these jobs to foreign workers as part of recruitment for the PERM program.
In January 2024, Apple announced compliance with the European Union's competition law, with major changes to the App Store and other services effective March 7. This enables iOS users in the 27-nation bloc to use alternative app stores and alternative payment methods within apps, and adds a menu in Safari for downloading alternative browsers, such as Chrome or Firefox.
In June 2024, Apple introduced Apple Intelligence to incorporate on-device artificial intelligence capabilities.
On November 1, 2024, Apple announced its acquisition of Pixelmator, a company known for its image editing applications for iPhone and Mac. Apple had previously showcased Pixelmator's apps during its product launches, including naming Pixelmator Pro its Mac App of the Year in 2018 for its innovative use of machine learning and AI. In the announcement, Pixelmator stated that there would be no significant changes to its existing apps following the acquisition.
On December 31, 2024, a preliminary settlement was filed in federal court in Oakland, California, in a lawsuit accusing Apple of unlawfully recording private conversations through unintentional Siri activations and sharing them with third parties, including advertisers. Apple agreed to a $95 million cash settlement to resolve the lawsuit, which alleged that its Siri assistant violated user privacy. While denying any wrongdoing, Apple settled the case, allowing affected users to potentially claim up to $20 per device. Attorneys sought $28.5 million in fees from the settlement fund.
Products
Since the company's founding and into the early 2000s, Apple primarily sold computers, which have been marketed as Macintosh since the mid-1980s. Since then, the company has expanded its product categories to include various portable devices, starting with the now discontinued iPod (2001), and later the iPhone (2007) and iPad (2010). Apple also sells several other products that it categorizes as "Wearables, Home and Accessories", such as the Apple Watch, Apple TV, AirPods, HomePod, and Apple Vision Pro.
Apple devices have been praised for forming a cohesive ecosystem when used with other Apple products, but they have been criticized for functioning less well, and with fewer features, when paired with competitors' devices, relying instead on Apple's proprietary features, software, and services to work as intended, an approach often described as a "walled garden". As of 2023, there are over 2 billion Apple devices in active use worldwide.
Mac
Mac, which is short for Macintosh—its official name until 1999—is Apple's line of personal computers that use the company's proprietary macOS operating system. Personal computers were Apple's original business line, but they account for only about eight percent of the company's revenue.
There are six Mac computer families in production:
iMac: Consumer all-in-one desktop computer, introduced in 1998.
Mac Mini: Consumer sub-desktop computer, introduced in 2005.
MacBook Pro: Professional notebook, introduced in 2006.
Mac Pro: Professional workstation, introduced in 2006.
MacBook Air: Consumer ultra-thin notebook, introduced in 2008.
Mac Studio: Professional small form-factor workstation, introduced in 2022.
Macs use Apple silicon chips, run the macOS operating system, and include Apple software such as the Safari web browser, iMovie for home movie editing, GarageBand for music creation, and the iWork productivity suite. Apple also sells pro apps: Final Cut Pro for video production, Logic Pro for musicians and producers, and Xcode for software developers. Apple also sells a variety of accessories for Macs, including the Pro Display XDR, Apple Studio Display, Magic Mouse, Magic Trackpad, and Magic Keyboard.
iPhone
The iPhone is Apple's line of smartphones, which run the iOS operating system. The first iPhone was unveiled by Steve Jobs on January 9, 2007. Since then, new iPhone models have been released every year. When it was introduced, its multi-touch screen was described as "revolutionary" and a "game-changer" for the mobile phone industry. The device has been credited with creating the app economy.
iOS is one of the two major smartphone platforms in the world, alongside Android. The iPhone has generated large profits for the company, and is credited with helping to make Apple one of the world's most valuable publicly traded companies. , the iPhone accounts for nearly half of the company's revenue.
iPad
The iPad is Apple's line of tablets which run iPadOS. The first-generation iPad was announced on January 27, 2010. The iPad is mainly marketed for consuming multimedia, creating art, working on documents, videoconferencing, and playing games. The iPad lineup consists of several base iPad models, and the smaller iPad Mini, upgraded iPad Air, and high-end iPad Pro. Apple has consistently improved the iPad's performance, with the iPad Pro adopting the same M1 and M2 chips as the Mac; but the iPad still receives criticism for its limited OS.
Apple has sold more than 500 million iPads, though sales peaked in 2013. The iPad still remains the most popular tablet computer by sales , and accounted for seven percent of the company's revenue . Apple sells several iPad accessories, including the Apple Pencil, Smart Keyboard, Smart Keyboard Folio, Magic Keyboard, and several adapters.
Other products
Apple makes several other products that it categorizes as "Wearables, Home and Accessories". These products include the AirPods line of wireless headphones, Apple TV digital media players, Apple Watch smartwatches, Beats headphones, HomePod smart speakers, and the Vision Pro mixed reality headset. , this broad line of products comprises about ten percent of the company's revenues.
Services
Apple offers a broad line of services, including advertising in the App Store and Apple News app, the AppleCare+ extended warranty plan, the iCloud+ cloud-based data storage service, payment services through the Apple Card credit card and the Apple Pay processing platform, digital content services including Apple Books, Apple Fitness+, Apple Music, Apple News+, Apple TV+, and the iTunes Store. , services comprise about 26% of the company's revenue. In 2019, Apple announced it would be making a concerted effort to expand its service revenues.
Marketing
Branding
According to Steve Jobs, the company's name was inspired by his visit to an apple farm while on a fruitarian diet. Apple's first logo, designed by Ron Wayne, depicts Sir Isaac Newton sitting under an apple tree. It was almost immediately replaced by Rob Janoff's "rainbow Apple", the now-familiar rainbow-colored silhouette of an apple with a bite taken out of it. This logo has been erroneously referred to as a tribute to Alan Turing, with the bite mark a reference to his method of suicide.
On August 27, 1999, Apple officially dropped the rainbow scheme and began to use monochromatic logos nearly identical in shape to the previous rainbow incarnation. An Aqua-themed version of the monochrome logo was used from 1998 until 2003, and a glass-themed version was used from 2007 until 2013.
Apple evangelists were actively engaged by the company at one time, but this was after the phenomenon had already been firmly established. Apple evangelist Guy Kawasaki has called the brand fanaticism "something that was stumbled upon", while Ive claimed in 2014 that "people have an incredibly personal relationship" with Apple's products.
Fortune magazine named Apple the most admired company in the United States in 2008, and in the world from 2008 to 2012. On September 30, 2013, Apple surpassed Coca-Cola to become the world's most valuable brand in the Omnicom Group's "Best Global Brands" report. Boston Consulting Group has ranked Apple as the world's most innovative brand every year . 1.65 billion Apple products were in active use; in February 2023, that number exceeded 2 billion devices. In 2023, the World Intellectual Property Organization (WIPO)'s Madrid Yearly Review ranked Apple 10th in the world by number of trademark applications filed under the Madrid System, with 74 applications submitted during 2023.
Apple was ranked the No. 3 company in the world in the 2024 Fortune 500 list.
Advertising
Apple's first slogan, "Byte into an Apple", was coined in the late 1970s. From 1997 to 2002, the slogan "Think different" was used in advertising campaigns, and is still closely associated with Apple. Apple also has slogans for specific product lines—for example, "iThink, therefore iMac" was used in 1998 to promote the iMac, and "Say hello to iPhone" has been used in iPhone advertisements. "Hello" was also used to introduce the original Macintosh, Newton, iMac ("hello (again)"), and iPod.
From the introduction of the Macintosh in 1984, with the 1984 Super Bowl advertisement to the more modern Get a Mac adverts, Apple has been recognized for its efforts toward effective advertising and marketing for its products. However, claims made by later campaigns were criticized, particularly the 2005 Power Mac ads. Apple's product advertisements gained significant attention as a result of their eye-popping graphics and catchy tunes. Musicians who benefited from an improved profile as a result of their songs being included on Apple advertisements include Canadian singer Feist with the song "1234" and Yael Naïm with the song "New Soul".
Stores
The first two Apple Stores opened in May 2001 under then-CEO Steve Jobs, after years of attempting but failing store-within-a-store concepts. Seeing a need for improved retail presentation of the company's products, Jobs had begun an effort in 1997 to revamp the retail program and build a better relationship with consumers, relaunching Apple's online store that year and hiring Ron Johnson in 2000. The media initially speculated that Apple would fail, but its stores were highly successful, bypassing the sales numbers of competing nearby stores and within three years reaching US$1 billion in annual sales, the fastest retailer in history to do so.
Over the years, Apple has expanded the number of retail locations and its geographical coverage, reaching 499 stores across 22 countries worldwide. Strong product sales have placed Apple among the top-tier retail stores, with sales over $16 billion globally in 2011. Apple Stores underwent a period of significant redesign beginning in May 2016. This redesign included physical changes to the stores, such as open spaces and re-branded rooms, and changes in function to facilitate interaction between consumers and professionals.
Many Apple Stores are located inside shopping malls, but Apple has also built several stand-alone "flagship" stores in high-profile locations. It has been granted design patents and received architectural awards for its stores' designs and construction, specifically for its use of glass staircases and cubes. The success of Apple Stores has had significant influence over other consumer electronics retailers, who have lost traffic, control and profits due to a perceived higher quality of service and products at Apple Stores. Due to the popularity of the brand, Apple receives a large number of job applications, many of which come from young workers. Although Apple Store employees receive above-average pay, are offered money toward education and health care, and receive product discounts, there are limited or no paths of career advancement.
Market power
On March 16, 2020, France fined Apple €1.1 billion for colluding with two wholesalers to stifle competition and keep prices high by handicapping independent resellers. The arrangement created aligned prices for Apple products such as iPads and personal computers for about half the French retail market. According to the French regulators, the abuses occurred between 2005 and 2017 but were first discovered after a complaint by an independent reseller, eBizcuss, in 2012.
On August 13, 2020, Epic Games, the maker of the popular game Fortnite, sued both Apple and Google after Fortnite was removed from their app stores. The lawsuits came after Apple and Google blocked the game for introducing a direct payment system that bypassed the fees both companies imposed. In September 2020, Epic Games founded the Coalition for App Fairness together with thirteen other companies, which aims for better conditions for the inclusion of apps in app stores. In December 2020, Facebook agreed to assist Epic in its legal battle against Apple by providing materials and documents; Facebook stated that it would not participate directly in the lawsuit, although it did commit to helping with the discovery of evidence for the 2021 trial. In the months prior to the agreement, Facebook had been feuding with Apple over the prices of paid apps and privacy rule changes. Commenting on the full-page ads Facebook placed in various newspapers in December 2020, Facebook's head of ad products Dan Levy said, "this is not really about privacy for them, this is about an attack on personalized ads and the consequences it's going to have on small-business owners."
Privacy
Apple has publicly taken a pro-privacy stance, actively making privacy-conscious features and settings part of its conferences, promotional campaigns, and public image. With its iOS 8 mobile operating system in 2014, the company started encrypting all contents of iOS devices through users' passcodes, making it impossible at the time for the company to provide customer data to law enforcement requests seeking such information. With the rising popularity of cloud storage solutions, Apple introduced a technique in 2016 that performs deep-learning scans for facial data in photos on the user's local device, encrypting the content before uploading it to Apple's iCloud storage system. It also introduced "differential privacy", a way to collect crowdsourced data from many users while keeping individual users anonymous, in a system that Wired described as "trying to learn as much as possible about a group while learning as little as possible about any individual in it". Users are explicitly asked if they want to participate, and can actively opt in or opt out.
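Apple's deployed system is more elaborate than any short example can show, but the core idea of differential privacy can be illustrated with the classic randomized-response mechanism. The Python sketch below is a generic illustration, not Apple's implementation; the epsilon value and function names are illustrative assumptions.

```python
import math
import random

def randomized_response(truth: bool, epsilon: float = 1.0) -> bool:
    """Report one bit with plausible deniability.

    With probability p the true value is reported; otherwise a fair
    coin flip is reported. Choosing p = (e^eps - 1) / (e^eps + 1)
    makes the mechanism epsilon-differentially private.
    """
    p = (math.exp(epsilon) - 1) / (math.exp(epsilon) + 1)
    return truth if random.random() < p else random.random() < 0.5

def estimate_rate(reports: list, epsilon: float = 1.0) -> float:
    """Invert the known noise to estimate the population rate."""
    p = (math.exp(epsilon) - 1) / (math.exp(epsilon) + 1)
    observed = sum(reports) / len(reports)
    # E[observed] = p * true_rate + (1 - p) * 0.5, solved for true_rate:
    return (observed - (1 - p) * 0.5) / p

# The aggregator learns the group-level rate while any single
# user's report remains deniable.
reports = [randomized_response(random.random() < 0.3) for _ in range(100_000)]
print(round(estimate_rate(reports), 3))  # close to 0.3
```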
However, Apple has aided law enforcement in criminal investigations by providing iCloud backups of users' devices, and the company's commitment to privacy has been questioned by its efforts to promote biometric authentication technology in its newer iPhone models, which do not have the same level of constitutional privacy as a passcode in the United States.
With the release of an update to iOS 14, Apple required all developers of iPhone, iPad, and iPod Touch applications to directly ask users for permission to track them. The feature, called "App Tracking Transparency", received heavy criticism from Facebook, whose primary business model revolves around tracking users' data and sharing such data with advertisers so users can see more relevant ads, a technique commonly known as targeted advertising. Despite Facebook's countermeasures, including full-page newspaper advertisements protesting App Tracking Transparency, Apple released the update in early 2021. A study by Verizon subsidiary Flurry Analytics reported that only 4% of iOS users in the United States, and 12% worldwide, had opted into tracking.
Prior to the release of iOS 15, Apple announced new efforts at combating child sexual abuse material on iOS and Mac platforms. Parents of minor iMessage users can now be alerted if their child sends or receives nude photographs. Additionally, on-device hashing would take place on media destined for upload to iCloud, and hashes would be compared to a list of known abusive images provided by law enforcement; if enough matches were found, Apple would be alerted and authorities informed. The new features received praise from law enforcement and victims' rights advocates. However, privacy advocates, including the Electronic Frontier Foundation, condemned the new features as invasive and highly prone to abuse by authoritarian governments.
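The announced design paired a perceptual hash with cryptographic protocols so that nothing would be revealed below the match threshold. A greatly simplified sketch of the thresholded-matching idea follows; it omits the cryptographic layers entirely, substitutes SHA-256 for the perceptual hash (a real perceptual hash matches near-duplicates, which a cryptographic digest does not), and uses a hypothetical threshold value.

```python
import hashlib

def media_hash(data: bytes) -> str:
    # Stand-in only: a cryptographic digest, unlike a perceptual
    # hash, changes completely if even one byte of the image changes.
    return hashlib.sha256(data).hexdigest()

MATCH_THRESHOLD = 30  # hypothetical placeholder value

def count_matches(photos: list, known_hashes: set) -> int:
    """Count photos whose hash appears in the provided list."""
    return sum(media_hash(p) in known_hashes for p in photos)

def should_escalate(photos: list, known_hashes: set) -> bool:
    """Surface an account for human review only past the threshold."""
    return count_matches(photos, known_hashes) >= MATCH_THRESHOLD
```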
Ireland's Data Protection Commission launched a privacy investigation to examine whether Apple complied with the EU's GDPR law following an investigation into how the company processes personal data with targeted ads on its platform.
In December 2019, security researcher Brian Krebs discovered that the iPhone 11 Pro would still show the arrow indicator (signifying that location services are in use) at the top of the screen when the main location services toggle was enabled, even with every individual location service disabled. Krebs was unable to replicate this behavior on older models, and when he asked Apple for comment he was told: "It is expected behavior that the Location Services icon appears in the status bar when Location Services is enabled. The icon appears for system services that do not have a switch in Settings."
Apple later clarified that this behavior ensures compliance with ultra-wideband regulations in specific countries, a technology Apple began implementing with the iPhone 11 Pro, and emphasized that "the management of ultra wideband compliance and its use of location data is done entirely on the device and Apple is not collecting user location data." Will Strafach, an executive at security firm Guardian Firewall, confirmed the lack of evidence that location data was sent off to a remote server. Apple promised to add a new toggle for this feature, and in later iOS revisions provided users with the option to tap the location services indicator in Control Center to see which specific service is using the device's location.
According to published reports by Bloomberg News on March 30, 2022, Apple turned over data such as phone numbers, physical addresses, and IP addresses to hackers posing as law enforcement officials using forged documents. The law enforcement requests sometimes included forged signatures of real or fictional officials. When asked about the allegations, an Apple representative referred the reporter to a section of the company policy for law enforcement guidelines, which stated, "We review every data request for legal sufficiency and use advanced systems and processes to validate law enforcement requests and detect abuse."
Corporate affairs
Leadership
Senior management
The management of Apple Inc. includes:
Tim Cook (chief executive officer)
Jeff Williams (chief operating officer)
Kevan Parekh (senior vice president and chief financial officer)
Katherine L. Adams (senior vice president and general counsel)
Eddy Cue (senior vice president – Internet Software and Services)
Craig Federighi (senior vice president – Software Engineering)
John Giannandrea (senior vice president – Machine Learning and AI Strategy)
Deirdre O'Brien (senior vice president – Retail + People)
John Ternus (senior vice president – Hardware Engineering)
Greg Joswiak (senior vice president – Worldwide Marketing)
Johny Srouji (senior vice president – Hardware Technologies)
Sabih Khan (senior vice president – Operations)
Board of directors
The board of directors of Apple Inc. includes:
Arthur D. Levinson (chairman)
Tim Cook (executive director and CEO)
James A. Bell
Alex Gorsky
Andrea Jung
Monica Lozano
Ronald Sugar
Susan Wagner
Previous CEOs
Michael Scott (1977–1981)
Mike Markkula (1981–1983)
John Sculley (1983–1993)
Michael Spindler (1993–1996)
Gil Amelio (1996–1997)
Steve Jobs (1997–2011)
Ownership
The largest shareholders of Apple were as follows (a brief arithmetic consistency check appears after the list):
The Vanguard Group (1,400,000,000 shares, 9.29%)
BlackRock (1,120,000,000 shares, 7.48%)
State Street Corporation (595,500,000 shares, 3.96%)
Fidelity Investments (341,640,000 shares, 2.27%)
Geode Capital Management (340,160,000 shares, 2.26%)
Berkshire Hathaway (300,000,000 shares, 2.00%)
Morgan Stanley (238,260,000 shares, 1.59%)
T. Rowe Price (220,110,000 shares, 1.47%)
Norges Bank (187,160,000 shares, 1.25%)
JPMorgan Chase (183,010,000 shares, 1.22%)
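As a rough consistency check (an illustrative calculation, not from the source), each holding's share count divided by its stated percentage should imply the same total number of shares outstanding:

```python
holders = {
    "The Vanguard Group": (1_400_000_000, 9.29),
    "BlackRock": (1_120_000_000, 7.48),
    "State Street Corporation": (595_500_000, 3.96),
    "Berkshire Hathaway": (300_000_000, 2.00),
}

for name, (shares, pct) in holders.items():
    implied_total = shares / (pct / 100)  # shares / fraction held
    print(f"{name}: ~{implied_total / 1e9:.2f} billion shares implied")
# Every line prints roughly 15.0, so the share counts and the
# percentages in the list above are mutually consistent.
```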
Corporate culture
Apple is one of several highly successful companies founded in the 1970s that bucked the traditional notions of corporate culture. Jobs often walked around the office barefoot even after Apple became a Fortune 500 company. By the time of the "1984" television advertisement, Apple's informal culture had become a key trait that differentiated it from its competitors. According to a 2011 report in Fortune, this has resulted in a corporate culture more akin to a startup rather than a multinational corporation. In a 2017 interview, Wozniak credited watching Star Trek and attending Star Trek conventions in his youth as inspiration for co-founding Apple.
As the company has grown and been led by a series of differently opinionated chief executives, some media have suggested that it has lost some of its original character. Nonetheless, it has maintained a reputation for fostering individuality and excellence that reliably attracts talented workers, particularly after Jobs returned. Numerous Apple employees have stated that projects without Jobs's involvement often took longer than others.
The Apple Fellows program awards employees for extraordinary technical or leadership contributions to personal computing. Recipients include Bill Atkinson, Steve Capps, Rod Holt, Alan Kay, Guy Kawasaki, Al Alcorn, Don Norman, Rich Page, Steve Wozniak, and Phil Schiller.
Jobs intended employees to be specialists who are not exposed to functions outside their area of expertise. For instance, Ron Johnson—Senior Vice President of Retail Operations until November 1, 2011—was responsible for site selection, in-store service, and store layout, yet had no control of the inventory in his stores; inventory was handled by Tim Cook, who had a background in supply-chain management. Apple is known for strictly enforcing accountability: each project has a "directly responsible individual", or "DRI" in Apple jargon. Unlike other major U.S. companies, Apple provides a relatively simple compensation policy for executives that does not include perks enjoyed by other CEOs, such as country club fees or private use of company aircraft. The company typically grants stock options to executives every other year.
In 2015, Apple had 110,000 full-time employees. This increased to 116,000 full-time employees the next year, a notable slowdown in hiring largely due to the company's first annual revenue decline in over a decade. Apple does not specify how many of its employees work in retail, though its 2014 SEC filing put the number at approximately half of its employee base. In September 2017, Apple announced that it had over 123,000 full-time employees.
Apple has a strong culture of corporate secrecy, and has an anti-leak Global Security team that recruits from the National Security Agency, the Federal Bureau of Investigation, and the United States Secret Service. In December 2017, Glassdoor ranked Apple the 48th best place to work; the company had entered the list at rank 19 in 2009, peaked at rank 10 in 2012, and fell in the rankings in subsequent years. In 2023, Bloomberg's Mark Gurman revealed the existence of Apple's Exploratory Design Group (XDG), which was working to add glucose monitoring to the Apple Watch. Gurman compared XDG to Alphabet's X "moonshot factory".
Offices
Apple Inc.'s world corporate headquarters are located in Cupertino, in the middle of California's Silicon Valley, at Apple Park, a massive circular groundscraper building with a circumference of approximately one mile (1.6 km). The building opened in April 2017 and houses more than 12,000 employees. Apple co-founder Steve Jobs wanted Apple Park to look less like a business park and more like a nature refuge, and personally appeared before the Cupertino City Council in June 2011 to make the proposal, in his final public appearance before his death.
Apple also operates from the Apple Campus (also known by its address, 1 Infinite Loop), a grouping of six buildings in Cupertino located to the west of Apple Park. The Apple Campus was the company's headquarters from its opening in 1993 until the opening of Apple Park in 2017. The buildings, located at 1–6 Infinite Loop, are arranged in a circular pattern around a central green space, in a design that has been compared to that of a university.
In addition to Apple Park and the Apple Campus, Apple occupies an additional thirty office buildings scattered throughout the city of Cupertino, including three buildings as prior headquarters: Stephens Creek Three from 1977 to 1978, Bandley One from 1978 to 1982, and Mariani One from 1982 to 1993. In total, Apple occupies almost 40% of the available office space in the city.
Apple's headquarters for Europe, the Middle East and Africa (EMEA), known as the Hollyhill campus, are located in Cork in the south of Ireland. The facility, which opened in 1980, houses 5,500 people and was Apple's first location outside of the United States. Apple's international sales and distribution arms operate out of the campus in Cork.
Apple has two campuses near Austin, Texas: a campus opened in 2014 houses 500 engineers who work on Apple silicon and a campus opened in 2021 where 6,000 people work in technical support, supply chain management, online store curation, and Apple Maps data management. The company also has several other locations in Boulder, Colorado; Culver City, California; Herzliya (Israel), London, New York, Pittsburgh, San Diego, and Seattle that each employ hundreds of people.
Litigation
Apple has been a participant in various legal proceedings and claims since it began operation. In particular, Apple is known for and promotes itself as actively and aggressively enforcing its intellectual property interests. Some litigation examples include Apple v. Samsung, Apple v. Microsoft, Motorola Mobility v. Apple Inc., and Apple Corps v. Apple Computer. Apple has also had to defend itself against numerous charges of violating intellectual property rights; most such suits were brought by shell companies known as patent trolls and have been dismissed in the courts, with no evidence of actual use of the patents in question. On December 21, 2016, Nokia announced that it had filed suit against Apple in the U.S. and Germany, claiming that the latter's products infringe on Nokia's patents.
In November 2017, the United States International Trade Commission announced an investigation into allegations of patent infringement regarding Apple's remote desktop technology; Aqua Connect, a company that builds remote desktop software, claimed that Apple infringed on two of its patents.
Epic Games filed a lawsuit against Apple in August 2020 in the United States District Court for the Northern District of California, related to Apple's practices in the iOS App Store.
In January 2022, Ericsson sued Apple over royalty payments for 5G technology. On June 24, 2024, the European Commission accused Apple of violating the Digital Markets Act by preventing "app developers from freely steering consumers to alternative channels for offers and content". In April 2025, Apple was found guilty and fined 500 million euros ($570 million) for violating the Digital Markets Act.
Finances
As of its 2023 fiscal year, Apple is the world's largest technology company by revenue, with US$383.28 billion; the world's largest technology company by total assets; the fourth-largest personal computer vendor by unit sales; and the world's largest mobile phone manufacturer.
In its fiscal year ending in September 2011, Apple Inc. reported a total of $108 billion in annual revenues—a significant increase from its 2010 revenues of $65 billion—and nearly $82 billion in cash reserves. On March 19, 2012, Apple announced plans for a $2.65-per-share dividend beginning in fourth quarter of 2012, per approval by their board of directors.
The company's worldwide annual revenue in 2013 totaled $170 billion. In May 2013, Apple entered the top ten of the Fortune 500 list of companies for the first time, rising 11 places above its 2012 ranking to take the sixth position. By early 2016, Apple had around US$234 billion of cash and marketable securities, of which 90% was located outside the United States for tax purposes.
Apple amassed 65% of all profits made by the eight largest worldwide smartphone manufacturers in quarter one of 2014, according to a report by Canaccord Genuity. In the first quarter of 2015, the company garnered 92% of all earnings.
On April 30, 2017, The Wall Street Journal reported that Apple had cash reserves of $250 billion, officially confirmed by Apple as specifically $256.8 billion a few days later.
Apple has repeatedly been the largest publicly traded corporation in the world by market capitalization. On August 2, 2018, Apple became the first publicly traded U.S. company to reach a $1 trillion market value, and by 2024 was valued at just over $3.2 trillion. Apple was ranked No. 4 on the 2018 Fortune 500 rankings of the largest United States corporations by revenue.
In July 2022, Apple reported an 11% decline in Q3 profits compared to 2021. Its revenue in the same period rose 2% year-on-year to $83 billion, though this figure was also lower than in 2021, where the increase was at 36%. The general downturn is reportedly caused by the slowing global economy and supply chain disruptions in China. That year, Apple was one of the largest corporate spenders on research and development worldwide, with R&D expenditure amounting to over $27 billion.
In May 2023, Apple reported a decline in its sales for the first quarter of 2023. Revenue fell 3% compared to the same quarter of 2022, Apple's second consecutive quarter of declining sales. The fall is attributed to the slowing economy and to consumers putting off purchases of iPads and computers due to increased pricing. However, iPhone sales held up with a year-on-year increase of 1.5%. According to Apple, demand for such devices was strong, particularly in Latin America and South Asia.
Taxes
Apple has created subsidiaries in low-tax places such as Ireland, the Netherlands, Luxembourg, and the British Virgin Islands to cut the taxes it pays around the world. According to The New York Times, in the 1980s Apple was among the first tech companies to designate overseas salespeople in high-tax countries in a manner that allowed the company to sell on behalf of low-tax subsidiaries on other continents, sidestepping income taxes. In the late 1980s, Apple was a pioneer of an accounting technique known as the "Double Irish with a Dutch sandwich", which reduces taxes by routing profits through Irish subsidiaries and the Netherlands and then to the Caribbean.
British Conservative Party Member of Parliament Charlie Elphicke published research on October 30, 2012, which showed that some multinational companies, including Apple Inc., were making billions of pounds of profit in the UK, but were paying an effective tax rate to the UK Treasury of only 3 percent, well below standard corporate tax rates. He followed this research by calling on the Chancellor of the Exchequer George Osborne to force these multinationals, which also included Google and The Coca-Cola Company, to state the effective rate of tax they pay on their UK revenues. Elphicke also said that government contracts should be withheld from multinationals who do not pay their fair share of UK tax.
According to a US Senate report on the company's offshore tax structure, released in May 2013, Apple has held billions of dollars in profits in Irish subsidiaries in order to pay little or no taxes to any government, using an unusual global tax structure. The main subsidiary, a holding company that includes Apple's retail stores throughout Europe, had not paid any corporate income tax in the preceding five years. "Apple has exploited a difference between Irish and U.S. tax residency rules", the report said. On May 21, 2013, Apple CEO Tim Cook defended his company's tax tactics at a Senate hearing.
Apple says that it is the single largest taxpayer in the U.S., with an effective tax rate of approximately 26% as of Q2 FY2016. In an interview with the German newspaper FAZ in October 2017, Tim Cook stated that Apple was the biggest taxpayer worldwide.
In 2016, after a two-year investigation, the European Commission claimed that Apple's use of a hybrid Double Irish tax arrangement constituted "illegal state aid" from Ireland, and ordered Apple to pay 13 billion euros ($14.5 billion) in unpaid taxes, the largest corporate tax fine in history. This was later annulled, after the European General Court ruled that the commission had provided insufficient evidence. In 2018, Apple repatriated $285 billion to the United States, resulting in a $38 billion tax payment spread over the following eight years.
Charity
Apple is a partner of Product Red, a fundraising campaign for AIDS charities. In November 2014, Apple arranged for all App Store revenue in a two-week period to go to the fundraiser, generating more than US$20 million, and in March 2017, it released an iPhone 7 with a red color finish.
Apple contributes financially to fundraisers in times of natural disasters. In November 2012, it donated $2.5 million to the American Red Cross to aid relief efforts after Hurricane Sandy, and in 2017 it donated $5 million to relief efforts for both Hurricane Irma and Hurricane Harvey, and for the 2017 Central Mexico earthquake. The company has used its iTunes platform to encourage donations in the wake of environmental disasters and humanitarian crises, such as the 2010 Haiti earthquake, the 2011 Japan earthquake, Typhoon Haiyan in the Philippines in November 2013, and the 2015 European migrant crisis. Apple emphasizes that it does not incur any processing or other fees for iTunes donations, sending 100% of the payments directly to relief efforts, though it also acknowledges that the Red Cross does not receive any personal information on the users donating and that the payments may not be tax deductible.
On April 14, 2016, Apple and the World Wide Fund for Nature (WWF) announced a partnership to "help protect life on our planet". Apple released a special page in the iTunes App Store, Apps for Earth. Under the arrangement, Apple committed that through April 24, WWF would receive 100% of the proceeds from the participating applications, via both purchases of paid apps and in-app purchases. The Apps for Earth campaign raised more than $8 million in total proceeds to support WWF's conservation work; WWF announced the results at WWDC 2016 in San Francisco.
During the COVID-19 pandemic, Apple CEO Tim Cook announced that the company would donate "millions" of masks to health workers in the United States and Europe. On January 13, 2021, Apple announced a $100 million Racial Equity and Justice Initiative to help combat institutional racism worldwide after the 2020 murder of George Floyd. In June 2023, Apple announced that it had doubled this commitment, having distributed more than $200 million to support organizations focused on education, economic growth, and criminal justice, with half as philanthropic grants and half centered on equity.
Environment
Apple Energy
Apple Energy, LLC is a wholly owned subsidiary of Apple Inc. that sells solar energy. Apple's solar farms in California and Nevada have been declared to provide 217.9 megawatts of solar generation capacity. Apple has received regulatory approval to construct a landfill gas energy plant in North Carolina, using methane emissions to generate electricity. Apple's North Carolina data center is already powered entirely by renewable sources.
Energy and resources
In 2010, Climate Counts, a nonprofit organization dedicated to directing consumers toward the greenest companies, gave Apple a score of 52 points out of a possible 100, which puts Apple in their top category "Striding". This was an increase from May 2008, when Climate Counts only gave Apple 11 points out of 100, which placed the company last among electronics companies, at which time Climate Counts also labeled Apple with a "stuck icon", adding that Apple at the time was "a choice to avoid for the climate-conscious consumer".
Following a Greenpeace protest, Apple released a statement on April 17, 2012, committing to ending its use of coal and shifting to 100% renewable clean energy. By 2013, Apple was using 100% renewable energy to power their data centers. Overall, 75% of the company's power came from clean renewable sources.
In May 2015, Greenpeace evaluated the state of the Green Internet and commended Apple on their environmental practices saying, "Apple's commitment to renewable energy has helped set a new bar for the industry, illustrating in very concrete terms that a 100% renewable Internet is within its reach, and providing several models of intervention for other companies that want to build a sustainable Internet."
Apple states that 100% of its U.S. operations run on renewable energy, 100% of Apple's data centers run on renewable energy, and 93% of Apple's global operations run on renewable energy. However, the facilities are connected to the local grid, which usually contains a mix of fossil and renewable sources, so Apple offsets the carbon from its electricity use. The Electronic Product Environmental Assessment Tool (EPEAT) allows consumers to see the effect a product has on the environment. Each product receives a Gold, Silver, or Bronze rank depending on its efficiency and sustainability. Every Apple tablet, notebook, desktop computer, and display that EPEAT ranks achieves a Gold rating, the highest possible. Although Apple's data centers recycle water 35 times, the increased activity in retail, corporate and data centers also increased the company's water use in 2015.
During an event on March 21, 2016, Apple provided a status update on its environmental initiative to be 100% renewable in all of its worldwide operations. Lisa P. Jackson, Apple's vice president of Environment, Policy and Social Initiatives, who reports directly to CEO Tim Cook, announced that 93% of Apple's worldwide operations were powered with renewable energy. Also featured were the company's efforts to use sustainable paper in its product packaging; 99% of all paper used by Apple in product packaging comes from post-consumer recycled paper or sustainably managed forests, as the company continues its move to all-paper packaging for all of its products.
Apple announced on August 16, 2016, that Lens Technology, one of its major suppliers in China, has committed to power all its glass production for Apple with 100 percent renewable energy by 2018. The commitment is a large step in Apple's efforts to help manufacturers lower their carbon footprint in China. Apple also announced that all 14 of its final assembly sites in China are now compliant with UL's Zero Waste to Landfill validation. The standard, which started in January 2015, certifies that all manufacturing waste is reused, recycled, composted, or converted into energy (when necessary). Since the program began, nearly 140,000 metric tons of waste have been diverted from landfills.
On July 21, 2020, Apple announced its plan to become carbon neutral across its entire business, manufacturing supply chain, and product life cycle by 2030. In the next 10 years, Apple will try to lower emissions with a series of innovative actions, including: low carbon product design, expanding energy efficiency, renewable energy, process and material innovations, and carbon removal.
In June 2024, the United States Environmental Protection Agency (EPA) published a report about an electronic computer manufacturing facility leased by Apple in 2015 in Santa Clara, California, code-named Aria. The EPA report stated that Apple was potentially in violation of federal regulations under the Resource Conservation and Recovery Act (RCRA). According to a report from Bloomberg in 2018, the facility is used to develop microLED screens under the code name T159. The inspection found that Apple was potentially mistreating waste as subject only to California regulations, and that Apple had potentially miscalculated the effectiveness of its activated carbon filters, which remove volatile organic compounds (VOCs) from the air. The EPA inspected the facility in August 2023 after a tip from a former Apple employee, who posted the report on X.
Toxins
Following further campaigns by Greenpeace, in 2008, Apple became the first electronics manufacturer to eliminate all polyvinyl chloride (PVC) and brominated flame retardants (BFRs) in its complete product line. In June 2007, Apple began replacing the cold cathode fluorescent lamp (CCFL) backlit LCD displays in its computers with mercury-free LED-backlit LCD displays and arsenic-free glass, starting with the upgraded MacBook Pro. Apple publishes comprehensive information about the CO2e emissions, materials, and electrical usage of every product it currently produces or has sold (where it has enough data to produce a report) in the portfolio on its homepage, allowing consumers to make informed purchasing decisions. In June 2009, Apple's iPhone 3GS was free of PVC, arsenic, and BFRs. Since 2009, all Apple products have had mercury-free LED-backlit LCD displays, arsenic-free glass, and non-PVC cables. All Apple products have EPEAT Gold status and beat the latest Energy Star guidelines in each product's respective regulatory category.
In November 2011, Apple was featured in Greenpeace's Guide to Greener Electronics, which ranks electronics manufacturers on sustainability, climate and energy policy, and how "green" their products are. The company ranked fourth of fifteen electronics companies (moving up five places from the previous year) with a score of 4.6/10. Greenpeace praised Apple's sustainability, noting that the company exceeded its 70% global recycling goal in 2010. Apple continues to score well on product ratings, with all of their products now being free of PVC plastic and BFRs. However, the guide criticized Apple on the Energy criteria for not seeking external verification of its greenhouse gas emissions data, and for not setting any targets to reduce emissions. In January 2012, Apple requested that its cable maker, Volex, begin producing halogen-free USB and power cables.
Green bonds
In February 2016, Apple issued a US$1.5 billion green bond (climate bond), the first ever of its kind by a U.S. tech company. The green bond proceeds are dedicated to the financing of environmental projects.
Supply chain
Apple products were made in the United States in Apple-owned factories until the late 1990s; however, as a result of outsourcing initiatives in the 2000s, almost all of its manufacturing is now handled abroad. According to a report by The New York Times, Apple insiders "believe the vast scale of overseas factories, as well as the flexibility, diligence and industrial skills of foreign workers, have so outpaced their American counterparts that 'Made in the U.S.A.' is no longer a viable option for most Apple products".
The company's manufacturing, procurement, and logistics enable it to execute massive product launches without having to maintain large, profit-sapping inventories. In 2011, Apple's profit margins were 40 percent, compared with between 10 and 20 percent for most other hardware companies. Cook's catchphrase to describe his focus on the company's operational arm is: "Nobody wants to buy sour milk."
In May 2017, the company announced a $1 billion funding project for "advanced manufacturing" in the United States, and subsequently invested $200 million in Corning Inc., a manufacturer of toughened Gorilla Glass technology used in Apple's iPhones. The following December, Apple's chief operating officer, Jeff Williams, told CNBC that the "$1 billion" amount was "absolutely not" the final limit on its spending, elaborating that "We're not thinking in terms of a fund limit... We're thinking about, where are the opportunities across the U.S. to help nurture companies that are making the advanced technology — and the advanced manufacturing that goes with that — that quite frankly is essential to our innovation."
During the Mac's early history, Apple generally refused to adopt prevailing industry standards for hardware, instead creating their own. This trend was largely reversed in the late 1990s, beginning with Apple's adoption of the PCI bus in the 7500/8500/9500 Power Macs. Apple has since joined the industry standards groups to influence the future direction of technology standards such as USB, AGP, HyperTransport, Wi-Fi, NVMe, PCIe and others in its products. FireWire is an Apple-originated standard that was widely adopted across the industry after it was standardized as IEEE 1394 and is a legally mandated port in all cable TV boxes in the United States.
Apple has gradually expanded its efforts to get its products into the Indian market. In July 2012, during a conference call with investors, CEO Tim Cook said that he "[loves] India", but that Apple saw larger opportunities outside the region. India's requirement that 30% of products sold be manufactured in the country was described as something that "really adds cost to getting product to market". In May 2016, Apple opened an iOS app development center in Bangalore and a maps development office for 4,000 staff in Hyderabad. In March 2017, The Wall Street Journal reported that Apple would begin manufacturing iPhone models in India "over the next two months", and in May the Journal wrote that an Apple manufacturer had begun production of the iPhone SE in the country, while Apple told CNBC that the manufacturing was for a "small number" of units. In April 2019, Apple initiated manufacturing of the iPhone 7 at its Bengaluru facility, keeping in mind demand from local customers even as it sought more incentives from the government of India. At the beginning of 2020, Tim Cook announced that Apple planned to open its first physical outlet in India in 2021, with an online store to launch by the end of the year. The online store launched in September 2020, while the opening of the physical Apple Store was postponed and finally took place in April 2023.
Worker organizations
Apple directly employs 147,000 workers, including 25,000 corporate employees at Apple Park and across Silicon Valley; the vast majority of its direct employees work at the over 500 retail Apple Stores globally. Apple relies on a larger, outsourced workforce for manufacturing, particularly in China, where Apple directly employs 10,000 workers across its retail and corporate divisions while a further one million workers are contracted by Apple's suppliers, including Foxconn and Pegatron, to assemble Apple products. The Zhengzhou Technology Park alone employs 350,000 workers in Zhengzhou who work exclusively on the iPhone. Apple uses hardware components from 43 different countries. The majority of assembly is done by Taiwanese original design manufacturer firms Foxconn, Pegatron, Wistron and Compal Electronics, in factories primarily located inside China and, to a lesser extent, at Foxconn plants in Brazil and India.
Apple workers around the globe have been involved in organizing since the 1990s. Apple unions are made up of retail, corporate, and outsourced workers. Apple employees have joined trade unions or formed works councils in Australia, France, Germany, Italy, Japan, the United Kingdom and the United States. In 2021, Apple Together, a solidarity union, sought to bring together the company's global worker organizations. The majority of industrial labor disputes (including union recognition) involving Apple occur indirectly through its suppliers and contractors, notably Foxconn plants in China and, to a lesser extent, in Brazil and India.
Democratic Republic of the Congo
In 2019, Apple was named as a defendant in a forced labour and child slavery lawsuit by Congolese families of children injured and killed in cobalt mines owned by Glencore and Zhejiang Huayou Cobalt, which supply battery materials to Apple and other companies.
In April 2024, lawyers representing the Democratic Republic of the Congo notified Apple of evidence that it may be sourcing minerals from conflict areas of eastern Congo. Apple's policies and documentation describe mitigation efforts against conflict minerals; however, the lawyers identified discrepancies in supplier reporting, as well as a Global Witness report describing a lack of "meaningful mitigation" on Apple's part. In December 2024, the DRC filed a lawsuit against Apple's European subsidiaries.
|
;1976 establishments in California;1980s initial public offerings;American brands;Audio software companies;Companies based in Cupertino, California;Companies in the Dow Jones Global Titans 50;Companies in the Dow Jones Industrial Average;Companies in the PRISM network;Companies listed on the Nasdaq;Computer companies established in 1976;Computer companies of the United States;Computer hardware companies;Computer systems companies;Display technology companies;Electronics companies of the United States;Home computer hardware companies;Mobile phone manufacturers;Multinational companies headquartered in the United States;Networking hardware companies;Portable audio player manufacturers;Retail companies of the United States;Software companies based in the San Francisco Bay Area;Software companies established in 1976;Steve Jobs;Technology companies based in the San Francisco Bay Area;Technology companies established in 1976;Technology companies of the United States;Virtual reality companies
|
https://en.wikipedia.org/wiki/Argon
|
Argon is a chemical element; it has symbol Ar and atomic number 18. It is in group 18 of the periodic table and is a noble gas. Argon is the third most abundant gas in Earth's atmosphere, at 0.934% (9340 ppmv). It is more than twice as abundant as water vapor (which averages about 4000 ppmv, but varies greatly), 23 times as abundant as carbon dioxide (400 ppmv), and more than 500 times as abundant as neon (18 ppmv). Argon is the most abundant noble gas in Earth's crust, comprising 0.00015% of the crust.
Nearly all argon in Earth's atmosphere is radiogenic argon-40, derived from the decay of potassium-40 in Earth's crust. In the universe, argon-36 is by far the most common argon isotope, as it is the most easily produced by stellar nucleosynthesis in supernovas.
The name "argon" is derived from the Greek word , neuter singular form of meaning 'lazy' or 'inactive', as a reference to the fact that the element undergoes almost no chemical reactions. The complete octet (eight electrons) in the outer atomic shell makes argon stable and resistant to bonding with other elements. Its triple point temperature of 83.8058 K is a defining fixed point in the International Temperature Scale of 1990.
Argon is extracted industrially by the fractional distillation of liquid air. It is mostly used as an inert shielding gas in welding and other high-temperature industrial processes where ordinarily unreactive substances become reactive; for example, an argon atmosphere is used in graphite electric furnaces to prevent the graphite from burning. It is also used in incandescent and fluorescent lighting, and other gas-discharge tubes. It makes a distinctive blue-green gas laser. It is also used in fluorescent glow starters.
Characteristics
Argon has approximately the same solubility in water as oxygen and is 2.5 times more soluble in water than nitrogen. Argon is colorless, odorless, nonflammable and nontoxic as a solid, liquid or gas. Argon is chemically inert under most conditions and forms no confirmed stable compounds at room temperature.
Although argon is a noble gas, it can form some compounds under various extreme conditions. Argon fluorohydride (HArF), a compound of argon with fluorine and hydrogen that is stable below 17 K (−256 °C), has been demonstrated. Although the neutral ground-state chemical compounds of argon are presently limited to HArF, argon can form clathrates with water when atoms of argon are trapped in a lattice of water molecules. Ions, such as ArH+, and excited-state complexes, such as ArF, have been demonstrated. Theoretical calculation predicts several more argon compounds that should be stable but have not yet been synthesized.
History
Argon (Greek ἀργόν, neuter singular form of ἀργός, meaning "lazy" or "inactive") is named in reference to its chemical inactivity. This property of the first noble gas to be discovered impressed the namers. An unreactive gas was suspected to be a component of air by Henry Cavendish in 1785.
Argon was first isolated from air in 1894 by Lord Rayleigh and Sir William Ramsay at University College London by removing oxygen, carbon dioxide, water, and nitrogen from a sample of clean air. They first accomplished this by replicating an experiment of Henry Cavendish's. They trapped a mixture of atmospheric air with additional oxygen in a test-tube (A) upside-down over a large quantity of dilute alkali solution (B), which in Cavendish's original experiment was potassium hydroxide, and conveyed a current through wires insulated by U-shaped glass tubes (CC) which sealed around the platinum wire electrodes, leaving the ends of the wires (DD) exposed to the gas and insulated from the alkali solution. The arc was powered by a battery of five Grove cells and a Ruhmkorff coil of medium size. The alkali absorbed the oxides of nitrogen produced by the arc and also carbon dioxide. They operated the arc until no more reduction of volume of the gas could be seen for at least an hour or two and the spectral lines of nitrogen disappeared when the gas was examined. The remaining oxygen was reacted with alkaline pyrogallate to leave behind an apparently non-reactive gas which they called argon.
Before isolating the gas, they had determined that nitrogen produced from chemical compounds was 0.5% lighter than nitrogen from the atmosphere. The difference was slight, but it was important enough to attract their attention for many months. They concluded that there was another gas in the air mixed in with the nitrogen. Argon was also encountered in 1882 through independent research of H. F. Newall and W. N. Hartley. Each observed new lines in the emission spectrum of air that did not match known elements.
Prior to 1957, the symbol for argon was "A". This was changed to Ar after the International Union of Pure and Applied Chemistry published the work Nomenclature of Inorganic Chemistry in 1957.
Occurrence
Argon constitutes 0.934% by volume and 1.288% by mass of Earth's atmosphere. Air is the primary industrial source of purified argon products. Argon is isolated from air by fractionation, most commonly by cryogenic fractional distillation, a process that also produces purified nitrogen, oxygen, neon, krypton and xenon. Earth's crust and seawater contain 1.2 ppm and 0.45 ppm of argon, respectively.
Isotopes
The main isotopes of argon found on Earth are argon-40 (99.6%), argon-36 (0.34%), and argon-38 (0.06%). Naturally occurring potassium-40, with a half-life of 1.25 billion years, decays to stable argon-40 (11.2%) by electron capture or positron emission, and also to stable calcium-40 (88.8%) by beta decay. These properties and ratios are used to determine the age of rocks by K–Ar dating.
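A minimal sketch of the standard K–Ar age equation follows; the decay constant and the argon branch of about 10.5% are the conventional values used in geochronology (slightly lower than the rounded decay percentages quoted above), and the function name and sample values are illustrative.

```python
import math

LAMBDA_K40 = 5.543e-10  # total decay constant of K-40, per year
BRANCH_TO_AR = 0.105    # fraction of K-40 decays that yield Ar-40

def k_ar_age(ar40_radiogenic: float, k40_remaining: float) -> float:
    """Age in years from radiogenic Ar-40 and remaining K-40,
    measured in the same units (e.g. mol per gram of rock)."""
    ratio = ar40_radiogenic / k40_remaining
    return math.log(1.0 + ratio / BRANCH_TO_AR) / LAMBDA_K40

# Example: trapped Ar-40 equal to 10% of the remaining K-40
print(f"{k_ar_age(0.10, 1.0) / 1e9:.2f} billion years")  # ~1.21
```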
In Earth's atmosphere, argon-39 is made by cosmic ray activity, primarily by neutron capture of argon-40 followed by two-neutron emission. In the subsurface environment, it is also produced through neutron capture by potassium-39, followed by proton emission. Argon-37 is created from neutron capture by calcium-40 followed by an alpha particle emission, as a result of subsurface nuclear explosions. It has a half-life of 35 days.
Between locations in the Solar System, the isotopic composition of argon varies greatly. Where the major source of argon is the decay of potassium-40 in rocks, argon-40 will be the dominant isotope, as it is on Earth. Argon produced directly by stellar nucleosynthesis is dominated by the alpha-process nuclide argon-36. Correspondingly, solar argon contains 84.6% argon-36 (according to solar wind measurements), and the ratio of the three isotopes 36Ar : 38Ar : 40Ar in the atmospheres of the outer planets is 8400 : 1600 : 1. This contrasts with the low abundance of primordial argon-36 in Earth's atmosphere, which is only 31.5 ppmv (= 9340 ppmv × 0.337%), comparable with that of neon (18.18 ppmv) on Earth and with interplanetary gases, measured by probes.
The atmospheres of Mars, Mercury and Titan (the largest moon of Saturn) contain argon, predominantly as argon-40.
The predominance of radiogenic argon-40 is the reason the standard atomic weight of terrestrial argon is greater than that of the next element, potassium, a fact that was puzzling when argon was discovered. Mendeleev positioned the elements on his periodic table in order of atomic weight, but the inertness of argon suggested a placement before the reactive alkali metal. Henry Moseley later solved this problem by showing that the periodic table is actually arranged in order of atomic number (see History of the periodic table).
Compounds
Argon's complete octet of electrons indicates full s and p subshells. This full valence shell makes argon very stable and extremely resistant to bonding with other elements. Before 1962, argon and the other noble gases were considered to be chemically inert and unable to form compounds; however, compounds of the heavier noble gases have since been synthesized. The first argon compound, with tungsten pentacarbonyl, W(CO)5Ar, was isolated in 1975, though it was not widely recognised at that time. In August 2000, another argon compound, argon fluorohydride (HArF), was formed by researchers at the University of Helsinki by shining ultraviolet light onto frozen argon containing a small amount of hydrogen fluoride with caesium iodide. This discovery prompted the recognition that argon could form weakly bound compounds, even though it was not the first such compound. It is stable up to 17 kelvins (−256 °C). The metastable ArCF22+ dication, which is valence-isoelectronic with carbonyl fluoride and phosgene, was observed in 2010. Argon-36, in the form of argon hydride (argonium) ions, has been detected in interstellar medium associated with the Crab Nebula supernova; this was the first noble-gas molecule detected in outer space.
Solid argon hydride (Ar(H2)2) has the same crystal structure as the MgZn2 Laves phase. It forms at pressures between 4.3 and 220 GPa, though Raman measurements suggest that the H2 molecules in Ar(H2)2 dissociate above 175 GPa.
Production
Argon is extracted industrially by the fractional distillation of liquid air in a cryogenic air separation unit, a process that separates liquid nitrogen, which boils at 77.3 K, from argon, which boils at 87.3 K, and liquid oxygen, which boils at 90.2 K. About 700,000 tonnes of argon are produced worldwide every year.
Applications
Argon has several desirable properties:
Argon is a chemically inert gas.
Argon is the cheapest alternative when nitrogen is not sufficiently inert.
Argon has low thermal conductivity.
Argon has electronic properties (ionization and/or the emission spectrum) desirable for some applications.
Other noble gases would be equally suitable for most of these applications, but argon is by far the cheapest. It is inexpensive, since it occurs naturally in air and is readily obtained as a byproduct of cryogenic air separation in the production of liquid oxygen and liquid nitrogen: the primary constituents of air are used on a large industrial scale. The other noble gases (except helium) are produced this way as well, but argon is the most plentiful by far. The bulk of its applications arise simply because it is inert and relatively cheap.
Industrial processes
Argon is used in some high-temperature industrial processes where ordinarily non-reactive substances become reactive. For example, an argon atmosphere is used in graphite electric furnaces to prevent the graphite from burning.
For some of these processes, the presence of nitrogen or oxygen gases might cause defects within the material. Argon is used in some types of arc welding such as gas metal arc welding and gas tungsten arc welding, as well as in the processing of titanium and other reactive elements. An argon atmosphere is also used for growing crystals of silicon and germanium.
Argon is used in the poultry industry to asphyxiate birds, either for mass culling following disease outbreaks, or as a means of slaughter more humane than electric stunning. Argon is denser than air and displaces oxygen close to the ground during inert gas asphyxiation. Its non-reactive nature makes it suitable in a food product, and since it replaces oxygen within the dead bird, argon also enhances shelf life.
Argon is sometimes used for extinguishing fires where valuable equipment may be damaged by water or foam.
Scientific research
Liquid argon is used as the target for neutrino experiments and direct dark matter searches. The interaction between hypothetical WIMPs and an argon nucleus produces scintillation light that is detected by photomultiplier tubes. Two-phase detectors containing argon gas are used to detect the ionized electrons produced during WIMP–nucleus scattering. As with most other liquefied noble gases, argon has a high scintillation light yield (about 51 photons/keV), is transparent to its own scintillation light, and is relatively easy to purify. Compared to xenon, argon is cheaper and has a distinct scintillation time profile, which allows the separation of electronic recoils from nuclear recoils. On the other hand, its intrinsic beta-ray background is larger due to argon-39 contamination, unless one uses argon from underground sources, which has much less of it. Most of the argon in Earth's atmosphere was produced by electron capture of long-lived potassium-40 (40K + e− → 40Ar + ν) present in natural potassium within Earth. The argon-39 activity in the atmosphere is maintained by cosmogenic production through the knockout reaction 40Ar(n,2n)39Ar and similar reactions. The half-life of argon-39 is only 269 years. As a result, underground argon, shielded by rock and water, has much less argon-39 contamination. Dark-matter detectors currently operating with liquid argon include DarkSide, WArP, ArDM, microCLEAN and DEAP. Neutrino experiments include ICARUS and MicroBooNE, both of which use high-purity liquid argon in a time projection chamber for fine-grained three-dimensional imaging of neutrino interactions.
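Back-of-envelope, the quoted light yield translates directly into an expected photon count per recoil. In the sketch below, the collection and quantum efficiencies are illustrative placeholder values, not parameters of any experiment named above.

```python
LIGHT_YIELD = 51.0  # scintillation photons per keV in liquid argon

def expected_photoelectrons(energy_kev: float,
                            collection_eff: float = 0.2,
                            pmt_quantum_eff: float = 0.25) -> float:
    """Rough photoelectron count seen by the photomultipliers for
    a recoil depositing energy_kev in the liquid argon volume."""
    return energy_kev * LIGHT_YIELD * collection_eff * pmt_quantum_eff

print(expected_photoelectrons(10.0))  # 10 keV recoil -> ~25 photoelectrons
```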
At Linköping University, Sweden, the inert gas is being utilized in a vacuum chamber in which plasma is introduced to ionize metallic films. This process results in a film usable for manufacturing computer processors. The new process would eliminate the need for chemical baths and use of expensive, dangerous and rare materials.
Preservative
Argon is used to displace oxygen- and moisture-containing air in packaging material to extend the shelf-lives of the contents (argon has the European food additive code E938). Aerial oxidation, hydrolysis, and other chemical reactions that degrade the products are retarded or prevented entirely. High-purity chemicals and pharmaceuticals are sometimes packed and sealed in argon.
In winemaking, argon is used in a variety of activities to provide a barrier against oxygen at the liquid surface, which can spoil wine by fueling both microbial metabolism (as with acetic acid bacteria) and standard redox chemistry.
Argon is sometimes used as the propellant in aerosol cans.
Argon is also used as a preservative for such products as varnish, polyurethane, and paint, by displacing air to prepare a container for storage.
Since 2002, the American National Archives stores important national documents such as the Declaration of Independence and the Constitution within argon-filled cases to inhibit their degradation. Argon is preferable to the helium that had been used in the preceding five decades, because helium gas escapes through the intermolecular pores in most containers and must be regularly replaced.
Laboratory equipment
Argon may be used as the inert gas within Schlenk lines and gloveboxes. Argon is preferred to less expensive nitrogen in cases where nitrogen may react with the reagents or apparatus.
Argon may be used as the carrier gas in gas chromatography and in electrospray ionization mass spectrometry; it is the gas of choice for the plasma used in ICP spectroscopy. Argon is preferred for the sputter coating of specimens for scanning electron microscopy. Argon gas is also commonly used for sputter deposition of thin films as in microelectronics and for wafer cleaning in microfabrication.
Medical use
Cryosurgery procedures such as cryoablation use liquid argon to destroy tissue such as cancer cells. It is used in a procedure called "argon-enhanced coagulation", a form of argon plasma beam electrosurgery. The procedure carries a risk of producing gas embolism and has resulted in the death of at least one patient.
Blue argon lasers are used in surgery to weld arteries, destroy tumors, and correct eye defects.
Argon has also been used experimentally to replace nitrogen in the breathing or decompression mix known as Argox, to speed the elimination of dissolved nitrogen from the blood.
Lighting
Incandescent lights are filled with argon, to preserve the filaments at high temperature from oxidation. It is used for the specific way it ionizes and emits light, such as in plasma globes and calorimetry in experimental particle physics. Gas-discharge lamps filled with pure argon provide lilac/violet light; with argon and some mercury, blue light. Argon is also used for blue and green argon-ion lasers.
Miscellaneous uses
Argon is used for thermal insulation in energy-efficient windows. Argon is also used in technical scuba diving to inflate a dry suit because it is inert and has low thermal conductivity.
Argon is used as a propellant in the development of the Variable Specific Impulse Magnetoplasma Rocket (VASIMR). Compressed argon gas is allowed to expand, to cool the seeker heads of some versions of the AIM-9 Sidewinder missile and other missiles that use cooled thermal seeker heads. The gas is stored at high pressure.
Argon-39, with a half-life of 269 years, has been used for a number of applications, primarily ice core and ground water dating. Also, potassium–argon dating and related argon-argon dating are used to date sedimentary, metamorphic, and igneous rocks.
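Dating with argon-39 follows the ordinary radioactive-decay law: a sample's age is inferred from how far its argon-39 activity has fallen below the modern atmospheric level. A minimal sketch, with an illustrative function name and sample value:

```python
import math

AR39_HALF_LIFE = 269.0  # years

def ar39_age(activity_ratio: float) -> float:
    """Age in years from A_sample / A_modern, the sample's Ar-39
    activity relative to the modern atmospheric value."""
    return -AR39_HALF_LIFE / math.log(2) * math.log(activity_ratio)

# Groundwater retaining half the atmospheric Ar-39 activity:
print(f"{ar39_age(0.5):.0f} years")  # 269 years, i.e. one half-life
```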
Argon has been used by athletes as a doping agent to simulate hypoxic conditions. In 2014, the World Anti-Doping Agency (WADA) added argon and xenon to the list of prohibited substances and methods, although at this time there is no reliable test for abuse.
Safety
Although argon is non-toxic, it is 38% more dense than air and therefore considered a dangerous asphyxiant in closed areas. It is difficult to detect because it is colorless, odorless, and tasteless. A 1994 incident, in which a man was asphyxiated after entering an argon-filled section of oil pipe under construction in Alaska, highlights the dangers of argon tank leakage in confined spaces and emphasizes the need for proper use, storage and handling.
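The 38% figure follows from the molar masses: at the same temperature and pressure, gas densities scale with molar mass (an ideal-gas assumption). A one-line check:

```python
M_ARGON = 39.948  # g/mol
M_AIR = 28.96     # g/mol, mean molar mass of dry air

print(f"Argon is {100 * (M_ARGON / M_AIR - 1):.0f}% denser than air")  # 38%
```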
See also
Industrial gas
Oxygen–argon ratio, a ratio of two physically similar gases, which has importance in various sectors.
References
Further reading
On the triple point pressure of 69 kPa.
On the triple point temperature of 83.8058 K.
External links
Argon at The Periodic Table of Videos (University of Nottingham)
USGS Periodic Table – Argon
Diving applications: Why Argon?
|
;Chemical elements;E-number additives;Industrial gases;Noble gases
|
https://en.wikipedia.org/wiki/Arsenic
|
Arsenic is a chemical element; it has symbol As and atomic number 33. It is a metalloid and one of the pnictogens, and therefore shares many properties with its group 15 neighbors phosphorus and antimony. Arsenic is notoriously toxic. It occurs naturally in many minerals, usually in combination with sulfur and metals, but also as a pure elemental crystal. It has various allotropes, but only the grey form, which has a metallic appearance, is important to industry.
The primary use of arsenic is in alloys of lead (for example, in car batteries and ammunition). Arsenic is also a common n-type dopant in semiconductor electronic devices, and a component of the III–V compound semiconductor gallium arsenide. Arsenic and its compounds, especially the trioxide, are used in the production of pesticides, treated wood products, herbicides, and insecticides. These applications are declining with the increasing recognition of the persistent toxicity of arsenic and its compounds.
Arsenic has been known since ancient times to be poisonous to humans. However, a few species of bacteria are able to use arsenic compounds as respiratory metabolites. Trace quantities of arsenic have been proposed to be an essential dietary element in rats, hamsters, goats, and chickens. Research has not been conducted to determine whether small amounts of arsenic may play a role in human metabolism. However, arsenic poisoning occurs in multicellular life if quantities are larger than needed. Arsenic contamination of groundwater is a problem that affects millions of people across the world.
The United States' Environmental Protection Agency states that all forms of arsenic are a serious risk to human health. The United States Agency for Toxic Substances and Disease Registry ranked arsenic number 1 in its 2001 prioritized list of hazardous substances at Superfund sites. Arsenic is classified as a group-A carcinogen.
Characteristics
Physical characteristics
The three most common arsenic allotropes are grey, yellow, and black arsenic, with grey being the most common. Grey arsenic (α-As, space group R3̄m, No. 166) adopts a double-layered structure consisting of many interlocked, ruffled, six-membered rings. Because of weak bonding between the layers, grey arsenic is brittle and has a relatively low Mohs hardness of 3.5. Nearest and next-nearest neighbors form a distorted octahedral complex, with the three atoms in the same double-layer being slightly closer than the three atoms in the next. This relatively close packing leads to a high density of 5.73 g/cm3. Grey arsenic is a semimetal, but becomes a semiconductor with a bandgap of 1.2–1.4 eV if amorphized. Grey arsenic is also the most stable form.
Yellow arsenic is soft and waxy, and somewhat similar to tetraphosphorus (P4). Both have four atoms arranged in a tetrahedral structure in which each atom is bound to each of the other three atoms by a single bond. This unstable allotrope, being molecular, is the most volatile, least dense, and most toxic. Solid yellow arsenic is produced by rapid cooling of arsenic vapor, As4. It is rapidly transformed into grey arsenic by light. The yellow form has a density of 1.97 g/cm3. Black arsenic is similar in structure to black phosphorus.
Black arsenic can also be formed by cooling vapor at around 100–220 °C and by crystallization of amorphous arsenic in the presence of mercury vapors. It is glassy and brittle. Black arsenic is also a poor electrical conductor.
Arsenic sublimes upon heating at atmospheric pressure, converting directly to a gaseous form without an intervening liquid state at 887 K (614 °C). The triple point is at 3.63 MPa and 1,090 K (817 °C).
Isotopes
Arsenic occurs in nature as one stable isotope, 75As, and is therefore called a monoisotopic element. As of 2024, at least 32 radioisotopes have also been synthesized, ranging in atomic mass from 64 to 95. The most stable of these is 73As with a half-life of 80.30 days. The majority of the other isotopes have half-lives of under one day, the exceptions being:
71As (t1/2 = 65.30 hours)
72As (t1/2 = 26.0 hours)
74As (t1/2 = 17.77 days)
76As (t1/2 = 26.26 hours)
77As (t1/2 = 38.83 hours)
Isotopes that are lighter than the stable 75As tend to decay by β+ decay, and those that are heavier tend to decay by β− decay, with some exceptions.
At least 10 nuclear isomers have been described, ranging in atomic mass from 66 to 84. The most stable of arsenic's isomers is 68mAs with a half-life of 111 seconds.
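For a sense of scale, a half-life converts to a specific activity through the standard relation A = λN with λ = ln 2 / t1/2. A minimal sketch for 73As, the most stable radioisotope quoted above (constants rounded, names illustrative):

```python
import math

N_A = 6.022e23               # Avogadro constant, atoms/mol
HALF_LIFE_S = 80.30 * 86400  # 73As half-life quoted above, in seconds
MOLAR_MASS_G = 73.0          # approximate molar mass of 73As, g/mol

def specific_activity_bq_per_g() -> float:
    """Decays per second per gram of pure 73As: A = lambda * N."""
    decay_constant = math.log(2) / HALF_LIFE_S
    atoms_per_gram = N_A / MOLAR_MASS_G
    return decay_constant * atoms_per_gram

print(f"{specific_activity_bq_per_g():.2e} Bq/g")  # ~8.2e+14 Bq/g
```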
Chemistry
Arsenic has electronegativity and ionization energies similar to those of its lighter pnictogen congener phosphorus and therefore readily forms covalent molecules with most of the nonmetals. Though stable in dry air, arsenic forms a golden-bronze tarnish upon exposure to humidity which eventually becomes a black surface layer. When heated in air, arsenic oxidizes to arsenic trioxide; the fumes from this reaction have an odor resembling garlic. This odor can be detected on striking arsenide minerals such as arsenopyrite with a hammer. It burns in oxygen to form arsenic trioxide and arsenic pentoxide, which have the same structure as the more well-known phosphorus compounds, and in fluorine to give arsenic pentafluoride. Arsenic forms arsenic acid with concentrated nitric acid, arsenous acid with dilute nitric acid, and arsenic trioxide with concentrated sulfuric acid; however, it does not react with water, alkalis, or non-oxidising acids. Arsenic reacts with metals to form arsenides, though these are not ionic compounds containing the As3− ion: the formation of such an anion would be highly endothermic, and even the group 1 arsenides have properties of intermetallic compounds. Like germanium, selenium, and bromine, which like arsenic succeed the 3d transition series, arsenic is much less stable in the +5 oxidation state than its vertical neighbors phosphorus and antimony; hence arsenic pentoxide and arsenic acid are potent oxidizers.
Compounds
Compounds of arsenic resemble, in some respects, those of phosphorus, which occupies the same group (column) of the periodic table. The most common oxidation states for arsenic are: −3 in the arsenides, which are alloy-like intermetallic compounds, +3 in the arsenites, and +5 in the arsenates and most organoarsenic compounds. Arsenic also bonds readily to itself, as seen in the square As44− ions in the mineral skutterudite. In the +3 oxidation state, arsenic is typically pyramidal owing to the influence of the lone pair of electrons.
Inorganic compounds
One of the simplest arsenic compounds is the trihydride, the highly toxic, flammable, pyrophoric arsine (AsH3). This compound is generally regarded as stable, since at room temperature it decomposes only slowly. At temperatures of 250–300 °C decomposition to arsenic and hydrogen is rapid. Several factors, such as humidity, the presence of light, and certain catalysts (namely aluminium), accelerate decomposition. It oxidises readily in air to form arsenic trioxide and water, and analogous reactions take place with sulfur and selenium instead of oxygen.
Arsenic forms colorless, odorless, crystalline oxides As2O3 ("white arsenic") and As2O5 which are hygroscopic and readily soluble in water to form acidic solutions. Arsenic(V) acid is a weak acid and its salts, known as arsenates, are a major source of arsenic contamination of groundwater in regions with high levels of naturally-occurring arsenic minerals. Synthetic arsenates include Scheele's Green (cupric hydrogen arsenate, acidic copper arsenate), calcium arsenate, and lead hydrogen arsenate. These three have been used as agricultural insecticides and poisons.
The protonation steps between the arsenate and arsenic acid are similar to those between phosphate and phosphoric acid. Unlike phosphorous acid, arsenous acid is genuinely tribasic, with the formula As(OH)3.
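As a sketch of that parallel (approximate textbook pKa values, which vary somewhat between sources), arsenic acid dissociates in three steps much like phosphoric acid:

$$\begin{aligned}
\mathrm{H_3AsO_4} &\rightleftharpoons \mathrm{H_2AsO_4^-} + \mathrm{H^+} &\quad \mathrm{p}K_{a1} &\approx 2.2\\
\mathrm{H_2AsO_4^-} &\rightleftharpoons \mathrm{HAsO_4^{2-}} + \mathrm{H^+} &\quad \mathrm{p}K_{a2} &\approx 7.0\\
\mathrm{HAsO_4^{2-}} &\rightleftharpoons \mathrm{AsO_4^{3-}} + \mathrm{H^+} &\quad \mathrm{p}K_{a3} &\approx 11.5
\end{aligned}$$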
A broad variety of sulfur compounds of arsenic are known. Orpiment (As2S3) and realgar (As4S4) are somewhat abundant and were formerly used as painting pigments. Arsenic has a formal oxidation state of +2 in As4S4, which features As–As bonds so that the total covalency of arsenic is still 3. Both orpiment and realgar, as well as As4S3, have selenium analogs; the analogous As2Te3 is known as the mineral kalgoorlieite, and the anion As2Te− is known as a ligand in cobalt complexes.
All trihalides of arsenic(III) are well known except the astatide, which has never been prepared. Arsenic pentafluoride (AsF5) is the only important pentahalide, reflecting the lower stability of the +5 oxidation state; even so, it is a very strong fluorinating and oxidizing agent. (The pentachloride is stable only below −50 °C, at which temperature it decomposes to the trichloride, releasing chlorine gas.)
Alloys
Arsenic is used as the group V element in the III-V semiconductors gallium arsenide, indium arsenide, and aluminium arsenide. The valence electron count of GaAs is the same as that of a pair of Si atoms, but the band structure is completely different, which results in distinct bulk properties. Other arsenic alloys include the II-V semiconductor cadmium arsenide.
Organoarsenic compounds
A large variety of organoarsenic compounds are known. Several were developed as chemical warfare agents during World War I, including vesicants such as lewisite and vomiting agents such as adamsite. Cacodylic acid, which is of historic and practical interest, arises from the methylation of arsenic trioxide, a reaction that has no analogy in phosphorus chemistry. Cacodyl was the first organometallic compound known (even though arsenic is not a true metal) and was named from the Greek κακωδία "stink" for its offensive, garlic-like odor; it is very toxic.
Occurrence and production
Arsenic is the 53rd most abundant element in the Earth's crust, comprising about 1.5 parts per million (0.00015%). Typical background concentrations of arsenic do not exceed 3 ng/m3 in the atmosphere; 100 mg/kg in soil; 400 μg/kg in vegetation; 10 μg/L in freshwater and 1.5 μg/L in seawater. Arsenic is the 22nd most abundant element in seawater and ranks 41st in abundance in the universe.
Minerals with the formula MAsS and MAs2 (M = Fe, Ni, Co) are the dominant commercial sources of arsenic, together with realgar (an arsenic sulfide mineral) and native (elemental) arsenic. An illustrative mineral is arsenopyrite (FeAsS), which is structurally related to iron pyrite. Many minor As-containing minerals are known. Arsenic also occurs in various organic forms in the environment.
In 2014, China was the top producer of white arsenic with almost 70% world share, followed by Morocco, Russia, and Belgium, according to the British Geological Survey and the United States Geological Survey. Most arsenic refinement operations in the US and Europe have closed over environmental concerns. Arsenic is found in the smelter dust from copper, gold, and lead smelters, and is recovered primarily from copper refinement dust. Arsenic is the main impurity in copper concentrates entering copper smelting facilities. Arsenic levels in copper concentrates have increased over the years as copper mining has moved into deep, high-impurity ores with the progressive depletion of shallow, low-arsenic deposits.
On roasting arsenopyrite in air, arsenic sublimes as arsenic(III) oxide leaving iron oxides, while roasting without air results in the production of gray arsenic. Further purification from sulfur and other chalcogens is achieved by sublimation in vacuum, in a hydrogen atmosphere, or by distillation from a molten lead-arsenic mixture.
History
The word arsenic has its origin in the Syriac word zarnika, from Arabic al-zarnīḵ 'the orpiment', based on Persian zar ("gold") from the word zarnikh, meaning "yellow" (literally "gold-colored") and hence "(yellow) orpiment". It was adopted into Greek (using folk etymology) as arsenikon () – a neuter form of the Greek adjective arsenikos (), meaning "male", "virile".
Latin-speakers adopted the Greek term as , which in French ultimately became , whence the English word "arsenic".
Arsenic sulfides (orpiment, realgar) and oxides have been known and used since ancient times. Zosimos describes roasting sandarach (realgar) to obtain a cloud of arsenic (arsenic trioxide), which he then reduces to gray arsenic. As the symptoms of arsenic poisoning are not very specific, the substance was frequently used for murder until the advent in the 1830s of the Marsh test, a sensitive chemical test for its presence. (Another less sensitive but more general test is the Reinsch test.) Owing to its use by the ruling class to murder one another, and to its potency and discreetness, arsenic has been called the "poison of kings" and the "king of poisons". Arsenic became known as "the inheritance powder" due to its use in killing family members in the Renaissance era.
During the Bronze Age, arsenic was melted with copper to make arsenical bronze.
Jabir ibn Hayyan described the isolation of arsenic before 815 AD.
Albertus Magnus (Albert the Great, 1193–1280) later isolated the element from a compound in 1250, by heating soap together with arsenic trisulfide. In 1649, Johann Schröder published two ways of preparing arsenic. Crystals of elemental (native) arsenic are found in nature, although rarely.
Cadet's fuming liquid (impure cacodyl), often claimed as the first synthetic organometallic compound, was synthesized in 1760 by Louis Claude Cadet de Gassicourt through the reaction of potassium acetate with arsenic trioxide.
In the Victorian era, women would eat "arsenic" ("white arsenic" or arsenic trioxide) mixed with vinegar and chalk to improve the complexion of their faces, making their skin paler (to show they did not work in the fields). The accidental use of arsenic in the adulteration of foodstuffs led to the Bradford sweet poisoning in 1858, which resulted in 21 deaths. From the late 18th century, wallpaper production began to use dyes made from arsenic, which was thought to increase the pigment's brightness. One account of the illness and 1821 death of Napoleon implicates arsenic poisoning involving wallpaper.
Two arsenic pigments have been widely used since their discovery – Paris Green in 1814 and Scheele's Green in 1775. After the toxicity of arsenic became widely known, these chemicals were used less often as pigments and more often as insecticides. In the 1860s, an arsenic byproduct of dye production, London Purple, was widely used. This was a solid mixture of arsenic trioxide, aniline, lime, and ferrous oxide, insoluble in water and very toxic by inhalation or ingestion, but it was later replaced with Paris Green, another arsenic-based dye. With better understanding of the toxicology mechanism, two other compounds were used starting in the 1890s. Arsenite of lime and arsenate of lead were used widely as insecticides until the discovery of DDT in 1942.
In small doses, soluble arsenic compounds act as stimulants, and were once popular as medicines in the mid-18th to 19th centuries; this use was especially prevalent for sport animals such as race horses or work dogs and continued into the 20th century.
A 2006 study of the remains of the Australian racehorse Phar Lap determined that its 1932 death was caused by a massive overdose of arsenic. Sydney veterinarian Percy Sykes stated,
"In those days, arsenic was quite a common tonic, usually given in the form of a solution (Fowler's Solution) ... It was so common that I'd reckon 90 per cent of the horses had arsenic in their system."
Applications
Agricultural
The toxicity of arsenic to insects, bacteria, and fungi led to its use as a wood preservative. In the 1930s, a process of treating wood with chromated copper arsenate (also known as CCA or Tanalith) was invented, and for decades, this treatment was the most extensive industrial use of arsenic. An increased appreciation of the toxicity of arsenic led to a ban of CCA in consumer products in 2004, initiated by the European Union and United States. However, CCA remains in heavy use in other countries (such as on Malaysian rubber plantations).
Arsenic was also used in various agricultural insecticides and poisons. For example, lead hydrogen arsenate was a common insecticide on fruit trees, but contact with the compound sometimes resulted in brain damage among those working the sprayers. In the second half of the 20th century, monosodium methyl arsenate (MSMA) and disodium methyl arsenate (DSMA) – less toxic organic forms of arsenic – replaced lead arsenate in agriculture. These organic arsenicals were in turn phased out in the United States by 2013 in all agricultural activities except cotton farming.
The biogeochemistry of arsenic is complex and includes various adsorption and desorption processes. The toxicity of arsenic is connected to its solubility and is affected by pH. Arsenite (AsO33−) is more soluble than arsenate (AsO43−) and is more toxic; however, at a lower pH, arsenate becomes more mobile and toxic. The addition of sulfur, phosphorus, and iron oxides to high-arsenite soils has been found to greatly reduce arsenic phytotoxicity.
Arsenic has been used as a feed additive in poultry and swine production; in particular, it was used in the U.S. until 2015 to increase weight gain, improve feed efficiency, and prevent disease. An example is roxarsone, which had been used as a broiler starter by about 70% of U.S. broiler growers. In 2011, Alpharma, a subsidiary of Pfizer Inc., which produces roxarsone, voluntarily suspended sales of the drug in response to studies showing elevated levels of inorganic arsenic, a carcinogen, in treated chickens. A successor to Alpharma, Zoetis, continued to sell nitarsone until 2015, primarily for use in turkeys.
Medical use
During the 17th, 18th, and 19th centuries, a number of arsenic compounds were used as medicines, including arsphenamine (by Paul Ehrlich) and arsenic trioxide (by Thomas Fowler), for treating diseases such as cancer or psoriasis. Arsphenamine, as well as neosalvarsan, was indicated for syphilis, but has been superseded by modern antibiotics. However, arsenicals such as melarsoprol are still used for the treatment of trypanosomiasis in spite of their severe toxicity, since the disease is almost uniformly fatal if untreated. In 2000 the US Food and Drug Administration approved arsenic trioxide for the treatment of patients with acute promyelocytic leukemia that is resistant to all-trans retinoic acid.
A 2008 paper reports success in locating tumors using arsenic-74 (a positron emitter). This isotope produces clearer PET scan images than the previous radioactive agent, iodine-124, because the body tends to transport iodine to the thyroid gland producing signal noise. Nanoparticles of arsenic have shown ability to kill cancer cells with lesser cytotoxicity than other arsenic formulations.
Alloys
The main use of arsenic is in alloying with lead. Lead components in car batteries are strengthened by the presence of a very small percentage of arsenic. Dezincification of brass (a copper-zinc alloy) is greatly reduced by the addition of arsenic. "Phosphorus Deoxidized Arsenical Copper" with an arsenic content of 0.3% has an increased corrosion stability in certain environments. Gallium arsenide is an important semiconductor material, used in integrated circuits. Circuits made from GaAs are much faster (but also much more expensive) than those made from silicon. Unlike silicon, GaAs has a direct bandgap, and can be used in laser diodes and LEDs to convert electrical energy directly into light.
Military
After World War I, the United States built a stockpile of 20,000 tons of weaponized lewisite (ClCH=CHAsCl2), an organoarsenic vesicant (blister agent) and lung irritant. The stockpile was neutralized with bleach and dumped into the Gulf of Mexico in the 1950s. Lewisite, the chemical warfare agent, is known for its acute toxicity to aquatic organisms. However, studies assessing the environmental impact of this disposal in the Gulf are lacking. During the Vietnam War, the United States used Agent Blue, a mixture of sodium cacodylate and its acid form, as one of the rainbow herbicides to deprive North Vietnamese soldiers of foliage cover and rice.
Other uses
Copper acetoarsenite was used as a green pigment known under many names, including Paris Green and Emerald Green. It caused numerous arsenic poisonings. Scheele's Green, a copper arsenate, was used in the 19th century as a coloring agent in sweets.
Arsenic is used in bronzing.
As much as 2% of produced arsenic is used in lead alloys for lead shot and bullets.
Arsenic is added in small quantities to alpha-brass to make it dezincification-resistant. This grade of brass is used in plumbing fittings and other wet environments.
Arsenic is also used for taxonomic sample preservation. It was also used in embalming fluids historically.
Arsenic was used in the taxidermy process up until the 1980s.
Arsenic was used as an opacifier in ceramics, creating white glazes.
Until recently, arsenic was used in optical glass. Modern glass manufacturers have ceased using both arsenic and lead.
Biological role
Bacteria
Some species of bacteria obtain their energy in the absence of oxygen by oxidizing various fuels while reducing arsenate to arsenite. Under oxidative environmental conditions some bacteria use arsenite as fuel, which they oxidize to arsenate. The enzymes involved are known as arsenate reductases (Arr).
In 2008, bacteria were discovered that employ a version of photosynthesis in the absence of oxygen with arsenites as electron donors, producing arsenates (just as ordinary photosynthesis uses water as electron donor, producing molecular oxygen). Researchers conjecture that, over the course of history, these photosynthesizing organisms produced the arsenates that allowed the arsenate-reducing bacteria to thrive. One strain, PHS-1, has been isolated and is related to the gammaproteobacterium Ectothiorhodospira shaposhnikovii. The mechanism is unknown, but an encoded Arr enzyme may function in reverse to its known homologues.
In 2010, researchers reported the discovery of a strain of the bacterium Halomonas (designated GFAJ-1) that was allegedly capable of substituting arsenic for phosphorus in its biomolecules, including DNA, when grown in an arsenic-rich, phosphate-limited environment. This claim, published in Science, suggested that arsenic could potentially serve as a building block of life in place of phosphorus, challenging long-standing assumptions about biochemical requirements for life on Earth.
The claim was met with widespread skepticism. Subsequent studies provided evidence contradicting the initial findings. One follow-up study published in Science in 2011 demonstrated that GFAJ-1 still requires phosphate to grow and does not incorporate arsenate into its DNA in any biologically significant way. Another independent investigation in 2012 used more sensitive techniques to purify and analyze the DNA of GFAJ-1 and found no detectable arsenate incorporated into the DNA backbone. The authors concluded that the original observations were likely due to experimental contamination or insufficient purification methods. Together, these studies reaffirmed phosphorus as an essential element for all known forms of life.
Potential role in higher animals
Arsenic may be an essential trace mineral in birds, involved in the synthesis of methionine metabolites. However, the role of arsenic in bird nutrition is disputed, as other authors state that arsenic is toxic in small amounts.
Some evidence indicates that arsenic is an essential trace mineral in mammals.
Experimental studies in rodents and livestock have shown that arsenic deprivation can lead to impaired growth, reduced reproductive performance, and abnormal glucose metabolism, suggesting it may play a role in essential metabolic processes. Arsenic has been proposed to participate in methylation reactions, possibly influencing gene regulation and detoxification pathways. However, because the threshold between beneficial and toxic exposure is extremely narrow, arsenic is not currently classified as an essential element for humans, and its physiological role in higher animals remains uncertain.
Heredity
Arsenic has been linked to epigenetic changes, heritable changes in gene expression that occur without changes in DNA sequence. These include DNA methylation, histone modification, and RNA interference. Toxic levels of arsenic cause significant DNA hypermethylation of tumor suppressor genes p16 and p53, thus increasing risk of carcinogenesis. These epigenetic events have been studied in vitro using human kidney cells and in vivo using rat liver cells and peripheral blood leukocytes in humans. Inductively coupled plasma mass spectrometry (ICP-MS) is used to detect precise levels of intracellular arsenic and other arsenic bases involved in epigenetic modification of DNA. Studies investigating arsenic as an epigenetic factor can be used to develop precise biomarkers of exposure and susceptibility.
The Chinese brake fern (Pteris vittata) hyperaccumulates arsenic from the soil into its leaves and has a proposed use in phytoremediation.
Biomethylation
Inorganic arsenic and its compounds, upon entering the food chain, are progressively metabolized through a process of methylation. For example, the mold Scopulariopsis brevicaulis produces trimethylarsine if inorganic arsenic is present. The organic compound arsenobetaine is found in some marine foods such as fish and algae, and also in mushrooms in larger concentrations. The average person's intake is about 10–50 μg/day. Values of about 1000 μg/day are not unusual following consumption of fish or mushrooms, but there is little danger in eating fish because this arsenic compound is nearly non-toxic.
Environmental issues
Exposure
Naturally occurring sources of human exposure include volcanic ash, weathering of minerals and ores, and mineralized groundwater. Arsenic is also found in food, water, soil, and air. Arsenic is absorbed by all plants, but is more concentrated in leafy vegetables, rice, apple and grape juice, and seafood. An additional route of exposure is inhalation of atmospheric gases and dusts.
During the Victorian era, arsenic was widely used in home decor, especially wallpapers. In Europe, an analysis of 20,000 soil samples across all 28 EU countries shows that 98% of sampled soils have concentrations less than 20 mg/kg, and that arsenic hotspots are related to both frequent fertilization and proximity to mining activities. Chronic exposure to arsenic, particularly through contaminated drinking water and food, has also been linked to long-term impacts on cognitive function, including reduced verbal IQ and memory.
Occurrence in drinking water
Extensive arsenic contamination of groundwater has led to widespread arsenic poisoning in Bangladesh and neighboring countries. It is estimated that approximately 57 million people in the Bengal basin are drinking groundwater with arsenic concentrations elevated above the World Health Organization's standard of 10 parts per billion (ppb). However, a study of cancer rates in Taiwan suggested that significant increases in cancer mortality appear only at levels above 150 ppb. The arsenic in the groundwater is of natural origin, released from the sediment into the groundwater by the anoxic conditions of the subsurface. This groundwater came into use after local and western NGOs and the Bangladeshi government undertook a massive shallow tube well drinking-water program in the late twentieth century. This program was designed to prevent drinking of bacteria-contaminated surface waters, but failed to test for arsenic in the groundwater. Many other countries and districts in Southeast Asia, such as Vietnam and Cambodia, have geological environments that produce groundwater with a high arsenic content. Arsenicosis was reported in Nakhon Si Thammarat, Thailand, in 1987, and the Chao Phraya River probably contains high levels of naturally occurring dissolved arsenic without being a public health problem because much of the public uses bottled water. In Pakistan, more than 60 million people are exposed to arsenic-polluted drinking water, according to a 2017 report in Science; Podgorski's team investigated more than 1,200 samples, over 66% of which exceeded the WHO limit of 10 micrograms per liter.
Since the 1980s, residents of the Ba Men region of Inner Mongolia, China have been chronically exposed to arsenic through drinking water from contaminated wells. A 2009 research study observed an elevated presence of skin lesions among residents with well water arsenic concentrations between 5 and 10 μg/L, suggesting that arsenic-induced toxicity may occur at relatively low concentrations with chronic exposure. Overall, 20 of China's 34 provinces have high arsenic concentrations in the groundwater supply, potentially exposing 19 million people to hazardous drinking water.
A study by IIT Kharagpur found high levels of arsenic in the groundwater of 20% of India's land area, exposing more than 250 million people. States such as Punjab, Bihar, West Bengal, Assam, Haryana, Uttar Pradesh, and Gujarat have the highest land area exposed to arsenic.
In the United States, arsenic is most commonly found in the ground waters of the southwest. Parts of New England, Michigan, Wisconsin, Minnesota and the Dakotas are also known to have significant concentrations of arsenic in ground water. Increased levels of skin cancer have been associated with arsenic exposure in Wisconsin, even at levels below the 10 ppb drinking water standard. According to a recent film funded by the US Superfund, millions of private wells have unknown arsenic levels, and in some areas of the US, more than 20% of the wells may contain levels that exceed established limits.
Low-level exposure to arsenic at concentrations of 100 ppb (i.e., above the 10 ppb drinking water standard) compromises the initial immune response to H1N1 or swine flu infection according to NIEHS-supported scientists. The study, conducted in laboratory mice, suggests that people exposed to arsenic in their drinking water may be at increased risk for more serious illness or death from the virus.
Some Canadians are drinking water that contains inorganic arsenic. Privately dug well waters are most at risk for containing inorganic arsenic. Preliminary well water analysis typically does not test for arsenic. Researchers at the Geological Survey of Canada have modeled relative variation in natural arsenic hazard potential for the province of New Brunswick. This study has important implications for potable water and health concerns relating to inorganic arsenic.
Epidemiological evidence from Chile shows a dose-dependent connection between chronic arsenic exposure and various forms of cancer, in particular when other risk factors, such as cigarette smoking, are present. These effects have been demonstrated at contaminations less than 50 ppb. Arsenic is itself a constituent of tobacco smoke.
Analyzing multiple epidemiological studies on inorganic arsenic exposure suggests a small but measurable increase in risk for bladder cancer at 10 ppb. According to Peter Ravenscroft of the Department of Geography at the University of Cambridge, roughly 80 million people worldwide consume between 10 and 50 ppb arsenic in their drinking water. If they all consumed exactly 10 ppb arsenic in their drinking water, the previously cited multiple epidemiological study analysis would predict an additional 2,000 cases of bladder cancer alone. This represents a clear underestimate of the overall impact, since it does not include lung or skin cancer, and explicitly underestimates the exposure. Those exposed to levels of arsenic above the current WHO standard should weigh the costs and benefits of arsenic remediation.
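The arithmetic behind that projection is simple: spreading roughly 2,000 excess cases over 80 million consumers corresponds to an individual excess lifetime risk of about

$$\frac{2000}{8\times 10^{7}} = 2.5\times 10^{-5},$$

or roughly 1 in 40,000, which is why the figure is explicitly an underestimate once lung and skin cancers and exposures above 10 ppb are included.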
Early (1973) evaluations of the processes for removing dissolved arsenic from drinking water demonstrated the efficacy of co-precipitation with either iron or aluminium oxides. In particular, iron as a coagulant was found to remove arsenic with an efficacy exceeding 90%. Several adsorptive media systems have been approved for use at point-of-service in a study funded by the United States Environmental Protection Agency (US EPA) and the National Science Foundation (NSF). A team of European and Indian scientists and engineers has set up six arsenic treatment plants in West Bengal based on an in-situ remediation method (SAR Technology). This technology does not use any chemicals and arsenic is left in an insoluble form (+5 state) in the subterranean zone by recharging aerated water into the aquifer and developing an oxidation zone that supports arsenic-oxidizing micro-organisms. This process does not produce any waste stream or sludge and is relatively cheap.
Another effective and inexpensive method to avoid arsenic contamination is to sink wells 500 feet or deeper to reach purer waters. A 2011 study funded by the US National Institute of Environmental Health Sciences' Superfund Research Program shows that deep sediments can remove arsenic and take it out of circulation. In this process, called adsorption, arsenic sticks to the surfaces of deep sediment particles and is naturally removed from the ground water.
Magnetic separations of arsenic at very low magnetic field gradients with high-surface-area and monodisperse magnetite (Fe3O4) nanocrystals have been demonstrated in point-of-use water purification. Using the high specific surface area of Fe3O4 nanocrystals, the mass of waste associated with arsenic removal from water has been dramatically reduced.
Epidemiological studies have suggested a correlation between chronic consumption of drinking water contaminated with arsenic and the incidence of all leading causes of mortality. The literature indicates that arsenic exposure is causative in the pathogenesis of diabetes.
Chaff-based filters have recently been shown to reduce the arsenic content of water to 3 μg/L. This may find applications in areas where the potable water is extracted from underground aquifers.
San Pedro de Atacama
For several centuries, the people of San Pedro de Atacama in Chile have been drinking water that is contaminated with arsenic, and some evidence suggests they have developed some immunity. Genetic studies indicate that certain populations in this region have undergone natural selection for gene variants that enhance arsenic metabolism and detoxification. This adaptation is considered one of the few documented cases of human evolution in response to chronic environmental arsenic exposure.
Hazard maps for contaminated groundwater
Around one-third of the world's population drinks water from groundwater resources. Of this, about 10 percent, approximately 300 million people, obtain water from groundwater resources that are contaminated with unhealthy levels of arsenic or fluoride. These trace elements derive mainly from minerals and ions in the ground.
Redox transformation of arsenic in natural waters
Arsenic is unique among the trace metalloids and oxyanion-forming trace metals (e.g. As, Se, Sb, Mo, V, Cr, U, Re). It is sensitive to mobilization at pH values typical of natural waters (pH 6.5–8.5) under both oxidizing and reducing conditions. Arsenic can occur in the environment in several oxidation states (−3, 0, +3 and +5), but in natural waters it is mostly found in inorganic forms as oxyanions of trivalent arsenite [As(III)] or pentavalent arsenate [As(V)]. Organic forms of arsenic are produced by biological activity, mostly in surface waters, but are rarely quantitatively important. Organic arsenic compounds may, however, occur where waters are significantly impacted by industrial pollution.
Arsenic may be solubilized by various processes. When pH is high, arsenic may be released from surface binding sites that lose their positive charge. When water level drops and sulfide minerals are exposed to air, arsenic trapped in sulfide minerals can be released into water. When organic carbon is present in water, bacteria are fed by directly reducing As(V) to As(III) or by reducing the element at the binding site, releasing inorganic arsenic.
The aquatic transformations of arsenic are affected by pH, reduction-oxidation potential, organic matter concentration and the concentrations and forms of other elements, especially iron and manganese. The main factors are pH and the redox potential. Generally, the main forms of arsenic under oxic conditions are H3AsO4, H2AsO4−, HAsO42−, and AsO43− at pH <2, 2–7, 7–11 and >11, respectively. Under reducing conditions, H3AsO3 is predominant at pH 2–9.
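A minimal sketch of that speciation rule for oxic waters, using only the pH boundaries quoted above (the function name is illustrative, and real speciation also depends on redox potential and ionic strength):

```python
def dominant_oxic_species(pH: float) -> str:
    """Dominant inorganic As(V) species in oxygenated water at a given pH."""
    if pH < 2:
        return "H3AsO4"
    if pH <= 7:
        return "H2AsO4-"
    if pH <= 11:
        return "HAsO4(2-)"
    return "AsO4(3-)"

print(dominant_oxic_species(8.1))  # seawater-like pH -> "HAsO4(2-)"
```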
Oxidation and reduction affects the migration of arsenic in subsurface environments. Arsenite is the most stable soluble form of arsenic in reducing environments and arsenate, which is less mobile than arsenite, is dominant in oxidizing environments at neutral pH. Therefore, arsenic may be more mobile under reducing conditions. The reducing environment is also rich in organic matter which may enhance the solubility of arsenic compounds. As a result, the adsorption of arsenic is reduced and dissolved arsenic accumulates in groundwater. That is why the arsenic content is higher in reducing environments than in oxidizing environments.
The presence of sulfur is another factor that affects the transformation of arsenic in natural water. Arsenic can precipitate when metal sulfides form. In this way, arsenic is removed from the water and its mobility decreases. When oxygen is present, bacteria oxidize reduced sulfur to generate energy, potentially releasing bound arsenic.
Redox reactions involving Fe also appear to be essential factors in the fate of arsenic in aquatic systems. The reduction of iron oxyhydroxides plays a key role in the release of arsenic to water. So arsenic can be enriched in water with elevated Fe concentrations. Under oxidizing conditions, arsenic can be mobilized from pyrite or iron oxides especially at elevated pH. Under reducing conditions, arsenic can be mobilized by reductive desorption or dissolution when associated with iron oxides. The reductive desorption occurs under two circumstances. One is when arsenate is reduced to arsenite which adsorbs to iron oxides less strongly. The other results from a change in the charge on the mineral surface which leads to the desorption of bound arsenic.
Some species of bacteria catalyze redox transformations of arsenic. Dissimilatory arsenate-respiring prokaryotes (DARP) speed up the reduction of As(V) to As(III). DARP use As(V) as the electron acceptor of anaerobic respiration and obtain energy to survive. Other organic and inorganic substances can be oxidized in this process. Chemoautotrophic arsenite oxidizers (CAO) and heterotrophic arsenite oxidizers (HAO) convert As(III) into As(V). CAO combine the oxidation of As(III) with the reduction of oxygen or nitrate. They use the energy obtained to fix CO2 into organic carbon. HAO cannot obtain energy from As(III) oxidation. This process may be an arsenic detoxification mechanism for the bacteria.
Equilibrium thermodynamic calculations predict that As(V) concentrations should be greater than As(III) concentrations in all but strongly reducing conditions, i.e. where sulfate reduction is occurring. However, abiotic redox reactions of arsenic are slow. Oxidation of As(III) by dissolved O2 is a particularly slow reaction. For example, Johnson and Pilson (1975) gave half-lives for the oxygenation of As(III) in seawater ranging from several months to a year. In other studies, As(V)/As(III) ratios were stable over periods of days or weeks during water sampling when no particular care was taken to prevent oxidation, again suggesting relatively slow oxidation rates. Cherry found from experimental studies that the As(V)/As(III) ratios were stable in anoxic solutions for up to 3 weeks but that gradual changes occurred over longer timescales. Sterile water samples have been observed to be less susceptible to speciation changes than non-sterile samples. Oremland found that the reduction of As(V) to As(III) in Mono Lake was rapidly catalyzed by bacteria with rate constants ranging from 0.02 to 0.3 day−1.
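For comparison, a first-order rate constant k converts to a half-life through t1/2 = ln 2 / k, so the microbial rate constants of 0.02–0.3 day−1 measured by Oremland correspond to half-lives of roughly

$$t_{1/2} = \frac{\ln 2}{k} \approx \frac{0.693}{0.3\ \mathrm{day^{-1}}} \approx 2.3\ \text{days} \quad\text{to}\quad \frac{0.693}{0.02\ \mathrm{day^{-1}}} \approx 35\ \text{days},$$

orders of magnitude faster than the months-to-a-year abiotic oxygenation half-lives cited above.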
Wood preservation in the US
As of 2002, US-based industries consumed 19,600 metric tons of arsenic. Ninety percent of this was used for treatment of wood with chromated copper arsenate (CCA). In 2007, 50% of the 5,280 metric tons of consumption was still used for this purpose. In the United States, the voluntary phasing-out of arsenic in production of consumer products and residential and general consumer construction products began on 31 December 2003, and alternative chemicals are now used, such as Alkaline Copper Quaternary, borates, copper azole, cyproconazole, and propiconazole.
Although discontinued, this application is also one of the most concerning to the general public. The vast majority of older pressure-treated wood was treated with CCA. CCA lumber is still in widespread use in many countries, and was heavily used during the latter half of the 20th century as a structural and outdoor building material. Although the use of CCA lumber was banned in many areas after studies showed that arsenic could leach out of the wood into the surrounding soil (from playground equipment, for instance), a risk is also presented by the burning of older CCA timber. The direct or indirect ingestion of wood ash from burnt CCA lumber has caused fatalities in animals and serious poisonings in humans; the lethal human dose is approximately 20 grams of ash. Scrap CCA lumber from construction and demolition sites may be inadvertently used in commercial and domestic fires. Protocols for safe disposal of CCA lumber are not consistent throughout the world. Widespread landfill disposal of such timber raises some concern, but other studies have shown no arsenic contamination in the groundwater.
Mapping of industrial releases in the US
One tool that maps the location (and other information) of arsenic releases in the United States is TOXMAP. TOXMAP is a Geographic Information System (GIS) from the Division of Specialized Information Services of the United States National Library of Medicine (NLM) funded by the US Federal Government. With marked-up maps of the United States, TOXMAP enables users to visually explore data from the United States Environmental Protection Agency's (EPA) Toxics Release Inventory and Superfund Basic Research Programs. TOXMAP's chemical and environmental health information is taken from NLM's Toxicology Data Network (TOXNET), PubMed, and from other authoritative sources.
Bioremediation
Physical, chemical, and biological methods have been used to remediate arsenic-contaminated water. Bioremediation is said to be cost-effective and environmentally friendly. Bioremediation of ground water contaminated with arsenic aims to convert arsenite, the form of arsenic more toxic to humans, to arsenate. Arsenate (+5 oxidation state) is the dominant form of arsenic in surface water, while arsenite (+3 oxidation state) is the dominant form in hypoxic to anoxic environments. Arsenite is more soluble and mobile than arsenate. Many species of bacteria can transform arsenite to arsenate in anoxic conditions by using arsenite as an electron donor. This is a useful method for ground water remediation. Another bioremediation strategy is to use plants that accumulate arsenic in their tissues via phytoremediation, although the disposal of contaminated plant material needs to be considered.
Bioremediation requires careful evaluation and design in accordance with existing conditions. Some sites may require the addition of an electron acceptor while others require microbe supplementation (bioaugmentation). Regardless of the method used, only constant monitoring can prevent future contamination.
Arsenic removal
Coagulation and flocculation are closely related processes commonly used to remove arsenate from water. Because arsenate ions carry a net negative charge, they settle slowly or not at all, owing to charge repulsion. In coagulation, a positively charged coagulant such as an iron or aluminum salt (commonly FeCl3, Fe2(SO4)3, or Al2(SO4)3) neutralizes the negatively charged arsenate, enabling it to settle. Flocculation follows, in which a flocculant bridges the smaller particles and allows the aggregate to precipitate out of the water. However, such methods may not be efficient for arsenite, as As(III) exists as uncharged arsenious acid, H3AsO3, at near-neutral pH.
The major drawbacks of coagulation and flocculation are the costly disposal of arsenate-concentrated sludge and possible secondary contamination of the environment. Moreover, coagulants such as iron may introduce ion contamination that exceeds safety levels.
Toxicity and precautions
Arsenic and many of its compounds are especially potent poisons (e.g. arsine). Small amounts of arsenic can be detected by pharmacopoeial methods, which include the reduction of arsenic compounds to arsine with zinc; the result can be confirmed with mercuric chloride paper.
Classification
Elemental arsenic and arsenic sulfate and trioxide compounds are classified as "toxic" and "dangerous for the environment" in the European Union under directive 67/548/EEC.
The International Agency for Research on Cancer (IARC) recognizes arsenic and inorganic arsenic compounds as group 1 carcinogens, and the EU lists arsenic trioxide, arsenic pentoxide, and arsenate salts as category 1 carcinogens.
Arsenic is known to cause arsenicosis when present in drinking water, "the most common species being arsenate [; As(V)] and arsenite [; As(III)]".
Legal limits, food, and drink
In the United States since 2006, the maximum concentration in drinking water allowed by the Environmental Protection Agency (EPA) is 10 ppb and the FDA set the same standard in 2005 for bottled water. The Department of Environmental Protection for New Jersey set a drinking water limit of 5 ppb in 2006. The IDLH (immediately dangerous to life and health) value for arsenic metal and inorganic arsenic compounds is 5 mg/m3. The Occupational Safety and Health Administration has set the permissible exposure limit (PEL) to a time-weighted average (TWA) of 0.01 mg/m3, and the National Institute for Occupational Safety and Health (NIOSH) has set the recommended exposure limit (REL) to a 15-minute constant exposure of 0.002 mg/m3. The PEL for organic arsenic compounds is a TWA of 0.5 mg/m3.
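As an illustration of how a time-weighted average limit such as the OSHA PEL is applied in practice (a generic calculation; the shift segments below are hypothetical):

```python
def twa(segments: list[tuple[float, float]], shift_hours: float = 8.0) -> float:
    """8-hour time-weighted average from (concentration mg/m3, duration h) pairs."""
    return sum(conc * hours for conc, hours in segments) / shift_hours

# Hypothetical shift: 2 h at 0.02 mg/m3 near a furnace, 6 h at 0.005 mg/m3
exposure = twa([(0.02, 2), (0.005, 6)])   # 0.00875 mg/m3
print(exposure <= 0.01)                   # True: within the 0.01 mg/m3 PEL
```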
In 2008, based on its ongoing testing of a wide variety of American foods for toxic chemicals, the U.S. Food and Drug Administration set the "level of concern" for inorganic arsenic in apple and pear juices at 23 ppb, based on non-carcinogenic effects, and began blocking importation of products in excess of this level; it also required recalls for non-conforming domestic products. In 2011, the national Dr. Oz television show broadcast a program highlighting tests performed by an independent lab hired by the producers. Though the methodology was disputed (it did not distinguish between organic and inorganic arsenic) the tests showed levels of arsenic up to 36 ppb. In response, the FDA tested the worst brand from the Dr. Oz show and found much lower levels. Ongoing testing found 95% of the apple juice samples were below the level of concern. Later testing by Consumer Reports showed inorganic arsenic at levels slightly above 10 ppb, and the organization urged parents to reduce consumption. In July 2013, on consideration of consumption by children, chronic exposure, and carcinogenic effect, the FDA established an "action level" of 10 ppb for apple juice, the same as the drinking water standard.
Concern about arsenic in rice in Bangladesh was raised in 2002, but at the time only Australia had a legal limit for food (one milligram per kilogram, or 1000 ppb). Concern was raised about people who were eating U.S. rice exceeding WHO standards for personal arsenic intake in 2005. In 2011, the People's Republic of China set a food standard of 150 ppb for arsenic.
In the United States in 2012, testing by separate groups of researchers at the Children's Environmental Health and Disease Prevention Research Center at Dartmouth College (early in the year, focusing on urinary levels in children) and Consumer Reports (in November) found levels of arsenic in rice that resulted in calls for the FDA to set limits. The FDA released some testing results in September 2012, and as of July 2013, is still collecting data in support of a new potential regulation. It has not recommended any changes in consumer behavior.
Consumer Reports recommended:
That the EPA and FDA eliminate arsenic-containing fertilizer, drugs, and pesticides in food production;
That the FDA establish a legal limit for food;
That industry change production practices to lower arsenic levels, especially in food for children; and
That consumers test home water supplies, eat a varied diet, and cook rice with excess water, then draining it off (reducing inorganic arsenic by about one third along with a slight reduction in vitamin content).
Evidence-based public health advocates also recommend that, given the lack of regulation or labeling for arsenic in the U.S., children should eat no more than 1.5 servings per week of rice and should not drink rice milk as part of their daily diet before age 5. They also offer recommendations for adults and infants on how to limit arsenic exposure from rice, drinking water, and fruit juice.
A 2014 World Health Organization advisory conference was scheduled to consider limits of 200–300 ppb for rice.
Reducing arsenic content in rice
In 2020, scientists assessed multiple preparation procedures of rice for their capacity to reduce arsenic content and preserve nutrients, recommending a procedure involving parboiling and water-absorption.
Occupational exposure limits
Ecotoxicity
Arsenic is bioaccumulative in many organisms, marine species in particular, but it does not appear to biomagnify significantly in food webs. In polluted areas, plant growth may be affected by root uptake of arsenate, which is a phosphate analog and therefore readily transported in plant tissues and cells. Uptake of the more toxic arsenite ion (found more particularly in reducing conditions) is likely in poorly-drained soils.
Toxicity in animals
Biological mechanism
Arsenic's toxicity comes from the affinity of arsenic(III) oxides for thiols. Thiols, in the form of cysteine residues and cofactors such as lipoic acid and coenzyme A, are situated at the active sites of many important enzymes.
Arsenic disrupts ATP production through several mechanisms. At the level of the citric acid cycle, arsenic inhibits lipoic acid, which is a cofactor for pyruvate dehydrogenase. By competing with phosphate, arsenate uncouples oxidative phosphorylation, thus inhibiting energy-linked reduction of NAD+, mitochondrial respiration and ATP synthesis. Hydrogen peroxide production is also increased, which, it is speculated, has potential to form reactive oxygen species and oxidative stress. These metabolic interferences lead to death from multi-system organ failure. The organ failure is presumed to be from necrotic cell death, not apoptosis, since energy reserves have been too depleted for apoptosis to occur.
Exposure risks and remediation
Occupational exposure and arsenic poisoning may occur in people working in industries involving the use of inorganic arsenic and its compounds, such as wood preservation, glass production, nonferrous metal alloys, and electronic semiconductor manufacturing. Inorganic arsenic is also found in coke oven emissions associated with the smelter industry.
The conversion between As(III) and As(V) is a large factor in arsenic environmental contamination. According to Croal, Gralnick, Malasarn and Newman, "[the] understanding [of] what stimulates As(III) oxidation and/or limits As(V) reduction is relevant for bioremediation of contaminated sites." The study of chemolithoautotrophic As(III) oxidizers and heterotrophic As(V) reducers can help the understanding of the oxidation and/or reduction of arsenic.
Treatment
Treatment of chronic arsenic poisoning is possible. British anti-lewisite (dimercaprol) is prescribed in doses of 5 mg/kg up to 300 mg every 4 hours for the first day, then every 6 hours for the second day, and finally every 8 hours for 8 additional days. However the USA's Agency for Toxic Substances and Disease Registry (ATSDR) states that the long-term effects of arsenic exposure cannot be predicted. Blood, urine, hair, and nails may be tested for arsenic; however, these tests cannot foresee possible health outcomes from the exposure. Long-term exposure and consequent excretion through urine has been linked to bladder and kidney cancer in addition to cancer of the liver, prostate, skin, lungs, and nasal cavity.
Footnotes
Bibliography
|
;Chemical elements;Chemical elements with rhombohedral structure;Endocrine disruptors;Fetotoxicants;Hepatotoxins;IARC Group 1 carcinogens;Metalloids;Minerals in space group 166;Native element minerals;Pnictogens;Semimetals;Suspected testicular toxicants;Teratogens;Trigonal minerals
|
https://en.wikipedia.org/wiki/Antimony
|
Antimony is a chemical element; it has symbol Sb (from Latin stibium) and atomic number 51. A lustrous grey metal or metalloid, it is found in nature mainly as the sulfide mineral stibnite (Sb2S3). Antimony compounds have been known since ancient times and were powdered for use as medicine and cosmetics, often known by the Arabic name kohl. The earliest known description of this metalloid in the West was written in 1540 by Vannoccio Biringuccio.
China is the largest producer of antimony and its compounds, with most production coming from the Xikuangshan Mine in Hunan. The industrial methods for refining antimony from stibnite are roasting followed by reduction with carbon, or direct reduction of stibnite with iron.
The most common applications for metallic antimony are in alloys with lead and tin, which have improved properties for solders, bullets, and plain bearings. It improves the rigidity of lead-alloy plates in lead–acid batteries. Antimony trioxide is a prominent additive for halogen-containing flame retardants. Antimony is used as a dopant in semiconductor devices.
Characteristics
Properties
Antimony is a member of group 15 of the periodic table, one of the elements called pnictogens, and has an electronegativity of 2.05. In accordance with periodic trends, it is more electronegative than tin or bismuth, and less electronegative than tellurium or arsenic. Antimony is stable in air at room temperature but, if heated, it reacts with oxygen to produce antimony trioxide, Sb2O3.
Antimony is a silvery, lustrous gray metalloid with a Mohs scale hardness of 3, which is too soft to mark hard objects. Coins of antimony were issued in China's Guizhou in 1931; durability was poor, and minting was soon discontinued because of its softness and toxicity. Antimony is resistant to attack by acids.
The only stable allotrope of antimony under standard conditions is metallic, brittle, silver-white, and shiny. It crystallises in a trigonal cell, isomorphic with bismuth and the gray allotrope of arsenic, and is formed when molten antimony is cooled slowly. Amorphous black antimony is formed upon rapid cooling of antimony vapor, and is only stable as a thin film (thickness in nanometres); thicker samples spontaneously transform into the metallic form. It oxidizes in air and may ignite spontaneously. At 100 °C, it gradually transforms into the stable form. The supposed yellow allotrope of antimony, generated only by oxidation of stibine (SbH3) at −90 °C, is also impure and not a true allotrope; above this temperature and in ambient light, it transforms into the more stable black allotrope. A rare explosive form of antimony can be formed from the electrolysis of antimony trichloride, but it always contains appreciable chlorine and is not really an antimony allotrope. When scratched with a sharp implement, an exothermic reaction occurs and white fumes are given off as metallic antimony forms; when rubbed with a pestle in a mortar, a strong detonation occurs.
Elemental antimony adopts a layered structure (space group R3̄m, No. 166) whose layers consist of fused, ruffled, six-membered rings. The nearest and next-nearest neighbors form an irregular octahedral complex, with the three atoms in each double layer slightly closer than the three atoms in the next. This relatively close packing leads to a high density of 6.697 g/cm3, but the weak bonding between the layers leads to the low hardness and brittleness of antimony.
Isotopes
Antimony has two stable isotopes: 121Sb with a natural abundance of 57.36% and 123Sb with a natural abundance of 42.64%. It also has 35 radioisotopes, of which the longest-lived is 125Sb with a half-life of 2.75 years. In addition, 29 metastable states have been characterized. The most stable of these is 120m1Sb with a half-life of 5.76 days. Isotopes that are lighter than the stable 123Sb tend to decay by β+ decay, and those that are heavier tend to decay by β− decay, with some exceptions. Antimony is the lightest element to have an isotope with an alpha decay branch, excluding beryllium-8 and other light nuclides with beta-delayed alpha emission.
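As a consistency check, these abundances reproduce the standard atomic weight of antimony. A minimal Python sketch, assuming tabulated isotopic masses of about 120.904 u and 122.904 u (values not given in this article):

# Abundance-weighted mean mass of the two stable antimony isotopes
m121, m123 = 120.904, 122.904   # isotopic masses in u (assumed tabulated values)
f121, f123 = 0.5736, 0.4264     # natural abundances from the text above
print(round(m121 * f121 + m123 * f123, 3))   # -> 121.757, close to the accepted 121.760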
Occurrence
The abundance of antimony in the Earth's crust is estimated at 0.2 parts per million, comparable to thallium at 0.5 ppm and silver at 0.07 ppm. It is the 63rd most abundant element in the crust. Even though this element is not abundant, it is found in more than 100 mineral species. Antimony is sometimes found natively (e.g. on Antimony Peak), but more frequently it is found in the sulfide stibnite () which is the predominant ore mineral.
Compounds
Antimony compounds are often classified according to their oxidation state: Sb(III) and Sb(V). The +5 oxidation state is more common.
Oxides and hydroxides
Antimony trioxide is formed when antimony is burnt in air. In the gas phase, the molecule of the compound is Sb4O6, but it polymerizes upon condensing. Antimony pentoxide (Sb2O5) can be formed only by oxidation with concentrated nitric acid. Antimony also forms a mixed-valence oxide, antimony tetroxide (Sb2O4), which features both Sb(III) and Sb(V). Unlike oxides of phosphorus and arsenic, these oxides are amphoteric, do not form well-defined oxoacids, and react with acids to form antimony salts.
Antimonous acid is unknown, but the conjugate base sodium antimonite forms upon fusing sodium oxide and antimony(III) oxide. Transition metal antimonites are also known. Antimonic acid exists only as the hydrate HSb(OH)6, forming salts that contain the antimonate anion Sb(OH)6−. When a solution containing this anion is dehydrated, the precipitate contains mixed oxides.
The most important antimony ore is stibnite (Sb2S3). Other sulfide minerals include pyrargyrite (Ag3SbS3), zinkenite, jamesonite, and boulangerite. Antimony pentasulfide is non-stoichiometric and features antimony in the +3 oxidation state and S–S bonds. Several thioantimonides are also known.
Halides
Antimony forms two series of halides: SbX3 and SbX5. The trihalides SbF3, SbCl3, SbBr3, and SbI3 are all molecular compounds having trigonal pyramidal molecular geometry. The trifluoride is prepared by the reaction of antimony trioxide with hydrofluoric acid:
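Sb2O3 + 6 HF → 2 SbF3 + 3 H2O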
It is Lewis acidic and readily accepts fluoride ions to form the complex anions [SbF4]− and [SbF5]2−. Molten antimony trifluoride is a weak electrical conductor. The trichloride is prepared by dissolving stibnite in hydrochloric acid:
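Sb2S3 + 6 HCl → 2 SbCl3 + 3 H2S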
Arsenic sulfides are not readily attacked by the hydrochloric acid, so this method offers a route to As-free Sb.
The pentahalides SbF5 and SbCl5 have trigonal bipyramidal molecular geometry in the gas phase, but in the liquid phase, SbF5 is polymeric, whereas SbCl5 is monomeric. Antimony pentafluoride is a powerful Lewis acid used to make the superacid fluoroantimonic acid (HSbF6).
Oxyhalides are more common for antimony than for arsenic and phosphorus. Antimony trioxide dissolves in concentrated acid to form oxoantimonyl compounds such as SbOCl and (SbO)2SO4.
Antimonides, hydrides, and organoantimony compounds
Compounds in this class generally are described as derivatives of SbH3. Antimony forms antimonides with metals, such as indium antimonide (InSb) and silver antimonide (Ag3Sb). The alkali metal and zinc antimonides, such as Na3Sb and Zn3Sb2, are more reactive. Treating these antimonides with acid produces the highly unstable gas stibine, SbH3:
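Zn3Sb2 + 6 HCl → 2 SbH3 + 3 ZnCl2 (shown here for the zinc antimonide)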
Stibine can also be produced by treating Sb3+ salts with hydride reagents such as sodium borohydride. Stibine decomposes spontaneously at room temperature. Because stibine has a positive heat of formation, it is thermodynamically unstable and thus antimony does not react with hydrogen directly.
Organoantimony compounds are typically prepared by alkylation of antimony halides with Grignard reagents. A large variety of compounds are known with both Sb(III) and Sb(V) centers, including mixed chloro-organic derivatives, anions, and cations. Examples include triphenylstibine (Sb(C6H5)3) and pentaphenylantimony (Sb(C6H5)5).
History
Antimony(III) sulfide, Sb2S3, was recognized in predynastic Egypt as an eye cosmetic (kohl) as early as about 3100 BC, when the cosmetic palette was invented.
An artifact, said to be part of a vase, made of antimony dating to about 3000 BC was found at Telloh, Chaldea (part of present-day Iraq), and a copper object plated with antimony dating between 2500 BC and 2200 BC has been found in Egypt. Austen, at a lecture by Herbert Gladstone in 1892, commented that "we only know of antimony at the present day as a highly brittle and crystalline metal, which could hardly be fashioned into a useful vase, and therefore this remarkable 'find' (artifact mentioned above) must represent the lost art of rendering antimony malleable."
The British archaeologist Roger Moorey was unconvinced the artifact was indeed a vase, mentioning that Selimkhanov, after his analysis of the Tello object (published in 1975), "attempted to relate the metal to Transcaucasian natural antimony" (i.e. native metal) and that "the antimony objects from Transcaucasia are all small personal ornaments." This weakens the evidence for a lost art "of rendering antimony malleable".
The Roman scholar Pliny the Elder described several ways of preparing antimony sulfide for medical purposes in his treatise Natural History, around 77 AD. Pliny the Elder also made a distinction between "male" and "female" forms of antimony; the male form is probably the sulfide, while the female form, which is superior, heavier, and less friable, has been suspected to be native metallic antimony.
The Greek naturalist Pedanius Dioscorides mentioned that antimony sulfide could be roasted by heating in a current of air. It is thought that this produced metallic antimony.
Antimony was frequently described in alchemical manuscripts, including the Summa Perfectionis of Pseudo-Geber, written around the 14th century. A description of a procedure for isolating antimony is later given in the 1540 book De la pirotechnia by Vannoccio Biringuccio, predating the more famous 1556 book by Agricola, De re metallica. In this context Agricola has been often incorrectly credited with the discovery of metallic antimony. The book Currus Triumphalis Antimonii (The Triumphal Chariot of Antimony), describing the preparation of metallic antimony, was published in Germany in 1604. It was purported to be written by a Benedictine monk, writing under the name Basilius Valentinus in the 15th century; if it were authentic, which it is not, it would predate Biringuccio.
The metal antimony was known to German chemist Andreas Libavius in 1615 who obtained it by adding iron to a molten mixture of antimony sulfide, salt and potassium tartrate. This procedure produced antimony with a crystalline or starred surface.
With the advent of challenges to phlogiston theory, it was recognized that antimony is an element forming sulfides, oxides, and other compounds, as do other metals.
The first discovery of naturally occurring pure antimony in the Earth's crust was described by the Swedish scientist and local mine district engineer Anton von Swab in 1783; the type-sample was collected from the Sala Silver Mine in the Bergslagen mining district of Sala, Västmanland, Sweden.
Etymology
The medieval Latin form, from which the modern languages and late Byzantine Greek take their names for antimony, is antimonium. The origin of that is uncertain, and all suggestions have some difficulty either of form or interpretation. The popular etymology, from ἀντίμοναχός anti-monachos or French antimoine, would mean "monk-killer", which is explained by the fact that many early alchemists were monks, and some antimony compounds were poisonous.
Another popular etymology is the hypothetical Greek word ἀντίμονος antimonos, "against aloneness", explained as "not found as metal", or "not found unalloyed". However, ancient Greek would more naturally express the pure negative as α- ("not"). Edmund Oscar von Lippmann conjectured a hypothetical Greek word ἀνθημόνιον anthemonion, which would mean "floret", and cites several examples of related Greek words (but not that one) which describe chemical or biological efflorescence.
The early uses of antimonium include the translations, in 1050–1100, by Constantine the African of Arabic medical treatises. Several authorities believe antimonium is a scribal corruption of some Arabic form; Meyerhof derives it from ithmid; other possibilities include athimar, the Arabic name of the metalloid, and a hypothetical as-stimmi, derived from or parallel to the Greek.
The standard chemical symbol for antimony (Sb) is credited to Jöns Jakob Berzelius, who derived the abbreviation from stibium.
The ancient words for antimony mostly have, as their chief meaning, kohl, the sulfide of antimony.
The Egyptians called antimony mśdmt or stm.
The Arabic word for the substance, as opposed to the cosmetic, can appear as ithmid, athmoud, othmod, or uthmod. Littré suggests the first form, which is the earliest, derives from stimmida, an accusative for stimmi. The Greek word στίμμι (stimmi) is used by Attic tragic poets of the 5th century BC, and is possibly a loan word from Arabic or from Egyptian stm.
Production
Process
The extraction of antimony from ores depends on the quality and composition of the ore. Most antimony is mined as the sulfide; lower-grade ores are concentrated by froth flotation, while higher-grade ores are heated to 500–600 °C, the temperature at which stibnite melts and separates from the gangue minerals. Antimony can be isolated from the crude antimony sulfide by reduction with scrap iron:
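Sb2S3 + 3 Fe → 2 Sb + 3 FeS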
The sulfide is converted to an oxide by roasting. The product is further purified by vaporizing the volatile antimony(III) oxide, which is recovered. This sublimate is often used directly for the main applications, impurities being arsenic and sulfide. Antimony is isolated from the oxide by a carbothermal reduction:
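2 Sb2O3 + 3 C → 4 Sb + 3 CO2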
The lower-grade ores are reduced in blast furnaces while the higher-grade ores are reduced in reverberatory furnaces.
Top producers and production volumes
In 2022, according to the US Geological Survey, China accounted for 54.5% of total antimony production, followed by Russia with 18.2% and Tajikistan with 15.5%.
Chinese production of antimony is expected to decline in the future as mines and smelters are closed down by the government as part of pollution control. An environmental protection law that took effect in January 2015, together with the revised "Emission Standards of Pollutants for Stannum, Antimony, and Mercury", has raised the hurdles for economic production.
Reported production of antimony in China has fallen and is unlikely to increase in the coming years, according to the Roskill report. No significant antimony deposits in China have been developed for about ten years, and the remaining economic reserves are being rapidly depleted.
Reserves
Supply risk
For antimony-importing regions, such as Europe and the U.S., antimony is considered to be a critical mineral for industrial manufacturing that is at risk of supply chain disruption. With global production coming mainly from China (74%), Tajikistan (8%), and Russia (4%), these sources are critical to supply.
European Union: Antimony is considered a critical raw material for defense, automotive, construction and textiles. The E.U. sources are 100% imported, coming mainly from Turkey (62%), Bolivia (20%) and Guatemala (7%).
United Kingdom: The British Geological Survey's 2015 risk list ranks antimony second highest (after rare earth elements) on the relative supply risk index.
United States: Antimony is a mineral commodity considered critical to economic and national security. In 2022, no antimony was mined in the U.S.
Applications
Approximately 48% of antimony is consumed in flame retardants, 33% in lead–acid batteries, and 8% in plastics.
Flame retardants
Antimony is mainly used as the trioxide for flame-proofing compounds, always in combination with halogenated flame retardants except in halogen-containing polymers. The flame retarding effect of antimony trioxide is produced by the formation of halogenated antimony compounds, which react with hydrogen atoms, and probably also with oxygen atoms and OH radicals, thus inhibiting fire. Markets for these flame-retardants include children's clothing, toys, aircraft, and automobile seat covers. They are also added to polyester resins in fiberglass composites for such items as light aircraft engine covers. The resin will burn in the presence of an externally generated flame, but will extinguish when the external flame is removed.
Alloys
Antimony forms a highly useful alloy with lead, increasing its hardness and mechanical strength. When casting, it increases the fluidity of the melt and reduces shrinkage during cooling. For most applications involving lead, varying amounts of antimony are used as alloying metal. In lead–acid batteries, this addition improves plate strength and charging characteristics. For sailboats, lead keels are used to provide righting moment, ranging from 600 lbs to over 200 tons for the largest sailing superyachts; to improve hardness and tensile strength of the lead keel, antimony is mixed with lead between 2% and 5% by volume. Antimony is used in antifriction alloys (such as Babbitt metal), in bullets and lead shot, electrical cable sheathing, type metal (for example, for linotype printing machines), solder (some "lead-free" solders contain 5% Sb), in pewter, and in hardening alloys with low tin content in the manufacturing of organ pipes.
Other applications
Three other applications consume nearly all the rest of the world's supply. One application is as a stabilizer and catalyst for the production of polyethylene terephthalate. Another is as a fining agent to remove microscopic bubbles in glass, mostly for TV screens; antimony ions interact with oxygen, suppressing its tendency to form bubbles. The third application is pigments.
In the 1990s antimony was increasingly being used in semiconductors as a dopant in n-type silicon wafers for diodes, infrared detectors, and Hall-effect devices. In the 1950s, the emitters and collectors of n-p-n alloy junction transistors were doped with tiny beads of a lead-antimony alloy. Indium antimonide (InSb) is used as a material for mid-infrared detectors.
Germanium–antimony–tellurium alloys such as Ge2Sb2Te5 are used for phase-change memory, a type of computer memory.
Biology and medicine have few uses for antimony. Treatments containing antimony, known as antimonials, are used as emetics. Antimony compounds are used as antiprotozoan drugs. Potassium antimonyl tartrate, or tartar emetic, was once used as an anti-schistosomal drug from 1919 on. It was subsequently replaced by praziquantel. Antimony and its compounds are used in several veterinary preparations, such as anthiomaline and lithium antimony thiomalate, as a skin conditioner in ruminants. Antimony has a nourishing or conditioning effect on keratinized tissues in animals.
Antimony-based drugs, such as meglumine antimoniate, are also considered the drugs of choice for treatment of leishmaniasis. Early treatments used antimony(III) species (trivalent antimonials), but in 1922 Upendranath Brahmachari invented a much safer antimony(V) drug, and since then so-called pentavalent antimonials have been the standard first-line treatment. However, Leishmania strains in Bihar and neighboring regions have developed resistance to antimony. Elemental antimony as an antimony pill was once used as a medicine. It could be reused by others after ingestion and elimination.
Antimony(III) sulfide is used in the heads of some safety matches. Antimony sulfides help to stabilize the friction coefficient in automotive brake pad materials. Antimony is used in bullets, bullet tracers, paint, glass art, and as an opacifier in enamel. Antimony-124 is used together with beryllium in neutron sources; the gamma rays emitted by antimony-124 initiate the photodisintegration of beryllium. The emitted neutrons have an average energy of 24 keV. Natural antimony is used in startup neutron sources.
The powder derived from crushed antimony sulfide (kohl) has been used for millennia as an eye cosmetic. Historically it was applied to the eyes with a metal rod and with one's spittle, and was thought by the ancients to aid in curing eye infections. The practice is still seen in Yemen and in other Muslim countries.
Precautions
Antimony and many of its compounds are toxic, and the effects of antimony poisoning are similar to arsenic poisoning. The toxicity of antimony is far lower than that of arsenic; this might be caused by the significant differences of uptake, metabolism and excretion between arsenic and antimony. The uptake of antimony(III) or antimony(V) in the gastrointestinal tract is at most 20%. Antimony(V) is not quantitatively reduced to antimony(III) in the cell (in fact antimony(III) is oxidised to antimony(V) instead).
Since methylation of antimony does not occur, the excretion of antimony(V) in urine is the main way of elimination. Like arsenic, the most serious effect of acute antimony poisoning is cardiotoxicity and the resulting myocarditis; however, it can also manifest as Adams–Stokes syndrome, which arsenic does not. A reported case of intoxication with antimony equivalent to 90 mg of antimony potassium tartrate dissolved from enamel showed only short-term effects. An intoxication with 6 g of antimony potassium tartrate was reported to result in death after three days.
Inhalation of antimony dust is harmful and in certain cases may be fatal; in small doses, antimony causes headaches, dizziness, and depression. Larger doses, such as from prolonged skin contact, may cause dermatitis or damage the kidneys and the liver, causing violent and frequent vomiting and leading to death in a few days.
Antimony is incompatible with strong oxidizing agents, strong acids, halogen acids, chlorine, or fluorine. It should be kept away from heat.
Antimony leaches from polyethylene terephthalate (PET) bottles into liquids. While levels observed for bottled water are below drinking water guidelines, fruit juice concentrates (for which no guidelines are established) produced in the UK were found to contain up to 44.7 μg/L of antimony, well above the EU limits for tap water of 5 μg/L. The guidelines are:
World Health Organization: 20 μg/L
Japan: 15 μg/L
United States Environmental Protection Agency, Health Canada and the Ontario Ministry of Environment: 6 μg/L
EU and German Federal Ministry of Environment: 5 μg/L
The tolerable daily intake (TDI) proposed by WHO is 6 μg antimony per kilogram of body weight. The immediately dangerous to life or health (IDLH) value for antimony is 50 mg/m3.
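For example, for an adult weighing 60 kg the TDI corresponds to an intake of about 360 μg of antimony per day.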
Toxicity
Certain compounds of antimony appear to be toxic, particularly antimony trioxide and antimony potassium tartrate. Effects may be similar to arsenic poisoning. Occupational exposure may cause respiratory irritation, pneumoconiosis, antimony spots on the skin, gastrointestinal symptoms, and cardiac arrhythmias. In addition, antimony trioxide is potentially carcinogenic to humans.
Adverse health effects have been observed in humans and animals following inhalation, oral, or dermal exposure to antimony and antimony compounds. Antimony toxicity typically occurs either due to occupational exposure, during therapy or from accidental ingestion. It is unclear if antimony can enter the body through the skin. The presence of low levels of antimony in saliva may also be associated with dental decay.
Notes
References
Cited sources
External links
Public Health Statement for Antimony
International Antimony Association vzw (i2a)
Chemistry in its element podcast (MP3) from the Royal Society of Chemistry's Chemistry World: Antimony
Antimony at The Periodic Table of Videos (University of Nottingham)
CDC – NIOSH Pocket Guide to Chemical Hazards – Antimony
Antimony Mineral data and specimen images
|
;Chemical elements;Chemical elements with rhombohedral structure;Metalloids;Minerals in space group 166;Native element minerals;Nuclear materials;Pnictogens;Trigonal minerals
|
https://en.wikipedia.org/wiki/Actinium
|
Actinium is a chemical element; it has symbol Ac and atomic number 89. It was discovered by Friedrich Oskar Giesel in 1902, who gave it the name emanium; the element got its name by being wrongly identified with a substance André-Louis Debierne found in 1899 and called actinium. The actinide series, a set of 15 elements between actinium and lawrencium in the periodic table, are named for actinium. Together with polonium, radium, and radon, actinium was one of the first non-primordial radioactive elements to be discovered.
A soft, silvery-white radioactive metal, actinium reacts rapidly with oxygen and moisture in air forming a white coating of actinium oxide that prevents further oxidation. As with most lanthanides and many actinides, actinium assumes oxidation state +3 in nearly all its chemical compounds. Actinium is found only in traces in uranium and thorium ores as the isotope 227Ac, which decays with a half-life of 21.772 years, predominantly emitting beta and sometimes alpha particles, and 228Ac, which is beta active with a half-life of 6.15 hours. One tonne of natural uranium in ore contains about 0.2 milligrams of actinium-227, and one tonne of thorium contains about 5 nanograms of actinium-228. The close similarity of physical and chemical properties of actinium and lanthanum makes separation of actinium from the ore impractical. Instead, the element is prepared, in milligram amounts, by the neutron irradiation of 226Ra in a nuclear reactor. Owing to its scarcity, high price and radioactivity, actinium has no significant industrial use. Its current applications include a neutron source and an agent for radiation therapy.
History
André-Louis Debierne, a French chemist, announced the discovery of a new element in 1899. He separated it from pitchblende residues left by Marie and Pierre Curie after they had extracted radium. In 1899, Debierne described the substance as similar to titanium and (in 1900) as similar to thorium. Friedrich Oskar Giesel found in 1902 a substance similar to lanthanum and called it "emanium" in 1904. After a comparison of the substances' half-lives determined by Debierne, Harriet Brooks in 1904, and Otto Hahn and Otto Sackur in 1905, Debierne's chosen name for the new element was retained because it had seniority, despite the contradicting chemical properties he claimed for the element at different times.
Articles published in the 1970s and later suggest that Debierne's results published in 1904 conflict with those reported in 1899 and 1900. Furthermore, the now-known chemistry of actinium precludes its presence as anything other than a minor constituent of Debierne's 1899 and 1900 results; in fact, the chemical properties he reported make it likely that he had, instead, accidentally identified protactinium, which would not be discovered for another fourteen years, only to have it disappear due to its hydrolysis and adsorption onto his laboratory equipment. This has led some authors to advocate that Giesel alone should be credited with the discovery. A less confrontational vision of scientific discovery is proposed by Adloff. He suggests that hindsight criticism of the early publications should be mitigated by the then nascent state of radiochemistry: highlighting the prudence of Debierne's claims in the original papers, he notes that nobody can contend that Debierne's substance did not contain actinium. Debierne, who is now considered by the vast majority of historians as the discoverer, lost interest in the element and left the topic. Giesel, on the other hand, can rightfully be credited with the first preparation of radiochemically pure actinium and with the identification of its atomic number 89.
The name actinium originates from the Ancient Greek aktis, aktinos (ακτίς, ακτίνος), meaning beam or ray. Its symbol Ac is also used in abbreviations of other compounds that have nothing to do with actinium, such as acetyl, acetate and sometimes acetaldehyde.
Properties
Actinium is a soft, silvery-white, radioactive, metallic element. Its estimated shear modulus is similar to that of lead. Owing to its strong radioactivity, actinium glows in the dark with a pale blue light, which originates from the surrounding air ionized by the emitted energetic particles. Actinium has similar chemical properties to lanthanum and other lanthanides, and therefore these elements are difficult to separate when extracting from uranium ores. Solvent extraction and ion chromatography are commonly used for the separation.
The first element of the actinides, actinium gave the set its name, much as lanthanum had done for the lanthanides. The actinides are much more diverse than the lanthanides, and the introduction of the actinide series, the most significant change to Dmitri Mendeleev's periodic table since the recognition of the lanthanides, was therefore not generally accepted until 1945, following Glenn T. Seaborg's research on the transuranium elements (although it had been proposed as early as 1892 by British chemist Henry Bassett).
Actinium reacts rapidly with oxygen and moisture in air forming a white coating of actinium oxide that impedes further oxidation. As with most lanthanides and actinides, actinium exists in the oxidation state +3, and the Ac3+ ions are colorless in solutions. The oxidation state +3 originates from the [Rn] 6d17s2 electronic configuration of actinium, with three valence electrons that are easily donated to give the stable closed-shell structure of the noble gas radon. Although the 5f orbitals are unoccupied in an actinium atom, they can serve as valence orbitals in actinium complexes, and hence actinium is generally considered the first 5f element by authors working on it. Ac3+ is the largest of all known tripositive ions and its first coordination sphere contains approximately 10.9 ± 0.5 water molecules.
Chemical compounds
Due to actinium's intense radioactivity, only a limited number of actinium compounds are known. These include: AcF3, AcCl3, AcBr3, AcOF, AcOCl, AcOBr, Ac2S3, Ac2O3, AcPO4 and Ac(NO3)3. They all contain actinium in the oxidation state +3. In particular, the lattice constants of the analogous lanthanum and actinium compounds differ by only a few percent.
In the crystallographic data for these compounds, a, b and c are lattice constants, No is the space group number and Z is the number of formula units per unit cell; densities were not measured directly but calculated from the lattice parameters.
Oxides
Actinium oxide (Ac2O3) can be obtained by heating the hydroxide at 500 °C or the oxalate at 1100 °C, in vacuum. Its crystal lattice is isotypic with the oxides of most trivalent rare-earth metals.
Halides
Actinium trifluoride can be produced either in solution or in solid reaction. The former reaction is carried out at room temperature, by adding hydrofluoric acid to a solution containing actinium ions. In the latter method, actinium metal is treated with hydrogen fluoride vapors at 700 °C in an all-platinum setup. Treating actinium trifluoride with ammonium hydroxide at 900–1000 °C yields the oxyfluoride AcOF. Whereas lanthanum oxyfluoride can be easily obtained by burning lanthanum trifluoride in air at 800 °C for an hour, similar treatment of actinium trifluoride yields no AcOF and only results in melting of the initial product.
AcF3 + 2 NH3 + H2O → AcOF + 2 NH4F
Actinium trichloride is obtained by reacting actinium hydroxide or oxalate with carbon tetrachloride vapors at temperatures above 960 °C. Similarly to the oxyfluoride, actinium oxychloride can be prepared by hydrolyzing actinium trichloride with ammonium hydroxide at 1000 °C. However, in contrast to the oxyfluoride, the oxychloride could not be synthesized by igniting a solution of actinium trichloride in hydrochloric acid with ammonia.
Reaction of aluminium bromide and actinium oxide yields actinium tribromide:
Ac2O3 + 2 AlBr3 → 2 AcBr3 + Al2O3
and treating it with ammonium hydroxide at 500 °C results in the oxybromide AcOBr.
Other compounds
Actinium hydride (AcH2) was obtained by reduction of actinium trichloride with potassium at 300 °C, and its structure was deduced by analogy with the corresponding LaH2 hydride. The source of hydrogen in the reaction was uncertain.
Mixing monosodium phosphate (NaH2PO4) with a solution of actinium in hydrochloric acid yields white-colored actinium phosphate hemihydrate (AcPO4·0.5H2O), and heating actinium oxalate with hydrogen sulfide vapors at 1400 °C for a few minutes results in a black actinium sulfide Ac2S3. It may possibly be produced by acting with a mixture of hydrogen sulfide and carbon disulfide on actinium oxide at 1000 °C.
Isotopes
Naturally occurring actinium is principally composed of two radioactive isotopes: 227Ac (from the radioactive family of 235U) and 228Ac (a granddaughter of 232Th). 227Ac decays mainly as a beta emitter with a very small energy, but in 1.38% of cases it emits an alpha particle, so it can readily be identified through alpha spectrometry. Thirty-three radioisotopes have been identified, the most stable being 227Ac with a half-life of 21.772 years, 225Ac with a half-life of 10.0 days and 226Ac with a half-life of 29.37 hours. All remaining radioactive isotopes have half-lives that are less than 10 hours and the majority of them have half-lives shorter than one minute. The shortest-lived known isotope of actinium is 217Ac (half-life of 69 nanoseconds), which decays through alpha decay. Actinium also has two known meta states. The most significant isotopes for chemistry are 225Ac, 227Ac, and 228Ac.
Purified 227Ac comes into equilibrium with its decay products after about half a year. It decays according to its 21.772-year half-life emitting mostly beta (98.62%) and some alpha particles (1.38%); the successive decay products are part of the actinium series. Owing to the low available amounts, the low energy of its beta particles (maximum 44.8 keV) and the low intensity of its alpha radiation, 227Ac is difficult to detect directly by its emission and it is therefore traced via its decay products. The isotopes of actinium range in atomic weight from 203 u (203Ac) to 236 u (236Ac).
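For a sense of scale, the specific activity implied by the 21.772-year half-life can be estimated from first principles. A minimal Python sketch; the only inputs beyond the half-life are Avogadro's number and an assumed molar mass of 227 g/mol:

import math

# Specific activity A = (ln 2 / half-life) * (atoms per gram) for 227Ac
half_life_s = 21.772 * 365.25 * 24 * 3600    # half-life in seconds
atoms_per_gram = 6.02214e23 / 227            # Avogadro's number / molar mass
activity = math.log(2) / half_life_s * atoms_per_gram
print(f"{activity:.2e} Bq/g = {activity / 3.7e10:.0f} Ci/g")   # ~2.7e12 Bq/g, ~72 Ci/g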
Occurrence and synthesis
Actinium is found only in traces in uranium ores – one tonne of uranium in ore contains about 0.2 milligrams of 227Ac – and in thorium ores, which contain about 5 nanograms of 228Ac per one tonne of thorium. The actinium isotope 227Ac is a transient member of the uranium-actinium series decay chain, which begins with the parent isotope 235U (or 239Pu) and ends with the stable lead isotope 207Pb. The isotope 228Ac is a transient member of the thorium series decay chain, which begins with the parent isotope 232Th and ends with the stable lead isotope 208Pb. Another actinium isotope (225Ac) is transiently present in the neptunium series decay chain, beginning with 237Np (or 233U) and ending with thallium (205Tl) and near-stable bismuth (209Bi); even though all primordial 237Np has decayed away, it is continuously produced by neutron knock-out reactions on natural 238U.
The low natural concentration, and the close similarity of physical and chemical properties to those of lanthanum and other lanthanides, which are always abundant in actinium-bearing ores, render separation of actinium from the ore impractical. The most concentrated actinium sample prepared from raw material consisted of 7 micrograms of 227Ac in less than 0.1 milligrams of La2O3, and complete separation was never achieved. Instead, actinium is prepared, in milligram amounts, by the neutron irradiation of 226Ra in a nuclear reactor.
^{226}_{88}Ra + ^{1}_{0}n -> ^{227}_{88}Ra ->[\beta^-][42.2 \ \ce{min}] ^{227}_{89}Ac
The reaction yield is about 2% of the radium weight. 227Ac can further capture neutrons resulting in small amounts of 228Ac. After the synthesis, actinium is separated from radium and from the products of decay and other nuclear reactions, such as thorium, polonium, lead and bismuth. The extraction can be performed with thenoyltrifluoroacetone-benzene solution from an aqueous solution of the radiation products, and the selectivity to a certain element is achieved by adjusting the pH (to about 6.0 for actinium). An alternative procedure is anion exchange with an appropriate resin in nitric acid, which can result in a separation factor of 1,000,000 for radium and actinium vs. thorium in a two-stage process. Actinium can then be separated from radium, with a ratio of about 100, using a low cross-linking cation exchange resin and nitric acid as eluant.
225Ac was first produced artificially at the Institute for Transuranium Elements (ITU) in Germany using a cyclotron and at St George Hospital in Sydney using a linac in 2000. This rare isotope has potential applications in radiation therapy and is most efficiently produced by bombarding a radium-226 target with 20–30 MeV deuterium ions. This reaction also yields 226Ac, which, however, decays with a half-life of 29 hours and thus does not contaminate 225Ac.
Actinium metal has been prepared by the reduction of actinium fluoride with lithium vapor in vacuum at a temperature between 1100 and 1300 °C. Higher temperatures resulted in evaporation of the product and lower ones led to an incomplete transformation. Lithium was chosen among other alkali metals because its fluoride is most volatile.
Applications
Owing to its scarcity, high price and radioactivity, 227Ac currently has no significant industrial use, but 225Ac is currently being studied for use in cancer treatments such as targeted alpha therapies.
227Ac is highly radioactive and was therefore studied for use as an active element of radioisotope thermoelectric generators, for example in spacecraft. The oxide of 227Ac pressed with beryllium is also an efficient neutron source with the activity exceeding that of the standard americium-beryllium and radium-beryllium pairs. In all those applications, 227Ac (a beta source) is merely a progenitor which generates alpha-emitting isotopes upon its decay. Beryllium captures alpha particles and emits neutrons owing to its large cross-section for the (α,n) nuclear reaction:
^{9}_{4}Be + ^{4}_{2}He -> ^{12}_{6}C + ^{1}_{0}n + \gamma
The 227AcBe neutron sources can be applied in a neutron probe – a standard device for measuring the quantity of water present in soil, as well as moisture/density for quality control in highway construction. Such probes are also used in well logging applications, in neutron radiography, tomography and other radiochemical investigations.
225Ac is applied in medicine to produce 213Bi in a reusable generator or can be used alone as an agent for radiation therapy, in particular targeted alpha therapy (TAT). This isotope has a half-life of 10 days, making it much more suitable for radiation therapy than 213Bi (half-life 46 minutes). Additionally, 225Ac decays to nontoxic 209Bi rather than toxic lead, which is the final product in the decay chains of several other candidate isotopes, namely 227Th, 228Th, and 230U. Not only 225Ac itself, but also its daughters, emit alpha particles which kill cancer cells in the body. The major difficulty with application of 225Ac was that intravenous injection of simple actinium complexes resulted in their accumulation in the bones and liver for a period of tens of years. As a result, after the cancer cells were quickly killed by alpha particles from 225Ac, the radiation from the actinium and its daughters might induce new mutations. To solve this problem, 225Ac was bound to a chelating agent, such as citrate, ethylenediaminetetraacetic acid (EDTA) or diethylene triamine pentaacetic acid (DTPA). This reduced actinium accumulation in the bones, but the excretion from the body remained slow. Much better results were obtained with such chelating agents as HEHA or DOTA (1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid) coupled to trastuzumab, a monoclonal antibody that interferes with the HER2/neu receptor. The latter delivery combination was tested on mice and proved to be effective against leukemia, lymphoma, breast, ovarian, neuroblastoma and prostate cancers.
The medium half-life of 227Ac (21.77 years) makes it a very convenient radioactive isotope in modeling the slow vertical mixing of oceanic waters. The associated processes cannot be studied with the required accuracy by direct measurements of current velocities (of the order 50 meters per year). However, evaluation of the concentration depth-profiles for different isotopes allows estimating the mixing rates. The physics behind this method is as follows: oceanic waters contain homogeneously dispersed 235U. Its decay product, 231Pa, gradually precipitates to the bottom, so that its concentration first increases with depth and then stays nearly constant. 231Pa decays to 227Ac; however, the concentration of the latter isotope does not follow the 231Pa depth profile, but instead increases toward the sea bottom. This occurs because of the mixing processes which raise some additional 227Ac from the sea bottom. Thus analysis of both 231Pa and 227Ac depth profiles allows researchers to model the mixing behavior.
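A minimal sketch of this modeling idea in Python, assuming a one-dimensional steady-state balance between vertical eddy diffusion and radioactive decay of the bottom-sourced excess 227Ac; the eddy diffusivity K is a free parameter, not a value from this article:

import numpy as np

# Steady state: K * d2C/dz2 = lambda * C for the excess 227Ac concentration C,
# with z the height above the sea floor. Solution: C(z) = C0 * exp(-z / L),
# where L = sqrt(K / lambda) is the e-folding height that depth profiles constrain.
lam = np.log(2) / 21.772        # 227Ac decay constant, 1/yr
K = 3.0e3                       # assumed vertical eddy diffusivity, m^2/yr
L = np.sqrt(K / lam)
z = np.array([0.0, 200.0, 500.0, 1000.0])   # heights above the bottom, m
print(f"e-folding height ~ {L:.0f} m", np.exp(-z / L))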
There are theoretical predictions that actinium hydrides AcHx (at very high pressure) are candidates for a near room-temperature superconductor, as their predicted Tc is significantly higher than that of H3S, possibly near 250 K.
Precautions
227Ac is highly radioactive and experiments with it are carried out in a specially designed laboratory equipped with a tight glove box. When actinium trichloride is administered intravenously to rats, about 33% of actinium is deposited into the bones and 50% into the liver. Its toxicity is comparable to, but slightly lower than, that of americium and plutonium. For trace quantities, fume hoods with good aeration suffice; for gram amounts, hot cells with shielding from the intense gamma radiation emitted by 227Ac are necessary.
See also
Actinium series
Notes
References
Bibliography
External links
Actinium at The Periodic Table of Videos (University of Nottingham)
NLM Hazardous Substances Databank – Actinium, Radioactive
Actinium in
|
;Actinides;Chemical elements;Chemical elements with face-centered cubic structure
|
https://en.wikipedia.org/wiki/Americium
|
Americium is a synthetic chemical element; it has symbol Am and atomic number 95. It is radioactive and a transuranic member of the actinide series in the periodic table, located under the lanthanide element europium and was thus named after the Americas by analogy.
Americium was first produced in 1944 by the group of Glenn T. Seaborg from Berkeley, California, at the Metallurgical Laboratory of the University of Chicago, as part of the Manhattan Project. Although it is the third element in the transuranic series, it was discovered fourth, after the heavier curium. The discovery was kept secret and only released to the public in November 1945. Most americium is produced by uranium or plutonium being bombarded with neutrons in nuclear reactors – one tonne of spent nuclear fuel contains about 100 grams of americium. It is widely used in commercial ionization chamber smoke detectors, as well as in neutron sources and industrial gauges. Several unusual applications, such as nuclear batteries or fuel for space ships with nuclear propulsion, have been proposed for the isotope 242mAm, but they are as yet hindered by the scarcity and high price of this nuclear isomer.
Americium is a relatively soft radioactive metal with a silvery appearance. Its most common isotopes are 241Am and 243Am. In chemical compounds, americium usually assumes the oxidation state +3, especially in solutions. Several other oxidation states are known, ranging from +2 to +7, and can be identified by their characteristic optical absorption spectra. The crystal lattices of solid americium and its compounds contain small intrinsic radiogenic defects, due to metamictization induced by self-irradiation with alpha particles, which accumulates with time; this can cause a drift of some material properties over time, more noticeable in older samples.
History
Although americium was likely produced in previous nuclear experiments, it was first intentionally synthesized, isolated and identified in late autumn 1944, at the University of California, Berkeley, by Glenn T. Seaborg, Leon O. Morgan, Ralph A. James, and Albert Ghiorso. They used a 60-inch cyclotron at the University of California, Berkeley. The element was chemically identified at the Metallurgical Laboratory (now Argonne National Laboratory) of the University of Chicago. Following the lighter neptunium, plutonium, and heavier curium, americium was the fourth transuranium element to be discovered. At the time, the periodic table had been restructured by Seaborg to its present layout, containing the actinide row below the lanthanide one. This led to americium being located right below its twin lanthanide element europium; it was thus by analogy named after the Americas: "The name americium (after the Americas) and the symbol Am are suggested for the element on the basis of its position as the sixth member of the actinide rare-earth series, analogous to europium, Eu, of the lanthanide series."
The new element was isolated from its oxides in a complex, multi-step process. First, plutonium-239 nitrate (239PuNO3) solution was coated on a platinum foil of about 0.5 cm2 area, the solution was evaporated and the residue was converted into plutonium dioxide (PuO2) by calcining. After cyclotron irradiation, the coating was dissolved with nitric acid, and then precipitated as the hydroxide using concentrated aqueous ammonia solution. The residue was dissolved in perchloric acid. Further separation was carried out by ion exchange, yielding a certain isotope of curium. The separation of curium and americium was so painstaking that the Berkeley group initially called those elements pandemonium (from Greek for all demons or hell) and delirium (from Latin for madness).
Initial experiments yielded four americium isotopes: 241Am, 242Am, 239Am and 238Am. Americium-241 was directly obtained from plutonium upon absorption of two neutrons. It decays by emission of an α-particle to 237Np; the half-life of this decay was at first determined imprecisely and later corrected to 432.2 years.
The second isotope 242Am was produced upon neutron bombardment of the already-created 241Am. Upon rapid β-decay, 242Am converts into the isotope of curium 242Cm (which had been discovered previously). The half-life of this decay was initially determined at 17 hours, which was close to the presently accepted value of 16.02 h.
The discovery of americium and curium in 1944 was closely related to the Manhattan Project; the results were confidential and declassified only in 1945. Seaborg leaked the synthesis of the elements 95 and 96 on the U.S. radio show for children Quiz Kids five days before the official presentation at an American Chemical Society meeting on 11 November 1945, when one of the listeners asked whether any new transuranium element besides plutonium and neptunium had been discovered during the war. After the discovery of americium isotopes 241Am and 242Am, their production and compounds were patented listing only Seaborg as the inventor. The initial americium samples weighed a few micrograms; they were barely visible and were identified by their radioactivity. The first substantial amounts of metallic americium weighing 40–200 micrograms were not prepared until 1951 by reduction of americium(III) fluoride with barium metal in high vacuum at 1100 °C.
Occurrence
The longest-lived and most common isotopes of americium, 241Am and 243Am, have half-lives of 432.2 and 7,370 years, respectively. Therefore, any primordial americium (americium that was present on Earth during its formation) should have decayed by now. Trace amounts of americium probably occur naturally in uranium minerals as a result of neutron capture and beta decay (238U → 239Pu → 240Pu → 241Am), though the quantities would be tiny and this has not been confirmed. Extraterrestrial long-lived 247Cm is probably also deposited on Earth and has 243Am as one of its intermediate decay products, but again this has not been confirmed.
Existing americium is concentrated in the areas used for the atmospheric nuclear weapons tests conducted between 1945 and 1980, as well as at the sites of nuclear incidents, such as the Chernobyl disaster. For example, the analysis of the debris at the testing site of the first U.S. hydrogen bomb, Ivy Mike, (1 November 1952, Enewetak Atoll), revealed high concentrations of various actinides including americium; but due to military secrecy, this result was not published until later, in 1956. Trinitite, the glassy residue left on the desert floor near Alamogordo, New Mexico, after the plutonium-based Trinity nuclear bomb test on 16 July 1945, contains traces of americium-241. Elevated levels of americium were also detected at the crash site of a US Boeing B-52 bomber aircraft, which carried four hydrogen bombs, in 1968 in Greenland.
In other regions, the average radioactivity of surface soil due to residual americium is only about 0.01 picocuries per gram (0.37 mBq/g). Atmospheric americium compounds are poorly soluble in common solvents and mostly adhere to soil particles. Soil analysis revealed about 1,900 times higher concentration of americium inside sandy soil particles than in the water present in the soil pores; an even higher ratio was measured in loam soils.
Americium is produced mostly artificially in small quantities, for research purposes. A tonne of spent nuclear fuel contains about 100 grams of various americium isotopes, mostly 241Am and 243Am. Their prolonged radioactivity is undesirable for the disposal, and therefore americium, together with other long-lived actinides, must be neutralized. The associated procedure may involve several steps, where americium is first separated and then converted by neutron bombardment in special reactors to short-lived nuclides. This procedure is well known as nuclear transmutation, but it is still being developed for americium. The transuranic elements from americium to fermium occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so.
Americium is also one of the elements that have tentatively been detected in Przybylski's Star.
Synthesis and extraction
Isotope nucleosynthesis
Americium has been produced in small quantities in nuclear reactors for decades, and kilograms of its 241Am and 243Am isotopes have been accumulated by now. Nevertheless, since it was first offered for sale in 1962, the price of 241Am has remained almost unchanged, owing to the very complex separation procedure. The heavier isotope 243Am is produced in much smaller amounts; it is thus more difficult to separate and correspondingly more expensive.
Americium is not synthesized directly from uranium – the most common reactor material – but from the plutonium isotope 239Pu. The latter needs to be produced first, according to the following nuclear process:
^{238}_{92}U ->[\ce{(n,\gamma)}] ^{239}_{92}U ->[\beta^-][23.5 \ \ce{min}] ^{239}_{93}Np ->[\beta^-][2.3565 \ \ce{d}] ^{239}_{94}Pu
The capture of two neutrons by 239Pu (a so-called (n,γ) reaction), followed by a β-decay, results in 241Am:
^{239}_{94}Pu ->[\ce{2(n,\gamma)}] ^{241}_{94}Pu ->[\beta^-][14.35 \ \ce{yr}] ^{241}_{95}Am
The plutonium present in spent nuclear fuel contains about 12% of 241Pu. Because it beta-decays to 241Am, 241Pu can be extracted and may be used to generate further 241Am. However, this process is rather slow: half of the original amount of 241Pu decays to 241Am after about 15 years, and the 241Am amount reaches a maximum after 70 years.
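The quoted timescales follow from the two-member Bateman equations. A minimal Python sketch, using only the half-lives given in this article (14.35 years for 241Pu, 432.2 years for 241Am):

import numpy as np

# In-growth of 241Am from the beta decay of 241Pu (two-member Bateman solution)
l_pu = np.log(2) / 14.35    # decay constant of 241Pu, 1/yr
l_am = np.log(2) / 432.2    # decay constant of 241Am, 1/yr

def am241(t, pu0=1.0):
    # 241Am atoms at time t (years) from an initial stock pu0 of 241Pu atoms
    return pu0 * l_pu / (l_am - l_pu) * (np.exp(-l_pu * t) - np.exp(-l_am * t))

t = np.linspace(0.0, 200.0, 2001)
print(f"241Am peaks after ~{t[np.argmax(am241(t))]:.0f} years")   # ~73 yr, cf. "about 70 years"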
The obtained 241Am can be used for generating heavier americium isotopes by further neutron capture inside a nuclear reactor. In a light water reactor (LWR), 79% of 241Am converts to 242Am and 10% to its nuclear isomer 242mAm.
Americium-242 has a half-life of only 16 hours, which makes its further conversion to 243Am extremely inefficient. The latter isotope is produced instead in a process where 239Pu captures four neutrons under high neutron flux:
^{239}_{94}Pu ->[\ce{4(n,\gamma)}] \ ^{243}_{94}Pu ->[\beta^-][4.956 \ \ce{h}] ^{243}_{95}Am
Metal generation
Most synthesis routines yield a mixture of different actinide isotopes in oxide forms, from which isotopes of americium can be separated. In a typical procedure, the spent reactor fuel (e.g. MOX fuel) is dissolved in nitric acid, and the bulk of uranium and plutonium is removed using a PUREX-type extraction (Plutonium–URanium EXtraction) with tributyl phosphate in a hydrocarbon. The lanthanides and remaining actinides are then separated from the aqueous residue (raffinate) by a diamide-based extraction, to give, after stripping, a mixture of trivalent actinides and lanthanides. Americium compounds are then selectively extracted using multi-step chromatographic and centrifugation techniques with an appropriate reagent. A large amount of work has been done on the solvent extraction of americium. For example, a 2003 EU-funded project codenamed "EUROPART" studied triazines and other compounds as potential extraction agents. A bis-triazinyl bipyridine complex was proposed in 2009 as such a reagent is highly selective to americium (and curium). Separation of americium from the highly similar curium can be achieved by treating a slurry of their hydroxides in aqueous sodium bicarbonate with ozone, at elevated temperatures. Both Am and Cm are mostly present in solutions in the +3 valence state; whereas curium remains unchanged, americium oxidizes to soluble Am(IV) complexes which can be washed away.
Metallic americium is obtained by reduction from its compounds. Americium(III) fluoride was first used for this purpose. The reaction was conducted using elemental barium as reducing agent in a water- and oxygen-free environment inside an apparatus made of tantalum and tungsten.
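The overall reaction is:
2 AmF3 + 3 Ba → 3 BaF2 + 2 Am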
An alternative is the reduction of americium dioxide by metallic lanthanum or thorium:
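3 AmO2 + 4 La → 2 La2O3 + 3 Am (shown here for lanthanum; thorium reacts analogously)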
Physical properties
In the periodic table, americium is located to the right of plutonium, to the left of curium, and below the lanthanide europium, with which it shares many physical and chemical properties. Americium is a highly radioactive element. When freshly prepared, it has a silvery-white metallic lustre, but then slowly tarnishes in air. With a density of 12 g/cm3, americium is less dense than both curium (13.52 g/cm3) and plutonium (19.8 g/cm3), but has a higher density than europium (5.264 g/cm3), mostly because of its higher atomic mass. Americium is relatively soft and easily deformable and has a significantly lower bulk modulus than the actinides before it: Th, Pa, U, Np and Pu. Its melting point of 1173 °C is significantly higher than that of plutonium (639 °C) and europium (826 °C), but lower than for curium (1340 °C).
At ambient conditions, americium is present in its most stable α form which has a hexagonal crystal symmetry, and a space group P63/mmc with cell parameters a = 346.8 pm and c = 1124 pm, and four atoms per unit cell. The crystal consists of a double-hexagonal close packing with the layer sequence ABAC and so is isotypic with α-lanthanum and several actinides such as α-curium. The crystal structure of americium changes with pressure and temperature. When compressed at room temperature to 5 GPa, α-Am transforms to the β modification, which has a face-centered cubic (fcc) symmetry, space group Fm3̄m and lattice constant a = 489 pm. This fcc structure is equivalent to the closest packing with the sequence ABC. Upon further compression to 23 GPa, americium transforms to an orthorhombic γ-Am structure similar to that of α-uranium. There are no further transitions observed up to 52 GPa, except for an appearance of a monoclinic phase at pressures between 10 and 15 GPa. There is no consistency on the status of this phase in the literature, which also sometimes lists the α, β and γ phases as I, II and III. The β-γ transition is accompanied by a 6% decrease in the crystal volume; although theory also predicts a significant volume change for the α-β transition, it is not observed experimentally. The pressure of the α-β transition decreases with increasing temperature, and when α-americium is heated at ambient pressure, at 770 °C it changes into an fcc phase which is different from β-Am, and at 1075 °C it converts to a body-centered cubic structure. The pressure-temperature phase diagram of americium is thus rather similar to those of lanthanum, praseodymium and neodymium.
As with many other actinides, self-damage of the crystal structure due to alpha-particle irradiation is intrinsic to americium. It is especially noticeable at low temperatures, where the mobility of the produced structure defects is relatively low, by broadening of X-ray diffraction peaks. This effect makes the temperature dependence of some of americium's properties, such as electrical resistivity, somewhat uncertain. For americium-241, for example, the resistivity at 4.2 K increases with time from about 2 μOhm·cm to 10 μOhm·cm after 40 hours, and saturates at about 16 μOhm·cm after 140 hours. This effect is less pronounced at room temperature, due to annihilation of radiation defects; heating a sample that was kept for hours at low temperatures to room temperature also restores its resistivity. In fresh samples, the resistivity gradually increases with temperature from about 2 μOhm·cm at liquid helium to 69 μOhm·cm at room temperature; this behavior is similar to that of neptunium, uranium, thorium and protactinium, but is different from plutonium and curium, which show a rapid rise up to 60 K followed by saturation. The room temperature value for americium is lower than that of neptunium, plutonium and curium, but higher than for uranium, thorium and protactinium.
Americium is paramagnetic in a wide temperature range, from that of liquid helium to room temperature and above. This behavior is markedly different from that of its neighbor curium, which exhibits an antiferromagnetic transition at 52 K. The thermal expansion coefficient of americium is slightly anisotropic and amounts to (7.5 ± 0.2)×10−6/°C along the shorter a axis and (6.2 ± 0.4)×10−6/°C for the longer c hexagonal axis. The enthalpy of dissolution of americium metal in hydrochloric acid at standard conditions is −620.6 ± 1.3 kJ/mol, from which the standard enthalpy change of formation (ΔfH°) of the aqueous Am3+ ion is −621.2 ± 2.0 kJ/mol. The standard potential Am3+/Am0 is −2.08 ± 0.01 V.
Chemical properties
Americium metal readily reacts with oxygen and dissolves in aqueous acids. The most stable oxidation state for americium is +3. The chemistry of americium(III) has many similarities to the chemistry of lanthanide(III) compounds. For example, trivalent americium forms insoluble fluoride, oxalate, iodate, hydroxide, phosphate and other salts. Compounds of americium in oxidation states +2, +4, +5, +6 and +7 have also been studied. This is the widest range that has been observed with actinide elements. The color of americium compounds in aqueous solution is as follows: Am3+ (yellow-reddish), Am4+ (yellow-reddish), AmO2+ (yellow), AmO22+ (brown) and AmO65− (dark green). The absorption spectra have sharp peaks, due to f-f transitions, in the visible and near-infrared regions. Typically, Am(III) has absorption maxima at ca. 504 and 811 nm, Am(V) at ca. 514 and 715 nm, and Am(VI) at ca. 666 and 992 nm.
Americium compounds with oxidation state +4 and higher are strong oxidizing agents, comparable in strength to the permanganate ion (MnO4−) in acidic solutions. Whereas the Am4+ ions are unstable in solutions and readily convert to Am3+, compounds such as americium dioxide (AmO2) and americium(IV) fluoride (AmF4) are stable in the solid state.
The pentavalent oxidation state of americium was first observed in 1951. In acidic aqueous solution the AmO2+ ion is unstable with respect to disproportionation. The reaction
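3 [AmO2]+ + 4 H+ → 2 [AmO2]2+ + Am3+ + 2 H2O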
is typical. The chemistry of Am(V) and Am(VI) is comparable to the chemistry of uranium in those oxidation states. In particular, americium(V) and americium(VI) form oxo-salts comparable to the uranates, and the AmO22+ ion is comparable to the uranyl ion, UO22+. Such compounds can be prepared by oxidation of Am(III) in dilute nitric acid with ammonium persulfate. Other oxidising agents that have been used include silver(I) oxide, ozone and sodium persulfate.
Chemical compounds
Oxygen compounds
Three americium oxides are known, with the oxidation states +2 (AmO), +3 (Am2O3) and +4 (AmO2). Americium(II) oxide was prepared in minute amounts and has not been characterized in detail. Americium(III) oxide is a red-brown solid with a melting point of 2205 °C. Americium(IV) oxide is the main form of solid americium which is used in nearly all its applications. Like most other actinide dioxides, it is a black solid with a cubic (fluorite) crystal structure.
The oxalate of americium(III), vacuum-dried at room temperature, has the chemical formula Am2(C2O4)3·7H2O. Upon heating in vacuum, it loses water at 240 °C and starts decomposing into AmO2 at 300 °C; the decomposition is complete at about 470 °C. The initial oxalate dissolves in nitric acid with a maximum solubility of 0.25 g/L.
Halides
Halides of americium are known for the oxidation states +2, +3 and +4, where the +3 is most stable, especially in solutions.
Reduction of Am(III) compounds with sodium amalgam yields Am(II) salts – the black halides AmCl2, AmBr2 and AmI2. They are very sensitive to oxygen and oxidize in water, releasing hydrogen and converting back to the Am(III) state. AmCl2 crystallizes in an orthorhombic lattice and AmBr2 in a tetragonal one. The dihalides can also be prepared by reacting metallic americium with the appropriate mercury halide HgX2, where X = Cl, Br or I:
Am + HgX2 ->[400 - 500\ ^\circ C] AmX2 + Hg
Americium(III) fluoride (AmF3) is poorly soluble and precipitates upon reaction of Am3+ and fluoride ions in weak acidic solutions:
Am^3+ + 3F^- -> AmF3(v)
The tetravalent americium(IV) fluoride (AmF4) is obtained by reacting solid americium(III) fluoride with molecular fluorine:
2AmF3 + F2 -> 2AmF4
Another known form of solid tetravalent americium fluoride is KAmF5. Tetravalent americium has also been observed in the aqueous phase. For this purpose, black Am(OH)4 was dissolved in 15-M NH4F at an americium concentration of 0.01 M. The resulting reddish solution had a characteristic optical absorption spectrum that is similar to that of AmF4 but differs from those of other oxidation states of americium. Heating the Am(IV) solution to 90 °C did not result in disproportionation or reduction; however, a slow reduction to Am(III) was observed and attributed to self-irradiation by alpha particles.
Most americium(III) halides form hexagonal crystals, with slight variation of the color and exact structure between the halogens. Thus, the chloride (AmCl3) is reddish, has a structure isotypic to uranium(III) chloride (space group P63/m) and melts at 715 °C. The fluoride is isotypic to LaF3 (space group P63/mmc) and the iodide to BiI3 (space group R3̄). The bromide is an exception, with the orthorhombic PuBr3-type structure and space group Cmcm. Crystals of americium(III) chloride hexahydrate (AmCl3·6H2O) can be prepared by dissolving americium dioxide in hydrochloric acid and evaporating the liquid. Those crystals are hygroscopic, with a yellow-reddish color and a monoclinic crystal structure.
Oxyhalides of americium in the form AmVIO2X2, AmVO2X, AmIVOX2 and AmIIIOX can be obtained by reacting the corresponding americium halide with oxygen or Sb2O3, and AmOCl can also be produced by vapor phase hydrolysis:
AmCl3 + H2O -> AmOCl + 2HCl
Chalcogenides and pnictides
The known chalcogenides of americium include the sulfide AmS2, selenides AmSe2 and Am3Se4, and tellurides Am2Te3 and AmTe2. The pnictides of americium (243Am) of the AmX type are known for the elements phosphorus, arsenic, antimony and bismuth. They crystallize in the rock-salt lattice.
Silicides and borides
Americium monosilicide (AmSi) and "disilicide" (nominally AmSix with 1.87 < x < 2.0) were obtained by reduction of americium(III) fluoride with elemental silicon in vacuum at 1050 °C (AmSi) and 1150−1200 °C (AmSix). AmSi is a black solid isomorphic with LaSi; it has orthorhombic crystal symmetry. AmSix has a bright silvery lustre and a tetragonal crystal lattice (space group I41/amd); it is isomorphic with PuSi2 and ThSi2. Borides of americium include AmB4 and AmB6. The tetraboride can be obtained by heating an oxide or halide of americium with magnesium diboride in vacuum or an inert atmosphere.
Organoamericium compounds
Analogous to uranocene, americium is predicted to form the organometallic compound amerocene with two cyclooctatetraene ligands, with the chemical formula (η8-C8H8)2Am. A cyclopentadienyl complex is also known that is likely to be stoichiometrically AmCp3.
Formation of complexes of the type Am(n-C3H7-BTP)3, where BTP stands for 2,6-di(1,2,4-triazin-3-yl)pyridine, in solutions containing n-C3H7-BTP and Am3+ ions has been confirmed by EXAFS. Some of these BTP-type complexes selectively interact with americium and are therefore useful for its selective separation from lanthanides and other actinides.
Biological aspects
Americium is an artificial element of recent origin and thus has no biological role. It is harmful to life. It has been proposed to use bacteria for removal of americium and other heavy metals from rivers and streams. For example, Enterobacteriaceae of the genus Citrobacter precipitate americium ions from aqueous solutions, binding them into a metal-phosphate complex at their cell walls. Several studies have been reported on the biosorption and bioaccumulation of americium by bacteria and fungi. In the laboratory, both americium and curium were found to support the growth of methylotrophs.
Fission
The isotope 242mAm (half-life 141 years) has the largest cross section for absorption of thermal neutrons (5,700 barns), which results in a small critical mass for a sustained nuclear chain reaction. The critical mass for a bare 242mAm sphere is about 9–14 kg (the uncertainty results from insufficient knowledge of its material properties). It can be lowered to 3–5 kg with a metal reflector and should become even smaller with a water reflector. Such a small critical mass is favorable for portable nuclear weapons, but none based on 242mAm are known to exist, probably because of its scarcity and high price. The critical masses of the two readily available isotopes, 241Am and 243Am, are relatively high – 57.6 to 75.6 kg for 241Am and 209 kg for 243Am. Scarcity and high price still hinder the application of americium as a nuclear fuel in nuclear reactors.
There are proposals for very compact 10-kW high-flux reactors using as little as 20 grams of 242mAm. Such low-power reactors would be relatively safe to use as neutron sources for radiation therapy in hospitals.
Isotopes
About 18 isotopes and 11 nuclear isomers are known for americium, with mass numbers 229, 230, and 232 through 247. There are two long-lived alpha-emitters: 243Am, which has a half-life of 7,370 years and is the most stable isotope, and 241Am, with a half-life of 432.2 years. The most stable nuclear isomer is 242m1Am, with a long half-life of 141 years. The half-lives of other isotopes and isomers range from 0.64 microseconds for 245m1Am to 50.8 hours for 240Am. As with most other actinides, the isotopes of americium with an odd number of neutrons have a relatively high rate of nuclear fission and a low critical mass.
Americium-241 decays to 237Np, emitting alpha particles of five different energies, mostly at 5.486 MeV (85.2%) and 5.443 MeV (12.8%). Because many of the resulting states are metastable, they also emit gamma rays with discrete energies between 26.3 and 158.5 keV.
Americium-242 is a short-lived isotope with a half-life of 16.02 h. It mostly (82.7%) converts by β-decay to 242Cm, but also by electron capture to 242Pu (17.3%). Both 242Cm and 242Pu subsequently decay through chains that converge at 234U: 242Cm via 238Pu, and 242Pu via 238U.
Nearly all (99.541%) of 242m1Am decays by internal conversion to 242Am and the remaining 0.459% by α-decay to 238Np. The latter subsequently decays to 238Pu and then to 234U.
Americium-243 transforms by α-emission into 239Np, which converts by β-decay to 239Pu, and the 239Pu changes into 235U by emitting an α-particle.
Applications
Ionization-type smoke detector
Americium is used in the most common type of household smoke detector, which uses 241Am in the form of americium dioxide as its source of ionizing radiation. This isotope is preferred over 226Ra because it emits 5 times more alpha particles and relatively little harmful gamma radiation.
The amount of americium in a typical new smoke detector is 1 microcurie (37 kBq) or 0.29 microgram. This amount declines slowly as the americium decays into neptunium-237, a different transuranic element with a much longer half-life (about 2.14 million years). With its half-life of 432.2 years, the americium in a smoke detector includes about 3% neptunium after 19 years, and about 5% after 32 years. The radiation passes through an ionization chamber, an air-filled space between two electrodes, and permits a small, constant current between the electrodes. Any smoke that enters the chamber absorbs the alpha particles, which reduces the ionization and affects this current, triggering the alarm. Compared to the alternative optical smoke detector, the ionization smoke detector is cheaper and can detect particles which are too small to produce significant light scattering; however, it is more prone to false alarms.
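These figures follow directly from exponential decay. A minimal Python sketch, assuming only the 432.2-year half-life quoted above (and, for the source mass, an approximate Am-241 molar mass of 241 g/mol), reproduces both the neptunium in-growth percentages and the 0.29-microgram mass of a 1-microcurie source:

```python
import math

HALF_LIFE_YR = 432.2                      # Am-241 half-life, years
SECONDS_PER_YEAR = 3.156e7
AVOGADRO = 6.022e23
MOLAR_MASS = 241                          # g/mol, approximate

lam_per_year = math.log(2) / HALF_LIFE_YR

# Fraction of the original Am-241 converted to Np-237 after t years
# (further decay of the very long-lived Np-237 is negligible here).
def np237_fraction(t_years):
    return 1 - math.exp(-lam_per_year * t_years)

print(f"{np237_fraction(19):.1%}")        # ~3.0%
print(f"{np237_fraction(32):.1%}")        # ~5.0%

# Mass of a 1-microcurie (37 kBq) source: N = A / lambda, m = N * M / N_A
lam_per_second = lam_per_year / SECONDS_PER_YEAR
atoms = 37_000 / lam_per_second
print(f"{atoms * MOLAR_MASS / AVOGADRO * 1e6:.2f} micrograms")  # ~0.29
```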
Radionuclide
As 241Am has a half-life of the same order of magnitude as that of 238Pu (432.2 years vs. 87 years), it has been proposed as an active element of radioisotope thermoelectric generators, for example in spacecraft. Although americium produces less heat and electricity – the power yield is 114.7 mW/g for 241Am and only 6.31 mW/g for 243Am (cf. 390 mW/g for 238Pu) – and its radiation poses more of a threat to humans owing to neutron emission, the European Space Agency is considering americium for its space probes.
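The quoted specific power of 241Am can be estimated from its half-life alone. A rough Python sketch, assuming a total decay energy of about 5.64 MeV per decay (an assumed round figure, not given in the text):

```python
import math

half_life_s = 432.2 * 3.156e7            # Am-241 half-life in seconds
atoms_per_gram = 6.022e23 / 241
decays_per_s_per_g = math.log(2) / half_life_s * atoms_per_gram

Q_JOULES = 5.64 * 1.602e-13              # assumed ~5.64 MeV released per decay
print(f"{decays_per_s_per_g * Q_JOULES * 1e3:.0f} mW/g")   # ~115 mW/g
```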
Another proposed space-related application of americium is as a fuel for spacecraft with nuclear propulsion. It relies on the very high rate of nuclear fission of 242mAm, which can be maintained even in a micrometer-thick foil. The small thickness avoids the self-absorption of emitted radiation that affects uranium or plutonium rods, in which only the surface layers provide alpha particles. The fission products of 242mAm can either directly propel the spacecraft or heat a thrusting gas; they can also transfer their energy to a fluid and generate electricity through a magnetohydrodynamic generator.
One more proposal that utilizes the high nuclear fission rate of 242mAm is a nuclear battery. Its design relies not on the energy of the alpha particles emitted by americium, but on their charge; the americium acts as a self-sustaining "cathode". A single 3.2 kg charge of 242mAm in such a battery could provide about 140 kW of power over a period of 80 days. Even with all the potential benefits, applications of 242mAm are as yet hindered by the scarcity and high price of this nuclear isomer.
In 2019, researchers at the UK National Nuclear Laboratory and the University of Leicester demonstrated the use of heat generated by americium to illuminate a small light bulb. This technology could lead to systems to power missions with durations up to 400 years into interstellar space, where solar panels do not function.
Neutron source
The oxide of 241Am pressed with beryllium is an efficient neutron source. Here americium acts as the alpha source, and beryllium produces neutrons owing to its large cross-section for the (α,n) nuclear reaction:
^{241}_{95}Am -> ^{237}_{93}Np + ^{4}_{2}He + \gamma
^{9}_{4}Be + ^{4}_{2}He -> ^{12}_{6}C + ^{1}_{0}n + \gamma
The most widespread use of 241AmBe neutron sources is a neutron probe – a device used to measure the quantity of water present in soil, as well as moisture/density for quality control in highway construction. 241Am neutron sources are also used in well logging applications, as well as in neutron radiography, tomography and other radiochemical investigations.
Production of other elements
Americium is a starting material for the production of other transuranic elements and transactinides – for example, 82.7% of 242Am decays to 242Cm and 17.3% to 242Pu. In a nuclear reactor, 242Am is also up-converted by neutron capture to 243Am and 244Am, the latter of which transforms by β-decay to 244Cm:
^{243}_{95}Am ->[\ce{(n,\gamma)}] ^{244}_{95}Am ->[\beta^-][10.1 \ \ce{h}] ^{244}_{96}Cm
Irradiation of 241Am by 12C or 22Ne ions yields the isotopes 247Es (einsteinium) or 260Db (dubnium), respectively. Furthermore, the element berkelium (as the 243Bk isotope) was first intentionally produced and identified in 1949 by the same Berkeley group, by bombarding 241Am with alpha particles in the same 60-inch cyclotron. Similarly, nobelium was produced at the Joint Institute for Nuclear Research in Dubna, Russia, in 1965 in several reactions, one of which included irradiation of 243Am with 15N ions. In addition, one of the synthesis reactions for lawrencium, discovered by scientists at Berkeley and Dubna, included bombardment of 243Am with 18O.
Spectrometer
Americium-241 has been used as a portable source of both gamma rays and alpha particles for a number of medical and industrial applications. The 59.5409 keV gamma-ray emissions from 241Am in such sources can be used for indirect analysis of materials in radiography and X-ray fluorescence spectroscopy, as well as for quality control in fixed nuclear density gauges and nuclear densometers. For example, the element has been employed to gauge glass thickness to help create flat glass. Americium-241 is also suitable for calibration of gamma-ray spectrometers in the low-energy range, since its spectrum consists of nearly a single peak and a negligible Compton continuum (at least three orders of magnitude lower in intensity). Americium-241 gamma rays were also used to provide passive diagnosis of thyroid function; this medical application is now obsolete.
Health concerns
As a highly radioactive element, americium and its compounds must be handled only in an appropriate laboratory under special arrangements. Although most americium isotopes predominantly emit alpha particles which can be blocked by thin layers of common materials, many of the daughter products emit gamma-rays and neutrons which have a long penetration depth.
If consumed, most of the americium is excreted within a few days, with only 0.05% absorbed into the blood; of this, roughly 45% goes to the liver, 45% to the bones, and the remaining 10% is excreted. The uptake to the liver depends on the individual and increases with age. In the bones, americium is first deposited over cortical and trabecular surfaces and slowly redistributes over the bone with time. The biological half-life of 241Am is 50 years in the bones and 20 years in the liver, whereas in the gonads (testicles and ovaries) it remains permanently; in all these organs, americium promotes the formation of cancer cells as a result of its radioactivity.
Americium often enters landfills from discarded smoke detectors. The rules associated with the disposal of smoke detectors are relaxed in most jurisdictions. In 1994, 17-year-old David Hahn extracted the americium from about 100 smoke detectors in an attempt to build a breeder nuclear reactor. There have been a few cases of exposure to americium, the worst case being that of chemical operations technician Harold McCluskey, who at the age of 64 was exposed to 500 times the occupational standard for americium-241 as a result of an explosion in his lab. McCluskey died at the age of 75 of unrelated pre-existing disease.
See also
Actinides in the environment
:Category:Americium compounds
Further reading
Nuclides and Isotopes – 14th Edition, GE Nuclear Energy, 1989.
External links
Americium at The Periodic Table of Videos (University of Nottingham)
ATSDR – Public Health Statement: Americium
World Nuclear Association – Smoke Detectors and Americium
|
;Actinides;Carcinogens;Chemical elements;Chemical elements with double hexagonal close-packed structure;Synthetic elements
|
https://en.wikipedia.org/wiki/Astatine
|
Astatine is a chemical element; it has symbol At and atomic number 85. It is the rarest naturally occurring element in the Earth's crust, occurring only as the decay product of various heavier elements. All of astatine's isotopes are short-lived; the most stable is astatine-210, with a half-life of 8.1 hours. Consequently, a solid sample of the element has never been seen, because any macroscopic specimen would be immediately vaporized by the heat of its radioactivity.
The bulk properties of astatine are not known with certainty. Many of them have been estimated from its position on the periodic table as a heavier analog of fluorine, chlorine, bromine, and iodine, the four stable halogens. However, astatine also falls roughly along the dividing line between metals and nonmetals, and some metallic behavior has also been observed and predicted for it. Astatine is likely to have a dark or lustrous appearance and may be a semiconductor or possibly a metal. Chemically, several anionic species of astatine are known and most of its compounds resemble those of iodine, but it also sometimes displays metallic characteristics and shows some similarities to silver.
The first synthesis of astatine was in 1940 by Dale R. Corson, Kenneth Ross MacKenzie, and Emilio G. Segrè at the University of California, Berkeley. They named it from the Ancient Greek astatos (ἄστατος), meaning 'unstable'. Four isotopes of astatine were subsequently found to be naturally occurring, although much less than one gram is present at any given time in the Earth's crust. Neither the most stable isotope, astatine-210, nor the medically useful astatine-211 occurs naturally; they are usually produced by bombarding bismuth-209 with alpha particles.
Characteristics
Astatine is an extremely radioactive element; all its isotopes have half-lives of 8.1 hours or less, decaying into other astatine isotopes, bismuth, polonium, or radon. Most of its isotopes are very unstable, with half-lives of seconds or less. Of the first 101 elements in the periodic table, only francium is less stable, and all the astatine isotopes more stable than the longest-lived francium isotopes (205–211At) are in any case synthetic and do not occur in nature.
The bulk properties of astatine are not known with any certainty. Research is limited by its short half-life, which prevents the creation of weighable quantities. A visible piece of astatine would be immediately vaporized by the heat generated by its intense radioactivity. It remains to be seen if, with sufficient cooling, a macroscopic quantity of astatine could be deposited as a thin film. Astatine is usually classified as either a nonmetal or a metalloid; metal formation has also been predicted.
Physical
Most of the physical properties of astatine have been estimated (by interpolation or extrapolation), using theoretically or empirically derived methods. For example, halogens get darker with increasing atomic weight – fluorine is nearly colorless, chlorine is yellow-green, bromine is red-brown, and iodine is dark gray/violet. Astatine is sometimes described as probably being a black solid (assuming it follows this trend), or as having a metallic appearance (if it is a metalloid or a metal).
Astatine sublimes less readily than iodine, having a lower vapor pressure. Even so, half of a given quantity of astatine will vaporize in approximately an hour if put on a clean glass surface at room temperature. The absorption spectrum of astatine in the middle ultraviolet region has lines at 224.401 and 216.225 nm, suggestive of 6p to 7s transitions.
The structure of solid astatine is unknown. As an analog of iodine it may have an orthorhombic crystalline structure composed of diatomic astatine molecules, and be a semiconductor (with a band gap of 0.7 eV). Alternatively, if condensed astatine forms a metallic phase, as has been predicted, it may have a monatomic face-centered cubic structure; in this structure, it may well be a superconductor, like the similar high-pressure phase of iodine. Metallic astatine is expected to have a density of 8.91–8.95 g/cm3.
Evidence for (or against) the existence of diatomic astatine (At2) is sparse and inconclusive. Some sources state that it does not exist, or at least has never been observed, while other sources assert or imply its existence. Despite this controversy, many properties of diatomic astatine have been predicted, including its bond length, dissociation energy and heat of vaporization (∆Hvap, the last predicted at 54.39 kJ/mol). Many values have been predicted for the melting and boiling points of astatine, but only for At2.
Chemical
The chemistry of astatine is "clouded by the extremely low concentrations at which astatine experiments have been conducted, and the possibility of reactions with impurities, walls and filters, or radioactivity by-products, and other unwanted nano-scale interactions". Many of its apparent chemical properties have been observed using tracer studies on extremely dilute astatine solutions, typically less than 10−10 mol·L−1. Some properties, such as anion formation, align with other halogens. Astatine has some metallic characteristics as well, such as plating onto a cathode, and coprecipitating with metal sulfides in hydrochloric acid. It forms complexes with EDTA, a metal chelating agent, and is capable of acting as a metal in antibody radiolabeling; in some respects, astatine in the +1 state is akin to silver in the same state. Most of the organic chemistry of astatine is, however, analogous to that of iodine. It has been suggested that astatine can form a stable monatomic cation in aqueous solution.
Astatine has an electronegativity of 2.2 on the revised Pauling scale – lower than that of iodine (2.66) and the same as hydrogen. In hydrogen astatide (HAt), the negative charge is predicted to be on the hydrogen atom, implying that this compound could be referred to as astatine hydride according to certain nomenclatures. That would be consistent with the electronegativity of astatine on the Allred–Rochow scale (1.9) being less than that of hydrogen (2.2). However, official IUPAC stoichiometric nomenclature is based on an idealized convention of determining the relative electronegativities of the elements by the mere virtue of their position within the periodic table. According to this convention, astatine is handled as though it is more electronegative than hydrogen, irrespective of its true electronegativity. The electron affinity of astatine, at 233 kJ mol−1, is 21% less than that of iodine. In comparison, the value of Cl (349) is 6.4% higher than F (328); Br (325) is 6.9% less than Cl; and I (295) is 9.2% less than Br. The marked reduction for At was predicted as being due to spin–orbit interactions. The first ionization energy of astatine is about 899 kJ mol−1, which continues the trend of decreasing first ionization energies down the halogen group (fluorine, 1681; chlorine, 1251; bromine, 1140; iodine, 1008).
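The percentage steps quoted above follow directly from the electron-affinity values (in kJ/mol) given in the text; a quick Python check of the arithmetic:

```python
# Electron affinities in kJ/mol, as quoted in the paragraph above.
ea = {"F": 328, "Cl": 349, "Br": 325, "I": 295, "At": 233}

print(f"Cl vs F:  +{(ea['Cl'] - ea['F'])  / ea['F']  * 100:.1f}%")  # +6.4%
print(f"Br vs Cl: -{(ea['Cl'] - ea['Br']) / ea['Cl'] * 100:.1f}%")  # -6.9%
print(f"I  vs Br: -{(ea['Br'] - ea['I'])  / ea['Br'] * 100:.1f}%")  # -9.2%
print(f"At vs I:  -{(ea['I']  - ea['At']) / ea['I']  * 100:.0f}%")  # -21%
```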
Compounds
Less reactive than iodine, astatine is the least reactive of the halogens; the chemical properties of tennessine, the next-heavier group 17 element, have not yet been investigated, however. Astatine compounds have been synthesized in nano-scale amounts and studied as intensively as possible before their radioactive disintegration. The reactions involved have been typically tested with dilute solutions of astatine mixed with larger amounts of iodine. Acting as a carrier, the iodine ensures there is sufficient material for laboratory techniques (such as filtration and precipitation) to work. Like iodine, astatine has been shown to adopt odd-numbered oxidation states ranging from −1 to +7.
Only a few compounds with metals have been reported, in the form of astatides of sodium, palladium, silver, thallium, and lead. Some characteristic properties of silver and sodium astatide, and the other hypothetical alkali and alkaline earth astatides, have been estimated by extrapolation from other metal halides.
The formation of an astatine compound with hydrogen – usually referred to as hydrogen astatide – was noted by the pioneers of astatine chemistry. As mentioned, there are grounds for instead referring to this compound as astatine hydride. It is easily oxidized; acidification by dilute nitric acid gives the At0 or At+ forms, and the subsequent addition of silver(I) may only partially, at best, precipitate astatine as silver(I) astatide (AgAt). Iodine, in contrast, is not oxidized, and precipitates readily as silver(I) iodide.
Astatine is known to bind to boron, carbon, and nitrogen. Various boron cage compounds have been prepared with At–B bonds, these being more stable than At–C bonds. Astatine can replace a hydrogen atom in benzene to form astatobenzene C6H5At; this may be oxidized to C6H5AtCl2 by chlorine. By treating this compound with an alkaline solution of hypochlorite, C6H5AtO2 can be produced. The dipyridine-astatine(I) cation, [At(C5H5N)2]+, forms ionic compounds with perchlorate (a non-coordinating anion) and with nitrate, [At(C5H5N)2]NO3. This cation exists as a coordination complex in which two dative covalent bonds separately link the astatine(I) centre with each of the pyridine rings via their nitrogen atoms.
With oxygen, there is evidence of the species AtO− and AtO+ in aqueous solution, formed by the reaction of astatine with an oxidant such as elemental bromine or (in the last case) by sodium persulfate in a solution of perchloric acid. The species previously thought to be AtO2− has since been determined to be AtO(OH)2−, a hydrolysis product of AtO+ (another such hydrolysis product being AtOOH). The well-characterized astatate anion AtO3− can be obtained by, for example, the oxidation of astatine with potassium hypochlorite in a solution of potassium hydroxide. Preparation of lanthanum triastatate La(AtO3)3, following the oxidation of astatine by a hot Na2S2O8 solution, has been reported. Further oxidation of AtO3−, such as by xenon difluoride (in a hot alkaline solution) or periodate (in a neutral or alkaline solution), yields the perastatate ion AtO4−; this is only stable in neutral or alkaline solutions. Astatine is also thought to be capable of forming cations in salts with oxyanions such as iodate or dichromate; this is based on the observation that, in acidic solutions, monovalent or intermediate positive states of astatine coprecipitate with the insoluble salts of metal cations such as silver(I) iodate or thallium(I) dichromate.
Astatine may form bonds to the other chalcogens; these include S7At+ with sulfur, a coordination selenourea compound with selenium, and an astatine–tellurium colloid with tellurium.
Astatine is known to react with its lighter homologs iodine, bromine, and chlorine in the vapor state; these reactions produce diatomic interhalogen compounds with formulas AtI, AtBr, and AtCl. The first two compounds may also be produced in water – astatine reacts with iodine/iodide solution to form AtI, whereas AtBr requires (aside from astatine) an iodine/iodine monobromide/bromide solution. An excess of iodides or bromides may lead to AtI2− and AtBr2− ions, or, in a chloride solution, to species such as AtCl2− or AtBrCl− via equilibrium reactions with the chlorides. Oxidation of the element with dichromate (in nitric acid solution) showed that adding chloride turned the astatine into a molecule likely to be either AtCl or AtOCl; similarly, AtOCl2− or AtCl2− may be produced. The polyhalides PdAtI2, CsAtI2, TlAtI2, and PbAtI are known or presumed to have been precipitated. In a plasma ion source mass spectrometer, the ions [AtI]+, [AtBr]+, and [AtCl]+ have been formed by introducing lighter halogen vapors into a helium-filled cell containing astatine, supporting the existence of stable neutral molecules in the plasma ion state. No astatine fluorides have been discovered yet. Their absence has been speculatively attributed to the extreme reactivity of such compounds, including the reaction of an initially formed fluoride with the walls of the glass container to form a non-volatile product. Thus, although the synthesis of an astatine fluoride is thought to be possible, it may require a liquid halogen fluoride solvent, as has already been used for the characterization of radon fluoride.
History
In 1869, when Dmitri Mendeleev published his periodic table, the space under iodine was empty; after Niels Bohr established the physical basis of the classification of chemical elements, it was suggested that the fifth halogen belonged there. Before its officially recognized discovery, it was called "eka-iodine" (from the Sanskrit eka, 'one') to imply it was one space under iodine (in the same manner as eka-silicon, eka-boron, and others). Scientists tried to find it in nature; given its extreme rarity, these attempts resulted in several false discoveries.
The first claimed discovery of eka-iodine was made by Fred Allison and his associates at the Alabama Polytechnic Institute (now Auburn University) in 1931. The discoverers named element 85 "alabamine", and assigned it the symbol Ab, designations that were used for a few years. In 1934, H. G. MacPherson of University of California, Berkeley disproved Allison's method and the validity of his discovery. There was another claim in 1937, by the chemist Rajendralal De. Working in Dacca in British India (now Dhaka in Bangladesh), he chose the name "dakin" for element 85, which he claimed to have isolated as the thorium series equivalent of radium F (polonium-210) in the radium series. The properties he reported for dakin do not correspond to those of astatine, and astatine's radioactivity would have prevented him from handling it in the quantities he claimed. Moreover, astatine is not found in the thorium series, and the true identity of dakin is not known.
In 1936, the team of Romanian physicist Horia Hulubei and French physicist Yvette Cauchois claimed to have discovered element 85 by observing its X-ray emission lines. In 1939, they published another paper which supported and extended previous data. In 1944, Hulubei published a summary of data he had obtained up to that time, claiming it was supported by the work of other researchers. He chose the name "dor", presumably from the Romanian for "longing" [for peace], as World War II had started five years earlier. As Hulubei was writing in French, a language which does not accommodate the "-ine" suffix, dor would likely have been rendered in English as "dorine", had it been adopted. In 1947, Hulubei's claim was effectively rejected by the Austrian chemist Friedrich Paneth, who would later chair the IUPAC committee responsible for recognition of new elements. Even though Hulubei's samples did contain astatine-218, his means to detect it were too weak, by current standards, to enable correct identification; moreover, he could not perform chemical tests on the element. He had also been involved in an earlier false claim as to the discovery of element 87 (francium) and this is thought to have caused other researchers to downplay his work.
In 1940, the Swiss chemist Walter Minder announced the discovery of element 85 as the beta decay product of radium A (polonium-218), choosing the name "helvetium" (from , the Latin name of Switzerland). Berta Karlik and Traude Bernert were unsuccessful in reproducing his experiments, and subsequently attributed Minder's results to contamination of his radon stream (radon-222 is the parent isotope of polonium-218). In 1942, Minder, in collaboration with the English scientist Alice Leigh-Smith, announced the discovery of another isotope of element 85, presumed to be the product of thorium A (polonium-216) beta decay. They named this substance "anglo-helvetium", but Karlik and Bernert were again unable to reproduce these results.
Later in 1940, Dale R. Corson, Kenneth Ross MacKenzie, and Emilio Segrè isolated the element at the University of California, Berkeley. Instead of searching for the element in nature, the scientists created it by bombarding bismuth-209 with alpha particles in a cyclotron (particle accelerator) to produce, after emission of two neutrons, astatine-211. The discoverers, however, did not immediately suggest a name for the element. The reason for this was that at the time, an element created synthetically in "invisible quantities" that had not yet been discovered in nature was not seen as a completely valid one; in addition, chemists were reluctant to recognize radioactive isotopes as legitimately as stable ones. In 1943, astatine was found as a product of two naturally occurring decay chains by Berta Karlik and Traude Bernert, first in the so-called uranium series, and then in the actinium series. (Since then, astatine has also been found in a third decay chain, the neptunium series.) In 1946, Friedrich Paneth called for synthetic elements finally to be recognized, citing, among other reasons, recent confirmation of their natural occurrence, and proposed that the discoverers of the newly discovered unnamed elements name them. In early 1947, Nature published the discoverers' suggestions; a letter from Corson, MacKenzie, and Segrè suggested the name "astatine", from the Ancient Greek astatos (ἄστατος), meaning 'unstable', because of the element's propensity for radioactive decay, with the ending "-ine" found in the names of the four previously discovered halogens. The name was also chosen to continue the tradition of the four stable halogens, where the name referred to a property of the element.
Corson and his colleagues classified astatine as a metal on the basis of its analytical chemistry. Subsequent investigators reported iodine-like, cationic, or amphoteric behavior. In a 2003 retrospective, Corson wrote that "some of the properties [of astatine] are similar to iodine ... it also exhibits metallic properties, more like its metallic neighbors Po and Bi."
Isotopes
There are 41 known isotopes of astatine, with mass numbers of 188 and 190–229. Theoretical modeling suggests that about 37 more isotopes could exist. No stable or long-lived astatine isotope has been observed, nor is one expected to exist.
Astatine's alpha decay energies follow the same trend as for other heavy elements. Lighter astatine isotopes have quite high energies of alpha decay, which become lower as the nuclei become heavier. Astatine-211 has a significantly higher energy than the previous isotope, because it has a nucleus with 126 neutrons, and 126 is a magic number corresponding to a filled neutron shell. Despite having a similar half-life to the previous isotope (8.1 hours for astatine-210 and 7.2 hours for astatine-211), the alpha decay probability is much higher for the latter: 41.81% against only 0.18%. The two following isotopes release even more energy, with astatine-213 releasing the most energy. For this reason, it is the shortest-lived astatine isotope. Even though heavier astatine isotopes release less energy, no long-lived astatine isotope exists, because of the increasing role of beta decay (electron emission). This decay mode is especially important for astatine; as early as 1950 it was postulated that all isotopes of the element undergo beta decay, though nuclear mass measurements indicate that 215At is in fact beta-stable, as it has the lowest mass of all isobars with A = 215. Astatine-210 and most of the lighter isotopes exhibit beta plus decay (positron emission), astatine-217 and heavier isotopes except astatine-218 exhibit beta minus decay, while astatine-211 undergoes electron capture.
The most stable isotope is astatine-210, which has a half-life of 8.1 hours. The primary decay mode is beta plus, to the relatively long-lived (in comparison to astatine isotopes) alpha emitter polonium-210. In total, only five isotopes have half-lives exceeding one hour (astatine-207 to -211). The least stable ground state isotope is astatine-213, with a half-life of 125 nanoseconds. It undergoes alpha decay to the extremely long-lived bismuth-209.
Astatine has 24 known nuclear isomers, which are nuclei with one or more nucleons (protons or neutrons) in an excited state. A nuclear isomer may also be called a "meta-state", meaning the system has more internal energy than the "ground state" (the state with the lowest possible internal energy), making the former likely to decay into the latter. There may be more than one isomer for each isotope. The most stable of these nuclear isomers is astatine-202m1, which has a half-life of about 3 minutes, longer than those of all the ground states bar those of isotopes 203–211 and 220. The least stable is astatine-213m1; its half-life of 110 nanoseconds is shorter than 125 nanoseconds for astatine-213, the shortest-lived ground state.
Natural occurrence
Astatine is the rarest naturally occurring element. The total amount of astatine in the Earth's crust (quoted mass 2.36 × 1025 grams) is estimated by some to be less than one gram at any given time. Other sources estimate the amount of ephemeral astatine, present on earth at any given moment, to be up to one ounce (about 28 grams).
Any astatine present at the formation of the Earth has long since disappeared; the four naturally occurring isotopes (astatine-215, -217, -218 and -219) are instead continuously produced as a result of the decay of radioactive thorium and uranium ores, and trace quantities of neptunium-237. The landmass of North and South America combined, to a depth of 16 kilometers (10 miles), contains only about one trillion astatine-215 atoms at any given time (around 3.5 × 10−10 grams). Astatine-217 is produced via the radioactive decay of neptunium-237. Primordial remnants of the latter isotope—due to its relatively short half-life of 2.14 million years—are no longer present on Earth. However, trace amounts occur naturally as a product of transmutation reactions in uranium ores. Astatine-218 was the first astatine isotope discovered in nature. Astatine-219, with a half-life of 56 seconds, is the longest lived of the naturally occurring isotopes.
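The quoted mass of that continental inventory is simple bookkeeping; a one-line Python conversion from atom count to mass (assuming an approximate molar mass of 215 g/mol for astatine-215):

```python
AVOGADRO = 6.022e23
atoms = 1e12                              # ~one trillion At-215 atoms, as quoted
print(f"{atoms * 215 / AVOGADRO:.1e} g")  # ~3.6e-10 g, consistent with ~3.5e-10 g
```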
Isotopes of astatine are sometimes not listed as naturally occurring because of misconceptions that there are no such isotopes, or discrepancies in the literature. Astatine-216 has been counted as a naturally occurring isotope but reports of its observation (which were described as doubtful) have not been confirmed.
Synthesis
Formation
Astatine was first produced by bombarding bismuth-209 with energetic alpha particles, and this is still the major route used to create the relatively long-lived isotopes astatine-209 through astatine-211. Astatine is only produced in minuscule quantities, with modern techniques allowing production runs of up to 6.6 gigabecquerels (about 86 nanograms or 2.47 × 1014 atoms). Synthesis of greater quantities of astatine using this method is constrained by the limited availability of suitable cyclotrons and the prospect of melting the target. Solvent radiolysis due to the cumulative effect of astatine decay is a related problem. With cryogenic technology, microgram quantities of astatine might be generated via proton irradiation of thorium or uranium to yield radon-211, which in turn decays to astatine-211. Contamination with astatine-210 is expected to be a drawback of this method.
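The equivalence between 6.6 GBq, ~86 ng and ~2.47 × 1014 atoms follows from A = λN with the 7.2-hour half-life of astatine-211; a short Python check:

```python
import math

half_life_s = 7.2 * 3600                  # At-211 half-life in seconds
lam = math.log(2) / half_life_s           # decay constant, 1/s

atoms = 6.6e9 / lam                       # N = A / lambda for A = 6.6 GBq
mass_ng = atoms * 211 / 6.022e23 * 1e9    # assuming ~211 g/mol

print(f"{atoms:.2e} atoms, {mass_ng:.0f} ng")   # ~2.47e14 atoms, ~86 ng
```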
The most important isotope is astatine-211, the only one in commercial use. To produce the bismuth target, the metal is sputtered onto a gold, copper, or aluminium surface at 50 to 100 milligrams per square centimeter. Bismuth oxide can be used instead; this is forcibly fused with a copper plate. The target is kept under a chemically neutral nitrogen atmosphere and is cooled with water to prevent premature astatine vaporization. In a particle accelerator, such as a cyclotron, alpha particles are collided with the bismuth. Even though only one bismuth isotope is used (bismuth-209), the reaction may occur in three possible ways, producing astatine-209, astatine-210, or astatine-211. Although higher beam energies yield more astatine-211, they also produce unwanted astatine-210, which decays to toxic polonium-210. The maximum energy of the particle accelerator is therefore set below or slightly above the threshold of astatine-210 production, in order to maximize the production of astatine-211 while keeping the amount of astatine-210 at an acceptable level.
Separation methods
Since astatine is the main product of the synthesis, after its formation it must only be separated from the target and any significant contaminants. Several methods are available, "but they generally follow one of two approaches—dry distillation or [wet] acid treatment of the target followed by solvent extraction." The methods summarized below are modern adaptations of older procedures, as reviewed by Kugler and Keller. Pre-1985 techniques more often addressed the elimination of co-produced toxic polonium; this requirement is now mitigated by capping the energy of the cyclotron irradiation beam.
Dry
The astatine-containing cyclotron target is heated to a temperature of around 650 °C. The astatine volatilizes and is condensed in (typically) a cold trap. Higher temperatures of up to around 850 °C may increase the yield, at the risk of bismuth contamination from concurrent volatilization. Redistilling the condensate may be required to minimize the presence of bismuth (as bismuth can interfere with astatine labeling reactions). The astatine is recovered from the trap using one or more low concentration solvents such as sodium hydroxide, methanol or chloroform. Astatine yields of up to around 80% may be achieved. Dry separation is the method most commonly used to produce a chemically useful form of astatine.
Wet
The irradiated bismuth (or sometimes bismuth trioxide) target is first dissolved in, for example, concentrated nitric or perchloric acid. Following this first step, the acid can be distilled away to leave behind a white residue that contains both bismuth and the desired astatine product. This residue is then dissolved in a concentrated acid, such as hydrochloric acid. Astatine is extracted from this acid using an organic solvent such as dibutyl ether, diisopropyl ether (DIPE), or thiosemicarbazide. Using liquid-liquid extraction, the astatine product can be repeatedly washed with an acid, such as HCl, and extracted into the organic solvent layer. A separation yield of 93% using nitric acid has been reported, falling to 72% by the time purification procedures were completed (distillation of nitric acid, purging residual nitrogen oxides, and redissolving bismuth nitrate to enable liquid–liquid extraction). Wet methods involve "multiple radioactivity handling steps" and have not been considered well suited for isolating larger quantities of astatine. However, wet extraction methods are being examined for use in production of larger quantities of astatine-211, as it is thought that wet extraction methods can provide more consistency. They can enable the production of astatine in a specific oxidation state and may have greater applicability in experimental radiochemistry.
Uses and precautions
Newly formed astatine-211 is the subject of ongoing research in nuclear medicine. It must be used quickly as it decays with a half-life of 7.2 hours; this is long enough to permit multistep labeling strategies. Astatine-211 has potential for targeted alpha-particle therapy, since it decays either via emission of an alpha particle (to bismuth-207), or via electron capture (to an extremely short-lived nuclide, polonium-211, which undergoes further alpha decay), very quickly reaching its stable granddaughter lead-207. Polonium X-rays emitted as a result of the electron capture branch, in the range of 77–92 keV, enable the tracking of astatine in animals and patients. Although astatine-210 has a slightly longer half-life, it is wholly unsuitable because it usually undergoes beta plus decay to the extremely toxic polonium-210.
The principal medicinal difference between astatine-211 and iodine-131 (a radioactive iodine isotope also used in medicine) is that iodine-131 emits high-energy beta particles and astatine does not. Beta particles have much greater penetrating power through tissues than the much heavier alpha particles: an average alpha particle released by astatine-211 can travel up to 70 μm through surrounding tissues, while an average-energy beta particle emitted by iodine-131 can travel nearly 30 times as far, to about 2 mm. The short half-life and limited penetrating power of alpha radiation through tissues offers advantages in situations where the "tumor burden is low and/or malignant cell populations are located in close proximity to essential normal tissues." Significant morbidity in cell culture models of human cancers has been achieved with one to ten astatine-211 atoms bound per cell.
Several obstacles have been encountered in the development of astatine-based radiopharmaceuticals for cancer treatment. World War II delayed research for close to a decade. Results of early experiments indicated that a cancer-selective carrier would need to be developed and it was not until the 1970s that monoclonal antibodies became available for this purpose. Unlike iodine, astatine shows a tendency to dehalogenate from molecular carriers such as these, particularly at sp3 carbon sites (less so from sp2 sites). Given the toxicity of astatine accumulated and retained in the body, this emphasized the need to ensure it remained attached to its host molecule. While astatine carriers that are slowly metabolized can be assessed for their efficacy, more rapidly metabolized carriers remain a significant obstacle to the evaluation of astatine in nuclear medicine. Mitigating the effects of astatine-induced radiolysis of labeling chemistry and carrier molecules is another area requiring further development. A practical application for astatine as a cancer treatment would potentially be suitable for a "staggering" number of patients; production of astatine in the quantities that would be required remains an issue.
Animal studies show that astatine, similarly to iodine—although to a lesser extent, perhaps because of its slightly more metallic nature—is preferentially (and dangerously) concentrated in the thyroid gland. Unlike iodine, astatine also shows a tendency to be taken up by the lungs and spleen, possibly because of in-body oxidation of At– to At+. If administered in the form of a radiocolloid it tends to concentrate in the liver. Experiments in rats and monkeys suggest that astatine-211 causes much greater damage to the thyroid gland than does iodine-131, with repetitive injection of the nuclide resulting in necrosis and cell dysplasia within the gland. Early research suggested that injection of astatine into female rodents caused morphological changes in breast tissue; this conclusion remained controversial for many years. General agreement was later reached that this was likely caused by the effect of breast tissue irradiation combined with hormonal changes due to irradiation of the ovaries. Trace amounts of astatine can be handled safely in fume hoods if they are well-aerated; biological uptake of the element must be avoided.
|
;Chemical elements;Chemical elements with face-centered cubic structure;Halogens;Synthetic elements
|
https://en.wikipedia.org/wiki/Atom
|
Atoms are the basic particles of the chemical elements. An atom consists of a nucleus of protons and generally neutrons, surrounded by an electromagnetically bound swarm of electrons. The chemical elements are distinguished from each other by the number of protons that are in their atoms. For example, any atom that contains 11 protons is sodium, and any atom that contains 29 protons is copper. Atoms with the same number of protons but a different number of neutrons are called isotopes of the same element.
Atoms are extremely small, typically around 100 picometers across. A human hair is about a million carbon atoms wide. Atoms are smaller than the shortest wavelength of visible light, which means humans cannot see atoms with conventional microscopes. They are so small that accurately predicting their behavior using classical physics is not possible due to quantum effects.
More than 99.9994% of an atom's mass is in the nucleus. Protons have a positive electric charge and neutrons have no charge, so the nucleus is positively charged. The electrons are negatively charged, and this opposing charge is what binds them to the nucleus. If the numbers of protons and electrons are equal, as they normally are, then the atom is electrically neutral as a whole. If an atom has more electrons than protons, then it has an overall negative charge and is called a negative ion (or anion). Conversely, if it has more protons than electrons, it has a positive charge and is called a positive ion (or cation).
The electrons of an atom are attracted to the protons in an atomic nucleus by the electromagnetic force. The protons and neutrons in the nucleus are attracted to each other by the nuclear force. This force is usually stronger than the electromagnetic force that repels the positively charged protons from one another. Under certain circumstances, the repelling electromagnetic force becomes stronger than the nuclear force. In this case, the nucleus splits and leaves behind different elements. This is a form of nuclear decay.
Atoms can attach to one or more other atoms by chemical bonds to form chemical compounds such as molecules or crystals. The ability of atoms to attach and detach from each other is responsible for most of the physical changes observed in nature. Chemistry is the science that studies these changes.
History of atomic theory
In philosophy
The basic idea that matter is made up of tiny indivisible particles is an old idea that appeared in many ancient cultures. The word atom is derived from the ancient Greek word atomos, which means "uncuttable". But this ancient idea was based in philosophical reasoning rather than scientific reasoning. Modern atomic theory is not based on these old concepts. In the early 19th century, the scientist John Dalton found evidence that matter really is composed of discrete units, and so applied the word atom to those units.
Dalton's law of multiple proportions
In the early 1800s, John Dalton compiled experimental data gathered by him and other scientists and discovered a pattern now known as the "law of multiple proportions". He noticed that in any group of chemical compounds which all contain two particular chemical elements, the amount of Element A per measure of Element B will differ across these compounds by ratios of small whole numbers. This pattern suggested that each element combines with other elements in multiples of a basic unit of weight, with each element having a unit of unique weight. Dalton decided to call these units "atoms".
For example, there are two types of tin oxide: one is a grey powder that is 88.1% tin and 11.9% oxygen, and the other is a white powder that is 78.7% tin and 21.3% oxygen. Adjusting these figures, in the grey powder there is about 13.5 g of oxygen for every 100 g of tin, and in the white powder there is about 27 g of oxygen for every 100 g of tin. 13.5 and 27 form a ratio of 1:2. Dalton concluded that in the grey oxide there is one atom of oxygen for every atom of tin, and in the white oxide there are two atoms of oxygen for every atom of tin (SnO and SnO2).
Dalton also analyzed iron oxides. There is one type of iron oxide that is a black powder which is 78.1% iron and 21.9% oxygen; and there is another iron oxide that is a red powder which is 70.4% iron and 29.6% oxygen. Adjusting these figures, in the black powder there is about 28 g of oxygen for every 100 g of iron, and in the red powder there is about 42 g of oxygen for every 100 g of iron. 28 and 42 form a ratio of 2:3. Dalton concluded that in these oxides, for every two atoms of iron, there are two or three atoms of oxygen respectively. These substances are known today as iron(II) oxide and iron(III) oxide, and their formulas are FeO and Fe2O3 respectively. Iron(II) oxide's formula is normally written as FeO, but since it is a crystalline substance we could alternately write it as Fe2O2, and when we contrast that with Fe2O3, the 2:3 ratio for the oxygen is plain to see.
As a final example: nitrous oxide is 63.3% nitrogen and 36.7% oxygen, nitric oxide is 44.05% nitrogen and 55.95% oxygen, and nitrogen dioxide is 29.5% nitrogen and 70.5% oxygen. Adjusting these figures, in nitrous oxide there is 80 g of oxygen for every 140 g of nitrogen, in nitric oxide there is about 160 g of oxygen for every 140 g of nitrogen, and in nitrogen dioxide there is 320 g of oxygen for every 140 g of nitrogen. 80, 160, and 320 form a ratio of 1:2:4. The respective formulas for these oxides are N2O, NO, and NO2.
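The whole-number pattern in the three example families above can be made explicit by normalizing each oxide to a fixed mass of the non-oxygen element. A short Python illustration using the percentages quoted in the text (the historical figures are imprecise, so the ratios come out only approximately whole):

```python
def oxygen_per_100g(element_pct, oxygen_pct):
    """Grams of oxygen combined with 100 g of the other element."""
    return 100 * oxygen_pct / element_pct

series = {
    "tin oxides":      [(88.1, 11.9), (78.7, 21.3)],                  # SnO, SnO2
    "iron oxides":     [(78.1, 21.9), (70.4, 29.6)],                  # FeO, Fe2O3
    "nitrogen oxides": [(63.3, 36.7), (44.05, 55.95), (29.5, 70.5)],  # N2O, NO, NO2
}

for name, data in series.items():
    amounts = [oxygen_per_100g(e, o) for e, o in data]
    ratios = [a / amounts[0] for a in amounts]
    print(name, [round(r, 2) for r in ratios])

# tin oxides      [1.0, 2.0]
# iron oxides     [1.0, 1.5]          i.e. 2:3
# nitrogen oxides [1.0, 2.19, 4.12]   i.e. roughly 1:2:4
```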
Discovery of the electron
In 1897, J. J. Thomson discovered that cathode rays can be deflected by electric and magnetic fields, which meant that cathode rays are not a form of light but made of electrically charged particles, and their charge was negative given the direction the particles were deflected in. He measured these particles to be 1,700 times lighter than hydrogen (the lightest atom). He called these new particles corpuscles but they were later renamed electrons since these are the particles that carry electricity. Thomson also showed that electrons were identical to particles given off by photoelectric and radioactive materials. Thomson explained that an electric current is the passing of electrons from one atom to the next, and when there was no current the electrons embedded themselves in the atoms. This in turn meant that atoms were not indivisible as scientists thought. The atom was composed of electrons whose negative charge was balanced out by some source of positive charge to create an electrically neutral atom. Ions, Thomson explained, must be atoms which have an excess or shortage of electrons.
Discovery of the nucleus
The electrons in the atom logically had to be balanced out by a commensurate amount of positive charge, but Thomson had no idea where this positive charge came from, so he tentatively proposed that it was everywhere in the atom, the atom being in the shape of a sphere. This was the mathematically simplest hypothesis to fit the available evidence, or lack thereof. Following from this, Thomson imagined that the balance of electrostatic forces would distribute the electrons throughout the sphere in a more or less even manner. Thomson's model is popularly known as the plum pudding model, though neither Thomson nor his colleagues used this analogy. Thomson's model was incomplete: it was unable to predict any other properties of the elements, such as emission spectra and valencies. It was soon rendered obsolete by the discovery of the atomic nucleus.
Between 1908 and 1913, Ernest Rutherford and his colleagues Hans Geiger and Ernest Marsden performed a series of experiments in which they bombarded thin foils of metal with a beam of alpha particles. They did this to measure the scattering patterns of the alpha particles. They spotted a small number of alpha particles being deflected by angles greater than 90°. This shouldn't have been possible according to the Thomson model of the atom, whose charges were too diffuse to produce a sufficiently strong electric field. The deflections should have all been negligible. Rutherford proposed that the positive charge of the atom is concentrated in a tiny volume at the center of the atom and that the electrons surround this nucleus in a diffuse cloud. This nucleus carried almost all of the atom's mass. Only such an intense concentration of charge, anchored by its high mass, could produce an electric field that could deflect the alpha particles so strongly.
Bohr model
A problem in classical mechanics is that an accelerating charged particle radiates electromagnetic radiation, causing the particle to lose kinetic energy. Circular motion counts as acceleration, which means that an electron orbiting a central charge should spiral down into that nucleus as it loses speed. In 1913, the physicist Niels Bohr proposed a new model in which the electrons of an atom were assumed to orbit the nucleus but could only do so in a finite set of orbits, and could jump between these orbits only in discrete changes of energy corresponding to absorption or radiation of a photon. This quantization was used to explain why the electrons' orbits are stable and why elements absorb and emit electromagnetic radiation in discrete spectra. Bohr's model could only predict the emission spectra of hydrogen, not atoms with more than one electron.
Discovery of protons and neutrons
Back in 1815, William Prout observed that the atomic weights of many elements were multiples of hydrogen's atomic weight, which is in fact true for all of them if one takes isotopes into account. In 1898, J. J. Thomson found that the positive charge of a hydrogen ion is equal to the negative charge of an electron, and these were then the smallest known charged particles. Thomson later found that the positive charge in an atom is a positive multiple of an electron's negative charge. In 1913, Henry Moseley discovered that the frequencies of X-ray emissions from an excited atom were a mathematical function of its atomic number and hydrogen's nuclear charge. In 1919, Rutherford bombarded nitrogen gas with alpha particles and detected hydrogen ions being emitted from the gas, and concluded that they were produced by alpha particles hitting and splitting the nuclei of the nitrogen atoms.
These observations led Rutherford to conclude that the hydrogen nucleus is a singular particle with a positive charge equal to the electron's negative charge. He named this particle "proton" in 1920. The number of protons in an atom (which Rutherford called the "atomic number") was found to be equal to the element's ordinal number on the periodic table and therefore provided a simple and clear-cut way of distinguishing the elements from each other. The atomic weight of each element is higher than its proton number, so Rutherford hypothesized that the surplus weight was carried by unknown particles with no electric charge and a mass equal to that of the proton.
In 1928, Walter Bothe observed that beryllium emitted a highly penetrating, electrically neutral radiation when bombarded with alpha particles. It was later discovered that this radiation could knock hydrogen atoms out of paraffin wax. Initially it was thought to be high-energy gamma radiation, since gamma radiation had a similar effect on electrons in metals, but James Chadwick found that the ionization effect was too strong for it to be due to electromagnetic radiation, so long as energy and momentum were conserved in the interaction. In 1932, Chadwick exposed various elements, such as hydrogen and nitrogen, to the mysterious "beryllium radiation", and by measuring the energies of the recoiling charged particles, he deduced that the radiation was actually composed of electrically neutral particles which could not be massless like the gamma ray, but instead were required to have a mass similar to that of a proton. Chadwick identified these particles as the neutrons whose existence Rutherford had hypothesized.
The current consensus model
In 1925, Werner Heisenberg published the first consistent mathematical formulation of quantum mechanics (matrix mechanics). One year earlier, Louis de Broglie had proposed that all particles behave like waves to some extent, and in 1926 Erwin Schrödinger used this idea to develop the Schrödinger equation, which describes electrons as three-dimensional waveforms rather than points in space. A consequence of using waveforms to describe particles is that it is mathematically impossible to obtain precise values for both the position and momentum of a particle at a given point in time. This became known as the uncertainty principle, formulated by Heisenberg in 1927. In this concept, for a given accuracy in measuring a position one could only obtain a range of probable values for momentum, and vice versa. Thus, the planetary model of the atom was discarded in favor of one that described atomic orbital zones around the nucleus where a given electron is most likely to be found. This model was able to explain observations of atomic behavior that previous models could not, such as certain structural and spectral patterns of atoms larger than hydrogen.
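In its standard modern form (the Kennard bound, quoted here for reference), the uncertainty principle reads:

```latex
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
```

where Δx and Δp are the standard deviations of the particle's position and momentum and ħ is the reduced Planck constant.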
Structure
Subatomic particles
Though the word atom originally denoted a particle that cannot be cut into smaller particles, in modern scientific usage the atom is composed of various subatomic particles. The constituent particles of an atom are the electron, the proton, and the neutron.
The electron is the least massive of these particles by four orders of magnitude, at about 9.11×10⁻³¹ kg, with a negative electrical charge and a size that is too small to be measured using available techniques. It was the lightest known particle with a positive rest mass until the discovery of neutrino mass. Under ordinary conditions, electrons are bound to the positively charged nucleus by the attraction created from opposite electric charges. If an atom has more or fewer electrons than its atomic number, then it becomes respectively negatively or positively charged as a whole; a charged atom is called an ion. Electrons have been known since the late 19th century, mostly thanks to J. J. Thomson; see history of subatomic physics for details.
Protons have a positive charge and a mass of about 1.6726×10⁻²⁷ kg, some 1,836 times the mass of the electron. The number of protons in an atom is called its atomic number. Ernest Rutherford (1919) observed that nitrogen under alpha-particle bombardment ejects what appeared to be hydrogen nuclei. By 1920 he had accepted that the hydrogen nucleus is a distinct particle within the atom and named it proton.
Neutrons have no electrical charge and have a mass of about 1.6749×10⁻²⁷ kg. Neutrons are the heaviest of the three constituent particles, but their mass can be reduced by the nuclear binding energy. Neutrons and protons (collectively known as nucleons) have comparable dimensions—on the order of 2.5×10⁻¹⁵ m—although the 'surface' of these particles is not sharply defined. The neutron was discovered in 1932 by the English physicist James Chadwick.
In the Standard Model of physics, electrons are truly elementary particles with no internal structure, whereas protons and neutrons are composite particles composed of elementary particles called quarks. There are two types of quarks in atoms, each having a fractional electric charge. Protons are composed of two up quarks (each with charge +2/3 e) and one down quark (with a charge of −1/3 e). Neutrons consist of one up quark and two down quarks. This distinction accounts for the difference in mass and charge between the two particles.
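The fractional quark charges add up to the observed nucleon charges:

```latex
q_p = 2\left(+\tfrac{2}{3}e\right) + \left(-\tfrac{1}{3}e\right) = +e,
\qquad
q_n = \left(+\tfrac{2}{3}e\right) + 2\left(-\tfrac{1}{3}e\right) = 0
```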
The quarks are held together by the strong interaction (or strong force), which is mediated by gluons. The protons and neutrons, in turn, are held to each other in the nucleus by the nuclear force, which is a residuum of the strong force that has somewhat different range-properties (see the article on the nuclear force for more). The gluon is a member of the family of gauge bosons, which are elementary particles that mediate physical forces.
Nucleus
All the bound protons and neutrons in an atom make up a tiny atomic nucleus, and are collectively called nucleons. The radius of a nucleus is approximately equal to 1.07 ∛A femtometres, where A is the total number of nucleons. This is much smaller than the radius of the atom, which is on the order of 10⁵ fm. The nucleons are bound together by a short-ranged attractive potential called the residual strong force. At distances smaller than 2.5 fm this force is much more powerful than the electrostatic force that causes positively charged protons to repel each other.
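As a worked example of that scaling, a carbon-12 nucleus (A = 12) has

```latex
r \approx 1.07 \sqrt[3]{12}\ \text{fm} \approx 2.4\ \text{fm},
```

roughly 30,000 times smaller than carbon's atomic radius of about 70 pm (7×10⁴ fm).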
Atoms of the same element have the same number of protons, called the atomic number. Within a single element, the number of neutrons may vary, determining the isotope of that element. The total number of protons and neutrons determine the nuclide. The number of neutrons relative to the protons determines the stability of the nucleus, with certain isotopes undergoing radioactive decay.
The proton, the electron, and the neutron are classified as fermions. Fermions obey the Pauli exclusion principle which prohibits identical fermions, such as multiple protons, from occupying the same quantum state at the same time. Thus, every proton in the nucleus must occupy a quantum state different from all other protons, and the same applies to all neutrons of the nucleus and to all electrons of the electron cloud.
A nucleus that has a different number of protons than neutrons can potentially drop to a lower energy state through a radioactive decay that causes the number of protons and neutrons to more closely match. As a result, atoms with matching numbers of protons and neutrons are more stable against decay, but with increasing atomic number, the mutual repulsion of the protons requires an increasing proportion of neutrons to maintain the stability of the nucleus.
The number of protons and neutrons in the atomic nucleus can be modified, although this can require very high energies because of the strong force. Nuclear fusion occurs when multiple atomic particles join to form a heavier nucleus, such as through the energetic collision of two nuclei. For example, at the core of the Sun protons require energies of 3 to 10 keV to overcome their mutual repulsion—the coulomb barrier—and fuse together into a single nucleus. Nuclear fission is the opposite process, causing a nucleus to split into two smaller nuclei—usually through radioactive decay. The nucleus can also be modified through bombardment by high energy subatomic particles or photons. If this modifies the number of protons in a nucleus, the atom changes to a different chemical element.
If the mass of the nucleus following a fusion reaction is less than the sum of the masses of the separate particles, then the difference between these two values can be emitted as a type of usable energy (such as a gamma ray, or the kinetic energy of a beta particle), as described by Albert Einstein's mass–energy equivalence formula, E = mc², where m is the mass loss and c is the speed of light. This deficit is part of the binding energy of the new nucleus, and it is the non-recoverable loss of the energy that causes the fused particles to remain together in a state that requires this energy to separate.
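A minimal numerical sketch of this bookkeeping for helium-4, using rounded textbook particle masses (reference values, not precision data):

```python
# Mass defect and binding energy of helium-4 from E = mc^2,
# working in unified atomic mass units (u) with the standard
# conversion 1 u ~ 931.494 MeV/c^2.
M_PROTON = 1.007276   # u
M_NEUTRON = 1.008665  # u
M_HELIUM4 = 4.001506  # u (nuclear mass, approximate)
U_TO_MEV = 931.494    # MeV per u

mass_defect = 2 * M_PROTON + 2 * M_NEUTRON - M_HELIUM4  # u
binding_energy = mass_defect * U_TO_MEV                 # MeV

print(f"mass defect    = {mass_defect:.6f} u")
print(f"binding energy ~ {binding_energy:.1f} MeV")  # ~28.3 MeV
```

That missing ~28.3 MeV (about 0.03 u of the constituents' mass) is the energy released when the nucleus forms, and the energy required to pull it apart again.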
The fusion of two nuclei that creates a larger nucleus with a lower atomic number than iron and nickel—a total nucleon number of about 60—is usually an exothermic process that releases more energy than is required to bring them together. It is this energy-releasing process that makes nuclear fusion in stars a self-sustaining reaction. For heavier nuclei, the binding energy per nucleon begins to decrease. That means that a fusion process producing a nucleus that has an atomic number higher than about 26, and a mass number higher than about 60, is an endothermic process. Thus, more massive nuclei cannot undergo an energy-producing fusion reaction that can sustain the hydrostatic equilibrium of a star.
Electron cloud
The electrons in an atom are attracted to the protons in the nucleus by the electromagnetic force. This force binds the electrons inside an electrostatic potential well surrounding the smaller nucleus, which means that an external source of energy is needed for the electron to escape. The closer an electron is to the nucleus, the greater the attractive force. Hence electrons bound near the center of the potential well require more energy to escape than those at greater separations.
Electrons, like other particles, have properties of both a particle and a wave. The electron cloud is a region inside the potential well where each electron forms a type of three-dimensional standing wave—a wave form that does not move relative to the nucleus. This behavior is defined by an atomic orbital, a mathematical function that characterises the probability that an electron appears to be at a particular location when its position is measured. Only a discrete (or quantized) set of these orbitals exist around the nucleus, as other possible wave patterns rapidly decay into a more stable form. Orbitals can have one or more ring or node structures, and differ from each other in size, shape and orientation.
Each atomic orbital corresponds to a particular energy level of the electron. The electron can change its state to a higher energy level by absorbing a photon with sufficient energy to boost it into the new quantum state. Likewise, through spontaneous emission, an electron in a higher energy state can drop to a lower energy state while radiating the excess energy as a photon. These characteristic energy values, defined by the differences in the energies of the quantum states, are responsible for atomic spectral lines.
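A short sketch of how a level difference maps to a spectral line, assuming the Bohr energies for hydrogen quoted earlier (1239.84 eV·nm is the standard value of the product hc):

```python
# Wavelength of the photon emitted when a hydrogen electron drops
# from n = 3 to n = 2 (the red Balmer-alpha line).
HC = 1239.84  # eV*nm, Planck's constant times the speed of light

def hydrogen_level(n):
    """Bohr-model energy of hydrogen level n, in eV."""
    return -13.6 / n**2

delta_e = hydrogen_level(3) - hydrogen_level(2)  # ~1.89 eV
wavelength = HC / delta_e                        # ~656 nm
print(f"dE = {delta_e:.3f} eV -> lambda = {wavelength:.0f} nm")
```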
The amount of energy needed to remove or add an electron—the electron binding energy—is far less than the binding energy of nucleons. For example, it requires only 13.6 eV to strip a ground-state electron from a hydrogen atom, compared to 2.23 million eV for splitting a deuterium nucleus. Atoms are electrically neutral if they have an equal number of protons and electrons. Atoms that have either a deficit or a surplus of electrons are called ions. Electrons that are farthest from the nucleus may be transferred to other nearby atoms or shared between atoms. By this mechanism, atoms are able to bond into molecules and other types of chemical compounds like ionic and covalent network crystals.
Properties
Nuclear properties
By definition, any two atoms with an identical number of protons in their nuclei belong to the same chemical element. Atoms with equal numbers of protons but a different number of neutrons are different isotopes of the same element. For example, all hydrogen atoms contain exactly one proton, but isotopes exist with no neutrons (hydrogen-1, by far the most common form, also called protium), one neutron (deuterium), two neutrons (tritium) and more than two neutrons. The known elements form a set of atomic numbers, from the single-proton element hydrogen up to the 118-proton element oganesson. All known isotopes of elements with atomic numbers greater than 82 are radioactive, although the radioactivity of element 83 (bismuth) is so slight as to be practically negligible.
About 339 nuclides occur naturally on Earth, of which 251 (about 74%) have not been observed to decay, and are referred to as "stable isotopes". Only 90 nuclides are stable theoretically, while another 161 (bringing the total to 251) have not been observed to decay, even though in theory it is energetically possible. These are also formally classified as "stable". An additional 35 radioactive nuclides have half-lives longer than 100 million years, and are long-lived enough to have been present since the birth of the Solar System. This collection of 286 nuclides are known as primordial nuclides. Finally, an additional 53 short-lived nuclides are known to occur naturally, as daughter products of primordial nuclide decay (such as radium from uranium), or as products of natural energetic processes on Earth, such as cosmic ray bombardment (for example, carbon-14).
For 80 of the chemical elements, at least one stable isotope exists. As a rule, there is only a handful of stable isotopes for each of these elements, the average being 3.1 stable isotopes per element. Twenty-six "monoisotopic elements" have only a single stable isotope, while the largest number of stable isotopes observed for any element is ten, for the element tin. Elements 43, 61, and all elements numbered 83 or higher have no stable isotopes.
Stability of isotopes is affected by the ratio of protons to neutrons, and also by the presence of certain "magic numbers" of neutrons or protons that represent closed and filled quantum shells. These quantum shells correspond to a set of energy levels within the shell model of the nucleus; filled shells, such as the filled shell of 50 protons for tin, confer unusual stability on the nuclide. Of the 251 known stable nuclides, only four have both an odd number of protons and odd number of neutrons: hydrogen-2 (deuterium), lithium-6, boron-10, and nitrogen-14. (Tantalum-180m is odd-odd and observationally stable, but is predicted to decay with a very long half-life.) Also, only four naturally occurring, radioactive odd-odd nuclides have a half-life over a billion years: potassium-40, vanadium-50, lanthanum-138, and lutetium-176. Most odd-odd nuclei are highly unstable with respect to beta decay, because the decay products are even-even, and are therefore more strongly bound, due to nuclear pairing effects.
Mass
The large majority of an atom's mass comes from the protons and neutrons that make it up. The total number of these particles (called "nucleons") in a given atom is called the mass number. It is a positive integer and dimensionless (instead of having dimension of mass), because it expresses a count. An example of use of a mass number is "carbon-12," which has 12 nucleons (six protons and six neutrons).
The actual mass of an atom at rest is often expressed in daltons (Da), also called the unified atomic mass unit (u). This unit is defined as a twelfth of the mass of a free neutral atom of carbon-12, which is approximately 1.66×10⁻²⁷ kg. Hydrogen-1 (the lightest isotope of hydrogen, which is also the nuclide with the lowest mass) has an atomic weight of 1.007825 Da. The value of this number is called the atomic mass. A given atom has an atomic mass approximately equal (within 1%) to its mass number times the atomic mass unit (for example the mass of a nitrogen-14 is roughly 14 Da), but this number will not be exactly an integer except (by definition) in the case of carbon-12. The heaviest stable atom is lead-208, with a mass of about 207.977 Da.
As even the most massive atoms are far too light to work with directly, chemists instead use the unit of moles. One mole of atoms of any element always has the same number of atoms (about 6.022×10²³, the Avogadro number). This number was chosen so that if an element has an atomic mass of 1 u, a mole of atoms of that element has a mass close to one gram. Because of the definition of the unified atomic mass unit, each carbon-12 atom has an atomic mass of exactly 12 Da, and so a mole of carbon-12 atoms weighs exactly 0.012 kg.
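For example, a back-of-the-envelope count of the atoms in a small sample (a sketch; the Avogadro number has been fixed exactly since the 2019 SI redefinition):

```python
# Number of atoms in 1 gram of carbon-12 via the mole concept.
AVOGADRO = 6.02214076e23  # atoms per mole (exact since 2019)
MOLAR_MASS_C12 = 12.0     # g/mol, exact to within experimental precision

mass_g = 1.0
atoms = mass_g / MOLAR_MASS_C12 * AVOGADRO
print(f"{atoms:.3e} atoms")  # ~5.0e22
```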
Shape and size
Atoms lack a well-defined outer boundary, so their dimensions are usually described in terms of an atomic radius. This is a measure of the distance out to which the electron cloud extends from the nucleus. This assumes the atom to exhibit a spherical shape, which is only obeyed for atoms in vacuum or free space. Atomic radii may be derived from the distances between two nuclei when the two atoms are joined in a chemical bond. The radius varies with the location of an atom on the atomic chart, the type of chemical bond, the number of neighboring atoms (coordination number) and a quantum mechanical property known as spin. On the periodic table of the elements, atom size tends to increase when moving down columns, but decrease when moving across rows (left to right). Consequently, the smallest atom is helium with a radius of 32 pm, while one of the largest is caesium at 225 pm.
When subjected to external forces, like electrical fields, the shape of an atom may deviate from spherical symmetry. The deformation depends on the field magnitude and the orbital type of outer shell electrons, as shown by group-theoretical considerations. Aspherical deviations might be elicited for instance in crystals, where large crystal-electrical fields may occur at low-symmetry lattice sites. Significant ellipsoidal deformations have been shown to occur for sulfur ions and chalcogen ions in pyrite-type compounds.
Atomic dimensions are thousands of times smaller than the wavelengths of light (400–700 nm) so they cannot be viewed using an optical microscope, although individual atoms can be observed using a scanning tunneling microscope. To visualize the minuteness of the atom, consider that a typical human hair is about 1 million carbon atoms in width. A single drop of water contains about 2 sextillion (2×10²¹) atoms of oxygen, and twice the number of hydrogen atoms. A single carat diamond with a mass of 2×10⁻⁴ kg contains about 10 sextillion (10²²) atoms of carbon. If an apple were magnified to the size of the Earth, then the atoms in the apple would be approximately the size of the original apple.
Radioactive decay
Every element has one or more isotopes that have unstable nuclei that are subject to radioactive decay, causing the nucleus to emit particles or electromagnetic radiation. Radioactivity can occur when the radius of a nucleus is large compared with the radius of the strong force, which only acts over distances on the order of 1 fm.
The most common forms of radioactive decay are:
Alpha decay: this process is caused when the nucleus emits an alpha particle, which is a helium nucleus consisting of two protons and two neutrons. The result of the emission is a new element with a lower atomic number.
Beta decay (and electron capture): these processes are regulated by the weak force, and result from a transformation of a neutron into a proton, or a proton into a neutron. The neutron to proton transition is accompanied by the emission of an electron and an antineutrino, while proton to neutron transition (except in electron capture) causes the emission of a positron and a neutrino. The electron or positron emissions are called beta particles. Beta decay either increases or decreases the atomic number of the nucleus by one. Electron capture is more common than positron emission, because it requires less energy. In this type of decay, an electron is absorbed by the nucleus, rather than a positron emitted from the nucleus. A neutrino is still emitted in this process, and a proton changes to a neutron.
Gamma decay: this process results from a change in the energy level of the nucleus to a lower state, resulting in the emission of electromagnetic radiation. The excited state of a nucleus which results in gamma emission usually occurs following the emission of an alpha or a beta particle. Thus, gamma decay usually follows alpha or beta decay.
Other more rare types of radioactive decay include ejection of neutrons or protons or clusters of nucleons from a nucleus, or more than one beta particle. An analog of gamma emission which allows excited nuclei to lose energy in a different way, is internal conversion—a process that produces high-speed electrons that are not beta rays, followed by production of high-energy photons that are not gamma rays. A few large nuclei explode into two or more charged fragments of varying masses plus several neutrons, in a decay called spontaneous nuclear fission.
Each radioactive isotope has a characteristic decay time period—the half-life—that is determined by the amount of time needed for half of a sample to decay. This is an exponential decay process that steadily decreases the proportion of the remaining isotope by 50% every half-life. Hence after two half-lives have passed only 25% of the isotope is present, and so forth.
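That halving rule is easy to state in code (a minimal sketch; carbon-14's half-life is the commonly quoted figure):

```python
# Fraction of a radioactive sample remaining after time t,
# given the isotope's half-life (both in the same time units).
def remaining_fraction(t, half_life):
    return 0.5 ** (t / half_life)

# Carbon-14 (half-life ~5,730 years) after two half-lives:
print(remaining_fraction(11460, 5730))  # 0.25
```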
Magnetic moment
Elementary particles possess an intrinsic quantum mechanical property known as spin. This is analogous to the angular momentum of an object that is spinning around its center of mass, although strictly speaking these particles are believed to be point-like and cannot be said to be rotating. Spin is measured in units of the reduced Planck constant (ħ), with electrons, protons and neutrons all having spin ½ ħ, or "spin-½". In an atom, electrons in motion around the nucleus possess orbital angular momentum in addition to their spin, while the nucleus itself possesses angular momentum due to its nuclear spin.
The magnetic field produced by an atom—its magnetic moment—is determined by these various forms of angular momentum, just as a rotating charged object classically produces a magnetic field, but the most dominant contribution comes from electron spin. Because electrons obey the Pauli exclusion principle, under which no two electrons may be found in the same quantum state, bound electrons pair up with each other, with one member of each pair in a spin up state and the other in the opposite, spin down state. These spins thus cancel each other out, reducing the total magnetic dipole moment to zero in some atoms with an even number of electrons.
In ferromagnetic elements such as iron, cobalt and nickel, an odd number of electrons leads to an unpaired electron and a net overall magnetic moment. The orbitals of neighboring atoms overlap and a lower energy state is achieved when the spins of unpaired electrons are aligned with each other, a spontaneous process known as an exchange interaction. When the magnetic moments of ferromagnetic atoms are lined up, the material can produce a measurable macroscopic field. Paramagnetic materials have atoms with magnetic moments that line up in random directions when no magnetic field is present, but the magnetic moments of the individual atoms line up in the presence of a field.
The nucleus of an atom will have no spin when it has even numbers of both neutrons and protons, but for other cases of odd numbers, the nucleus may have a spin. Normally nuclei with spin are aligned in random directions because of thermal equilibrium, but for certain elements (such as xenon-129) it is possible to polarize a significant proportion of the nuclear spin states so that they are aligned in the same direction—a condition called hyperpolarization. This has important applications in magnetic resonance imaging.
Energy levels
The potential energy of an electron in an atom is negative relative to when the distance from the nucleus goes to infinity; its dependence on the electron's position reaches the minimum inside the nucleus, roughly in inverse proportion to the distance. In the quantum-mechanical model, a bound electron can occupy only a set of states centered on the nucleus, and each state corresponds to a specific energy level; see time-independent Schrödinger equation for a theoretical explanation. An energy level can be measured by the amount of energy needed to unbind the electron from the atom, and is usually given in units of electronvolts (eV). The lowest energy state of a bound electron is called the ground state, i.e., stationary state, while an electron transition to a higher level results in an excited state. The electron's energy increases with the principal quantum number n because the (average) distance to the nucleus increases. Dependence of the energy on the azimuthal quantum number ℓ is caused not by the electrostatic potential of the nucleus, but by interaction between electrons.
For an electron to transition between two different states, e.g. from the ground state to the first excited state, it must absorb or emit a photon whose energy matches the difference between the energies of those levels, as first described by the Bohr model; these energies can be calculated precisely using the Schrödinger equation. Electrons jump between orbitals in a particle-like fashion. For example, if a single photon strikes the electrons, only a single electron changes states in response to the photon; see Electron properties.
The energy of an emitted photon is proportional to its frequency, so these specific energy levels appear as distinct bands in the electromagnetic spectrum. Each element has a characteristic spectrum that can depend on the nuclear charge, subshells filled by electrons, the electromagnetic interactions between the electrons and other factors.
When a continuous spectrum of energy is passed through a gas or plasma, some of the photons are absorbed by atoms, causing electrons to change their energy level. Those excited electrons that remain bound to their atom spontaneously emit this energy as a photon, traveling in a random direction, and so drop back to lower energy levels. Thus the atoms behave like a filter that forms a series of dark absorption bands in the energy output. (An observer viewing the atoms from a direction that does not include the continuous spectrum in the background instead sees a series of emission lines from the photons emitted by the atoms.) Spectroscopic measurements of the strength and width of atomic spectral lines allow the composition and physical properties of a substance to be determined.
Close examination of the spectral lines reveals that some display a fine structure splitting. This occurs because of spin–orbit coupling, which is an interaction between the spin and motion of the outermost electron. When an atom is in an external magnetic field, spectral lines become split into three or more components; a phenomenon called the Zeeman effect. This is caused by the interaction of the magnetic field with the magnetic moment of the atom and its electrons. Some atoms can have multiple electron configurations with the same energy level, which thus appear as a single spectral line. The interaction of the magnetic field with the atom shifts these electron configurations to slightly different energy levels, resulting in multiple spectral lines. The presence of an external electric field can cause a comparable splitting and shifting of spectral lines by modifying the electron energy levels, a phenomenon called the Stark effect.
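In the usual weak-field treatment, the Zeeman shift of a level with magnetic quantum number m_J and Landé g-factor g_J is

```latex
\Delta E = g_J \, m_J \, \mu_B \, B,
```

where μ_B is the Bohr magneton; the spacing between the split components therefore grows linearly with the field strength B.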
If a bound electron is in an excited state, an interacting photon with the proper energy can cause stimulated emission of a photon with a matching energy level. For this to occur, the electron must drop to a lower energy state that has an energy difference matching the energy of the interacting photon. The emitted photon and the interacting photon then move off in parallel and with matching phases. That is, the wave patterns of the two photons are synchronized. This physical property is used to make lasers, which can emit a coherent beam of light energy in a narrow frequency band.
Valence and bonding behavior
Valency is the combining power of an element. It is determined by the number of bonds it can form to other atoms or groups. The outermost electron shell of an atom in its uncombined state is known as the valence shell, and the electrons in that shell are called valence electrons. The number of valence electrons determines the bonding behavior with other atoms. Atoms tend to chemically react with each other in a manner that fills (or empties) their outer valence shells. For example, a transfer of a single electron between atoms is a useful approximation for bonds that form between atoms with one electron more than a filled shell, and others that are one electron short of a full shell, such as occurs in the compound sodium chloride and other chemical ionic salts. Many elements display multiple valences, or tendencies to share differing numbers of electrons in different compounds. Thus, chemical bonding between these elements takes many forms of electron-sharing that are more than simple electron transfers. Examples include the element carbon and the organic compounds.
The chemical elements are often displayed in a periodic table that is laid out to display recurring chemical properties, and elements with the same number of valence electrons form a group that is aligned in the same column of the table. (The horizontal rows correspond to the filling of a quantum shell of electrons.) The elements at the far right of the table have their outer shell completely filled with electrons, which results in chemically inert elements known as the noble gases.
States
Quantities of atoms are found in different states of matter that depend on the physical conditions, such as temperature and pressure. By varying the conditions, materials can transition between solids, liquids, gases, and plasmas. Within a state, a material can also exist in different allotropes. An example of this is solid carbon, which can exist as graphite or diamond. Gaseous allotropes exist as well, such as dioxygen and ozone.
At temperatures close to absolute zero, atoms can form a Bose–Einstein condensate, at which point quantum mechanical effects, which are normally only observed at the atomic scale, become apparent on a macroscopic scale. This super-cooled collection of atoms then behaves as a single super atom, which may allow fundamental checks of quantum mechanical behavior.
Identification
While atoms are too small to be seen, devices such as the scanning tunneling microscope (STM) enable their visualization at the surfaces of solids. The microscope uses the quantum tunneling phenomenon, which allows particles to pass through a barrier that would be insurmountable in the classical perspective. Electrons tunnel through the vacuum between two biased electrodes, providing a tunneling current that is exponentially dependent on their separation. One electrode is a sharp tip ideally ending with a single atom. At each point of the scan of the surface the tip's height is adjusted so as to keep the tunneling current at a set value. How much the tip moves toward and away from the surface is interpreted as the height profile. For low bias, the microscope images the averaged electron orbitals across closely packed energy levels—the local density of the electronic states near the Fermi level. Because of the distances involved, both electrodes need to be extremely stable; only then can periodicities be observed that correspond to individual atoms. The method alone is not chemically specific, and cannot identify the atomic species present at the surface.
Atoms can be easily identified by their mass. If an atom is ionized by removing one of its electrons, its trajectory when it passes through a magnetic field will bend. The radius by which the trajectory of a moving ion is turned by the magnetic field is determined by the mass of the atom. The mass spectrometer uses this principle to measure the mass-to-charge ratio of ions. If a sample contains multiple isotopes, the mass spectrometer can determine the proportion of each isotope in the sample by measuring the intensity of the different beams of ions. Techniques to vaporize atoms include inductively coupled plasma atomic emission spectroscopy and inductively coupled plasma mass spectrometry, both of which use a plasma to vaporize samples for analysis.
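A minimal sketch of that relation for singly charged, nonrelativistic ions (the speed and field values below are illustrative, not instrument specifications):

```python
# Radius of curvature r = m*v / (q*B) of an ion in a magnetic field:
# heavier ions bend less, which is what a mass spectrometer exploits.
E_CHARGE = 1.602176634e-19  # C, elementary charge
DALTON = 1.66053907e-27     # kg per dalton

def bend_radius(mass_da, speed, b_field, charge=E_CHARGE):
    """Radius of the circular path of an ion, in metres."""
    return mass_da * DALTON * speed / (charge * b_field)

# Two isotopes at the same speed and field separate in radius:
for mass in (12.0, 13.003):  # carbon-12 vs carbon-13, in Da
    print(mass, bend_radius(mass, speed=1e5, b_field=0.5))
```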
The atom-probe tomograph has sub-nanometer resolution in 3-D and can chemically identify individual atoms using time-of-flight mass spectrometry.
Electron emission techniques such as X-ray photoelectron spectroscopy (XPS) and Auger electron spectroscopy (AES), which measure the binding energies of the core electrons, are used to identify the atomic species present in a sample in a non-destructive way. With proper focusing both can be made area-specific. Another such method is electron energy loss spectroscopy (EELS), which measures the energy loss of an electron beam within a transmission electron microscope when it interacts with a portion of a sample.
Spectra of excited states can be used to analyze the atomic composition of distant stars. Specific light wavelengths contained in the observed light from stars can be separated out and related to the quantized transitions in free gas atoms. These colors can be replicated using a gas-discharge lamp containing the same element. Helium was discovered in this way in the spectrum of the Sun 23 years before it was found on Earth.
Origin and current state
Baryonic matter forms about 4% of the total energy density of the observable universe, with an average density of about 0.25 particles/m³ (mostly protons and electrons). Within a galaxy such as the Milky Way, particles have a much higher concentration, with the density of matter in the interstellar medium (ISM) ranging from 10⁵ to 10⁹ atoms/m³. The Sun is believed to be inside the Local Bubble, so the density in the solar neighborhood is only about 10³ atoms/m³. Stars form from dense clouds in the ISM, and the evolutionary processes of stars result in the steady enrichment of the ISM with elements more massive than hydrogen and helium.
Up to 95% of the Milky Way's baryonic matter is concentrated inside stars, where conditions are unfavorable for atomic matter. The total baryonic mass is about 10% of the mass of the galaxy; the remainder of the mass is an unknown dark matter. High temperature inside stars makes most "atoms" fully ionized, that is, separates all electrons from the nuclei. In stellar remnants—with the exception of their surface layers—an immense pressure makes electron shells impossible.
Formation
Electrons are thought to have existed in the Universe since the early stages of the Big Bang. Atomic nuclei form in nucleosynthesis reactions. In about three minutes, Big Bang nucleosynthesis produced most of the helium, lithium, and deuterium in the Universe, and perhaps some of the beryllium and boron.
The ubiquity and stability of atoms rely on their binding energy, which means that an atom has a lower energy than an unbound system of the nucleus and electrons. Where the temperature is much higher than the ionization potential, the matter exists in the form of plasma—a gas of positively charged ions (possibly bare nuclei) and electrons. When the temperature drops below the ionization potential, atoms become statistically favorable. Atoms (complete with bound electrons) came to dominate over charged particles 380,000 years after the Big Bang—an epoch called recombination, when the expanding Universe cooled enough to allow electrons to become attached to nuclei.
Since the Big Bang, which produced no carbon or heavier elements, atomic nuclei have been combined in stars through the process of nuclear fusion to produce more of the element helium, and (via the triple-alpha process) the sequence of elements from carbon up to iron; see stellar nucleosynthesis for details.
Isotopes such as lithium-6, as well as some beryllium and boron are generated in space through cosmic ray spallation. This occurs when a high-energy proton strikes an atomic nucleus, causing large numbers of nucleons to be ejected.
Elements heavier than iron were produced in supernovae and colliding neutron stars through the r-process, and in AGB stars through the s-process, both of which involve the capture of neutrons by atomic nuclei. Elements such as lead formed largely through the radioactive decay of heavier elements.
Earth
Most of the atoms that make up the Earth and its inhabitants were present in their current form in the nebula that collapsed out of a molecular cloud to form the Solar System. The rest are the result of radioactive decay, and their relative proportion can be used to determine the age of the Earth through radiometric dating. Most of the helium in the crust of the Earth (about 99% of the helium from gas wells, as shown by its lower abundance of helium-3) is a product of alpha decay.
There are a few trace atoms on Earth that were not present at the beginning (i.e., not "primordial"), nor are results of radioactive decay. Carbon-14 is continuously generated by cosmic rays in the atmosphere. Some atoms on Earth have been artificially generated either deliberately or as by-products of nuclear reactors or explosions. Of the transuranic elements—those with atomic numbers greater than 92—only plutonium and neptunium occur naturally on Earth. Transuranic elements have radioactive lifetimes shorter than the current age of the Earth and thus identifiable quantities of these elements have long since decayed, with the exception of traces of plutonium-244 possibly deposited by cosmic dust. Natural deposits of plutonium and neptunium are produced by neutron capture in uranium ore.
The Earth contains approximately 1.33×10⁵⁰ atoms. Although small numbers of independent atoms of noble gases exist, such as argon, neon, and helium, 99% of the atmosphere is bound in the form of molecules, including carbon dioxide and diatomic oxygen and nitrogen. At the surface of the Earth, an overwhelming majority of atoms combine to form various compounds, including water, salt, silicates, and oxides. Atoms can also combine to create materials that do not consist of discrete molecules, including crystals and liquid or solid metals. This atomic matter forms networked arrangements that lack the particular type of small-scale interrupted order associated with molecular matter.
Rare and theoretical forms
Superheavy elements
All nuclides with atomic numbers higher than 82 (lead) are known to be radioactive. No nuclide with an atomic number exceeding 92 (uranium) exists on Earth as a primordial nuclide, and heavier elements generally have shorter half-lives. Nevertheless, an "island of stability" encompassing relatively long-lived isotopes of superheavy elements with atomic numbers 110 to 114 might exist. Predictions for the half-life of the most stable nuclide on the island range from a few minutes to millions of years. In any case, superheavy elements (with Z > 104) would not exist due to increasing Coulomb repulsion (which results in spontaneous fission with increasingly short half-lives) in the absence of any stabilizing effects.
Exotic matter
Each particle of matter has a corresponding antimatter particle with the opposite electrical charge. Thus, the positron is a positively charged antielectron and the antiproton is a negatively charged equivalent of a proton. When a matter and corresponding antimatter particle meet, they annihilate each other. Because of this, along with an imbalance between the number of matter and antimatter particles, the latter are rare in the universe. The causes of this imbalance are not yet fully understood, although theories of baryogenesis may offer an explanation. As a result, no antimatter atoms have been discovered in nature. In 1996, the antimatter counterpart of the hydrogen atom (antihydrogen) was synthesized at the CERN laboratory in Geneva.
Other exotic atoms have been created by replacing one of the protons, neutrons or electrons with other particles that have the same charge. For example, an electron can be replaced by a more massive muon, forming a muonic atom. These types of atoms can be used to test fundamental predictions of physics.
https://en.wikipedia.org/wiki/Aluminium
Aluminium (or aluminum in North American English) is a chemical element; it has symbol Al and atomic number 13. It has a density lower than that of other common metals, about one-third that of steel. Aluminium has a great affinity towards oxygen, forming a protective layer of oxide on the surface when exposed to air. It visually resembles silver, both in its color and in its great ability to reflect light. It is soft, nonmagnetic, and ductile. It has one stable isotope, 27Al, which is highly abundant, making aluminium the 12th-most abundant element in the universe. The radioactivity of 26Al leads to it being used in radiometric dating.
Chemically, aluminium is a post-transition metal in the boron group; as is common for the group, aluminium forms compounds primarily in the +3 oxidation state. The aluminium cation Al3+ is small and highly charged; as such, it has more polarizing power, and bonds formed by aluminium have a more covalent character. The strong affinity of aluminium for oxygen leads to the common occurrence of its oxides in nature. Aluminium is found on Earth primarily in rocks in the crust, where it is the third-most abundant element, after oxygen and silicon, rather than in the mantle, and virtually never as the free metal. It is obtained industrially by mining bauxite, a sedimentary rock rich in aluminium minerals.
The discovery of aluminium was announced in 1825 by Danish physicist Hans Christian Ørsted. The first industrial production of aluminium was initiated by French chemist Henri Étienne Sainte-Claire Deville in 1856. Aluminium became much more available to the public with the Hall–Héroult process developed independently by French engineer Paul Héroult and American engineer Charles Martin Hall in 1886, and the mass production of aluminium led to its extensive use in industry and everyday life. In the First and Second World Wars, aluminium was a crucial strategic resource for aviation. In 1954, aluminium became the most produced non-ferrous metal, surpassing copper. In the 21st century, most aluminium was consumed in transportation, engineering, construction, and packaging in the United States, Western Europe, and Japan.
Despite its prevalence in the environment, no living organism is known to metabolize aluminium salts, but aluminium is well tolerated by plants and animals. Because of the abundance of these salts, the potential for a biological role for them is of interest, and studies are ongoing.
Physical characteristics
Isotopes
Of aluminium isotopes, only 27Al is stable. This situation is common for elements with an odd atomic number. It is the only primordial aluminium isotope, i.e. the only one that has existed on Earth in its current form since the formation of the planet. It is therefore a mononuclidic element and its standard atomic weight is virtually the same as that of the isotope. This makes aluminium very useful in nuclear magnetic resonance (NMR), as its single stable isotope has a high NMR sensitivity. The standard atomic weight of aluminium is low in comparison with many other metals.
All other isotopes of aluminium are radioactive. The most stable of these is 26Al: while it was present along with stable 27Al in the interstellar medium from which the Solar System formed, having been produced by stellar nucleosynthesis as well, its half-life is only 717,000 years and therefore a detectable amount has not survived since the formation of the planet. However, minute traces of 26Al are produced from argon in the atmosphere by spallation caused by cosmic ray protons. The ratio of 26Al to 10Be has been used for radiodating of geological processes over 105 to 106 year time scales, in particular transport, deposition, sediment storage, burial times, and erosion. Most meteorite scientists believe that the energy released by the decay of 26Al was responsible for the melting and differentiation of some asteroids after their formation 4.55 billion years ago.
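As a sketch of how a 26Al/10Be burial age can be extracted, assume both nuclides simply decay after the sample is shielded from cosmic rays and that the initial production ratio is known (the half-lives are the commonly quoted values; real analyses also model production and erosion):

```python
import math

# Burial age from the decay of the 26Al/10Be ratio: once shielded,
# 26Al (shorter half-life) decays faster than 10Be, so the ratio
# falls exponentially with effective decay constant lam26 - lam10.
T_HALF_AL26 = 0.717  # Myr
T_HALF_BE10 = 1.387  # Myr

def burial_age(ratio_now, ratio_initial):
    """Burial age in Myr, under the closed-system assumption."""
    lam26 = math.log(2) / T_HALF_AL26
    lam10 = math.log(2) / T_HALF_BE10
    return math.log(ratio_initial / ratio_now) / (lam26 - lam10)

# Example: measured ratio is half the assumed production ratio of ~6.75:
print(burial_age(3.375, 6.75))  # ~1.5 Myr
```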
The remaining isotopes of aluminium, with mass numbers ranging from 21 to 43, all have half-lives well under an hour. Three metastable states are known, all with half-lives under a minute.
Electron shell
An aluminium atom has 13 electrons, arranged in an electron configuration of [Ne] 3s2 3p1, with three electrons beyond a stable noble gas configuration. Accordingly, the combined first three ionization energies of aluminium are far lower than the fourth ionization energy alone. Such an electron configuration is shared with the other well-characterized members of its group, boron, gallium, indium, and thallium; it is also expected for nihonium. Aluminium can surrender its three outermost electrons in many chemical reactions (see below). The electronegativity of aluminium is 1.61 (Pauling scale).
A free aluminium atom has a radius of 143 pm. With the three outermost electrons removed, the radius shrinks to 39 pm for a 4-coordinated atom or 53.5 pm for a 6-coordinated atom. At standard temperature and pressure, aluminium atoms (when not affected by atoms of other elements) form a face-centered cubic crystal system bound by metallic bonding provided by atoms' outermost electrons; hence aluminium (at these conditions) is a metal. This crystal system is shared by many other metals, such as lead and copper; the size of a unit cell of aluminium is comparable to that of those other metals. The system, however, is not shared by the other members of its group: boron has ionization energies too high to allow metallization, thallium has a hexagonal close-packed structure, and gallium and indium have unusual structures that are not close-packed like those of aluminium and thallium. The few electrons that are available for metallic bonding in aluminium are a probable cause for it being soft with a low melting point and low electrical resistivity.
Bulk
Aluminium metal has an appearance ranging from silvery white to dull gray depending on its surface roughness. Aluminium mirrors provide high reflectivity for light in the ultraviolet, visible (on par with silver), and far infrared regions. Aluminium is also good at reflecting solar radiation, although prolonged exposure to sunlight in air can deteriorate the reflectivity of the metal; this may be prevented if aluminium is anodized, which adds a protective layer of oxide on the surface.
The density of aluminium is 2.70 g/cm3, about 1/3 that of steel, much lower than other commonly encountered metals, making aluminium parts easily identifiable through their lightness. Aluminium's low density compared to most other metals arises from the fact that its nuclei are much lighter, while difference in the unit cell size does not compensate for this difference. The only lighter metals are the metals of groups 1 and 2, which apart from beryllium and magnesium are too reactive for structural use (and beryllium is very toxic). Aluminium is not as strong or stiff as steel, but the low density makes up for this in the aerospace industry and for many other applications where light weight and relatively high strength are crucial.
Pure aluminium is quite soft and lacking in strength. In most applications various aluminium alloys are used instead because of their higher strength and hardness. The yield strength of pure aluminium is 7–11 MPa, while aluminium alloys have yield strengths ranging from 200 MPa to 600 MPa. Aluminium is ductile, with a percent elongation of 50–70%, and malleable allowing it to be easily drawn and extruded. It is also easily machined and cast.
Aluminium is an excellent thermal and electrical conductor, having around 60% the conductivity of copper, both thermal and electrical, while having only 30% of copper's density. Aluminium is capable of superconductivity, with a superconducting critical temperature of 1.2 kelvin and a critical magnetic field of about 100 gauss (10 milliteslas). It is paramagnetic and thus essentially unaffected by static magnetic fields. The high electrical conductivity, however, means that it is strongly affected by alternating magnetic fields through the induction of eddy currents.
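The conductivity and density figures above imply that, per unit mass, aluminium conducts roughly twice as well as copper (one reason it is favored for overhead power lines):

```latex
\frac{\sigma_{\text{Al}}/\rho_{\text{Al}}}{\sigma_{\text{Cu}}/\rho_{\text{Cu}}}
\approx \frac{0.6\,\sigma_{\text{Cu}}}{0.3\,\rho_{\text{Cu}}}
\cdot \frac{\rho_{\text{Cu}}}{\sigma_{\text{Cu}}} = 2.
```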
Chemistry
Aluminium combines characteristics of pre- and post-transition metals. Since it has few available electrons for metallic bonding, like its heavier group 13 congeners, it has the characteristic physical properties of a post-transition metal, with longer-than-expected interatomic distances. Furthermore, as Al3+ is a small and highly charged cation, it is strongly polarizing and bonding in aluminium compounds tends towards covalency; this behavior is similar to that of beryllium (Be2+), and the two display an example of a diagonal relationship.
The underlying core under aluminium's valence shell is that of the preceding noble gas, whereas those of its heavier congeners gallium, indium, thallium, and nihonium also include a filled d-subshell and in some cases a filled f-subshell. Hence, the inner electrons of aluminium shield the valence electrons almost completely, unlike those of aluminium's heavier congeners. As such, aluminium is the most electropositive metal in its group, and its hydroxide is in fact more basic than that of gallium. Aluminium also bears minor similarities to the metalloid boron in the same group: AlX3 compounds are valence isoelectronic to BX3 compounds (they have the same valence electronic structure), and both behave as Lewis acids and readily form adducts. Additionally, one of the main motifs of boron chemistry is regular icosahedral structures, and aluminium forms an important part of many icosahedral quasicrystal alloys, including the Al–Zn–Mg class.
Aluminium has a high chemical affinity to oxygen, which renders it suitable for use as a reducing agent in the thermite reaction. A fine powder of aluminium reacts explosively on contact with liquid oxygen; under normal conditions, however, aluminium forms a thin oxide layer (~5 nm at room temperature) that protects the metal from further corrosion by oxygen, water, or dilute acid, a process termed passivation. Aluminium is not attacked by oxidizing acids because of its passivation. This allows aluminium to be used to store reagents such as nitric acid, concentrated sulfuric acid, and some organic acids.
In hot concentrated hydrochloric acid, aluminium reacts with water with evolution of hydrogen, and in aqueous sodium hydroxide or potassium hydroxide at room temperature to form aluminates—protective passivation under these conditions is negligible. Aqua regia also dissolves aluminium. Aluminium is corroded by dissolved chlorides, such as common sodium chloride. The oxide layer on aluminium is also destroyed by contact with mercury due to amalgamation or with salts of some electropositive metals. As such, the strongest aluminium alloys are less corrosion-resistant due to galvanic reactions with alloyed copper, and aluminium's corrosion resistance is greatly reduced by aqueous salts, particularly in the presence of dissimilar metals.
Aluminium reacts with most nonmetals upon heating, forming compounds such as aluminium nitride (AlN), aluminium sulfide (Al2S3), and the aluminium halides (AlX3). It also forms a wide range of intermetallic compounds involving metals from every group on the periodic table.
Inorganic compounds
The vast majority of compounds, including all aluminium-containing minerals and all commercially significant aluminium compounds, feature aluminium in the oxidation state 3+. The coordination number of such compounds varies, but generally Al3+ is either six- or four-coordinate. Almost all compounds of aluminium(III) are colorless.
In aqueous solution, Al3+ exists as the hexaaqua cation [Al(H2O)6]3+, which has an approximate Ka of 10⁻⁵. Such solutions are acidic as this cation can act as a proton donor and progressively hydrolyze until a precipitate of aluminium hydroxide, Al(OH)3, forms. This is useful for clarification of water, as the precipitate nucleates on suspended particles in the water, hence removing them. Increasing the pH even further leads to the hydroxide dissolving again as aluminate, [Al(H2O)2(OH)4]−, is formed.
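To see why such solutions are acidic, a rough weak-acid estimate for an assumed 0.01 M Al3+ solution (illustrative numbers only) gives

```latex
[\mathrm{H^+}] \approx \sqrt{K_a C} = \sqrt{10^{-5} \times 10^{-2}}
\approx 3 \times 10^{-4}\ \mathrm{M}
\quad\Rightarrow\quad \mathrm{pH} \approx 3.5.
```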
Aluminium hydroxide forms both salts and aluminates and dissolves in acid and alkali, as well as on fusion with acidic and basic oxides. This behavior of Al(OH)3 is termed amphoterism and is characteristic of weakly basic cations that form insoluble hydroxides and whose hydrated species can also donate their protons. One effect of this is that aluminium salts with weak acids are hydrolyzed in water to the aquated hydroxide and the corresponding nonmetal hydride: for example, aluminium sulfide yields hydrogen sulfide. However, some salts like aluminium carbonate exist in aqueous solution but are unstable as such; and only incomplete hydrolysis takes place for salts with strong acids, such as the halides, nitrate, and sulfate. For similar reasons, anhydrous aluminium salts cannot be made by heating their "hydrates": hydrated aluminium chloride is in fact not AlCl3·6H2O but [Al(H2O)6]Cl3, and the Al–O bonds are so strong that heating is not sufficient to break them and form Al–Cl bonds. This reaction is observed instead:
2 [Al(H2O)6]Cl3 → Al2O3 + 6 HCl + 9 H2O (on heating)
All four trihalides are well known. Unlike the structures of the three heavier trihalides, aluminium fluoride (AlF3) features six-coordinate aluminium, which explains its involatility and insolubility as well as high heat of formation. Each aluminium atom is surrounded by six fluorine atoms in a distorted octahedral arrangement, with each fluorine atom being shared between the corners of two octahedra. Such {AlF6} units also exist in complex fluorides such as cryolite, Na3AlF6. AlF3 melts at 1,290 °C and is made by reaction of aluminium oxide with hydrogen fluoride gas at 700 °C.
With heavier halides, the coordination numbers are lower. The other trihalides are dimeric or polymeric with tetrahedral four-coordinate aluminium centers. Aluminium trichloride (AlCl3) has a layered polymeric structure below its melting point of 192.4 °C but transforms on melting to Al2Cl6 dimers. At higher temperatures those increasingly dissociate into trigonal planar AlCl3 monomers similar to the structure of BCl3. Aluminium tribromide and aluminium triiodide form Al2X6 dimers in all three phases and hence do not show such significant changes of properties upon phase change. These materials are prepared by treating aluminium with the halogen. The aluminium trihalides form many addition compounds or complexes; their Lewis acidic nature makes them useful as catalysts for the Friedel–Crafts reactions. Aluminium trichloride has major industrial uses involving this reaction, such as in the manufacture of anthraquinones and styrene; it is also often used as the precursor for many other aluminium compounds and as a reagent for converting nonmetal fluorides into the corresponding chlorides (a transhalogenation reaction).
Aluminium forms one stable oxide with the chemical formula Al2O3, commonly called alumina. It can be found in nature in the mineral corundum, α-alumina; there is also a γ-alumina phase. Its crystalline form, corundum, is very hard (Mohs hardness 9), has a high melting point of 2,045 °C, has very low volatility, is chemically inert, and is a good electrical insulator; it is often used in abrasives (such as toothpaste), as a refractory material, and in ceramics, as well as being the starting material for the electrolytic production of aluminium. Sapphire and ruby are impure corundum contaminated with trace amounts of other metals. The two main oxide-hydroxides, AlO(OH), are boehmite and diaspore. There are three main trihydroxides: bayerite, gibbsite, and nordstrandite, which differ in their crystalline structure (polymorphs). Many other intermediate and related structures are also known. Most are produced from ores by a variety of wet processes using acid and base. Heating the hydroxides leads to formation of corundum. These materials are of central importance to the production of aluminium and are themselves extremely useful. Some mixed oxide phases are also very useful, such as spinel (MgAl2O4), Na-β-alumina (NaAl11O17), and tricalcium aluminate (Ca3Al2O6, an important mineral phase in Portland cement).
The only stable chalcogenides under normal conditions are aluminium sulfide (Al2S3), selenide (Al2Se3), and telluride (Al2Te3). All three are prepared by direct reaction of their elements at about 1,000 °C and quickly hydrolyze completely in water to yield aluminium hydroxide and the respective hydrogen chalcogenide. As aluminium is a small atom relative to these chalcogens, these have four-coordinate tetrahedral aluminium with various polymorphs having structures related to wurtzite, with two-thirds of the possible metal sites occupied either in an orderly (α) or random (β) fashion; the sulfide also has a γ form related to γ-alumina, and an unusual high-temperature hexagonal form where half the aluminium atoms have tetrahedral four-coordination and the other half have trigonal bipyramidal five-coordination.
Four pnictides – aluminium nitride (AlN), aluminium phosphide (AlP), aluminium arsenide (AlAs), and aluminium antimonide (AlSb) – are known. They are all III-V semiconductors isoelectronic to silicon and germanium, all of which but AlN have the zinc blende structure. All four can be made by high-temperature (and possibly high-pressure) direct reaction of their component elements.
Aluminium alloys well with most other metals (with the exception of most alkali metals and group 13 metals), and over 150 intermetallics with other metals are known. Preparation involves heating the component metals together in a certain proportion, followed by gradual cooling and annealing. Bonding in them is predominantly metallic, and the crystal structure primarily depends on the efficiency of packing.
There are few compounds with lower oxidation states. A few aluminium(I) compounds exist: AlF, AlCl, AlBr, and AlI exist in the gaseous phase when the respective trihalide is heated with aluminium, and at cryogenic temperatures. A stable derivative of aluminium monoiodide is the cyclic adduct formed with triethylamine, Al4I4(NEt3)4. Al2O and Al2S also exist but are very unstable. Very simple aluminium(II) compounds are invoked or observed in the reactions of Al metal with oxidants. For example, aluminium monoxide, AlO, has been detected in the gas phase after explosion and in stellar absorption spectra. More thoroughly investigated are compounds of the formula R4Al2 which contain an Al–Al bond and where R is a large organic ligand.
Organoaluminium compounds and related hydrides
A variety of compounds of empirical formula AlR3 and AlR1.5Cl1.5 exist. The aluminium trialkyls and triaryls are reactive, volatile, and colorless liquids or low-melting solids. They catch fire spontaneously in air and react with water, thus necessitating precautions when handling them. They often form dimers, unlike their boron analogues, but this tendency diminishes for branched-chain alkyls (e.g. Pri, Bui, Me3CCH2); for example, triisobutylaluminium exists as an equilibrium mixture of the monomer and dimer. These dimers, such as trimethylaluminium (Al2Me6), usually feature tetrahedral Al centers formed by dimerization with some alkyl group bridging between both aluminium atoms. They are hard acids and react readily with ligands, forming adducts. In industry, they are mostly used in alkene insertion reactions, as discovered by Karl Ziegler, most importantly in "growth reactions" that form long-chain unbranched primary alkenes and alcohols, and in the low-pressure polymerization of ethene and propene. There are also some heterocyclic and cluster organoaluminium compounds involving Al–N bonds.
The industrially most important aluminium hydride is lithium aluminium hydride (LiAlH4), which is used as a reducing agent in organic chemistry. It can be produced from lithium hydride and aluminium trichloride. The simplest hydride, aluminium hydride or alane, is not as important. It is a polymer with the formula (AlH3)n, in contrast to the corresponding boron hydride that is a dimer with the formula (BH3)2.
Natural occurrence
Space
Aluminium's per-particle abundance in the Solar System is 3.15 ppm (parts per million). It is the twelfth most abundant of all elements and third most abundant among the elements that have odd atomic numbers, after hydrogen and nitrogen. The only stable isotope of aluminium, 27Al, is the eighteenth most abundant nucleus in the universe. It is created almost entirely after fusion of carbon in massive stars that will later become Type II supernovas: this fusion creates 26Mg, which upon capturing free protons and neutrons, becomes aluminium. Some smaller quantities of 27Al are created in hydrogen burning shells of evolved stars, where 26Mg can capture free protons. Essentially all aluminium now in existence is 27Al. 26Al was present in the early Solar System with abundance of 0.005% relative to 27Al but its half-life of 728,000 years is too short for any original nuclei to survive; 26Al is therefore extinct. Unlike for 27Al, hydrogen burning is the primary source of 26Al, with the nuclide emerging after a nucleus of 25Mg catches a free proton. However, the trace quantities of 26Al that do exist are the most common gamma ray emitter in the interstellar gas; if the original 26Al were still present, gamma ray maps of the Milky Way would be brighter.
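To see why essentially none of the primordial 26Al survives, a back-of-the-envelope decay calculation is enough. The sketch below assumes the 728,000-year half-life quoted above; the ~4.6-billion-year Solar System age is an assumption supplied for illustration:

```python
# Minimal sketch: fraction of primordial Al-26 surviving to the present.
HALF_LIFE_YR = 7.28e5        # half-life of Al-26, from the text
SOLAR_SYSTEM_AGE_YR = 4.6e9  # assumed age of the Solar System

elapsed_half_lives = SOLAR_SYSTEM_AGE_YR / HALF_LIFE_YR
surviving_fraction = 0.5 ** elapsed_half_lives

print(f"Elapsed half-lives: {elapsed_half_lives:,.0f}")  # ~6,300
print(f"Surviving fraction: {surviving_fraction:.3e}")
# 0.5**6300 is about 10**-1900, which underflows to 0.0 in double
# precision -- effectively none of the original Al-26 remains.
```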
Earth
Overall, the Earth is about 1.59% aluminium by mass (seventh in abundance by mass). Aluminium occurs in greater proportion in the Earth's crust than in the universe at large. This is because aluminium easily forms the oxide and becomes bound into rocks and stays in the Earth's crust, while less reactive metals sink to the core. In the Earth's crust, aluminium is the most abundant metallic element (8.23% by mass) and the third most abundant of all elements (after oxygen and silicon). A large number of silicates in the Earth's crust contain aluminium. In contrast, the Earth's mantle is only 2.38% aluminium by mass. Aluminium also occurs in seawater at a concentration of 0.41 μg/kg.
Because of its strong affinity for oxygen, aluminium is almost never found in the elemental state; instead it is found in oxides or silicates. Feldspars, the most common group of minerals in the Earth's crust, are aluminosilicates. Aluminium also occurs in the minerals beryl, cryolite, garnet, spinel, and turquoise. Impurities in Al2O3, such as chromium and iron, yield the gemstones ruby and sapphire, respectively. Native aluminium metal is extremely rare and can only be found as a minor phase in low oxygen fugacity environments, such as the interiors of certain volcanoes. Native aluminium has been reported in cold seeps in the northeastern continental slope of the South China Sea. It is possible that these deposits resulted from bacterial reduction of tetrahydroxoaluminate Al(OH)4−.
Although aluminium is a common and widespread element, not all aluminium minerals are economically viable sources of the metal. Almost all metallic aluminium is produced from the ore bauxite (AlOx(OH)3–2x). Bauxite occurs as a weathering product of low iron and silica bedrock in tropical climatic conditions. In 2017, most bauxite was mined in Australia, China, Guinea, and India.
History
The history of aluminium has been shaped by usage of alum. The first written record of alum, made by Greek historian Herodotus, dates back to the 5th century BCE. The ancients are known to have used alum as a dyeing mordant and for city defense. After the Crusades, alum, an indispensable good in the European fabric industry, was a subject of international commerce; it was imported to Europe from the eastern Mediterranean until the mid-15th century.
The nature of alum remained unknown. Around 1530, Swiss physician Paracelsus suggested alum was a salt of an earth of alum. In 1595, German doctor and chemist Andreas Libavius experimentally confirmed this. In 1722, German chemist Friedrich Hoffmann announced his belief that the base of alum was a distinct earth. In 1754, German chemist Andreas Sigismund Marggraf synthesized alumina by boiling clay in sulfuric acid and subsequently adding potash.
Attempts to produce aluminium date back to 1760. The first successful attempt, however, was completed in 1824 by Danish physicist and chemist Hans Christian Ørsted. He reacted anhydrous aluminium chloride with potassium amalgam, yielding a lump of metal looking similar to tin. He presented his results and demonstrated a sample of the new metal in 1825. In 1827, German chemist Friedrich Wöhler repeated Ørsted's experiments but did not identify any aluminium. (The reason for this inconsistency was only discovered in 1921.) He conducted a similar experiment in the same year by mixing anhydrous aluminium chloride with potassium (the Wöhler process) and produced a powder of aluminium. In 1845, he was able to produce small pieces of the metal and described some physical properties of this metal. For many years thereafter, Wöhler was credited as the discoverer of aluminium.
As Wöhler's method could not yield great quantities of aluminium, the metal remained rare; its cost exceeded that of gold. The first industrial production of aluminium was established in 1856 by French chemist Henri Etienne Sainte-Claire Deville and companions. Deville had discovered that aluminium trichloride could be reduced by sodium, which was more convenient and less expensive than potassium, which Wöhler had used. Even then, aluminium was still not of great purity, and the aluminium produced differed in properties from sample to sample. Because of its electricity-conducting capacity, aluminium was used as the cap of the Washington Monument, completed in 1885, the tallest structure in the world at the time. The non-corroding metal cap was intended to serve as a lightning rod peak.
The first industrial large-scale production method was independently developed in 1886 by French engineer Paul Héroult and American engineer Charles Martin Hall; it is now known as the Hall–Héroult process. The Hall–Héroult process converts alumina into metal. Austrian chemist Carl Joseph Bayer discovered a way of purifying bauxite to yield alumina, now known as the Bayer process, in 1889. Modern production of aluminium is based on the Bayer and Hall–Héroult processes.
As large-scale production caused aluminium prices to drop, the metal became widely used in jewelry, eyeglass frames, optical instruments, tableware, foil, and other everyday items in the 1890s and early 20th century. Aluminium's ability to form hard yet light alloys with other metals provided the metal with many uses at the time. During World War I, major governments demanded large shipments of aluminium for light, strong airframes; during World War II, demand by major governments for aviation was even higher.
From the early 20th century to 1980, the aluminium industry was characterized by cartelization, as aluminium firms colluded to keep prices high and stable. The first aluminium cartel, the Aluminium Association, was founded in 1901 by the Pittsburgh Reduction Company (renamed Alcoa in 1907) and Aluminium Industrie AG. The British Aluminium Company, Produits Chimiques d’Alais et de la Camargue, and Société Electro-Métallurgique de Froges also joined the cartel.
By the mid-20th century, aluminium had become a part of everyday life and an essential component of housewares. In 1954, production of aluminium surpassed that of copper, historically second in production only to iron, making it the most produced non-ferrous metal. During the mid-20th century, aluminium emerged as a civil engineering material, with building applications in both basic construction and interior finish work, and increasingly being used in military engineering, for both airplanes and land armor vehicle engines. Earth's first artificial satellite, launched in 1957, consisted of two joined aluminium semi-spheres, and all subsequent space vehicles have used aluminium to some extent. The aluminium can was invented in 1956 and first employed as a container for drinks in 1958.
Throughout the 20th century, the production of aluminium rose rapidly: while the world production of aluminium in 1900 was 6,800 metric tons, the annual production first exceeded 100,000 metric tons in 1916; 1,000,000 tons in 1941; 10,000,000 tons in 1971. In the 1970s, the increased demand for aluminium made it an exchange commodity; it entered the London Metal Exchange, the oldest industrial metal exchange in the world, in 1978. The output continued to grow: the annual production of aluminium exceeded 50,000,000 metric tons in 2013.
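As a rough illustration of how fast that growth was, the figures above imply a steady compound growth rate of about 8% per year over more than a century. The sketch below simply solves for the average annual rate between the 1900 and 2013 production figures quoted above:

```python
# Rough compound annual growth rate implied by the production figures
# quoted above (6,800 t in 1900; 50,000,000 t in 2013).
t0, p0 = 1900, 6.8e3   # year, metric tons
t1, p1 = 2013, 5.0e7

cagr = (p1 / p0) ** (1 / (t1 - t0)) - 1
print(f"Average annual growth, {t0}-{t1}: {cagr:.1%}")  # ~8.2% per year
```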
The real price for aluminium declined from $14,000 per metric ton in 1900 to $2,340 in 1948 (in 1998 United States dollars). Extraction and processing costs fell with technological progress and economies of scale. However, the need to exploit lower-grade, poorer-quality deposits and fast-rising input costs (above all, energy) increased the net cost of aluminium; the real price began to grow in the 1970s with the rise of energy costs. Production moved from the industrialized countries to countries where production was cheaper. Production costs in the late 20th century changed because of advances in technology, lower energy prices, exchange rates of the United States dollar, and alumina prices. The BRIC countries' combined share in primary production and primary consumption grew substantially in the first decade of the 21st century. China has accumulated an especially large share of the world's production thanks to an abundance of resources, cheap energy, and governmental stimuli; it also increased its consumption share from 2% in 1972 to 40% in 2010. In the United States, Western Europe, and Japan, most aluminium was consumed in transportation, engineering, construction, and packaging. In 2021, prices for industrial metals such as aluminium soared to near-record levels as energy shortages in China drove up costs for electricity.
Etymology
The names aluminium and aluminum are derived from the word alumine, an obsolete term for alumina, the primary naturally occurring oxide of aluminium. Alumine was borrowed from French, which in turn derived it from alumen, the classical Latin name for alum, the mineral from which it was collected. The Latin word alumen stems from the Proto-Indo-European root *alu- meaning "bitter" or "beer".
Origins
British chemist Humphry Davy, who performed a number of experiments aimed to isolate the metal, is credited as the person who named the element. The first name proposed for the metal to be isolated from alum was alumium, which Davy suggested in an 1808 article on his electrochemical research, published in Philosophical Transactions of the Royal Society. It appeared that the name was created from the English word alum and the Latin suffix -ium; but it was customary then to give elements names originating in Latin, so this name was not adopted universally. This name was criticized by contemporary chemists from France, Germany, and Sweden, who insisted the metal should be named for the oxide, alumina, from which it would be isolated. The English name alum does not come directly from Latin, whereas alumine/alumina comes from the Latin word alumen (upon declension, alumen changes to alumin-).
One example was Essai sur la Nomenclature chimique (July 1811), written in French by a Swedish chemist, Jöns Jacob Berzelius, in which the name aluminium is given to the element that would be synthesized from alum. (Another article in the same journal issue also refers to the metal whose oxide is the basis of sapphire, i.e. the same metal, as to aluminium.) A January 1811 summary of one of Davy's lectures at the Royal Society mentioned the name aluminium as a possibility. The next year, Davy published a chemistry textbook in which he used the spelling aluminum. Both spellings have coexisted since. Their usage is currently regional: aluminum dominates in the United States and Canada; aluminium is prevalent in the rest of the English-speaking world.
Spelling
In 1812, British scientist Thomas Young wrote an anonymous review of Davy's book, in which he proposed the name aluminium instead of aluminum, which he thought had a "less classical sound". This name persisted: although the aluminum spelling was occasionally used in Britain, the American scientific language used aluminium from the start.
Ludwig Wilhelm Gilbert had proposed Thonerde-metall, after the German "Thonerde" for alumina, in his Annalen der Physik, but that name never caught on, even in Germany. Joseph W. Richards in 1891 found just one occurrence of argillium, in Swedish, from the French "argille" for clay. The French themselves had used aluminium from the start. In England and Germany, however, Davy's spelling aluminum was initially used, until German chemist Friedrich Wöhler published his account of the Wöhler process in 1827 using the spelling aluminium; this led to that spelling's largely wholesale adoption in England and Germany, with the exception of a small number of what Richards characterized as "patriotic" English chemists, "averse to foreign innovations", who occasionally still used aluminum.
Most scientists throughout the world used aluminium in the 19th century, and it became entrenched in several other European languages, such as French, German, and Dutch.
In 1828, an American lexicographer, Noah Webster, entered only the aluminum spelling in his American Dictionary of the English Language. In the 1830s, the aluminum spelling gained usage in the United States; by the 1860s, it had become the more common spelling there outside science. In 1892, Hall used the aluminum spelling in his advertising handbill for his new electrolytic method of producing the metal, despite his constant use of the aluminium spelling in all the patents he filed between 1886 and 1903. It is unknown whether this spelling was introduced by mistake or intentionally, but Hall preferred aluminum since its introduction because it resembled platinum, the name of a prestigious metal. By 1890, both spellings had been common in the United States, the aluminium spelling being slightly more common; by 1895, the situation had reversed; by 1900, aluminum had become twice as common as aluminium; in the next decade, the aluminum spelling dominated American usage. In 1925, the American Chemical Society adopted this spelling.
The International Union of Pure and Applied Chemistry (IUPAC) adopted aluminium as the standard international name for the element in 1990. In 1993, they recognized aluminum as an acceptable variant; the most recent 2005 edition of the IUPAC nomenclature of inorganic chemistry also acknowledges this spelling. IUPAC official publications use the aluminium spelling as primary, and they list both where appropriate.
Production and refinement
The production of aluminium starts with the extraction of bauxite rock from the ground. The bauxite is processed and transformed using the Bayer process into alumina, which is then processed using the Hall–Héroult process, resulting in the final aluminium.
Aluminium production is highly energy-consuming, and so producers tend to locate smelters in places where electric power is both plentiful and inexpensive. Production of one kilogram of aluminium requires 7 kilograms of oil energy equivalent, as compared to 1.5 kilograms for steel and 2 kilograms for plastic. As of 2024, the world's largest producers of aluminium were China, Russia, India, Canada, and the United Arab Emirates, with China by far the top producer, holding a world share of over 55%.
According to the International Resource Panel's Metal Stocks in Society report, the global per capita stock of aluminium in use in society (i.e. in cars, buildings, electronics, etc.) is 80 kg. Much of this is in more-developed countries (350–500 kg per capita) rather than less-developed countries (35 kg per capita).
Bayer process
Bauxite is converted to alumina by the Bayer process. Bauxite is blended for uniform composition and then is ground fine. The resulting slurry is mixed with a hot solution of sodium hydroxide; the mixture is then treated in a digester vessel at a pressure well above atmospheric, dissolving the aluminium hydroxide in bauxite while converting impurities into relatively insoluble compounds: Al(OH)3 + NaOH → Na[Al(OH)4]
After this reaction, the slurry is at a temperature above its atmospheric boiling point. It is cooled by removing steam as pressure is reduced. The bauxite residue is separated from the solution and discarded. The solution, free of solids, is seeded with small crystals of aluminium hydroxide; this causes decomposition of the [Al(OH)4]− ions to aluminium hydroxide. After about half of the aluminium has precipitated, the mixture is sent to classifiers. Small crystals of aluminium hydroxide are collected to serve as seeding agents; coarse particles are converted to alumina by heating; the excess solution is concentrated by evaporation, purified if needed, and recycled.
Hall–Héroult process
The conversion of alumina to aluminium is achieved by the Hall–Héroult process. In this energy-intensive process, a solution of alumina in a molten (950–980 °C) mixture of cryolite (Na3AlF6) with calcium fluoride is electrolyzed to produce metallic aluminium. The liquid aluminium sinks to the bottom of the solution and is tapped off, and usually cast into large blocks called aluminium billets for further processing.
Anodes of the electrolysis cell are made of carbon—the most resistant material against fluoride corrosion—and are either baked in the process itself or prebaked. The former, also called Söderberg anodes, are less power-efficient, and the fumes released during baking are costly to collect, which is why they are being replaced by prebaked anodes even though they save the energy and labor needed to prebake the anodes. Carbon for anodes should preferably be pure so that neither aluminium nor the electrolyte is contaminated with ash. Despite carbon's resistance to corrosion, it is still consumed at a rate of 0.4–0.5 kg per kilogram of produced aluminium. Cathodes are made of anthracite; high purity is not required for them because impurities leach only very slowly. The cathode is consumed at a rate of 0.02–0.04 kg per kilogram of produced aluminium. A cell is usually terminated after 2–6 years following a failure of the cathode.
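The quoted anode consumption can be sanity-checked against simple stoichiometry. The sketch below assumes the textbook overall cell reaction, 2 Al2O3 + 3 C → 4 Al + 3 CO2, and compares the theoretical carbon minimum with the observed 0.4–0.5 kg figure:

```python
# Back-of-the-envelope check of the anode consumption figure quoted above,
# assuming the textbook overall Hall-Heroult cell reaction:
#   2 Al2O3 + 3 C -> 4 Al + 3 CO2
M_C = 12.011    # molar mass of carbon, g/mol
M_AL = 26.982   # molar mass of aluminium, g/mol

# 3 mol of carbon are consumed per 4 mol of aluminium produced:
kg_c_per_kg_al = (3 * M_C) / (4 * M_AL)
print(f"Stoichiometric minimum: {kg_c_per_kg_al:.3f} kg C per kg Al")
# ~0.334 kg/kg, so the observed 0.4-0.5 kg/kg implies real cells consume
# roughly 20-50% more carbon than the ideal reaction requires.
```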
The Hall–Héroult process produces aluminium with a purity of above 99%. Further purification can be done by the Hoopes process. This process involves the electrolysis of molten aluminium with a sodium, barium, and aluminium fluoride electrolyte. The resulting aluminium has a purity of 99.99%.
Electric power represents about 20 to 40% of the cost of producing aluminium, depending on the location of the smelter. Aluminium production consumes roughly 5% of electricity generated in the United States. Because of this, alternatives to the Hall–Héroult process have been researched, but none has turned out to be economically feasible.
Recycling
Recovery of the metal through recycling has become an important task of the aluminium industry. Recycling was a low-profile activity until the late 1960s, when the growing use of aluminium beverage cans brought it to public awareness. Recycling involves melting the scrap, a process that requires only 5% of the energy used to produce aluminium from ore, though a significant part (up to 15% of the input material) is lost as dross (ash-like oxide). An aluminium stack melter produces significantly less dross, with values reported below 1%.
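A quick calculation shows that even the worst-case dross loss barely erodes recycling's energy advantage. The sketch below uses only the two fractions quoted above (5% of primary energy; up to 15% of input lost as dross):

```python
# Minimal sketch: energy per kg of *usable* recycled aluminium, relative
# to primary production, after accounting for dross losses (figures from
# the text above).
ENERGY_FRACTION = 0.05  # recycling energy vs. primary production
DROSS_LOSS = 0.15       # worst-case fraction of input lost as dross

effective_fraction = ENERGY_FRACTION / (1 - DROSS_LOSS)
print(f"Effective energy per kg of output: {effective_fraction:.1%} of primary")
# ~5.9% -- still a roughly 17-fold energy saving over primary production.
```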
White dross from primary aluminium production and from secondary recycling operations still contains useful quantities of aluminium that can be extracted industrially. The process produces aluminium billets, together with a highly complex waste material. This waste is difficult to manage. It reacts with water, releasing a mixture of gases including, among others, acetylene, hydrogen sulfide and significant amounts of ammonia. Despite these difficulties, the waste is used as a filler in asphalt and concrete. Its potential for hydrogen production has also been considered and researched.
Applications
Metal
The global production of aluminium in 2016 was 58.8 million metric tons. It exceeded that of any other metal except iron (1,231 million metric tons).
Aluminium is almost always alloyed, which markedly improves its mechanical properties, especially when tempered. For example, the common aluminium foils and beverage cans are alloys of 92% to 99% aluminium. The main alloying agents for both wrought and cast aluminium are copper, zinc, magnesium, manganese, and silicon (e.g., duralumin) with the levels of other metals in a few percent by weight.
The major uses for aluminium are in:
Transportation (automobiles, aircraft, trucks, railway cars, marine vessels, bicycles, spacecraft, etc.). Aluminium is used because of its low density;
Packaging (cans, foil, frame, etc.). Aluminium is used because it is non-toxic (see below), non-adsorptive, and splinter-proof;
Building and construction (windows, doors, siding, building wire, sheathing, roofing, etc.). Since steel is cheaper, aluminium is used when lightness, corrosion resistance, or engineering features are important;
Electricity-related uses (conductor alloys, motors, and generators, transformers, capacitors, etc.). Aluminium is used because it is relatively cheap, highly conductive, has adequate mechanical strength and low density, and resists corrosion;
A wide range of household items, from cooking utensils to furniture. Low density, good appearance, ease of fabrication, and durability are the key factors of aluminium usage;
Machinery and equipment (processing equipment, pipes, tools). Aluminium is used because of its corrosion resistance, non-pyrophoricity, and mechanical strength.
Compounds
The great majority (about 90%) of aluminium oxide is converted to metallic aluminium. Being a very hard material (Mohs hardness 9), alumina is widely used as an abrasive; being extraordinarily chemically inert, it is useful in highly reactive environments such as high pressure sodium lamps. Aluminium oxide is commonly used as a catalyst for industrial processes; e.g. the Claus process to convert hydrogen sulfide to sulfur in refineries and to alkylate amines. Many industrial catalysts are supported by alumina, meaning that the expensive catalyst material is dispersed over a surface of the inert alumina. Another principal use is as a drying agent or absorbent.
Several sulfates of aluminium have industrial and commercial application. Aluminium sulfate (in its hydrate form) is produced on the annual scale of several millions of metric tons. About two-thirds is consumed in water treatment. The next major application is in the manufacture of paper. It is also used as a mordant in dyeing, in pickling seeds, deodorizing of mineral oils, in leather tanning, and in production of other aluminium compounds. Two kinds of alum, ammonium alum and potassium alum, were formerly used as mordants and in leather tanning, but their use has significantly declined following availability of high-purity aluminium sulfate. Anhydrous aluminium chloride is used as a catalyst in chemical and petrochemical industries, the dyeing industry, and in synthesis of various inorganic and organic compounds. Aluminium hydroxychlorides are used in purifying water, in the paper industry, and as antiperspirants. Sodium aluminate is used in treating water and as an accelerator of solidification of cement.
Many aluminium compounds have niche applications, for example:
Aluminium acetate in solution is used as an astringent.
Aluminium phosphate is used in the manufacture of glass, ceramic, pulp and paper products, cosmetics, paints, varnishes, and in dental cement.
Aluminium hydroxide is used as an antacid, and mordant; it is used also in water purification, the manufacture of glass and ceramics, and in the waterproofing of fabrics.
Lithium aluminium hydride is a powerful reducing agent used in organic chemistry.
Organoaluminiums are used as Lewis acids and co-catalysts.
Methylaluminoxane is a co-catalyst for Ziegler–Natta olefin polymerization to produce vinyl polymers such as polyethene.
Aqueous aluminium ions (such as aqueous aluminium sulfate) are used to treat against fish parasites such as Gyrodactylus salaris.
In many vaccines, certain aluminium salts serve as an immune adjuvant (immune response booster) to allow the protein in the vaccine to achieve sufficient potency as an immune stimulant. Until 2004, most of the adjuvants used in vaccines were aluminium salts.
Biology
Despite its widespread occurrence in the Earth's crust, aluminium has no known function in biology. At pH 6–9 (relevant for most natural waters), aluminium precipitates out of water as the hydroxide and is hence not available; most elements behaving this way have no biological role or are toxic. Aluminium sulfate has an LD50 of 6207 mg/kg (oral, mouse), which, scaled by body mass, corresponds to about 435 grams (roughly one pound) for a 70 kg adult.
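The gram figure above is simply the per-kilogram dose scaled by body mass. The sketch below reproduces that arithmetic; the 70 kg body mass is an assumed illustration, and LD50 values do not in reality extrapolate linearly between species:

```python
# Worked scaling of the LD50 figure quoted above (illustration only;
# LD50 values do not actually extrapolate linearly between species).
LD50_MG_PER_KG = 6207   # aluminium sulfate, oral, mouse (from text)
BODY_MASS_KG = 70       # assumed adult human body mass

dose_g = LD50_MG_PER_KG * BODY_MASS_KG / 1000  # mg -> g
print(f"{dose_g:.1f} g")  # ~434.5 g, about one pound
```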
Toxicity
Aluminium is classified as a non-carcinogen by the United States Department of Health and Human Services. A review published in 1988 said that there was little evidence that normal exposure to aluminium presents a risk to healthy adults, and a 2014 multi-element toxicology review was unable to find deleterious effects of aluminium consumed in amounts not greater than 40 mg/day per kg of body mass. Most aluminium consumed will leave the body in feces; most of the small part of it that enters the bloodstream will be excreted via urine; nevertheless some aluminium does pass the blood–brain barrier and is lodged preferentially in the brains of Alzheimer's patients. Evidence published in 1989 indicates that, for Alzheimer's patients, aluminium may act by electrostatically crosslinking proteins, thus down-regulating genes in the superior temporal gyrus.
Effects
Aluminium, although rarely, can cause vitamin D-resistant osteomalacia, erythropoietin-resistant microcytic anemia, and central nervous system alterations. People with kidney insufficiency are especially at risk. Chronic ingestion of hydrated aluminium silicates (for excess gastric acidity control) may result in aluminium binding to intestinal contents and increased elimination of other metals, such as iron or zinc; sufficiently high doses (>50 g/day) can cause anemia.
During the 1988 Camelford water pollution incident, people in Camelford had their drinking water contaminated with aluminium sulfate for several weeks. A final report into the incident in 2013 concluded it was unlikely that this had caused long-term health problems.
Aluminium has been suspected of being a possible cause of Alzheimer's disease, but research into this over more than 40 years has found no good evidence of a causal effect.
Aluminium increases estrogen-related gene expression in human breast cancer cells cultured in the laboratory. In very high doses, aluminium is associated with altered function of the blood–brain barrier. A small percentage of people have contact allergies to aluminium and experience itchy red rashes, headache, muscle pain, joint pain, poor memory, insomnia, depression, asthma, irritable bowel syndrome, or other symptoms upon contact with products containing aluminium.
Exposure to powdered aluminium or aluminium welding fumes can cause pulmonary fibrosis. Fine aluminium powder can ignite or explode, posing another workplace hazard.
Exposure routes
Food is the main source of aluminium. Drinking water contains more aluminium than solid food; however, aluminium in food may be absorbed more than aluminium from water. Major sources of human oral exposure to aluminium include food (due to its use in food additives, food and beverage packaging, and cooking utensils), drinking water (due to its use in municipal water treatment), and aluminium-containing medications (particularly antacid/antiulcer and buffered aspirin formulations). Dietary exposure in Europeans averages 0.2–1.5 mg/kg/week but can be as high as 2.3 mg/kg/week. Higher exposure levels of aluminium are mostly limited to miners, aluminium production workers, and dialysis patients.
Consumption of antacids, antiperspirants, vaccines, and cosmetics provide possible routes of exposure. Consumption of acidic foods or liquids with aluminium enhances aluminium absorption, and maltol has been shown to increase the accumulation of aluminium in nerve and bone tissues.
Treatment
In case of suspected sudden intake of a large amount of aluminium, the only treatment is deferoxamine mesylate, which may be given to help eliminate aluminium from the body by chelation therapy. However, this should be applied with caution, as it reduces not only aluminium body levels but also those of other metals such as copper or iron.
Environmental effects
High levels of aluminium occur near mining sites; small amounts of aluminium are released to the environment at coal-fired power plants or incinerators. Aluminium in the air is washed out by rain or normally settles, but small particles of aluminium remain in the air for a long time.
Acidic precipitation is the main natural factor mobilizing aluminium from natural sources and the main reason for the environmental effects of aluminium; however, the main source of aluminium in salt water and fresh water is the industrial processes that also release aluminium into the air.
In water, aluminium acts as a toxic agent on gill-breathing animals such as fish when the water is acidic; aluminium may then precipitate on the gills, causing loss of plasma and hemolymph ions and leading to osmoregulatory failure. Organic complexes of aluminium may be easily absorbed and interfere with metabolism in mammals and birds, even though this rarely happens in practice.
Aluminium is primary among the factors that reduce plant growth on acidic soils. Although it is generally harmless to plant growth in pH-neutral soils, in acid soils the concentration of toxic Al3+ cations increases and disturbs root growth and function. Wheat has developed a tolerance to aluminium, releasing organic compounds that bind to harmful aluminium cations. Sorghum is believed to have the same tolerance mechanism.
Aluminium production presents its own environmental challenges at each step of the production process. The major challenge is the emission of greenhouse gases. These gases result from the electrical consumption of the smelters and the byproducts of processing. The most potent of these gases are perfluorocarbons, namely CF4 and C2F6, from the smelting process.
Biodegradation of metallic aluminium is extremely rare; most aluminium-corroding organisms do not directly attack or consume the aluminium, but instead produce corrosive wastes. The fungus Geotrichum candidum can consume the aluminium in compact discs. The bacterium Pseudomonas aeruginosa and the fungus Cladosporium resinae are commonly detected in aircraft fuel tanks that use kerosene-based fuels (not avgas), and laboratory cultures can degrade aluminium.
Further reading
Mimi Sheller, Aluminum Dreams: The Making of Light Modernity. Cambridge, Mass.: Massachusetts Institute of Technology Press, 2014.
External links
Aluminium at The Periodic Table of Videos (University of Nottingham)
Toxicological Profile for Aluminum (PDF) (September 2008) – 357-page report from the United States Department of Health and Human Services, Public Health Service, Agency for Toxic Substances and Disease Registry
Aluminum entry (last reviewed 30 October 2019) in the NIOSH Pocket Guide to Chemical Hazards published by the CDC's National Institute for Occupational Safety and Health
Current and historical prices (1998–present) for aluminum futures on the global commodities market
|
;Airship technology;Aluminium;Chemical elements;Chemical elements with face-centered cubic structure;E-number additives;Electrical conductors;Native element minerals;Post-transition metals;Pyrotechnic fuels;Reducing agents
|
https://en.wikipedia.org/wiki/Andrey%20Markov
|
Andrey Andreyevich Markov (14 June 1856 – 20 July 1922) was a Russian mathematician best known for his work on stochastic processes. A primary subject of his research later became known as the Markov chain. He was also a strong, close to master-level, chess player.
Markov and his younger brother Vladimir Andreyevich Markov (1871–1897) proved the Markov brothers' inequality.
His son, another Andrey Andreyevich Markov (1903–1979), was also a notable mathematician, making contributions to constructive mathematics and recursive function theory.
Biography
Andrey Markov was born on 14 June 1856 in Russia. He attended the St. Petersburg Grammar School, where some teachers saw him as a rebellious student. He performed poorly in most subjects other than mathematics. Later in life he attended Saint Petersburg Imperial University (now Saint Petersburg State University). Among his teachers were Yulian Sokhotski (differential calculus, higher algebra), Konstantin Posse (analytic geometry), Yegor Zolotarev (integral calculus), Pafnuty Chebyshev (number theory and probability theory), Aleksandr Korkin (ordinary and partial differential equations), Mikhail Okatov (mechanism theory), Osip Somov (mechanics), and Nikolai Budajev (descriptive and higher geometry). He completed his studies at the university and was later asked if he would like to stay and pursue a career as a mathematician. He went on to teach at high schools while continuing his own mathematical studies. In this time he found a practical use for his mathematical skills: he figured out that he could use chains to model the alliteration of vowels and consonants in Russian literature. He also contributed to many other areas of mathematics. He died at age 66 on 20 July 1922.
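As an illustration of the idea (not of Markov's actual computation), the sketch below estimates vowel/consonant transition probabilities from a sample string: the core of the two-state chain he applied to literary text. The sample sentence and the vowel set are arbitrary stand-ins:

```python
# Minimal sketch of a two-state (vowel/consonant) Markov chain estimated
# from letter sequences, in the spirit of Markov's analysis of literary
# text. The sample string is an arbitrary stand-in for his corpus.
from collections import Counter

def vc(ch: str) -> str:
    # Classify a letter as vowel (V) or consonant (C).
    return "V" if ch in "aeiou" else "C"

text = "onegin is a novel in verse written by pushkin"
states = [vc(c) for c in text if c.isalpha()]

# Count transitions between consecutive letters (word boundaries ignored).
pairs = Counter(zip(states, states[1:]))
for prev in "VC":
    total = sum(pairs[(prev, nxt)] for nxt in "VC")
    for nxt in "VC":
        print(f"P({nxt} | {prev}) = {pairs[(prev, nxt)] / total:.2f}")
```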
Timeline
In 1877, Markov was awarded a gold medal for his outstanding solution of the problem
About Integration of Differential Equations by Continued Fractions with an Application to the Equation (1 + x²) dy/dx = n(1 + y²).
During the following year, he passed the candidate's examinations, and he remained at the university to prepare for a lecturer's position.
In April 1880, Markov defended his master's thesis "On the Binary Quadratic Forms with Positive Determinant", which was directed by Aleksandr Korkin and Yegor Zolotarev. Four years later in 1884, he defended his doctoral thesis titled "On Certain Applications of the Algebraic Continued Fractions".
His pedagogical work began after the defense of his master's thesis in autumn 1880. As a privatdozent he lectured on differential and integral calculus. Later he lectured alternately on "introduction to analysis", probability theory (succeeding Chebyshev, who had left the university in 1882) and the calculus of differences. From 1895 through 1905 he also lectured in differential calculus.
One year after the defense of his doctoral thesis, Markov was appointed extraordinary professor (1886) and in the same year he was elected adjunct to the Academy of Sciences. In 1890, after the death of Viktor Bunyakovsky, Markov became an extraordinary member of the academy. His promotion to an ordinary professor of St. Petersburg University followed in the fall of 1894.
In 1896, Markov was elected an ordinary member of the academy as the successor of Chebyshev. In 1905, he was appointed merited professor and was granted the right to retire, which he did immediately. Until 1910, however, he continued to lecture in the calculus of differences.
In connection with student riots in 1908, professors and lecturers of St. Petersburg University were ordered to monitor their students. Markov refused to accept this decree, and he wrote an explanation in which he declined to be an "agent of the governance". Markov was removed from further teaching duties at St. Petersburg University, and hence he decided to retire from the university.
Markov was an atheist. In 1912, he responded to Leo Tolstoy's excommunication from the Russian Orthodox Church by requesting his own excommunication. The Church complied with his request.
In 1913, the council of St. Petersburg elected nine scientists honorary members of the university. Markov was among them, but his election was not affirmed by the minister of education. The affirmation only occurred four years later, after the February Revolution in 1917. Markov then resumed his teaching activities and lectured on probability theory and the calculus of differences until his death in 1922.
|
19th-century mathematicians from the Russian Empire;20th-century Russian mathematicians;Former Russian Orthodox Christians;Full Members of the Russian Academy of Sciences (1917–1925);Full members of the Saint Petersburg Academy of Sciences;Markov, Andrei Andreyevich;People from Ryazan;Probability theorists;Russian atheists;Russian scientists;Russian statisticians;Saint Petersburg State University alumni
|
https://en.wikipedia.org/wiki/Acute%20disseminated%20encephalomyelitis
|
Acute disseminated encephalomyelitis (ADEM), or acute demyelinating encephalomyelitis, is a rare autoimmune disease marked by a sudden, widespread attack of inflammation in the brain and spinal cord. As well as causing the brain and spinal cord to become inflamed, ADEM also attacks the nerves of the central nervous system and damages their myelin insulation, which, as a result, destroys the white matter. The cause is often a trigger such as a viral infection or, in extraordinarily rare cases, vaccination.
ADEM's symptoms resemble the symptoms of multiple sclerosis (MS), so the disease itself is sorted into the classification of the multiple sclerosis borderline diseases. However, ADEM has several features that distinguish it from MS. Unlike MS, ADEM occurs usually in children and is marked with rapid fever, although adolescents and adults can get the disease too. ADEM consists of a single flare-up whereas MS is marked with several flare-ups (or relapses), over a long period of time. Relapses following ADEM are reported in up to a quarter of patients, but the majority of these 'multiphasic' presentations following ADEM likely represent MS. ADEM is also distinguished by a loss of consciousness, coma and death, which is very rare in MS, except in severe cases.
It affects about 8 per 1,000,000 people per year. Although it occurs in all ages, most reported cases are in children and adolescents, with the average age around 5 to 8 years old. The disease affects males and females almost equally. ADEM shows seasonal variation, with higher incidence in winter and spring, which may coincide with higher rates of viral infection during these months. The mortality rate may be as high as 5%; however, full recovery is seen in 50 to 75% of cases, rising to 70 to 90% when cases with minor residual disability are included. The average time to recover from ADEM flare-ups is one to six months.
ADEM produces multiple inflammatory lesions in the brain and spinal cord, particularly in the white matter. Usually these are found in the subcortical and central white matter and cortical gray-white junction of both cerebral hemispheres, cerebellum, brainstem, and spinal cord, but periventricular white matter and gray matter of the cortex, thalami and basal ganglia may also be involved.
When a person has more than one demyelinating episode of ADEM, the disease is then called recurrent disseminated encephalomyelitis or multiphasic disseminated encephalomyelitis (MDEM). Also, a fulminant course in adults has been described.
Signs and symptoms
ADEM has an abrupt onset and a monophasic course. Symptoms usually begin 1–3 weeks after infection. Major symptoms include fever, headache, nausea and vomiting, confusion, vision impairment, drowsiness, seizures and coma. Although initially the symptoms are usually mild, they worsen rapidly over the course of hours to days, with the average time to maximum severity being about four and a half days. Additional symptoms include hemiparesis, paraparesis, and cranial nerve palsies.
ADEM in COVID-19
In some COVID-19 patients, neurological symptoms were the main presentation and did not correlate with the severity of respiratory symptoms. The high incidence of ADEM with hemorrhage is striking. Brain inflammation is likely caused by an immune response to the disease rather than neurotropism. Cerebrospinal fluid analysis was not indicative of an infectious process, neurological impairment was not present in the acute phase of the infection, and neuroimaging findings were not typical of classical toxic and metabolic disorders. The finding of bilateral periventricular, relatively asymmetrical lesions allied with deep white matter involvement, which may also be present in the cortical gray-white matter junction, thalami, basal ganglia, cerebellum, and brainstem, suggests an acute demyelination process. Additionally, hemorrhagic white matter lesions, clusters of macrophages related to axonal injury, and an ADEM-like appearance were also found in subcortical white matter.
Causes
Since the discovery of anti-MOG specificity in relation to multiple sclerosis diagnosis, it has been considered that ADEM is one of the possible clinical courses of anti-MOG associated encephalomyelitis.
There are several theories about how the anti-MOG antibodies appear in the patient's serum:
A preceding antigenic challenge can be identified in approximately two-thirds of people. Some viral infections thought to induce ADEM include influenza virus, dengue, enterovirus, measles, mumps, rubella, varicella zoster, Epstein–Barr virus, cytomegalovirus, herpes simplex virus, hepatitis A, coxsackievirus and COVID-19. Bacterial infections include Mycoplasma pneumoniae, Borrelia burgdorferi, Leptospira, and beta-hemolytic Streptococci.
Exposure to vaccines: The only vaccine proven related to ADEM is the Semple form of the rabies vaccine, but hepatitis B, pertussis, diphtheria, measles, mumps, rubella, pneumococcus, varicella, influenza, Japanese encephalitis, and polio vaccines have all been associated with the condition. The majority of the studies that correlate vaccination with ADEM onset use only small samples or are case studies. Large-scale epidemiological studies (e.g., of MMR vaccine or smallpox vaccine) do not show increased risk of ADEM following vaccination. An upper bound for the risk of ADEM from measles vaccination, if it exists, can be estimated to be 10 per million, which is far lower than the risk of developing ADEM from an actual measles infection, which is about 1 per 1,000 cases. For a rubella infection, the risk is 1 per 5,000 cases. Some early vaccines, later shown to have been contaminated with host animal central nervous system tissue, had ADEM incidence rates as high as 1 in 600.
In rare cases, ADEM seems to follow from organ transplantation.
Diagnosis
The term ADEM has been inconsistently used at different times. Currently, the commonly accepted international standard for the clinical case definition is the one published by the International Pediatric MS Study Group, revision 2007.
Given that the definition is clinical, it is currently unknown if all the cases of ADEM are positive for anti-MOG autoantibody; in any case, it appears to be strongly related to ADEM diagnosis.
Differential diagnosis
Multiple sclerosis
While ADEM and MS both involve autoimmune demyelination, they differ in many clinical, genetic, imaging, and histopathological aspects. Some authors consider MS and its borderline forms to constitute a spectrum, differing only in chronicity, severity, and clinical course, while others consider them discretely different diseases.
Typically, ADEM appears in children following an antigenic challenge and remains monophasic. Nevertheless, ADEM does occur in adults, and can also be clinically multiphasic.
Problems for differential diagnosis increase due to the lack of agreement for a definition of multiple sclerosis. If MS were defined only by the separation in time and space of the demyelinating lesions as McDonald did, it would not be enough to make a difference, as some cases of ADEM satisfy these conditions. Therefore, some authors propose to establish the dividing line as the shape of the lesions around the veins, being therefore "perivenous vs. confluent demyelination".
The pathology of ADEM is very similar to that of MS, with some differences. The pathological hallmark of ADEM is perivenous inflammation with limited "sleeves of demyelination". Nevertheless, MS-like plaques (confluent demyelination) can appear.
Plaques in the white matter in MS are sharply delineated, while the glial scar in ADEM is smooth. Axons are better preserved in ADEM lesions. Inflammation in ADEM is widely disseminated and ill-defined, and finally, lesions are strictly perivenous, while in MS they are disposed around veins, but not so sharply.
Nevertheless, the co-occurrence of perivenous and confluent demyelination in some individuals suggests pathogenic overlap between acute disseminated encephalomyelitis and multiple sclerosis, and misclassification can occur even with biopsy, or even postmortem. ADEM in adults can progress to MS.
Multiphasic disseminated encephalomyelitis
When the person has more than one demyelinating episode of ADEM, the disease is then called recurrent disseminated encephalomyelitis or multiphasic disseminated encephalomyelitis (MDEM).
It has been found that anti-MOG auto-antibodies are related to this kind of ADEM.
Another variant of ADEM in adults, also related to anti-MOG auto-antibodies, has been named fulminant disseminated encephalomyelitis; it has been reported to be clinically ADEM but to show MS-like lesions on autopsy. It has been classified within the anti-MOG associated inflammatory demyelinating diseases.
Acute hemorrhagic leukoencephalitis
Acute hemorrhagic leukoencephalitis (AHL, or AHLE), acute hemorrhagic encephalomyelitis (AHEM), acute necrotizing hemorrhagic leukoencephalitis (ANHLE), Weston-Hurst syndrome, or Hurst's disease, is a hyperacute and frequently fatal form of ADEM. AHL is relatively rare (less than 100 cases have been reported in the medical literature); it is seen in about 2% of ADEM cases, and is characterized by necrotizing vasculitis of venules, hemorrhage, and edema. Death is common in the first week and overall mortality is about 70%, but increasing evidence points to favorable outcomes after aggressive treatment with corticosteroids, immunoglobulins, cyclophosphamide, and plasma exchange. About 70% of survivors show residual neurological deficits, but some survivors have shown surprisingly little deficit considering the extent of the white matter affected.
This disease has been occasionally associated with ulcerative colitis and Crohn's disease, malaria, sepsis associated with immune complex deposition, methanol poisoning, and other underlying conditions. An anecdotal association with MS has also been reported.
Laboratory studies that support diagnosis of AHL are: peripheral leukocytosis, cerebrospinal fluid (CSF) pleocytosis associated with normal glucose and increased protein. On magnetic resonance imaging (MRI), lesions of AHL typically show extensive T2-weighted and fluid-attenuated inversion recovery (FLAIR) white matter hyperintensities with areas of hemorrhages, significant edema, and mass effect.
Treatment
No controlled clinical trials have been conducted on ADEM treatment, but aggressive treatment aimed at rapidly reducing inflammation of the CNS is standard. The widely accepted first-line treatment is high doses of intravenous corticosteroids, such as methylprednisolone or dexamethasone, followed by 3–6 weeks of gradually lower oral doses of prednisolone. Patients treated with methylprednisolone have shown better outcomes than those treated with dexamethasone. Oral tapers of less than three weeks duration show a higher chance of relapsing, and tend to show poorer outcomes. Other anti-inflammatory and immunosuppressive therapies have been reported to show beneficial effect, such as plasmapheresis, high doses of intravenous immunoglobulin (IVIg), mitoxantrone and cyclophosphamide. These are considered alternative therapies, used when corticosteroids cannot be used or fail to show an effect.
There is some evidence to suggest that patients may respond to a combination of methylprednisolone and immunoglobulins if they fail to respond to either separately.
In a study of 16 children with ADEM, 10 recovered completely after high-dose methylprednisolone, and one severe case that failed to respond to steroids recovered completely after IV Ig; the five most severe cases – with ADEM and severe peripheral neuropathy – were treated with combined high-dose methylprednisolone and immunoglobulin; two remained paraplegic, one had motor and cognitive handicaps, and two recovered. A recent review of IVIg treatment of ADEM (of which the previous study formed the bulk of the cases) found that 70% of children showed complete recovery after treatment with IVIg, or IVIg plus corticosteroids. A study of IVIg treatment in adults with ADEM showed that IVIg seems more effective in treating sensory and motor disturbances, while steroids seem more effective in treating impairments of cognition, consciousness, and rigor. This same study found one subject, a 71-year-old man who had not responded to steroids, who responded to IVIg treatment 58 days after disease onset.
Prognosis
Full recovery is seen in 50 to 70% of cases, rising to 70 to 90% when cases with some minor residual disability are included (typically assessed using measures such as mRS or EDSS); the average time to recover is one to six months. The mortality rate may be as high as 5–10%. Poorer outcomes are associated with unresponsiveness to steroid therapy, unusually severe neurological symptoms, or sudden onset. Children tend to have more favorable outcomes than adults, and cases presenting without fevers tend to have poorer outcomes. The latter effect may be due to either protective effects of fever, or to diagnosis and treatment being sought more rapidly when fever is present.
ADEM can progress to MS. It is considered MS if new lesions appear at different times and in different brain areas.
Motor deficits
Residual motor deficits are estimated to remain in about 8 to 30% of cases, ranging in severity from mild clumsiness to ataxia and hemiparesis.
Neurocognitive
Patients with demyelinating illnesses, such as MS, have shown cognitive deficits even when there is minimal physical disability. Research suggests that similar effects are seen after ADEM, but that the deficits are less severe than those seen in MS. In one study, six children with ADEM (mean age at presentation 7.7 years) were given a range of neurocognitive tests after an average of 3.5 years of recovery. All six children performed in the normal range on most tests, including verbal IQ and performance IQ, but performed at least one standard deviation below age norms in at least one cognitive domain, such as complex attention (one child), short-term memory (one child) and internalizing behaviour/affect (two children). Group means for each cognitive domain were all within one standard deviation of age norms, demonstrating that, as a group, they were normal. These deficits were less severe than those seen in similarly aged children with a diagnosis of MS.
Another study compared nineteen children with a history of ADEM, of which 10 were five years of age or younger at the time (average age 3.8 years old, tested an average of 3.9 years later) and nine were older (mean age 7.7 years at time of ADEM, tested an average of 2.2 years later), to nineteen matched controls. Scores on IQ tests and educational achievement were lower for the young onset ADEM group (average IQ 90) compared to the late onset (average IQ 100) and control groups (average IQ 106), while the late onset ADEM children scored lower on verbal processing speed. All group means were still within one standard deviation of the controls, meaning that while effects were statistically reliable, the children were, as a whole, still within the normal range. There were also more behavioural problems in the early onset group, although there is some suggestion that this may be due, at least in part, to the stress of hospitalization at a young age.
Research
The relationship between ADEM and anti-MOG associated encephalomyelitis is currently under research. A new entity called MOGDEM has been proposed.
About animal models, the main animal model for MS, experimental autoimmune encephalomyelitis (EAE) is also an animal model for ADEM. Being an acute monophasic illness, EAE is far more similar to ADEM than MS.
|
Autoimmune diseases;Central nervous system disorders;Enterovirus-associated diseases;Measles;Multiple sclerosis;Rare diseases
|
https://en.wikipedia.org/wiki/Ada%20Lovelace
|
Augusta Ada King, Countess of Lovelace (née Byron; 10 December 1815 – 27 November 1852), also known as Ada Lovelace, was an English mathematician and writer chiefly known for her work on Charles Babbage's proposed mechanical general-purpose computer, the Analytical Engine. She was the first to recognise that the machine had applications beyond pure calculation.
Lovelace was the only legitimate child of poet Lord Byron and reformer Anne Isabella Milbanke. All her half-siblings, Lord Byron's other children, were born out of wedlock to other women. Lord Byron separated from his wife a month after Ada was born and left England forever. He died in Greece when she was eight. Lady Byron was anxious about her daughter's upbringing and promoted Lovelace's interest in mathematics and logic in an effort to prevent her from developing her father's perceived insanity. Despite this, Lovelace remained interested in her father, naming her two sons Byron and Gordon. Upon her death, she was buried next to her father at her request. Although often ill in her childhood, Lovelace pursued her studies assiduously. She married William King in 1835. King was made Earl of Lovelace in 1838, Ada thereby becoming Countess of Lovelace.
Lovelace's educational and social exploits brought her into contact with scientists such as Andrew Crosse, Charles Babbage, Sir David Brewster, Charles Wheatstone and Michael Faraday, and the author Charles Dickens, contacts which she used to further her education. Lovelace described her approach as "poetical science" and herself as an "Analyst (& Metaphysician)".
When she was eighteen, Lovelace's mathematical talents led her to a long working relationship and friendship with fellow British mathematician Charles Babbage. She was in particular interested in Babbage's work on the Analytical Engine. Lovelace first met him on 5 June 1833, when she and her mother attended one of Charles Babbage's Saturday night soirées with their mutual friend, and Lovelace's private tutor, Mary Somerville.
Though Babbage's Analytical Engine was never constructed and exercised no influence on the later invention of electronic computers, it has been recognised in retrospect as a Turing-complete general-purpose computer which anticipated the essential features of a modern electronic computer; Babbage is therefore known as the "father of computers," and Lovelace is credited with several computing "firsts" for her collaboration with him.
Between 1842 and 1843, Lovelace translated an article by the military engineer Luigi Menabrea (later Prime Minister of Italy) about the Analytical Engine, supplementing it with an elaborate set of seven notes, simply called "Notes". These notes described a method of using the machine to calculate Bernoulli numbers which is often called the first published computer program.
She also developed a vision of the capability of computers to go beyond mere calculating or number-crunching, while many others, including Babbage himself, focused only on those capabilities. Lovelace was the first to point out the possibility of encoding information besides mere arithmetical figures, such as music, and manipulating it with such a machine. Her mindset of "poetical science" led her to ask questions about the Analytical Engine (as shown in her notes), examining how individuals and society relate to technology as a collaborative tool.
The programming language Ada is named after her.
Biography
Childhood
Lord Byron expected his child to be a "glorious boy" and was disappointed when Lady Byron gave birth to a girl. The child was named after Byron's half-sister, Augusta Leigh, and was called "Ada" by Byron himself. On 16 January 1816, at Lord Byron's command, Lady Byron left for her parents' home at Kirkby Mallory, taking their five-week-old daughter with her. Although English law at the time granted full custody of children to the father in cases of separation, Lord Byron made no attempt to claim his parental rights, but did request that his sister keep him informed of Ada's welfare.
On 21 April, Lord Byron signed the deed of separation, although very reluctantly, and left England for good a few days later. Beyond the acrimonious separation itself, Lady Byron continued throughout her life to make allegations about her husband's immoral behaviour. This set of events made Lovelace infamous in Victorian society. Ada did not have a relationship with her father. He died in 1824 when she was eight years old. Her mother was the only significant parental figure in her life. Lovelace was not shown the family portrait of her father until her 20th birthday.
Lovelace did not have a close relationship with her mother. She was often left in the care of her maternal grandmother Judith, Hon. Lady Milbanke, who doted on her. However, because of societal attitudes of the time—which favoured the husband in any separation, with the welfare of any child acting as mitigation—Lady Byron had to present herself as a loving mother to the rest of society. This included writing anxious letters to Lady Milbanke about her daughter's welfare, with a cover note saying to retain the letters in case she had to use them to show maternal concern. In one letter to Lady Milbanke, she referred to her daughter as "it": "I talk to it for your satisfaction, not my own, and shall be very glad when you have it under your own." Lady Byron had her teenage daughter watched by close friends for any sign of moral deviation. Lovelace dubbed these observers the "Furies" and later complained they exaggerated and invented stories about her.
Lovelace was often ill, beginning in early childhood. At the age of eight, she experienced headaches that obscured her vision. In June 1829, she was paralyzed after a bout of measles. She was subjected to continuous bed rest for nearly a year, something which may have extended her period of disability. By 1831, she was able to walk with crutches. Despite the illnesses, she developed her mathematical and technological skills.
Ada Byron had an affair with a tutor in early 1833. She tried to elope with him after she was caught, but the tutor's relatives recognised her and contacted her mother. Lady Byron and her friends covered the incident up to prevent a public scandal. Lovelace never met her younger half-sister, Allegra, the daughter of Lord Byron and Claire Clairmont. Allegra died in 1822 at the age of five. Lovelace did have some contact with Elizabeth Medora Leigh, the daughter of Byron's half-sister Augusta Leigh, who purposely avoided Lovelace as much as possible when introduced at court.
Adult years
Lovelace became close friends with her tutor Mary Somerville, who introduced her to Charles Babbage in 1833. She had a strong respect and affection for Somerville, and they corresponded for many years. Other acquaintances included the scientists Andrew Crosse, Sir David Brewster, Charles Wheatstone, Michael Faraday and the author Charles Dickens. She was presented at Court at the age of seventeen "and became a popular belle of the season" in part because of her "brilliant mind". By 1834 Ada was a regular at Court and started attending various events. She danced often and was able to charm many people, and was described by most people as being dainty, although John Hobhouse, Byron's friend, described her as "a large, coarse-skinned young woman but with something of my friend's features, particularly the mouth". This description followed their meeting on 24 February 1834 in which Ada made it clear to Hobhouse that she did not like him, probably due to her mother's influence, which led her to dislike all of her father's friends. This first impression was not to last, and they later became friends.
On 8 July 1835, she married William, 8th Baron King, becoming Lady King. They had three homes: Ockham Park, Surrey; a Scottish estate on Loch Torridon in Ross-shire; and a house in London. They spent their honeymoon at Ashley Combe near Porlock Weir, Somerset, which had been built as a hunting lodge in 1799 and was improved by King in preparation for their honeymoon. It later became their summer retreat and was further improved during this time. From 1845, the family's main house was Horsley Towers, built in the Tudorbethan fashion by the architect of the Houses of Parliament, Charles Barry, and later greatly enlarged to Lovelace's own designs.
They had three children: Byron (born 1836); Anne Isabella (called Annabella, born 1837); and Ralph Gordon (born 1839). Immediately after the birth of Annabella, Lady King experienced "a tedious and suffering illness, which took months to cure". Ada was a descendant of the extinct Barons Lovelace and in 1838, her husband was made Earl of Lovelace and Viscount Ockham, meaning Ada became the Countess of Lovelace. In 1843–44, Ada's mother assigned William Benjamin Carpenter to teach Ada's children and to act as a "moral" instructor for Ada. He quickly fell for her and encouraged her to express any frustrated affections, claiming that his marriage meant he would never act in an "unbecoming" manner. When it became clear that Carpenter was trying to start an affair, Ada cut it off.
In 1841, Lovelace and Medora Leigh (the daughter of Lord Byron's half-sister Augusta Leigh) were told by Ada's mother that Ada's father was also Medora's father. On 27 February 1841, Ada wrote to her mother: "I am not in the least astonished. In fact, you merely confirm what I have for years and years felt scarcely a doubt about, but should have considered it most improper in me to hint to you that I in any way suspected." She did not blame the incestuous relationship on Byron, but instead blamed Augusta Leigh: "I fear she is more inherently wicked than he ever was." In the 1840s, Ada flirted with scandals: firstly, from a relaxed approach to extra-marital relationships with men, leading to rumours of affairs; and secondly, from her love of gambling. She apparently lost more than £3,000 on the horses during the later 1840s. The gambling led to her forming a syndicate with male friends, and an ambitious attempt in 1851 to create a mathematical model for successful large bets. This went disastrously wrong, leaving her thousands of pounds in debt to the syndicate, forcing her to admit it all to her husband. She had a shadowy relationship with Andrew Crosse's son John from 1844 onwards. John Crosse destroyed most of their correspondence after her death as part of a legal agreement. She bequeathed him the only heirlooms her father had personally left to her. During her final illness, she would panic at the idea of the younger Crosse being kept from visiting her.
Education
From 1832, when she was seventeen, her mathematical abilities began to emerge, and her interest in mathematics dominated the majority of her adult life. Her mother's obsession with rooting out any of the insanity of which she accused Byron was one of the reasons that Ada was taught mathematics from an early age. She was privately educated in mathematics and science by William Frend, William King, and Mary Somerville, the noted 19th-century researcher and scientific author. In the 1840s, the mathematician Augustus De Morgan extended her "much help in her mathematical studies", including the study of advanced calculus topics such as the "numbers of Bernoulli", which formed the basis of her celebrated algorithm for Babbage's Analytical Engine. In a letter to Lady Byron, De Morgan suggested that Ada's skill in mathematics might lead her to become "an original mathematical investigator, perhaps of first-rate eminence".
Lovelace often questioned basic assumptions through integrating poetry and science. Whilst studying differential calculus, she wrote to De Morgan:
I may remark that the curious transformations many formulae can undergo, the unsuspected and to a beginner apparently impossible identity of forms exceedingly dissimilar at first sight, is I think one of the chief difficulties in the early part of mathematical studies. I am often reminded of certain sprites and fairies one reads of, who are at one's elbows in one shape now, and the next minute in a form most dissimilar.
Lovelace believed that intuition and imagination were critical to effectively applying mathematical and scientific concepts. She valued metaphysics as much as mathematics, viewing both as tools for exploring "the unseen worlds around us".
Death
Lovelace died at the age of 36 on 27 November 1852 from cervical cancer (which contemporary accounts called uterine cancer, since a distinction between the two was not made at the time). The illness lasted several months, in which time Annabella took command over whom Ada saw, and excluded all of her friends and confidants. Under her mother's influence, Ada had a religious transformation and was coaxed into repenting of her previous conduct and making Annabella her executor. She lost contact with her husband after confessing something to him on 30 August which caused him to abandon her bedside; it is not known what she told him. She was buried, at her request, next to her father at the Church of St. Mary Magdalene in Hucknall, Nottinghamshire.
Work
Throughout her life, Lovelace was strongly interested in scientific developments and fads of the day, including phrenology and mesmerism. After her work with Babbage, Lovelace continued to work on other projects. In 1844, she commented to a friend Woronzow Greig about her desire to create a mathematical model for how the brain gives rise to thoughts and nerves to feelings ("a calculus of the nervous system"). She never achieved this, however. In part, her interest in the brain came from a long-running preoccupation, inherited from her mother, about her "potential" madness. As part of her research into this project, she visited the electrical engineer Andrew Crosse in 1844 to learn how to carry out electrical experiments. In the same year, she wrote a review of a paper by Baron Karl von Reichenbach, Researches on Magnetism, but this was not published and does not appear to have progressed past the first draft. In 1851, the year before her cancer struck, she wrote to her mother mentioning "certain productions" she was working on regarding the relation of maths and music.
Lovelace first met Charles Babbage in June 1833, through their mutual friend Mary Somerville. Later that month, Babbage invited Lovelace to see the prototype for his difference engine. She became fascinated with the machine and used her relationship with Somerville to visit Babbage as often as she could. Babbage was impressed by Lovelace's intellect and analytic skills, and in an 1843 letter called her "The Enchantress of Number".
In 1840, Babbage was invited to give a seminar at the University of Turin about his Analytical Engine. Luigi Menabrea, a young Italian engineer and the future Prime Minister of Italy, transcribed Babbage's lecture into French, and this transcript was subsequently published in the Bibliothèque universelle de Genève in October 1842. Babbage's friend Charles Wheatstone commissioned Lovelace to translate Menabrea's paper into English.
During a nine-month period in 1842–43, Lovelace translated Menabrea's article. She then augmented the paper with notes, which were added to the translation. The translation and notes were then published in the September 1843 edition of Taylor's Scientific Memoirs under the initialism AAL.
Explaining the Analytical Engine's function was a difficult task; many other scientists did not grasp the concept and the British establishment had shown little interest in it. Lovelace's notes even had to explain how the Analytical Engine differed from the original Difference Engine. Her work was well received at the time; the scientist Michael Faraday described himself as a supporter of her writing.
Lovelace and Babbage had a minor falling out when the papers were published, when he tried to leave his own statement (criticising the government's treatment of his Engine) as an unsigned preface, which could have been mistakenly interpreted as a joint declaration. When Taylor's Scientific Memoirs ruled that the statement should be signed, Babbage wrote to Lovelace asking her to withdraw the paper. This was the first that she knew he was leaving it unsigned, and she wrote back refusing to withdraw the paper. The historian Benjamin Woolley theorised that "His actions suggested he had so enthusiastically sought Ada's involvement, and so happily indulged her ... because of her 'celebrated name'." Their friendship recovered, and they continued to correspond. On 12 August 1851, when she was dying of cancer, Lovelace wrote to him asking him to be her executor, though this letter did not give him the necessary legal authority. Part of the terrace at Worthy Manor was known as Philosopher's Walk; it was there that Lovelace and Babbage were reputed to have walked while discussing mathematical principles.
First published computer program
The notes, around three times longer than the article itself, are important in the early history of computers, especially since the seventh one described, in complete detail, a method for calculating a sequence of Bernoulli numbers using the Analytical Engine, which might have run correctly had it ever been built. Though Babbage's personal notes from 1837 to 1840 contain the first programs for the engine, the algorithm in Note G is often called the first published computer program. The engine was never completed and so the program was never tested.
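In modern notation, Note G tabulates the Bernoulli numbers through the recurrence sum over j from 0 to m of C(m+1, j)·B_j = 0 (with B_0 = 1), solved for B_m at each step. The short Python sketch below is a present-day illustration of that same computation, not a transcription of Lovelace's diagram-style program; it uses the modern convention B_1 = −1/2, whereas Note G indexes and signs the numbers differently, and the function name is ours.

from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    # Exact B_0..B_n from the recurrence sum_{j<=m} C(m+1, j) * B_j = 0.
    B = [Fraction(1)]
    for m in range(1, n + 1):
        acc = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-acc / (m + 1))
    return B

print(bernoulli_numbers(8))  # B_1 = -1/2, B_2 = 1/6, B_4 = -1/30, ...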
In 1953, more than a century after her death, Ada Lovelace's notes on Babbage's Analytical Engine were republished as an appendix to B. V. Bowden's Faster than Thought: A Symposium on Digital Computing Machines. The engine has now been recognised as an early model for a computer and her notes as a description of a computer and software.
Controversy over contribution
Based on this work, Lovelace is often called the first computer programmer and her method has been called the world's first computer program.
Eugene Eric Kim and Betty Alexandra Toole consider it "incorrect" to regard Lovelace as the first computer programmer. Babbage claims credit in his autobiography for the algorithm in Note G, and regardless of the extent of Lovelace's contribution to it, she was not the very first person to write a program for the Analytical Engine, as Babbage had written the initial programs for it, although the majority were never published. Bromley notes several dozen sample programs prepared by Babbage between 1837 and 1840, all substantially predating Lovelace's notes. Dorothy K. Stein regards Lovelace's notes as "more a reflection of the mathematical uncertainty of the author, the political purposes of the inventor, and, above all, of the social and cultural context in which it was written, than a blueprint for a scientific development".
Allan G. Bromley elaborated on this assessment in his 1990 article "Difference and Analytical Engines".
Bruce Collier wrote that Lovelace "made a considerable contribution to publicizing the Analytical Engine, but there is no evidence that she advanced the design or theory of it in any way".
Doron Swade has said that Ada only published the first computer program instead of actually writing it, but agrees that she was the only person to see the potential of the analytical engine as a machine capable of expressing entities other than quantities.
In his book, Idea Makers, Stephen Wolfram defends Lovelace's contributions. While acknowledging that Babbage wrote several unpublished algorithms for the Analytical Engine prior to Lovelace's notes, Wolfram argues that "there's nothing as sophisticated—or as clean—as Ada's computation of the Bernoulli numbers. Babbage certainly helped and commented on Ada's work, but she was definitely the driver of it." Wolfram then suggests that Lovelace's main achievement was to distill from Babbage's correspondence "a clear exposition of the abstract operation of the machine—something which Babbage never did".
Insight into potential of computing devices
In her notes, Ada Lovelace emphasised the difference between the Analytical Engine and previous calculating machines, particularly its ability to be programmed to solve problems of any complexity. She realised that the potential of the device extended far beyond mere number crunching.
This analysis was an important development from previous ideas about the capabilities of computing devices and anticipated the implications of modern computing one hundred years before they were realised. Walter Isaacson ascribes Ada's insight regarding the application of computing to any process based on logical symbols to an observation about textiles: "When she saw some mechanical looms that used punchcards to direct the weaving of beautiful patterns, it reminded her of how Babbage's engine used punched cards to make calculations." This insight is seen as significant by writers such as Betty Toole and Benjamin Woolley, as well as the programmer John Graham-Cumming, whose project Plan 28 has the aim of constructing the first complete Analytical Engine.
According to the historian of computing and Babbage specialist Doron Swade:
Ada saw something that Babbage in some sense failed to see. In Babbage's world his engines were bound by number...What Lovelace saw...was that number could represent entities other than quantity. So once you had a machine for manipulating numbers, if those numbers represented other things, letters, musical notes, then the machine could manipulate symbols of which number was one instance, according to rules. It is this fundamental transition from a machine which is a number cruncher to a machine for manipulating symbols according to rules that is the fundamental transition from calculation to computation—to general-purpose computation—and looking back from the present high ground of modern computing, if we are looking and sifting history for that transition, then that transition was made explicitly by Ada in that 1843 paper.
Note G also contains Lovelace's dismissal of artificial intelligence. She wrote that "The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths." This objection has been the subject of much debate and rebuttal, for example by Alan Turing in his paper "Computing Machinery and Intelligence". Most modern computer scientists argue that this view is outdated and that computer software can develop in ways that cannot necessarily be anticipated by programmers.
Distinction between mechanism and logical structure
Lovelace recognized the difference between the details of the computing mechanism, as covered in an 1834 article on the Difference Engine, and the logical structure of the Analytical Engine, on which the article she was reviewing dwelt. She noted that different specialists might be required in each area.
The [1834 article] chiefly treats it under its mechanical aspect, entering but slightly into the mathematical principles of which that engine is the representative, but giving, in considerable length, many details of the mechanism and contrivances by means of which it tabulates the various orders of differences. M. Menabrea, on the contrary, exclusively developes the analytical view; taking it for granted that mechanism is able to perform certain processes, but without attempting to explain how; and devoting his whole attention to explanations and illustrations of the manner in which analytical laws can be so arranged and combined as to bring every branch of that vast subject within the grasp of the assumed powers of mechanism. It is obvious that, in the invention of a calculating engine, these two branches of the subject are equally essential fields of investigation... They are indissolubly connected, though so different in their intrinsic nature, that perhaps the same mind might not be likely to prove equally profound or successful in both.
Commemoration
The computer language Ada, created on behalf of the United States Department of Defense, was named after Lovelace. The reference manual for the language was approved on 10 December 1980 and the Department of Defense Military Standard for the language, MIL-STD-1815, was given the number of the year of her birth.
In 1981, the Association for Women in Computing inaugurated its Ada Lovelace Award. The British Computer Society (BCS) awards the Lovelace Medal, and in 2008 initiated an annual competition for women students. BCSWomen sponsors the Lovelace Colloquium, an annual conference for women undergraduates. Ada College is a further-education college in Tottenham Hale, London, focused on digital skills.
Ada Lovelace Day is an annual event celebrated on the second Tuesday of October, which began in 2009. Its goal is to "... raise the profile of women in science, technology, engineering, and maths," and to "create new role models for girls and women" in these fields.
The Ada Initiative was a non-profit organisation dedicated to increasing the involvement of women in the free culture and open source movements. A specialist technical college for pupils aged 16–19 in England is named "Ada, the National College for Digital Skills"; it has campuses in Whitechapel, Tottenham Hale and Manchester. The building of the department of Engineering Mathematics at the University of Bristol is called the Ada Lovelace Building. The Engineering in Computer Science and Telecommunications College building in Zaragoza University is called the Ada Byron Building. The computer centre in the village of Porlock, near where Lovelace lived, is named after her. Ada Lovelace House is a council-owned building in Kirkby-in-Ashfield, Nottinghamshire, near where Lovelace spent her infancy.
In 2012, a Google Doodle and blog post honoured her on her birthday. In 2013, Ada Developers Academy was founded and named after her. Its mission is to diversify tech by providing women and gender-diverse people the skills, experience, and community support to become professional software developers to change the face of tech. On 17 September 2013, the BBC Radio 4 biography programme Great Lives devoted an episode to Ada Lovelace; she was sponsored by TV presenter Konnie Huq.
As of November 2015, all new British passports have included an illustration of Lovelace and Babbage. In 2017, a Google Doodle honoured her with other women on International Women's Day. On 2 February 2018, Satellogic, a high-resolution Earth observation imaging and analytics company, launched a ÑuSat type micro-satellite named in honour of Ada Lovelace. In March 2018, The New York Times published a belated obituary for Ada Lovelace.
On 27 July 2018, Senator Ron Wyden submitted, in the United States Senate, the designation of 9 October 2018 as National Ada Lovelace Day: "To honor the life and contributions of Ada Lovelace as a leading woman in science and mathematics". The resolution (S.Res.592) was considered, and agreed to without amendment and with a preamble by unanimous consent. In November 2020 it was announced that Trinity College Dublin, whose library had previously held forty busts, all of them of men, was commissioning four new busts of women, one of whom was to be Lovelace.
In March 2022, a statue of Ada Lovelace was installed at the site of the former Ergon House in the City of Westminster, London, honouring its scientific history. The redevelopment was part of a complex with Imperial Chemical House. The statue was sculpted by Etienne and Mary Millner and based on the portrait by Margaret Sarah Carpenter. The sculpture was unveiled on International Women's Day, 2022. It stands on the 7th floor of Millbank Quarter overlooking the junction of Dean Bradley Street and Horseferry Road.
In September 2022, Nvidia announced the Ada Lovelace graphics processing unit (GPU) microarchitecture. In July 2023, the Royal Mint issued four commemorative £2 coins in various metals to "honour the innovative contributions of computer science visionary Ada Lovelace and her legacy as a female trailblazer."
Bicentenary (2015)
The bicentenary of Ada Lovelace's birth was celebrated with a number of events, including:
The Ada Lovelace Bicentenary Lectures on Computability, Israel Institute for Advanced Studies, 20 December 2015 – 31 January 2016.
Ada Lovelace Symposium, University of Oxford, 13–14 October 2015.
Ada.Ada.Ada, a one-woman show about the life and work of Ada Lovelace (using an LED dress), premiered at Edinburgh International Science Festival on 11 April 2015, and continued touring internationally to promote diversity in STEM at technology conferences, businesses, government and educational organisations.
Special exhibitions were displayed by the Science Museum in London, England and the Weston Library (part of the Bodleian Library) in Oxford, England.
In popular culture
Novels and plays
Lovelace is portrayed in Romulus Linney's 1977 play Childe Byron. In Tom Stoppard's 1993 play Arcadia, the precocious teenage genius Thomasina Coverly—a character "apparently based" on Ada Lovelace (the play also involves Lord Byron)—comes to understand chaos theory, and theorises the second law of thermodynamics, before either is officially recognised.
In the 1990 steampunk novel The Difference Engine by William Gibson and Bruce Sterling, Lovelace delivers a lecture on the "punched cards" programme which proves Gödel's incompleteness theorems decades before their actual discovery. Lovelace and Mary Shelley as teenagers are the central characters in Jordan Stratford's steampunk series, The Wollstonecraft Detective Agency.
Lovelace features in John Crowley's 2005 novel, Lord Byron's Novel: The Evening Land, as an unseen character whose personality is forcefully depicted in her annotations and anti-heroic efforts to archive her father's lost novel.
The 2015 play Ada and the Engine by Lauren Gunderson portrays Lovelace and Charles Babbage in unrequited love, and it imagines a post-death meeting between Lovelace and her father. Lovelace and Babbage are also the main characters in Sydney Padua's webcomic and graphic novel The Thrilling Adventures of Lovelace and Babbage. The comic features extensive footnotes on the history of Ada Lovelace, and many lines of dialogue are drawn from actual correspondence.
Film and television
In the 1997 film Conceiving Ada, a computer scientist obsessed with Ada finds a way of communicating with her in the past by means of "undying information waves".
Lovelace, identified as Ada Augusta Byron, is portrayed by Lily Lesser in the second series of The Frankenstein Chronicles aired on ITV in 2017. She is employed as an "analyst" to provide the workings of a life-sized humanoid automaton. The brass workings of the machine are reminiscent of Babbage's analytical engine. Her employment is described as keeping her occupied until she returns to her studies in advanced mathematics.
Lovelace and Babbage appear as characters in the second season of the ITV series Victoria (2017). Emerald Fennell portrays Lovelace in the episode, "The Green-Eyed Monster."
"Lovelace" is the name of the operating system designed by the character Cameron Howe in Halt and Catch Fire, which aired on AMC in the US in 2015.
Lovelace features as a character in "Spyfall, Part 2", the second episode of Doctor Who, series 12, which first aired on BBC One on 5 January 2020. The character was portrayed by Sylvie Briggs, alongside characterisations of Charles Babbage and Noor Inayat Khan.
Computing and STEM
Ada Lovelace Day
A computer language, initially developed by the US Department of Defense, is called Ada.
The Lovelace Medal awarded by the British Computer Society (BCS).
The Lovelace Lectures at the BCS sponsored by the Alan Turing Institute.
The Lovelace Lectures at Durham University.
The Ada Lovelace Award awarded by the Association for Women in Computing
The Ada Initiative supporting open technology and women is named after her.
Ada Lovelace Building, the engineering mathematics building at the University of Bristol.
Ada Lovelace Building, in Exeter Science Park.
Ada Byron Building, in the Department of Computer Science and Systems Engineering at the University of Zaragoza.
Ada Byron Research Centre at the University of Málaga, Andalusia.
Ada Lovelace Institute, a think tank dedicated to ensuring data and AI work for people and society.
Ada Lovelace Centre for Digital Scholarship, Oxford
Ada Lovelace Center for Digital Humanities at the FU Berlin.
ADA Lovelace Centre for Analytics, Data, Applications at Fraunhofer IIS originally called the ADA Lovelace Centre for Artificial Intelligence.
Ada Lovelace Excellence Scholarship at the University of Southampton.
Adafruit Industries
Ada Lovelace Centre, part of the Science and Technology Facilities Council, a UK government agency that carries out research in science and engineering.
The Cardano cryptocurrency platform, launched in 2017, uses Ada as the name for the cryptocurrency and Lovelace as the smallest sub-unit of an Ada.
Ada, an artwork incorporating artificial intelligence, housed at Microsoft's Building 99.
Ada Lovelace is the code name of Nvidia's GPU architecture used in its RTX 4000 series; it is the first Nvidia architecture to feature both a first and a last name.
Ada Byron University Programming Contest at the Polytechnic University of Valencia.
Other
A green plaque is located on Fordhook Avenue, at the corner of 5 Station Parade, Uxbridge Road, Ealing.
Blue plaques are at Mallory Park and St James's Square.
Ada Lovelace C of E High School in Greenford, specialising in music, digital technologies and languages.
Ada Lovelace House, council offices in Nottinghamshire, later proposed to be let to small business.
Ada Byron King Building at Nottingham Trent University.
Ada Lovelace Suite at Seaham Hall.
The Lovelace Memorial is a Grade II Listed monument in Kirkby Mallory.
A clone of Ada Lovelace appears in the 2023 video game Starfield.
Ada Lovelace is a playable leader in Sid Meier's Civilization VII.
Publications
Lovelace, Ada King. Ada, the Enchantress of Numbers: A Selection from the Letters of Lord Byron's Daughter and her Description of the First Computer. Mill Valley, CA: Strawberry Press, 1992.
Also available on Wikisource: The Menabrea article, The notes by Ada Lovelace.
Publication history
Six copies of the 1843 first edition of Sketch of the Analytical Engine with Ada Lovelace's "Notes" have been located. Three are held at Harvard University, one at the University of Oklahoma, and one at the United States Air Force Academy. On 20 July 2018, the sixth copy was sold at auction to an anonymous buyer for £95,000. A digital facsimile of one of the copies in the Harvard University Library is available online.
In December 2016, a letter written by Ada Lovelace was forfeited by Martin Shkreli to the New York State Department of Taxation and Finance for unpaid taxes owed by Shkreli.
See also
Ai-Da – humanoid robot, completed in 2019
Code: Debugging the Gender Gap
List of pioneers in computer science
Timeline of women in science
Women in computing
Women in STEM fields
Explanatory notes
References
Citations
General and cited sources
Miller, Clair Cain. "Ada Lovelace, 1815–1852," New York Times, 8 March 2018.
Further reading
Jennifer Chiaverini, 2017, Enchantress of Numbers, Dutton, 426 pp.
Christopher Hollings, Ursula Martin, and Adrian Rice, 2018, Ada Lovelace: The Making of a Computer Scientist, Bodleian Library, 114 pp.
Miranda Seymour, 2018, In Byron's Wake: The Turbulent Lives of Byron's Wife and Daughter: Annabella Milbanke and Ada Lovelace, Pegasus, 547 pp.
Jenny Uglow (22 November 2018), "Stepping Out of Byron's Shadow", The New York Review of Books, vol. LXV, no. 18, pp. 30–32.
|
;1815 births;1852 deaths;19th-century British inventors;19th-century British women mathematicians;19th-century English mathematicians;19th-century English nobility;19th-century English women writers;19th-century English writers;19th-century women inventors;Ada;Ada (programming language);Amateur mathematicians;British countesses;British women computer scientists;British women inventors;Burials at the Church of St Mary Magdalene, Hucknall;Burials in Nottinghamshire;Computer designers;Daughters of barons;Deaths from cancer in England;Deaths from uterine cancer in the United Kingdom;English computer programmers;English people of Scottish descent;English women poets;Family of Lord Byron;Godwin family;Mathematicians from London;Women of the Victorian era
|
https://en.wikipedia.org/wiki/Analog%20signal
|
An analog signal (American English) or analogue signal (British and Commonwealth English) is any continuous-time signal representing some other quantity, i.e., analogous to another quantity. For example, in an analog audio signal, the instantaneous signal voltage varies continuously with the pressure of the sound waves.
In contrast, a digital signal represents the original time-varying quantity as a sampled sequence of quantized values. Digital sampling imposes some bandwidth and dynamic range constraints on the representation and adds quantization noise.
The term analog signal usually refers to electrical signals; however, mechanical, pneumatic, hydraulic, and other systems may also convey or be considered analog signals.
Representation
An analog signal uses some property of the medium to convey the signal's information. For example, an aneroid barometer uses rotary position as the signal to convey pressure information. In an electrical signal, the voltage, current, or frequency of the signal may be varied to represent the information.
Any information may be conveyed by an analog signal; such a signal may be a measured response to changes in a physical variable, such as sound, light, temperature, position, or pressure. The physical variable is converted to an analog signal by a transducer. For example, sound striking the diaphragm of a microphone induces corresponding fluctuations in the current produced by a coil in an electromagnetic microphone or the voltage produced by a condenser microphone. The voltage or the current is said to be an analog of the sound.
Noise
An analog signal is subject to electronic noise and distortion introduced by communication channels, recording and signal processing operations, which can progressively degrade the signal-to-noise ratio (SNR). As the signal is transmitted, copied, or processed, the unavoidable noise introduced in the signal path will accumulate as a generation loss, progressively and irreversibly degrading the SNR, until in extreme cases, the signal can be overwhelmed. Noise can show up as hiss and intermodulation distortion in audio signals, or snow in video signals. Generation loss is irreversible as there is no reliable method to distinguish the noise from the signal.
Converting an analog signal to digital form introduces a low-level quantization noise into the signal due to finite resolution of digital systems. Once in digital form, the signal can be transmitted, stored, and processed without introducing additional noise or distortion using error detection and correction.
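The magnitude of that quantization noise is easy to demonstrate numerically. The Python sketch below (an illustration with names of our choosing, not a reference implementation) uniformly quantizes a full-scale sine wave at several bit depths and compares the measured signal-to-noise ratio against the familiar rule of thumb SNR ≈ 6.02·b + 1.76 dB for a b-bit converter.

import math

def quantization_snr_db(bits, n=100000):
    # Quantize one period of a full-scale sine to 2**bits uniform levels
    # and return the measured signal-to-quantization-noise ratio in dB.
    step = 2.0 / (2 ** bits)            # amplitude range spans [-1, 1]
    signal = noise = 0.0
    for k in range(n):
        x = math.sin(2 * math.pi * k / n)
        q = round(x / step) * step      # mid-tread uniform quantizer
        signal += x * x
        noise += (x - q) ** 2
    return 10 * math.log10(signal / noise)

for b in (8, 12, 16):
    print(b, round(quantization_snr_db(b), 1), round(6.02 * b + 1.76, 1))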
Noise accumulation in analog systems can be minimized by electromagnetic shielding, balanced lines, low-noise amplifiers and high-quality electrical components.
See also
Amplifier
Analog computer
Analog device
Analog signal processing
Magnetic tape
Preamplifier
References
Further reading
|
Analog circuits;Electronic design;Television terminology;Video signal
|
https://en.wikipedia.org/wiki/Alcohol%20%28chemistry%29
|
In chemistry, an alcohol is a type of organic compound that carries at least one hydroxyl (OH) functional group bound to a saturated carbon atom. Alcohols range from the simple, like methanol and ethanol, to the complex, like sugar alcohols and cholesterol. The presence of an OH group strongly modifies the properties of hydrocarbons, conferring hydrophilic (water-loving) properties. The OH group provides a site at which many reactions can occur.
History
The flammable nature of the exhalations of wine was already known to ancient natural philosophers such as Aristotle (384–322 BCE), Theophrastus (–287 BCE), and Pliny the Elder (23/24–79 CE). However, this did not immediately lead to the isolation of alcohol, despite the development of more advanced distillation techniques in second- and third-century Roman Egypt. An important recognition, first found in one of the writings attributed to Jābir ibn Ḥayyān (ninth century CE), was that by adding salt to boiling wine, which increases the wine's relative volatility, the flammability of the resulting vapors may be enhanced. The distillation of wine is attested in Arabic works attributed to al-Kindī (–873 CE) and to al-Fārābī (–950), and in the 28th book of al-Zahrāwī's (Latin: Abulcasis, 936–1013) Kitāb al-Taṣrīf (later translated into Latin as Liber servatoris). In the twelfth century, recipes for the production of aqua ardens ("burning water", i.e., alcohol) by distilling wine with salt started to appear in a number of Latin works, and by the end of the thirteenth century, it had become a widely known substance among Western European chemists.
The works of Taddeo Alderotti (1223–1296) describe a method for concentrating alcohol involving repeated fractional distillation through a water-cooled still, by which an alcohol purity of 90% could be obtained. The medicinal properties of ethanol were studied by Arnald of Villanova (1240–1311 CE) and John of Rupescissa (–1366), the latter of whom regarded it as a life-preserving substance able to prevent all diseases (the aqua vitae or "water of life", also called by John the quintessence of wine).
Nomenclature
Etymology
The word "alcohol" derives from the Arabic kohl (), a powder used as an eyeliner. The first part of the word () is the Arabic definite article, equivalent to the in English. The second part of the word () has several antecedents in Semitic languages, ultimately deriving from the Akkadian (), meaning stibnite or antimony.
Like its antecedents in Arabic and older languages, the term alcohol was originally used for the very fine powder produced by the sublimation of the natural mineral stibnite to form antimony trisulfide (Sb2S3). It was considered to be the essence or "spirit" of this mineral. It was used as an antiseptic, eyeliner, and cosmetic. Later the meaning of alcohol was extended to distilled substances in general, and then narrowed again to ethanol, when "spirits" was a synonym for hard liquor.
Paracelsus and Libavius both used the term alcohol to denote a fine powder, the latter speaking of an alcohol derived from antimony. At the same time Paracelsus also used the word for a volatile liquid; alcool or alcool vini occurs often in his writings.
Bartholomew Traheron, in his 1543 translation of John of Vigo, introduces the word as a term used by "barbarous" authors for "fine powder." Vigo wrote: "the barbarous auctours use alcohol, or (as I fynde it sometymes wryten) alcofoll, for moost fine poudre."
The 1657 Lexicon Chymicum by William Johnson glosses the word as "antimonium sive stibium". By extension, the word came to refer to any fluid obtained by distillation, including "alcohol of wine", the distilled essence of wine; both Libavius in Alchymia (1594) and Johnson apply alcohol vini in this sense. The word's meaning became restricted to "spirit of wine" (the chemical known today as ethanol) in the 18th century and was extended to the class of substances so-called as "alcohols" in modern chemistry after 1850.
The term ethanol was invented in 1892, blending "ethane" with the "-ol" ending of "alcohol", which was generalized as a libfix.
The term alcohol originally referred to the primary alcohol ethanol (ethyl alcohol), which is used as a drug and is the main alcohol present in alcoholic drinks.
The suffix -ol appears in the International Union of Pure and Applied Chemistry (IUPAC) chemical name of all substances where the hydroxyl group is the functional group with the highest priority. When a higher priority group is present in the compound, the prefix hydroxy- is used in its IUPAC name. The suffix -ol in non-IUPAC names (such as paracetamol or cholesterol) also typically indicates that the substance is an alcohol. However, some compounds that contain hydroxyl functional groups have trivial names that do not include the suffix -ol or the prefix hydroxy-, e.g. the sugars glucose and sucrose.
Systematic names
IUPAC nomenclature is used in scientific publications, and in writings where precise identification of the substance is important. In naming simple alcohols, the name of the alkane chain loses the terminal e and adds the suffix -ol, e.g., as in "ethanol" from the alkane chain name "ethane". When necessary, the position of the hydroxyl group is indicated by a number between the alkane name and the -ol: propan-1-ol for CH3CH2CH2OH, propan-2-ol for CH3CH(OH)CH3. If a higher priority group is present (such as an aldehyde, ketone, or carboxylic acid), then the prefix hydroxy- is used, e.g., as in 1-hydroxy-2-propanone (CH3C(O)CH2OH). Compounds having more than one hydroxy group are called polyols. They are named using the suffixes -diol, -triol, etc., following a list of the position numbers of the hydroxyl groups, as in propane-1,2-diol for CH3CH(OH)CH2OH (propylene glycol).
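The mechanical part of this naming rule (drop the terminal e, add -ol, insert a locant when needed) is simple enough to sketch in code. The Python toy below is illustrative only; the dictionary and function are hypothetical helpers covering unbranched mono-alcohols, not a real IUPAC naming engine.

ALKANES = {1: "methane", 2: "ethane", 3: "propane", 4: "butane", 5: "pentane"}

def simple_alcohol_name(carbons, position=None):
    # Drop the terminal 'e' of the alkane name and append '-ol',
    # inserting the hydroxyl locant when the position is ambiguous.
    stem = ALKANES[carbons][:-1]
    if position is None or carbons <= 2:
        return stem + "ol"                 # e.g. "ethanol"
    return f"{stem}-{position}-ol"         # e.g. "propan-2-ol"

print(simple_alcohol_name(2))      # ethanol
print(simple_alcohol_name(3, 1))   # propan-1-ol
print(simple_alcohol_name(3, 2))   # propan-2-ol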
In cases where the hydroxy group is bonded to an sp2 carbon on an aromatic ring, the molecule is classified separately as a phenol and is named using the IUPAC rules for naming phenols. Phenols have distinct properties and are not classified as alcohols.
Common names
In other less formal contexts, an alcohol is often called with the name of the corresponding alkyl group followed by the word "alcohol", e.g., methyl alcohol, ethyl alcohol. Propyl alcohol may be n-propyl alcohol or isopropyl alcohol, depending on whether the hydroxyl group is bonded to the end or middle carbon on the straight propane chain. As described under systematic naming, if another group on the molecule takes priority, the alcohol moiety is often indicated using the "hydroxy-" prefix.
In archaic nomenclature, alcohols can be named as derivatives of methanol using "-carbinol" as the ending. For instance, (CH3)3COH can be named trimethylcarbinol.
Primary, secondary, and tertiary
Alcohols are classified into primary, secondary (sec-, s-), and tertiary (tert-, t-), based upon the number of carbon atoms connected to the carbon atom that bears the hydroxyl functional group. The respective numeric shorthands 1°, 2°, and 3° are sometimes used in informal settings. Primary alcohols have the general formula RCH2OH. The simplest primary alcohol is methanol (CH3OH), for which R = H, and the next is ethanol (CH3CH2OH), for which R = CH3, the methyl group. Secondary alcohols are those of the form RR'CHOH, the simplest of which is 2-propanol (CH3CH(OH)CH3). For the tertiary alcohols, the general form is RR'R"COH. The simplest example is tert-butanol (2-methylpropan-2-ol), for which each of R, R', and R" is CH3. In these shorthands, R, R', and R" represent substituents, alkyl or other attached, generally organic, groups.
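Because the classification depends only on how many carbon atoms neighbour the hydroxyl-bearing (carbinol) carbon, it can be illustrated with a toy molecular graph. The Python sketch below is a minimal illustration under that assumption; the adjacency-map representation and function name are ours, not a real cheminformatics tool.

def classify_alcohol(bonds, carbinol):
    # bonds: atom name -> list of bonded atom names; names starting
    # with "C" are carbons. The class follows from the number of
    # carbon neighbours of the hydroxyl-bearing carbon.
    n = sum(1 for atom in bonds[carbinol] if atom.startswith("C"))
    return {0: "primary (methanol)", 1: "primary",
            2: "secondary", 3: "tertiary"}[n]

# 2-propanol: CH3-CH(OH)-CH3
propan_2_ol = {"C1": ["C2"], "C2": ["C1", "C3", "O"],
               "C3": ["C2"], "O": ["C2"]}
print(classify_alcohol(propan_2_ol, "C2"))   # secondary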
Examples
Applications
Alcohols have a long history of myriad uses. For simple mono-alcohols, which are the focus of this article, the following are the most important industrial alcohols:
methanol, mainly for the production of formaldehyde and as a fuel additive
ethanol, mainly for alcoholic beverages, as a fuel additive and solvent, and to sterilize hospital instruments
1-propanol, 1-butanol, and isobutyl alcohol for use as a solvent and precursor to solvents
C6–C11 alcohols used for plasticizers, e.g. in polyvinyl chloride
fatty alcohol (C12–C18), precursors to detergents
Methanol is the most common industrial alcohol, with about 12 million tons/y produced in 1980. The combined capacity of the other alcohols is about the same, distributed roughly equally.
Toxicity
Simple alcohols have low acute toxicity; doses of several milliliters are tolerated. For pentanols, hexanols, octanols, and longer alcohols, LD50 values range from 2 to 5 g/kg (rats, oral). Ethanol is less acutely toxic. All alcohols are mild skin irritants.
Methanol and ethylene glycol are more toxic than other simple alcohols. Their metabolism is affected by the presence of ethanol, which has a higher affinity for liver alcohol dehydrogenase. Because ethanol outcompetes methanol for the enzyme, methanol is not converted to its toxic oxidation products and is instead excreted intact in urine.
Physical properties
In general, the hydroxyl group makes alcohols polar. Those groups can form hydrogen bonds to one another and to most other compounds. Owing to the presence of the polar OH group, alcohols are more water-soluble than simple hydrocarbons. Methanol, ethanol, and propanol are miscible in water. 1-Butanol, with a four-carbon chain, is moderately soluble.
Because of hydrogen bonding, alcohols tend to have higher boiling points than comparable hydrocarbons and ethers. The boiling point of the alcohol ethanol is 78.29 °C, compared to 69 °C for the hydrocarbon hexane, and 34.6 °C for diethyl ether.
Occurrence in nature
Alcohols occur widely in nature, as derivatives of glucose such as cellulose and hemicellulose, and in phenols and their derivatives such as lignin. Starting from biomass, 180 billion tons/y of complex carbohydrates (sugar polymers) are produced commercially (as of 2014). Many other alcohols are pervasive in organisms, as manifested in other sugars such as fructose and sucrose, in polyols such as glycerol, and in some amino acids such as serine. Simple alcohols like methanol, ethanol, and propanol occur in modest quantities in nature, and are industrially synthesized in large quantities for use as chemical precursors, fuels, and solvents.
Production
Hydroxylation
Many alcohols are produced by hydroxylation, i.e., the installation of a hydroxy group using oxygen or a related oxidant. Hydroxylation is the means by which the body processes many poisons, converting lipophilic compounds into hydrophilic derivatives that are more readily excreted. Enzymes called hydroxylases and oxidases facilitate these conversions.
Many industrial alcohols, such as cyclohexanol for the production of nylon, are produced by hydroxylation.
Ziegler and oxo processes
In the Ziegler process, linear alcohols are produced from ethylene and triethylaluminium followed by oxidation and hydrolysis. An idealized synthesis of 1-octanol proceeds by chain growth, Al(C2H5)3 + 9 C2H4 → Al(C8H17)3, followed by oxidation to Al(OC8H17)3 and hydrolysis to give 3 C8H17OH and Al(OH)3.
The process generates a range of alcohols that are separated by distillation.
Many higher alcohols are produced by hydroformylation of alkenes followed by hydrogenation. When applied to a terminal alkene, as is common, one typically obtains a linear alcohol: RCH=CH2 + H2 + CO → RCH2CH2CHO, and then RCH2CH2CHO + H2 → RCH2CH2CH2OH.
Such processes give fatty alcohols, which are useful for detergents.
Hydration reactions
Some low molecular weight alcohols of industrial importance are produced by the addition of water to alkenes. Ethanol, isopropanol, 2-butanol, and tert-butanol are produced by this general method. Two implementations are employed, the direct and indirect methods. The direct method avoids the formation of stable intermediates, typically using acid catalysts. In the indirect method, the alkene is converted to the sulfate ester, which is subsequently hydrolyzed. The direct hydration uses ethylene (ethylene hydration) or other alkenes from cracking of fractions of distilled crude oil.
Hydration is also used industrially to produce the diol ethylene glycol from ethylene oxide.
Fermentation
Ethanol is obtained by fermentation of glucose (which is often obtained from starch) in the presence of yeast; carbon dioxide is cogenerated. Like ethanol, butanol can be produced by fermentation processes. Saccharomyces yeast are known to produce these higher alcohols at elevated fermentation temperatures. The bacterium Clostridium acetobutylicum can feed on cellulose to produce butanol on an industrial scale.
Substitution
Primary alkyl halides react with aqueous NaOH or KOH to give alcohols in nucleophilic aliphatic substitution. Secondary and especially tertiary alkyl halides will give the elimination (alkene) product instead. Grignard reagents react with carbonyl groups to give secondary and tertiary alcohols. Related reactions are the Barbier reaction and the Nozaki–Hiyama–Kishi reaction.
Reduction
Aldehydes or ketones are reduced with sodium borohydride or lithium aluminium hydride (after an acidic workup). Another reduction using aluminium isopropoxide is the Meerwein–Ponndorf–Verley reduction. Noyori asymmetric hydrogenation is the asymmetric reduction of β-keto-esters.
Hydrolysis
Alkenes engage in an acid-catalyzed hydration reaction, using concentrated sulfuric acid as a catalyst, that usually gives secondary or tertiary alcohols. For example, hydration of propene gives the secondary alcohol 2-propanol: CH3CH=CH2 + H2O → CH3CH(OH)CH3.
The hydroboration-oxidation and oxymercuration-reduction of alkenes are more reliable in organic synthesis. Alkenes react with N-bromosuccinimide and water in halohydrin formation reaction. Amines can be converted to diazonium salts, which are then hydrolyzed.
Reactions
Deprotonation
With aqueous pKa values of around 16–19, alcohols are, in general, slightly weaker acids than water. With strong bases such as sodium hydride or sodium they form salts called alkoxides, with the general formula RO−M+ (where R is an alkyl group and M is a metal).
The acidity of alcohols is strongly affected by solvation. In the gas phase, alcohols are more acidic than in water. In DMSO, alcohols (and water) have a pKa of around 29–32. As a consequence, alkoxides (and hydroxide) are powerful bases and nucleophiles (e.g., for the Williamson ether synthesis) in this solvent. In particular, such bases in DMSO can be used to generate significant equilibrium concentrations of acetylide ions through the deprotonation of alkynes (see Favorskii reaction).
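To put these pKa values in perspective, the Henderson–Hasselbalch relation gives the fraction of an alcohol deprotonated at a given pH. The brief Python sketch below is illustrative; the pKa used is a representative textbook value, not a measurement from this article.

def fraction_deprotonated(pka, ph):
    # Henderson-Hasselbalch: [RO-] / ([RO-] + [ROH]) at a given pH.
    ratio = 10 ** (ph - pka)      # [RO-]/[ROH]
    return ratio / (1 + ratio)

# Ethanol (pKa ~ 16) in roughly 1 M aqueous hydroxide (pH ~ 14):
print(fraction_deprotonated(16.0, 14.0))   # ~0.0099, i.e. about 1%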
Nucleophilic substitution
Tertiary alcohols react with hydrochloric acid to produce tertiary alkyl chloride. Primary and secondary alcohols are converted to the corresponding chlorides using thionyl chloride and various phosphorus chloride reagents.
Primary and secondary alcohols likewise convert to alkyl bromides using phosphorus tribromide, for example: 3 ROH + PBr3 → 3 RBr + H3PO3.
In the Barton–McCombie deoxygenation an alcohol is deoxygenated to an alkane with tributyltin hydride or a trimethylborane-water complex in a radical substitution reaction.
Dehydration
The oxygen atom of an alcohol has lone pairs of nonbonded electrons that render it weakly basic in the presence of strong acids such as sulfuric acid. For example, methanol is protonated to give the methyloxonium ion: CH3OH + H2SO4 ⇌ CH3OH2+ + HSO4−.
Upon treatment with strong acids, alcohols undergo the E1 elimination reaction to produce alkenes. The reaction, in general, obeys Zaytsev's rule, which states that the most stable (usually the most substituted) alkene is formed. Tertiary alcohols are eliminated easily at just above room temperature, but primary alcohols require a higher temperature.
Acid-catalyzed dehydration of ethanol produces ethylene: CH3CH2OH → CH2=CH2 + H2O.
A more controlled elimination reaction requires the formation of the xanthate ester.
Protonolysis
Tertiary alcohols react with strong acids to generate carbocations. The reaction is related to their dehydration, e.g. isobutylene from tert-butyl alcohol. A special kind of dehydration reaction involves triphenylmethanol and especially its amine-substituted derivatives. When treated with acid, these alcohols lose water to give stable carbocations, which are commercial dyes.
Esterification
Alcohols and carboxylic acids react in the so-called Fischer esterification. The reaction usually requires a catalyst, such as concentrated sulfuric acid: RCO2H + R'OH ⇌ RCO2R' + H2O.
Other types of ester are prepared in a similar manner; for example, tosyl (tosylate) esters are made by reaction of the alcohol with 4-toluenesulfonyl chloride in pyridine.
Oxidation
Primary alcohols (RCH2OH) can be oxidized either to aldehydes (RCHO) or to carboxylic acids (RCO2H). The oxidation of secondary alcohols (RR'CHOH) normally terminates at the ketone (RR'C=O) stage. Tertiary alcohols (RR'R"COH) are resistant to oxidation.
The direct oxidation of primary alcohols to carboxylic acids normally proceeds via the corresponding aldehyde, which is transformed via an aldehyde hydrate (RCH(OH)2) by reaction with water before it can be further oxidized to the carboxylic acid.
Reagents useful for the transformation of primary alcohols to aldehydes are normally also suitable for the oxidation of secondary alcohols to ketones. These include Collins reagent and Dess–Martin periodinane. The direct oxidation of primary alcohols to carboxylic acids can be carried out using potassium permanganate or the Jones reagent.
See also
Beer chemistry
Enol
Ethanol fuel
Fatty alcohol
Index of alcohol-related articles
List of alcohols
Lucas test
Polyol
Rubbing alcohol
Sugar alcohol
Transesterification
Wine chemistry
Citations
General references
|
;Addiction;Antiseptics;Functional groups;Organic chemistry
|
https://en.wikipedia.org/wiki/Anarcho-capitalism
|
Anarcho-capitalism (colloquially: ancap or an-cap) is a political philosophy and economic theory that advocates for the abolition of centralized states in favor of stateless societies, where systems of private property are enforced by private agencies. Anarcho-capitalists argue that society can self-regulate and civilize through the voluntary exchange of goods and services. This would ideally result in a voluntary society based on concepts such as the non-aggression principle, free markets and self-ownership. In the absence of statute, private defence agencies and/or insurance companies would operate competitively in a market and fulfill the roles of courts and the police, similar to a state apparatus. Some anarcho-capitalist philosophies understand control of private property as part of the self, and some permit voluntary slavery. The vast majority of anarcho‑capitalists deny this, and critics of capitalism argue that this minority opinion is not unique to anarcho-capitalists, but is an essential consequence of the capitalist contract theory (wage slavery).
According to its proponents, various historical theorists have espoused philosophies similar to anarcho-capitalism. While the earliest extant attestation of "anarchocapitalism" is in Karl Hess's essay "The Death of Politics" published by Playboy in March 1969, American economist Murray Rothbard was credited with coining the terms anarcho-capitalist and anarcho-capitalism in 1971. A leading figure in the 20th-century American libertarian movement, Rothbard synthesized elements from the Austrian School, classical liberalism and 19th-century American individualist anarchists and mutualists Lysander Spooner and Benjamin Tucker, while rejecting the labor theory of value. Rothbard's anarcho-capitalist society would operate under a mutually agreed-upon "legal code which would be generally accepted, and which the courts would pledge themselves to follow". This legal code would recognize contracts between individuals, private property, self-ownership and tort law in keeping with the non-aggression principle. Unlike a state, enforcement measures would only apply to those who initiated force or fraud. Rothbard views the power of the state as unjustified, arguing that it violates individual rights, reduces prosperity, and creates social and economic problems.
Anarcho-capitalists and right-libertarians cite several historical precedents of what they believe to be examples of quasi-anarcho-capitalism, including the Republic of Cospaia, Acadia, Anglo-Saxon England, Medieval Iceland, the American Old West and Gaelic Ireland, as well as merchant law, admiralty law, and early common law.
Anarcho-capitalism is distinguished from minarchism, which advocates a minimal governing body (typically a night-watchman state limited to protecting individuals from aggression and enforcing private property), and from Objectivism (a broader philosophy advocating a government limited in role, though not in size). Anarcho-capitalists consider themselves to be anarchists despite supporting private property and private institutions.
Classification
Anarcho-capitalism developed from the Austrian School, neoliberalism and individualist anarchism. Almost all anarchist movements do not consider anarcho-capitalism to be anarchist because it lacks the historically central anti-capitalist emphasis of anarchism. They also argue that anarchism is incompatible with capitalist structures. According to several scholars, anarcho-capitalism lies outside the tradition of the vast majority of anarchist schools of thought and is more closely affiliated with capitalism, right-libertarianism and neoliberalism. Traditionally, anarchists oppose and reject capitalism and consider "anarcho-capitalism" to be a contradiction in terms, although anarcho-capitalists and some right-libertarians consider anarcho-capitalism to be a form of anarchism.
According to the Encyclopædia Britannica, anarcho-capitalism is occasionally seen as part of the New Right.
Philosophy
Author J. Michael Oliver says that during the 1960s, a philosophical movement arose in the US that championed "reason, ethical egoism, and free-market capitalism". According to Oliver, anarcho-capitalism is a political theory which follows Objectivism, the philosophical system developed by Ayn Rand, but he acknowledges that his advocacy of anarcho-capitalism is "quite at odds with Rand's ardent defense of 'limited government'". Professor Lisa Duggan also says that Rand's anti-statist, pro-"free market" stances went on to shape the politics of anarcho-capitalism.
According to Patrik Schumacher, the political ideology and programme of anarcho-capitalism envisages the radicalization of the neoliberal "rollback of the state", and calls for the extension of "entrepreneurial freedom" and "competitive market rationality" to the point where the scope for private enterprise is all-encompassing and "leaves no space for state action whatsoever".
On the state
Anarcho-capitalists oppose the state and seek to privatize any useful service the government presently provides, such as education, infrastructure, or the enforcement of law. They see capitalism and the "free market" as the basis for a free and prosperous society. Murray Rothbard stated that the difference between free-market capitalism and state capitalism is the difference between "peaceful, voluntary exchange" and a "collusive partnership" between business and government that "uses coercion to subvert the free market". Rothbard argued that all government services, including defense, are inefficient because they lack a market-based pricing mechanism regulated by "the voluntary decisions of consumers purchasing services that fulfill their highest-priority needs" and by investors seeking the most profitable enterprises to invest in.
Maverick Edwards of Liberty University describes anarcho-capitalism as a political, social, and economic theory that positions markets as the central "governing body" and in which government no longer "grants" rights to its citizenry.
Non-aggression principle
Writer Stanisław Wójtowicz says that although anarcho-capitalists are against centralized states, they believe that all people would naturally share and agree to a specific moral theory based on the non-aggression principle. While the Friedmanian formulation of anarcho-capitalism is robust to the presence of violence and in fact assumes some degree of violence will occur, anarcho-capitalism as formulated by Rothbard and others holds strongly to the central libertarian non-aggression axiom, sometimes called the non-aggression principle. Rothbard wrote:
Rothbard's defense of the self-ownership principle stems from what he believed to be his falsification of all other alternatives, namely that either a group of people can own another group of people, or that no single person has full ownership over one's self. Rothbard dismisses these two cases on the basis that they cannot result in a universal ethic, i.e. a just natural law that can govern all people, independent of place and time. The only alternative that remains to Rothbard is self-ownership which he believes is both axiomatic and universal.
In general, the non-aggression axiom is described by Rothbard as a prohibition against the initiation of force, or the threat of force, against persons (in which he includes direct violence, assault and murder) or property (in which he includes fraud, burglary, theft and taxation). The initiation of force is usually referred to as aggression or coercion. The difference between anarcho-capitalists and other libertarians is largely one of how far they take this axiom. Minarchist libertarians such as libertarian political parties would retain the state in some smaller and less invasive form, keeping at the very least public police, courts, and military; others might make allowance for further government programs. In contrast, Rothbard rejects any level of "state intervention", defining the state as a coercive monopoly and as the only entity in human society, excluding acknowledged criminals, that derives its income entirely from coercion, in the form of taxation, which Rothbard describes as "compulsory seizure of the property of the State's inhabitants, or subjects."
Some anarcho-capitalists such as Rothbard accept the non-aggression axiom on an intrinsic moral or natural law basis. It is in terms of the non-aggression principle that Rothbard defined his interpretation of anarchism, "a system which provides no legal sanction for such aggression ['against person and property']"; and wrote that "what anarchism proposes to do, then, is to abolish the State, i.e. to abolish the regularized institution of aggressive coercion". In an interview published in the American libertarian journal The New Banner, Rothbard stated that "capitalism is the fullest expression of anarchism, and anarchism is the fullest expression of capitalism".
Property
Private property
Anarcho-capitalists postulate the privatization of everything, including cities with all their infrastructures, public spaces, streets and urban management systems.
Central to Rothbardian anarcho-capitalism are the concepts of self-ownership and original appropriation, which together combine personal and private property. Hans-Hermann Hoppe wrote:
Rothbard, however, rejected the Lockean proviso, following the rule of "first come, first served" without any consideration of how many resources are left for other individuals.
Anarcho-capitalists advocate private ownership of the means of production and the allocation of the product of labor created by workers within the context of wage labour and the free market – that is, through decisions made by property and capital owners, regardless of what an individual needs or does not need. Original appropriation allows an individual to claim any never-before-used resources, including land, and by improving or otherwise using them, to own them with the same "absolute right" as their own body, retaining those rights forever, regardless of whether the resource is still being used by them. According to Rothbard, property can only come about through labor; therefore, original appropriation of land is not legitimized by merely claiming it or building a fence around it. It is only by using land and by mixing one's labor with it that original appropriation is legitimized: "Any attempt to claim a new resource that someone does not use would have to be considered invasive of the property right of whoever the first user will turn out to be". Rothbard argued that the resource need not continue to be used in order for it to be the person's property as "for once his labor is mixed with the natural resource, it remains his owned land. His labor has been irretrievably mixed with the land, and the land is therefore his or his assigns' in perpetuity".
Rothbard also spoke about a theory of justice in property rights:
In Justice and Property Rights, Rothbard wrote that "any identifiable owner (the original victim of theft or his heir) must be accorded his property". In the case of slavery, Rothbard claimed that in many cases "the old plantations and the heirs and descendants of the former slaves can be identified, and the reparations can become highly specific indeed". Rothbard believed slaves rightfully own any land they were forced to work on under the homestead principle. If property is held by the state, Rothbard advocated its confiscation and "return to the private sector", writing that "any property in the hands of the State is in the hands of thieves, and should be liberated as quickly as possible". Rothbard proposed that state universities be seized by the students and faculty under the homestead principle. Rothbard also supported the expropriation of nominally "private property" if it is the result of state-initiated force, such as businesses that receive grants and subsidies. Rothbard further proposed that businesses that receive at least 50% of their funding from the state be confiscated by the workers, writing: "What we libertarians object to, then, is not government per se but crime; what we object to is unjust or criminal property titles; what we are for is not 'private' property per se but just, innocent, non-criminal private property".
Similarly, Karl Hess wrote that "libertarianism wants to advance principles of property but that it in no way wishes to defend, willy nilly, all property which now is called private ... Much of that property is stolen. Much is of dubious title. All of it is deeply intertwined with an immoral, coercive state system".
Anarchists view capitalism as an inherently authoritarian and hierarchical system and seek the abolition of private property. There is disagreement between anarchists and anarcho-capitalists, as the former generally reject anarcho-capitalism as a form of anarchism and consider anarcho-capitalism a contradiction in terms, while the latter hold that the abolition of private property would require expropriation, which is "counterproductive to order" and would itself require a state.
Common property
As opposed to anarchists, most anarcho-capitalists reject the commons. However, some of them propose that non-state public or community property can also exist in an anarcho-capitalist society. For anarcho-capitalists, what is important is that property is "acquired" and transferred without help or hindrance from what they call the "compulsory state". Deontological anarcho-capitalists believe that the only just and most economically beneficial way to acquire property is through voluntary trade, gift, or labor-based original appropriation, rather than through aggression or fraud.
Anarcho-capitalists state that there could be cases where common property may develop in a Lockean natural-rights framework. They give the example of a number of private businesses that may arise in an area, each owning the land and buildings it uses, with the paths between them worn clear through customer and commercial movement. These paths may become valuable to the community, but ownership cannot be attributed to any single person and original appropriation does not apply because many contributed the labor necessary to create them. In order to prevent such property from falling prey to the "tragedy of the commons", anarcho-capitalists suggest transitioning from common to private property, wherein an individual would make a homesteading claim based on disuse, acquire title by the assent of community consensus, form a corporation with other involved parties, or find some other means.
American economist Randall G. Holcombe sees challenges stemming from the idea of common property under anarcho-capitalism, such as whether an individual might claim fishing rights in the area of a major shipping lane and thereby forbid passage through it. In contrast, Hoppe's work on anarcho-capitalist theory is based on the assumption that all property is privately held, "including all streets, rivers, airports, and harbors" which forms the foundation of his views on immigration.
Intellectual property
Most anarcho-capitalists strongly oppose intellectual property (i.e., trademarks, patents, copyrights). Intellectual property is typically opposed because ideas are seen as lacking scarcity: one person implementing an idea does not prevent another from implementing the same idea. The arbitrariness of intellectual property is also commonly criticized. Stephan Kinsella argues that ownership only relates to tangible assets.
Contractual society
The society envisioned by anarcho-capitalists has been labelled by them a "contractual society", which Rothbard described as "a society based purely on voluntary action, entirely unhampered by violence or threats of violence". The system relies on contracts between individuals as the legal framework, which would be enforced by private police and security forces as well as by private arbitration.
Rothbard argues that limited liability for corporations could also exist through contract, arguing that "[c]orporations are not at all monopolistic privileges; they are free associations of individuals pooling their capital. On the purely free market, those men would simply announce to their creditors that their liability is limited to the capital specifically invested in the corporation".
There are limits to the right to contract under some interpretations of anarcho-capitalism. Rothbard believes that the right to contract is based in inalienable rights and that because of this any contract that implicitly violates those rights can be voided at will, preventing a person from permanently selling himself or herself into unindentured slavery. That restriction aside, the right to contract under an anarcho-capitalist order would be quite broad. For example, Rothbard went as far as to justify "stork markets", arguing that a market in guardianship rights would facilitate the transfer of guardianship from abusive or neglectful parents to those more interested in or suited to raising children. Other anarcho-capitalists have also suggested the legalization of organ markets, as in Iran's renal market. Other interpretations conclude that banning such contracts would in itself be an unacceptably invasive interference in the right to contract.
Included in the right of contract is "the right to contract oneself out for employment by others". While anarchists criticize wage labour describing it as wage slavery, anarcho-capitalists view it as a consensual contract. Some anarcho-capitalists prefer to see self-employment prevail over wage labor. David D. Friedman has expressed a preference for a society where "almost everyone is self-employed" and "instead of corporations there are large groups of entrepreneurs related by trade, not authority. Each sells not his time, but what his time produces".
Law and order and the use of violence
Different anarcho-capitalists propose different forms of anarcho-capitalism and one area of disagreement is in the area of law. In The Market for Liberty, Morris and Linda Tannehill object to any statutory law whatsoever. They argue that all one has to do is ask if one is aggressing against another in order to decide if an act is right or wrong. However, while also supporting a natural prohibition on force and fraud, Rothbard supports the establishment of a mutually agreed-upon centralized libertarian legal code which private courts would pledge to follow, as he presumes a high degree of convergence amongst individuals about what constitutes natural justice.
Unlike both the Tannehills and Rothbard who see an ideological commonality of ethics and morality as a requirement, David D. Friedman proposes that "the systems of law will be produced for profit on the open market, just as books and bras are produced today. There could be competition among different brands of law, just as there is competition among different brands of cars". Friedman says whether this would lead to a libertarian society "remains to be proven". He says it is a possibility that very un-libertarian laws may result, such as laws against drugs, but he thinks this would be rare. He reasons that "if the value of a law to its supporters is less than its cost to its victims, that law ... will not survive in an anarcho-capitalist society".
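Friedman's survival condition can be restated as a simple inequality (a sketch; the notation is ours, not Friedman's): a legal rule persists in the market for law only if

$$V_{\text{supporters}} > C_{\text{victims}}$$

the idea being that when rights-enforcement agencies bargain over which rules to adopt, the side that values the outcome more can outbid the other.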
Anarcho-capitalists only accept the collective defense of individual liberty (i.e. courts, military, or police forces) insofar as such groups are formed and paid for on an explicitly voluntary basis. However, their complaint is not just that the state's defensive services are funded by taxation, but that the state assumes it is the only legitimate practitioner of physical force; that is, they believe it forcibly prevents the private sector from providing comprehensive security, such as police, judicial and prison systems, to protect individuals from aggressors. Anarcho-capitalists believe that there is nothing morally superior about the state which would grant it, but not private individuals, a right to use physical force to restrain aggressors. If competition in security provision were allowed to exist, prices would also be lower and services would be better, according to anarcho-capitalists. According to Molinari: "Under a regime of liberty, the natural organization of the security industry would not be different from that of other industries". Proponents believe that private systems of justice and defense already exist, naturally forming where the market is allowed to "compensate for the failure of the state", namely private arbitration, security guards, neighborhood watch groups and so on. These private courts and police are sometimes referred to generically as private defense agencies. The defense of those unable to pay for such protection might be financed by charitable organizations relying on voluntary donation rather than by state institutions relying on taxation, or by cooperative self-help by groups of individuals. Edward Stringham argues that private adjudication of disputes could enable the market to internalize externalities and provide services that customers desire.
Rothbard stated that the American Revolutionary War and the American Civil War were the only two just wars in American military history. Some anarcho-capitalists such as Rothbard feel that violent revolution is counter-productive and prefer voluntary forms of economic secession to the extent possible. Retributive justice is often a component of the contracts imagined for an anarcho-capitalist society. According to Matthew O'Keeffe, some anarcho-capitalists believe prisons or indentured servitude would be justifiable institutions to deal with those who violate anarcho-capitalist property relations, while others believe exile or forced restitution are sufficient. Rothbard stressed the importance of restitution as the primary focus of a libertarian legal order and advocated corporal punishment for petty vandals and the death penalty for murderers.
American economist Bruce L. Benson argues that legal codes may impose punitive damages for intentional torts in the interest of deterring crime. Benson gives the example of a thief who breaks into a house by picking a lock. Even if caught before taking anything, Benson argues that the thief would still owe the victim for violating the sanctity of his property rights. Benson opines that despite the lack of objectively measurable losses in such cases, "standardized rules that are generally perceived to be fair by members of the community would, in all likelihood, be established through precedent, allowing judgments to specify payments that are reasonably appropriate for most criminal offenses".
Morris and Linda Tannehill raise a similar example, saying that a bank robber who had an attack of conscience and returned the money would still owe reparations for endangering the employees' and customers' lives and safety, in addition to the costs of the defense agency answering the teller's call for help. However, they believe that the robber's loss of reputation would be even more damaging. They suggest that specialized companies would list aggressors so that anyone wishing to do business with a man could first check his record, provided they trust the veracity of the companies' records. They further theorise that the bank robber would find insurance companies listing him as a very poor risk and other firms would be reluctant to enter into contracts with him.
Fraud and breach of contract
There is a debate among anarcho-capitalists over whether to codify the concepts and standards of 'fraud' and 'breach of contract'.
For example, Mark D. Friedman has argued that most right-libertarian theories on this topic are unconvincing, which has prompted attempts to solve the problem in other ways.
Influences
Murray Rothbard listed several ideologies that, in his interpretation, influenced anarcho-capitalism: anarchism, and more precisely individualist anarchism; classical liberalism; and the Austrian School of economic thought. Scholars additionally associate anarcho-capitalism with neo-classical liberalism, radical neoliberalism and right-libertarianism.
Anarchism
In both its social and individualist forms, anarchism is usually considered an anti-capitalist and radical left-wing or far-left movement that promotes libertarian socialist economic theories such as collectivism, communism, individualism, mutualism and syndicalism. Because anarchism is usually described alongside libertarian Marxism as the libertarian wing of the socialist movement and as having a historical association with anti-capitalism and socialism, anarchists believe that capitalism is incompatible with social and economic equality and therefore do not recognize anarcho-capitalism as an anarchist school of thought. In particular, anarchists argue that capitalist transactions are not voluntary and that maintaining the class structure of a capitalist society requires coercion which is incompatible with an anarchist society. The usage of libertarian is also in dispute. While both anarchists and anarcho-capitalists have used it, libertarian was synonymous with anarchist until the mid-20th century, when anarcho-capitalist theory developed.
Anarcho-capitalists are distinguished from the dominant anarchist tradition by their relation to property and capital. While both anarchism and anarcho-capitalism share general antipathy towards government authority, anarcho-capitalism favors free-market capitalism. Anarchists, including egoists such as Max Stirner, have supported the protection of an individual's freedom from powers of both government and private property owners. In contrast, while condemning governmental encroachment on personal liberties, anarcho-capitalists support freedoms based on private property rights. Anarcho-capitalist theorist Murray Rothbard argued that protesters should rent a street for protest from its owners. The abolition of public amenities is a common theme in some anarcho-capitalist writings.
As anarcho-capitalism puts laissez-faire economics before economic equality, it is commonly viewed as incompatible with the anti-capitalist and egalitarian tradition of anarchism. Although anarcho-capitalist theory implies the abolition of the state in favour of a fully laissez-faire economy, it lies outside the tradition of anarchism. While using the language of anarchism, anarcho-capitalism shares only anarchism's antipathy towards the state, not anarchism's antipathy towards hierarchy, which theorists expect would arise from anarcho-capitalist economic power relations. It follows a different paradigm from anarchism and has a fundamentally different approach and goals. In spite of the anarcho- in its title, anarcho-capitalism is more closely affiliated with capitalism, right-libertarianism, and liberalism than with anarchism. Some within this laissez-faire tradition reject the designation of anarcho-capitalism, believing that capitalism may either refer to the laissez-faire market they support or the government-regulated system that they oppose.
Rothbard argued that anarcho-capitalism is the only true form of anarchism – the only form of anarchism that could possibly exist in reality – as he maintained that any other form presupposes authoritarian enforcement of a political ideology such as "redistribution of private property", which he attributed to anarchism. According to this argument, the capitalist free market is "the natural situation" that would result from people being free from state authority and entails the establishment of all voluntary associations in society such as cooperatives, non-profit organizations, businesses and so on. Moreover, anarcho-capitalists, as well as classical liberal minarchists, argue that the application of anarchist ideals as advocated by what they term "left-wing anarchists" would require an authoritarian body of some sort to impose it. Based on their understanding and interpretation of anarchism, in order to forcefully prevent people from accumulating capital, which they believe is a goal of anarchists, there would necessarily be a redistributive organization of some sort which would have the authority to in essence exact a tax and re-allocate the resulting resources to a larger group of people. They conclude that this theoretical body would inherently have political power and would be nothing short of a state. The difference between such an arrangement and an anarcho-capitalist system is what anarcho-capitalists see as the voluntary nature of organization within anarcho-capitalism contrasted with a "centralized ideology" and a "paired enforcement mechanism" which they believe would be necessary under what they describe as a "coercively" egalitarian-anarchist system.
Rothbard also argued that the capitalist system of today is not properly anarchistic because it often colludes with the state. According to Rothbard, "what Marx and later writers have done is to lump together two extremely different and even contradictory concepts and actions under the same portmanteau term. These two contradictory concepts are what I would call 'free-market capitalism' on the one hand, and 'state capitalism' on the other". "The difference between free-market capitalism and state capitalism", writes Rothbard, "is precisely the difference between, on the one hand, peaceful, voluntary exchange, and on the other, violent expropriation". He continues: "State capitalism inevitably creates all sorts of problems which become insoluble".
Traditional anarchists reject the notion of capitalism, hierarchies and private property. Albert Meltzer argued that anarcho-capitalism simply cannot be anarchism because capitalism and the state are inextricably interlinked and because capitalism exhibits domineering hierarchical structures such as that between an employer and an employee. Anna Morgenstern approaches this topic from the opposite perspective, arguing that anarcho-capitalists are not really capitalists because "mass concentration of capital is impossible" without the state. According to Jeremy Jennings, "[i]t is hard not to conclude that these ideas," referring to anarcho-capitalism, have "roots deep in classical liberalism" and "are described as anarchist only on the basis of a misunderstanding of what anarchism is." For Jennings, "anarchism does not stand for the untrammelled freedom of the individual (as the 'anarcho-capitalists' appear to believe) but, as we have already seen, for the extension of individuality and community." Similarly, Barbara Goodwin, Emeritus Professor of Politics at the University of East Anglia, Norwich, argues that anarcho-capitalism's "true place is in the group of right-wing libertarians", not in anarchism.
Some right-libertarian scholars like Michael Huemer, who identify with the ideology, describe anarcho-capitalism as a "variety of anarchism". British author Andrew Heywood also believes that "individualist anarchism overlaps with libertarianism and is usually linked to a strong belief in the market as a self-regulating mechanism, most obviously manifest in the form of anarcho-capitalism". Frank H. Brooks, author of The Individualist Anarchists: An Anthology of Liberty (1881–1908), believes that "anarchism has always included a significant strain of radical individualism, from the hyperrationalism of Godwin, to the egoism of Stirner, to the libertarians and anarcho-capitalists of today".
While both anarchism and anarcho-capitalism are in opposition to the state, they nevertheless interpret state-rejection differently. Austrian School economist David Prychitko, in the context of anarcho-capitalism, says that "while society without a state is necessary for full-fledged anarchy, it is nevertheless insufficient". According to Ruth Kinna, anarcho-capitalists are anti-statists who draw more on right-wing liberal theory and the Austrian School than on anarchist traditions. Kinna writes that "[i]n order to highlight the clear distinction between the two positions", anarchists describe anarcho-capitalists as "propertarians". Anarcho-capitalism is usually seen as part of the New Right.
Some anarcho-capitalists understand anarchism to mean something other than "opposition to hierarchy" and therefore consider the two traditions to be philosophically distinct. The anarchist critique that anarcho-capitalist societies would necessarily contain hierarchies is therefore not concerning to these anarcho-capitalists. Additionally, Rothbard distinguishes between "government" and "governance"; thus, proponents of anarcho-capitalism think the philosophy's common name is indeed consistent, as it promotes private governance but is vehemently anti-government.
Classical liberalism
Historian and libertarian Ralph Raico argued that what liberal philosophers "had come up with was a form of individualist anarchism, or, as it would be called today, anarcho-capitalism or market anarchism". He also said that Gustave de Molinari was proposing a doctrine of the private production of security, a position which was later taken up by Murray Rothbard. Some anarcho-capitalists consider Molinari to be the first proponent of anarcho-capitalism. In the preface to the 1977 English translation, Murray Rothbard called The Production of Security the "first presentation anywhere in human history of what is now called anarcho-capitalism", although admitting that "Molinari did not use the terminology, and probably would have balked at the name". Hans-Hermann Hoppe said that "the 1849 article 'The Production of Security' is probably the single most important contribution to the modern theory of anarcho-capitalism". According to Hoppe, among the 19th-century precursors of anarcho-capitalism were the philosopher Herbert Spencer, the classical liberal Auberon Herbert and the liberal socialist Franz Oppenheimer.
Ruth Kinna credits Murray Rothbard with coining the term anarcho-capitalism, which, Kinna proposes, describes "a commitment to unregulated private property and laissez-faire economics, prioritizing the liberty-rights of individuals, unfettered by government regulation, to accumulate, consume and determine the patterns of their lives as they see fit". According to Kinna, anarcho-capitalists "will sometimes label themselves market anarchists because they recognize the negative connotations of 'capitalism'. But the literature of anarcho-capitalism draws on classical liberal theory, particularly the Austrian School – Friedrich von Hayek and Ludwig von Mises – rather than recognizable anarchist traditions. Ayn Rand's laissez-faire, anti-government, corporate philosophy – Objectivism – is sometimes associated with anarcho-capitalism". Other scholars similarly associate anarcho-capitalism with anti-state classical liberalism, neo-classical liberalism, radical neoliberalism and right-libertarianism.
Paul Dragos Aligica writes that there is a "foundational difference between the classical liberal and the anarcho-capitalist positions". Classical liberalism, while accepting critical arguments against collectivism, acknowledges a certain level of public ownership and collective governance as necessary to provide practical solutions to political problems. In contrast, anarcho-capitalism, according to Aligica, denies any requirement for any form of public administration and allows no meaningful role for the public sphere, which is seen as sub-optimal and illegitimate.
Individualist anarchism
Murray Rothbard, a student of Ludwig von Mises, stated that he was influenced by the work of the 19th-century American individualist anarchists. In the winter of 1949, Rothbard decided to reject minimal state laissez-faire and embrace his interpretation of individualist anarchism. In 1965, Rothbard wrote that "Lysander Spooner and Benjamin R. Tucker were unsurpassed as political philosophers and nothing is more needed today than a revival and development of the largely forgotten legacy they left to political philosophy". However, Rothbard thought that they had a faulty understanding of economics as the 19th-century individualist anarchists had a labor theory of value as influenced by the classical economists, while Rothbard was a student of Austrian School economics which does not agree with the labor theory of value. Rothbard sought to meld 19th-century American individualist anarchists' advocacy of economic individualism and free markets with the principles of Austrian School economics, arguing that "[t]here is, in the body of thought known as 'Austrian economics', a scientific explanation of the workings of the free market (and of the consequences of government intervention in that market) which individualist anarchists could easily incorporate into their political and social Weltanschauung". Rothbard held that the economic consequences of the political system they advocate would not result in an economy with people being paid in proportion to labor amounts, nor would profit and interest disappear as they expected. Tucker thought that unregulated banking and money issuance would cause increases in the money supply so that interest rates would drop to zero or near to it. Peter Marshall states that "anarcho-capitalism overlooks the egalitarian implications of traditional individualist anarchists like Spooner and Tucker". Stephanie Silberstein states that "While Spooner was no free-market capitalist, nor an anarcho-capitalist, he was not as opposed to capitalism as most socialists were."
In "The Spooner-Tucker Doctrine: An Economist's View", Rothbard explained his disagreements. Rothbard disagreed with Tucker that it would cause the money supply to increase because he believed that the money supply in a free market would be self-regulating. If it were not, then Rothbard argued inflation would occur so it is not necessarily desirable to increase the money supply in the first place. Rothbard claimed that Tucker was wrong to think that interest would disappear regardless because he believed people, in general, do not wish to lend their money to others without compensation, so there is no reason why this would change just because banking was unregulated. Tucker held a labor theory of value and thought that in a free market people would be paid in proportion to how much labor they exerted and that exploitation or usury was taking place if they were not. As Tucker explained in State Socialism and Anarchism, his theory was that unregulated banking would cause more money to be available and that this would allow the proliferation of new businesses which would, in turn, raise demand for labor. This led Tucker to believe that the labor theory of value would be vindicated and equal amounts of labor would receive equal pay. As an Austrian School economist, Rothbard did not agree with the labor theory and believed that prices of goods and services are proportional to marginal utility rather than to labor amounts in the free market. As opposed to Tucker he did not think that there was anything exploitative about people receiving an income according to how much "buyers of their services value their labor" or what that labor produces.
Without the labor theory of value, some argue that 19th-century individualist anarchists approximate the modern movement of anarcho-capitalism, although this has been contested or rejected. As economic theory changed, the popularity of the labor theory of classical economics was superseded by the subjective theory of value of neoclassical economics, and Rothbard combined Mises' Austrian School of economics with the absolutist views of human rights and rejection of the state he had absorbed from studying the individualist American anarchists of the 19th century such as Tucker and Spooner. In the mid-1950s, Rothbard wrote an unpublished article named "Are Libertarians 'Anarchists'?" under the pseudonym "Aubrey Herbert", concerned with differentiating himself from the communist and socialistic economic views of anarchists, including the individualist anarchists of the 19th century, concluding that "we are not anarchists and that those who call us anarchists are not on firm etymological ground and are being completely unhistorical. On the other hand, it is clear that we are not archists either: we do not believe in establishing a tyrannical central authority that will coerce the noninvasive as well as the invasive. Perhaps, then, we could call ourselves by a new name: nonarchist." Joe Peacott, an American individualist anarchist in the mutualist tradition, criticizes anarcho-capitalists for trying to hegemonize the individualist anarchism label and make it appear as if all individualist anarchists are in favor of capitalism. Peacott states that "individualists, both past and present, agree with the communist anarchists that present-day capitalism is based on economic coercion, not on voluntary contract. Rent and interest are the mainstays of modern capitalism and are protected and enforced by the state. Without these two unjust institutions, capitalism could not exist".
Anarchist activists and scholars do not consider anarcho-capitalism a part of the anarchist movement, arguing that anarchism has historically been an anti-capitalist movement, and see it as incompatible with capitalist forms. Although some regard anarcho-capitalism as a form of individualist anarchism, many others disagree or contest the existence of an individualist–socialist divide. Acknowledging that anarchists mostly identified with socialism, Rothbard wrote that individualist anarchism is different from anarcho-capitalism and other capitalist theories because the individualist anarchists retained the labor theory of value and socialist doctrines. Similarly, many writers deny that anarcho-capitalism is a form of anarchism or that capitalism is compatible with anarchism.
The Palgrave Handbook of Anarchism writes that "[a]s Benjamin Franks rightly points out, individualisms that defend or reinforce hierarchical forms such as the economic-power relations of anarcho-capitalism are incompatible with practices of social anarchism based on developing immanent goods which contest such inequalities". Laurence Davis cautiously asks: "[I]s anarcho-capitalism really a form of anarchism or instead a wholly different ideological paradigm whose adherents have attempted to expropriate the language of anarchism for their own anti-anarchist ends?" Davis cites Iain McKay (whom Franks cites as an authority to support his contention that "academic analysis has followed activist currents in rejecting the view that anarcho-capitalism has anything to do with social anarchism") as arguing "quite emphatically on the very pages cited by Franks that anarcho-capitalism is by no means a type of anarchism". McKay writes that "[i]t is important to stress that anarchist opposition to the so-called capitalist 'anarchists' does not reflect some kind of debate within anarchism, as many of these types like to pretend, but a debate between anarchism and its old enemy capitalism. ... Equally, given that anarchists and 'anarcho'-capitalists have fundamentally different analyses and goals it is hardly 'sectarian' to point this out".
Davis writes that "Franks asserts without supporting evidence that most major forms of individualist anarchism have been largely anarcho-capitalist in content, and concludes from this premise that most forms of individualism are incompatible with anarchism". Davis argues that "the conclusion is unsustainable because the premise is false, depending as it does for any validity it might have on the further assumption that anarcho-capitalism is indeed a form of anarchism. If we reject this view, then we must also reject the individual anarchist versus the communal anarchist 'chasm' style of argument that follows from it". Davis maintains that "the ideological core of anarchism is the belief that society can and should be organised without hierarchy and domination. Historically, anarchists have struggled against a wide range of regimes of domination, from capitalism, the state system, patriarchy, heterosexism, and the domination of nature to colonialism, the war system, slavery, fascism, white supremacy, and certain forms of organised religion". According to Davis, "[w]hile these visions range from the predominantly individualistic to the predominantly communitarian, features common to virtually all include an emphasis on self-management and self-regulatory methods of organisation, voluntary association, and a decentralised society based on the principle of free association, in which people will manage and govern themselves". Finally, Davis includes a footnote stating that "[i]ndividualist anarchism may plausibly be regarded as a form of both socialism and anarchism. Whether the individualist anarchists were consistent anarchists (and socialists) is another question entirely. ... McKay comments as follows: 'any individualist anarchism which supports wage labour is inconsistent anarchism. It can easily be made consistent anarchism by applying its own principles consistently. In contrast 'anarcho'-capitalism rejects so many of the basic, underlying, principles of anarchism ... that it cannot be made consistent with the ideals of anarchism.'"
Historical precedents
Several anarcho-capitalists and right-libertarians have discussed historical precedents of what they believe were examples of anarcho-capitalism.
Free cities of medieval Europe
Economist and libertarian scholar Bryan Caplan considers the free cities of medieval Europe as examples of "anarchist" or "nearly anarchistic" societies, further arguing:
Medieval Iceland
According to the libertarian theorist David D. Friedman, "[m]edieval Icelandic institutions have several peculiar and interesting characteristics; they might almost have been invented by a mad economist to test the lengths to which market systems could supplant government in its most fundamental functions". While not directly labeling it anarcho-capitalist, Friedman argues that the legal system of the Icelandic Commonwealth comes close to being a real-world anarcho-capitalist legal system. Although noting that there was a single legal system, Friedman argues that enforcement of the law was entirely private and highly capitalist, providing some evidence of how such a society would function. Friedman further wrote that "[e]ven where the Icelandic legal system recognized an essentially 'public' offense, it dealt with it by giving some individual (in some cases chosen by lot from those affected) the right to pursue the case and collect the resulting fine, thus fitting it into an essentially private system".
Friedman and Bruce L. Benson argued that the Icelandic Commonwealth saw significant economic and social progress in the absence of systems of criminal law, an executive, or bureaucracy. The commonwealth was led by chieftains, whose position could be bought and sold like private property, and membership under a chieftain was completely voluntary.
American Old West
According to Terry L. Anderson and P. J. Hill, the Old West in the United States in the period of 1830 to 1900 was similar to anarcho-capitalism in that "private agencies provided the necessary basis for an orderly society in which property was protected and conflicts were resolved" and that the common popular perception that the Old West was chaotic with little respect for property rights is incorrect. Since squatters had no claim to western lands under federal law, extra-legal organizations formed to fill the void. Benson explains:
According to Anderson, "[d]efining anarcho-capitalist to mean minimal government with property rights developed from the bottom up, the western frontier was anarcho-capitalistic. People on the frontier invented institutions that fit the resource constraints they faced".
Gaelic Ireland
In his work For a New Liberty, Murray Rothbard claimed ancient Gaelic Ireland as an example of a nearly anarcho-capitalist society. In his depiction, which cites the work of Professor Joseph Peden, the basic political unit of ancient Ireland was the tuath, portrayed as "a body of persons voluntarily united for socially beneficial purposes" whose territorial claim was limited to "the sum total of the landed properties of its members". Civil disputes were settled by private arbiters called "brehons", and the compensation to be paid to the wronged party was insured through voluntary surety relationships. Commenting on the "kings" of tuaths, Rothbard stated:
Law merchant, admiralty law, and early common law
Some libertarians have cited law merchant, admiralty law and early common law as examples of anarcho-capitalism.
In his work Power and Market, Rothbard stated:
Somalia from 1991 to 2012
Economist Alex Tabarrok argued that Somalia in its stateless period provided a "unique test of the theory of anarchy", in some respects close to that espoused by the anarcho-capitalists David D. Friedman and Murray Rothbard. Nonetheless, both anarchists and some anarcho-capitalists argue that Somalia was not an anarchist society.
Analysis and criticism
State, justice and defense
Anarchists such as Brian Morris argue that anarcho-capitalism does not in fact get rid of the state. He says that anarcho-capitalists "simply replaced the state with private security firms, and can hardly be described as anarchists as the term is normally understood". In "Libertarianism: Bogus Anarchy", anarchist Peter Sabatini notes:
Similarly, Bob Black argues that an anarcho-capitalist wants to "abolish the state to his own satisfaction by calling it something else". He states that they do not denounce what the state does, they just "object to who's doing it".
Paul Birch argues that legal disputes involving several jurisdictions and different legal systems will be too complex and costly. He therefore argues that anarcho-capitalism is inherently unstable, and would evolve, entirely through the operation of free market forces, into either a single dominant private court with a natural monopoly of justice over the territory (a de facto state), a society of multiple city states, each with a territorial monopoly, or a 'pure anarchy' that would rapidly descend into chaos.
Randall G. Holcombe argues that anarcho-capitalism turns justice into a commodity, as private defense and court firms would favour those who pay more for their services. He argues that defense agencies could form cartels and oppress people without fear of competition. The anarchist writer Albert Meltzer argued that since anarcho-capitalism promotes the idea of private armies, it actually supports a "limited State". He contends that it "is only possible to conceive of Anarchism which is free, communistic and offering no economic necessity for repression of countering it".
Libertarian Robert Nozick argues that a competitive legal system would evolve toward a monopoly government, even without violating individuals' rights in the process. In Anarchy, State, and Utopia, Nozick defends minarchism and argues that an anarcho-capitalist society would inevitably transform into a minarchist state through the eventual emergence of a monopolistic private defense and judicial agency that no longer faces competition. He argues that anarcho-capitalism results in an unstable system that would not endure in the real world. While anarcho-capitalists such as Roy Childs and Murray Rothbard have rejected Nozick's arguments, with Rothbard arguing that the process described by Nozick, with the dominant protection agency outlawing its competitors, in fact violates its own clients' rights, John Jefferson endorses Nozick's argument and states that such events would best operate in laissez-faire. Robert Ellickson presented a Hayekian case against anarcho-capitalism, calling it a "pipe-dream" and stating that anarcho-capitalists "by imagining a stable system of competing private associations, ignore both the inevitability of territorial monopolists in governance, and the importance of institutions to constrain those monopolists' abuses".
Some libertarians argue that anarcho-capitalism would result in different standards of justice and law due to relying too much on the market. Friedman responded to this criticism by arguing that it assumes the state is controlled by a majority group that has similar legal ideals. If the populace is diverse, different legal standards would therefore be appropriate.
Rights and freedom
Negative and positive rights are rights that oblige either action (positive rights) or inaction (negative rights). Anarcho-capitalists believe that negative rights should be recognized as legitimate, but positive rights should be rejected as an intrusion. Some critics reject the distinction between positive and negative rights. Peter Marshall also states that the anarcho-capitalist definition of freedom is entirely negative and that it cannot guarantee the positive freedom of individual autonomy and independence.
About anarcho-capitalism, anarcho-syndicalist and anti-capitalist intellectual Noam Chomsky says:
Economics and property
Social anarchists argue that anarcho-capitalism allows individuals to accumulate significant power through free markets and private property. Friedman responded by arguing that the Icelandic Commonwealth was able to prevent the wealthy from abusing the poor by requiring individuals who engaged in acts of violence to compensate their victims financially.
Anarchists argue that certain capitalist transactions are not voluntary and that maintaining the class structure of a capitalist society requires coercion which violates anarchist principles. Anthropologist David Graeber noted his skepticism about anarcho-capitalism along the same lines, arguing:
Some critics argue that the anarcho-capitalist concept of voluntary choice ignores constraints due to both human and non-human factors such as the need for food and shelter as well as active restriction of both used and unused resources by those enforcing property claims. If a person requires employment in order to feed and house himself, the employer-employee relationship could be considered involuntary. Another criticism is that employment is involuntary because the economic system that makes it necessary for some individuals to serve others is supported by the enforcement of coercive private property relations. Some philosophies view any ownership claims on land and natural resources as immoral and illegitimate. Objectivist philosopher Harry Binswanger criticizes anarcho-capitalism by arguing that "capitalism requires government", questioning who or what would enforce treaties and contracts.
Some right-libertarian critics of anarcho-capitalism who support the full privatization of capital, such as geolibertarians, argue that land and the raw materials of nature remain a distinct factor of production and cannot be justly converted to private property because they are not products of human labor. Some socialists, including market anarchists and mutualists, adamantly oppose absentee ownership. Anarcho-capitalists have strong abandonment criteria, namely that one maintains ownership until one agrees to trade or gift it. Anti-state critics of this view posit comparatively weak abandonment criteria, arguing that one loses ownership when one stops personally occupying and using the property, and that the idea of perpetually binding original appropriation is anathema to traditional schools of anarchism.
Propertarianism
Critics charge that the propertarian perspective prevents freedom from making sense as an independent value in anarcho-capitalist theory. Matt Zwolinski has argued that Rothbard-influenced scholars can be called "propertarians" because the concept that really matters in their work is property.
Literature
The following is a partial list of notable nonfiction works discussing anarcho-capitalism.
Bruce L. Benson, The Enterprise of Law: Justice Without The State
To Serve and Protect: Privatization and Community in Criminal Justice
David D. Friedman, The Machinery of Freedom
Edward P. Stringham, Anarchy and the Law: The Political Economy of Choice
George H. Smith, "Justice Entrepreneurship in a Free Market"
Gerard Casey, Libertarian Anarchy: Against the State
Hans-Hermann Hoppe, Anarcho-Capitalism: An Annotated Bibliography
A Theory of Socialism and Capitalism
Democracy: The God That Failed
The Economics and Ethics of Private Property
Linda and Morris Tannehill, The Market for Liberty
Michael Huemer, The Problem of Political Authority
Murray Rothbard, founder of anarcho-capitalism:
For a New Liberty
Man, Economy, and State
Power and Market
The Ethics of Liberty
|
;Anarcho-capitalism;Austrian School;Capitalist systems;Classical liberalism;Economic ideologies;Ideologies of capitalism;Libertarianism by form;Murray Rothbard;Political ideologies;Right-libertarianism;Syncretic political movements
|
https://en.wikipedia.org/wiki/Analysis
|
Analysis (plural: analyses) is the process of breaking a complex topic or substance into smaller parts in order to gain a better understanding of it. The technique has been applied in the study of mathematics and logic since before Aristotle (384–322 BC), though analysis as a formal concept is a relatively recent development.
The word comes from the Ancient Greek ἀνάλυσις (analysis, "a breaking-up" or "an untying", from ana- "up, throughout" and lysis "a loosening"). From it also comes the word's plural, analyses.
As a formal concept, the method has variously been ascribed to René Descartes (Discourse on the Method) and Galileo Galilei. It has also been ascribed to Isaac Newton, in the form of a practical method of physical discovery (which he did not name).
The converse of analysis is synthesis: putting the pieces back together again in a new or different whole.
Science and technology
Chemistry
The field of chemistry uses analysis in three ways: to identify the components of a particular chemical compound (qualitative analysis), to identify the proportions of components in a mixture (quantitative analysis), and to break down chemical processes and examine chemical reactions between elements of matter. For example, analysis of the concentration of elements is important in managing a nuclear reactor, so nuclear scientists use neutron activation analysis to develop discrete measurements within vast samples. A matrix can have a considerable effect on the way a chemical analysis is conducted and the quality of its results. Analysis can be done manually or with a device.
Types of analysis
Qualitative analysis – determines which components are present in a given sample or compound. Example: a precipitation reaction.
Quantitative analysis – determines the quantity of each individual component present in a given sample or compound. Example: finding a concentration with a UV spectrophotometer, as sketched below.
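Converting a UV-spectrophotometer absorbance reading into a concentration is an application of the Beer–Lambert law, A = εlc. A minimal sketch in Python (the absorbance and molar-absorptivity values are illustrative assumptions, not measured data):

# Beer–Lambert law: A = epsilon * l * c, so c = A / (epsilon * l)
epsilon = 5600.0      # molar absorptivity in L mol^-1 cm^-1 (assumed value)
path_length_cm = 1.0  # standard cuvette path length in cm
absorbance = 0.42     # measured absorbance (assumed value)

concentration = absorbance / (epsilon * path_length_cm)  # result in mol/L
print(f"concentration = {concentration:.2e} mol/L")      # -> 7.50e-05 mol/L

In practice, epsilon is compound- and wavelength-specific, so it is usually obtained from a calibration curve of standards of known concentration.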
Isotopes
Chemists can use isotope analysis to assist analysts with issues in anthropology, archeology, food chemistry, forensics, geology, and a host of other questions of physical science. Analysts can discern the origins of natural and man-made isotopes in the study of environmental radioactivity.
Computer science
Requirements analysis – encompasses those tasks that go into determining the needs or conditions to meet for a new or altered product, taking account of the possibly conflicting requirements of the various stakeholders, such as beneficiaries or users.
Competitive analysis (online algorithm) – shows how online algorithms perform and demonstrates the power of randomization in algorithms
Lexical analysis – the process of converting an input sequence of characters into an output sequence of symbols, or tokens (see the sketch after this list)
Object-oriented analysis and design – à la Booch
Program analysis (computer science) – the process of automatically analysing the behavior of computer programs
Semantic analysis (computer science) – a pass by a compiler that adds semantical information to the parse tree and performs certain checks
Static code analysis – the analysis of computer software that is performed without actually executing the programs built from that software
Structured systems analysis and design methodology – à la Yourdon
Syntax analysis – a process in compilers that recognizes the structure of programming languages, also known as parsing
Worst-case execution time – determines the longest time that a piece of software can take to run
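To make the compiler-related items above concrete, here is a minimal lexical-analysis sketch in Python (a hypothetical toy tokenizer, not any particular compiler's implementation), converting a character sequence into a symbol sequence:

    import re

    # Token specification for a toy arithmetic language (illustrative only).
    TOKEN_SPEC = [
        ("NUMBER", r"\d+(?:\.\d+)?"),  # integer or decimal literal
        ("IDENT",  r"[A-Za-z_]\w*"),   # identifier
        ("OP",     r"[+\-*/=()]"),     # single-character operators
        ("SKIP",   r"\s+"),            # whitespace, discarded
    ]
    MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

    def tokenize(text):
        """Yield (kind, lexeme) pairs for the input character sequence."""
        for match in MASTER.finditer(text):
            if match.lastgroup != "SKIP":
                yield (match.lastgroup, match.group())

    print(list(tokenize("rate = 42 * (x + 1.5)")))
    # [('IDENT', 'rate'), ('OP', '='), ('NUMBER', '42'), ('OP', '*'), ...]

A syntax-analysis (parsing) pass would then consume this token stream to recognize the grammatical structure of the program.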
Engineering
Analysts in the field of engineering look at requirements, structures, mechanisms, systems and dimensions. Electrical engineers analyse systems in electronics. Life cycles and system failures are broken down and studied by engineers. Engineering analysis also considers the different factors incorporated within a design.
Mathematics
Modern mathematical analysis is the study of infinite processes. It is the branch of mathematics that includes calculus. It can be applied in the study of classical concepts of mathematics, such as real numbers, complex variables, trigonometric functions, and algorithms, or of non-classical concepts like constructivism, harmonics, infinity, and vectors.
Florian Cajori explains in A History of Mathematics (1893) the difference between modern and ancient mathematical analysis, as distinct from logical analysis, as follows:
The terms synthesis and analysis are used in mathematics in a more special sense than in logic. In ancient mathematics they had a different meaning from what they now have. The oldest definition of mathematical analysis as opposed to synthesis is that given in [appended to] Euclid, XIII. 5, which in all probability was framed by Eudoxus: "Analysis is the obtaining of the thing sought by assuming it and so reasoning up to an admitted truth; synthesis is the obtaining of the thing sought by reasoning up to the inference and proof of it."
The analytic method is not conclusive, unless all operations involved in it are known to be reversible. To remove all doubt, the Greeks, as a rule, added to the analytic process a synthetic one, consisting of a reversion of all operations occurring in the analysis. Thus the aim of analysis was to aid in the discovery of synthetic proofs or solutions.
James Gow uses a similar argument as Cajori, with the following clarification, in his A Short History of Greek Mathematics (1884):
The synthetic proof proceeds by shewing that the proposed new truth involves certain admitted truths. An analytic proof begins by an assumption, upon which a synthetic reasoning is founded. The Greeks distinguished theoretic from problematic analysis. A theoretic analysis is of the following kind. To prove that A is B, assume first that A is B. If so, then, since B is C and C is D and D is E, therefore A is E. If this be a known falsity, A is not B. But if this be a known truth and all the intermediate propositions be convertible, then the reverse process, A is E, E is D, D is C, C is B, therefore A is B, constitutes a synthetic proof of the original theorem. Problematic analysis is applied in all cases where it is proposed to construct a figure which is assumed to satisfy a given condition. The problem is then converted into some theorem which is involved in the condition and which is proved synthetically, and the steps of this synthetic proof taken backwards are a synthetic solution of the problem.
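A small modern illustration of the Greek method (a hypothetical example, not drawn from Cajori or Gow): to prove \sqrt{2} + \sqrt{3} < \sqrt{10} by analysis, assume the thing sought and reason toward an admitted truth:

    \sqrt{2} + \sqrt{3} < \sqrt{10}
    \;\Longrightarrow\; 5 + 2\sqrt{6} < 10
    \;\Longrightarrow\; 2\sqrt{6} < 5
    \;\Longrightarrow\; 24 < 25.

Since 24 < 25 is admitted, and every step (squaring of positive quantities) is reversible, reading the chain backwards yields the synthetic proof of the original statement.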
Psychotherapy
Psychoanalysis – seeks to elucidate connections among unconscious components of patients' mental processes
Transactional analysis
Transactional analysis is used by therapists to try to gain a better understanding of the unconscious. It focuses on understanding and intervening in human behavior.
Signal processing
Finite element analysis – a computer simulation technique used in engineering analysis
Independent component analysis
Link quality analysis – the analysis of signal quality
Path quality analysis
Fourier analysis – the decomposition of a function or signal into oscillatory components (a minimal sketch follows this list)
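As a minimal sketch of Fourier analysis in practice (a hypothetical signal; the sampling parameters are illustrative), NumPy's FFT recovers the frequency content of a sampled waveform:

    import numpy as np

    # Sample a signal containing 5 Hz and 12 Hz components for one second.
    fs = 100                                 # sampling rate in Hz (assumed)
    t = np.arange(0, 1, 1 / fs)
    x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

    # Decompose the signal into its frequency components.
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)

    # The two dominant magnitudes sit at the 5 Hz and 12 Hz bins.
    peaks = freqs[np.argsort(np.abs(spectrum))[-2:]]
    print(np.sort(peaks))                    # [ 5. 12.]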
Statistics
In statistics, the term analysis may refer to any method used for data analysis. Among the many such methods, some are:
Analysis of variance (ANOVA) – a collection of statistical models and their associated procedures which compare means by splitting the overall observed variance into different parts
Boolean analysis – a method to find deterministic dependencies between variables in a sample, mostly used in exploratory data analysis
Cluster analysis – techniques for finding groups (called clusters), based on some measure of proximity or similarity
Factor analysis – a method to construct models describing a data set of observed variables in terms of a smaller set of unobserved variables (called factors)
Meta-analysis – combines the results of several studies that address a set of related research hypotheses
Multivariate analysis – analysis of data involving several variables, such as by factor analysis, regression analysis, or principal component analysis
Principal component analysis – transformation of a sample of correlated variables into uncorrelated variables (called principal components), mostly used in exploratory data analysis
Regression analysis – techniques for analysing the relationships between several predictive variables and one or more outcomes in the data (see the sketch after this list)
Scale analysis (statistics) – methods to analyse survey data by scoring responses on a numeric scale
Sensitivity analysis – the study of how the variation in the output of a model depends on variations in the inputs
Sequential analysis – evaluation of sampled data as it is collected, until the criterion of a stopping rule is met
Spatial analysis – the study of entities using geometric or geographic properties
Time-series analysis – methods that attempt to understand a sequence of data points spaced apart at uniform time intervals
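As a minimal sketch of one of these methods, regression analysis, via ordinary least squares (synthetic data; NumPy only, with hypothetical coefficients):

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data: y depends linearly on x, plus noise.
    x = np.linspace(0, 10, 50)
    y = 3.0 * x + 2.0 + rng.normal(0, 1, size=x.size)

    # Fit y = b0 + b1 * x by ordinary least squares.
    X = np.column_stack([np.ones_like(x), x])      # design matrix
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)

    print(f"intercept = {beta[0]:.2f}, slope = {beta[1]:.2f}")
    # The estimates should land close to the true values 2.0 and 3.0.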
Business
Financial statement analysis – the analysis of the accounts and the economic prospects of a firm
Financial analysis – refers to an assessment of the viability, stability, and profitability of a business, sub-business or project
Gap analysis – involves the comparison of actual performance with potential or desired performance of an organization
Business analysis – involves identifying the needs and determining the solutions to business problems
Price analysis – involves the breakdown of a price to a unit figure
Market analysis – the study of a market's suppliers and customers, in which price is determined by the interaction of supply and demand
Sum-of-the-parts analysis – method of valuation of a multi-divisional company
Opportunity analysis – examines customer trends within the industry, where customer demand and experience determine purchasing behavior
Economics
Agroecosystem analysis
Input–output model – when applied to a region, it is called a Regional Impact Multiplier System
Government
Intelligence
The field of intelligence employs analysts to break down and understand a wide array of questions. Intelligence agencies may use heuristics, inductive and deductive reasoning, social network analysis, dynamic network analysis, link analysis, and brainstorming to sort through problems they face. Military intelligence may explore issues through the use of game theory, Red Teaming, and wargaming. Signals intelligence applies cryptanalysis and frequency analysis to break codes and ciphers. Business intelligence applies theories of competitive intelligence analysis and competitor analysis to resolve questions in the marketplace. Law enforcement intelligence applies a number of theories in crime analysis.
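As a minimal illustration of frequency analysis in cryptanalysis (a toy Caesar-cipher breaker; the message and the assumption that 'e' is the most frequent plaintext letter are illustrative):

    from collections import Counter

    def break_caesar(ciphertext):
        """Guess a Caesar shift by assuming the most frequent letter is 'e'."""
        letters = [c for c in ciphertext.lower() if c.isalpha()]
        most_common = Counter(letters).most_common(1)[0][0]
        shift = (ord(most_common) - ord('e')) % 26
        return ''.join(
            chr((ord(c) - ord('a') - shift) % 26 + ord('a')) if c.isalpha() else c
            for c in ciphertext.lower()
        )

    # "we see the enemy fleet near the green jetty" shifted by 3:
    print(break_caesar("zh vhh wkh hqhpb iohhw qhdu wkh juhhq mhwwb"))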
Policy
Policy analysis – The use of statistical data to predict the effects of policy decisions made by governments and agencies
Policy analysis includes a systematic process to find the most efficient and effective option to address the current situation.
Qualitative analysis – The use of anecdotal evidence to predict the effects of policy decisions or, more generally, influence policy decisions
Humanities and social sciences
Linguistics
Linguistics explores individual languages and language in general. It breaks language down and analyses its component parts: theory, sounds and their meaning, utterance usage, word origins, the history of words, the meaning of words and word combinations, sentence construction, basic construction beyond the sentence level, stylistics, and conversation. It examines the above using statistics and modeling, and semantics. It analyses language in context of anthropology, biology, evolution, geography, history, neurology, psychology, and sociology. It also takes the applied approach, looking at individual language development and clinical issues.
Literature
Literary criticism is the analysis of literature. The focus can be as diverse as the analysis of Homer or Freud. While not all literary-critical methods are primarily analytical in nature, the main approach to the teaching of literature in the west since the mid-twentieth century, literary formal analysis or close reading, is. This method, rooted in the academic movement labelled The New Criticism, approaches texts – chiefly short poems such as sonnets, which by virtue of their small size and significant complexity lend themselves well to this type of analysis – as units of discourse that can be understood in themselves, without reference to biographical or historical frameworks. This method of analysis breaks up the text linguistically in a study of prosody (the formal analysis of meter) and phonic effects such as alliteration and rhyme, and cognitively in examination of the interplay of syntactic structures, figurative language, and other elements of the poem that work to produce its larger effects.
Music
Musical analysis – a process attempting to answer the question "How does this music work?"
Musical analysis is the study of how composers combine notes to compose music. Those studying music will find that musical analysis differs from composer to composer and depends on the culture and history of the music studied. An analysis of music is meant to make the music easier to understand.
Schenkerian analysis
Schenkerian analysis is a method of music analysis that focuses on producing a graphic representation of a piece's structure. It encompasses both an analytical procedure and a notational style. Simply put, it analyzes tonal music, including all chords and tones within a composition.
Philosophy
Philosophical analysis – a general term for the techniques used by philosophers
Philosophical analysis refers to the clarification of words, of how they are put together, and of the meaning they entail. It digs into the meaning of words and seeks to clarify that meaning by contrasting the various definitions. It is the study of reality, the justification of claims, and the analysis of various concepts. Branches of philosophy include logic, justification, metaphysics, values and ethics. If a question can be answered empirically, that is, by using the senses, then it is not considered philosophical. Non-philosophical questions also include events that happened in the past, or questions that science or mathematics can answer.
Analysis is the name of a prominent journal in philosophy.
Other
Aura analysis – a pseudoscientific technique in which supporters of the method claim that the body's aura, or energy field, is analysed
Bowling analysis – Analysis of the performance of cricket players
Lithic analysis – the analysis of stone tools using basic scientific techniques
Lithic analysis is most often used by archeologists to determine which types of tools were used in a given time period, based on the artifacts discovered.
Protocol analysis – a means for extracting persons' thoughts while they are performing a task
See also
Formal analysis
Metabolism in biology
Methodology
Scientific method
Synthesis (disambiguation) – list of terms related to synthesis, the converse of analysis
References
External links
|
;Abstraction;Critical thinking skills;Emergence;Empiricism;Epistemological theories;Intelligence;Mathematical modeling;Metaphysics of mind;Methodology;Ontology;Philosophy of logic;Rationalism;Reasoning;Research methods;Scientific method;Theory of mind
|
https://en.wikipedia.org/wiki/Assembly%20line
|
An assembly line, often called progressive assembly, is a manufacturing process where the unfinished product moves in a direct line from workstation to workstation, with parts added in sequence until the final product is completed. By mechanically moving parts to workstations and transferring the unfinished product from one workstation to another, a finished product can be assembled faster and with less labor than having workers carry parts to a stationary product.
Assembly lines are common methods of assembling complex items such as automobiles and other transportation equipment, household appliances and electronic goods.
Workers who carry out the work of an assembly line are called assemblers.
Concepts
Assembly lines are designed for the sequential organization of workers, tools or machines, and parts. The motion of workers is minimized to the extent possible. All parts or assemblies are handled either by conveyors or motorized vehicles such as forklifts, or gravity, with no manual trucking. Heavy lifting is done by machines such as overhead cranes or forklifts. Each worker typically performs one simple operation unless job rotation strategies are applied.
Designing assembly lines is a well-established mathematical challenge, referred to as an assembly line balancing problem. In the simple assembly line balancing problem the aim is to assign a set of tasks that need to be performed on the workpiece to a sequence of workstations. Each task requires a given task duration for completion. The assignment of tasks to stations is typically limited by two constraints: (1) a precedence graph which indicates what other tasks need to be completed before a particular task can be initiated (e.g. not putting in a screw before drilling the hole) and (2) a cycle time which restricts the sum of task processing times which can be completed at each workstation before the work-piece is moved to the next station by the conveyor belt. Major planning problems for operating assembly lines include supply chain integration, inventory control and production scheduling.
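A minimal sketch of this balancing problem (a greedy largest-duration heuristic over hypothetical task data, not an optimal solver):

    def balance_line(tasks, precedence, cycle_time):
        """Greedily assign tasks to stations, respecting precedence and cycle time."""
        assert all(d <= cycle_time for d in tasks.values())
        stations, done, remaining = [], set(), set(tasks)
        while remaining:
            station, load = [], 0.0
            while True:
                # Tasks whose prerequisites are done and which still fit this cycle.
                ready = [t for t in remaining
                         if precedence.get(t, set()) <= done
                         and load + tasks[t] <= cycle_time]
                if not ready:
                    break
                t = max(ready, key=tasks.get)   # largest-duration-first rule
                station.append(t)
                load += tasks[t]
                done.add(t)
                remaining.remove(t)
            stations.append(station)
        return stations

    # Hypothetical workpiece: drilling precedes screwing, etc.; 10-minute cycle.
    tasks = {"drill": 6, "screw": 4, "paint": 5, "inspect": 3}
    precedence = {"screw": {"drill"}, "inspect": {"screw", "paint"}}
    print(balance_line(tasks, precedence, cycle_time=10))
    # [['drill', 'screw'], ['paint', 'inspect']]

Exact optimization of this assignment is NP-hard in general, which is why heuristics of this kind are common in practice.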
Simple example
Consider the assembly of a car: assume that certain steps in the assembly line are to install the engine, install the hood, and install the wheels (in that order, with arbitrary interstitial steps); only one of these steps can be done at a time. In traditional production, only one car would be assembled at a time. If engine installation takes 20 minutes, hood installation takes five minutes, and wheels installation takes 10 minutes, then a car can be produced every 35 minutes.
In an assembly line, car assembly is split between several stations, all working simultaneously. When a station is finished with a car, it passes it on to the next. By having three stations, three cars can be operated on at the same time, each at a different stage of assembly.
After finishing its work on the first car, the engine installation crew can begin working on the second car. While the engine installation crew works on the second car, the first car can be moved to the hood station and fitted with a hood, then to the wheels station and be fitted with wheels. After the engine has been installed on the second car, the second car moves to the hood assembly. At the same time, the third car moves to the engine assembly. When the third car's engine has been mounted, it then can be moved to the hood station; meanwhile, subsequent cars (if any) can be moved to the engine installation station.
Assuming no loss of time when moving a car from one station to another, the longest stage on the assembly line determines the throughput (20 minutes, for the engine installation), so a car can be produced every 20 minutes once the first car, which takes 35 minutes, has been produced.
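A short simulation of this example (a hypothetical helper; station durations in minutes) reproduces the 35-minute first car and the 20-minute cadence set by the bottleneck station:

    def completion_times(durations, n_cars):
        """Finish times on a pipelined line: a station starts a car once the
        station is free and the previous station has released that car."""
        finish = [[0.0] * len(durations) for _ in range(n_cars)]
        for car in range(n_cars):
            for s, d in enumerate(durations):
                released = finish[car][s - 1] if s > 0 else 0.0
                station_free = finish[car - 1][s] if car > 0 else 0.0
                finish[car][s] = max(released, station_free) + d
        return [row[-1] for row in finish]

    print(completion_times([20, 5, 10], 4))   # [35.0, 55.0, 75.0, 95.0]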
History
Before the Industrial Revolution, most manufactured products were made individually by hand. A single craftsman or team of craftsmen would create each part of a product. They would use their skills and tools such as files and knives to create the individual parts. They would then assemble them into the final product, making cut-and-try changes in the parts until they fit and could work together (craft production).
Division of labor was practiced by Ancient Greeks, Chinese and other ancient civilizations. In Ancient Greece it was discussed by Plato and Xenophon. Adam Smith discussed the division of labour in the manufacture of pins at length in his book The Wealth of Nations (published in 1776).
The Venetian Arsenal, dating to about 1104, operated similarly to a production line. Ships moved down a canal and were fitted by the various shops they passed. At the peak of its efficiency in the early 16th century, the Arsenal employed some 16,000 people who could apparently produce nearly one ship each day and could fit out, arm, and provision a newly built galley with standardized parts on an assembly-line basis. Although the Arsenal lasted until the early Industrial Revolution, production line methods did not become common even then.
Industrial Revolution
The Industrial Revolution led to a proliferation of manufacturing and invention. Many industries, notably textiles, firearms, clocks and watches, horse-drawn vehicles, railway locomotives, sewing machines, and bicycles, saw expeditious improvement in materials handling, machining, and assembly during the 19th century, although modern concepts such as industrial engineering and logistics had not yet been named.
The automatic flour mill built by Oliver Evans in 1785 was called the beginning of modern bulk material handling by Roe (1916). Evans's mill used a leather belt bucket elevator, screw conveyors, canvas belt conveyors, and other mechanical devices to completely automate the process of making flour. The innovation spread to other mills and breweries.
Probably the earliest industrial example of a linear and continuous assembly process is the Portsmouth Block Mills, built between 1801 and 1803. Marc Isambard Brunel (father of Isambard Kingdom Brunel), with the help of Henry Maudslay and others, designed 22 types of machine tools to make the parts for the rigging blocks used by the Royal Navy. This factory was so successful that it remained in use until the 1960s, with the workshop still visible at HM Dockyard in Portsmouth, and still containing some of the original machinery.
One of the earliest examples of an almost modern factory layout, designed for easy material handling, was the Bridgewater Foundry. The factory grounds were bordered by the Bridgewater Canal and the Liverpool and Manchester Railway. The buildings were arranged in a line with a railway for carrying the work going through the buildings. Cranes were used for lifting the heavy work, which sometimes weighed in the tens of tons. The work passed sequentially through to erection of framework and final assembly.
The first flow assembly line was initiated at the factory of Richard Garrett & Sons, Leiston Works in Leiston in the English county of Suffolk for the manufacture of portable steam engines. The assembly line area was called 'The Long Shop' on account of its length and was fully operational by early 1853. The boiler was brought up from the foundry and put at the start of the line, and as it progressed through the building it would stop at various stages where new parts would be added. From the upper level, where other parts were made, the lighter parts would be lowered over a balcony and then fixed onto the machine on the ground level. When the machine reached the end of the shop, it would be completed.
Interchangeable parts
During the early 19th century, the development of machine tools such as the screw-cutting lathe, metal planer, and milling machine, and of toolpath control via jigs and fixtures, provided the prerequisites for the modern assembly line by making interchangeable parts a practical reality.
Late 19th-century steam and electric conveyors
Steam-powered conveyor lifts began being used for loading and unloading ships some time in the last quarter of the 19th century. Hounshell (1984) shows a sketch of an electric-powered conveyor moving cans through a filling line in a canning factory.
The meatpacking industry of Chicago is believed to be one of the first industrial assembly lines (or disassembly lines) to be utilized in the United States, starting in 1867. Workers would stand at fixed stations while a pulley system brought the meat past each worker, who would complete a single task. Henry Ford and others have written about the influence of this slaughterhouse practice on the later developments at Ford Motor Company.
20th century
According to Domm, the implementation of mass production of an automobile via an assembly line may be credited to Ransom Olds, who used it to build the first mass-produced automobile, the Oldsmobile Curved Dash. Olds patented the assembly line concept, which he put to work in his Olds Motor Vehicle Company factory in 1901.
At Ford Motor Company, the assembly line was introduced by William "Pa" Klann upon his return from visiting Swift & Company's slaughterhouse in Chicago and viewing what was referred to as the "disassembly line", where carcasses were butchered as they moved along a conveyor. The efficiency of one person removing the same piece over and over without moving to another station caught his attention. He reported the idea to Peter E. Martin, soon to be head of Ford production, who was doubtful at the time but encouraged him to proceed. Others at Ford have claimed to have put the idea forth to Henry Ford, but Pa Klann's slaughterhouse revelation is well documented in the archives at the Henry Ford Museum and elsewhere, making him an important contributor to the modern automated assembly line concept. Ford was appreciative, having visited the highly automated 40-acre Sears mail order handling facility around 1906. At Ford, the process was an evolution by trial and error of a team consisting primarily of Peter E. Martin, the factory superintendent; Charles E. Sorensen, Martin's assistant; Clarence W. Avery; C. Harold Wills, draftsman and toolmaker; Charles Ebender; and József Galamb. Some of the groundwork for such development had recently been laid by the intelligent layout of machine tool placement that Walter Flanders had been doing at Ford up to 1908.
The moving assembly line was developed for the Ford Model T and began operation on October 7, 1913, at the Highland Park Ford Plant, and continued to evolve after that, using time and motion study. The assembly line, driven by conveyor belts, reduced production time for a Model T to just 93 minutes by dividing the process into 45 steps. Producing cars more quickly than the paint of the day could dry, it had an immense influence on the world.
Charles E. Sorensen, in his 1956 memoir My Forty Years with Ford, presented a different version of development that was not so much about individual "inventors" as a gradual, logical development of industrial engineering.
As a result of these developments in method, Ford's cars came off the line at three-minute intervals, or six feet per minute. This was much faster than previous methods, increasing production by eight to one (requiring 12.5 man-hours before, 1 hour 33 minutes after), while using less manpower. It was so successful that paint became a bottleneck: only japan black would dry fast enough, forcing the company to drop the variety of colours available before 1914, until fast-drying Duco lacquer was developed in 1926.
The assembly line technique was an integral part of the diffusion of the automobile into American society. Decreased costs of production allowed the cost of the Model T to fall within the budget of the American middle class. In 1908, the price of a Model T was around $825, and by 1912 it had decreased to around $575. This price reduction is comparable to a reduction from $15,000 to $10,000 in dollar terms from the year 2000. In 1914, an assembly line worker could buy a Model T with four months' pay.
Ford's complex safety procedures, especially assigning each worker to a specific location instead of allowing them to roam about, dramatically reduced the rate of injury. The combination of high wages and high efficiency is called "Fordism", and was copied by most major industries. The efficiency gains from the assembly line also coincided with the economic take-off of the United States. The assembly line forced workers to work at a certain pace with very repetitive motions, which led to more output per worker while other countries were using less productive methods.
In the automotive industry, its success was overwhelming, and it quickly spread worldwide: Ford France and Ford Britain adopted it in 1911, Ford Denmark in 1923, and Ford Germany and Ford Japan in 1925; in 1919, Vulcan (Southport, Lancashire) was the first native European manufacturer to adopt it. Soon, companies had to have assembly lines or risk going broke by not being able to compete; by 1930, 250 companies which did not have them had disappeared.
The massive demand for military hardware in World War II prompted assembly-line techniques in shipbuilding and aircraft production. Thousands of Liberty ships were built making extensive use of prefabrication, enabling ship assembly to be completed in weeks or even days. After having produced fewer than 3,000 planes for the United States Military in 1939, American aircraft manufacturers built over 300,000 planes in World War II. Vultee pioneered the use of the powered assembly line for aircraft manufacturing. Other companies quickly followed. As William S. Knudsen (having worked at Ford, GM and the National Defense Advisory Commission) observed, "We won because we smothered the enemy in an avalanche of production, the like of which he had never seen, nor dreamed possible."
Improved working conditions
In his 1922 autobiography, Henry Ford mentions several benefits of the assembly line including:
Workers do not do any heavy lifting.
No stooping or bending over.
No special training is required.
There are jobs that almost anyone can do.
It provided employment to immigrants.
The gains in productivity allowed Ford to increase worker pay from $1.50 per day to $5.00 per day once employees reached three years of service on the assembly line. Ford continued on to reduce the hourly work week while continuously lowering the Model T price. These goals appear altruistic; however, it has been argued that they were implemented by Ford in order to reduce high employee turnover: when the assembly line was introduced in 1913, it was discovered that "every time the company wanted to add 100 men to its factory personnel, it was necessary to hire 963" in order to counteract the natural distaste the assembly line seems to have inspired.
Sociological problems
Sociological work has explored the social alienation and boredom that many workers feel because of the repetition of doing the same specialized task all day long.
Karl Marx expressed in his theory of alienation the belief that, in order to achieve job satisfaction, workers need to see themselves in the objects they have created, that products should be "mirrors in which workers see their reflected essential nature". Marx viewed labour as a chance for people to externalize facets of their personalities. Marxists argue that performing repetitive, specialized tasks causes a feeling of disconnection between what a worker does all day, who they really are, and what they would ideally be able to contribute to society. Furthermore, Marx views these specialised jobs as insecure, since the worker is expendable as soon as costs rise and technology can replace more expensive human labour.
Since workers have to stand in the same place for hours and repeat the same motion hundreds of times per day, repetitive stress injuries are a possible occupational hazard. Industrial noise also proved dangerous; even where it was not too high, workers were often prohibited from talking. Charles Piaget, a skilled worker at the LIP factory, recalled that besides being prohibited from speaking, the semi-skilled workers had only 25 centimeters in which to move. Industrial ergonomics later tried to minimize physical trauma.
See also
Modern Times, a 1936 film featuring the Tramp character (played by Charlie Chaplin) struggling to adapt to assembly line work
Final Offer, a documentary film about the 1984 UAW/CAW contract negotiations shows working life on the floor of the GM Oshawa Ontario Car Assembly Plant
Reconfigurable and flexible manufacturing systems, involving Post-Fordism and lean manufacturing-influenced production
References
Footnotes
Works cited
External links
Homepage for assembly line optimization research
Assembly line optimization problems
History of the assembly line and its widespread effects
Cars Assembly Line
ca:Producció en cadena
|
American inventions;Articles containing video clips;Culture of Detroit;History of science and technology in the United States;Industrial processes;Manufacturing buildings and structures;Mass production;Types of production
|
https://en.wikipedia.org/wiki/Automorphism
|
In mathematics, an automorphism is an isomorphism from a mathematical object to itself. It is, in some sense, a symmetry of the object, and a way of mapping the object to itself while preserving all of its structure. The set of all automorphisms of an object forms a group, called the automorphism group. It is, loosely speaking, the symmetry group of the object.
Definition
In an algebraic structure such as a group, a ring, or vector space, an automorphism is simply a bijective homomorphism of an object into itself. (The definition of a homomorphism depends on the type of algebraic structure; see, for example, group homomorphism, ring homomorphism, and linear operator.)
More generally, for an object X in some category, an automorphism is a morphism of the object to itself that has an inverse morphism; that is, a morphism f : X → X is an automorphism if there is a morphism g : X → X such that g ∘ f = f ∘ g = id_X, where id_X is the identity morphism of X. For algebraic structures, the two definitions are equivalent; in this case, the identity morphism is simply the identity function, and is often called the trivial automorphism.
Automorphism group
The automorphisms of an object X form a group under composition of morphisms, which is called the automorphism group of X. This results straightforwardly from the definition of a category.
The automorphism group of an object X in a category C is often denoted Aut_C(X), or simply Aut(X) if the category is clear from context.
Examples
In set theory, an arbitrary permutation of the elements of a set X is an automorphism. The automorphism group of X is also called the symmetric group on X.
In elementary arithmetic, the set of integers, Z, considered as a group under addition, has a unique nontrivial automorphism: negation. Considered as a ring, however, it has only the trivial automorphism. Generally speaking, negation is an automorphism of any abelian group, but not of a ring or field.
A group automorphism is a group isomorphism from a group to itself. Informally, it is a permutation of the group elements such that the structure remains unchanged. For every group G there is a natural group homomorphism G → Aut(G) whose image is the group Inn(G) of inner automorphisms and whose kernel is the center of G. Thus, if G has trivial center it can be embedded into its own automorphism group.
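As a small computational illustration of these ideas (a brute-force sketch, practical only for tiny groups), the following Python snippet enumerates the automorphisms of the cyclic group Z_n under addition; for n = 6 it finds exactly the identity and negation:

    from itertools import permutations

    def automorphisms(n):
        """All bijections f of Z_n satisfying f(a + b) = f(a) + f(b) (mod n)."""
        return [f for f in permutations(range(n))
                if all(f[(a + b) % n] == (f[a] + f[b]) % n
                       for a in range(n) for b in range(n))]

    print(automorphisms(6))
    # [(0, 1, 2, 3, 4, 5), (0, 5, 4, 3, 2, 1)] -- the identity and negation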
In linear algebra, an endomorphism of a vector space V is a linear operator V → V. An automorphism is an invertible linear operator on V. When the vector space is finite-dimensional, the automorphism group of V is the same as the general linear group, GL(V). (The algebraic structure of all endomorphisms of V is itself an algebra over the same base field as V, whose invertible elements precisely consist of GL(V).)
A field automorphism is a bijective ring homomorphism from a field to itself.
The field of the rational numbers has no other automorphism than the identity, since an automorphism must fix the additive identity 0 and the multiplicative identity 1; the sum of a finite number of 1s must be fixed, as well as the additive inverses of these sums (that is, the automorphism fixes all integers); finally, since every rational number is the quotient of two integers, all rational numbers must be fixed by any automorphism.
The field of the real numbers has no automorphisms other than the identity. Indeed, the rational numbers must be fixed by every automorphism, per above; an automorphism must preserve inequalities, since x < y is equivalent to the existence of some z such that y − x = z^2, and the latter property is preserved by every automorphism; finally, every real number must be fixed, since it is the least upper bound of a sequence of rational numbers.
The field of the complex numbers has a unique nontrivial automorphism that fixes the real numbers. It is the complex conjugation, which maps a + bi to a − bi. The axiom of choice implies the existence of uncountably many automorphisms that do not fix the real numbers.
The study of automorphisms of algebraic field extensions is the starting point and the main object of Galois theory.
The automorphism group of the quaternions (H) as a ring consists of the inner automorphisms, by the Skolem–Noether theorem: maps of the form a ↦ qaq^(-1) for an invertible quaternion q. This group is isomorphic to SO(3), the group of rotations in 3-dimensional space.
The automorphism group of the octonions (O) is the exceptional Lie group G2.
In graph theory an automorphism of a graph is a permutation of the nodes that preserves edges and non-edges. In particular, if two nodes are joined by an edge, so are their images under the permutation.
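A brute-force sketch of graph automorphisms (feasible only for small graphs; the example graph is hypothetical):

    from itertools import permutations

    def graph_automorphisms(nodes, edges):
        """Node permutations (as dicts) mapping the edge set onto itself."""
        edge_set = {frozenset(e) for e in edges}
        auts = []
        for perm in permutations(nodes):
            relabel = dict(zip(nodes, perm))
            if {frozenset((relabel[u], relabel[v])) for u, v in edges} == edge_set:
                auts.append(relabel)
        return auts

    # The path graph 0 - 1 - 2: its only nontrivial automorphism swaps the ends.
    print(len(graph_automorphisms([0, 1, 2], [(0, 1), (1, 2)])))   # 2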
In geometry, an automorphism may be called a motion of the space. Specialized terminology is also used:
In metric geometry an automorphism is a self-isometry. The automorphism group is also called the isometry group.
In the category of Riemann surfaces, an automorphism is a biholomorphic map (also called a conformal map), from a surface to itself. For example, the automorphisms of the Riemann sphere are Möbius transformations.
An automorphism of a differentiable manifold M is a diffeomorphism from M to itself. The automorphism group is sometimes denoted Diff(M).
In topology, morphisms between topological spaces are called continuous maps, and an automorphism of a topological space is a homeomorphism of the space to itself, or self-homeomorphism (see homeomorphism group). In this example it is not sufficient for a morphism to be bijective to be an isomorphism.
History
One of the earliest group automorphisms (automorphism of a group, not simply a group of automorphisms of points) was given by the Irish mathematician William Rowan Hamilton in 1856, in his icosian calculus, where he discovered an order two automorphism, writing:
so that μ is a new fifth root of unity, connected with the former fifth root λ by relations of perfect reciprocity.
Inner and outer automorphisms
In some categories—notably groups, rings, and Lie algebras—it is possible to separate automorphisms into two types, called "inner" and "outer" automorphisms.
In the case of groups, the inner automorphisms are the conjugations by the elements of the group itself. For each element a of a group G, conjugation by a is the operation given by g ↦ aga^(-1) (or a^(-1)ga; usage varies). One can easily check that conjugation by a is a group automorphism. The inner automorphisms form a normal subgroup of Aut(G), denoted by Inn(G); this is called Goursat's lemma.
The other automorphisms are called outer automorphisms. The quotient group Aut(G) / Inn(G) is usually denoted by Out(G); the non-trivial elements are the cosets that contain the outer automorphisms.
The same definition holds in any unital ring or algebra where a is any invertible element. For Lie algebras the definition is slightly different.
See also
Antiautomorphism
Automorphism (in Sudoku puzzles)
Characteristic subgroup
Endomorphism ring
Frobenius automorphism
Morphism
Order automorphism (in order theory).
Relation-preserving automorphism
Fractional Fourier transform
References
External links
Automorphism at Encyclopaedia of Mathematics
|
Abstract algebra;Morphisms;Symmetry
|
https://en.wikipedia.org/wiki/Alloy
|
An alloy is a mixture of chemical elements of which, in most cases, at least one is a metallic element, although the term is also sometimes used for other mixtures of elements; herein only metallic alloys are described. Metallic alloys often have properties that differ from those of the pure elements from which they are made.
The vast majority of metals used for commercial purposes are alloyed to improve their properties or behavior, such as increased strength, hardness or corrosion resistance. Metals may also be alloyed to reduce their overall cost, for instance alloys of gold and copper.
A typical example of an alloy is 304 grade stainless steel, which is commonly used for kitchen utensils, pans, knives and forks. Sometimes known as 18/8, it is an alloy consisting broadly of 74% iron, 18% chromium and 8% nickel. The chromium and nickel alloying elements add strength and hardness to the majority iron element, but their main function is to make it resistant to rust/corrosion.
In an alloy, the atoms are joined by metallic bonding rather than by covalent bonds typically found in chemical compounds. The alloy constituents are usually measured by mass percentage for practical applications, and in atomic fraction for basic science studies. Alloys are usually classified as substitutional or interstitial alloys, depending on the atomic arrangement that forms the alloy. They can be further classified as homogeneous (consisting of a single phase), or heterogeneous (consisting of two or more phases) or intermetallic. An alloy may be a solid solution of metal elements (a single phase, where all metallic grains (crystals) are of the same composition) or a mixture of metallic phases (two or more solutions, forming a microstructure of different crystals within the metal).
Examples of alloys include red gold (gold and copper), white gold (gold and silver), sterling silver (silver and copper), steel or silicon steel (iron with non-metallic carbon or silicon respectively), solder, brass, pewter, duralumin, bronze, and amalgams.
Alloys are used in a wide variety of applications, from the steel alloys, used in everything from buildings to automobiles to surgical tools, to exotic titanium alloys used in the aerospace industry, to beryllium-copper alloys for non-sparking tools.
Characteristics
An alloy is a mixture of chemical elements, which forms an impure substance (admixture) that retains the characteristics of a metal. An alloy is distinct from an impure metal in that, with an alloy, the added elements are well controlled to produce desirable properties, while impure metals such as wrought iron are less controlled, but are often considered useful. Alloys are made by mixing two or more elements, at least one of which is a metal. This is usually called the primary metal or the base metal, and the name of this metal may also be the name of the alloy. The other constituents may or may not be metals but, when mixed with the molten base, they will be soluble and dissolve into the mixture.
The mechanical properties of alloys will often be quite different from those of their individual constituents. A metal that is normally very soft (malleable), such as aluminium, can be altered by alloying it with another soft metal, such as copper. Although both metals are very soft and ductile, the resulting aluminium alloy will have much greater strength. Adding a small amount of non-metallic carbon to iron trades its great ductility for the greater strength of an alloy called steel. Due to its very-high strength, but still substantial toughness, and its ability to be greatly altered by heat treatment, steel is one of the most useful and common alloys in modern use. By adding chromium to steel, its resistance to corrosion can be enhanced, creating stainless steel, while adding silicon will alter its electrical characteristics, producing silicon steel.
Like oil and water, a molten metal may not always mix with another element. For example, pure iron is almost completely insoluble with copper. Even when the constituents are soluble, each will usually have a saturation point, beyond which no more of the constituent can be added. Iron, for example, can hold a maximum of 6.67% carbon. Although the elements of an alloy usually must be soluble in the liquid state, they may not always be soluble in the solid state. If the metals remain soluble when solid, the alloy forms a solid solution, becoming a homogeneous structure consisting of identical crystals, called a phase. If as the mixture cools the constituents become insoluble, they may separate to form two or more different types of crystals, creating a heterogeneous microstructure of different phases, some with more of one constituent than the other. However, in other alloys, the insoluble elements may not separate until after crystallization occurs. If cooled very quickly, they first crystallize as a homogeneous phase, but they are supersaturated with the secondary constituents. As time passes, the atoms of these supersaturated alloys can separate from the crystal lattice, becoming more stable, and forming a second phase that serves to reinforce the crystals internally.
Some alloys, such as electrum—an alloy of silver and gold—occur naturally. Meteorites are sometimes made of naturally occurring alloys of iron and nickel, but are not native to the Earth. One of the first alloys made by humans was bronze, which is a mixture of the metals tin and copper. Bronze was an extremely useful alloy to the ancients, because it is much stronger and harder than either of its components. Steel was another common alloy. However, in ancient times, it could only be created as an accidental byproduct from the heating of iron ore in fires (smelting) during the manufacture of iron. Other ancient alloys include pewter, brass and pig iron. In the modern age, steel can be created in many forms. Carbon steel can be made by varying only the carbon content, producing soft alloys like mild steel or hard alloys like spring steel. Alloy steels can be made by adding other elements, such as chromium, molybdenum, vanadium or nickel, resulting in alloys such as high-speed steel or tool steel. Small amounts of manganese are usually alloyed with most modern steels because of its ability to remove unwanted impurities, like phosphorus, sulfur and oxygen, which can have detrimental effects on the alloy. However, most alloys were not created until the 1900s, such as various aluminium, titanium, nickel, and magnesium alloys. Some modern superalloys, such as incoloy, inconel, and hastelloy, may consist of a multitude of different elements.
An alloy is technically an impure metal, but when referring to alloys, the term impurities usually denotes undesirable elements. Such impurities are introduced from the base metals and alloying elements, but are removed during processing. For instance, sulfur is a common impurity in steel. Sulfur combines readily with iron to form iron sulfide, which is very brittle, creating weak spots in the steel. Lithium, sodium and calcium are common impurities in aluminium alloys, which can have adverse effects on the structural integrity of castings. Conversely, otherwise pure-metals that contain unwanted impurities are often called "impure metals" and are not usually referred to as alloys. Oxygen, present in the air, readily combines with most metals to form metal oxides; especially at higher temperatures encountered during alloying. Great care is often taken during the alloying process to remove excess impurities, using fluxes, chemical additives, or other methods of extractive metallurgy.
Theory
Alloying a metal is done by combining it with one or more other elements. The most common and oldest alloying process is performed by heating the base metal beyond its melting point and then dissolving the solutes into the molten liquid, which may be possible even if the melting point of the solute is far greater than that of the base. For example, in its liquid state, titanium is a very strong solvent capable of dissolving most metals and elements. In addition, it readily absorbs gases like oxygen and burns in the presence of nitrogen. This increases the chance of contamination from any contacting surface, and so must be melted in vacuum induction-heating and special, water-cooled, copper crucibles. However, some metals and solutes, such as iron and carbon, have very high melting-points and were impossible for ancient people to melt. Thus, alloying (in particular, interstitial alloying) may also be performed with one or more constituents in a gaseous state, such as found in a blast furnace to make pig iron (liquid-gas), nitriding, carbonitriding or other forms of case hardening (solid-gas), or the cementation process used to make blister steel (solid-gas). It may also be done with one, more, or all of the constituents in the solid state, such as found in ancient methods of pattern welding (solid-solid), shear steel (solid-solid), or crucible steel production (solid-liquid), mixing the elements via solid-state diffusion.
By adding another element to a metal, differences in the size of the atoms create internal stresses in the lattice of the metallic crystals; stresses that often enhance its properties. For example, the combination of carbon with iron produces steel, which is stronger than iron, its primary element. The electrical and thermal conductivity of alloys is usually lower than that of the pure metals. The physical properties, such as density, reactivity, Young's modulus of an alloy may not differ greatly from those of its base element, but engineering properties such as tensile strength, ductility, and shear strength may be substantially different from those of the constituent materials. This is sometimes a result of the sizes of the atoms in the alloy, because larger atoms exert a compressive force on neighboring atoms, and smaller atoms exert a tensile force on their neighbors, helping the alloy resist deformation. Sometimes alloys may exhibit marked differences in behavior even when small amounts of one element are present. For example, impurities in semiconducting ferromagnetic alloys lead to different properties, as first predicted by White, Hogan, Suhl, Tian Abrie and Nakamura.
Unlike pure metals, most alloys do not have a single melting point, but a melting range during which the material is a mixture of solid and liquid phases (a slush). The temperature at which melting begins is called the solidus, and the temperature when melting is just complete is called the liquidus. For many alloys there is a particular alloy proportion (in some cases more than one), called either a eutectic mixture or a peritectic composition, which gives the alloy a unique and low melting point, and no liquid/solid slush transition.
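Within this melting range, the relative amounts of liquid and solid follow from a simple mass balance known as the lever rule (a standard result; the compositions below are hypothetical). With C_0 the overall composition and C_S, C_L the solid and liquid compositions read from the phase diagram at the temperature of interest:

    f_L = \frac{C_0 - C_S}{C_L - C_S}, \qquad f_S = 1 - f_L

For example, if C_0 = 40%, C_S = 30% and C_L = 55% solute at some temperature, the slush is f_L = (40 − 30)/(55 − 30) = 40% liquid and 60% solid.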
Heat treatment
Alloying elements are added to a base metal, to induce hardness, toughness, ductility, or other desired properties. Most metals and alloys can be work hardened by creating defects in their crystal structure. These defects are created during plastic deformation by hammering, bending, extruding, et cetera, and are permanent unless the metal is recrystallized. Otherwise, some alloys can also have their properties altered by heat treatment. Nearly all metals can be softened by annealing, which recrystallizes the alloy and repairs the defects, but not as many can be hardened by controlled heating and cooling. Many alloys of aluminium, copper, magnesium, titanium, and nickel can be strengthened to some degree by some method of heat treatment, but few respond to this to the same degree as does steel.
The base metal iron of the iron-carbon alloy known as steel undergoes a change in the arrangement (allotropy) of the atoms of its crystal matrix at a certain temperature (typically between about 727 °C and 912 °C, depending on carbon content). This allows the smaller carbon atoms to enter the interstices of the iron crystal. When this diffusion happens, the carbon atoms are said to be in solution in the iron, forming a particular single, homogeneous, crystalline phase called austenite. If the steel is cooled slowly, the carbon can diffuse out of the iron and it will gradually revert to its low temperature allotrope. During slow cooling, the carbon atoms will no longer be as soluble with the iron, and will be forced to precipitate out of solution, nucleating into a more concentrated form of iron carbide (Fe3C) in the spaces between the pure iron crystals. The steel then becomes heterogeneous, as it is formed of two phases, the iron-carbon phase called cementite (or carbide), and pure iron ferrite. Such a heat treatment produces a steel that is rather soft. If the steel is cooled quickly, however, the carbon atoms will not have time to diffuse and precipitate out as carbide, but will be trapped within the iron crystals. When rapidly cooled, a diffusionless (martensite) transformation occurs, in which the carbon atoms become trapped in solution. This causes the iron crystals to deform as the crystal structure tries to change to its low temperature state, leaving those crystals very hard but much less ductile (more brittle).
While the high strength of steel results when diffusion and precipitation is prevented (forming martensite), most heat-treatable alloys are precipitation hardening alloys, that depend on the diffusion of alloying elements to achieve their strength. When heated to form a solution and then cooled quickly, these alloys become much softer than normal, during the diffusionless transformation, but then harden as they age. The solutes in these alloys will precipitate over time, forming intermetallic phases, which are difficult to discern from the base metal. Unlike steel, in which the solid solution separates into different crystal phases (carbide and ferrite), precipitation hardening alloys form different phases within the same crystal. These intermetallic alloys appear homogeneous in crystal structure, but tend to behave heterogeneously, becoming hard and somewhat brittle.
In 1906, precipitation hardening alloys were discovered by Alfred Wilm. Precipitation hardening alloys, such as certain alloys of aluminium, titanium, and copper, are heat-treatable alloys that soften when quenched (cooled quickly), and then harden over time. Wilm had been searching for a way to harden aluminium alloys for use in machine-gun cartridge cases. Knowing that aluminium-copper alloys were heat-treatable to some degree, Wilm tried quenching a ternary alloy of aluminium, copper, and the addition of magnesium, but was initially disappointed with the results. However, when Wilm retested it the next day he discovered that the alloy increased in hardness when left to age at room temperature, and far exceeded his expectations. Although an explanation for the phenomenon was not provided until 1919, duralumin was one of the first "age hardening" alloys used, becoming the primary building material for the first Zeppelins, and was soon followed by many others. Because they often exhibit a combination of high strength and low weight, these alloys became widely used in many forms of industry, including the construction of modern aircraft.
Mechanisms
When a molten metal is mixed with another substance, there are two mechanisms that can cause an alloy to form, called atom exchange and the interstitial mechanism. The relative size of each element in the mix plays a primary role in determining which mechanism will occur. When the atoms are relatively similar in size, the atom exchange method usually happens, where some of the atoms composing the metallic crystals are substituted with atoms of the other constituent. This is called a substitutional alloy. Examples of substitutional alloys include bronze and brass, in which some of the copper atoms are substituted with either tin or zinc atoms respectively.
In the case of the interstitial mechanism, one atom is usually much smaller than the other and can not successfully substitute for the other type of atom in the crystals of the base metal. Instead, the smaller atoms become trapped in the interstitial sites between the atoms of the crystal matrix. This is referred to as an interstitial alloy. Steel is an example of an interstitial alloy, because the very small carbon atoms fit into interstices of the iron matrix.
Stainless steel is an example of a combination of interstitial and substitutional alloys, because the carbon atoms fit into the interstices, but some of the iron atoms are substituted by nickel and chromium atoms.
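A toy sketch of this size criterion (the atomic radii are approximate empirical values, and the thresholds are the classic Hume-Rothery 15% rule for substitution and a roughly 0.59 radius-ratio rule for interstitials; a rough heuristic, not a predictive model):

    # Approximate atomic radii in picometres (illustrative values only).
    RADIUS = {"Fe": 126, "C": 70, "Cu": 128, "Zn": 134, "Sn": 145}

    def alloy_mechanism(solvent, solute):
        """Guess substitutional vs. interstitial alloying from relative atomic size."""
        ratio = RADIUS[solute] / RADIUS[solvent]
        if ratio < 0.59:
            return "interstitial"        # solute fits between lattice sites
        if abs(1 - ratio) <= 0.15:
            return "substitutional"      # solute can replace lattice atoms
        return "limited solubility likely"

    print(alloy_mechanism("Fe", "C"))    # interstitial (steel)
    print(alloy_mechanism("Cu", "Zn"))   # substitutional (brass)
    print(alloy_mechanism("Cu", "Sn"))   # substitutional (bronze)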
History and examples
Meteoric iron
The use of alloys by humans started with the use of meteoric iron, a naturally occurring alloy of nickel and iron. It is the main constituent of iron meteorites. As no metallurgic processes were used to separate iron from nickel, the alloy was used as it was. Meteoric iron could be forged from a red heat to make objects such as tools, weapons, and nails. In many cultures it was shaped by cold hammering into knives and arrowheads. Meteorites themselves were often used as anvils. Meteoric iron was very rare and valuable, and difficult for ancient people to work.
Bronze and brass
Iron is usually found as iron ore on Earth, except for one deposit of native iron in Greenland, which was used by the Inuit. Native copper, however, was found worldwide, along with silver, gold, and platinum, which were also used to make tools, jewelry, and other objects since Neolithic times. Copper was the hardest of these metals, and the most widely distributed. It became one of the most important metals to the ancients. Around 10,000 years ago in the highlands of Anatolia (Turkey), humans learned to smelt metals such as copper and tin from ore. Around 2500 BC, people began alloying the two metals to form bronze, which was much harder than its ingredients. Tin was rare, however, being found mostly in Great Britain. In the Middle East, people began alloying copper with zinc to form brass. Ancient civilizations took into account the mixture and the various properties it produced, such as hardness, toughness and melting point, under various conditions of temperature and work hardening, developing much of the information contained in modern alloy phase diagrams. For example, arrowheads from the Chinese Qin dynasty (around 200 BC) were often constructed with a hard bronze-head, but a softer bronze-tang, combining the alloys to prevent both dulling and breaking during use.
Amalgams
Mercury has been smelted from cinnabar for thousands of years. Mercury dissolves many metals, such as gold, silver, and tin, to form amalgams (an alloy in a soft paste or liquid form at ambient temperature). Amalgams have been used since 200 BC in China for gilding objects such as armor and mirrors with precious metals. The ancient Romans often used mercury-tin amalgams for gilding their armor. The amalgam was applied as a paste and then heated until the mercury vaporized, leaving the gold, silver, or tin behind. Mercury was often used in mining, to extract precious metals like gold and silver from their ores.
Precious metals
Many ancient civilizations alloyed metals for purely aesthetic purposes. In ancient Egypt and Mycenae, gold was often alloyed with copper to produce red-gold, or iron to produce a bright burgundy-gold. Gold was often found alloyed with silver or other metals to produce various types of colored gold. These metals were also used to strengthen each other, for more practical purposes. Copper was often added to silver to make sterling silver, increasing its strength for use in dishes, silverware, and other practical items. Quite often, precious metals were alloyed with less valuable substances as a means to deceive buyers. Around 250 BC, Archimedes was commissioned by the King of Syracuse to find a way to check the purity of the gold in a crown, leading to the famous bath-house shouting of "Eureka!" upon the discovery of Archimedes' principle.
Pewter
The term pewter covers a variety of alloys consisting primarily of tin. As a pure metal, tin is much too soft to use for most practical purposes. However, during the Bronze Age, tin was a rare metal in many parts of Europe and the Mediterranean, so it was often valued higher than gold. To make jewellery, cutlery, or other objects from tin, workers usually alloyed it with other metals to increase strength and hardness. These metals were typically lead, antimony, bismuth or copper. These solutes were sometimes added individually in varying amounts, or added together, making a wide variety of objects, ranging from practical items such as dishes, surgical tools, candlesticks or funnels, to decorative items like earrings and hair clips.
The earliest examples of pewter come from ancient Egypt, around 1450 BC. The use of pewter was widespread across Europe, from France to Norway and Britain (where most of the ancient tin was mined) to the Near East. The alloy was also used in China and the Far East, arriving in Japan around 800 AD, where it was used for making objects like ceremonial vessels, tea canisters, or chalices used in Shinto shrines.
Iron
The first known smelting of iron began in Anatolia, around 1800 BC. Called the bloomery process, it produced very soft but ductile wrought iron. By 800 BC, iron-making technology had spread to Europe, arriving in Japan around 700 AD. Pig iron, a very hard but brittle alloy of iron and carbon, was being produced in China as early as 1200 BC, but did not arrive in Europe until the Middle Ages. Pig iron has a lower melting point than iron, and was used for making cast-iron. However, these metals found little practical use until the introduction of crucible steel around 300 BC. These steels were of poor quality, and the introduction of pattern welding, around the 1st century AD, sought to balance the extreme properties of the alloys by laminating them, to create a tougher metal. Around 700 AD, the Japanese began folding bloomery-steel and cast-iron in alternating layers to increase the strength of their swords, using clay fluxes to remove slag and impurities. This method of Japanese swordsmithing produced one of the purest steel-alloys of the ancient world.
While the use of iron started to become more widespread around 1200 BC, mainly because of interruptions in the trade routes for tin, the metal was much softer than bronze. However, very small amounts of steel (an alloy of iron and around 1% carbon) were always a byproduct of the bloomery process. The ability to modify the hardness of steel by heat treatment had been known since 1100 BC, and the rare material was valued for the manufacture of tools and weapons. Because the ancients could not produce temperatures high enough to melt iron fully, the production of steel in decent quantities did not occur until the introduction of blister steel during the Middle Ages. This method introduced carbon by heating wrought iron in charcoal for long periods of time; because the absorption of carbon in this manner is extremely slow, the penetration was not very deep and the alloy was not homogeneous. In 1740, Benjamin Huntsman began melting blister steel in a crucible to even out the carbon content, creating the first process for the mass production of tool steel. Huntsman's process was used for manufacturing tool steel until the early 1900s.
The introduction of the blast furnace to Europe in the Middle Ages meant that people could produce pig iron in much higher volumes than wrought iron. Because pig iron could be melted, people began to develop processes to reduce carbon in liquid pig iron to create steel. Puddling had been used in China since the first century, and was introduced in Europe during the 1700s, where molten pig iron was stirred while exposed to the air, to remove the carbon by oxidation. In 1858, Henry Bessemer developed a process of steel-making by blowing hot air through liquid pig iron to reduce the carbon content. The Bessemer process led to the first large scale manufacture of steel.
Steel is an alloy of iron and carbon, but the term alloy steel usually only refers to steels that contain other elements—like vanadium, molybdenum, or cobalt—in amounts sufficient to alter the properties of the base steel. Since ancient times, when steel was used primarily for tools and weapons, the methods of producing and working the metal were often closely guarded secrets. Even long after the Age of Enlightenment, the steel industry was very competitive and manufacturers went to great lengths to keep their processes confidential, resisting any attempts to scientifically analyze the material for fear it would reveal their methods. For example, the people of Sheffield, a center of steel production in England, were known to routinely bar visitors and tourists from entering town to deter industrial espionage. Thus, almost no metallurgical information existed about steel until 1860. Because of this lack of understanding, steel was not generally considered an alloy until the decades between 1930 and 1970 (primarily due to the work of scientists like William Chandler Roberts-Austen, Adolf Martens, and Edgar Bain), so "alloy steel" became the popular term for ternary and quaternary steel-alloys.
After Benjamin Huntsman developed his crucible steel in 1740, he began experimenting with the addition of elements like manganese (in the form of a high-manganese pig-iron called spiegeleisen), which helped remove impurities such as phosphorus and oxygen; a process adopted by Bessemer and still used in modern steels (albeit in concentrations low enough to still be considered carbon steel). Afterward, many people began experimenting with various alloys of steel without much success. However, in 1882, Robert Hadfield, a pioneer in steel metallurgy, took an interest and produced a steel alloy containing around 12% manganese. Called mangalloy, it exhibited extreme hardness and toughness, becoming the first commercially viable alloy-steel. Afterward, he created silicon steel, launching the search for other possible alloys of steel.
Robert Forester Mushet found that by adding tungsten to steel he could produce a very hard edge that would resist losing its hardness at high temperatures. "R. Mushet's special steel" (RMS) became the first high-speed steel. Mushet's steel was quickly replaced by tungsten carbide steel, developed by Taylor and White in 1900, in which they doubled the tungsten content and added small amounts of chromium and vanadium, producing a superior steel for use in lathes and machining tools. In 1903, the Wright brothers used a chromium-nickel steel to make the crankshaft for their airplane engine, while in 1908 Henry Ford began using vanadium steels for parts like crankshafts and valves in his Model T Ford, due to their higher strength and resistance to high temperatures. In 1912, the Krupp Ironworks in Germany developed a rust-resistant steel by adding 21% chromium and 7% nickel, producing the first stainless steel.
Others
Due to their high reactivity, most metals were not discovered until the 19th century. A method for extracting aluminium from bauxite was proposed by Humphry Davy in 1807, using an electric arc. Although his attempts were unsuccessful, by 1855 the first sales of pure aluminium reached the market. However, as extractive metallurgy was still in its infancy, most aluminium extraction-processes produced unintended alloys contaminated with other elements found in the ore; the most abundant of which was copper. These aluminium-copper alloys (at the time termed "aluminium bronze") preceded pure aluminium, offering greater strength and hardness over the soft, pure metal, and to a slight degree were found to be heat treatable. However, due to their softness and limited hardenability these alloys found little practical use, and were more of a novelty, until the Wright brothers used an aluminium alloy to construct the first airplane engine in 1903. During the time between 1865 and 1910, processes for extracting many other metals were discovered, such as chromium, vanadium, tungsten, iridium, cobalt, and molybdenum, and various alloys were developed.
Prior to 1910, research mainly consisted of private individuals tinkering in their own laboratories. However, as the aircraft and automotive industries began growing, research into alloys became an industrial effort in the years following 1910: new magnesium alloys were developed for pistons and wheels in cars, pot metal for levers and knobs, and aluminium alloys for airframes and aircraft skins were put into use. The Doehler Die Casting Co. of Toledo, Ohio was known for the production of Brastil, a high-tensile, corrosion-resistant bronze alloy.
|
;Chemistry;Metallurgy
|
https://en.wikipedia.org/wiki/Atomic%20physics
|
Atomic physics is the field of physics that studies atoms as an isolated system of electrons and an atomic nucleus. Atomic physics typically refers to the study of atomic structure and the interaction between atoms. It is primarily concerned with the way in which electrons are arranged around the nucleus and the processes by which these arrangements change. This comprises ions as well as neutral atoms; unless otherwise stated, the term atom is assumed to include ions.
The term atomic physics can be associated with nuclear power and nuclear weapons, due to the synonymous use of atomic and nuclear in standard English. Physicists distinguish between atomic physics—which deals with the atom as a system consisting of a nucleus and electrons—and nuclear physics, which studies nuclear reactions and special properties of atomic nuclei.
As with many scientific fields, strict delineation can be highly contrived and atomic physics is often considered in the wider context of atomic, molecular, and optical physics. Physics research groups are usually so classified.
Isolated atoms
Atomic physics primarily considers atoms in isolation. Atomic models will consist of a single nucleus that may be surrounded by one or more bound electrons. It is not concerned with the formation of molecules (although much of the physics is identical), nor does it examine atoms in a solid state as condensed matter. It is concerned with processes such as ionization and excitation by photons or collisions with atomic particles.
While modelling atoms in isolation may not seem realistic, if one considers atoms in a gas or plasma then the time-scales for atom-atom interactions are huge in comparison to the atomic processes that are generally considered. This means that the individual atoms can be treated as if each were in isolation, as the vast majority of the time they are. By this consideration, atomic physics provides the underlying theory in plasma physics and atmospheric physics, even though both deal with very large numbers of atoms.
Electronic configuration
Electrons form notional shells around the nucleus. These are normally in a ground state but can be excited by the absorption of energy from light (photons), magnetic fields, or interaction with a colliding particle (typically ions or other electrons).
Electrons that populate a shell are said to be in a bound state. The energy necessary to remove an electron from its shell (taking it to infinity) is called the binding energy. Any quantity of energy absorbed by the electron in excess of this amount is converted to kinetic energy according to the conservation of energy. The atom is said to have undergone the process of ionization.
If the electron absorbs a quantity of energy less than the binding energy, it will be transferred to an excited state. After a certain time, the electron in an excited state will "jump" (undergo a transition) to a lower state. In a neutral atom, the system will emit a photon of the difference in energy, since energy is conserved.
If an inner electron has absorbed more than the binding energy (so that the atom ionizes), then a more outer electron may undergo a transition to fill the inner orbital. In this case, a visible photon or a characteristic X-ray is emitted, or a phenomenon known as the Auger effect may take place, where the released energy is transferred to another bound electron, causing it to go into the continuum. The Auger effect allows one to multiply ionize an atom with a single photon.
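The energy bookkeeping above can be illustrated numerically. The following minimal Python sketch (not part of the original article; the function name and the example photon energy are illustrative choices, with 13.6 eV being the hydrogen ground-state binding energy) computes the kinetic energy carried away by a photoelectron when the photon energy exceeds the binding energy:

```python
# Minimal sketch: energy conservation in photoionization.
# A photon of energy E_photon ejects an electron bound by E_binding;
# any excess appears as the photoelectron's kinetic energy.

E_BINDING_H_1S = 13.6  # eV, hydrogen ground-state binding energy

def photoelectron_kinetic_energy(e_photon_ev, e_binding_ev=E_BINDING_H_1S):
    """Return the ejected electron's kinetic energy in eV, or None if
    the photon cannot ionize from this level (it can only excite)."""
    if e_photon_ev < e_binding_ev:
        return None
    return e_photon_ev - e_binding_ev

print(photoelectron_kinetic_energy(21.2))  # 21.2 eV photon -> about 7.6 eV
```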
There are rather strict selection rules as to the electronic configurations that can be reached by excitation by light — however, there are no such rules for excitation by collision processes.
Bohr Model of the Atom
The Bohr model, proposed by Niels Bohr in 1913, was a landmark theory describing the structure of the hydrogen atom. It introduced the idea of quantized orbits for electrons, combining classical and quantum physics.
Key Postulates of the Bohr Model
1. Electrons Move in Circular Orbits:
• Electrons revolve around the nucleus in fixed, circular paths called orbits or energy levels.
• These orbits are stable and do not radiate energy.
2. Quantization of Angular Momentum:
• The angular momentum of an electron is quantized and given by L = mvr = nħ,
where:
• m: mass of the electron.
• v: velocity of the electron.
• r: radius of the orbit.
• ħ: reduced Planck's constant (ħ = h/2π).
• n: principal quantum number, representing the orbit.
3. Energy Levels:
• Each orbit has a specific energy. The total energy of an electron in the nth orbit is E_n = E_1/n²,
where E_1 = −13.6 eV is the ground-state energy of the hydrogen atom.
4. Emission or Absorption of Energy:
• Electrons can transition between orbits by absorbing or emitting a photon whose energy equals the difference between the energy levels: hν = |E_f − E_i|,
where:
• h: Planck's constant.
• ν: frequency of the emitted/absorbed radiation.
• E_f, E_i: final and initial energy levels.
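These postulates translate directly into numbers. Below is a minimal Python sketch (illustrative only; the constants are standard values rounded for readability, and the function names are ad hoc) that evaluates E_n = E_1/n² and the frequency of the photon emitted in a downward transition:

```python
# Bohr-model hydrogen: energy levels E_n = E_1 / n^2 and the photon
# frequency h*nu = E_initial - E_final for a downward transition.
PLANCK_H_EV_S = 4.135667696e-15  # Planck constant in eV*s
E1_EV = -13.6                    # hydrogen ground-state energy in eV

def energy(n):
    """Total energy (eV) of the electron in the nth orbit."""
    return E1_EV / n**2

def emitted_frequency(n_initial, n_final):
    """Frequency (Hz) of the photon emitted when the electron drops
    from n_initial to n_final (requires n_initial > n_final)."""
    delta_e = energy(n_initial) - energy(n_final)  # positive for a drop
    return delta_e / PLANCK_H_EV_S

print(f"E_3 = {energy(3):.3f} eV, E_2 = {energy(2):.3f} eV")
print(f"3 -> 2 transition: {emitted_frequency(3, 2):.3e} Hz")
```

For the n = 3 to n = 2 transition this gives roughly 4.6 × 10^14 Hz, the familiar red H-alpha line of the Balmer series.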
History and developments
One of the earliest steps towards atomic physics was the recognition that matter was composed of atoms. It forms a part of texts written in the 6th century BC to the 2nd century BC, such as those of Democritus or the Vaiśeṣika Sūtra written by Kaṇāda. This theory was later developed in the modern sense of the basic unit of a chemical element by the British chemist and physicist John Dalton in the early 19th century. At this stage, it was not clear what atoms were, although they could be described and classified by their properties (in bulk). The invention of the periodic system of elements by Dmitri Mendeleev was another great step forward.
The true beginning of atomic physics is marked by the discovery of spectral lines and attempts to describe the phenomenon, most notably by Joseph von Fraunhofer. The study of these lines led to the Bohr atom model and to the birth of quantum mechanics. In seeking to explain atomic spectra, an entirely new mathematical model of matter was revealed. As far as atoms and their electron shells were concerned, not only did this yield a better overall description, i.e. the atomic orbital model, but it also provided a new theoretical basis for chemistry (quantum chemistry) and spectroscopy.
Since the Second World War, both theoretical and experimental fields have advanced at a rapid pace. This can be attributed to progress in computing technology, which has allowed larger and more sophisticated models of atomic structure and associated collision processes. Similar technological advances in accelerators, detectors, magnetic field generation and lasers have greatly assisted experimental work.
Beyond the well-known phenomena that can be described with ordinary quantum mechanics, chaotic processes can occur which require different descriptions.
See also
Particle physics
Isomeric shift
Atomism
Ionisation
Quantum mechanics
Electron correlation
Quantum chemistry
Bound state
External links
MIT-Harvard Center for Ultracold Atoms
Stanford QFARM Initiative for Quantum Science & Engineering
Joint Quantum Institute at University of Maryland and NIST
Atomic Physics on the Internet
JILA (Atomic Physics)
ORNL Physics Division
|
;Atomic, molecular, and optical physics
|
https://en.wikipedia.org/wiki/Atomic%20orbital
|
In quantum mechanics, an atomic orbital is a function describing the location and wave-like behavior of an electron in an atom. This function describes an electron's charge distribution around the atom's nucleus, and can be used to calculate the probability of finding an electron in a specific region around the nucleus.
Each orbital in an atom is characterized by a set of values of three quantum numbers n, ℓ, and m_ℓ, which respectively correspond to the electron's energy, its orbital angular momentum, and its orbital angular momentum projected along a chosen axis (magnetic quantum number). The orbitals with a well-defined magnetic quantum number are generally complex-valued. Real-valued orbitals can be formed as linear combinations of m_ℓ and −m_ℓ orbitals, and are often labeled using associated harmonic polynomials (e.g., xy, x² − y²) which describe their angular structure.
An orbital can be occupied by a maximum of two electrons, each with its own projection of spin m_s. The simple names s orbital, p orbital, d orbital, and f orbital refer to orbitals with angular momentum quantum number ℓ = 0, 1, 2, and 3 respectively. These names, together with their n values, are used to describe electron configurations of atoms. They are derived from description by early spectroscopists of certain series of alkali metal spectroscopic lines as sharp, principal, diffuse, and fundamental. Orbitals for ℓ > 3 continue alphabetically (g, h, i, k, ...), omitting j because some languages do not distinguish between the letters "i" and "j".
Atomic orbitals are basic building blocks of the atomic orbital model (or electron cloud or wave mechanics model), a modern framework for visualizing submicroscopic behavior of electrons in matter. In this model, the electron cloud of an atom may be seen as being built up (in approximation) in an electron configuration that is a product of simpler hydrogen-like atomic orbitals. The repeating periodicity of blocks of 2, 6, 10, and 14 elements within sections of the periodic table arises naturally from the total number of electrons that occupy a complete set of s, p, d, and f orbitals, respectively, though for higher values of the quantum number n, particularly when the atom bears a positive charge, the energies of certain sub-shells become very similar and so the order in which they are said to be populated by electrons (e.g., Cr = [Ar]4s13d5 and Cr2+ = [Ar]3d4) can be rationalized only somewhat arbitrarily.
Electron properties
With the development of quantum mechanics and experimental findings (such as the two slit diffraction of electrons), it was found that the electrons orbiting a nucleus could not be fully described as particles, but needed to be explained by wave–particle duality. In this sense, electrons have the following properties:
Wave-like properties:
Electrons do not orbit a nucleus in the manner of a planet orbiting a star, but instead exist as standing waves. Thus the lowest possible energy an electron can take is similar to the fundamental frequency of a wave on a string. Higher energy states are similar to harmonics of that fundamental frequency.
The electrons are never in a single point location, though the probability of interacting with the electron at a single point can be found from the electron's wave function. The electron's charge acts like it is smeared out in space in a continuous distribution, proportional at any point to the squared magnitude of the electron's wave function.
Particle-like properties:
The number of electrons orbiting a nucleus can be only an integer.
Electrons jump between orbitals like particles. For example, if one photon strikes the electrons, only one electron changes state as a result.
Electrons retain particle-like properties such as: each wave state has the same electric charge as its electron particle. Each wave state has a single discrete spin (spin up or spin down) depending on its superposition.
Thus, electrons cannot be described simply as solid particles. An analogy might be that of a large and often oddly shaped "atmosphere" (the electron), distributed around a relatively tiny planet (the nucleus). Atomic orbitals exactly describe the shape of this "atmosphere" only when one electron is present. When more electrons are added, the additional electrons tend to more evenly fill in a volume of space around the nucleus so that the resulting collection ("electron cloud") tends toward a generally spherical zone of probability describing the electron's location, because of the uncertainty principle.
One should remember that these orbital 'states', as described here, are merely eigenstates of an electron in its orbit. An actual electron exists in a superposition of states, which is like a weighted average, but with complex number weights. So, for instance, an electron could be in a pure eigenstate (2, 1, 0), or a mixed state such as (1/√2)(2, 1, 0) + (1/√2)(2, 1, 1), or even a mixed state with complex weights such as (1/√2)(2, 1, 0) + (i/√2)(2, 1, 1). For each eigenstate, a property has an eigenvalue. So, for the three states just mentioned, the value of n is 2, and the value of ℓ is 1. For the second and third states, the value of m_ℓ is a superposition of 0 and 1. As a superposition of states, it is ambiguous: if measured, it is either exactly 0 or exactly 1, not an intermediate or average value like the fraction 1/2. A superposition of eigenstates (2, 1, 1) and (3, 2, 1) would have an ambiguous n and ℓ, but m_ℓ would definitely be 1. Eigenstates make it easier to deal with the math. You can choose a different basis of eigenstates by superimposing eigenstates from any other basis (see Real orbitals below).
Formal quantum mechanical definition
Atomic orbitals may be defined more precisely in formal quantum mechanical language. They are approximate solutions to the Schrödinger equation for the electrons bound to the atom by the electric field of the atom's nucleus. Specifically, in quantum mechanics, the state of an atom, i.e., an eigenstate of the atomic Hamiltonian, is approximated by an expansion (see configuration interaction expansion and basis set) into linear combinations of anti-symmetrized products (Slater determinants) of one-electron functions. The spatial components of these one-electron functions are called atomic orbitals. (When one considers also their spin component, one speaks of atomic spin orbitals.) A state is actually a function of the coordinates of all the electrons, so that their motion is correlated, but this is often approximated by this independent-particle model of products of single electron wave functions. (The London dispersion force, for example, depends on the correlations of the motion of the electrons.)
In atomic physics, the atomic spectral lines correspond to transitions (quantum leaps) between quantum states of an atom. These states are labeled by a set of quantum numbers summarized in the term symbol and usually associated with particular electron configurations, i.e., by occupation schemes of atomic orbitals (for example, 1s2 2s2 2p6 for the ground state of neon; term symbol: 1S0).
This notation means that the corresponding Slater determinants have a clear higher weight in the configuration interaction expansion. The atomic orbital concept is therefore a key concept for visualizing the excitation process associated with a given transition. For example, one can say for a given transition that it corresponds to the excitation of an electron from an occupied orbital to a given unoccupied orbital. Nevertheless, one has to keep in mind that electrons are fermions ruled by the Pauli exclusion principle and cannot be distinguished from each other. Moreover, it sometimes happens that the configuration interaction expansion converges very slowly and that one cannot speak of a simple one-determinant wave function at all. This is the case when electron correlation is large.
Fundamentally, an atomic orbital is a one-electron wave function, even though many electrons are not in one-electron atoms, and so the one-electron view is an approximation. When thinking about orbitals, we are often given an orbital visualization heavily influenced by the Hartree–Fock approximation, which is one way to reduce the complexities of molecular orbital theory.
Types of orbital
Atomic orbitals can be the hydrogen-like "orbitals" which are exact solutions to the Schrödinger equation for a hydrogen-like "atom" (i.e., an atom with one electron). Alternatively, atomic orbitals refer to functions that depend on the coordinates of one electron (i.e., orbitals) but are used as starting points for approximating wave functions that depend on the simultaneous coordinates of all the electrons in an atom or molecule. The coordinate systems chosen for orbitals are usually spherical coordinates (r, θ, φ) in atoms and Cartesian (x, y, z) in polyatomic molecules. The advantage of spherical coordinates here is that an orbital wave function is a product of three factors each dependent on a single coordinate: ψ(r, θ, φ) = R(r)Θ(θ)Φ(φ). The angular factors of atomic orbitals generate s, p, d, etc. functions as real combinations of spherical harmonics Y_ℓm(θ, φ) (where ℓ and m are quantum numbers). There are typically three mathematical forms for the radial functions R(r) which can be chosen as a starting point for the calculation of the properties of atoms and molecules with many electrons:
The hydrogen-like orbitals are derived from the exact solutions of the Schrödinger equation for one electron and a nucleus, for a hydrogen-like atom. The part of the function that depends on distance r from the nucleus has n − ℓ − 1 radial nodes and decays exponentially (as e^(−Zr/(n·a₀))).
The Slater-type orbital (STO) is a form without radial nodes but decays from the nucleus as does a hydrogen-like orbital.
The form of the Gaussian type orbital (Gaussians) has no radial nodes and decays as e^(−αr²).
Although hydrogen-like orbitals are still used as pedagogical tools, the advent of computers has made STOs preferable for atoms and diatomic molecules since combinations of STOs can replace the nodes in hydrogen-like orbitals. Gaussians are typically used in molecules with three or more atoms. Although not as accurate by themselves as STOs, combinations of many Gaussians can attain the accuracy of hydrogen-like orbitals.
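The practical difference between these radial forms can be made concrete by evaluating them at a few distances. The sketch below (an illustration with arbitrarily chosen exponents, not a statement about any particular basis set) shows how much faster a single Gaussian falls off than a Slater-type function at large r, which is why several Gaussians are typically combined to mimic one STO:

```python
import numpy as np

# Illustrative comparison of radial decay: Slater-type exp(-zeta*r)
# versus Gaussian-type exp(-alpha*r^2); exponents chosen arbitrarily.
zeta, alpha = 1.0, 0.5
r = np.array([1.0, 2.0, 4.0, 8.0])

sto = np.exp(-zeta * r)      # exponential decay, like a hydrogen-like orbital
gto = np.exp(-alpha * r**2)  # falls off far more steeply at large r

for ri, s, g in zip(r, sto, gto):
    print(f"r = {ri:3.1f}   STO = {s:.3e}   GTO = {g:.3e}")
```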
History
The term orbital was introduced by Robert S. Mulliken in 1932 as short for one-electron orbital wave function. Niels Bohr explained around 1913 that electrons might revolve around a compact nucleus with definite angular momentum. Bohr's model was an improvement on the 1911 explanations of Ernest Rutherford, that of the electron moving around a nucleus. Japanese physicist Hantaro Nagaoka published an orbit-based hypothesis for electron behavior as early as 1904. These theories were each built upon new observations starting with simple understanding and becoming more correct and complex. Explaining the behavior of these electron "orbits" was one of the driving forces behind the development of quantum mechanics.
Early models
With J. J. Thomson's discovery of the electron in 1897, it became clear that atoms were not the smallest building blocks of nature, but were rather composite particles. The newly discovered structure within atoms tempted many to imagine how the atom's constituent parts might interact with each other. Thomson theorized that multiple electrons revolve in orbit-like rings within a positively charged jelly-like substance, and between the electron's discovery and 1909, this "plum pudding model" was the most widely accepted explanation of atomic structure.
Shortly after Thomson's discovery, Hantaro Nagaoka predicted a different model for electronic structure. Unlike the plum pudding model, the positive charge in Nagaoka's "Saturnian Model" was concentrated into a central core, pulling the electrons into circular orbits reminiscent of Saturn's rings. Few people took notice of Nagaoka's work at the time, and Nagaoka himself recognized a fundamental defect in the theory even at its conception, namely that a classical charged object cannot sustain orbital motion because it is accelerating and therefore loses energy due to electromagnetic radiation. Nevertheless, the Saturnian model turned out to have more in common with modern theory than any of its contemporaries.
Bohr atom
In 1909, Ernest Rutherford discovered that the bulk of the atomic mass was tightly condensed into a nucleus, which was also found to be positively charged. It became clear from his analysis in 1911 that the plum pudding model could not explain atomic structure. In 1913, Rutherford's post-doctoral student, Niels Bohr, proposed a new model of the atom, wherein electrons orbited the nucleus with classical periods, but were permitted to have only discrete values of angular momentum, quantized in units of ħ. This constraint automatically allowed only certain electron energies. The Bohr model of the atom fixed the problem of energy loss from radiation from a ground state (by declaring that there was no state below this), and more importantly explained the origin of spectral lines.
After Bohr's use of Einstein's explanation of the photoelectric effect to relate energy levels in atoms with the wavelength of emitted light, the connection between the structure of electrons in atoms and the emission and absorption spectra of atoms became an increasingly useful tool in the understanding of electrons in atoms. The most prominent feature of emission and absorption spectra (known experimentally since the middle of the 19th century), was that these atomic spectra contained discrete lines. The significance of the Bohr model was that it related the lines in emission and absorption spectra to the energy differences between the orbits that electrons could take around an atom. This was, however, not achieved by Bohr through giving the electrons some kind of wave-like properties, since the idea that electrons could behave as matter waves was not suggested until eleven years later. Still, the Bohr model's use of quantized angular momenta and therefore quantized energy levels was a significant step toward the understanding of electrons in atoms, and also a significant step towards the development of quantum mechanics in suggesting that quantized restraints must account for all discontinuous energy levels and spectra in atoms.
With de Broglie's suggestion of the existence of electron matter waves in 1924, and for a short time before the full 1926 Schrödinger equation treatment of hydrogen-like atoms, a Bohr electron "wavelength" could be seen to be a function of its momentum; so a Bohr orbiting electron was seen to orbit in a circle at a multiple of its half-wavelength. The Bohr model for a short time could be seen as a classical model with an additional constraint provided by the 'wavelength' argument. However, this period was immediately superseded by the full three-dimensional wave mechanics of 1926. In our current understanding of physics, the Bohr model is called a semi-classical model because of its quantization of angular momentum, not primarily because of its relationship with electron wavelength, which appeared in hindsight a dozen years after the Bohr model was proposed.
The Bohr model was able to explain the emission and absorption spectra of hydrogen. The energies of electrons in the n = 1, 2, 3, etc. states in the Bohr model match those of current physics. However, this did not explain similarities between different atoms, as expressed by the periodic table, such as the fact that helium (two electrons), neon (10 electrons), and argon (18 electrons) exhibit similar chemical inertness. Modern quantum mechanics explains this in terms of electron shells and subshells which can each hold a number of electrons determined by the Pauli exclusion principle. Thus the n = 1 state can hold one or two electrons, while the n = 2 state can hold up to eight electrons in 2s and 2p subshells. In helium, all n = 1 states are fully occupied; the same is true for n = 1 and n = 2 in neon. In argon, the 3s and 3p subshells are similarly fully occupied by eight electrons; quantum mechanics also allows a 3d subshell but this is at higher energy than the 3s and 3p in argon (contrary to the situation for hydrogen) and remains empty.
Modern conceptions and connections to the Heisenberg uncertainty principle
Immediately after Heisenberg discovered his uncertainty principle, Bohr noted that the existence of any sort of wave packet implies uncertainty in the wave frequency and wavelength, since a spread of frequencies is needed to create the packet itself. In quantum mechanics, where all particle momenta are associated with waves, it is the formation of such a wave packet which localizes the wave, and thus the particle, in space. In states where a quantum mechanical particle is bound, it must be localized as a wave packet, and the existence of the packet and its minimum size implies a spread and minimal value in particle wavelength, and thus also momentum and energy. In quantum mechanics, as a particle is localized to a smaller region in space, the associated compressed wave packet requires a larger and larger range of momenta, and thus larger kinetic energy. Thus the binding energy to contain or trap a particle in a smaller region of space increases without bound as the region of space grows smaller. Particles cannot be restricted to a geometric point in space, since this would require infinite particle momentum.
In chemistry, Erwin Schrödinger, Linus Pauling, Mulliken and others noted that the consequence of Heisenberg's relation was that the electron, as a wave packet, could not be considered to have an exact location in its orbital. Max Born suggested that the electron's position needed to be described by a probability distribution which was connected with finding the electron at some point in the wave-function which described its associated wave packet. The new quantum mechanics did not give exact results, but only the probabilities for the occurrence of a variety of possible such results. Heisenberg held that the path of a moving particle has no meaning if we cannot observe it, as we cannot with electrons in an atom.
In the quantum picture of Heisenberg, Schrödinger and others, the Bohr atom number n for each orbital became known as an n-sphere in a three-dimensional atom and was pictured as the most probable energy of the probability cloud of the electron's wave packet which surrounded the atom.
Orbital names
Orbital notation and subshells
Orbitals have been given names, which are usually given in the form:
X type
where X is the energy level corresponding to the principal quantum number n, and type is a lower-case letter denoting the shape or subshell of the orbital, corresponding to the angular momentum quantum number ℓ.
For example, the orbital 1s (pronounced as the individual numbers and letters: "'one' 'ess'") is the lowest energy level (n = 1) and has an angular quantum number of ℓ = 0, denoted as s. Orbitals with ℓ = 1, 2 and 3 are denoted as p, d and f respectively.
The set of orbitals for a given n and ℓ is called a subshell, denoted
X type^y.
The superscript y shows the number of electrons in the subshell. For example, the notation 2p4 indicates that the 2p subshell of an atom contains 4 electrons. This subshell has 3 orbitals, each with n = 2 and ℓ = 1.
X-ray notation
There is also another, less common system still used in X-ray science known as X-ray notation, which is a continuation of the notations used before orbital theory was well understood. In this system, the principal quantum number is given a letter associated with it. For n = 1, 2, 3, 4, 5, ..., the letters associated with those numbers are K, L, M, N, O, ... respectively.
Hydrogen-like orbitals
The simplest atomic orbitals are those that are calculated for systems with a single electron, such as the hydrogen atom. An atom of any other element ionized down to a single electron (He+, Li2+, etc.) is very similar to hydrogen, and the orbitals take the same form. In the Schrödinger equation for this system of one negative and one positive particle, the atomic orbitals are the eigenstates of the Hamiltonian operator for the energy. They can be obtained analytically, meaning that the resulting orbitals are products of a polynomial series, and exponential and trigonometric functions. (see hydrogen atom).
For atoms with two or more electrons, the governing equations can be solved only with the use of methods of iterative approximation. Orbitals of multi-electron atoms are qualitatively similar to those of hydrogen, and in the simplest models, they are taken to have the same form. For more rigorous and precise analysis, numerical approximations must be used.
A given (hydrogen-like) atomic orbital is identified by unique values of three quantum numbers: , , and . The rules restricting the values of the quantum numbers, and their energies (see below), explain the electron configuration of the atoms and the periodic table.
The stationary states (quantum states) of a hydrogen-like atom are its atomic orbitals. However, in general, an electron's behavior is not fully described by a single orbital. Electron states are best represented by time-dependent "mixtures" (linear combinations) of multiple orbitals. See Linear combination of atomic orbitals molecular orbital method.
The quantum number n first appeared in the Bohr model where it determines the radius of each circular electron orbit. In modern quantum mechanics however, n determines the mean distance of the electron from the nucleus; all electrons with the same value of n lie at the same average distance. For this reason, orbitals with the same value of n are said to comprise a "shell". Orbitals with the same value of n and also the same value of ℓ are even more closely related, and are said to comprise a "subshell".
Quantum numbers
Because of the quantum mechanical nature of the electrons around a nucleus, atomic orbitals can be uniquely defined by a set of integers known as quantum numbers. These quantum numbers occur only in certain combinations of values, and their physical interpretation changes depending on whether real or complex versions of the atomic orbitals are employed.
Complex orbitals
In physics, the most common orbital descriptions are based on the solutions to the hydrogen atom, where orbitals are given by the product between a radial function and a pure spherical harmonic. The quantum numbers, together with the rules governing their possible values, are as follows:
The principal quantum number n describes the energy of the electron and is always a positive integer. In fact, it can be any positive integer, but for reasons discussed below, large numbers are seldom encountered. Each atom has, in general, many orbitals associated with each value of n; these orbitals together are sometimes called electron shells.
The azimuthal quantum number ℓ describes the orbital angular momentum of each electron and is a non-negative integer. Within a shell where n is some integer n₀, ℓ ranges across all (integer) values satisfying the relation 0 ≤ ℓ ≤ n₀ − 1. For instance, the n = 1 shell has only orbitals with ℓ = 0, and the n = 2 shell has only orbitals with ℓ = 0 and ℓ = 1. The set of orbitals associated with a particular value of ℓ are sometimes collectively called a subshell.
The magnetic quantum number, m_ℓ, describes the projection of the orbital angular momentum along a chosen axis. It determines the magnitude of the current circulating around that axis and the orbital contribution to the magnetic moment of an electron via the Ampèrian loop model. Within a subshell ℓ, m_ℓ obtains the integer values in the range −ℓ ≤ m_ℓ ≤ ℓ.
The above results may be summarized in the following table. Each cell represents a subshell, and lists the values of m_ℓ available in that subshell. Empty cells represent subshells that do not exist.
Subshells are usually identified by their n- and ℓ-values. n is represented by its numerical value, but ℓ is represented by a letter as follows: 0 is represented by 's', 1 by 'p', 2 by 'd', 3 by 'f', and 4 by 'g'. For instance, one may speak of the subshell with n = 2 and ℓ = 0 as a '2s subshell'.
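The counting rules above are mechanical enough to verify in a few lines. This illustrative Python sketch (function and variable names are ad hoc) enumerates the subshells of a shell, the allowed m_ℓ values, and the resulting electron capacities 2(2ℓ + 1):

```python
# Enumerate the subshells of a shell: l runs over 0..n-1, and m_l runs
# over -l..+l within each subshell; each orbital holds two electrons.
SUBSHELL_LETTERS = "spdfghik"  # 'j' is skipped by convention

def subshells(n):
    for l in range(n):                     # 0 <= l <= n - 1
        m_values = list(range(-l, l + 1))  # -l <= m_l <= +l
        yield f"{n}{SUBSHELL_LETTERS[l]}", m_values

for label, m_values in subshells(3):
    print(label, "m_l in", m_values, "-> capacity", 2 * len(m_values))
# 3s -> capacity 2, 3p -> capacity 6, 3d -> capacity 10
```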
Each electron also has angular momentum in the form of quantum mechanical spin, given by spin quantum number s = 1/2. Its projection along a specified axis is given by the spin magnetic quantum number, m_s, which can be +1/2 or −1/2. These values are also called "spin up" or "spin down" respectively.
The Pauli exclusion principle states that no two electrons in an atom can have the same values of all four quantum numbers. If there are two electrons in an orbital with given values for three quantum numbers, (n, ℓ, m_ℓ), these two electrons must differ in their spin projection m_s.
The above conventions imply a preferred axis (for example, the z direction in Cartesian coordinates), and they also imply a preferred direction along this preferred axis. Otherwise there would be no sense in distinguishing m = +1 from m = −1. As such, the model is most useful when applied to physical systems that share these symmetries. The Stern–Gerlach experiment, where an atom is exposed to a magnetic field, provides one such example.
Real orbitals
Instead of the complex orbitals described above, it is common, especially in the chemistry literature, to use real atomic orbitals. These real orbitals arise from simple linear combinations of complex orbitals. Using the Condon–Shortley phase convention, real orbitals are related to complex orbitals in the same way that the real spherical harmonics are related to complex spherical harmonics. Letting ψ_{n,ℓ,m} denote a complex orbital with quantum numbers n, ℓ, and m, the real orbitals ψ^real_{n,ℓ,m} may be defined by
ψ^real_{n,ℓ,m} = (i/√2)(ψ_{n,ℓ,m} − (−1)^m ψ_{n,ℓ,−m}) for m < 0,
ψ^real_{n,ℓ,0} = ψ_{n,ℓ,0} for m = 0, and
ψ^real_{n,ℓ,m} = (1/√2)(ψ_{n,ℓ,−m} + (−1)^m ψ_{n,ℓ,m}) for m > 0.
If ψ_{n,ℓ,m} = R_{nℓ}(r) Y_ℓ^m(θ, φ), with R_{nℓ} the radial part of the orbital, this definition is equivalent to ψ^real_{n,ℓ,m} = R_{nℓ}(r) Y_{ℓm}(θ, φ), where Y_{ℓm} is the real spherical harmonic related to either the real or imaginary part of the complex spherical harmonic Y_ℓ^m.
Real spherical harmonics are physically relevant when an atom is embedded in a crystalline solid, in which case there are multiple preferred symmetry axes but no single preferred direction. Real atomic orbitals are also more frequently encountered in introductory chemistry textbooks and shown in common orbital visualizations. In real hydrogen-like orbitals, quantum numbers n and ℓ have the same interpretation and significance as their complex counterparts, but m is no longer a good quantum number (though its absolute value is).
Some real orbitals are given specific names beyond the simple nℓm designation. Orbitals with magnetic quantum number m = 0 are given a simple axis label; for example, the ℓ = 1, m = 0 orbital is p_z. With this convention one can already assign names to complex orbitals such as 2p+1; the first symbol is the n quantum number, the second character is the symbol for that particular ℓ quantum number, and the subscript is the m quantum number.
As an example of how the full orbital names are generated for real orbitals, one may calculate the real orbital p_x. From the table of spherical harmonics, Y_1^1 = −√(3/8π)·(x + iy)/r and Y_1^{−1} = √(3/8π)·(x − iy)/r, with r = √(x² + y² + z²). Then
p_x = (1/√2)(ψ_{n,1,−1} − ψ_{n,1,1}) = √(3/4π)·R_{n1}(r)·(x/r).
Likewise p_y = (i/√2)(ψ_{n,1,−1} + ψ_{n,1,1}) = √(3/4π)·R_{n1}(r)·(y/r). As a more complicated example, the real orbital with ℓ = 2 and m = −2 is d_{xy} = (i/√2)(ψ_{n,2,−2} − ψ_{n,2,2}), whose angular part is proportional to xy/r².
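The p_x and p_y combinations above can be checked numerically. The sketch below (a self-contained illustration using the closed-form ℓ = 1 spherical harmonics with the Condon–Shortley phase; function names are ad hoc) confirms that the combinations are real and proportional to x/r and y/r:

```python
import numpy as np

# Check that the real combinations of the complex l = 1 spherical
# harmonics (Condon-Shortley phase) give p_x and p_y proportional
# to x/r and y/r, as derived in the text.

def Y1(m, theta, phi):
    """Closed-form complex spherical harmonic Y_1^m (theta = polar angle)."""
    if m == 1:
        return -np.sqrt(3 / (8 * np.pi)) * np.sin(theta) * np.exp(1j * phi)
    if m == -1:
        return np.sqrt(3 / (8 * np.pi)) * np.sin(theta) * np.exp(-1j * phi)
    return np.sqrt(3 / (4 * np.pi)) * np.cos(theta)  # m == 0: this is p_z

theta, phi = 1.0, 0.7  # arbitrary test direction
p_x = (Y1(-1, theta, phi) - Y1(1, theta, phi)) / np.sqrt(2)
p_y = 1j * (Y1(-1, theta, phi) + Y1(1, theta, phi)) / np.sqrt(2)

x_over_r = np.sin(theta) * np.cos(phi)
y_over_r = np.sin(theta) * np.sin(phi)
print(np.isclose(p_x, np.sqrt(3 / (4 * np.pi)) * x_over_r))  # True
print(np.isclose(p_y, np.sqrt(3 / (4 * np.pi)) * y_over_r))  # True
```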
In all these cases we generate a Cartesian label for the orbital by examining, and abbreviating, the polynomial in x, y, and z appearing in the numerator. We ignore any terms in the polynomial except for the term with the highest exponent in z (so, for example, the polynomial 2z² − x² − y² of the ℓ = 2, m = 0 orbital is abbreviated to z²).
We then use the abbreviated polynomial as a subscript label for the atomic state, using the same nomenclature as above to indicate the and quantum numbers.
The expressions above all use the Condon–Shortley phase convention, which is favored by quantum physicists. Other conventions exist for the phase of the spherical harmonics. Under these different conventions the p_x and p_y orbitals may appear, for example, as the sum and difference of ψ_{n,1,1} and ψ_{n,1,−1}, contrary to what is shown above.
Below is a list of these Cartesian polynomial names for the atomic orbitals. There does not seem to be a reference in the literature as to how to abbreviate the long Cartesian spherical harmonic polynomials for ℓ > 3, so there does not seem to be consensus on the naming of g orbitals or higher according to this nomenclature.
Shapes of orbitals
Simple pictures showing orbital shapes are intended to describe the angular forms of regions in space where the electrons occupying the orbital are likely to be found. The diagrams cannot show the entire region where an electron can be found, since according to quantum mechanics there is a non-zero probability of finding the electron (almost) anywhere in space. Instead the diagrams are approximate representations of boundary or contour surfaces where the probability density |ψ(r, θ, φ)|² has a constant value, chosen so that there is a certain probability (for example 90%) of finding the electron within the contour. Although |ψ|², as the square of an absolute value, is everywhere non-negative, the sign of the wave function ψ(r, θ, φ) is often indicated in each subregion of the orbital picture.
Sometimes the ψ function is graphed to show its phases, rather than |ψ|², which shows probability density but has no phase (the phase is lost when taking the absolute value, since ψ(r, θ, φ) is a complex number). |ψ|² orbital graphs tend to have less spherical, thinner lobes than ψ graphs, but have the same number of lobes in the same places, and otherwise are recognizable. This article, to show wave function phase, shows mostly ψ graphs.
The lobes can be seen as standing wave interference patterns between the two counter-rotating, ring-resonant traveling wave m and −m modes; the projection of the orbital onto the xy plane has m resonant wavelengths around the circumference. Although rarely shown, the traveling wave solutions can be seen as rotating banded tori; the bands represent phase information. For each m there are two standing wave solutions, (m) + (−m) and (m) − (−m). If m = 0, the orbital is vertical, counter-rotating information is unknown, and the orbital is z-axis symmetric. If ℓ = 0 there are no counter-rotating modes; there are only radial modes and the shape is spherically symmetric.
Nodal planes and nodal spheres are surfaces on which the probability density vanishes. The number of nodal surfaces is controlled by the quantum numbers n and ℓ. An orbital with azimuthal quantum number ℓ has ℓ nodal planes passing through the origin. For example, the s orbitals (ℓ = 0) are spherically symmetric and have no nodal planes, whereas the p orbitals (ℓ = 1) have a single nodal plane between the lobes. The number of nodal spheres equals n − ℓ − 1, consistent with the restriction ℓ ≤ n − 1 on the quantum numbers. The principal quantum number controls the total number of nodal surfaces, which is n − 1. Loosely speaking, n is energy, ℓ is analogous to eccentricity, and m is orientation.
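These node counts follow directly from n and ℓ. A small illustrative sketch (the helper name is hypothetical) tabulates them for a few orbitals:

```python
# Node counts for a hydrogen-like orbital: angular nodes = l,
# radial nodes = n - l - 1, total nodal surfaces = n - 1.
def nodes(n, l):
    assert 0 <= l < n, "quantum numbers must satisfy 0 <= l <= n - 1"
    return {"angular": l, "radial": n - l - 1, "total": n - 1}

for name, (n, l) in {"1s": (1, 0), "2s": (2, 0), "2p": (2, 1), "3d": (3, 2)}.items():
    print(name, nodes(n, l))
```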
In general, n determines size and energy of the orbital for a given nucleus; as n increases, the size of the orbital increases. The higher nuclear charge of heavier elements causes their orbitals to contract by comparison to lighter ones, so that the size of the atom remains very roughly constant, even as the number of electrons increases.
Also in general terms, ℓ determines an orbital's shape, and m_ℓ its orientation. However, since some orbitals are described by equations in complex numbers, the shape sometimes depends on m_ℓ also. Together, the whole set of orbitals for a given ℓ and n fill space as symmetrically as possible, though with increasingly complex sets of lobes and nodes.
The single s orbitals (ℓ = 0) are shaped like spheres. For n = 1 the orbital is roughly a solid ball (densest at the center and fading outward exponentially), but for n = 2 or more, each single s orbital is made of spherically symmetric surfaces which are nested shells (i.e., the "wave-structure" is radial, following a sinusoidal radial component as well). The s orbitals for all n numbers are the only orbitals with an anti-node (a region of high wave function density) at the center of the nucleus. All other orbitals (p, d, f, etc.) have angular momentum, and thus avoid the nucleus (having a wave node at the nucleus). Recently, there has been an effort to experimentally image the 1s and 2p orbitals in a SrTiO3 crystal using scanning transmission electron microscopy with energy dispersive x-ray spectroscopy. Because the imaging was conducted using an electron beam, Coulombic beam-orbital interaction, often termed the impact parameter effect, is included in the outcome.
The shapes of p, d and f orbitals are described verbally here and shown graphically in the Orbitals table below. The three p orbitals for n = 2 have the form of two ellipsoids with a point of tangency at the nucleus (the two-lobed shape is sometimes referred to as a "dumbbell"—there are two lobes pointing in opposite directions from each other). The three p orbitals in each shell are oriented at right angles to each other, as determined by their respective linear combination of values of m_ℓ. The overall result is a lobe pointing along each direction of the primary axes.
Four of the five d orbitals for n = 3 look similar, each with four pear-shaped lobes, each lobe tangent at right angles to two others, and the centers of all four lying in one plane. Three of these planes are the xy-, xz-, and yz-planes—the lobes are between the pairs of primary axes—and the fourth has the center along the x and y axes themselves. The fifth and final d orbital consists of three regions of high probability density: a torus in between two pear-shaped regions placed symmetrically on its z axis. The overall total of 18 directional lobes point in every primary axis direction and between every pair.
There are seven f orbitals, each with shapes more complex than those of the d orbitals.
Additionally, as is the case with the s orbitals, individual p, d, f and g orbitals with n values higher than the lowest possible value, exhibit an additional radial node structure which is reminiscent of harmonic waves of the same type, as compared with the lowest (or fundamental) mode of the wave. As with s orbitals, this phenomenon provides p, d, f, and g orbitals at the next higher possible value of n (for example, 3p orbitals vs. the fundamental 2p), an additional node in each lobe. Still higher values of n further increase the number of radial nodes, for each type of orbital.
The shapes of atomic orbitals in a one-electron atom are related to 3-dimensional spherical harmonics. These shapes are not unique, and any linear combination is valid, like a transformation to cubic harmonics; in fact it is possible to generate sets where all the d's are the same shape, just like the p_x, p_y, and p_z are the same shape.
Although individual orbitals are most often shown independent of each other, the orbitals coexist around the nucleus at the same time. Also, in 1927, Albrecht Unsöld proved that if one sums the electron density of all orbitals of a particular azimuthal quantum number of the same shell (e.g., all three 2p orbitals, or all five 3d orbitals) where each orbital is occupied by an electron or each is occupied by an electron pair, then all angular dependence disappears; that is, the resulting total density of all the atomic orbitals in that subshell (those with the same ) is spherical. This is known as Unsöld's theorem.
Orbitals table
This table shows the real hydrogen-like wave functions for all atomic orbitals up to 7s, and therefore covers the occupied orbitals in the ground state of all elements in the periodic table up to radium and some beyond. "ψ" graphs are shown with − and + wave function phases shown in two different colors (arbitrarily red and blue). The p_z orbital is the same as the p_0 orbital, but the p_x and p_y are formed by taking linear combinations of the p_+1 and p_−1 orbitals (which is why they are listed under the m = ±1 label). Also, the p_+1 and p_−1 are not the same shape as the p_0, since they are pure spherical harmonics.
* No elements with 6f, 7d or 7f electrons have been discovered yet.
† Elements with 7p electrons have been discovered, but their electronic configurations are only predicted – save the exceptional Lr, which fills 7p1 instead of 6d1.
‡ For the elements whose highest occupied orbital is a 6d orbital, only some electronic configurations have been confirmed. (Mt, Ds, Rg and Cn are still missing).
These are the real-valued orbitals commonly used in chemistry. Only the orbitals where m = 0 are eigenstates of the orbital angular momentum operator, L_z. The columns with m = ±1 are combinations of two eigenstates.
Qualitative understanding of shapes
The shapes of atomic orbitals can be qualitatively understood by considering the analogous case of standing waves on a circular drum. To see the analogy, the mean vibrational displacement of each bit of drum membrane from the equilibrium point over many cycles (a measure of average drum membrane velocity and momentum at that point) must be considered relative to that point's distance from the center of the drum head. If this displacement is taken as being analogous to the probability of finding an electron at a given distance from the nucleus, then it will be seen that the many modes of the vibrating disk form patterns that trace the various shapes of atomic orbitals. The basic reason for this correspondence lies in the fact that the distribution of kinetic energy and momentum in a matter-wave is predictive of where the particle associated with the wave will be. That is, the probability of finding an electron at a given place is also a function of the electron's average momentum at that point, since high electron momentum at a given position tends to "localize" the electron in that position, via the properties of electron wave-packets (see the Heisenberg uncertainty principle for details of the mechanism).
This relationship means that certain key features can be observed in both drum membrane modes and atomic orbitals. For example, in all of the modes analogous to s orbitals (the top row in the animated illustration below), it can be seen that the very center of the drum membrane vibrates most strongly, corresponding to the antinode in all s orbitals in an atom. This antinode means the electron is most likely to be at the physical position of the nucleus (which it passes straight through without scattering or striking it), since it is moving (on average) most rapidly at that point, giving it maximal momentum.
A mental "planetary orbit" picture closest to the behavior of electrons in s orbitals, all of which have no angular momentum, might perhaps be that of a Keplerian orbit with the orbital eccentricity of 1 but a finite major axis, not physically possible (because particles were to collide), but can be imagined as a limit of orbits with equal major axes but increasing eccentricity.
Below, a number of drum membrane vibration modes and the respective wave functions of the hydrogen atom are shown. A correspondence can be considered where the wave functions of a vibrating drum head are for a two-coordinate system ψ(r, θ) and the wave functions for a vibrating sphere are three-coordinate ψ(r, θ, φ).
None of the other sets of modes in a drum membrane have a central antinode, and in all of them the center of the drum does not move. These correspond to a node at the nucleus for all non-s orbitals in an atom. These orbitals all have some angular momentum, and in the planetary model, they correspond to particles in orbit with eccentricity less than 1.0, so that they do not pass straight through the center of the primary body, but keep somewhat away from it.
In addition, the drum modes analogous to p and d modes in an atom show spatial irregularity along the different radial directions from the center of the drum, whereas all of the modes analogous to s modes are perfectly symmetrical in radial direction. The non-radial-symmetry properties of non-s orbitals are necessary to localize a particle with angular momentum and a wave nature in an orbital where it must tend to stay away from the central attraction force, since any particle localized at the point of central attraction could have no angular momentum. For these modes, waves in the drum head tend to avoid the central point. Such features again emphasize that the shapes of atomic orbitals are a direct consequence of the wave nature of electrons.
Orbital energy
In atoms with one electron (hydrogen-like atoms), the energy of an orbital (and, consequently, of any electron in the orbital) is determined mainly by n. The n = 1 orbital has the lowest possible energy in the atom. Each successively higher value of n has a higher energy, but the difference decreases as n increases. For high n, the energy becomes so high that the electron can easily escape the atom. In single electron atoms, all levels with different ℓ within a given n are degenerate in the Schrödinger approximation, and have the same energy. This approximation is broken slightly in the solution to the Dirac equation (where the energy depends on n and another quantum number j), and by the effect of the magnetic field of the nucleus and quantum electrodynamics effects. The latter induce tiny binding energy differences, especially for s electrons that go nearer the nucleus, since these feel a very slightly different nuclear charge, even in one-electron atoms; see Lamb shift.
In atoms with multiple electrons, the energy of an electron depends not only on its orbital, but also on its interactions with other electrons. These interactions depend on the detail of its spatial probability distribution, and so the energy levels of orbitals depend not only on n but also on ℓ. Higher values of ℓ are associated with higher values of energy; for instance, the 2p state is higher than the 2s state. When ℓ = 2, the increase in energy of the orbital becomes so large as to push the energy of the orbital above the energy of the s orbital in the next higher shell; when ℓ = 3 the energy is pushed into the shell two steps higher. The filling of the 3d orbitals does not occur until the 4s orbitals have been filled.
The increase in energy for subshells of increasing angular momentum in larger atoms is due to electron–electron interaction effects, and it is specifically related to the ability of low-angular-momentum electrons to penetrate more effectively toward the nucleus, where they are subject to less screening from the charge of intervening electrons. Thus, in atoms with higher atomic number, the ℓ of electrons becomes more and more of a determining factor in their energy, and the principal quantum number n becomes less and less important in their energy placement.
The energy sequence of the first 35 subshells (e.g., 1s, 2p, 3d, etc.) is given in the following table. Each cell represents a subshell with n and ℓ given by its row and column indices, respectively. The number in the cell is the subshell's position in the sequence. For a linear listing of the subshells in terms of increasing energies in multielectron atoms, see the section below.
Note: empty cells indicate non-existent sublevels, while numbers in italics indicate sublevels that could (potentially) exist, but which do not hold electrons in any element currently known.
Electron placement and the periodic table
Several rules govern the placement of electrons in orbitals (electron configuration). The first dictates that no two electrons in an atom may have the same set of values of quantum numbers (this is the Pauli exclusion principle). These quantum numbers include the three that define orbitals, as well as the spin magnetic quantum number m_s. Thus, two electrons may occupy a single orbital, so long as they have different values of m_s. Because m_s takes only one of two values (+1/2 or −1/2), at most two electrons can occupy each orbital.
Additionally, an electron always tends to fall to the lowest possible energy state. It is possible for it to occupy any orbital so long as it does not violate the Pauli exclusion principle, but if lower-energy orbitals are available, this condition is unstable. The electron will eventually lose energy (by releasing a photon) and drop into the lower orbital. Thus, electrons fill orbitals in the order specified by the energy sequence given above.
This behavior is responsible for the structure of the periodic table. The table may be divided into several rows (called 'periods'), numbered starting with 1 at the top. The presently known elements occupy seven periods. If a certain period has number i, it consists of elements whose outermost electrons fall in the ith shell. Niels Bohr was the first to propose (1923) that the periodicity in the properties of the elements might be explained by the periodic filling of the electron energy levels, resulting in the electronic structure of the atom.
The periodic table may also be divided into several numbered rectangular 'blocks'. The elements belonging to a given block have this common feature: their highest-energy electrons all belong to the same ℓ-state (but the n associated with that ℓ-state depends upon the period). For instance, the leftmost two columns constitute the 's-block'. The outermost electrons of Li and Be respectively belong to the 2s subshell, and those of Na and Mg to the 3s subshell.
The following is the order for filling the "subshell" orbitals, which also gives the order of the "blocks" in the periodic table:
1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p
The "periodic" nature of the filling of orbitals, as well as emergence of the s, p, d, and f "blocks", is more obvious if this order of filling is given in matrix form, with increasing principal quantum numbers starting the new rows ("periods") in the matrix. Then, each subshell (composed of the first two quantum numbers) is repeated as many times as required for each pair of electrons it may contain. The result is a compressed periodic table, with each entry representing two successive elements:
Although this is the general order of orbital filling according to the Madelung rule, there are exceptions, and the actual electronic energies of each element also depend upon additional details of the atoms.
The number of electrons in an electrically neutral atom increases with the atomic number. The electrons in the outermost shell, or valence electrons, tend to be responsible for an element's chemical behavior. Elements that contain the same number of valence electrons can be grouped together and display similar chemical properties.
Relativistic effects
For elements with high atomic number , the effects of relativity become more pronounced, and especially so for s electrons, which move at relativistic velocities as they penetrate the screening electrons near the core of high- atoms. This relativistic increase in momentum for high speed electrons causes a corresponding decrease in wavelength and contraction of 6s orbitals relative to 5d orbitals (by comparison to corresponding s and d electrons in lighter elements in the same column of the periodic table); this results in 6s valence electrons becoming lowered in energy.
Examples of significant physical outcomes of this effect include the lowered melting temperature of mercury (which results from 6s electrons not being available for metal bonding) and the golden color of gold and caesium.
In the Bohr model, an n = 1 electron has a velocity given by v = Zαc, where Z is the atomic number, α is the fine-structure constant, and c is the speed of light. In non-relativistic quantum mechanics, therefore, any atom with an atomic number greater than 137 would require its 1s electrons to be traveling faster than the speed of light. Even in the Dirac equation, which accounts for relativistic effects, the wave function of the electron for atoms with Z > 137 is oscillatory and unbounded. The significance of element 137, also known as untriseptium, was first pointed out by the physicist Richard Feynman. Element 137 is sometimes informally called feynmanium (symbol Fy). However, Feynman's approximation fails to predict the exact critical value of Z, due to the non-point-charge nature of the nucleus and the very small orbital radius of inner electrons, which result in a potential seen by inner electrons that is effectively less than Z. The critical Z value, which makes the atom unstable with regard to high-field breakdown of the vacuum and production of electron–positron pairs, does not occur until Z is about 173. These conditions are not seen except transiently in collisions of very heavy nuclei such as lead or uranium in accelerators, where such electron–positron production from these effects has been claimed to be observed.
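As a quick numerical check of the Bohr-model relation v = Zαc (a sketch with a rounded value of α; the chosen Z values are arbitrary):

# Bohr-model estimate of 1s electron speed as a fraction of c: v/c = Z*alpha.
ALPHA = 0.0072973526  # fine-structure constant, approximately 1/137

for z in (1, 79, 137):
    print(f"Z={z}: v/c = {z * ALPHA:.3f}")
# Z=79 (gold) gives v of roughly 0.58c, consistent with the relativistic
# orbital contraction described above; Z=137 formally gives v close to c.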
There are no nodes in relativistic orbital densities, although individual components of the wave function will have nodes.
pp hybridization (conjectured)
In late period 8 elements, a hybrid of 8p3/2 and 9p1/2 is expected to exist, where "3/2" and "1/2" refer to the total angular momentum quantum number j. This "pp" hybrid may be responsible for the p-block of the period due to properties similar to p subshells in ordinary valence shells. Energy levels of 8p3/2 and 9p1/2 come close due to relativistic spin–orbit effects; the 9s subshell should also participate, as these elements are expected to be analogous to the respective 5p elements indium through xenon.
Transitions between orbitals
Bound quantum states have discrete energy levels. When applied to atomic orbitals, this means that the energy differences between states are also discrete. A transition between these states (i.e., an electron absorbing or emitting a photon) can thus happen only if the photon has an energy corresponding with the exact energy difference between said states.
Consider two states of the hydrogen atom:
State 1: n = 1, ℓ = 0, and m_ℓ = 0
State 2: n = 2, ℓ = 0, and m_ℓ = 0
By quantum theory, state 1 has a fixed energy of E1, and state 2 has a fixed energy of E2. Now, what would happen if an electron in state 1 were to move to state 2? For this to happen, the electron would need to gain an energy of exactly E2 − E1. If the electron receives energy that is less than or greater than this value, it cannot jump from state 1 to state 2. Now, suppose we irradiate the atom with a broad spectrum of light. Photons that reach the atom with an energy of exactly E2 − E1 will be absorbed by the electron in state 1, and that electron will jump to state 2. However, photons that are greater or lower in energy cannot be absorbed by the electron, because the electron can jump only to one of the orbitals; it cannot jump to a state between orbitals. The result is that only photons of a specific frequency will be absorbed by the atom. This creates a line in the spectrum, known as an absorption line, which corresponds to the energy difference between states 1 and 2.
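Numerically (a sketch using standard constants, with E_n = −13.6057 eV / n² for hydrogen), the energy gap between states 1 and 2 above corresponds to a photon near 121.5 nm, the Lyman-alpha line:

# Hydrogen level energies and the absorption wavelength between two levels.
RYDBERG_EV = 13.605693  # hydrogen ground-state binding energy, eV
HC_EV_NM = 1239.842     # Planck constant times speed of light, eV*nm

def energy_ev(n):
    return -RYDBERG_EV / n ** 2

def absorption_wavelength_nm(n1, n2):
    return HC_EV_NM / (energy_ev(n2) - energy_ev(n1))

print(absorption_wavelength_nm(1, 2))  # ~121.5 nm (Lyman-alpha)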
The atomic orbital model thus predicts line spectra, which are observed experimentally. This is one of the main validations of the atomic orbital model.
The atomic orbital model is nevertheless an approximation to the full quantum theory, which recognizes only many-electron states. The predictions of line spectra are qualitatively useful but are not quantitatively accurate for atoms and ions other than those containing only one electron.
|
Articles containing video clips;Atomic physics;Chemical bonding;Electron states;Quantum chemistry
|
https://en.wikipedia.org/wiki/Amino%20acid
|
Amino acids are organic compounds that contain both amino and carboxylic acid functional groups. Although over 500 amino acids exist in nature, by far the most important are the 22 α-amino acids incorporated into proteins. Only these 22 appear in the genetic code of life.
Amino acids can be classified according to the locations of the core structural functional groups (alpha- (α-), beta- (β-), gamma- (γ-) amino acids, etc.); other categories relate to polarity, ionization, and side-chain group type (aliphatic, acyclic, aromatic, polar, etc.). In the form of proteins, amino-acid residues form the second-largest component (water being the largest) of human muscles and other tissues. Beyond their role as residues in proteins, amino acids participate in a number of processes such as neurotransmitter transport and biosynthesis. It is thought that they played a key role in enabling the emergence of life on Earth.
Amino acids are formally named by the IUPAC-IUBMB Joint Commission on Biochemical Nomenclature in terms of the fictitious "neutral" structure shown in the illustration. For example, the systematic name of alanine is 2-aminopropanoic acid, based on the formula CH3−CH(NH2)−COOH. The Commission justified this approach as follows:
The systematic names and formulas given refer to hypothetical forms in which amino groups are unprotonated and carboxyl groups are undissociated. This convention is useful to avoid various nomenclatural problems but should not be taken to imply that these structures represent an appreciable fraction of the amino-acid molecules.
History
The first few amino acids were discovered in the early 1800s. In 1806, French chemists Louis-Nicolas Vauquelin and Pierre Jean Robiquet isolated a compound from asparagus that was subsequently named asparagine, the first amino acid to be discovered. Cystine was discovered in 1810, although its monomer, cysteine, remained undiscovered until 1884. Glycine and leucine were discovered in 1820. The last of the 20 common amino acids to be discovered was threonine in 1935 by William Cumming Rose, who also determined the essential amino acids and established the minimum daily requirements of all amino acids for optimal growth.
The unity of the chemical category was recognized by Wurtz in 1865, but he gave no particular name to it. The first use of the term "amino acid" in the English language dates from 1898, while the German term, Aminosäure, was used earlier. Proteins were found to yield amino acids after enzymatic digestion or acid hydrolysis. In 1902, Emil Fischer and Franz Hofmeister independently proposed that proteins are formed from many amino acids, whereby bonds are formed between the amino group of one amino acid with the carboxyl group of another, resulting in a linear structure that Fischer termed "peptide".
General structure
2-, alpha-, or α-amino acids have the generic formula H2NCHRCOOH in most cases, where R is an organic substituent known as a "side chain".
Of the many hundreds of described amino acids, 22 are proteinogenic ("protein-building"). It is these 22 compounds that combine to give a vast array of peptides and proteins assembled by ribosomes. Non-proteinogenic or modified amino acids may arise from post-translational modification or during nonribosomal peptide synthesis.
Chirality
The carbon atom next to the carboxyl group is called the α–carbon. In proteinogenic amino acids, it bears the amine and the R group or side chain specific to each amino acid, as well as a hydrogen atom. With the exception of glycine, for which the side chain is also a hydrogen atom, the α–carbon is stereogenic. All chiral proteinogenic amino acids have the L configuration. They are "left-handed" enantiomers, which refers to the stereoisomers of the alpha carbon.
A few D-amino acids ("right-handed") have been found in nature, e.g., in bacterial envelopes, as a neuromodulator (D-serine), and in some antibiotics. Rarely, D-amino acid residues are found in proteins, and are converted from the L-amino acid as a post-translational modification.
Side chains
Polar charged side chains
Five amino acids possess a charge at neutral pH. Often these side chains appear at the surfaces of proteins to enable their solubility in water, and side chains with opposite charges form important electrostatic contacts called salt bridges that maintain structures within a single protein or between interfacing proteins. Many proteins bind metal into their structures specifically, and these interactions are commonly mediated by charged side chains such as aspartate, glutamate and histidine. Under certain conditions, each ion-forming group can be charged, forming double salts.
The two negatively charged amino acids at neutral pH are aspartate (Asp, D) and glutamate (Glu, E). The anionic carboxylate groups behave as Brønsted bases in most circumstances. Enzymes in very low pH environments, like the aspartic protease pepsin in mammalian stomachs, may have catalytic aspartate or glutamate residues that act as Brønsted acids.
There are three amino acids with side chains that are cations at neutral pH: arginine (Arg, R), lysine (Lys, K) and histidine (His, H). Arginine has a charged guanidino group and lysine a charged alkyl amino group, and are fully protonated at pH 7. Histidine's imidazole group has a pKa of 6.0, and is only around 10% protonated at neutral pH. Because histidine is easily found in its basic and conjugate acid forms it often participates in catalytic proton transfers in enzyme reactions.
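The roughly 10% figure follows from the Henderson–Hasselbalch equation; a minimal sketch, assuming pKa = 6.0 for the imidazole group:

# Fraction of a basic group in its protonated (conjugate acid) form.
def fraction_protonated(pka, ph):
    return 1.0 / (1.0 + 10 ** (ph - pka))

print(round(fraction_protonated(6.0, 7.0), 3))  # ~0.091, i.e. about 10%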
Polar uncharged side chains
The polar, uncharged amino acids serine (Ser, S), threonine (Thr, T), asparagine (Asn, N) and glutamine (Gln, Q) readily form hydrogen bonds with water and other amino acids. They do not ionize in normal conditions, a prominent exception being the catalytic serine in serine proteases. This is an example of severe perturbation, and is not characteristic of serine residues in general. Threonine has two chiral centers, not only the L (2S) chiral center at the α-carbon shared by all amino acids apart from achiral glycine, but also (3R) at the β-carbon. The full stereochemical specification is (2S,3R)-L-threonine.
Hydrophobic side chains
Nonpolar amino acid interactions are the primary driving force behind the processes that fold proteins into their functional three-dimensional structures. None of these amino acids' side chains ionizes easily, and therefore they do not have pKa values, with the exception of tyrosine (Tyr, Y). The hydroxyl of tyrosine can deprotonate at high pH, forming the negatively charged phenolate. Because of this one could place tyrosine into the polar, uncharged amino acid category, but its very low solubility in water matches the characteristics of hydrophobic amino acids well.
Special case side chains
Several side chains are not described well by the charged, polar and hydrophobic categories. Glycine (Gly, G) could be considered a polar amino acid since its small size means that its solubility is largely determined by the amino and carboxylate groups. However, the lack of any side chain provides glycine with a unique flexibility among amino acids with large ramifications to protein folding. Cysteine (Cys, C) can also form hydrogen bonds readily, which would place it in the polar amino acid category, though it can often be found in protein structures forming covalent bonds, called disulphide bonds, with other cysteines. These bonds influence the folding and stability of proteins, and are essential in the formation of antibodies. Proline (Pro, P) has an alkyl side chain and could be considered hydrophobic, but because the side chain joins back onto the alpha amino group it becomes particularly inflexible when incorporated into proteins. Similar to glycine this influences protein structure in a way unique among amino acids. Selenocysteine (Sec, U) is a rare amino acid not directly encoded by DNA, but is incorporated into proteins via the ribosome. Selenocysteine has a lower redox potential compared to the similar cysteine, and participates in several unique enzymatic reactions. Pyrrolysine (Pyl, O) is another amino acid not encoded in DNA, but synthesized into protein by ribosomes. It is found in archaeal species where it participates in the catalytic activity of several methyltransferases.
β- and γ-amino acids
Amino acids with the structure +H3N−CXY−CXY−CO2−, such as β-alanine, a component of carnosine and a few other peptides, are β-amino acids. Ones with the structure +H3N−CXY−CXY−CXY−CO2− are γ-amino acids, and so on, where X and Y are two substituents (one of which is normally H).
Zwitterions
The common natural forms of amino acids have a zwitterionic structure, with −NH3+ (−NH2+− in the case of proline) and −CO2− functional groups attached to the same C atom, and are thus α-amino acids; they are the only ones found in proteins during translation in the ribosome.
In aqueous solution at pH close to neutrality, amino acids exist as zwitterions, i.e. as dipolar ions with both −NH3+ and −CO2− in charged states, so the overall structure is +H3N−CHR−CO2−. At physiological pH the so-called "neutral forms" are not present to any measurable degree. Although the two charges in the zwitterion structure add up to zero, it is misleading to call a species with a net charge of zero "uncharged".
In strongly acidic conditions (pH below 3), the carboxylate group becomes protonated and the structure becomes an ammonio carboxylic acid, +H3N−CHR−CO2H. This is relevant for enzymes like pepsin that are active in acidic environments such as the mammalian stomach and lysosomes, but does not significantly apply to intracellular enzymes. In highly basic conditions (pH greater than 10, not normally seen in physiological conditions), the ammonio group is deprotonated to give H2N−CHR−CO2−.
Although various definitions of acids and bases are used in chemistry, the only one that is useful for chemistry in aqueous solution is that of Brønsted: an acid is a species that can donate a proton to another species, and a base is one that can accept a proton. This criterion is used to label the groups in the above illustration. The carboxylate side chains of aspartate and glutamate residues are the principal Brønsted bases in proteins. Likewise, lysine, tyrosine and cysteine will typically act as a Brønsted acid. Histidine under these conditions can act both as a Brønsted acid and a base.
Isoelectric point
For amino acids with uncharged side-chains the zwitterion predominates at pH values between the two pKa values, but coexists in equilibrium with small amounts of net negative and net positive ions. At the midpoint between the two pKa values, the trace amount of net negative and trace of net positive ions balance, so that average net charge of all forms present is zero. This pH is known as the isoelectric point pI, so pI = ½(pKa1 + pKa2).
For amino acids with charged side chains, the pKa of the side chain is involved. Thus for aspartate or glutamate with negative side chains, the terminal amino group is essentially entirely in the charged form −NH3+, but this positive charge needs to be balanced by the state in which just one carboxylate group carries a negative charge. This occurs halfway between the two carboxylate pKa values: pI = ½(pKa1 + pKa(R)), where pKa(R) is the side chain pKa.
Similar considerations apply to other amino acids with ionizable side-chains, including not only glutamate (similar to aspartate), but also cysteine, histidine, lysine, tyrosine and arginine; for those with positive side chains, the pI falls at the midpoint of the two highest pKa values.
Amino acids have zero mobility in electrophoresis at their isoelectric point, although this behaviour is more usually exploited for peptides and proteins than single amino acids. Zwitterions have minimum solubility at their isoelectric point, and some amino acids (in particular, with nonpolar side chains) can be isolated by precipitation from water by adjusting the pH to the required isoelectric point.
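A minimal sketch of the midpoint formulas above, using typical textbook pKa values (the numbers are illustrative, not authoritative):

# Isoelectric point as the midpoint of the two relevant pKa values.
def isoelectric_point(pka_low, pka_high):
    return (pka_low + pka_high) / 2

print(isoelectric_point(2.34, 9.69))  # glycine (no ionizable side chain): ~6.0
print(isoelectric_point(1.88, 3.65))  # aspartate (two carboxylate pKa's): ~2.8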
Physicochemical properties
The 20 canonical amino acids can be classified according to their properties. Important factors are charge, hydrophilicity or hydrophobicity, size, and functional groups. These properties influence protein structure and protein–protein interactions. The water-soluble proteins tend to have their hydrophobic residues (Leu, Ile, Val, Phe, and Trp) buried in the middle of the protein, whereas hydrophilic side chains are exposed to the aqueous solvent. (In biochemistry, a residue refers to a specific monomer within the polymeric chain of a polysaccharide, protein or nucleic acid.) The integral membrane proteins tend to have outer rings of exposed hydrophobic amino acids that anchor them in the lipid bilayer. Some peripheral membrane proteins have a patch of hydrophobic amino acids on their surface that sticks to the membrane. In a similar fashion, proteins that have to bind to positively charged molecules have surfaces rich in negatively charged amino acids such as glutamate and aspartate, while proteins binding to negatively charged molecules have surfaces rich in positively charged amino acids like lysine and arginine. For example, lysine and arginine are present in large amounts in the low-complexity regions of nucleic-acid binding proteins. There are various hydrophobicity scales of amino acid residues.
Some amino acids have special properties. Cysteine can form covalent disulfide bonds to other cysteine residues. Proline forms a cycle to the polypeptide backbone, and glycine is more flexible than other amino acids.
Glycine and proline are strongly present within low-complexity regions of both eukaryotic and prokaryotic proteins, whereas the opposite is the case with cysteine, phenylalanine, tryptophan, methionine, valine, leucine and isoleucine, which are highly reactive, complex, or hydrophobic.
Many proteins undergo a range of posttranslational modifications, whereby additional chemical groups are attached to the amino acid residue side chains sometimes producing lipoproteins (that are hydrophobic), or glycoproteins (that are hydrophilic) allowing the protein to attach temporarily to a membrane. For example, a signaling protein can attach and then detach from a cell membrane, because it contains cysteine residues that can have the fatty acid palmitic acid added to them and subsequently removed.
Table of standard amino acid abbreviations and properties
Although one-letter symbols are included in the table, IUPAC–IUBMB recommend that "Use of the one-letter symbols should be restricted to the comparison of long sequences".
The one-letter notation was chosen by IUPAC-IUB based on the following rules (a lookup table summarizing the assignments appears after the list):
Initial letters are used where there is no ambiguity: C cysteine, H histidine, I isoleucine, M methionine, S serine, V valine,
Where arbitrary assignment is needed, the structurally simpler amino acids are given precedence: A Alanine, G glycine, L leucine, P proline, T threonine,
F PHenylalanine and R aRginine are assigned by being phonetically suggestive,
W tryptophan is assigned based on the double ring being visually suggestive to the bulky letter W,
K lysine and Y tyrosine are assigned as alphabetically nearest to their initials L and T (note that U was avoided for its similarity with V, while X was reserved for undetermined or atypical amino acids); for tyrosine the mnemonic tYrosine was also proposed,
D aspartate was assigned arbitrarily, with the proposed mnemonic asparDic acid; E glutamate was assigned in alphabetical sequence being larger by merely one methylene –CH2– group,
N asparagine was assigned arbitrarily, with the proposed mnemonic asparagiNe; Q glutamine was assigned in alphabetical sequence of those still available (note again that O was avoided due to similarity with D), with the proposed mnemonic Qlutamine.
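Collected as a lookup table (the assignments are the standard ones; the snippet itself is merely illustrative):

# One- to three-letter amino acid codes, including the two nonstandard
# proteinogenic amino acids U (selenocysteine) and O (pyrrolysine).
ONE_TO_THREE = {
    "A": "Ala", "C": "Cys", "D": "Asp", "E": "Glu", "F": "Phe",
    "G": "Gly", "H": "His", "I": "Ile", "K": "Lys", "L": "Leu",
    "M": "Met", "N": "Asn", "P": "Pro", "Q": "Gln", "R": "Arg",
    "S": "Ser", "T": "Thr", "V": "Val", "W": "Trp", "Y": "Tyr",
    "U": "Sec", "O": "Pyl",
}

print([ONE_TO_THREE[c] for c in "MSW"])  # ['Met', 'Ser', 'Trp']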
Two additional amino acids are in some species coded for by codons that are usually interpreted as stop codons:
In addition to the specific amino acid codes, placeholders are used in cases where chemical or crystallographic analysis of a peptide or protein cannot conclusively determine the identity of a residue. They are also used to summarize conserved protein sequence motifs. The use of single letters to indicate sets of similar residues is similar to the use of abbreviation codes for degenerate bases.
Unk is sometimes used instead of Xaa, but is less standard.
Ter or * (from termination) is used in notation for mutations in proteins when a stop codon occurs. It corresponds to no amino acid at all.
In addition, many nonstandard amino acids have a specific code. For example, several peptide drugs, such as Bortezomib and MG132, are artificially synthesized and retain their protecting groups, which have specific codes. Bortezomib is Pyz–Phe–boroLeu, and MG132 is Z–Leu–Leu–Leu–al. To aid in the analysis of protein structure, photo-reactive amino acid analogs are available. These include photoleucine (pLeu) and photomethionine (pMet).
Occurrence and functions in biochemistry
Proteinogenic amino acids
Amino acids are the precursors to proteins. They join by condensation reactions to form short polymer chains called peptides or longer chains called either polypeptides or proteins. These chains are linear and unbranched, with each amino acid residue within the chain attached to two neighboring amino acids. In nature, the process of making proteins encoded by RNA genetic material is called translation and involves the step-by-step addition of amino acids to a growing protein chain by a ribozyme that is called a ribosome. The order in which the amino acids are added is read through the genetic code from an mRNA template, which is an RNA derived from one of the organism's genes.
Twenty-two amino acids are naturally incorporated into polypeptides and are called proteinogenic or natural amino acids. Of these, 20 are encoded by the universal genetic code. The remaining 2, selenocysteine and pyrrolysine, are incorporated into proteins by unique synthetic mechanisms. Selenocysteine is incorporated when the mRNA being translated includes a SECIS element, which causes the UGA codon to encode selenocysteine instead of a stop codon. Pyrrolysine is used by some methanogenic archaea in enzymes that they use to produce methane. It is coded for with the codon UAG, which is normally a stop codon in other organisms.
Several independent evolutionary studies have suggested that Gly, Ala, Asp, Val, Ser, Pro, Glu, Leu, Thr may belong to a group of amino acids that constituted the early genetic code, whereas Cys, Met, Tyr, Trp, His, Phe may belong to a group of amino acids that constituted later additions of the genetic code.
Standard vs nonstandard amino acids
The 20 amino acids that are encoded directly by the codons of the universal genetic code are called standard or canonical amino acids. A modified form of methionine (N-formylmethionine) is often incorporated in place of methionine as the initial amino acid of proteins in bacteria, mitochondria and plastids (including chloroplasts). Other amino acids are called nonstandard or non-canonical. Most of the nonstandard amino acids are also non-proteinogenic (i.e. they cannot be incorporated into proteins during translation), but two of them are proteinogenic, as they can be incorporated translationally into proteins by exploiting information not encoded in the universal genetic code.
The two nonstandard proteinogenic amino acids are selenocysteine (present in many non-eukaryotes as well as most eukaryotes, but not coded directly by DNA) and pyrrolysine (found only in some archaea and at least one bacterium). The incorporation of these nonstandard amino acids is rare. For example, 25 human proteins include selenocysteine in their primary structure, and the structurally characterized enzymes (selenoenzymes) employ selenocysteine as the catalytic moiety in their active sites. Pyrrolysine and selenocysteine are encoded via variant codons. For example, selenocysteine is encoded by stop codon and SECIS element.
N-formylmethionine (which is often the initial amino acid of proteins in bacteria, mitochondria, and chloroplasts) is generally considered as a form of methionine rather than as a separate proteinogenic amino acid. Codon–tRNA combinations not found in nature can also be used to "expand" the genetic code and form novel proteins known as alloproteins incorporating non-proteinogenic amino acids.
Non-proteinogenic amino acids
Aside from the 22 proteinogenic amino acids, many non-proteinogenic amino acids are known. Those either are not found in proteins (for example carnitine, GABA, levothyroxine) or are not produced directly and in isolation by standard cellular machinery. For example, hydroxyproline is synthesised from proline; another example is selenomethionine.
Non-proteinogenic amino acids that are found in proteins are formed by post-translational modification. Such modifications can also determine the localization of the protein, e.g., the addition of long hydrophobic groups can cause a protein to bind to a phospholipid membrane. Examples:
the carboxylation of glutamate allows for better binding of calcium cations,
Hydroxyproline, generated by hydroxylation of proline, is a major component of the connective tissue collagen.
Hypusine in the translation initiation factor EIF5A, contains a modification of lysine.
Some non-proteinogenic amino acids are not found in proteins. Examples include 2-aminoisobutyric acid and the neurotransmitter gamma-aminobutyric acid. Non-proteinogenic amino acids often occur as intermediates in the metabolic pathways for standard amino acids – for example, ornithine and citrulline occur in the urea cycle, part of amino acid catabolism (see below). A rare exception to the dominance of α-amino acids in biology is the β-amino acid beta alanine (3-aminopropanoic acid), which is used in plants and microorganisms in the synthesis of pantothenic acid (vitamin B5), a component of coenzyme A.
In mammalian nutrition
Animals ingest amino acids in the form of protein. The protein is broken down into its constituent amino acids in the process of digestion. The amino acids are then used to synthesize new proteins and other nitrogenous biomolecules, or they are further catabolized through oxidation to provide a source of energy. The oxidation pathway starts with the removal of the amino group by a transaminase; the amino group is then fed into the urea cycle. The other product of transamination is a keto acid that enters the citric acid cycle. Glucogenic amino acids can also be converted into glucose, through gluconeogenesis.
Of the 20 standard amino acids, nine (His, Ile, Leu, Lys, Met, Phe, Thr, Trp and Val) are called essential amino acids because the human body cannot synthesize them from other compounds at the level needed for normal growth, so they must be obtained from food.
Semi-essential and conditionally essential amino acids, and juvenile requirements
In addition, cysteine, tyrosine, and arginine are considered semiessential amino acids, and taurine a semi-essential aminosulfonic acid in children, because the metabolic pathways that synthesize these monomers are not yet fully developed. Some amino acids are conditionally essential for certain ages or medical conditions. Essential amino acids may also vary from species to species.
Non-protein functions
Many proteinogenic and non-proteinogenic amino acids have biological functions beyond being precursors to proteins and peptides. In humans, amino acids also have important roles in diverse biosynthetic pathways. Defenses against herbivores in plants sometimes employ amino acids. Examples:
Standard amino acids
Tryptophan is a precursor of the neurotransmitter serotonin.
Tyrosine (and its precursor phenylalanine) are precursors of the catecholamine neurotransmitters dopamine, epinephrine and norepinephrine and various trace amines.
Phenylalanine is a precursor of phenethylamine and tyrosine in humans. In plants, it is a precursor of various phenylpropanoids, which are important in plant metabolism.
Glycine is a precursor of porphyrins such as heme.
Arginine is a precursor of nitric oxide.
Ornithine and S-adenosylmethionine are precursors of polyamines.
Aspartate, glycine, and glutamine are precursors of nucleotides.
Roles for nonstandard amino acids
Carnitine is used in lipid transport.
gamma-aminobutyric acid is a neurotransmitter.
5-HTP (5-hydroxytryptophan) is used for experimental treatment of depression.
L-DOPA (L-dihydroxyphenylalanine) for Parkinson's treatment,
Eflornithine inhibits ornithine decarboxylase and used in the treatment of sleeping sickness.
Canavanine, an analogue of arginine found in many legumes is an antifeedant, protecting the plant from predators.
Mimosine found in some legumes, is another possible antifeedant. This compound is an analogue of tyrosine and can poison animals that graze on these plants.
However, not all of the functions of other abundant nonstandard amino acids are known.
Uses in industry
Animal feed
Amino acids are sometimes added to animal feed because some of the components of these feeds, such as soybeans, have low levels of some of the essential amino acids, especially of lysine, methionine, threonine, and tryptophan. Likewise, amino acids are used to chelate metal cations in order to improve the absorption of minerals from feed supplements.
Food
The food industry is a major consumer of amino acids, especially glutamic acid, which is used as a flavor enhancer, and aspartame (aspartylphenylalanine 1-methyl ester), which is used as an artificial sweetener. Amino acids are sometimes added to food by manufacturers to alleviate symptoms of mineral deficiencies, such as anemia, by improving mineral absorption and reducing negative side effects from inorganic mineral supplementation.
Chemical building blocks
Amino acids are low-cost feedstocks used in chiral pool synthesis as enantiomerically pure building blocks.
Amino acids are used in the synthesis of some cosmetics.
Aspirational uses
Fertilizer
The chelating ability of amino acids is sometimes used in fertilizers to facilitate the delivery of minerals to plants in order to correct mineral deficiencies, such as iron chlorosis. These fertilizers are also used to prevent deficiencies from occurring and to improve the overall health of the plants.
Biodegradable plastics
Amino acids have been considered as components of biodegradable polymers, which have applications as environmentally friendly packaging and in medicine in drug delivery and the construction of prosthetic implants. An interesting example of such materials is polyaspartate, a water-soluble biodegradable polymer that may have applications in disposable diapers and agriculture. Due to its solubility and ability to chelate metal ions, polyaspartate is also being used as a biodegradable antiscaling agent and a corrosion inhibitor.
Synthesis
Chemical synthesis
The commercial production of amino acids usually relies on mutant bacteria that overproduce individual amino acids using glucose as a carbon source. Some amino acids are produced by enzymatic conversions of synthetic intermediates. 2-Aminothiazoline-4-carboxylic acid is an intermediate in one industrial synthesis of L-cysteine for example. Aspartic acid is produced by the addition of ammonia to fumarate using a lyase.
Biosynthesis
In plants, nitrogen is first assimilated into organic compounds in the form of glutamate, formed from alpha-ketoglutarate and ammonia in the mitochondrion. For other amino acids, plants use transaminases to move the amino group from glutamate to another alpha-keto acid. For example, aspartate aminotransferase converts glutamate and oxaloacetate to alpha-ketoglutarate and aspartate. Other organisms use transaminases for amino acid synthesis, too.
Nonstandard amino acids are usually formed through modifications to standard amino acids. For example, homocysteine is formed through the transsulfuration pathway or by the demethylation of methionine via the intermediate metabolite S-adenosylmethionine, while hydroxyproline is made by a post translational modification of proline.
Microorganisms and plants synthesize many uncommon amino acids. For example, some microbes make 2-aminoisobutyric acid and lanthionine, which is a sulfide-bridged derivative of alanine. Both of these amino acids are found in peptidic lantibiotics such as alamethicin. In plants, 1-aminocyclopropane-1-carboxylic acid is a small disubstituted cyclic amino acid that is an intermediate in the production of the plant hormone ethylene.
Primordial synthesis
The formation of amino acids and peptides is assumed to have preceded and perhaps induced the emergence of life on Earth. Amino acids can form from simple precursors under various conditions. Surface-based chemical metabolism of amino acids and very small compounds may have led to the build-up of amino acids, coenzymes and phosphate-based small carbon molecules. Amino acids and similar building blocks could have been elaborated into proto-peptides, with peptides being considered key players in the origin of life.
In the famous Miller–Urey experiment, the passage of an electric arc through a mixture of methane, hydrogen, and ammonia produces a large number of amino acids. Since then, scientists have discovered a range of ways and components by which the potentially prebiotic formation and chemical evolution of peptides may have occurred, such as condensing agents, the design of self-replicating peptides and a number of non-enzymatic mechanisms by which amino acids could have emerged and elaborated into peptides. Several hypotheses invoke the Strecker synthesis whereby hydrogen cyanide, simple aldehydes, ammonia, and water produce amino acids.
According to a review, amino acids, and even peptides, "turn up fairly regularly in the various experimental broths that have been allowed to be cooked from simple chemicals. This is because nucleotides are far more difficult to synthesize chemically than amino acids." For a chronological order, it suggests that there must have been a 'protein world' or at least a 'polypeptide world', possibly later followed by the 'RNA world' and the 'DNA world'. Codon–amino acids mappings may be the biological information system at the primordial origin of life on Earth. While amino acids and consequently simple peptides must have formed under different experimentally probed geochemical scenarios, the transition from an abiotic world to the first life forms is to a large extent still unresolved.
Reactions
Amino acids undergo the reactions expected of the constituent functional groups.
Peptide bond formation
As both the amine and carboxylic acid groups of amino acids can react to form amide bonds, one amino acid molecule can react with another and become joined through an amide linkage. This polymerization of amino acids is what creates proteins. This condensation reaction yields the newly formed peptide bond and a molecule of water. In cells, this reaction does not occur directly; instead, the amino acid is first activated by attachment to a transfer RNA molecule through an ester bond. This aminoacyl-tRNA is produced in an ATP-dependent reaction carried out by an aminoacyl tRNA synthetase. This aminoacyl-tRNA is then a substrate for the ribosome, which catalyzes the attack of the amino group of the elongating protein chain on the ester bond. As a result of this mechanism, all proteins made by ribosomes are synthesized starting at their N-terminus and moving toward their C-terminus.
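Because each peptide bond forms by condensation with the loss of one water molecule, the mass of a peptide is the sum of its monomer masses minus (n − 1) waters; a sketch with two assumed average monomer masses:

# Peptide mass from condensation: n residues release n - 1 waters.
WATER = 18.015
AA_MASS = {"G": 75.07, "A": 89.09}  # glycine, alanine (free amino acids)

def peptide_mass(sequence):
    return sum(AA_MASS[aa] for aa in sequence) - WATER * (len(sequence) - 1)

print(round(peptide_mass("GA"), 2))  # glycylalanine: ~146.1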
However, not all peptide bonds are formed in this way. In a few cases, peptides are synthesized by specific enzymes. For example, the tripeptide glutathione is an essential part of the defenses of cells against oxidative stress. This peptide is synthesized in two steps from free amino acids. In the first step, gamma-glutamylcysteine synthetase condenses cysteine and glutamate through a peptide bond formed between the side chain carboxyl of the glutamate (the gamma carbon of this side chain) and the amino group of the cysteine. This dipeptide is then condensed with glycine by glutathione synthetase to form glutathione.
In chemistry, peptides are synthesized by a variety of reactions. One of the most-used in solid-phase peptide synthesis uses the aromatic oxime derivatives of amino acids as activated units. These are added in sequence onto the growing peptide chain, which is attached to a solid resin support. Libraries of peptides are used in drug discovery through high-throughput screening.
The combination of functional groups allow amino acids to be effective polydentate ligands for metal–amino acid chelates.
The multiple side chains of amino acids can also undergo chemical reactions.
Catabolism
Degradation of an amino acid often involves deamination by moving its amino group to α-ketoglutarate, forming glutamate. This process involves transaminases, often the same as those used in amination during synthesis. In many vertebrates, the amino group is then removed through the urea cycle and is excreted in the form of urea. However, amino acid degradation can produce uric acid or ammonia instead. For example, serine dehydratase converts serine to pyruvate and ammonia. After removal of one or more amino groups, the remainder of the molecule can sometimes be used to synthesize new amino acids, or it can be used for energy by entering glycolysis or the citric acid cycle.
Complexation
Amino acids are bidentate ligands, forming transition metal amino acid complexes.
Chemical analysis
The total nitrogen content of organic matter is mainly formed by the amino groups in proteins. The Total Kjeldahl Nitrogen (TKN) is a measure of nitrogen widely used in the analysis of (waste) water, soil, food, feed and organic matter in general. As the name suggests, the Kjeldahl method is applied. More sensitive methods are available.
|
;Nitrogen cycle;Zwitterions
|
https://en.wikipedia.org/wiki/Acoustic%20theory
|
Acoustic theory is a scientific field that relates to the description of sound waves. It derives from fluid dynamics. See acoustics for the engineering approach.
For sound waves of any magnitude of a disturbance in velocity, pressure, and density we have

∂ρ/∂t + ∇·(ρv) = 0   (conservation of mass)
ρ[∂v/∂t + (v·∇)v] + ∇p = 0   (equation of motion)

In the case that the fluctuations in velocity, density, and pressure are small, we can approximate these as

v = v′,  p = p0 + p′,  ρ = ρ0 + ρ′

where v′ is the perturbed velocity of the fluid, p0 is the pressure of the fluid at rest, p′ is the perturbed pressure of the system as a function of space and time, ρ0 is the density of the fluid at rest, and ρ′ is the variance in the density of the fluid over space and time.

In the case that the velocity is irrotational (∇ × v = 0), we then have the acoustic wave equation that describes the system:

(1/c²) ∂²φ/∂t² − ∇²φ = 0

where we have

v = ∇φ  and  c² = (∂p/∂ρ)_s
Derivation for a medium at rest
Starting with the Continuity Equation and the Euler Equation:

∂ρ/∂t + ∇·(ρv) = 0
ρ[∂v/∂t + (v·∇)v] + ∇p = 0

If we take small perturbations of a constant pressure and density:

ρ = ρ0 + ρ′,  p = p0 + p′

Then the equations of the system are

∂(ρ0 + ρ′)/∂t + ∇·((ρ0 + ρ′)v) = 0
(ρ0 + ρ′)[∂v/∂t + (v·∇)v] + ∇(p0 + p′) = 0

Noting that the equilibrium pressures and densities are constant, this simplifies to

∂ρ′/∂t + ∇·((ρ0 + ρ′)v) = 0
(ρ0 + ρ′)[∂v/∂t + (v·∇)v] + ∇p′ = 0
A Moving Medium
Starting with

∂ρ′/∂t + ∇·((ρ0 + ρ′)v) = 0
(ρ0 + ρ′)[∂v/∂t + (v·∇)v] + ∇p′ = 0

We can have these equations work for a moving medium by setting v = u + v′, where u is the constant velocity that the whole fluid is moving at before being disturbed (equivalent to a moving observer) and v′ is the fluid velocity perturbation.

In this case the equations look very similar:

∂ρ′/∂t + u·∇ρ′ + ∇·((ρ0 + ρ′)v′) = 0
(ρ0 + ρ′)[∂v′/∂t + (u·∇)v′ + (v′·∇)v′] + ∇p′ = 0

Note that setting u = 0 returns the equations at rest.
Linearized Waves
Starting with the above given equations of motion for a medium at rest:

∂ρ′/∂t + ∇·((ρ0 + ρ′)v) = 0
(ρ0 + ρ′)[∂v/∂t + (v·∇)v] + ∇p′ = 0

Let us now take v, ρ′, and p′ to all be small quantities.

In the case that we keep terms to first order, for the continuity equation, we have the ρ′v term going to 0. This similarly applies for the density perturbation times the time derivative of the velocity. Moreover, the spatial components of the material derivative go to 0. We thus have, upon rearranging the equilibrium density:

∂ρ′/∂t + ρ0 ∇·v = 0
∂v/∂t + (1/ρ0) ∇p′ = 0

Next, given that our sound wave occurs in an ideal fluid, the motion is adiabatic, and then we can relate the small change in the pressure to the small change in the density by

p′ = (∂p/∂ρ)_s ρ′

Under this condition, we see that we now have

∂p′/∂t + ρ0 (∂p/∂ρ)_s ∇·v = 0
∂v/∂t + (1/ρ0) ∇p′ = 0

Defining the speed of sound of the system:

c ≡ √((∂p/∂ρ)_s)

Everything becomes

∂p′/∂t + ρ0 c² ∇·v = 0
∂v/∂t + (1/ρ0) ∇p′ = 0
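As a numerical aside (illustrative sea-level values for air, not part of the derivation): for an adiabatic ideal gas the definition above reduces to c = √(γp0/ρ0):

# Speed of sound for an adiabatic ideal gas: c = sqrt(gamma * p0 / rho0).
gamma = 1.4      # heat-capacity ratio of air
p0 = 101325.0    # ambient pressure, Pa
rho0 = 1.225     # ambient density, kg/m^3

print(round((gamma * p0 / rho0) ** 0.5, 1))  # ~340.3 m/s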
For Irrotational Fluids
In the case that the fluid is irrotational, that is ∇ × v = 0, we can then write v = ∇φ and thus write our equations of motion as

∂p′/∂t + ρ0 c² ∇²φ = 0
∇[∂φ/∂t + p′/ρ0] = 0

The second equation tells us that

p′ = −ρ0 ∂φ/∂t

And the use of this equation in the continuity equation tells us that

−ρ0 ∂²φ/∂t² + ρ0 c² ∇²φ = 0

This simplifies to

(1/c²) ∂²φ/∂t² − ∇²φ = 0

Thus the velocity potential φ obeys the wave equation in the limit of small disturbances. The boundary conditions required to solve for the potential come from the fact that the velocity of the fluid must be 0 normal to the fixed surfaces of the system.

Taking the time derivative of this wave equation and multiplying all sides by the unperturbed density, and then using the fact that p′ = −ρ0 ∂φ/∂t, tells us that

(1/c²) ∂²p′/∂t² − ∇²p′ = 0

Similarly, we saw that p′ = (∂p/∂ρ)_s ρ′ = c² ρ′. Thus we can multiply the above equation appropriately and see that

(1/c²) ∂²ρ′/∂t² − ∇²ρ′ = 0

Thus, the velocity potential, pressure, and density all obey the wave equation. Moreover, we only need to solve one such equation to determine all other three. In particular, we have

v = ∇φ,  p′ = −ρ0 ∂φ/∂t,  ρ′ = −(ρ0/c²) ∂φ/∂t
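To see the wave equation for φ in action, the following minimal finite-difference sketch (grid parameters and the pulse shape are arbitrary choices) propagates a one-dimensional disturbance between rigid boundaries:

import numpy as np

# Leapfrog scheme for the 1-D wave equation phi_tt = c^2 phi_xx with
# phi = 0 at both ends; stable while c*dt/dx <= 1 (the CFL condition).
c, L, nx, steps = 343.0, 1.0, 201, 400
dx = L / (nx - 1)
dt = 0.5 * dx / c
x = np.linspace(0.0, L, nx)

phi = np.exp(-((x - 0.5) / 0.05) ** 2)  # initial Gaussian pulse, at rest
phi_prev = phi.copy()

for _ in range(steps):
    lap = np.zeros_like(phi)
    lap[1:-1] = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx ** 2
    phi, phi_prev = 2.0 * phi - phi_prev + (c * dt) ** 2 * lap, phi

# Pressure and density perturbations follow from p' = -rho0 * d(phi)/dt
# and rho' = p' / c^2.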
For a moving medium
Again, we can derive the small-disturbance limit for sound waves in a moving medium. Again, starting with

∂ρ′/∂t + u·∇ρ′ + ∇·((ρ0 + ρ′)v′) = 0
(ρ0 + ρ′)[∂v′/∂t + (u·∇)v′ + (v′·∇)v′] + ∇p′ = 0

We can linearize these into

∂ρ′/∂t + u·∇ρ′ + ρ0 ∇·v′ = 0
∂v′/∂t + (u·∇)v′ + (1/ρ0) ∇p′ = 0
For Irrotational Fluids in a Moving Medium
Given that we saw that

p′ = (∂p/∂ρ)_s ρ′ = c² ρ′

If we make the previous assumptions of the fluid being ideal and the velocity being irrotational, then we have

v′ = ∇φ

Under these assumptions, our linearized sound equations become

(1/c²)[∂p′/∂t + u·∇p′] + ρ0 ∇²φ = 0
∂(∇φ)/∂t + (u·∇)(∇φ) + (1/ρ0) ∇p′ = 0

Importantly, since u is a constant, we have (u·∇)(∇φ) = ∇(u·∇φ), and then the second equation tells us that

∇[∂φ/∂t + u·∇φ + p′/ρ0] = 0

Or just that

p′ = −ρ0 (∂φ/∂t + u·∇φ)

Now, when we use this relation with the fact that p′ = c² ρ′, alongside cancelling and rearranging terms, we arrive at

(1/c²)(∂/∂t + u·∇)² φ − ∇²φ = 0

We can write this in a familiar form as

(∂/∂t + u·∇)² φ = c² ∇²φ

This differential equation must be solved with the appropriate boundary conditions. Note that setting u = 0 returns us the wave equation. Regardless, upon solving this equation for a moving medium, we then have

v′ = ∇φ,  p′ = −ρ0 (∂φ/∂t + u·∇φ),  ρ′ = −(ρ0/c²)(∂φ/∂t + u·∇φ)
See also
Acoustic attenuation
Sound
Fourier analysis
References
|
Acoustics;Fluid dynamics;Sound
|
https://en.wikipedia.org/wiki/Ada%20%28programming%20language%29
|
Ada is a structured, statically typed, imperative, and object-oriented high-level programming language, inspired by Pascal and other languages. It has built-in language support for design by contract (DbC), extremely strong typing, explicit concurrency, tasks, synchronous message passing, protected objects, and non-determinism. Ada improves code safety and maintainability by using the compiler to find errors at compile time in favor of runtime errors. Ada is an international technical standard, jointly defined by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). As of 2023, the standard, ISO/IEC 8652:2023, is informally called Ada 2022.
Ada was originally designed by a team led by French computer scientist Jean Ichbiah of Honeywell under contract to the United States Department of Defense (DoD) from 1977 to 1983 to supersede over 450 programming languages then used by the DoD. Ada was named after Ada Lovelace (1815–1852), who has been credited as the first computer programmer.
Features
Ada was originally designed for embedded and real-time systems. The Ada 95 revision, designed by S. Tucker Taft of Intermetrics between 1992 and 1995, improved support for systems, numerical, financial, and object-oriented programming (OOP).
Features of Ada include: strong typing, modular programming mechanisms (packages), run-time checking, parallel processing (tasks, synchronous message passing, protected objects, and nondeterministic select statements), exception handling, and generics. Ada 95 added support for object-oriented programming, including dynamic dispatch.
The syntax of Ada minimizes choices of ways to perform basic operations, and prefers English keywords (such as "or else" and "and then") to symbols (such as "||" and "&&"). Ada uses the basic arithmetical operators "+", "-", "*", and "/", but avoids using other symbols. Code blocks are delimited by words such as "declare", "begin", and "end", where the "end" (in most cases) is followed by the keyword of the block that it closes (e.g., if ... end if, loop ... end loop). In the case of conditional blocks this avoids a dangling else that could pair with the wrong nested if-expression in other languages such as C or Java.
Ada is designed for developing very large software systems. Ada packages can be compiled separately. Ada package specifications (the package interface) can also be compiled separately without the implementation to check for consistency. This makes it possible to detect problems early during the design phase, before implementation starts.
A large number of compile-time checks are supported to help avoid bugs that would not be detectable until run-time in some other languages or would require explicit checks to be added to the source code. For example, the syntax requires explicitly named closing of blocks to prevent errors due to mismatched end tokens. The adherence to strong typing allows detecting many common software errors (wrong parameters, range violations, invalid references, mismatched types, etc.) either during compile-time, or otherwise during run-time. As concurrency is part of the language specification, the compiler can in some cases detect potential deadlocks. Compilers also commonly check for misspelled identifiers, visibility of packages, redundant declarations, etc. and can provide warnings and useful suggestions on how to fix the error.
Ada also supports run-time checks to protect against access to unallocated memory, buffer overflow errors, range violations, off-by-one errors, array access errors, and other detectable bugs. These checks can be disabled in the interest of runtime efficiency, but can often be compiled efficiently. It also includes facilities to help program verification. For these reasons, Ada is sometimes used in critical systems, where any anomaly might lead to very serious consequences, e.g., accidental death, injury or severe financial loss. Examples of systems where Ada is used include avionics, air traffic control, railways, banking, military and space technology.
Ada's dynamic memory management is high-level and type-safe. Ada has no generic or untyped pointers; nor does it implicitly declare any pointer type. Instead, all dynamic memory allocation and deallocation must occur via explicitly declared access types. Each access type has an associated storage pool that handles the low-level details of memory management; the programmer can either use the default storage pool or define new ones (this is particularly relevant for Non-Uniform Memory Access). It is even possible to declare several different access types that all designate the same type but use different storage pools. Also, the language provides for accessibility checks, both at compile time and at run time, that ensures that an access value cannot outlive the type of the object it points to.
Though the semantics of the language allow automatic garbage collection of inaccessible objects, most implementations do not support it by default, as it would cause unpredictable behaviour in real-time systems. Ada supports a limited form of region-based memory management, and in Ada, destroying a storage pool also destroys all the objects in the pool.
A double-dash ("--"), resembling an em dash, denotes comment text. Comments stop at end of line; there is intentionally no way to make a comment span multiple lines, to prevent unclosed comments from accidentally voiding whole sections of source code. Disabling a whole block of code therefore requires prefixing each line (or column) individually with "--". While this clearly denotes disabled code by creating a column of repeated "--" down the page, it also renders the experimental dis/re-enablement of large blocks a more drawn-out process in editors without block commenting support.
The semicolon (";") is a statement terminator, and the null or no-operation statement is "null;". A single ";" without a statement to terminate is not allowed.
Unlike most ISO standards, the Ada language definition (known as the Ada Reference Manual or ARM, or sometimes the Language Reference Manual or LRM) is free content. Thus, it is a common reference for Ada programmers, not only programmers implementing Ada compilers. Apart from the reference manual, there is also an extensive rationale document which explains the language design and the use of various language constructs. This document is also widely used by programmers. When the language was revised, a new rationale document was written.
One notable free software tool that is used by many Ada programmers to aid them in writing Ada source code is the GNAT Programming Studio, and GNAT which is part of the GNU Compiler Collection.
Alire is a package and toolchain management tool for Ada.
History
In the 1970s the US Department of Defense (DoD) became concerned by the number of different programming languages being used for its embedded computer system projects, many of which were obsolete or hardware-dependent, and none of which supported safe modular programming. In 1975, a working group, the High Order Language Working Group (HOLWG), was formed with the intent to reduce this number by finding or creating a programming language generally suitable for the department's and the UK Ministry of Defence's requirements. After many iterations beginning with an original straw-man proposal the eventual programming language was named Ada. The total number of high-level programming languages in use for such projects fell from over 450 in 1983 to 37 by 1996.
HOLWG crafted the Steelman language requirements , a series of documents stating the requirements they felt a programming language should satisfy. Many existing languages were formally reviewed, but the team concluded in 1977 that no existing language met the specifications. The requirements were created by the United States Department of Defense in The Department of Defense Common High Order Language program in 1978. The predecessors of this document were called, in order, "Strawman", "Woodenman", "Tinman" and "Ironman". The requirements focused on the needs of embedded computer applications, and emphasised reliability, maintainability, and efficiency. Notably, they included exception handling facilities, run-time checking, and parallel computing.
It was concluded that no existing language met these criteria to a sufficient extent, so a contest was called to create a language that would be closer to fulfilling them. The design that won this contest became the Ada programming language. The resulting language followed the Steelman requirements closely, though not exactly.
Requests for proposals for a new programming language were issued and four contractors were hired to develop their proposals under the names of Red (Intermetrics led by Benjamin Brosgol), Green (Honeywell, led by Jean Ichbiah), Blue (SofTech, led by John Goodenough) and Yellow (SRI International, led by Jay Spitzen). In April 1978, after public scrutiny, the Red and Green proposals passed to the next phase. In May 1979, the Green proposal, designed by Jean Ichbiah at Honeywell, was chosen and given the name Ada—after Augusta Ada King, Countess of Lovelace, usually known as Ada Lovelace. This proposal was influenced by the language LIS that Ichbiah and his group had developed in the 1970s. The preliminary Ada reference manual was published in ACM SIGPLAN Notices in June 1979. The Military Standard reference manual was approved on December 10, 1980 (Ada Lovelace's birthday), and given the number MIL-STD-1815 in honor of Ada Lovelace's birth year. In 1981, Tony Hoare took advantage of his Turing Award speech to criticize Ada for being overly complex and hence unreliable, but subsequently seemed to recant in the foreword he wrote for an Ada textbook.
Ada attracted much attention from the programming community as a whole during its early days. Its backers and others predicted that it might become a dominant language for general purpose programming and not only defense-related work. Ichbiah publicly stated that within ten years, only two programming languages would remain: Ada and Lisp. Early Ada compilers struggled to implement the large, complex language, and both compile-time and run-time performance tended to be slow, while tools remained primitive. Compiler vendors expended most of their efforts in passing the Ada Compiler Validation Capability (ACVC), a massive, government-required language-conformance test suite that was itself another novel feature of the Ada language effort.
The first validated Ada implementation was the NYU Ada/Ed translator, certified on April 11, 1983. NYU Ada/Ed is implemented in the high-level set language SETL. Several commercial companies began offering Ada compilers and associated development tools, including Alsys, TeleSoft, DDC-I, Advanced Computer Techniques, Tartan Laboratories, Irvine Compiler, TLD Systems, and Verdix. Computer manufacturers who had a significant business in the defense, aerospace, or related industries, also offered Ada compilers and tools on their platforms; these included Concurrent Computer Corporation, Cray Research, Inc., Digital Equipment Corporation, Harris Computer Systems, and Siemens Nixdorf Informationssysteme AG.
In 1991, the US Department of Defense began to require the use of Ada (the Ada mandate) for all software, though exceptions to this rule were often granted. The Department of Defense Ada mandate was effectively removed in 1997, as the DoD began to embrace commercial off-the-shelf (COTS) technology. Similar requirements existed in other NATO countries: Ada was required for NATO systems involving command and control and other functions, and Ada was the mandated or preferred language for defense-related applications in countries such as Sweden, Germany, and Canada.
By the late 1980s and early 1990s, Ada compilers had improved in performance, but there were still barriers to fully exploiting Ada's abilities, including a tasking model that was different from what most real-time programmers were used to.
Because of Ada's safety-critical support features, it is now used not only for military applications, but also in commercial projects where a software bug can have severe consequences, e.g., avionics and air traffic control, commercial rockets such as the Ariane 4 and 5, satellites and other space systems, railway transport and banking.
For example, the Primary Flight Control System, the fly-by-wire system software in the Boeing 777, was written in Ada, as were the fly-by-wire systems for the aerodynamically unstable Eurofighter Typhoon, Saab Gripen, Lockheed Martin F-22 Raptor and the DFCS replacement flight control system for the Grumman F-14 Tomcat. The Canadian Automated Air Traffic System was written in 1 million lines of Ada (SLOC count). It featured advanced distributed processing, a distributed Ada database, and object-oriented design. Ada is also used in other air traffic systems, e.g., the UK's next-generation Interim Future Area Control Tools Support air traffic control system is designed and implemented using SPARK Ada.
It is also used in the French TVM in-cab signalling system on the TGV high-speed rail system, and in metro and suburban trains in Paris, London, Hong Kong and New York City.
The Ada 95 revision of the language went beyond the Steelman requirements, targeting general-purpose systems in addition to embedded ones, and adding features supporting object-oriented programming.
Standardization
Preliminary Ada can be found in ACM SIGPLAN Notices, Vol. 14, No. 6, June 1979.
Ada was first published in 1980 as an ANSI standard, ANSI/MIL-STD 1815. Because this very first version contained many errors and inconsistencies, a revised edition was published in 1983 as ANSI/MIL-STD 1815A. Without any further changes, it became an ISO standard in 1987. This version of the language is commonly known as Ada 83, from the date of its adoption by ANSI, but is sometimes also referred to as Ada 87, from the date of its adoption by ISO. There is also a French translation; DIN translated it into German as DIN 66268 in 1988.
Ada 95, the joint ISO/IEC/ANSI standard ISO/IEC 8652:1995, was published in February 1995, making Ada 95 the first ISO-standard object-oriented programming language. To help with the standard revision and future acceptance, the US Air Force funded the development of the GNAT Compiler. Presently, the GNAT Compiler is part of the GNU Compiler Collection.
Work has continued on improving and updating the technical content of the Ada language. A Technical Corrigendum to Ada 95 was published in October 2001, and a major Amendment, ISO/IEC 8652:1995/Amd 1:2007 was published on March 9, 2007, commonly known as Ada 2005 because work on the new standard was finished that year.
At the Ada-Europe 2012 conference in Stockholm, the Ada Resource Association (ARA) and Ada-Europe announced the completion of the design of the latest version of the Ada language and the submission of the reference manual to the ISO/IEC JTC 1/SC 22/WG 9 of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) for approval. ISO/IEC 8652:2012 (see Ada 2012 RM) was published in December 2012, known as Ada 2012. A technical corrigendum, ISO/IEC 8652:2012/COR 1:2016, was published (see RM 2012 with TC 1).
On May 2, 2023, the Ada community saw the formal approval of publication of the Ada 2022 edition of the programming language standard.
Despite the names Ada 83, Ada 95, and so on, legally there is only one Ada standard: the most recent ISO/IEC standard. With the acceptance of a new standard version, the previous one is withdrawn; the other names are merely informal labels referring to a particular edition.
Other related standards include ISO/IEC 8651-3:1988 Information processing systems—Computer graphics—Graphical Kernel System (GKS) language bindings—Part 3: Ada.
Language constructs
Ada is an ALGOL-like programming language featuring control structures with reserved words such as if, then, else, while, for, and so on. However, Ada also has many data structuring facilities and other abstractions which were not included in the original ALGOL 60, such as type definitions, records, pointers, and enumerations. Such constructs were in part inherited from or inspired by Pascal.
"Hello, world!" in Ada
A common example of a language's syntax is the "Hello, World!" program:
(hello.adb)
with Ada.Text_IO;
procedure Hello is
begin
Ada.Text_IO.Put_Line ("Hello, world!");
end Hello;
This program can be compiled by using the freely available open source compiler GNAT, by executing
gnatmake hello.adb
Data types
Ada's type system is not based on a set of predefined primitive types but allows users to declare their own types. This declaration in turn is not based on the internal representation of the type but on describing the goal which should be achieved. This allows the compiler to determine a suitable memory size for the type, and to check for violations of the type definition at compile time and run time (i.e., range violations, buffer overruns, type consistency, etc.). Ada supports numerical types defined by a range, modulo types, aggregate types (records and arrays), and enumeration types. Access types define a reference to an instance of a specified type; untyped pointers are not permitted.
Special types provided by the language are task types and protected types.
For example, a date might be represented as:
type Day_type is range 1 .. 31;
type Month_type is range 1 .. 12;
type Year_type is range 1800 .. 2100;
type Hours is mod 24;
type Weekday is (Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday);
type Date is
record
Day : Day_type;
Month : Month_type;
Year : Year_type;
end record;
Note that Day_type, Month_type, Year_type and Hours are incompatible types, meaning that, for instance, the following expression is illegal:
Today: Day_type := 4;
Current_Month: Month_type := 10;
... Today + Current_Month ... -- illegal
The predefined plus-operator can only add values of the same type, so the expression is illegal.
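Where a computation genuinely needs to mix such values, Ada requires an explicit type conversion, which makes the programmer's intent visible. A minimal sketch, reusing the variables declared above (the name Total is invented for the example):
Total : Integer := Integer (Today) + Integer (Current_Month); -- explicit conversions make this legal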
Types can be refined by declaring subtypes:
subtype Working_Hours is Hours range 0 .. 12; -- at most 12 Hours to work a day
subtype Working_Day is Weekday range Monday .. Friday; -- Days to work
Work_Load: constant array(Working_Day) of Working_Hours -- implicit type declaration
:= (Friday => 6, Monday => 4, others => 10); -- lookup table for working hours with initialization
Types can have modifiers such as limited, abstract, private etc. Private types do not show their inner structure; objects of limited types cannot be copied. Ada 95 adds further features for object-oriented extension of types.
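As an illustration of how a private type hides its representation, consider a hypothetical package Counters (all names here are invented for the example; clients can call the subprograms but cannot touch the record inside):
package Counters is
type Counter is private; -- clients cannot inspect or alter the representation
procedure Increment (C : in out Counter);
function Value (C : Counter) return Natural;
private
type Counter is record
Count : Natural := 0; -- full view, visible only inside the package
end record;
end Counters;
package body Counters is
procedure Increment (C : in out Counter) is
begin
C.Count := C.Count + 1;
end Increment;
function Value (C : Counter) return Natural is
begin
return C.Count;
end Value;
end Counters;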
Control structures
Ada is a structured programming language, meaning that the flow of control is organised into standard statements. All standard constructs and deep-level early exits are supported, so use of the also-supported goto statement is seldom needed.
-- while a is not equal to b, loop.
while a /= b loop
Ada.Text_IO.Put_Line ("Waiting");
end loop;
if a > b then
Ada.Text_IO.Put_Line ("Condition met");
else
Ada.Text_IO.Put_Line ("Condition not met");
end if;
for i in 1 .. 10 loop
Ada.Text_IO.Put ("Iteration: ");
Ada.Text_IO.Put_Line (Integer'Image (i)); -- 'Image converts the number to a string
end loop;
loop
a := a + 1;
exit when a = 10;
end loop;
case i is
when 0 => Ada.Text_IO.Put ("zero");
when 1 => Ada.Text_IO.Put ("one");
when 2 => Ada.Text_IO.Put ("two");
-- case statements have to cover all possible cases:
when others => Ada.Text_IO.Put ("none of the above");
end case;
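-- The following snippets assume a preceding "with Ada.Text_IO; use Ada.Text_IO;",
-- which makes Put_Line directly visible without the package prefix.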
for aWeekday in Weekday'Range loop -- loop over an enumeration
Put_Line ( Weekday'Image(aWeekday) ); -- output string representation of an enumeration
if aWeekday in Working_Day then -- check of a subtype of an enumeration
Put_Line ( " to work for " &
Working_Hours'Image (Work_Load(aWeekday)) ); -- access into a lookup table
end if;
end loop;
Packages, procedures and functions
Among the parts of an Ada program are packages, procedures and functions.
Functions differ from procedures in that they must return a value. A function call cannot stand alone as a statement: its result must be used, for instance by assigning it to a variable. However, since Ada 2012, functions are not required to be pure and may mutate their suitably declared parameters or the global state.
Example:
Package specification (example.ads)
package Example is
type Number is range 1 .. 11;
procedure Print_and_Increment (j: in out Number);
end Example;
Package body (example.adb)
with Ada.Text_IO;
package body Example is
i : Number := Number'First;
procedure Print_and_Increment (j: in out Number) is
function Next (k: in Number) return Number is
begin
return k + 1;
end Next;
begin
Ada.Text_IO.Put_Line ( "The total is: " & Number'Image(j) );
j := Next (j);
end Print_and_Increment;
-- package initialization executed when the package is elaborated
begin
while i < Number'Last loop
Print_and_Increment (i);
end loop;
end Example;
This program can be compiled, e.g., by using the freely available open-source compiler GNAT, by executing
gnatmake -z example.adb
Packages, procedures and functions can nest to any depth, and each can also be the logical outermost block.
Each package, procedure or function can have its own declarations of constants, types, variables, and other procedures, functions and packages, which can be declared in any order.
Pragmas
A pragma is a compiler directive that conveys information to the compiler, allowing specific control over the compiled output. Certain pragmas are built into the language, while others are implementation-specific.
Examples of common usage of compiler pragmas would be to disable certain features, such as run-time type checking or array subscript boundary checking, or to instruct the compiler to insert object code instead of a function call (as C/C++ does with inline functions).
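For illustration, two pragmas defined by the language standard; Next is the function from the package example above, and Index_Check is one of the standard's predefined run-time checks:
pragma Inline (Next); -- ask the compiler to expand calls to Next in place
pragma Suppress (Index_Check); -- omit array index checking (trades safety for speed)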
Generics
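A minimal sketch of an Ada generic, assuming nothing beyond the language standard: a swap procedure is written once for any copyable type and must be explicitly instantiated before use.
generic
type Element_T is private; -- any type with assignment available
procedure Swap (X, Y : in out Element_T);
procedure Swap (X, Y : in out Element_T) is
Temp : constant Element_T := X;
begin
X := Y;
Y := Temp;
end Swap;
-- An instantiation creates an ordinary procedure:
procedure Swap_Integers is new Swap (Element_T => Integer);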
International standards
ISO/IEC 8652: Information technology—Programming languages—Ada
ISO/IEC 15291: Information technology—Programming languages—Ada Semantic Interface Specification (ASIS)
ISO/IEC 18009: Information technology—Programming languages—Ada: Conformity assessment of a language processor (ACATS)
IEEE Standard 1003.5b-1996, the POSIX Ada binding
Ada Language Mapping Specification, the CORBA interface description language (IDL) to Ada mapping
Rationale
These documents have been published in various forms, including print.
External links
Ada Resource Association
DOD Ada programming language (ANSI/MIL STD 1815A-1983) specification
JTC1/SC22/WG9 ISO home of Ada Standards
Ada Programming Language Materials, 1981–1990. Charles Babbage Institute, University of Minnesota.
Department of Defense (June 1978), Requirements for High Order Computer Programming Languages: "Steelman"
David A. Wheeler (1996), Introduction to Steelman On-Line (version 1.2).
SofTech Inc. (1976), "Evaluation of ALGOL 68, JOVIAL J3B, Pascal, Simula 67, and TACPOL Versus TINMAN - Requirements for a Common High Order Programming Language." - See also: ALGOL 68, JOVIAL J3B, Pascal, Simula 67, and TACPOL (Defense Technical Information Center - DTIC ADA037637, Report Number 1021-14).
David A. Wheeler (1997), "Ada, C, C++, and Java vs. The Steelman". Originally published in Ada Letters July/August 1997.
|
;.NET programming languages;1980 software;Ada Lovelace;Articles with example Ada code;Avionics programming languages;High Integrity Programming Language;High-level programming languages;Multi-paradigm programming languages;Programming language standards;Programming languages;Programming languages created in 1980;Programming languages with an ISO standard;Statically typed programming languages;Systems programming languages
|
https://en.wikipedia.org/wiki/Advanced%20Encryption%20Standard
|
The Advanced Encryption Standard (AES), also known by its original name Rijndael (), is a specification for the encryption of electronic data established by the U.S. National Institute of Standards and Technology (NIST) in 2001.
AES is a variant of the Rijndael block cipher developed by two Belgian cryptographers, Joan Daemen and Vincent Rijmen, who submitted a proposal to NIST during the AES selection process. Rijndael is a family of ciphers with different key and block sizes. For AES, NIST selected three members of the Rijndael family, each with a block size of 128 bits, but three different key lengths: 128, 192 and 256 bits.
AES has been adopted by the U.S. government. It supersedes the Data Encryption Standard (DES), which was published in 1977. The algorithm described by AES is a symmetric-key algorithm, meaning the same key is used for both encrypting and decrypting the data.
In the United States, AES was announced by the NIST as U.S. FIPS PUB 197 (FIPS 197) on November 26, 2001. This announcement followed a five-year standardization process in which fifteen competing designs were presented and evaluated, before the Rijndael cipher was selected as the most suitable.
AES is included in the ISO/IEC 18033-3 standard. AES became effective as a U.S. federal government standard on May 26, 2002, after approval by U.S. Secretary of Commerce Donald Evans. AES is available in many different encryption packages, and is the first (and only) publicly accessible cipher approved by the U.S. National Security Agency (NSA) for top secret information when used in an NSA approved cryptographic module.
Definitive standards
The Advanced Encryption Standard (AES) is defined in each of:
FIPS PUB 197: Advanced Encryption Standard (AES)
ISO/IEC 18033-3: Block ciphers
Description of the ciphers
AES is based on a design principle known as a substitution–permutation network, and is efficient in both software and hardware. Unlike its predecessor DES, AES does not use a Feistel network. AES is a variant of Rijndael, with a fixed block size of 128 bits, and a key size of 128, 192, or 256 bits. By contrast, Rijndael per se is specified with block and key sizes that may be any multiple of 32 bits, with a minimum of 128 and a maximum of 256 bits. Most AES calculations are done in a particular finite field, GF(2^8).
AES operates on a 4 × 4 column-major order array of 16 bytes b0, b1, ..., b15 termed the state:
[ b0  b4  b8   b12 ]
[ b1  b5  b9   b13 ]
[ b2  b6  b10  b14 ]
[ b3  b7  b11  b15 ]
The key size used for an AES cipher specifies the number of transformation rounds that convert the input, called the plaintext, into the final output, called the ciphertext. The number of rounds is as follows:
10 rounds for 128-bit keys.
12 rounds for 192-bit keys.
14 rounds for 256-bit keys.
Each round consists of several processing steps, including one that depends on the encryption key itself. A set of reverse rounds are applied to transform ciphertext back into the original plaintext using the same encryption key.
High-level description of the algorithm
KeyExpansion: round keys are derived from the cipher key using the AES key schedule. AES requires a separate 128-bit round key block for each round plus one more.
Initial round key addition:
AddRoundKey: each byte of the state is combined with a byte of the round key using bitwise xor.
9, 11 or 13 rounds:
SubBytes: a non-linear substitution step where each byte is replaced with another according to a lookup table.
ShiftRows: a transposition step where the last three rows of the state are shifted cyclically a certain number of steps.
MixColumns: a linear mixing operation which operates on the columns of the state, combining the four bytes in each column.
AddRoundKey: the state is again combined with a round key.
Final round (making 10, 12 or 14 rounds in total): SubBytes, ShiftRows, and AddRoundKey (the MixColumns step is omitted).
The SubBytes step
In the SubBytes step, each byte a(i,j) in the state array is replaced with S(a(i,j)) using an 8-bit substitution box. Before round 0, the state array is simply the plaintext/input. This operation provides the non-linearity in the cipher. The S-box used is derived from the multiplicative inverse over GF(2^8), known to have good non-linearity properties. To avoid attacks based on simple algebraic properties, the S-box is constructed by combining the inverse function with an invertible affine transformation. The S-box is also chosen to avoid any fixed points (and so is a derangement), i.e., S(a) ≠ a, and also any opposite fixed points, i.e., S(a) xor a ≠ FF16.
While performing the decryption, the InvSubBytes step (the inverse of SubBytes) is used, which requires first taking the inverse of the affine transformation and then finding the multiplicative inverse.
The ShiftRows step
The step operates on the rows of the state; it cyclically shifts the bytes in each row by a certain offset. For AES, the first row is left unchanged. Each byte of the second row is shifted one to the left. Similarly, the third and fourth rows are shifted by offsets of two and three respectively. In this way, each column of the output state of the step is composed of bytes from each column of the input state. The importance of this step is to avoid the columns being encrypted independently, in which case AES would degenerate into four independent block ciphers.
The MixColumns step
In the MixColumns step, the four bytes of each column of the state are combined using an invertible linear transformation. The MixColumns function takes four bytes as input and outputs four bytes, where each input byte affects all four output bytes. Together with ShiftRows, MixColumns provides diffusion in the cipher.
During this operation, each column is transformed using a fixed matrix (matrix left-multiplied by column gives new value of column in the state):
[ 02 03 01 01 ]
[ 01 02 03 01 ]
[ 01 01 02 03 ]
[ 03 01 01 02 ]
Matrix multiplication is composed of multiplication and addition of the entries. Entries are bytes treated as coefficients of a polynomial of order x^7. Addition is simply XOR. Multiplication is modulo the irreducible polynomial x^8 + x^4 + x^3 + x + 1. If processed bit by bit, then, after shifting, a conditional XOR with 1B16 should be performed if the shifted value is larger than FF16 (overflow must be corrected by subtraction of the generating polynomial). These are special cases of the usual multiplication in GF(2^8).
In a more general sense, each column is treated as a polynomial over GF(2^8) and is then multiplied modulo x^4 + 1 with a fixed polynomial c(x) = 3x^3 + x^2 + x + 2. The coefficients are displayed in their hexadecimal equivalent of the binary representation of bit polynomials from GF(2)[x]. The MixColumns step can also be viewed as a multiplication by the shown particular MDS matrix in the finite field GF(2^8). This process is described further in the article Rijndael MixColumns.
The AddRoundKey step
In the AddRoundKey step, the subkey is combined with the state. For each round, a subkey is derived from the main key using Rijndael's key schedule; each subkey is the same size as the state. The subkey is added by combining each byte of the state with the corresponding byte of the subkey using bitwise XOR.
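AddRoundKey is the only step that uses the key directly, and it is simple enough to sketch in full. The following Ada fragment is illustrative only; the type and subprogram names are assumptions, not part of any standard AES interface:
type Byte is mod 2**8; -- one octet with wrap-around arithmetic
type State is array (0 .. 3, 0 .. 3) of Byte; -- the 4 x 4 AES state
procedure Add_Round_Key (S : in out State; Round_Key : State) is
begin
for Row in S'Range (1) loop
for Col in S'Range (2) loop
S (Row, Col) := S (Row, Col) xor Round_Key (Row, Col); -- bytewise XOR with the subkey
end loop;
end loop;
end Add_Round_Key;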
Optimization of the cipher
On systems with 32-bit or larger words, it is possible to speed up execution of this cipher by combining the SubBytes and ShiftRows steps with the MixColumns step by transforming them into a sequence of table lookups. This requires four 256-entry 32-bit tables (together occupying 4096 bytes). A round can then be performed with 16 table lookup operations and 12 32-bit exclusive-or operations, followed by four 32-bit exclusive-or operations in the AddRoundKey step. Alternatively, the table lookup operation can be performed with a single 256-entry 32-bit table (occupying 1024 bytes) followed by circular rotation operations.
Using a byte-oriented approach, it is possible to combine the SubBytes, ShiftRows, and MixColumns steps into a single round operation.
Security
The National Security Agency (NSA) reviewed all the AES finalists, including Rijndael, and stated that all of them were secure enough for U.S. Government non-classified data. In June 2003, the U.S. Government announced that AES could be used to protect classified information:
The design and strength of all key lengths of the AES algorithm (i.e., 128, 192 and 256) are sufficient to protect classified information up to the SECRET level. TOP SECRET information will require use of either the 192 or 256 key lengths. The implementation of AES in products intended to protect national security systems and/or information must be reviewed and certified by NSA prior to their acquisition and use.
AES has 10 rounds for 128-bit keys, 12 rounds for 192-bit keys, and 14 rounds for 256-bit keys.
Known attacks
For cryptographers, a cryptographic "break" is anything faster than a brute-force attack, i.e., performing one trial decryption for each possible key in sequence. A break can thus include results that are infeasible with current technology. Despite being impractical, theoretical breaks can sometimes provide insight into vulnerability patterns. The largest successful publicly known brute-force attack against a widely implemented block-cipher encryption algorithm was against a 64-bit RC5 key by distributed.net in 2006.
The key space increases by a factor of 2 for each additional bit of key length, and if every possible value of the key is equiprobable, this translates into a doubling of the average brute-force key search time with every additional bit of key length; the effort of a brute-force search thus increases exponentially with key length. Key length in itself does not imply security against attacks, since there are ciphers with very long keys that have nonetheless been found to be vulnerable.
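In symbols, for a k-bit key with all keys equiprobable:
$N_{\text{keys}} = 2^{k}, \qquad E[\text{trials}] = \tfrac{1}{2} \cdot 2^{k} = 2^{k-1},$
so each additional key bit doubles both the key space and the expected brute-force work.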
AES has a fairly simple algebraic framework. In 2002, a theoretical attack, named the "XSL attack", was announced by Nicolas Courtois and Josef Pieprzyk, purporting to show a weakness in the AES algorithm, partially due to the low complexity of its nonlinear components. Since then, other papers have shown that the attack, as originally presented, is unworkable; see XSL attack on block ciphers.
During the AES selection process, developers of competing algorithms wrote of Rijndael's algorithm "we are concerned about [its] use ... in security-critical applications." In October 2000, however, at the end of the AES selection process, Bruce Schneier, a developer of the competing algorithm Twofish, wrote that while he thought successful academic attacks on Rijndael would be developed someday, he "did not believe that anyone will ever discover an attack that will allow someone to read Rijndael traffic."
By 2006, the best known attacks were on 7 rounds for 128-bit keys, 8 rounds for 192-bit keys, and 9 rounds for 256-bit keys.
Until May 2009, the only successful published attacks against the full AES were side-channel attacks on some specific implementations. In 2009, a new related-key attack was discovered that exploits the simplicity of AES's key schedule and has a complexity of 2^119. In December 2009 it was improved to 2^99.5. This is a follow-up to an attack discovered earlier in 2009 by Alex Biryukov, Dmitry Khovratovich, and Ivica Nikolić, with a complexity of 2^96 for one out of every 2^35 keys. However, related-key attacks are not of concern in any properly designed cryptographic protocol, as a properly designed protocol (i.e., implementational software) will take care not to allow related keys, essentially by constraining an attacker's means of selecting keys for relatedness.
Another attack was blogged by Bruce Schneier
on July 30, 2009, and released as a preprint
on August 3, 2009. This new attack, by Alex Biryukov, Orr Dunkelman, Nathan Keller, Dmitry Khovratovich, and Adi Shamir, is against AES-256 that uses only two related keys and 2^39 time to recover the complete 256-bit key of a 9-round version, or 2^45 time for a 10-round version with a stronger type of related subkey attack, or 2^70 time for an 11-round version. 256-bit AES uses 14 rounds, so these attacks are not effective against full AES.
The practicality of these attacks with stronger related keys has been criticized, for instance, by the paper on chosen-key-relations-in-the-middle attacks on AES-128 authored by Vincent Rijmen in 2010.
In November 2009, the first known-key distinguishing attack against a reduced 8-round version of AES-128 was released as a preprint.
This known-key distinguishing attack is an improvement of the rebound, or the start-from-the-middle, attack against AES-like permutations, which view two consecutive rounds of permutation as the application of a so-called Super-S-box. It works on the 8-round version of AES-128, with a time complexity of 2^48 and a memory complexity of 2^32. 128-bit AES uses 10 rounds, so this attack is not effective against full AES-128.
The first key-recovery attacks on full AES were by Andrey Bogdanov, Dmitry Khovratovich, and Christian Rechberger, and were published in 2011. The attack is a biclique attack and is faster than brute force by a factor of about four. It requires 2^126.2 operations to recover an AES-128 key. For AES-192 and AES-256, 2^190.2 and 2^254.6 operations are needed, respectively. This result has been further improved to 2^126.0 for AES-128, 2^189.9 for AES-192, and 2^254.3 for AES-256 by Biaoshuai Tao and Hongjun Wu in a 2015 paper, which are the current best results in key-recovery attacks against AES.
This is a very small gain, as a 126-bit key (instead of 128 bits) would still take billions of years to brute force on current and foreseeable hardware. Also, the authors calculate that the best attack using their technique on AES with a 128-bit key requires storing 2^88 bits of data. That works out to about 38 trillion terabytes of data, which was more than all the data stored on all the computers on the planet in 2016. A paper in 2015 later improved the space complexity to 2^56 bits, which is 9007 terabytes (while still keeping a time complexity of approximately 2^126).
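The storage figures can be checked directly:
$2^{88}\ \text{bits} = 2^{85}\ \text{bytes} \approx 3.9\times10^{25}\ \text{bytes} \approx 3.9\times10^{13}\ \text{TB},$ while $2^{56}\ \text{bits} = 2^{53}\ \text{bytes} \approx 9.0\times10^{15}\ \text{bytes} \approx 9000\ \text{TB}.$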
According to the Snowden documents, the NSA is doing research on whether a cryptographic attack based on tau statistic may help to break AES.
At present, there is no known practical attack that would allow someone without knowledge of the key to read data encrypted by AES when correctly implemented.
Side-channel attacks
Side-channel attacks do not attack the cipher as a black box, and thus are not related to cipher security as defined in the classical context, but are important in practice. They attack implementations of the cipher on hardware or software systems that inadvertently leak data. There are several such known attacks on various implementations of AES.
In April 2005, D. J. Bernstein announced a cache-timing attack that he used to break a custom server that used OpenSSL's AES encryption. The attack required over 200 million chosen plaintexts. The custom server was designed to give out as much timing information as possible (the server reports back the number of machine cycles taken by the encryption operation). However, as Bernstein pointed out, "reducing the precision of the server's timestamps, or eliminating them from the server's responses, does not stop the attack: the client simply uses round-trip timings based on its local clock, and compensates for the increased noise by averaging over a larger number of samples."
In October 2005, Dag Arne Osvik, Adi Shamir and Eran Tromer presented a paper demonstrating several cache-timing attacks against the implementations in AES found in OpenSSL and Linux's dm-crypt partition encryption function. One attack was able to obtain an entire AES key after only 800 operations triggering encryptions, in a total of 65 milliseconds. This attack requires the attacker to be able to run programs on the same system or platform that is performing AES.
In December 2009 an attack on some hardware implementations was published that used differential fault analysis and allows recovery of a key with a complexity of 232.
In November 2010 Endre Bangerter, David Gullasch and Stephan Krenn published a paper which described a practical approach to a "near real time" recovery of secret keys from AES-128 without the need for either cipher text or plaintext. The approach also works on AES-128 implementations that use compression tables, such as OpenSSL. Like some earlier attacks, this one requires the ability to run unprivileged code on the system performing the AES encryption, which may be achieved by malware infection far more easily than commandeering the root account.
In March 2016, C. Ashokkumar, Ravi Prakash Giri and Bernard Menezes presented a side-channel attack on AES implementations that can recover the complete 128-bit AES key in just 6–7 blocks of plaintext/ciphertext, a substantial improvement over previous works that required between 100 and a million encryptions. The proposed attack requires only standard user privilege, and the key-retrieval algorithms run in under a minute.
Many modern CPUs have built-in hardware instructions for AES, which protect against timing-related side-channel attacks.
Quantum attacks
AES-256 is considered to be quantum resistant, as it has similar quantum resistance to AES-128's resistance against traditional, non-quantum, attacks at 128 bits of security. AES-192 and AES-128 are not considered quantum resistant due to their smaller key sizes. AES-192 has a strength of 96 bits against quantum attacks and AES-128 has 64 bits of strength against quantum attacks, making them both insecure.
NIST/CSEC validation
The Cryptographic Module Validation Program (CMVP) is operated jointly by the United States Government's National Institute of Standards and Technology (NIST) Computer Security Division and the Communications Security Establishment (CSE) of the Government of Canada. The use of cryptographic modules validated to NIST FIPS 140-2 is required by the United States Government for encryption of all data that has a classification of Sensitive but Unclassified (SBU) or above. From NSTISSP #11, National Policy Governing the Acquisition of Information Assurance: "Encryption products for protecting classified information will be certified by NSA, and encryption products intended for protecting sensitive information will be certified in accordance with NIST FIPS 140-2."
The Government of Canada also recommends the use of FIPS 140 validated cryptographic modules in unclassified applications of its departments.
Although NIST publication 197 ("FIPS 197") is the unique document that covers the AES algorithm, vendors typically approach the CMVP under FIPS 140 and ask to have several algorithms (such as Triple DES or SHA1) validated at the same time. Therefore, it is rare to find cryptographic modules that are uniquely FIPS 197 validated and NIST itself does not generally take the time to list FIPS 197 validated modules separately on its public web site. Instead, FIPS 197 validation is typically just listed as an "FIPS approved: AES" notation (with a specific FIPS 197 certificate number) in the current list of FIPS 140 validated cryptographic modules.
The Cryptographic Algorithm Validation Program (CAVP) allows for independent validation of the correct implementation of the AES algorithm. Successful validation results in being listed on the NIST validations page. This testing is a pre-requisite for the FIPS 140-2 module validation. However, successful CAVP validation in no way implies that the cryptographic module implementing the algorithm is secure. A cryptographic module lacking FIPS 140-2 validation or specific approval by the NSA is not deemed secure by the US Government and cannot be used to protect government data.
FIPS 140-2 validation is challenging to achieve both technically and fiscally. There is a standardized battery of tests as well as an element of source code review that must be passed over a period of a few weeks. The cost to perform these tests through an approved laboratory can be significant (e.g., well over $30,000 US) and does not include the time it takes to write, test, document and prepare a module for validation. After validation, modules must be re-submitted and re-evaluated if they are changed in any way. This can vary from simple paperwork updates if the security functionality did not change to a more substantial set of re-testing if the security functionality was impacted by the change.
Test vectors
Test vectors are a set of known ciphers for a given input and key. NIST distributes the reference of AES test vectors as AES Known Answer Test (KAT) Vectors.
Performance
High speed and low RAM requirements were some of the criteria of the AES selection process. As the chosen algorithm, AES performed well on a wide variety of hardware, from 8-bit smart cards to high-performance computers.
On a Pentium Pro, AES encryption requires 18 clock cycles per byte (cpb), equivalent to a throughput of about 11 MiB/s for a 200 MHz processor.
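The stated throughput follows directly from the cycle count:
$\frac{200\times10^{6}\ \text{cycles/s}}{18\ \text{cycles/byte}} \approx 11.1\times10^{6}\ \text{bytes/s} \approx 10.6\ \text{MiB/s}.$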
On Intel Core and AMD Ryzen CPUs supporting AES-NI instruction set extensions, throughput can be multiple GiB/s. On an Intel Westmere CPU, AES encryption using AES-NI takes about 1.3 cpb for AES-128 and 1.8 cpb for AES-256.
Implementations
|
Advanced Encryption Standard;Belgian inventions;Cryptography
|
https://en.wikipedia.org/wiki/Alpha%20decay
|
Alpha decay or α-decay is a type of radioactive decay in which an atomic nucleus emits an alpha particle (helium nucleus). The parent nucleus transforms or "decays" into a daughter product, with a mass number that is reduced by four and an atomic number that is reduced by two. An alpha particle is identical to the nucleus of a helium-4 atom, which consists of two protons and two neutrons. It has a charge of +2e and a mass of about 4 Da. For example, uranium-238 decays to form thorium-234.
While alpha particles have a charge of +2e, this is not usually shown because a nuclear equation describes a nuclear reaction without considering the electrons – a convention that does not imply that the nuclei necessarily occur in neutral atoms.
Alpha decay typically occurs in the heaviest nuclides. Theoretically, it can occur only in nuclei somewhat heavier than nickel (element 28), where the overall binding energy per nucleon is no longer a maximum and the nuclides are therefore unstable toward spontaneous fission-type processes. In practice, this mode of decay has only been observed in nuclides considerably heavier than nickel, with the lightest known alpha emitter being the second lightest isotope of antimony, 104Sb. Exceptionally, however, beryllium-8 decays to two alpha particles.
Alpha decay is by far the most common form of cluster decay, where the parent atom ejects a defined daughter collection of nucleons, leaving another defined product behind. It is the most common form because of the combined extremely high nuclear binding energy and relatively small mass of the alpha particle. Like other cluster decays, alpha decay is fundamentally a quantum tunneling process. Unlike beta decay, it is governed by the interplay between both the strong nuclear force and the electromagnetic force.
Alpha particles have a typical kinetic energy of 5 MeV (≈ 0.13% of their total energy, or 110 TJ/kg) and a speed of about 15,000,000 m/s, or 5% of the speed of light. There is surprisingly small variation around this energy, due to the strong dependence of the half-life of this process on the energy produced. Because of their relatively large mass, their electric charge of +2e and their relatively low velocity, alpha particles are very likely to interact with other atoms and lose their energy, and their forward motion can be stopped by a few centimeters of air.
Approximately 99% of the helium produced on Earth is the result of the alpha decay of underground deposits of minerals containing uranium or thorium. The helium is brought to the surface as a by-product of natural gas production.
History
Alpha particles were first described in the investigations of radioactivity by Ernest Rutherford in 1899, and by 1907 they were identified as He2+ ions.
By 1928, George Gamow had solved the theory of alpha decay via tunneling. The alpha particle is trapped inside the nucleus by an attractive nuclear potential well
and a repulsive electromagnetic potential barrier. Classically, it is forbidden to escape, but according to the (then) newly discovered principles of quantum mechanics, it has a tiny (but non-zero) probability of "tunneling" through the barrier and appearing on the other side to escape the nucleus. Gamow solved a model potential for the nucleus and derived, from first principles, a relationship between the half-life of the decay, and the energy of the emission, which had been previously discovered empirically and was known as the Geiger–Nuttall law.
Mechanism
The nuclear force holding an atomic nucleus together is very strong, in general much stronger than the repulsive electromagnetic forces between the protons. However, the nuclear force is also short-range, dropping quickly in strength beyond about 3 femtometers, while the electromagnetic force has an unlimited range. The strength of the attractive nuclear force keeping a nucleus together is thus proportional to the number of the nucleons, but the total disruptive electromagnetic force of proton-proton repulsion trying to break the nucleus apart is roughly proportional to the square of its atomic number. A nucleus with 210 or more nucleons is so large that the strong nuclear force holding it together can just barely counterbalance the electromagnetic repulsion between the protons it contains. Alpha decay occurs in such nuclei as a means of increasing stability by reducing size.
One curiosity is why alpha particles, helium nuclei, should be preferentially emitted as opposed to other particles like a single proton or neutron or other atomic nuclei. Part of the reason is the high binding energy of the alpha particle, which means that its mass is less than the sum of the masses of two free protons and two free neutrons. This increases the disintegration energy. Computing the total disintegration energy given by the equation
E = (m_i - m_f - m_p) c^2,
where m_i is the initial mass of the nucleus, m_f is the mass of the nucleus after particle emission, and m_p is the mass of the emitted (alpha) particle, one finds that in certain cases it is positive and so alpha particle emission is possible, whereas other decay modes would require energy to be added. For example, performing the calculation for uranium-232 shows that alpha particle emission releases 5.4 MeV of energy, while a single proton emission would require 6.1 MeV. Most of the disintegration energy becomes the kinetic energy of the alpha particle, although to fulfill conservation of momentum, part of the energy goes to the recoil of the nucleus itself (see atomic recoil). However, since the mass numbers of most alpha-emitting radioisotopes exceed 210, far greater than the mass number of the alpha particle (4), the fraction of the energy going to the recoil of the nucleus is generally quite small, less than 2%. Nevertheless, the recoil energy (on the scale of keV) is still much larger than the strength of chemical bonds (on the scale of eV), so the daughter nuclide will break away from the chemical environment the parent was in. The energies and ratios of the alpha particles can be used to identify the radioactive parent via alpha spectrometry.
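Since the alpha particle and the daughter nucleus carry equal and opposite momenta, the recoil fraction follows from $E = p^2/2m$. For illustrative values (a 5 MeV alpha and a daughter of about 230 Da, chosen only as an example):
$E_{\text{recoil}} = \frac{m_\alpha}{m_{\text{daughter}}}\,E_\alpha \approx \frac{4}{230}\times 5\ \text{MeV} \approx 90\ \text{keV},$
i.e., under 2% of the decay energy, as stated above.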
These disintegration energies, however, are substantially smaller than the repulsive potential barrier created by the interplay between the strong nuclear and the electromagnetic force, which prevents the alpha particle from escaping. The energy needed to bring an alpha particle from infinity to a point near the nucleus just outside the range of the nuclear force's influence is generally in the range of about 25 MeV. An alpha particle within the nucleus can be thought of as being inside a potential barrier whose walls are 25 MeV above the potential at infinity. However, decay alpha particles only have energies of around 4 to 9 MeV above the potential at infinity, far less than the energy needed to overcome the barrier and escape.
Quantum tunneling
Quantum mechanics, however, allows the alpha particle to escape via quantum tunneling. The quantum tunneling theory of alpha decay, independently developed by George Gamow and by Ronald Wilfred Gurney and Edward Condon in 1928, was hailed as a very striking confirmation of quantum theory. Essentially, the alpha particle escapes from the nucleus not by acquiring enough energy to pass over the wall confining it, but by tunneling through the wall. Gurney and Condon made the following observation in their paper on it:
It has hitherto been necessary to postulate some special arbitrary 'instability' of the nucleus, but in the following note, it is pointed out that disintegration is a natural consequence of the laws of quantum mechanics without any special hypothesis... Much has been written of the explosive violence with which the α-particle is hurled from its place in the nucleus. But from the process pictured above, one would rather say that the α-particle almost slips away unnoticed.
The theory supposes that the alpha particle can be considered an independent particle within a nucleus, that is in constant motion but held within the nucleus by the strong interaction. At each collision with the repulsive potential barrier of the electromagnetic force, there is a small non-zero probability that it will tunnel its way out. An alpha particle with a speed of 1.5×10^7 m/s within a nuclear diameter of approximately 10^-14 m will collide with the barrier more than 10^21 times per second. However, if the probability of escape at each collision is very small, the half-life of the radioisotope will be very long, since it is the time required for the total probability of escape to reach 50%. As an extreme example, the half-life of the isotope bismuth-209 is about 2.01 × 10^19 years.
The isotopes in beta-decay stable isobars that are also stable with regards to double beta decay with mass number A = 5, A = 8, 143 ≤ A ≤ 155, 160 ≤ A ≤ 162, and A ≥ 165 are theorized to undergo alpha decay. All other mass numbers (isobars) have exactly one theoretically stable nuclide. Those with mass 5 decay to helium-4 and a proton or a neutron, and those with mass 8 decay to two helium-4 nuclei; their half-lives (helium-5, lithium-5, and beryllium-8) are very short, unlike the half-lives for all other such nuclides with A ≤ 209, which are very long. (Such nuclides with A ≤ 209 are primordial nuclides except 146Sm.)
Working out the details of the theory leads to an equation relating the half-life of a radioisotope to the decay energy of its alpha particles, a theoretical derivation of the empirical Geiger–Nuttall law.
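In one common form (symbols and constants as conventionally defined, not taken from this article), the Geiger–Nuttall law reads
$\log_{10} T_{1/2} = \frac{a\,Z}{\sqrt{Q_\alpha}} + b,$
where $T_{1/2}$ is the half-life, $Z$ the atomic number of the daughter nucleus, $Q_\alpha$ the decay energy, and $a$, $b$ empirical constants. The exponential sensitivity of $T_{1/2}$ to $Q_\alpha$ is what explains the narrow range of observed alpha-particle energies noted earlier.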
Uses
Americium-241, an alpha emitter, is used in smoke detectors. The alpha particles ionize air in an open ion chamber and a small current flows through the ionized air. Smoke particles from the fire that enter the chamber reduce the current, triggering the smoke detector's alarm.
Radium-223 is also an alpha emitter. It is used in the treatment of skeletal metastases (cancers in the bones).
Alpha decay can provide a safe power source for the radioisotope thermoelectric generators used for space probes, and was used for artificial heart pacemakers. Alpha decay is much more easily shielded against than other forms of radioactive decay.
Static eliminators typically use polonium-210, an alpha emitter, to ionize the air, allowing the "static cling" to dissipate more rapidly.
Toxicity
Highly charged and heavy, alpha particles lose their several MeV of energy within a small volume of material, along a very short mean free path. This increases the chance of double-strand breaks to the DNA in cases of internal contamination, when ingested, inhaled, injected or introduced through the skin. Otherwise, touching an alpha source is typically not harmful, as alpha particles are effectively shielded by a few centimeters of air, a piece of paper, or the thin layer of dead skin cells that makes up the epidermis; however, many alpha sources are also accompanied by beta-emitting radio daughters, and both are often accompanied by gamma photon emission.
Relative biological effectiveness (RBE) quantifies the ability of radiation to cause certain biological effects, notably either cancer or cell-death, for equivalent radiation exposure. Alpha radiation has a high linear energy transfer (LET) coefficient, which is about one ionization of a molecule/atom for every angstrom of travel by the alpha particle. The RBE has been set at the value of 20 for alpha radiation by various government regulations. The RBE is set at 10 for neutron irradiation, and at 1 for beta radiation and ionizing photons.
However, the recoil of the parent nucleus (alpha recoil) gives it a significant amount of energy, which also causes ionization damage (see ionizing radiation). This energy is roughly the mass of the alpha (about 4 Da) divided by the mass of the parent (typically about 200 Da), times the total energy of the alpha. By some estimates, this might account for most of the internal radiation damage, as the recoil nucleus is part of an atom that is much larger than an alpha particle and causes a very dense trail of ionization; the atom is typically a heavy metal, and heavy metals preferentially collect on the chromosomes. In some studies, this has resulted in an RBE approaching 1,000 instead of the value used in governmental regulations.
The largest natural contributor to public radiation dose is radon, a naturally occurring, radioactive gas found in soil and rock. If the gas is inhaled, some of the radon particles may attach to the inner lining of the lung. These particles continue to decay, emitting alpha particles, which can damage cells in the lung tissue. The death of Marie Curie at age 66 from aplastic anemia was probably caused by prolonged exposure to high doses of ionizing radiation, but it is not clear if this was due to alpha radiation or X-rays. Curie worked extensively with radium, which decays into radon, along with other radioactive materials that emit beta and gamma rays. However, Curie also worked with unshielded X-ray tubes during World War I, and analysis of her skeleton during a reburial showed a relatively low level of radioisotope burden.
The Russian defector Alexander Litvinenko's 2006 murder by radiation poisoning is thought to have been carried out with polonium-210, an alpha emitter.
|
Helium;Nuclear physics;Radioactivity
|
https://en.wikipedia.org/wiki/Analytical%20engine
|
The analytical engine was a proposed digital mechanical general-purpose computer designed by English mathematician and computer pioneer Charles Babbage. It was first described in 1837 as the successor to Babbage's difference engine, which was a design for a simpler mechanical calculator.
The analytical engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete. In other words, the structure of the analytical engine was essentially the same as that which has dominated computer design in the electronic era. The analytical engine is one of the most successful achievements of Charles Babbage.
Babbage was never able to complete construction of any of his machines due to conflicts with his chief engineer and inadequate funding. It was not until 1941 that Konrad Zuse built the first general-purpose computer, Z3, more than a century after Babbage had proposed the pioneering analytical engine in 1837.
Design
Babbage's first attempt at a mechanical computing device, the difference engine, was a special-purpose machine designed to tabulate logarithms and trigonometric functions by evaluating finite differences to create approximating polynomials. Construction of this machine was never completed; Babbage had conflicts with his chief engineer, Joseph Clement, and ultimately the British government withdrew its funding for the project.
During this project, Babbage realised that a much more general design, the analytical engine, was possible. The work on the design of the analytical engine started around 1833.
The input, consisting of programs ("formulae") and data, was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter, and a bell. The machine would also be able to punch numbers onto cards to be read in later. It employed ordinary base-10 fixed-point arithmetic.
There was to be a store (that is, a memory) capable of holding 1,000 numbers of 40 decimal digits each (ca. 16.6 kB). An arithmetic unit (the "mill") would be able to perform all four arithmetic operations, plus comparisons and optionally square roots. Initially (1838) it was conceived as a difference engine curved back upon itself, in a generally circular layout, with the long store exiting off to one side. Later drawings (1858) depict a regularised grid layout. Like the central processing unit (CPU) in a modern computer, the mill would rely upon its own internal procedures, roughly equivalent to microcode in modern CPUs, to be stored in the form of pegs inserted into rotating drums called "barrels", to carry out some of the more complex instructions the user's program might specify.
The programming language to be employed by users was akin to modern day assembly languages. Loops and conditional branching were possible, and so the language as conceived would have been Turing-complete as later defined by Alan Turing. Three different types of punch cards were used: one for arithmetical operations, one for numerical constants, and one for load and store operations, transferring numbers from the store to the arithmetical unit or back. There were three separate readers for the three types of cards. Babbage developed some two dozen programs for the analytical engine between 1837 and 1840, and one program later. These programs treat polynomials, iterative formulas, Gaussian elimination, and Bernoulli numbers.
In 1842, the Italian mathematician Luigi Federico Menabrea published a description of the engine in French, based on lectures Babbage gave when he visited Turin in 1840. In 1843, the description was translated into English and extensively annotated by Ada Lovelace, who had become interested in the engine eight years earlier. In recognition of her additions to Menabrea's paper, which included a way to calculate Bernoulli numbers using the machine (widely considered to be the first complete computer program), she has been described by many as the first computer programmer, although others have challenged this view.
Construction
Late in his life, Babbage sought ways to build a simplified version of the machine, and assembled a small part of it before his death in 1871.
In 1878, a committee of the British Association for the Advancement of Science described the analytical engine as "a marvel of mechanical ingenuity", but recommended against constructing it. The committee acknowledged the usefulness and value of the machine, but could not estimate the cost of building it, and were unsure whether the machine would function correctly after being built.
Intermittently from 1880 to 1910, Babbage's son Henry Prevost Babbage was constructing a part of the mill and the printing apparatus. In 1910, it was able to calculate a (faulty) list of multiples of pi. This constituted only a small part of the whole engine; it was not programmable and had no storage. (Popular images of this section have sometimes been mislabelled, implying that it was the entire mill or even the entire engine.) Henry Babbage's "analytical engine mill" is on display at the Science Museum in London. Henry also proposed building a demonstration version of the full engine, with a smaller storage capacity: "perhaps for a first machine ten (columns) would do, with fifteen wheels in each". Such a version could manipulate 20 numbers of 25 digits each, and what it could be told to do with those numbers could still be impressive. "It is only a question of cards and time", wrote Henry Babbage in 1888, "... and there is no reason why (twenty thousand) cards should not be used if necessary, in an analytical engine for the purposes of the mathematician".
In 1991, the London Science Museum built a complete and working specimen of Babbage's Difference Engine No. 2, a design that incorporated refinements Babbage discovered during the development of the analytical engine. This machine was built using materials and engineering tolerances that would have been available to Babbage, quelling the suggestion that Babbage's designs could not have been produced using the manufacturing technology of his time.
In October 2010, John Graham-Cumming started a "Plan 28" campaign to raise funds by "public subscription" to enable serious historical and academic study of Babbage's plans, with a view to then build and test a fully working virtual design which will then in turn enable construction of the physical analytical engine. As of May 2016, actual construction had not been attempted, since no consistent understanding could yet be obtained from Babbage's original design drawings. In particular it was unclear whether it could handle the indexed variables which were required for Lovelace's Bernoulli program. By 2017, the "Plan 28" effort reported that a searchable database of all catalogued material was available, and an initial review of Babbage's voluminous Scribbling Books had been completed.
Many of Babbage's original drawings have been digitised and are publicly available online.
Instruction set
Babbage is not known to have written down an explicit set of instructions for the engine in the manner of a modern processor manual. Instead he showed his programs as lists of states during their execution, showing what operator was run at each step with little indication of how the control flow would be guided.
Allan G. Bromley has assumed that the card deck could be read in forwards and backwards directions as a function of conditional branching after testing for conditions, which would make the engine Turing-complete:
...the cards could be ordered to move forward and reverse (and hence to loop)...
The introduction for the first time, in 1845, of user operations for a variety of service functions including, most importantly, an effective system for user control of looping in user programs.
There is no indication how the direction of turning of the operation and variable cards is specified. In the absence of other evidence I have had to adopt the minimal default assumption that both the operation and variable cards can only be turned backward as is necessary to implement the loops used in Babbage's sample programs. There would be no mechanical or microprogramming difficulty in placing the direction of motion under the control of the user.
In their emulator of the engine, Fourmilab say:
The Engine's Card Reader is not constrained to simply process the cards in a chain one after another from start to finish. It can, in addition, directed by the very cards it reads and advised by whether the Mill's run-up lever is activated, either advance the card chain forward, skipping the intervening cards, or backward, causing previously-read cards to be processed once again.
This emulator does provide a written symbolic instruction set, though this has been constructed by its authors rather than based on Babbage's original works. For example, a factorial program would be written as:
N0 6
N1 1
N2 1
×
L1
L0
S1
–
L0
L2
S0
L2
L0
CB?11
where the CB is the conditional branch instruction or "combination card" used to make the control flow jump, in this case backward by 11 cards.
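To make the control flow concrete, the following Python sketch interprets the listing above. It is only an illustration of the card semantics as described (number, operation, load, store, and combination cards); the card encoding and the run-up-lever rule (set when a subtraction goes negative) are assumptions made for this sketch, not details of the Fourmilab emulator.

# A rough interpreter for the factorial card chain above (assumed semantics).
def run(cards):
    store = {}          # the Store: numbered variable columns
    op = None           # current operation set by an operation card
    operands = []       # values loaded into the Mill
    result = None       # last value computed by the Mill
    run_up = False      # run-up lever, set by a negative result
    pc = 0              # index of the next card in the chain
    while pc < len(cards):
        kind, arg = cards[pc]
        if kind == "N":                     # Nk v: put the value v in column k
            store[arg[0]] = arg[1]
        elif kind == "OP":                  # set the Mill's operation
            op = arg
        elif kind == "L":                   # load column arg into the Mill
            operands.append(store[arg])
            if len(operands) == 2:          # two operands loaded: compute
                a, b = operands
                result = a * b if op == "*" else a - b
                run_up = result < 0         # lever set on a negative result
                operands = []
        elif kind == "S":                   # store the Mill's result in column arg
            store[arg] = result
        elif kind == "CB":                  # combination card: if the lever is
            if run_up:                      # set, turn the chain back arg cards
                pc -= arg
                continue
        pc += 1
    return store

factorial_cards = [
    ("N", (0, 6)), ("N", (1, 1)), ("N", (2, 1)),   # N0 6, N1 1, N2 1
    ("OP", "*"), ("L", 1), ("L", 0), ("S", 1),     # V1 <- V1 * V0
    ("OP", "-"), ("L", 0), ("L", 2), ("S", 0),     # V0 <- V0 - V2
    ("L", 2), ("L", 0),                            # V2 - V0 sets the lever while V0 > 1
    ("CB", 11),                                    # loop back 11 cards
]
print(run(factorial_cards)[1])   # prints 720, i.e. 6!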
Influence
Predicted influence
Babbage understood that the existence of an automatic computer would kindle interest in the field now known as algorithmic efficiency, writing in his Passages from the Life of a Philosopher, "As soon as an analytical engine exists, it will necessarily guide the future course of the science. Whenever any result is sought by its aid, the question will then arise—By what course of calculation can these results be arrived at by the machine in the shortest time?"
Computer science
From 1872, Henry continued his father's work diligently, and then intermittently after retiring in 1875.
Percy Ludgate wrote about the engine in 1914 and published his own design for an analytical engine in 1909. It was drawn up in detail but never built, and the drawings have never been found. Ludgate's engine would have been much smaller than Babbage's, and hypothetically would have been capable of multiplying two 20-decimal-digit numbers in about six seconds.
In his work Essays on Automatics (1914) Leonardo Torres Quevedo, inspired by Babbage, designed a theoretical electromechanical calculating machine which was to be controlled by a read-only program. The paper also contains the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the electromechanical arithmometer, which consisted of an arithmetic unit connected to a (possibly remote) typewriter, on which commands could be typed and the results printed automatically.
Vannevar Bush's paper Instrumental Analysis (1936) included several references to Babbage's work. In the same year he started the Rapid Arithmetical Machine project to investigate the problems of constructing an electronic digital computer.
Despite this groundwork, Babbage's work fell into historical obscurity, and the analytical engine was unknown to the builders of electromechanical and electronic computing machines in the 1930s and 1940s when they began their work, so many of the architectural innovations Babbage had proposed had to be re-invented. Howard Aiken, who built the soon-obsolete electromechanical calculator the Harvard Mark I between 1937 and 1945, praised Babbage's work, likely as a way of enhancing his own stature, but knew nothing of the analytical engine's architecture while constructing the Mark I, and considered his visit to the constructed portion of the analytical engine "the greatest disappointment of my life". The Mark I showed no influence from the analytical engine and lacked its most prescient architectural feature, conditional branching. J. Presper Eckert and John W. Mauchly were similarly unaware of the details of Babbage's analytical engine work prior to the completion of their design for the first electronic general-purpose computer, the ENIAC.
Comparison to other early computers
If the analytical engine had been built, it would have been digital, programmable and Turing-complete. It would, however, have been very slow. Luigi Federico Menabrea reported in Sketch of the Analytical Engine: "Mr. Babbage believes he can, by his engine, form the product of two numbers, each containing twenty figures, in three minutes".
By comparison, the Harvard Mark I could perform the same task in just six seconds (though it is debatable whether that computer was Turing-complete; the ENIAC, which is, would also have been faster). A modern CPU could do the same thing in under a billionth of a second.
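As a quick check of these figures (writing three minutes as 180 s and taking one nanosecond for the modern CPU):

\[ \frac{180\ \text{s}}{6\ \text{s}} = 30, \qquad \frac{180\ \text{s}}{10^{-9}\ \text{s}} = 1.8 \times 10^{11}, \]

so the Mark I would have been about thirty times faster than the analytical engine, and a modern CPU over a hundred billion times faster.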
In popular culture
The cyberpunk novelists William Gibson and Bruce Sterling co-authored a steampunk novel of alternative history titled The Difference Engine in which Babbage's difference and analytical engines became available to Victorian society. The novel explores the consequences and implications of the early introduction of computational technology.
Moriarty by Modem, a short story by Jack Nimersheim, describes an alternative history where Babbage's analytical engine was indeed completed and had been deemed highly classified by the British government. The characters of Sherlock Holmes and Moriarty had in reality been a set of prototype programs written for the analytical engine. This short story follows Holmes as his program is implemented on modern computers and he is forced to compete against his nemesis yet again in the modern counterparts of Babbage's analytical engine.
A similar setting to The Difference Engine is used by Sydney Padua in the webcomic The Thrilling Adventures of Lovelace and Babbage. It features an alternative history where Ada Lovelace and Babbage have built the analytical engine and use it to fight crime at Queen Victoria's request. The comic is based on thorough research on the biographies of and correspondence between Babbage and Lovelace, which is then twisted for humorous effect.
The Orion's Arm online project features the Machina Babbagenseii, fully sentient Babbage-inspired mechanical computers. Each is the size of a large asteroid, only capable of surviving in microgravity conditions, and processes data at 0.5% the speed of a human brain.
Charles Babbage and Ada Lovelace appear in an episode of Doctor Who, "Spyfall Part 2", where the engine is displayed and referenced.
Bibliography
|
Ada Lovelace;Charles Babbage;Computer-related introductions in 1837;English inventions;Mechanical calculators;Mechanical computers;One-of-a-kind computers
|
https://en.wikipedia.org/wiki/Almost%20all
|
In mathematics, the term "almost all" means "all but a negligible quantity". More precisely, if X is a set, "almost all elements of X" means "all elements of X but those in a negligible subset of X". The meaning of "negligible" depends on the mathematical context; for instance, it can mean finite, countable, or null.
In contrast, "almost no" means "a negligible quantity"; that is, "almost no elements of X" means "a negligible quantity of elements of X".
Meanings in different areas of mathematics
Prevalent meaning
Throughout mathematics, "almost all" is sometimes used to mean "all (elements of an infinite set) except for finitely many". This use occurs in philosophy as well. Similarly, "almost all" can mean "all (elements of an uncountable set) except for countably many".
Examples:
Almost all positive integers are greater than 10^12.
Almost all prime numbers are odd (2 is the only exception).
Almost all polyhedra are irregular (as there are only nine exceptions: the five Platonic solids and the four Kepler–Poinsot polyhedra).
If P is a nonzero polynomial, then P(x) ≠ 0 for almost all x (if not all x).
Meaning in measure theory
When speaking about the reals, sometimes "almost all" can mean "all reals except for a null set". Similarly, if S is some set of reals, "almost all numbers in S" can mean "all numbers in S except for those in a null set". The real line can be thought of as a one-dimensional Euclidean space. In the more general case of an n-dimensional space (where n is a positive integer), these definitions can be generalised to "all points except for those in a null set" or "all points in S except for those in a null set" (this time, S is a set of points in the space). Even more generally, "almost all" is sometimes used in the sense of "almost everywhere" in measure theory, or in the closely related sense of "almost surely" in probability theory.
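In symbols (the standard measure-theoretic formulation): given a measure space (X, Σ, μ), a property P holds almost everywhere, i.e. for almost all x in X, when the set of exceptions is null:

\[ \mu\bigl(\{\, x \in X : P(x) \text{ fails} \,\}\bigr) = 0 . \]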
Examples:
In a measure space, such as the real line, countable sets are null. The set of rational numbers is countable, so almost all real numbers are irrational.
Georg Cantor's first set theory article proved that the set of algebraic numbers is countable as well, so almost all reals are transcendental.
Almost all reals are normal.
The Cantor set is also null. Thus, almost all reals are not in it even though it is uncountable.
The derivative of the Cantor function is 0 for almost all numbers in the unit interval. This follows from the previous example, because the Cantor function is locally constant outside the Cantor set, and thus has derivative 0 there.
Meaning in number theory
In number theory, "almost all positive integers" can mean "the positive integers in a set whose natural density is 1". That is, if A is a set of positive integers, and if the proportion of positive integers in A below n (out of all positive integers below n) tends to 1 as n tends to infinity, then almost all positive integers are in A.
More generally, let S be an infinite set of positive integers, such as the set of even positive numbers or the set of primes. If A is a subset of S, and if the proportion of elements of S below n that are in A (out of all elements of S below n) tends to 1 as n tends to infinity, then it can be said that almost all elements of S are in A.
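Written out (with |·| denoting the number of elements), this says that almost all elements of S are in A when

\[ \lim_{n \to \infty} \frac{\bigl|\{ a \in A : a \le n \}\bigr|}{\bigl|\{ s \in S : s \le n \}\bigr|} = 1 ; \]

taking S to be the set of all positive integers recovers the natural-density definition above.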
Examples:
The natural density of cofinite sets of positive integers is 1, so each of them contains almost all positive integers.
Almost all positive integers are composite.
Almost all even positive numbers can be expressed as the sum of two primes.
Almost all primes are isolated. Moreover, for every positive integer k, almost all primes p have prime gaps of more than k both to their left and to their right; that is, there is no other prime between p − k and p + k.
Meaning in graph theory
In graph theory, if A is a set of (finite labelled) graphs, it can be said to contain almost all graphs, if the proportion of graphs with n vertices that are in A tends to 1 as n tends to infinity. However, it is sometimes easier to work with probabilities, so the definition is reformulated as follows. The proportion of graphs with n vertices that are in A equals the probability that a random graph with n vertices (chosen with the uniform distribution) is in A, and choosing a graph in this way has the same outcome as generating a graph by flipping a coin for each pair of vertices to decide whether to connect them. Therefore, equivalently to the preceding definition, the set A contains almost all graphs if the probability that a coin-flip–generated graph with n vertices is in A tends to 1 as n tends to infinity. Sometimes, the latter definition is modified so that the graph is chosen randomly in some other way, where not all graphs with n vertices have the same probability, and those modified definitions are not always equivalent to the main one.
The use of the term "almost all" in graph theory is not standard; the term "asymptotically almost surely" is more commonly used for this concept.
Examples:
Almost all graphs are asymmetric.
Almost all graphs have diameter 2.
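The diameter example above can be checked empirically using the coin-flip model just described. The following Python sketch is a rough illustration (it tests "diameter at most 2", and the sample sizes are arbitrary); the estimated probability climbs toward 1 as n grows:

import itertools, random

def has_diameter_at_most_2(n):
    # flip a fair coin for each pair of vertices
    adj = [[False] * n for _ in range(n)]
    for u, v in itertools.combinations(range(n), 2):
        adj[u][v] = adj[v][u] = random.random() < 0.5
    # every non-adjacent pair must share a common neighbour
    for u, v in itertools.combinations(range(n), 2):
        if not adj[u][v] and not any(adj[u][w] and adj[w][v] for w in range(n)):
            return False
    return True

for n in (5, 10, 20):
    trials = 500
    hits = sum(has_diameter_at_most_2(n) for _ in range(trials))
    print(n, hits / trials)   # fraction of sampled graphs, tending to 1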
Meaning in topology
In topology and especially dynamical systems theory (including applications in economics), "almost all" of a topological space's points can mean "all of the space's points except for those in a meagre set". Some use a more limited definition, where a subset contains almost all of the space's points only if it contains some open dense set.
Example:
Given an irreducible algebraic variety, the properties that hold for almost all points in the variety are exactly the generic properties. This is due to the fact that in an irreducible algebraic variety equipped with the Zariski topology, all nonempty open sets are dense.
Meaning in algebra
In abstract algebra and mathematical logic, if U is an ultrafilter on a set X, "almost all elements of X" sometimes means "the elements of some element of U". For any partition of X into two disjoint sets, one of them will necessarily contain almost all elements of X. It is possible to think of the elements of a filter on X as containing almost all elements of X, even if it isn't an ultrafilter.
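Formally (restating the partition remark above): for an ultrafilter U on X and any subset S of X,

\[ S \in U \quad \text{or} \quad X \setminus S \in U , \]

and never both, since U is closed under finite intersections and does not contain the empty set.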
Proofs
See also
Almost
Almost everywhere
Almost surely
References
Primary sources
Secondary sources
|
Mathematical terminology
|
https://en.wikipedia.org/wiki/Antiparticle
|
In particle physics, every type of particle of "ordinary" matter (as opposed to antimatter) is associated with an antiparticle with the same mass but with opposite physical charges (such as electric charge). For example, the antiparticle of the electron is the positron (also known as an antielectron). While the electron has a negative electric charge, the positron has a positive electric charge, and is produced naturally in certain types of radioactive decay. The opposite is also true: the antiparticle of the positron is the electron.
Some particles, such as the photon, are their own antiparticle. Otherwise, for each pair of antiparticle partners, one is designated as the normal particle (the one that occurs in matter usually interacted with in daily life). The other (usually given the prefix "anti-") is designated the antiparticle.
Particle–antiparticle pairs can annihilate each other, producing photons; since the charges of the particle and antiparticle are opposite, total charge is conserved. For example, the positrons produced in natural radioactive decay quickly annihilate themselves with electrons, producing pairs of gamma rays, a process exploited in positron emission tomography.
The laws of nature are very nearly symmetrical with respect to particles and antiparticles. For example, an antiproton and a positron can form an antihydrogen atom, which is believed to have the same properties as a hydrogen atom. This leads to the question of why the formation of matter after the Big Bang resulted in a universe consisting almost entirely of matter, rather than a half-and-half mixture of matter and antimatter. The discovery of charge parity violation helped to shed light on this problem by showing that this symmetry, originally thought to be perfect, was only approximate. Why the universe nonetheless came to be dominated by matter remains an open question, and no explanation proposed so far is fully satisfactory.
Because charge is conserved, it is not possible to create an antiparticle without either destroying another particle of the same charge (as is for instance the case when antiparticles are produced naturally via beta decay or the collision of cosmic rays with Earth's atmosphere), or by the simultaneous creation of both a particle and its antiparticle (pair production), which can occur in particle accelerators such as the Large Hadron Collider at CERN.
Particles and their antiparticles have equal and opposite charges, so that an uncharged particle also gives rise to an uncharged antiparticle. In many cases, the antiparticle and the particle coincide: pairs of photons, Z0 bosons, mesons, and hypothetical gravitons and some hypothetical WIMPs all self-annihilate. However, electrically neutral particles need not be identical to their antiparticles: for example, the neutron and antineutron are distinct.
History
Experiment
In 1932, soon after the prediction of positrons by Paul Dirac, Carl D. Anderson found that cosmic-ray collisions produced these particles in a cloud chamber – a particle detector in which moving electrons (or positrons) leave behind trails as they move through the gas. The electric charge-to-mass ratio of a particle can be measured by observing the radius of curling of its cloud-chamber track in a magnetic field. Positrons, because of the direction that their paths curled, were at first mistaken for electrons travelling in the opposite direction. Positron paths in a cloud-chamber trace the same helical path as an electron but rotate in the opposite direction with respect to the magnetic field direction due to their having the same magnitude of charge-to-mass ratio but with opposite charge and, therefore, opposite signed charge-to-mass ratios.
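The measurement rests on the textbook relation for circular motion in a magnetic field (a standard formula, not specific to Anderson's analysis): equating the magnetic force on a particle of charge q, mass m, and speed v moving perpendicular to a field B with the centripetal force gives

\[ qvB = \frac{m v^{2}}{r} \quad\Longrightarrow\quad \frac{q}{m} = \frac{v}{rB} , \]

so the track's radius of curvature r yields the magnitude of the charge-to-mass ratio, while the sense of the curling gives the sign of the charge.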
The antiproton and antineutron were found by Emilio Segrè and Owen Chamberlain in 1955 at the University of California, Berkeley. Since then, the antiparticles of many other subatomic particles have been created in particle accelerator experiments. In recent years, complete atoms of antimatter have been assembled out of antiprotons and positrons, collected in electromagnetic traps.
Dirac hole theory
Solutions of the Dirac equation contain negative energy quantum states. As a result, an electron could always radiate energy and fall into a negative energy state. Even worse, it could keep radiating infinite amounts of energy because there were infinitely many negative energy states available. To prevent this unphysical situation from happening, Dirac proposed that a "sea" of negative-energy electrons fills the universe, already occupying all of the lower-energy states so that, due to the Pauli exclusion principle, no other electron could fall into them. Sometimes, however, one of these negative-energy particles could be lifted out of this Dirac sea to become a positive-energy particle. But, when lifted out, it would leave behind a hole in the sea that would act exactly like a positive-energy electron with a reversed charge. These holes were interpreted as "negative-energy electrons" by Paul Dirac and mistakenly identified with protons in his 1930 paper A Theory of Electrons and Protons. However, these "negative-energy electrons" turned out to be positrons, and not protons.
This picture implied an infinite negative charge for the universe, a problem of which Dirac was aware. Dirac tried to argue that we would perceive this as the normal state of zero charge. Another difficulty was the difference in masses of the electron and the proton. Dirac tried to argue that this was due to the electromagnetic interactions with the sea, until Hermann Weyl proved that hole theory was completely symmetric between negative and positive charges. Dirac also predicted a reaction e− + p+ → γ + γ, where an electron and a proton annihilate to give two photons. Robert Oppenheimer and Igor Tamm, however, proved that this would cause ordinary matter to disappear too fast. A year later, in 1931, Dirac modified his theory and postulated the positron, a new particle of the same mass as the electron. The discovery of this particle the next year removed the last two objections to his theory.
Within Dirac's theory, the problem of infinite charge of the universe remains. Some bosons also have antiparticles, but since bosons do not obey the Pauli exclusion principle (only fermions do), hole theory does not work for them. A unified interpretation of antiparticles is now available in quantum field theory, which solves both these problems by describing antimatter as negative energy states of the same underlying matter field, i.e. particles moving backwards in time.
Elementary antiparticles
Composite antiparticles
Particle–antiparticle annihilation
If a particle and antiparticle are in the appropriate quantum states, then they can annihilate each other and produce other particles. Reactions such as e− + e+ → γ + γ (the two-photon annihilation of an electron-positron pair) are an example. The single-photon annihilation of an electron-positron pair, e− + e+ → γ, cannot occur in free space because it is impossible to conserve energy and momentum together in this process. However, in the Coulomb field of a nucleus the translational invariance is broken and single-photon annihilation may occur. The reverse reaction (in free space, without an atomic nucleus) is also impossible for this reason. In quantum field theory, this process is allowed only as an intermediate quantum state for times short enough that the violation of energy conservation can be accommodated by the uncertainty principle. This opens the way for virtual pair production or annihilation in which a one particle quantum state may fluctuate into a two particle state and back. These processes are important in the vacuum state and renormalization of a quantum field theory. It also opens the way for neutral particle mixing, a complicated example of mass renormalization.
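The free-space prohibition on single-photon annihilation is a kinematic fact worth spelling out: in the centre-of-momentum frame of the electron-positron pair the total momentum is zero, while a single photon of energy E_γ always carries momentum of magnitude E_γ/c, so

\[ \mathbf{p}_{\text{tot}} = 0 \quad \text{but} \quad |\mathbf{p}_{\gamma}| = \frac{E_{\gamma}}{c} > 0 , \]

and energy and momentum cannot both be conserved; a nearby nucleus (or a second photon) is needed to absorb the recoil.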
Properties
Quantum states of a particle and an antiparticle are interchanged by the combined application of charge conjugation C, parity P and time reversal T.
C and P are linear, unitary operators; T is antilinear and antiunitary, meaning that ⟨TΦ | TΨ⟩ = ⟨Ψ | Φ⟩.
If |p, σ, n⟩ denotes the quantum state of a particle n with momentum p and spin whose component in the z-direction is σ, then one has

CPT |p, σ, n⟩ ∝ |p, −σ, n^c⟩,

where n^c denotes the charge conjugate state, that is, the antiparticle. In particular a massive particle and its antiparticle transform under the same irreducible representation of the Poincaré group, which means the antiparticle has the same mass and the same spin.
If C, P and T can be defined separately on the particles and antiparticles, then

T |p, σ, n⟩ ∝ |−p, −σ, n⟩,
P |p, σ, n⟩ ∝ |−p, σ, n⟩,
C |p, σ, n⟩ ∝ |p, σ, n^c⟩,

where the proportionality sign indicates that there might be a phase on the right-hand side.
As C anticommutes with the charges, CQ = −QC, particle and antiparticle have opposite electric charges q and −q.
Quantum field theory
This section draws upon the ideas, language and notation of canonical quantization of a quantum field theory.
One may try to quantize an electron field without mixing the annihilation and creation operators by writing

ψ(x) = Σ_k u_k(x) a_k,

where we use the symbol k to denote the quantum numbers p and σ of the previous section and the sign of the energy, E(k), and a_k denotes the corresponding annihilation operators. Of course, since we are dealing with fermions, we have to have the operators satisfy canonical anti-commutation relations. However, if one now writes down the Hamiltonian

H = Σ_k E(k) a_k† a_k,
then one sees immediately that the expectation value of H need not be positive. This is because E(k) can have any sign whatsoever, and the combination of creation and annihilation operators has expectation value 1 or 0.
So one has to introduce the charge conjugate antiparticle field, with its own creation and annihilation operators satisfying the relations

b_k′ = a_k† and b_k′† = a_k,

where k′ has the same p, and opposite σ and sign of the energy. Then one can rewrite the field in the form

ψ(x) = Σ_{k: E(k)>0} u_k(x) a_k + Σ_{k: E(k)<0} u_k(x) b_k′†,

where the first sum is over positive energy states and the second over those of negative energy. The energy becomes

H = Σ_{k: E(k)>0} E(k) a_k† a_k + Σ_{k: E(k)<0} |E(k)| b_k′† b_k′ + E0,

where E0 is an infinite negative constant. The vacuum state is defined as the state with no particle or antiparticle, i.e., a_k |0⟩ = 0 and b_k′ |0⟩ = 0. Then the energy of the vacuum is exactly E0. Since all energies are measured relative to the vacuum, H is positive definite. Analysis of the properties of a_k and b_k shows that one is the annihilation operator for particles and the other for antiparticles. This is the case of a fermion.
This approach is due to Vladimir Fock, Wendell Furry and Robert Oppenheimer. If one quantizes a real scalar field, then one finds that there is only one kind of annihilation operator; therefore, real scalar fields describe neutral bosons. Since complex scalar fields admit two different kinds of annihilation operators, which are related by conjugation, such fields describe charged bosons.
Feynman–Stückelberg interpretation
By considering the propagation of the negative energy modes of the electron field backward in time, Ernst Stückelberg reached a pictorial understanding of the fact that the particle and antiparticle have equal mass m and spin J but opposite charges q. This allowed him to rewrite perturbation theory precisely in the form of diagrams. Richard Feynman later gave an independent systematic derivation of these diagrams from a particle formalism, and they are now called Feynman diagrams. Each line of a diagram represents a particle propagating either backward or forward in time. In Feynman diagrams, anti-particles are shown traveling backwards in time relative to normal matter, and vice versa. This technique is the most widespread method of computing amplitudes in quantum field theory today.
Since this picture was first developed by Stückelberg, and acquired its modern form in Feynman's work, it is called the Feynman–Stückelberg interpretation of antiparticles to honor both scientists.
|
Antimatter;Particle physics;Quantum field theory;Subatomic particles
|
https://en.wikipedia.org/wiki/Atanasoff%E2%80%93Berry%20computer
|
The Atanasoff–Berry computer (ABC) was the first automatic electronic digital computer. The device was limited by the technology of the day. The ABC's priority is debated among historians of computer technology, because it was neither programmable nor Turing-complete. Conventionally, the ABC would be considered the first electronic ALU (arithmetic logic unit), a component integrated into every modern processor's design.
Its unique contribution was to make computing faster by being the first to use vacuum tubes for arithmetic calculations. Before this, slower electromechanical methods were used, as in Konrad Zuse's Z1 computer and the simultaneously developed Harvard Mark I. The first electronic, programmable, digital machine, the Colossus computer of 1943–1945, used tube-based technology similar to the ABC's.
Overview
Conceived in 1937, the machine was built by Iowa State College mathematics and physics professor John Vincent Atanasoff with the help of graduate student Clifford Berry. It was designed only to solve systems of linear equations and was successfully tested in 1942. However, its intermediate result storage mechanism, a paper card writer/reader, was not perfected, and when John Vincent Atanasoff left Iowa State College for World War II assignments, work on the machine was discontinued. The ABC pioneered important elements of modern computing, including binary arithmetic and electronic switching elements, but its special-purpose nature and lack of a changeable, stored program distinguish it from modern computers. The computer was designated an IEEE Milestone in 1990.
Atanasoff and Berry's computer work was not widely known until it was rediscovered in the 1960s, amid patent disputes over the first instance of an electronic computer. At that time ENIAC, which had been created by John Mauchly and J. Presper Eckert, was considered to be the first computer in the modern sense, but in 1973 a U.S. District Court invalidated the ENIAC patent and concluded that the ENIAC inventors had derived the subject matter of the electronic digital computer from Atanasoff. When the secrecy surrounding the British World War II development of the Colossus computers, which pre-dated ENIAC, was lifted in the mid-1970s and Colossus was described at a conference in Los Alamos, New Mexico, in June 1976, John Mauchly and Konrad Zuse were reported to have been astonished.
Design and construction
According to Atanasoff's account, several key principles of the Atanasoff–Berry computer were conceived in a sudden insight after a long nighttime drive to Rock Island, Illinois, during the winter of 1937–38. The ABC innovations included electronic computation, binary arithmetic, parallel processing, regenerative capacitor memory, and a separation of memory and computing functions. The mechanical and logic design was worked out by Atanasoff over the next year. A grant application to build a proof-of-concept prototype was submitted in March 1939 to the Agronomy department, which was also interested in speeding up computation for economic and research analysis. $5,000 of further funding to complete the machine came from the nonprofit Research Corporation of New York City.
The ABC was built by Atanasoff and Berry in the basement of the physics building at Iowa State College from 1939 to 1942. The initial funds were released in September, and the 11-tube prototype was first demonstrated in October 1939. A December demonstration prompted a grant for construction of the full-scale machine. The ABC was built and tested over the next two years. A January 15, 1941, story in the Des Moines Register announced the ABC as "an electrical computing machine" with more than 300 vacuum tubes that would "compute complicated algebraic equations" (but gave no precise technical description of the computer). The system weighed more than . It contained approximately of wire, 280 dual-triode vacuum tubes, 31 thyratrons, and was about the size of a desk.
It was not programmable, which distinguishes it from more general machines of the same era, such as Konrad Zuse's 1941 Z3 (or earlier iterations) and the Colossus computers of 1943–1945. Nor did it implement the stored-program architecture, first implemented in the Manchester Baby of 1948, required for fully general-purpose practical computing machines.
The machine was, however, the first to implement:
Using vacuum tubes, rather than wheels, ratchets, mechanical switches, or telephone relays, allowing for greater speed than previous computers
Using capacitors for memory, rather than mechanical components, allowing for greater speed and density
The memory of the Atanasoff–Berry computer was a system called regenerative capacitor memory, which consisted of a pair of drums, each containing 1600 capacitors that rotated on a common shaft once per second. The capacitors on each drum were organized into 32 "bands" of 50 (30 active bands and two spares in case a capacitor failed), giving the machine a speed of 30 additions/subtractions per second. Data was represented as 50-bit binary fixed-point numbers. The electronics of the memory and arithmetic units could store and operate on 60 such numbers at a time (3000 bits).
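The quoted figures are mutually consistent, as a quick check shows (simple arithmetic on the numbers above):

\[ 2 \times 30 \times 50 = 3000 \ \text{bits} = 60 \ \text{numbers} \times 50 \ \text{bits} , \]

with one 50-bit number per active band, 30 active bands per drum, and two drums; one addition or subtraction per band per one-second rotation likewise gives the quoted 30 operations per second.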
The alternating current power-line frequency of 60 Hz was the primary clock rate for the lowest-level operations.
The arithmetic logic functions were fully electronic, implemented with vacuum tubes. The family of logic gates ranged from inverters to two- and three-input gates. The input and output levels and operating voltages were compatible between the different gates. Each gate consisted of one inverting vacuum-tube amplifier, preceded by a resistor divider input network that defined the logical function. The control logic functions, which only needed to operate once per drum rotation and therefore did not require electronic speed, were electromechanical, implemented with relays.
The ALU operated on only one bit of each number at a time; it kept the carry/borrow bit in a capacitor for use in the next AC cycle.
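The bit-at-a-time operation can be sketched in Python as follows. This illustrates the principle of bit-serial addition with a carry held over between cycles, as the ABC held it on a capacitor; it is not a circuit-level model of the ABC's add-subtract mechanism.

# Bit-serial addition: one full-adder step per cycle, carry held over.
def bit_serial_add(a, b, width=50):
    carry = 0
    result = 0
    for i in range(width):                  # one bit position per cycle
        bit_a = (a >> i) & 1
        bit_b = (b >> i) & 1
        s = bit_a ^ bit_b ^ carry           # full-adder sum bit
        carry = (bit_a & bit_b) | (carry & (bit_a ^ bit_b))
        result |= s << i
    return result & ((1 << width) - 1)      # 50-bit fixed-point wrap-around

print(bit_serial_add(123456789, 987654321) == 123456789 + 987654321)   # True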
Although the Atanasoff–Berry computer was an important step up from earlier calculating machines, it was not able to run entirely automatically through an entire problem. An operator was needed to operate the control switches to set up its functions, much like the electro-mechanical calculators and unit record equipment of the time. Selection of the operation to be performed, reading, writing, converting to or from binary to decimal, or reducing a set of equations was made by front-panel switches and, in some cases, jumpers.
There were two forms of input and output: primary user input and output and an intermediate results output and input. The intermediate results storage allowed operation on problems too large to be handled entirely within the electronic memory. (The largest problem that could be solved without the use of the intermediate output and input was two simultaneous equations, a trivial problem.)
Intermediate results were binary, written onto paper sheets by electrostatically modifying the resistance at 1500 locations to represent 30 of the 50-bit numbers (one equation). Each sheet could be written or read in one second. The reliability of the system was limited to about 1 error in 100,000 calculations by these units, primarily attributed to lack of control of the sheets' material characteristics. In retrospect, a solution could have been to add a parity bit to each number as written. This problem was not solved by the time Atanasoff left the university for war-related work.
Primary user input was decimal, via standard IBM 80-column punched cards, and output was decimal, via a front-panel display.
Function
The ABC was designed for a specific purpose: the solution of systems of simultaneous linear equations. It could handle systems with up to 29 equations, a difficult problem for the time. Problems of this scale were becoming common in physics, the department in which John Atanasoff worked. The machine could be fed two linear equations with up to 29 variables and a constant term and eliminate one of the variables. This process would be repeated manually for each of the equations, which would result in a system of equations with one fewer variable. Then the whole process would be repeated to eliminate another variable.
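The elimination step described above can be sketched in Python (a schematic illustration: floats stand in for the ABC's 50-bit fixed-point numbers, and the function name is invented for the sketch):

# Scale one equation and subtract it from another so that the chosen
# variable's coefficient becomes zero.
def eliminate(eq1, eq2, var=0):
    # each equation is [a1, ..., an, c], meaning a1*x1 + ... + an*xn = c
    factor = eq2[var] / eq1[var]
    return [b - factor * a for a, b in zip(eq1, eq2)]

# eliminate x from 2x + 3y = 8 and 4x - y = 2, leaving -7y = -14 (y = 2)
print(eliminate([2, 3, 8], [4, -1, 2]))   # [0.0, -7.0, -14.0]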
George W. Snedecor, the head of Iowa State's Statistics Department, was very likely the first user of an electronic digital computer to solve real-world mathematics problems. He submitted many of these problems to Atanasoff.
Patent dispute
On June 26, 1947, J. Presper Eckert and John Mauchly were the first to file for a patent on a digital computing device (ENIAC), much to the surprise of Atanasoff. The ABC had been examined by John Mauchly in June 1941, and Isaac Auerbach, a former student of Mauchly's, alleged that it influenced his later work on ENIAC, although Mauchly denied this. The ENIAC patent did not issue until 1964, and in 1967 Honeywell sued Sperry Rand in an attempt to break the ENIAC patents, arguing that the ABC constituted prior art. The United States District Court for the District of Minnesota released its judgement on October 19, 1973, finding in Honeywell v. Sperry Rand that the ENIAC patent was a derivative of John Atanasoff's invention.
Campbell-Kelly and Aspray conclude:
The case was legally resolved on October 19, 1973, when U.S. District Judge Earl R. Larson held the ENIAC patent invalid, ruling that the ENIAC derived many basic ideas from the Atanasoff–Berry computer. Judge Larson explicitly stated:
Herman Goldstine, one of the original developers of ENIAC wrote:
Replica
The original ABC was eventually dismantled in 1948, when the university converted the basement to classrooms, and all of its pieces except for one memory drum were discarded.
In 1997, a team of researchers led by Delwyn Bluhm and John Gustafson from Ames Laboratory (located on the Iowa State University campus) finished building a working replica of the Atanasoff–Berry computer at a cost of $350,000. The replica ABC was on display in the first-floor lobby of the Durham Center for Computation and Communication at Iowa State University and was subsequently exhibited at the Computer History Museum.
See also
History of computing hardware
List of vacuum-tube computers
Mikhail Kravchuk
References
Bibliography
External links
The Birth of the ABC
Reconstruction of the ABC, 1994-1997
John Gustafson, Reconstruction of the Atanasoff-Berry Computer
The ENIAC patent trial
Honeywell v. Sperry Rand Records, 1846-1973, Charles Babbage Institute, University of Minnesota.
The Atanasoff-Berry Computer In Operation (YouTube)
|
1940s computers;Computer-related introductions in 1942;Early computers;Iowa State University;One-of-a-kind computers;Paper data storage;Serial computers;Vacuum tube computers
|
https://en.wikipedia.org/wiki/Andes
|
The Andes, Andes Mountains or Andean Mountain Range are the longest continental mountain range in the world, forming a continuous highland along the western edge of South America. The range is long and wide (widest between 18°S and 20°S latitude) and has an average height of about . The Andes extend from south to north through seven South American countries: Argentina, Chile, Bolivia, Peru, Ecuador, Colombia, and Venezuela.
Along their length, the Andes are split into several ranges, separated by intermediate depressions. The Andes are the location of several high plateaus—some of which host major cities such as Quito, Bogotá, Cali, Arequipa, Medellín, Bucaramanga, Sucre, Mérida, El Alto, and La Paz. The Altiplano Plateau is the world's second highest after the Tibetan Plateau. These ranges are in turn grouped into three major divisions based on climate: the Tropical Andes, the Dry Andes, and the Wet Andes.
The Andes are the highest mountain range outside of Asia. The range's highest peak, Argentina's Aconcagua, rises to an elevation of about above sea level. The peak of Chimborazo in the Ecuadorian Andes is farther from the Earth's center than any other location on the Earth's surface, due to the equatorial bulge resulting from the Earth's rotation. The world's highest volcanoes are in the Andes, including Ojos del Salado on the Chile-Argentina border, which rises to .
The Andes are also part of the American Cordillera, a chain of mountain ranges (cordillera) that consists of an almost continuous sequence of mountain ranges that form the western "backbone" of the Americas and Antarctica.
Etymology
The etymology of the word Andes has been debated. The majority consensus is that it derives from the Quechua word "east" as in Antisuyu (Quechua for "east region"), one of the four regions of the Inca Empire.
The term cordillera comes from the Spanish word cordel "rope" and is used as a descriptive name for several contiguous sections of the Andes, as well as the entire Andean range, and the combined mountain chain along the western part of the North and South American continents.
Geography
The Andes can be divided into three sections:
The Southern Andes in Argentina and Chile, south of Llullaillaco,
The Central Andes in Peru and Bolivia, and
The Northern Andes in Venezuela, Colombia, and Ecuador.
At the northern end of the Andes, the separate Sierra Nevada de Santa Marta range is often, but not always, treated as part of the Northern Andes.
The Leeward Antilles islands Aruba, Bonaire, and Curaçao, which lie in the Caribbean Sea off the coast of Venezuela, were formerly thought to represent the submerged peaks of the extreme northern edge of the Andes range, but ongoing geological studies indicate that such a simplification does not do justice to the complex tectonic boundary between the South American and Caribbean plates.
Geology
The Andes are an orogenic belt of mountains along the Pacific Ring of Fire, a zone of volcanic activity that encompasses the Pacific rim of the Americas as well as the Asia-Pacific region. The Andes are the result of tectonic plate processes extending during the Mesozoic and Tertiary eras, caused by the subduction of oceanic crust beneath the South American Plate as the Nazca Plate and South American Plate converge. These processes were accelerated by the effects of climate. As the uplift of the Andes created a rain shadow on the western fringes of Chile, ocean currents and prevailing winds carried moisture away from the Chilean coast. This caused some areas of the subduction zone to be sediment-starved, which in turn prevented the subducting plate from having a well lubricated surface. These factors increased the rate of contractional coastal uplift in the Andes. The main cause of the rise of the Andes is the contraction of the western rim of the South American Plate due to the subduction of the Nazca Plate and the Antarctic Plate. To the east, the Andes range is bounded by several sedimentary basins, such as the Orinoco Basin, the Amazon Basin, the Madre de Dios Basin, and the Gran Chaco, that separate the Andes from the ancient cratons in eastern South America. In the south, the Andes share a long boundary with the former Patagonia Terrane. To the west, the Andes end at the Pacific Ocean, although the Peru-Chile trench can be considered their ultimate western limit. From a geographical approach, the Andes are considered to have their western boundaries marked by the appearance of coastal lowlands and less-rugged topography. The Andes also contain large quantities of iron ore located in many mountains within the range.
The Andean orogen has a series of bends or oroclines. The Bolivian Orocline is a seaward-concave bending in the coast of South America and the Andes Mountains at about 18° S. At this point, the orientation of the Andes turns from northwest in Peru to south in Chile and Argentina. The Andean segments north and south of the Orocline have been rotated 15° counter-clockwise to 20° clockwise respectively. The Bolivian Orocline area overlaps with the area of the maximum width of the Altiplano Plateau, and according to Isacks (1988) the Orocline is related to crustal shortening. The specific point at 18° S where the coastline bends is known as the Arica Elbow. Further south lies the Maipo Orocline, a more subtle orocline between 30° S and 38°S with a seaward-concave break in the trend at 33° S. Near the southern tip of the Andes lies the Patagonian Orocline.
Orogeny
The western rim of the South American Plate has been the place of several pre-Andean orogenies since at least the late Proterozoic and early Paleozoic, when several terranes and microcontinents collided and amalgamated with the ancient cratons of eastern South America, by then the South American part of Gondwana.
The formation of the modern Andes began with the events of the Triassic, when Pangaea began the breakup that resulted in developing several rifts. The development continued through the Jurassic Period. It was during the Cretaceous Period that the Andes began to take their present form, by the uplifting, faulting, and folding of sedimentary and metamorphic rocks of the ancient cratons to the east. The rise of the Andes has not been constant, as different regions have had different degrees of tectonic stress, uplift, and erosion.
Across the Drake Passage lie the mountains of the Antarctic Peninsula south of the Scotia Plate, which appear to be a continuation of the Andes chain.
The far east regions of the Andes experience a series of changes resulting from the Andean orogeny. Parts of the Sunsás Orogen in Amazonian craton disappeared from the surface of the earth, being overridden by the Andes. The Sierras de Córdoba, where the effects of the ancient Pampean orogeny can be observed, owe their modern uplift and relief to the Andean orogeny in the Tertiary. Further south in southern Patagonia, the onset of the Andean orogeny caused the Magallanes Basin to evolve from being an extensional back-arc basin in the Mesozoic to being a contractional foreland basin in the Cenozoic.
Seismic activity
Tectonic forces above the subduction zone along the entire west coast of South America where the Nazca Plate and a part of the Antarctic Plate are sliding beneath the South American Plate continue to produce an ongoing orogenic event resulting in minor to major earthquakes and volcanic eruptions to this day. Many high-magnitude earthquakes have been recorded in the region, such as the 2010 Maule earthquake (M8.8), the 2015 Illapel earthquake (M8.2), and the 1960 Valdivia earthquake (M9.5), which as of 2024 was the strongest ever recorded on seismometers.
The amount, magnitude, and type of seismic activity varies greatly along the subduction zone. These differences are due to a wide range of factors, including friction between the plates, angle of subduction, buoyancy of the subducting plate, rate of subduction, and hydration value of the mantle material. The highest rate of seismic activity is observed in the central portion of the boundary, between 33°S and 35°S. In this area, the angle of subduction is very low, meaning the subducting plate is nearly horizontal. Studies of mantle hydration across the subduction zone have shown a correlation between increased material hydration and lower-magnitude, more-frequent seismic activity. Zones exhibiting dehydration instead are thought to have a higher potential for larger, high-magnitude earthquakes in the future.
The mountain range is also a source of shallow intraplate earthquakes within the South American Plate. The largest such earthquake (as of 2024) struck Peru in 1947 and measured 7.5. In the Peruvian Andes, these earthquakes display normal (1946), strike-slip (1976), and reverse (1969, 1983) mechanisms. The Amazonian Craton is actively underthrust beneath the sub-Andes region of Peru, producing thrust faults. In Colombia, Ecuador, and Peru, thrust faulting occurs along the sub-Andes in response to contraction brought on by subduction, while in the high Andes, normal faulting occurs in response to gravitational forces.
In the extreme south, a major transform fault separates Tierra del Fuego from the small Scotia Plate.
Volcanism
The Andes range has many active volcanoes distributed in four volcanic zones separated by areas of inactivity. The Andean volcanism is a result of the subduction of the Nazca Plate and Antarctic Plate underneath the South American Plate. The belt is subdivided into four main volcanic zones that are separated from each other by volcanic gaps. The volcanoes of the belt are diverse in terms of activity style, products, and morphology. Although some differences can be explained by which volcanic zone a volcano belongs to, there are significant differences inside volcanic zones and even between neighboring volcanoes. Despite being a typical location for calc-alkalic and subduction volcanism, the Andean Volcanic Belt has a large range of volcano-tectonic settings, such as rift systems, extensional zones, transpressional faults, subduction of mid-ocean ridges, and seamount chains apart from a large range of crustal thicknesses and magma ascent paths, and different amount of crustal assimilations.
Ore deposits and evaporites
The Andes Mountains host large ore and salt deposits, and some of their eastern fold and thrust belts act as traps for commercially exploitable amounts of hydrocarbons. In the forelands of the Atacama Desert, some of the largest porphyry copper mineralizations occur, making Chile and Peru the first- and second-largest exporters of copper in the world. Porphyry copper in the western slopes of the Andes has been generated by hydrothermal fluids (mostly water) during the cooling of plutons or volcanic systems. The porphyry mineralization further benefited from the dry climate that reduced the disturbing actions of meteoric water. The dry climate in the central western Andes has also led to the creation of extensive saltpeter deposits that were extensively mined until the invention of synthetic nitrates. Yet another result of the dry climate are the salars of Atacama and Uyuni, the former being the largest source of lithium and the latter the world's largest reserve of the element. Early Mesozoic and Neogene plutonism in Bolivia's Cordillera Central created the Bolivian tin belt as well as the famous, now mostly depleted, silver deposits of Cerro Rico de Potosí.
Climate
The Andes Mountains are closely connected to the climate of South America, particularly through the hyper-arid conditions of the adjacent Atacama Desert. The Atacama Bench, a prominent low-relief feature along the Pacific seaboard, serves as a key geomorphological record of the long-term interplay between Andean tectonics and Cenozoic climate. While the initial uplift and shortening of the Andes were driven by the subduction of the Nazca Plate beneath the South American Plate, the arid climate acted as an important feedback mechanism. Reduced erosion rates in the increasingly arid Atacama region may have effectively stopped tectonic activity in certain parts of the mountain range. This lack of erosion could have facilitated the eastward propagation of deformation, leading to the widening of the Andean orogen over time. Thus, the Atacama Desert and its geological features, like the Atacama Bench, offer critical insights into the coupled evolution of the Andes Mountains and the changing regional climate.
History
The Andes Mountains, initially inhabited by hunter-gatherers, experienced the development of agriculture and the rise of politically centralized civilizations, culminating in the establishment of the century-long Inca Empire. This changed in the 16th century, when the Spanish conquistadors colonized the mountains and developed a mining economy.
In the tide of anti-imperialist nationalism, the Andes became the scene of a series of independence wars in the 19th century, when rebel forces swept through the region to overthrow Spanish colonial rule. Since then, many former Spanish territories have become five independent Andean states.
Climate and hydrology
The climate in the Andes varies greatly depending on latitude, altitude, and proximity to the sea. Temperature, atmospheric pressure, and humidity decrease in higher elevations. The southern section is rainy and cool, while the central section is dry. The northern Andes are typically rainy and warm, with an average temperature of in Colombia. The climate is known to change drastically in rather short distances. Rainforests exist just kilometers away from the snow-covered peak of Cotopaxi. The mountains have a large effect on the temperatures of nearby areas. The snow line depends on the location. It is between in the tropical Ecuadorian, Colombian, Venezuelan, and northern Peruvian Andes, rising to in the drier mountains of southern Peru and northern Chile south to about 30°S before descending to on Aconcagua at 32°S, at 40°S, at 50°S, and only in Tierra del Fuego at 55°S; from 50°S, several of the larger glaciers descend to sea level.
The Andes of Chile and Argentina can be divided into two climatic and glaciological zones: the Dry Andes and the Wet Andes. Since the Dry Andes extend from the latitudes of the Atacama Desert to the area of the Maule River, precipitation is more sporadic, and there are strong temperature oscillations. The line of equilibrium may shift drastically over short periods of time, leaving a whole glacier in the ablation area or in the accumulation area.
In the high Andes of Central Chile and Mendoza Province, rock glaciers are larger and more common than glaciers; this is due to the high exposure to solar radiation. In these regions, glaciers occur typically at higher altitudes than rock glaciers. The lowest active rock glaciers occur at 900 m a.s.l. in Aconcagua.
Though precipitation increases with height, there are semiarid conditions in the nearly highest mountains of the Andes. This dry steppe climate is considered to be typical of the subtropical position at 32–34° S. The valley bottoms have no woods, just dwarf scrub. The largest glaciers, for example the Plomo Glacier and the Horcones Glaciers, do not even reach in length and have only insignificant ice thickness. At glacial times, however, 20,000 years ago, the glaciers were over ten times longer. On the east side of this section of the Mendozina Andes, they flowed down to and on the west side to about above sea level. The massifs of Aconcagua (), Tupungato (), and Nevado Juncal () are tens of kilometres away from each other and were connected by a joint ice stream network. The Andes' dendritic glacier arms, components of valley glaciers, were up to long and over thick, and spanned a vertical distance of . The climatic glacier snowline (ELA) was lowered from to at glacial times.
Flora
The Andean region cuts across several natural and floristic regions, due to its extension, from Caribbean Venezuela to cold, windy, and wet Cape Horn passing through the hyperarid Atacama Desert. Rainforests and tropical dry forests used to encircle much of the northern Andes but are now greatly diminished, especially in the Chocó and inter-Andean valleys of Colombia. Opposite the humid Andean slopes are the relatively dry Andean slopes in most of western Peru, Chile, and Argentina. Along with several Interandean Valles, they are typically dominated by deciduous woodland, shrub and xeric vegetation, reaching the extreme in the slopes near the virtually lifeless Atacama Desert.
About 30,000 species of vascular plants live in the Andes, with roughly half being endemic to the region, surpassing the diversity of any other hotspot. The small tree Cinchona pubescens, a source of quinine that is used to treat malaria, is found widely in the Andes as far south as Bolivia. Other important crops that originated in the Andes are tobacco and potatoes. The high-altitude Polylepis forests and woodlands are found in the Andean areas of Colombia, Ecuador, Peru, Bolivia, and Chile. These trees, referred to by locals as Queñua, Yagual, and other names, can be found at altitudes of above sea level. It remains unclear whether the patchy distribution of these forests and woodlands is natural, or the result of clearing that began during the Incan period. Regardless, in modern times the clearance has accelerated, and the trees are now considered highly endangered, with some believing that as little as 10% of the original woodland remains.
Fauna
The Andes are rich in fauna: with almost 1,000 species, of which roughly two-thirds are endemic to the region, the Andes are the most important region in the world for amphibians. The diversity of animals in the Andes is high, with almost 600 species of mammals (13% endemic), more than 1,700 species of birds (about one-third endemic), more than 600 species of reptiles (about 45% endemic), and almost 400 species of fish (about one-third endemic).
The vicuña and guanaco can be found living in the Altiplano, while the closely related domesticated llama and alpaca are widely kept by locals as pack animals and for their meat and wool. The crepuscular (active during dawn and dusk) chinchillas, two threatened members of the rodent order, inhabit the Andes' alpine regions. The Andean condor, the largest bird of its kind in the Western Hemisphere, occurs throughout much of the Andes but generally in very low densities. Other animals found in the relatively open habitats of the high Andes include the huemul, cougar, foxes in the genus Pseudalopex, and, for birds, certain species of tinamous (notably members of the genus Nothoprocta), Andean goose, giant coot, flamingos (mainly associated with hypersaline lakes), lesser rhea, Andean flicker, diademed sandpiper-plover, miners, sierra-finches and diuca-finches.
Lake Titicaca hosts several endemics, among them the highly endangered Titicaca flightless grebe and Titicaca water frog. A few species of hummingbirds, notably some hillstars, can be seen at altitudes above , but far higher diversities can be found at lower altitudes, especially in the humid Andean forests ("cloud forests") growing on slopes in Colombia, Ecuador, Peru, Bolivia, and far northwestern Argentina. These forest-types, which includes the Yungas and parts of the Chocó, are very rich in flora and fauna, although few large mammals exist, exceptions being the threatened mountain tapir, spectacled bear, and yellow-tailed woolly monkey.
Birds of humid Andean forests include mountain toucans, quetzals, and the Andean cock-of-the-rock, while mixed-species flocks dominated by tanagers and furnariids are commonly seen—in contrast to several vocal but typically cryptic species of wrens, tapaculos, and antpittas.
A number of species such as the royal cinclodes and white-browed tit-spinetail are associated with Polylepis, and consequently also threatened.
Human activity
The Andes Mountains form a north–south axis of cultural influences. A long series of cultural development culminated in the expansion of the Inca civilization and Inca Empire in the central Andes during the 15th century. The Incas formed this civilization through imperialistic militarism as well as careful and meticulous governmental management. The government sponsored the construction of aqueducts and roads in addition to pre-existing installations. Some of these constructions still exist today.
Devastated by European diseases and by civil war, the Incas were defeated in 1532 by an alliance composed of tens of thousands of allies from nations they had subjugated (e.g. Huancas, Chachapoyas, Cañaris) and a small army of 180 Spaniards led by Francisco Pizarro. One of the few Inca sites the Spanish never found in their conquest was Machu Picchu, which lay hidden on a peak on the eastern edge of the Andes where they descend to the Amazon. The main surviving languages of the Andean peoples are those of the Quechua and Aymara language families. Woodbine Parish and Joseph Barclay Pentland surveyed a large part of the Bolivian Andes from 1826 to 1827.
Cities
In modern times, the largest cities in the Andes are Bogotá, with a metropolitan population of over ten million, and Santiago, Medellín, Cali, and Quito. Lima is a coastal city adjacent to the Andes and is the largest city of all Andean countries. It is the seat of the Andean Community of Nations.
La Paz, Bolivia's seat of government, is the highest capital city in the world, at an elevation of approximately . Parts of the La Paz conurbation, including the city of El Alto, extend up to .
Other cities in or near the Andes include Bariloche, Catamarca, Jujuy, Mendoza, Salta, San Juan, Tucumán, and Ushuaia in Argentina; Calama and Rancagua in Chile; Cochabamba, Oruro, Potosí, Sucre, Tarija, and Yacuiba in Bolivia; Arequipa, Cajamarca, Cusco, Huancayo, Huánuco, Huaraz, Juliaca, and Puno in Peru; Ambato, Cuenca, Ibarra, Latacunga, Loja, Riobamba, and Tulcán in Ecuador; Armenia, Cúcuta, Bucaramanga, Duitama, Ibagué, Ipiales, Manizales, Palmira, Pasto, Pereira, Popayán, Rionegro, Sogamoso, Tunja, and Villavicencio in Colombia; and Barquisimeto, La Grita, Mérida, San Cristóbal, Tovar, Trujillo, and Valera in Venezuela. The cities of Caracas, Valencia, and Maracay are in the Venezuelan Coastal Range, which is a debatable extension of the Andes at the northern extremity of South America.
Transportation
Cities and large towns are connected with asphalt-paved roads, while smaller towns are often connected by dirt roads, which may require a four-wheel-drive vehicle.
The rough terrain has historically made highways and railroads across the Andes prohibitively expensive to build, even with modern civil engineering practices. For example, the main crossing between Argentina and Chile is still made via the Paso Internacional Los Libertadores. Only recently have the ends of some highways that approached one another from the east and the west been joined, and much passenger transport is done by aircraft.
One railroad, however, connects Chile with Peru via the Andes, and others make the same connection via southern Bolivia.
There are multiple highways in Bolivia that cross the Andes. Some of these were built during a period of war between Bolivia and Paraguay, in order to transport Bolivian troops and their supplies to the war front in the lowlands of southeastern Bolivia and western Paraguay.
For decades, Chile claimed ownership of land on the eastern side of the Andes. These claims were given up during the War of the Pacific (1879–1884) between Chile and the allied Bolivia and Peru, in a diplomatic deal to keep Argentina out of the war. The Chilean Army and Chilean Navy defeated the combined forces of Bolivia and Peru, and Chile took over Bolivia's only province on the Pacific coast as well as some land from Peru that was returned decades later. Bolivia has been completely landlocked ever since. It mostly uses seaports in eastern Argentina and Uruguay for international trade because its diplomatic relations with Chile have been suspended since 1978.
Because of the tortuous terrain, villages and towns to which motorized travel is of little use are still found in the high Andes of Chile, Bolivia, Peru, and Ecuador. Locally, the llama and the alpaca, relatives of the camel, continue to serve as pack animals, although this use has generally diminished in modern times. Donkeys, mules, and horses are also used.
Agriculture
The ancient peoples of the Andes, such as the Incas, practiced irrigation techniques for over 6,000 years. Because of the mountain slopes, terracing has been a common practice. However, terracing was only employed extensively after the Incan imperial expansions, to feed their growing realm. The potato holds a very important role as an internally consumed staple crop. Maize was also an important crop for these people, and was used for the production of chicha, a drink important to Andean native people. Currently, tobacco, cotton, and coffee are the main export crops. Coca, despite eradication programs in some countries, remains an important crop for legal local use in a mildly stimulating herbal tea, and, illegally, for the production of cocaine.
Irrigation
In unirrigated land, pasture is the most common type of land use. In the rainy season (summer), part of the rangeland is used for cropping (mainly potatoes, barley, broad beans, and wheat).
Irrigation helps advance the sowing date of the summer crops, which guarantees an early yield in periods of food shortage. Also, by early sowing, maize can be cultivated higher up in the mountains (up to ). In addition, irrigation makes cropping in the dry season (winter) possible and allows the cultivation of frost-resistant vegetable crops like onion and carrot.
Mining
The Andes rose to fame for their mineral wealth during the Spanish conquest of South America. Although Andean Amerindian peoples crafted ceremonial jewelry of gold and other metals, the mineralizations of the Andes were first mined on a large scale after the Spanish arrival. Potosí in present-day Bolivia and Cerro de Pasco in Peru were among the principal mines of the Spanish Empire in the New World. Río de la Plata and Argentina derive their names from the silver of Potosí.
Currently, mining in the Andes of Chile and Peru makes these countries the world's first- and second-largest producers of copper, respectively. Peru also contains the fourth-largest goldmine in the world: Yanacocha. The Bolivian Andes principally produce tin, although historically silver mining had a huge impact on the economy of 17th-century Europe. In the higher portions of the Chilean Andes, mining districts are dominated by large-scale operations, while medium- and small-scale mining is more common at lower altitudes.
There is a long history of mining in the Andes, from the Spanish silver mines in Potosí in the 16th century to the vast present-day porphyry copper deposits of Chuquicamata and Escondida in Chile and Toquepala in Peru. Other metals, including iron, gold, and tin, as well as non-metallic resources, are also important. The Andes hold a vast supply of lithium; Argentina, Bolivia, and Chile have the three largest reserves in the world.
Peaks
This list contains some of the major peaks in the Andes mountain range. The highest peak is Aconcagua of Argentina.
Argentina
Aconcagua,
Cerro Bonete,
Galán,
Mercedario,
Pissis,
The border between Argentina and Chile
Cerro Bayo,
Cerro Fitz Roy, 3,405 m, Patagonia, also known as Cerro Chaltén
Cerro Escorial,
Cordón del Azufre,
Falso Azufre,
Incahuasi,
Lastarria,
Llullaillaco,
Maipo,
Marmolejo,
Ojos del Salado,
Olca,
Sierra Nevada de Lagunas Bravas,
Socompa,
Nevado Tres Cruces, (south summit) (III Region)
Tronador,
Tupungato,
Nacimiento,
Bolivia
Janq'u Uma,
Cabaraya,
Chacaltaya,
Chachacomani,
Chaupi Orco,
Huayna Potosí,
Illampu,
Illimani,
Laram Q'awa,
Macizo de Pacuni,
Mururata,
Nevado Anallajsi,
Nevado Charquini,
Nevado Sajama,
Patilla Pata,
Tata Sabaya,
Tunari,
Uturuncu,
Wayna Potosí,
Border between Bolivia and Chile
Acotango,
Aucanquilcha,
Michincha,
Iru Phutunqu,
Licancabur,
Olca,
Parinacota,
Paruma,
Pomerape,
Chile
Monte San Valentin,
Cerro Paine Grande,
Cerro Macá, c.
Monte Darwin, c.
Volcan Hudson, c.
Cerro Castillo Dynevor, c.
Mount Tarn, c.
Polleras, c.
Acamarachi, c.
Colombia
Nevado del Huila,
Nevado del Ruiz,
Nevado del Tolima,
Pico Pan de Azúcar,
Ritacuba Negro,
Nevado del Cumbal,
Cerro Negro de Mayasquer,
Ritacuba Blanco,
Nevado del Quindío,
Puracé,
Santa Isabel,
Doña Juana,
Galeras,
Azufral,
Ecuador
Antisana,
Cayambe,
Chiles,
Chimborazo,
Corazón,
Cotopaxi,
El Altar,
Illiniza,
Pichincha,
Quilotoa,
Reventador,
Sangay,
Tungurahua,
Peru
Alpamayo,
Artesonraju,
Carnicero,
Chumpe,
Coropuna,
El Misti,
El Toro,
Huandoy,
Huascarán,
Jirishanca,
Pumasillo,
Rasac,
Rondoy,
Sarapo,
Salcantay,
Seria Norte,
Siula Grande,
Huaytapallana,
Yerupaja,
Yerupaja Chico,
Venezuela
Pico Bolívar,
Pico Humboldt,
Pico Bonpland,
Pico La Concha,
Pico Piedras Blancas,
Pico El Águila,
Pico El Toro
Pico El León
Pico Mucuñuque
See also
Andean Geology—a scientific journal
Andesite line
Apu (god)
Mountain passes of the Andes
List of mountain ranges
Sutter Buttes
Notes
References
Biggar, J. (2005). The Andes: A Guide for Climbers. 3rd edition. Andes: Kirkcudbrightshire.
de Roy, T. (2005). The Andes: As the Condor Flies. Firefly Books: Richmond Hill.
Fjeldså, J. & N. Krabbe (1990). The Birds of the High Andes. Zoological Museum, University of Copenhagen: Copenhagen.
Fjeldså, J. & M. Kessler (1996). Conserving the biological diversity of Polylepis woodlands of the highlands of Peru and Bolivia, a contribution to sustainable natural resource management in the Andes. NORDECO: Copenhagen.
Bibliography
External links
University of Arizona: Andes geology
Blueplanetbiomes.org: Climate and animal life of the Andes
Discover-peru.org: Regions and Microclimates in the Andes
Peaklist.org: Complete list of mountains in South America with an elevation at/above
|
;*;Mountain ranges of South America;Physiographic divisions;Regions of South America
|
https://en.wikipedia.org/wiki/Ammonia
|
Ammonia is an inorganic chemical compound of nitrogen and hydrogen with the formula NH3. A stable binary hydride and the simplest pnictogen hydride, ammonia is a colourless gas with a distinctive pungent smell. Biologically, it is a common nitrogenous waste, and it contributes significantly to the nutritional needs of terrestrial organisms by serving as a precursor to fertilisers. Around 70% of ammonia produced industrially is used to make fertilisers in various forms and compositions, such as urea and diammonium phosphate. Ammonia in pure form is also applied directly into the soil.
Ammonia, either directly or indirectly, is also a building block for the synthesis of many chemicals.
Ammonia occurs in nature and has been detected in the interstellar medium. In many countries, it is classified as an extremely hazardous substance.
Ammonia is toxic, causing damage to cells and tissues. For this reason it is excreted by most animals in the urine, in the form of dissolved urea.
Ammonia is produced biologically in a process called nitrogen fixation, but even more is generated industrially by the Haber process. The process helped revolutionize agriculture by providing cheap fertilizers. The global industrial production of ammonia in 2021 was 235 million tonnes. Industrial ammonia is transported by road in tankers, by rail in tank wagons, by sea in gas carriers, or in cylinders.
Ammonia boils at −33.3 °C at a pressure of one atmosphere, but the liquid can often be handled in the laboratory without external cooling. Household ammonia or ammonium hydroxide is a solution of ammonia in water.
Etymology
Pliny, in Book XXXI of his Natural History, refers to a salt named hammoniacum, so called because of the proximity of its source to the Temple of Jupiter Amun (Greek Ἄμμων Ammon) in the Roman province of Cyrenaica. However, the description Pliny gives of the salt does not conform to the properties of ammonium chloride. According to Herbert Hoover's commentary in his English translation of Georgius Agricola's De re metallica, it is likely to have been common sea salt. In any case, that salt ultimately gave ammonia and ammonium compounds their name.
Natural occurrence (abiological)
Traces of ammonia/ammonium are found in rainwater. Ammonium chloride (sal ammoniac), and ammonium sulfate are found in volcanic districts. Crystals of ammonium bicarbonate have been found in Patagonia guano.
Ammonia is found throughout the Solar System on Mars, Jupiter, Saturn, Uranus, Neptune, and Pluto, among other places. On smaller, icy bodies such as Pluto, ammonia can act as a geologically important antifreeze, as a mixture of water and ammonia can have a melting point far below that of pure water ice if the ammonia concentration is high enough, allowing such bodies to retain internal oceans and active geology at a far lower temperature than would be possible with water alone. Substances containing ammonia, or those that are similar to it, are called ammoniacal.
Properties
Ammonia is a colourless gas with a characteristically pungent smell. It is lighter than air, its density being 0.589 times that of air. It is easily liquefied due to the strong hydrogen bonding between molecules. Gaseous ammonia condenses to a colourless liquid, which boils at −33.3 °C, and freezes to colourless crystals at −77.7 °C. Little data is available at very high temperatures and pressures, but the liquid–vapor critical point occurs at 405 K and 11.35 MPa.
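The quoted relative density follows directly from the ideal-gas law: at the same temperature and pressure, the density ratio of two gases equals the ratio of their molar masses. A minimal Python sketch, assuming a mean molar mass of dry air of about 28.97 g/mol (a standard approximation, not given in this article):

```python
# Relative density of ammonia versus dry air from molar masses.
# Assumes ideal-gas behaviour and a mean molar mass of dry air
# of ~28.97 g/mol (a common textbook value, not from the article).

M_NH3 = 17.031   # g/mol, N (14.007) + 3 H (1.008)
M_AIR = 28.97    # g/mol, mean value for dry air

relative_density = M_NH3 / M_AIR
print(f"NH3 density relative to air: {relative_density:.3f}")  # ~0.588
```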
Solid
The crystal symmetry is cubic, with Pearson symbol cP16, space group P2₁3 (No. 198), and lattice constant 0.5125 nm.
Liquid
Liquid ammonia possesses strong ionising powers reflecting its high ε of 22 at . Liquid ammonia has a very high standard enthalpy change of vapourization (23.5 kJ/mol; for comparison, water's is 40.65 kJ/mol, methane 8.19 kJ/mol and phosphine 14.6 kJ/mol) and can be transported in pressurized or refrigerated vessels; however, at standard temperature and pressure liquid anhydrous ammonia will vaporize.
Solvent properties
Ammonia readily dissolves in water. In an aqueous solution, it can be expelled by boiling. The aqueous solution of ammonia is basic, and may be described as aqueous ammonia or ammonium hydroxide. The maximum concentration of ammonia in water (a saturated solution) has a specific gravity of 0.880 and is often known as '.880 ammonia'.
Liquid ammonia is a widely studied nonaqueous ionising solvent. Its most conspicuous property is its ability to dissolve alkali metals to form highly coloured, electrically conductive solutions containing solvated electrons. Apart from these remarkable solutions, much of the chemistry in liquid ammonia can be classified by analogy with related reactions in aqueous solutions. Comparison of the physical properties of NH3 with those of water shows that NH3 has the lower melting point, boiling point, density, viscosity, dielectric constant and electrical conductivity. These differences are attributed at least in part to the weaker hydrogen bonding in NH3. The ionic self-dissociation constant of liquid NH3 at −50 °C is about 10⁻³³.
Liquid ammonia is an ionising solvent, although less so than water, and dissolves a range of ionic compounds, including many nitrates, nitrites, cyanides, thiocyanates, metal cyclopentadienyl complexes and metal bis(trimethylsilyl)amides. Most ammonium salts are soluble and act as acids in liquid ammonia solutions. The solubility of halide salts increases from fluoride to iodide. A saturated solution of ammonium nitrate (Divers' solution, named after Edward Divers) contains 0.83 mol solute per mole of ammonia and has a vapour pressure of less than 1 bar even at . However, few oxyanion salts with other cations dissolve.
Liquid ammonia will dissolve all of the alkali metals and other electropositive metals such as Ca, Sr, Ba, Eu and Yb (also Mg using an electrolytic process). At low concentrations (<0.06 mol/L), deep blue solutions are formed: these contain metal cations and solvated electrons, free electrons that are surrounded by a cage of ammonia molecules.
These solutions are strong reducing agents. At higher concentrations, the solutions are metallic in appearance and in electrical conductivity. At low temperatures, the two types of solution can coexist as immiscible phases.
Redox properties of liquid ammonia
The range of thermodynamic stability of liquid ammonia solutions is very narrow, as the potential for oxidation to dinitrogen is only +0.04 V. In practice, both oxidation to dinitrogen and reduction to dihydrogen are slow. This is particularly true of reducing solutions: the solutions of the alkali metals mentioned above are stable for several days, slowly decomposing to the metal amide and dihydrogen. Most studies involving liquid ammonia solutions are done in reducing conditions; although oxidation of liquid ammonia is usually slow, there is still a risk of explosion, particularly if transition metal ions are present as possible catalysts.
Structure
The ammonia molecule has a trigonal pyramidal shape, as predicted by the valence shell electron pair repulsion theory (VSEPR theory), with an experimentally determined bond angle of 106.7°. The central nitrogen atom has five outer electrons, with an additional electron from each hydrogen atom. This gives a total of eight electrons, or four electron pairs that are arranged tetrahedrally. Three of these electron pairs are used as bond pairs, which leaves one lone pair of electrons. The lone pair repels more strongly than bond pairs; therefore, the bond angle is not 109.5°, as expected for a regular tetrahedral arrangement, but 106.7°. This shape gives the molecule a dipole moment and makes it polar. The molecule's polarity, and especially its ability to form hydrogen bonds, makes ammonia highly miscible with water. The lone pair makes ammonia a base, a proton acceptor. Ammonia is moderately basic; a 1.0 M aqueous solution has a pH of 11.6, and if a strong acid is added to such a solution until the solution is neutral (pH 7), 99.4% of the ammonia molecules are protonated. Temperature and salinity also affect the proportion of ammonium (NH4+). The latter has the shape of a regular tetrahedron and is isoelectronic with methane.
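Both figures in this paragraph can be reproduced from the standard equilibrium constants of aqueous ammonia (Kb ≈ 1.8×10⁻⁵ and pKa of NH4+ ≈ 9.25 at 25 °C; these are textbook values, not given in this article). A short Python sketch under those assumptions:

```python
import math

# Assumed textbook constants (not from the article):
Kb  = 1.8e-5    # base dissociation constant of NH3 at 25 C
pKa = 9.25      # acid dissociation constant of NH4+ at 25 C

# pH of a 1.0 M ammonia solution: NH3 + H2O <=> NH4+ + OH-
C = 1.0
OH = (-Kb + math.sqrt(Kb**2 + 4 * Kb * C)) / 2   # exact quadratic root
pH = 14 + math.log10(OH)
print(f"pH of 1.0 M NH3: {pH:.1f}")              # ~11.6

# Fraction protonated at neutral pH (Henderson-Hasselbalch):
frac = 1 / (1 + 10 ** (7.0 - pKa))
print(f"protonated at pH 7: {frac:.1%}")         # ~99.4%
```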
The ammonia molecule readily undergoes nitrogen inversion at room temperature; a useful analogy is an umbrella turning itself inside out in a strong wind. The energy barrier to this inversion is 24.7 kJ/mol, and the resonance frequency is 23.79 GHz, corresponding to microwave radiation of a wavelength of 1.260 cm. The absorption at this frequency was the first microwave spectrum to be observed and was used in the first maser.
Amphotericity
One of the most characteristic properties of ammonia is its basicity. Ammonia is considered to be a weak base. It combines with acids to form ammonium salts; thus, with hydrochloric acid it forms ammonium chloride (sal ammoniac); with nitric acid, ammonium nitrate, etc. Perfectly dry ammonia gas will not combine with perfectly dry hydrogen chloride gas; moisture is necessary to bring about the reaction.
As a demonstration experiment under air with ambient moisture, opened bottles of concentrated ammonia and hydrochloric acid solutions produce a cloud of ammonium chloride, which seems to appear 'out of nothing' as the salt aerosol forms where the two diffusing clouds of reagents meet between the two bottles.
The salts produced by the action of ammonia on acids are known as the ammonium salts and all contain the ammonium ion (NH4+).
Although ammonia is well known as a weak base, it can also act as an extremely weak acid. It is a protic substance and is capable of forming amides (which contain the NH2− ion). For example, lithium dissolves in liquid ammonia to give a blue solution (solvated electron) of lithium amide:
2 Li + 2 NH3 → 2 LiNH2 + H2
Self-dissociation
Like water, liquid ammonia undergoes molecular autoionisation to form its acid and base conjugates:
2 NH3 ⇌ NH4+ + NH2−
Ammonia often functions as a weak base, so it has some buffering ability. Shifts in pH will cause more or fewer ammonium cations (NH4+) and amide anions (NH2−) to be present in solution. At standard pressure and temperature, this equilibrium lies overwhelmingly toward undissociated ammonia.
Combustion
Ammonia does not burn readily or sustain combustion, except under narrow fuel-to-air mixtures of 15–28% ammonia by volume in air. When mixed with oxygen, it burns with a pale yellowish-green flame. Ignition occurs when chlorine is passed into ammonia, forming nitrogen and hydrogen chloride; if chlorine is present in excess, then the highly explosive nitrogen trichloride () is also formed.
The combustion of ammonia to form nitrogen and water is exothermic:
4 NH3 + 3 O2 → 2 N2 + 6 H2O(l)
The standard enthalpy change of combustion, ΔH°c, expressed per mole of ammonia and with condensation of the water formed, is −382.81 kJ/mol. Dinitrogen is the thermodynamic product of combustion: all nitrogen oxides are unstable with respect to N2 and O2, which is the principle behind the catalytic converter. Nitrogen oxides can be formed as kinetic products in the presence of appropriate catalysts, a reaction of great industrial importance in the production of nitric acid:
4 NH3 + 5 O2 → 4 NO + 6 H2O
A subsequent reaction leads to NO2:
2 NO + O2 → 2 NO2
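The enthalpy of combustion quoted above (−382.81 kJ per mole of ammonia, with liquid water formed) can be checked by Hess's law from tabulated standard enthalpies of formation; the values below are common textbook figures, not taken from this article:

```python
# Hess's law check of NH3 combustion enthalpy (per mole of NH3):
#   4 NH3(g) + 3 O2(g) -> 2 N2(g) + 6 H2O(l)
# Standard enthalpies of formation in kJ/mol (assumed textbook values):
dHf_NH3 = -45.9      # NH3(g)
dHf_H2O = -285.83    # H2O(l)
dHf_N2  = 0.0        # elements in their standard states
dHf_O2  = 0.0

dH_reaction = (2 * dHf_N2 + 6 * dHf_H2O) - (4 * dHf_NH3 + 3 * dHf_O2)
print(f"per mole NH3: {dH_reaction / 4:.1f} kJ/mol")  # ~ -382.8
```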
The combustion of ammonia in air is very difficult in the absence of a catalyst (such as platinum gauze or warm chromium(III) oxide), due to the relatively low heat of combustion, a lower laminar burning velocity, high auto-ignition temperature, high heat of vapourization, and a narrow flammability range. However, recent studies have shown that efficient and stable combustion of ammonia can be achieved using swirl combustors, thereby rekindling research interest in ammonia as a fuel for thermal power production. The flammable range of ammonia in dry air is 15.15–27.35% and in 100% relative humidity air is 15.95–26.55%. For studying the kinetics of ammonia combustion, knowledge of a detailed reliable reaction mechanism is required, but this has been challenging to obtain.
Precursor to organonitrogen compounds
Ammonia is a direct or indirect precursor to most manufactured nitrogen-containing compounds. It is the precursor to nitric acid, which is the source for most N-substituted aromatic compounds.
Amines can be formed by the reaction of ammonia with alkyl halides or, more commonly, with alcohols; for example, methanol reacts with ammonia to give methylamine:
CH3OH + NH3 → CH3NH2 + H2O
Its ring-opening reaction with ethylene oxide gives ethanolamine, diethanolamine, and triethanolamine.
Amides can be prepared by the reaction of ammonia with carboxylic acid and their derivatives. For example, ammonia reacts with formic acid (HCOOH) to yield formamide () when heated. Acyl chlorides are the most reactive, but the ammonia must be present in at least a twofold excess to neutralise the hydrogen chloride formed. Esters and anhydrides also react with ammonia to form amides. Ammonium salts of carboxylic acids can be dehydrated to amides by heating to 150–200 °C as long as no thermally sensitive groups are present.
Amino acids, using Strecker amino-acid synthesis
Acrylonitrile, in the Sohio process
Other organonitrogen compounds include alprazolam, ethanolamine, ethyl carbamate and hexamethylenetetramine.
Precursor to inorganic nitrogenous compounds
Nitric acid is generated via the Ostwald process by oxidation of ammonia with air over a platinum catalyst at , ≈9 atm. Nitric oxide and nitrogen dioxide are intermediates in this conversion:
4 NH3 + 5 O2 → 4 NO + 6 H2O
2 NO + O2 → 2 NO2
3 NO2 + H2O → 2 HNO3 + NO
Nitric acid is used for the production of fertilisers, explosives, and many organonitrogen compounds.
The hydrogen in ammonia is susceptible to replacement by a myriad of substituents.
Ammonia gas reacts with metallic sodium to give sodamide, NaNH2.
With chlorine, monochloramine is formed.
Pentavalent ammonia, known as λ⁵-amine or nitrogen pentahydride, decomposes spontaneously into trivalent ammonia (λ³-amine) and hydrogen gas under normal conditions. The substance was investigated as a possible solid rocket fuel in 1966.
Ammonia is also used to make the following compounds:
Hydrazine, in the Olin Raschig process and the peroxide process
Hydrogen cyanide, in the BMA process and the Andrussow process
Hydroxylamine and ammonium carbonate, in the Raschig process
Urea, in the Bosch–Meiser urea process and in Wöhler synthesis
Ammonium perchlorate, ammonium nitrate, and ammonium bicarbonate
Ammonia is a ligand forming metal ammine complexes. For historical reasons, ammonia is named ammine in the nomenclature of coordination compounds. One notable ammine complex is cisplatin (cis-[Pt(NH3)2Cl2]), a widely used anticancer drug. Ammine complexes of chromium(III) formed the basis of Alfred Werner's revolutionary theory on the structure of coordination compounds. Werner noted that only two isomers (fac- and mer-) of the complex [CrCl3(NH3)3] could be formed, and concluded that the ligands must be arranged around the metal ion at the vertices of an octahedron.
Ammonia forms 1:1 adducts with a variety of Lewis acids, such as phenol. Ammonia is a hard base (HSAB theory) and its E & C parameters are EB = 2.31 and CB = 2.04. Its relative donor strength toward a series of acids, versus other Lewis bases, can be illustrated by C-B plots.
Detection and determination
Ammonia in solution
Ammonia and ammonium salts can be readily detected, in very minute traces, by the addition of Nessler's solution, which gives a distinct yellow colouration in the presence of the slightest trace of ammonia or ammonium salts. The amount of ammonia in ammonium salts can be estimated quantitatively by distillation of the salts with sodium hydroxide (NaOH) or potassium hydroxide (KOH); the ammonia evolved is absorbed in a known volume of standard sulfuric acid, and the excess of acid is then determined volumetrically. Alternatively, the ammonia may be absorbed in hydrochloric acid and the ammonium chloride so formed precipitated as ammonium hexachloroplatinate, (NH4)2[PtCl6].
Gaseous ammonia
Sulfur sticks are burnt to detect small leaks in industrial ammonia refrigeration systems. Larger quantities can be detected by warming the salts with a caustic alkali or with quicklime, when the characteristic smell of ammonia will be at once apparent. Ammonia is an irritant, and irritation increases with concentration; the permissible exposure limit is 25 ppm, and exposure is lethal above 500 ppm by volume. Higher concentrations are hardly detected by conventional detectors; the type of detector is chosen according to the sensitivity required (e.g. semiconductor, catalytic, electrochemical). Holographic sensors have been proposed for detecting concentrations up to 12.5% by volume.
In a laboratory setting, gaseous ammonia can be detected by using concentrated hydrochloric acid or gaseous hydrogen chloride. A dense white fume of ammonium chloride arises from the reaction between ammonia and HCl(g).
Ammoniacal nitrogen (NH3–N)
Ammoniacal nitrogen (NH3–N) is a measure commonly used for testing the quantity of ammonium ions, derived naturally from ammonia and returned to ammonia via organic processes, in water or waste liquids. It is a measure used mainly for quantifying values in waste treatment and water purification systems, as well as a measure of the health of natural and man-made water reserves. It is measured in units of mg/L (milligrams per litre).
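Because NH3–N reports only the nitrogen mass, converting a reading to the corresponding mass of ammonia is a simple molar-mass ratio. A small Python sketch (the conversion factor 17.031/14.007 follows from standard atomic weights; the helper function name is illustrative, not a standard API):

```python
# Convert an ammoniacal-nitrogen reading (mg/L as N) to mg/L of NH3.
# Factor = M(NH3) / M(N) = 17.031 / 14.007 ~ 1.216 (standard atomic weights).

def nh3n_to_nh3(nh3n_mg_per_l: float) -> float:
    """Illustrative helper: NH3-N (mg/L) -> equivalent NH3 (mg/L)."""
    return nh3n_mg_per_l * 17.031 / 14.007

print(f"{nh3n_to_nh3(10.0):.1f} mg/L NH3")  # a 10 mg/L NH3-N reading ~ 12.2
```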
History
The ancient Greek historian Herodotus mentioned that there were outcrops of salt in an area of Libya that was inhabited by a people called the 'Ammonians' (now the Siwa oasis in northwestern Egypt, where salt lakes still exist). The Greek geographer Strabo also mentioned the salt from this region. However, the ancient authors Dioscorides, Apicius, Arrian, Synesius, and Aëtius of Amida described this salt as forming clear crystals that could be used for cooking and that were essentially rock salt. Hammoniacus sal appears in the writings of Pliny, although it is not known whether the term is equivalent to the more modern sal ammoniac (ammonium chloride).
The fermentation of urine by bacteria produces a solution of ammonia; hence fermented urine was used in Classical Antiquity to wash cloth and clothing, to remove hair from hides in preparation for tanning, to serve as a mordant in dyeing cloth, and to remove rust from iron. It was also used by ancient dentists to wash teeth.
In the form of sal ammoniac (نشادر, nushadir), ammonia was important to the Muslim alchemists. It was mentioned in the Book of Stones, likely written in the 9th century and attributed to Jābir ibn Hayyān. It was also important to the European alchemists of the 13th century, being mentioned by Albertus Magnus. It was also used by dyers in the Middle Ages in the form of fermented urine to alter the colour of vegetable dyes. In the 15th century, Basilius Valentinus showed that ammonia could be obtained by the action of alkalis on sal ammoniac. At a later period, when sal ammoniac was obtained by distilling the hooves and horns of oxen and neutralizing the resulting carbonate with hydrochloric acid, the name 'spirit of hartshorn' was applied to ammonia.
Gaseous ammonia was first isolated by Joseph Black in 1756 by reacting sal ammoniac (ammonium chloride) with calcined magnesia (magnesium oxide). It was isolated again by Peter Woulfe in 1767, by Carl Wilhelm Scheele in 1770 and by Joseph Priestley in 1773 and was termed by him 'alkaline air'. Eleven years later in 1785, Claude Louis Berthollet ascertained its composition.
The production of ammonia from nitrogen in the air (and hydrogen) was invented by Fritz Haber and Robert Le Rossignol. The patent was filed in 1909 (USPTO No. 1,202,995) and granted in 1916. Later, Carl Bosch developed the industrial method for ammonia production (the Haber–Bosch process). It was first used on an industrial scale in Germany during World War I, following the Allied blockade that cut off the supply of nitrates from Chile. The ammonia was used to produce explosives to sustain the war effort. The Nobel Prize in Chemistry 1918 was awarded to Fritz Haber "for the synthesis of ammonia from its elements".
Before the availability of natural gas, hydrogen as a precursor to ammonia production was produced via the electrolysis of water or using the chloralkali process.
With the advent of the steel industry in the 20th century, ammonia became a byproduct of the production of coke from coal.
Applications
Fertiliser
In the US, approximately 88% of ammonia was used as fertiliser, either as its salts, in solution, or applied anhydrously. When applied to soil, it helps provide increased yields of crops such as maize and wheat. 30% of agricultural nitrogen applied in the US is in the form of anhydrous ammonia, and worldwide, 110 million tonnes are applied each year.
Solutions of ammonia ranging from 16% to 25% are used in the fermentation industry as a source of nitrogen for microorganisms and to adjust pH during fermentation.
Refrigeration–R717
Because of ammonia's vapourization properties, it is a useful refrigerant. It was commonly used before the popularisation of chlorofluorocarbons (Freons). Anhydrous ammonia is widely used in industrial refrigeration applications and hockey rinks because of its high energy efficiency and low cost. However, it is toxic and requires corrosion-resistant components, which restricts its domestic and small-scale use. Along with its use in modern vapour-compression refrigeration, it is used in a mixture with hydrogen and water in absorption refrigerators. The Kalina cycle, which is of growing importance to geothermal power plants, depends on the wide boiling range of the ammonia–water mixture.
Ammonia coolant is also used in the radiators aboard the International Space Station in loops that are used to regulate the internal temperature and enable temperature-dependent experiments. The ammonia is under sufficient pressure to remain liquid throughout the process. Single-phase ammonia cooling systems also serve the power electronics in each pair of solar arrays.
The potential importance of ammonia as a refrigerant has increased with the discovery that vented CFCs and HFCs are potent and stable greenhouse gases.
Antimicrobial agent for food products
As early as 1895, it was known that ammonia was 'strongly antiseptic; it requires 1.4 grams per litre to preserve beef tea (broth).' In one study, anhydrous ammonia destroyed 99.999% of zoonotic bacteria in three types of animal feed, but not silage. Anhydrous ammonia is currently used commercially to reduce or eliminate microbial contamination of beef.
Lean finely textured beef (popularly known as 'pink slime') in the beef industry is made from fatty beef trimmings (c. 50–70% fat) by removing the fat using heat and centrifugation, then treating it with ammonia to kill E. coli. The process was deemed effective and safe by the US Department of Agriculture based on a study that found that the treatment reduces E. coli to undetectable levels. There have been safety concerns about the process as well as consumer complaints about the taste and smell of ammonia-treated beef.
Fuel
Ammonia has been used as fuel, and is a proposed alternative to fossil fuels and hydrogen. Being liquid at ambient temperature under its own vapour pressure and having high volumetric and gravimetric energy density, ammonia is considered a suitable carrier for hydrogen, and may be cheaper than direct transport of liquid hydrogen.
Compared to hydrogen, ammonia is easier to store. Compared to hydrogen as a fuel, ammonia is much more energy efficient, and could be produced, stored and delivered at a much lower cost than hydrogen, which must be kept compressed or as a cryogenic liquid. The raw energy density of liquid ammonia is 11.5 MJ/L, which is about a third that of diesel.
Ammonia can be converted back to hydrogen to be used to power hydrogen fuel cells, or it may be used directly within high-temperature solid oxide direct ammonia fuel cells to provide efficient power sources that do not emit greenhouse gases. Ammonia to hydrogen conversion can be achieved through the sodium amide process or the catalytic decomposition of ammonia using solid catalysts.
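Ammonia's appeal as a hydrogen carrier comes largely from its hydrogen density, which is straightforward to estimate. The sketch below uses standard molar masses and an approximate liquid-ammonia density of 0.68 kg/L (an assumed round figure; the exact value depends on temperature):

```python
# Hydrogen content of ammonia as a hydrogen carrier.
M_NH3, M_H = 17.031, 1.008   # g/mol, standard atomic weights
rho_liquid_NH3 = 0.68        # kg/L near -33 C (approximate, assumed)

mass_fraction_H = 3 * M_H / M_NH3
print(f"H mass fraction: {mass_fraction_H:.1%}")             # ~17.8%

# Hydrogen stored per litre of liquid ammonia:
g_H_per_litre = rho_liquid_NH3 * 1000 * mass_fraction_H
print(f"~{g_H_per_litre:.0f} g H2 per litre of liquid NH3")  # ~120 g/L
```

For comparison, liquid hydrogen itself stores roughly 71 g of hydrogen per litre, which is a large part of why ammonia is discussed as a carrier.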
Ammonia engines or ammonia motors, using ammonia as a working fluid, have been proposed and occasionally used. The principle is similar to that used in a fireless locomotive, but with ammonia as the working fluid, instead of steam or compressed air. Ammonia engines were used experimentally in the 19th century by Goldsworthy Gurney in the UK and the St. Charles Streetcar Line in New Orleans in the 1870s and 1880s, and during World War II ammonia was used to power buses in Belgium.
Ammonia is sometimes proposed as a practical alternative to fossil fuel for internal combustion engines. However, ammonia cannot be easily used in existing Otto cycle engines because of its very narrow flammability range. Despite this, several tests have been run. Its high octane rating of 120 and low flame temperature allow the use of high compression ratios without a penalty of high NOx production. Since ammonia contains no carbon, its combustion cannot produce carbon dioxide, carbon monoxide, hydrocarbons, or soot.
Ammonia production currently creates 1.8% of global CO2 emissions. 'Green ammonia' is ammonia produced by using green hydrogen (hydrogen produced by electrolysis with electricity from renewable energy), whereas 'blue ammonia' is ammonia produced using blue hydrogen (hydrogen produced by steam methane reforming) where the carbon dioxide has been captured and stored.
Rocket engines have also been fueled by ammonia. The Reaction Motors XLR99 rocket engine that powered the X-15 hypersonic research aircraft used liquid ammonia. Although not as powerful as other fuels, it left no soot in the reusable rocket engine, and its density approximately matches that of the oxidiser, liquid oxygen, which simplified the aircraft's design.
In 2020, Saudi Arabia shipped 40 metric tons of liquid 'blue ammonia' to Japan for use as a fuel. It was produced as a by-product by petrochemical industries, and can be burned without giving off greenhouse gases. Its energy density by volume is nearly double that of liquid hydrogen. If the process of creating it can be scaled up via purely renewable resources, producing green ammonia, it could make a major difference in avoiding climate change. The company ACWA Power and the city of Neom have announced the construction of a green hydrogen and ammonia plant in 2020.
Green ammonia is considered a potential fuel for future container ships. In 2020, the companies DSME and MAN Energy Solutions announced the construction of an ammonia-fuelled ship; DSME plans to commercialize it by 2025. The use of ammonia as a potential alternative fuel for aircraft jet engines is also being explored.
Japan intends to implement a plan to develop ammonia co-firing technology that can increase the use of ammonia in power generation, as part of efforts to assist domestic and other Asian utilities to accelerate their transition to carbon neutrality.
In October 2021, the first International Conference on Fuel Ammonia (ICFA2021) was held.
In June 2022, IHI Corporation succeeded in reducing greenhouse gases by over 99% during combustion of liquid ammonia in a 2,000-kilowatt-class gas turbine, achieving truly CO2-free power generation.
In July 2022, the Quad nations of Japan, the US, Australia and India agreed to promote technological development for clean-burning hydrogen and ammonia as fuels at the security grouping's first energy meeting. When ammonia is burned in air, however, significant amounts of NOx can be produced. Nitrous oxide may also be a problem, as it is a greenhouse gas "known to possess up to 300 times the Global Warming Potential (GWP) of carbon dioxide".
The IEA forecasts that ammonia will meet approximately 45% of shipping fuel demands by 2050.
At high temperature and in the presence of a suitable catalyst, ammonia decomposes into its constituent elements. Decomposition of ammonia is a slightly endothermic process, requiring 23 kJ/mol (5.5 kcal/mol) of ammonia, and yields hydrogen and nitrogen gas:
2 NH3 → N2 + 3 H2
Other
Remediation of gaseous emissions
Ammonia is used to scrub SO2 from the burning of fossil fuels, and the resulting product is converted to ammonium sulfate for use as fertiliser. Ammonia neutralises the nitrogen oxide (NOx) pollutants emitted by diesel engines. This technology, called SCR (selective catalytic reduction), relies on a vanadia-based catalyst.
Ammonia may be used to mitigate gaseous spills of phosgene.
Stimulant
Ammonia, as the vapour released by smelling salts, has found significant use as a respiratory stimulant. Ammonia is commonly used in the illegal manufacture of methamphetamine through a Birch reduction. The Birch method of making methamphetamine is dangerous because the alkali metal and liquid ammonia are both extremely reactive, and the temperature of liquid ammonia makes it susceptible to explosive boiling when reactants are added.
Textile
Liquid ammonia is used for the treatment of cotton materials, giving properties like those of mercerisation with alkalis. In particular, it is used for the prewashing of wool.
Lifting gas
At standard temperature and pressure, ammonia is less dense than air and has approximately 45–48% of the lifting power of hydrogen or helium. Ammonia has sometimes been used to fill balloons as a lifting gas. Because of its relatively high boiling point (compared to helium and hydrogen), ammonia could potentially be refrigerated and liquefied aboard an airship to reduce lift and add ballast (and returned to a gas to add lift and reduce ballast).
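The quoted 45–48% figure can be reproduced from Archimedes' principle: the lift of a gas is proportional to the difference between the molar mass of air and that of the gas. A quick Python check with standard molar masses:

```python
# Relative lifting power of ammonia versus hydrogen and helium.
# Lift per mole of displaced air is proportional to (M_air - M_gas).
M_air, M_NH3, M_H2, M_He = 28.97, 17.031, 2.016, 4.003  # g/mol

lift = lambda M_gas: M_air - M_gas
print(f"vs hydrogen: {lift(M_NH3) / lift(M_H2):.1%}")  # ~44%
print(f"vs helium:   {lift(M_NH3) / lift(M_He):.1%}")  # ~48%
```

These two ratios bracket the range quoted above.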
Fuming
Ammonia has been used to darken quartersawn white oak in Arts & Crafts and Mission-style furniture. Ammonia fumes react with the natural tannins in the wood and cause it to change colour.
Safety
The US Occupational Safety and Health Administration (OSHA) has set a 15-minute exposure limit for gaseous ammonia of 35 ppm by volume in the environmental air and an 8-hour exposure limit of 25 ppm by volume. The National Institute for Occupational Safety and Health (NIOSH) recently reduced the IDLH (Immediately Dangerous to Life or Health, the level to which a healthy worker can be exposed for 30 minutes without suffering irreversible health effects) from 500 to 300 ppm based on recent, more conservative interpretations of original research from 1943. Other organisations have varying exposure levels. US Navy standards [U.S. Bureau of Ships 1962] set maximum allowable concentrations (MACs) of 25 ppm for continuous exposure (60 days) and 400 ppm for exposures of one hour.
Ammonia vapour has a sharp, irritating, pungent odor that acts as a warning of potentially dangerous exposure. The average odor threshold is 5 ppm, well below any danger or damage. Exposure to very high concentrations of gaseous ammonia can result in lung damage and death. Ammonia is regulated in the US as a non-flammable gas, but it meets the definition of a material that is toxic by inhalation and requires a hazardous safety permit when transported in quantities greater than .
Liquid ammonia is dangerous because it is hygroscopic and because it can cause caustic burns.
Toxicity
The toxicity of ammonia solutions does not usually cause problems for humans and other mammals, as a specific mechanism exists to prevent its build-up in the bloodstream. Ammonia is converted to carbamoyl phosphate by the enzyme carbamoyl phosphate synthetase, and then enters the urea cycle to be either incorporated into amino acids or excreted in the urine. Fish and amphibians lack this mechanism, as they can usually eliminate ammonia from their bodies by direct excretion. Ammonia even at dilute concentrations is highly toxic to aquatic animals, and for this reason it is classified as "dangerous for the environment". Atmospheric ammonia plays a key role in the formation of fine particulate matter.
Ammonia is a constituent of tobacco smoke.
Coking wastewater
Ammonia is present in coking wastewater streams, as a liquid by-product of the production of coke from coal. In some cases, the ammonia is discharged to the marine environment where it acts as a pollutant. The Whyalla Steelworks in South Australia is one example of a coke-producing facility that discharges ammonia into marine waters.
Aquaculture
Ammonia toxicity is believed to be a cause of otherwise unexplained losses in fish hatcheries. Excess ammonia may accumulate and cause alteration of metabolism or increases in the body pH of the exposed organism. Tolerance varies among fish species. At lower concentrations, around 0.05 mg/L, un-ionised ammonia is harmful to fish species and can result in poor growth and feed conversion rates, reduced fecundity and fertility, and increased stress and susceptibility to bacterial infections and diseases. Exposed to excess ammonia, fish may suffer loss of equilibrium, hyper-excitability, increased respiratory activity and oxygen uptake, and increased heart rate. At concentrations exceeding 2.0 mg/L, ammonia causes gill and tissue damage, extreme lethargy, convulsions, coma, and death. Experiments have shown that the lethal concentration for a variety of fish species ranges from 0.2 to 2.0 mg/L.
During winter, when reduced feeds are administered to aquaculture stock, ammonia levels can be higher. Lower ambient temperatures reduce the rate of algal photosynthesis so less ammonia is removed by any algae present. Within an aquaculture environment, especially at large scale, there is no fast-acting remedy to elevated ammonia levels. Prevention rather than correction is recommended to reduce harm to farmed fish and in open water systems, the surrounding environment.
Storage information
Similar to propane, anhydrous ammonia boils below room temperature at atmospheric pressure, so a storage vessel capable of withstanding its vapour pressure is needed to contain the liquid. Ammonia is used in numerous industrial applications requiring carbon or stainless steel storage vessels. Ammonia with at least 0.2% by weight water content is not corrosive to carbon steel, and carbon steel storage tanks holding such ammonia could last more than 50 years in service. Experts warn that ammonium compounds should not be allowed to come in contact with bases (unless in an intended and contained reaction), as dangerous quantities of ammonia gas could be released.
Laboratory
The hazards of ammonia solutions depend on the concentration: 'dilute' ammonia solutions are usually 5–10% by weight (<5.62 mol/L), while 'concentrated' solutions are usually prepared at >25% by weight. A 25% (by weight) solution has a density of 0.907 g/cm3; a solution that has a lower density will be more concentrated.
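The molarity figures quoted here follow from density and mass fraction: molarity = (density × mass fraction)/molar mass. A brief Python check (the density of a 10% solution, about 0.957 g/cm³, is an assumed handbook value, not given in this article):

```python
# Molarity of ammonia solutions from density and mass fraction.
M_NH3 = 17.031  # g/mol

def molarity(density_g_cm3: float, mass_fraction: float) -> float:
    """mol/L from solution density (g/cm3) and NH3 mass fraction."""
    return density_g_cm3 * 1000 * mass_fraction / M_NH3

print(f"10% w/w: {molarity(0.957, 0.10):.2f} mol/L")  # ~5.62 (dilute limit)
print(f"25% w/w: {molarity(0.907, 0.25):.1f} mol/L")  # ~13.3
```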
The ammonia vapour from concentrated ammonia solutions is severely irritating to the eyes and the respiratory tract, and experts warn that these solutions be handled only in a fume hood. Saturated ('0.880') solutions can develop a significant pressure inside a closed bottle in warm weather, and experts also warn that the bottle be opened with care. This is not usually a problem for 25% ('0.900') solutions.
Experts warn that ammonia solutions not be mixed with halogens, as toxic and/or explosive products are formed. Prolonged contact of ammonia solutions with silver, mercury or iodide salts can also lead to explosive products; such mixtures are often formed in qualitative inorganic analysis, and should be lightly acidified and diluted (<6% w/v) before disposal once the test is completed.
Laboratory use of anhydrous ammonia (gas or liquid)
Anhydrous ammonia is classified as toxic (T) and dangerous for the environment (N). The gas is flammable (autoignition temperature: 651 °C) and can form explosive mixtures with air (16–25%). The permissible exposure limit (PEL) in the United States is 50 ppm (35 mg/m3), while the IDLH concentration is estimated at 300 ppm. Repeated exposure to ammonia lowers the sensitivity to the smell of the gas: normally the odour is detectable at concentrations of less than 50 ppm, but desensitised individuals may not detect it even at concentrations of 100 ppm. Anhydrous ammonia corrodes copper- and zinc-containing alloys, which makes brass fittings unsuitable for handling the gas. Liquid ammonia can also attack rubber and certain plastics.
Ammonia reacts violently with the halogens. Nitrogen triiodide, a primary high explosive, is formed when ammonia comes in contact with iodine. Ammonia causes the explosive polymerisation of ethylene oxide. It also forms explosive fulminating compounds with compounds of gold, silver, mercury, germanium or tellurium, and with stibine. Violent reactions have also been reported with acetaldehyde, hypochlorite solutions, potassium ferricyanide and peroxides.
Production
Ammonia has one of the highest rates of production of any inorganic chemical. Production is sometimes expressed in terms of 'fixed nitrogen'. Global production was estimated at 160 million tonnes in 2020 (147 million tonnes of fixed nitrogen). China accounted for 26.5% of that, followed by Russia at 11.0%, the United States at 9.5%, and India at 8.3%.
Before the start of World War I, most ammonia was obtained by the dry distillation of nitrogenous vegetable and animal waste products, including camel dung, where it was distilled by the reduction of nitrous acid and nitrites with hydrogen; in addition, it was produced by the distillation of coal, and also by the decomposition of ammonium salts by alkaline hydroxides such as quicklime:
2 NH4Cl + CaO → CaCl2 + H2O + 2 NH3
For small-scale laboratory synthesis, one can heat urea with calcium hydroxide or sodium hydroxide:
CO(NH2)2 + 2 NaOH → Na2CO3 + 2 NH3
Haber–Bosch
Electrochemical
The electrochemical synthesis of ammonia involves the reductive formation of lithium nitride, which can be protonated to ammonia, given a proton source. The first use of this chemistry was reported in 1930, where lithium solutions in ethanol were used to produce ammonia at pressures of up to 1000 bar, with ethanol acting as the proton source. Beyond simply mediating proton transfer to the nitrogen reduction reaction, ethanol has been found to play a multifaceted role, influencing electrolyte transformations and contributing to the formation of the solid electrolyte interphase, which enhances overall reaction efficiency.
In 1994, Tsuneto et al. used lithium electrodeposition in tetrahydrofuran to synthesize ammonia at more moderate pressures with reasonable Faradaic efficiency. Subsequent studies have further explored the ethanol–tetrahydrofuran system for electrochemical ammonia synthesis.
In 2020, a solvent-agnostic gas diffusion electrode was shown to improve nitrogen transport to the reactive lithium. The cell achieved Faradaic efficiencies of up to 47.5 ± 4% at ambient temperature and 1 bar pressure.
In 2021, it was demonstrated that ethanol could be replaced with a tetraalkylphosphonium salt. The study achieved 69 ± 1% Faradaic efficiency in experiments under 0.5 bar hydrogen and 19.5 bar nitrogen partial pressure at ambient temperature. Technology based on this electrochemistry is being developed for commercial fertiliser and fuel production.
In 2022, ammonia was produced via the lithium-mediated process in a continuous-flow electrolyzer, also demonstrating hydrogen gas as the proton source. The study synthesized ammonia at 61 ± 1% Faradaic efficiency at a current density of −6 mA/cm2 at 1 bar and room temperature.
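In these studies, Faradaic efficiency relates the charge passed to the ammonia produced: each NH3 molecule requires three electrons. A short Python sketch using the 2022 study's headline numbers (6 mA/cm² magnitude at 61% efficiency) to recover the implied production rate; the three-electron stoichiometry is the standard assumption for nitrogen reduction, not a figure from this text:

```python
# Ammonia production rate implied by current density and Faradaic
# efficiency, assuming 3 electrons per NH3 (N2 + 6 H+ + 6 e- -> 2 NH3).
F = 96485.0  # C/mol, Faraday constant

def nh3_rate(current_density_mA_cm2: float, faradaic_eff: float) -> float:
    """mol NH3 per cm^2 per second."""
    j = current_density_mA_cm2 * 1e-3   # A/cm^2
    return j * faradaic_eff / (3 * F)

rate = nh3_rate(6.0, 0.61)              # figures from the 2022 study
print(f"{rate * 1e9:.1f} nmol NH3 cm^-2 s^-1")  # ~12.6
```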
Biochemistry and medicine
Ammonia is essential for life. For example, it is required for the formation of amino acids and nucleic acids, fundamental building blocks of life. Ammonia is however quite toxic. Nature thus uses carriers for ammonia. Within a cell, glutamate serves this role. In the bloodstream, glutamine is a source of ammonia.
Ethanolamine, required for cell membranes, is the substrate for ethanolamine ammonia-lyase, which produces ammonia:
HOCH2CH2NH2 → CH3CHO + NH3
Ammonia is both a metabolic waste and a metabolic input throughout the biosphere. It is an important source of nitrogen for living systems. Although atmospheric nitrogen abounds (more than 75%), few living creatures are capable of using atmospheric nitrogen in its diatomic form, gas. Therefore, nitrogen fixation is required for the synthesis of amino acids, which are the building blocks of protein. Some plants rely on ammonia and other nitrogenous wastes incorporated into the soil by decaying matter. Others, such as nitrogen-fixing legumes, benefit from symbiotic relationships with rhizobia bacteria that create ammonia from atmospheric nitrogen.
In humans, inhaling ammonia in high concentrations can be fatal. Exposure to ammonia can cause headaches, edema, impaired memory, seizures and coma as it is neurotoxic in nature.
Biosynthesis
In certain organisms, ammonia is produced from atmospheric nitrogen by enzymes called nitrogenases. The overall process is called nitrogen fixation. Intense effort has been directed toward understanding the mechanism of biological nitrogen fixation. The scientific interest in this problem is motivated by the unusual structure of the active site of the enzyme, which consists of an Fe7MoS9 ensemble.
Ammonia is also a metabolic product of amino acid deamination catalyzed by enzymes such as glutamate dehydrogenase 1. Ammonia excretion is common in aquatic animals. In humans, it is quickly converted to urea by the liver, which is much less toxic, particularly less basic. This urea is a major component of the dry weight of urine. Most reptiles, birds, insects, and snails excrete uric acid as their sole nitrogenous waste.
Physiology
Ammonia plays a role in both normal and abnormal animal physiology. It is biosynthesised through normal amino acid metabolism and is toxic in high concentrations. The liver converts ammonia to urea through a series of reactions known as the urea cycle. Liver dysfunction, such as that seen in cirrhosis, may lead to elevated amounts of ammonia in the blood (hyperammonemia). Likewise, defects in the enzymes responsible for the urea cycle, such as ornithine transcarbamylase, lead to hyperammonemia. Hyperammonemia contributes to the confusion and coma of hepatic encephalopathy, as well as the neurological disease common in people with urea cycle defects and organic acidurias.
Ammonia is important for normal animal acid/base balance. After formation of ammonium from glutamine, α-ketoglutarate may be degraded to produce two bicarbonate ions, which are then available as buffers for dietary acids. Ammonium is excreted in the urine, resulting in net acid loss. Ammonia may itself diffuse across the renal tubules, combine with a hydrogen ion, and thus allow for further acid excretion.
Excretion
Ammonium ions are a toxic waste product of metabolism in animals. In fish and aquatic invertebrates, it is excreted directly into the water. In mammals, sharks, and amphibians, it is converted in the urea cycle to urea, which is less toxic and can be stored more efficiently. In birds, reptiles, and terrestrial snails, metabolic ammonium is converted into uric acid, which is solid and can therefore be excreted with minimal water loss.
Extraterrestrial occurrence
Ammonia has been detected in the atmospheres of the giant planets Jupiter, Saturn, Uranus and Neptune, along with other gases such as methane, hydrogen, and helium. The interior of Saturn may include frozen ammonia crystals. It is found on Deimos and Phobos, the two moons of Mars.
Interstellar space
Ammonia was first detected in interstellar space in 1968, based on microwave emissions from the direction of the galactic core. This was the first polyatomic molecule to be so detected. The sensitivity of the molecule to a broad range of excitations and the ease with which it can be observed in a number of regions has made ammonia one of the most important molecules for studies of molecular clouds. The relative intensity of the ammonia lines can be used to measure the temperature of the emitting medium.
The following isotopic species of ammonia have been detected: NH3, 15NH3, NH2D, NHD2, and ND3. The detection of triply deuterated ammonia was considered a surprise, as deuterium is relatively scarce. It is thought that the low-temperature conditions allow this molecule to survive and accumulate.
Since its interstellar discovery, NH3 has proved to be an invaluable spectroscopic tool in the study of the interstellar medium. With a large number of transitions sensitive to a wide range of excitation conditions, NH3 has been widely detected astronomically; its detection has been reported in hundreds of journal articles. Listed below is a sample of journal articles that highlights the range of detectors that have been used to identify ammonia.
The study of interstellar ammonia has been important to a number of areas of research in the last few decades. Some of these are delineated below and primarily involve using ammonia as an interstellar thermometer.
Interstellar formation mechanisms
The interstellar abundance of ammonia has been measured for a variety of environments. The [NH3]/[H2] ratio has been estimated to range from 10⁻⁷ in small dark clouds up to 10⁻⁵ in the dense core of the Orion molecular cloud complex. Although a total of 18 production routes have been proposed, the principal formation mechanism for interstellar NH3 is the dissociative recombination reaction:
NH4+ + e− → NH3 + H
The rate constant, k, of this reaction depends on the temperature of the environment, with a value of at 10 K. The rate constant was calculated from a formula of the form k = a(T/300)^B; the constants a and B for the primary formation reaction have been determined experimentally. Assuming an NH4+ abundance of and an electron abundance of 10⁻⁷, typical of molecular clouds, the formation will proceed at a rate of in a molecular cloud of total density .
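As a sketch of how such temperature-dependent rate constants are evaluated, the power-law form can be coded directly; the constants a and B below are hypothetical placeholders, since the measured values are not preserved in this text:

```python
# Power-law form commonly used for ion-molecule rate constants:
#   k(T) = a * (T / 300 K) ** B
# The values of a and B here are HYPOTHETICAL placeholders for
# illustration; the measured constants are not given in this text.

def rate_constant(T: float, a: float = 1.0e-6, B: float = -0.5) -> float:
    """cm^3 s^-1, power-law rate constant evaluated at temperature T (K)."""
    return a * (T / 300.0) ** B

print(f"k(10 K) = {rate_constant(10.0):.2e} cm^3 s^-1")
```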
All other proposed formation reactions have rate constants between two and 13 orders of magnitude smaller, making their contribution to the abundance of ammonia relatively insignificant. As an example of the minor contribution other formation reactions make, one such reaction
has a rate constant of 2.2. Assuming densities of 10⁵ cm⁻³ and an abundance ratio of 10⁻⁷, this reaction proceeds at a rate of 2.2, more than three orders of magnitude slower than the primary reaction above.
Some of the other possible formation reactions are:
Interstellar destruction mechanisms
There are 113 total proposed reactions leading to the destruction of NH3. Of these, 39 were tabulated in extensive tables of the chemistry among C, N and O compounds. A review of interstellar ammonia cites the following reactions as the principal dissociation mechanisms:
NH3 + H3+ → NH4+ + H2 (1)
NH3 + HCO+ → NH4+ + CO (2)
with rate constants of 4.39×10⁻⁹ and 2.2×10⁻⁹ cm³/s, respectively. The above equations (1) and (2) run at rates of 8.8×10⁻⁹ and 4.4×10⁻¹³ s⁻¹, respectively. These calculations assumed the given rate constants and abundances of [NH3]/[H2] = 10⁻⁵, [H3+]/[H2] = 2×10⁻⁵, [HCO+]/[H2] = 2×10⁻⁹, and total densities of n = 10⁵ cm⁻³, typical of cold, dense, molecular clouds. Clearly, between these two primary reactions, equation (1) is the dominant destruction reaction, with a rate ≈10,000 times faster than equation (2). This is due to the relatively high abundance of H3+.
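The per-molecule destruction rates quoted above follow from rate = k × n(partner), where the partner density comes from the fractional abundance and the total density. A Python check using only the numbers given in this passage:

```python
# Per-NH3 destruction rates from the rate constants and abundances
# quoted in the text, at total density n = 1e5 cm^-3.
n_H2 = 1e5                  # cm^-3

k1, x1 = 4.39e-9, 2e-5      # reaction (1): k (cm^3/s), partner abundance
k2, x2 = 2.2e-9,  2e-9      # reaction (2)

r1 = k1 * x1 * n_H2         # ~8.8e-9 s^-1
r2 = k2 * x2 * n_H2         # ~4.4e-13 s^-1
print(f"rate(1) = {r1:.1e} s^-1, rate(2) = {r2:.1e} s^-1")
print(f"ratio ~ {r1 / r2:.0f}")  # ~2e4: reaction (1) dominates by ~4 orders
```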
Single antenna detections
Radio observations of NH3 from the Effelsberg 100-m Radio Telescope reveal that the ammonia line is separated into two components: a background ridge and an unresolved core. The background corresponds well with the locations where CO was previously detected. The 25-m Chilbolton telescope in England detected radio signatures of ammonia in H II regions, H2O masers, Herbig–Haro objects, and other objects associated with star formation. A comparison of emission line widths indicates that turbulent or systematic velocities do not increase in the central cores of molecular clouds.
Microwave radiation from ammonia was observed in several galactic objects including W3(OH), Orion A, W43, W51, and five sources in the galactic centre. The high detection rate indicates that this is a common molecule in the interstellar medium and that high-density regions are common in the galaxy.
Interferometric studies
VLA observations of NH3 in seven regions with high-velocity gaseous outflows revealed condensations of less than 0.1 pc in L1551, S140, and Cepheus A. Three individual condensations were detected in Cepheus A, one of them with a highly elongated shape. They may play an important role in creating the bipolar outflow in the region.
Extragalactic ammonia was imaged using the VLA in IC 342. The hot gas has temperatures above 70 K, which was inferred from ammonia line ratios, and appears to be closely associated with the innermost portions of the nuclear bar seen in CO. NH3 was also monitored by the VLA toward a sample of four galactic ultracompact HII regions: G9.62+0.19, G10.47+0.03, G29.96−0.02, and G31.41+0.31. Based upon temperature and density diagnostics, it is concluded that in general such clumps are probably the sites of massive star formation in an early evolutionary phase, prior to the development of an ultracompact HII region.
Infrared detections
Absorption at 2.97 micrometres due to solid ammonia was recorded from interstellar grains in the Becklin–Neugebauer Object and probably in NGC 2264-IR as well. This detection helped explain the physical shape of previously poorly understood and related ice absorption lines.
A spectrum of the disk of Jupiter was obtained from the Kuiper Airborne Observatory, covering the 100 to 300 cm−1 spectral range. Analysis of the spectrum provides information on global mean properties of ammonia gas and an ammonia ice haze.
A total of 149 dark cloud positions were surveyed for evidence of 'dense cores' by using the (J,K) = (1,1) rotating inversion line of NH3. In general, the cores are not spherically shaped, with aspect ratios ranging from 1.1 to 4.4. It is also found that cores with stars have broader lines than cores without stars.
Ammonia has been detected in the Draco Nebula and in one or possibly two molecular clouds, which are associated with the high-latitude galactic infrared cirrus. The finding is significant because these clouds may represent the birthplaces of the Population I metallicity B-type stars in the galactic halo that could have been born in the galactic disk.
Observations of nearby dark clouds
By balancing collisions and stimulated emission with spontaneous emission, it is possible to construct a relation between excitation temperature and density. Moreover, since the transitional levels of ammonia can be approximated by a two-level system at low temperatures, this calculation is fairly simple. This premise can be applied to dark clouds, regions suspected of having extremely low temperatures and possible sites for future star formation. Detections of ammonia in dark clouds show very narrow lines, indicative not only of low temperatures but also of a low level of inner-cloud turbulence. Line ratio calculations provide a measurement of cloud temperature that is independent of previous CO observations. The ammonia observations were consistent with CO measurements of rotation temperatures of ≈10 K. With this, densities can be determined, and have been calculated to range between 10^4 and 10^5 cm^−3 in dark clouds. Mapping of NH3 gives typical cloud sizes of 0.1 pc and masses near 1 solar mass. These cold, dense cores are the sites of future star formation.
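A minimal sketch of that two-level calculation (neglecting the background radiation field; q_{ul} is the collisional de-excitation rate coefficient, so the collision rate is C_{ul} = n\,q_{ul}): statistical equilibrium between the upper and lower levels reads

n_u\,(A_{ul} + C_{ul}) = n_l\,C_{lu},

and combining the detailed-balance relation C_{lu}/C_{ul} = (g_u/g_l)\,e^{-\Delta E/kT_{kin}} with the definition of excitation temperature, n_u/n_l = (g_u/g_l)\,e^{-\Delta E/kT_{ex}}, gives

e^{-\Delta E/kT_{ex}} = e^{-\Delta E/kT_{kin}}\left(1 + \frac{A_{ul}}{n\,q_{ul}}\right)^{-1}.

A measured T_ex (from line ratios) together with the kinetic temperature thus fixes the density n.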
UC HII regions
Ultra-compact H II regions are among the best tracers of high-mass star formation. The dense material surrounding UC HII regions is likely primarily molecular. Since a complete study of massive star formation necessarily involves the cloud from which the star formed, ammonia is an invaluable tool in understanding this surrounding molecular material. Since this molecular material can be spatially resolved, it is possible to constrain the heating/ionising sources, temperatures, masses, and sizes of the regions. Doppler-shifted velocity components allow for the separation of distinct regions of molecular gas that can trace outflows and hot cores originating from forming stars.
Extragalactic detection
Ammonia has been detected in external galaxies, and by simultaneously measuring several lines, it is possible to directly measure the gas temperature in these galaxies. Line ratios imply that gas temperatures are warm (≈50 K), originating from dense clouds with sizes of tens of parsecs. This picture is consistent with the picture within our Milky Way galaxy: hot, dense molecular cores form around newly forming stars embedded in larger clouds of molecular material on a scale of several hundred parsecs (giant molecular clouds; GMCs).
Works cited
|
;Bases (chemistry);Foul-smelling chemicals;Gaseous signaling molecules;Household chemicals;Industrial gases;Inorganic solvents;Nitrogen cycle;Nitrogen hydrides;Nitrogen(−III) compounds;Refrigerants;Rocket fuels;Toxicology
|
https://en.wikipedia.org/wiki/Assembly%20language
|
In computer programming, assembly language (alternatively assembler language or symbolic machine code), often referred to simply as assembly and commonly abbreviated as ASM or asm, is any low-level programming language with a very strong correspondence between the instructions in the language and the architecture's machine code instructions. Assembly language usually has one statement per machine instruction (1:1), but constants, comments, assembler directives, symbolic labels of, e.g., memory locations, registers, and macros are generally also supported.
The first assembly code in which a language is used to represent machine code instructions is found in Kathleen and Andrew Donald Booth's 1947 work, Coding for A.R.C.. Assembly code is converted into executable machine code by a utility program referred to as an assembler. The term "assembler" is generally attributed to Wilkes, Wheeler and Gill in their 1951 book The Preparation of Programs for an Electronic Digital Computer, who, however, used the term to mean "a program that assembles another program consisting of several sections into a single program". The conversion process is referred to as assembly, as in assembling the source code. The computational step when an assembler is processing a program is called assembly time.
Because assembly depends on the machine code instructions, each assembly language is specific to a particular computer architecture.
Sometimes there is more than one assembler for the same architecture, and sometimes an assembler is specific to an operating system or to particular operating systems. Most assembly languages do not provide specific syntax for operating system calls, and most assembly languages can be used universally with any operating system, as the language provides access to all the real capabilities of the processor, upon which all system call mechanisms ultimately rest. In contrast to assembly languages, most high-level programming languages are generally portable across multiple architectures but require interpreting or compiling, much more complicated tasks than assembling.
In the first decades of computing, it was commonplace for both systems programming and application programming to take place entirely in assembly language. While still irreplaceable for some purposes, the majority of programming is now conducted in higher-level interpreted and compiled languages. In "No Silver Bullet", Fred Brooks summarised the effects of the switch away from assembly language programming: "Surely the most powerful stroke for software productivity, reliability, and simplicity has been the progressive use of high-level languages for programming. Most observers credit that development with at least a factor of five in productivity, and with concomitant gains in reliability, simplicity, and comprehensibility."
Today, it is typical to use small amounts of assembly language code within larger systems implemented in a higher-level language, for performance reasons or to interact directly with hardware in ways unsupported by the higher-level language. For instance, just under 2% of version 4.9 of the Linux kernel source code is written in assembly; more than 97% is written in C.
Assembly language syntax
Assembly language uses a mnemonic to represent, e.g., each low-level machine instruction or opcode, each directive, typically also each architectural register, flag, etc. Some of the mnemonics may be built-in and some user-defined. Many operations require one or more operands in order to form a complete instruction. Most assemblers permit named constants, registers, and labels for program and memory locations, and can calculate expressions for operands. Thus, programmers are freed from tedious repetitive calculations and assembler programs are much more readable than machine code. Depending on the architecture, these elements may also be combined for specific instructions or addressing modes using offsets or other data as well as fixed addresses. Many assemblers offer additional mechanisms to facilitate program development, to control the assembly process, and to aid debugging.
Some assemblers are column oriented, with specific fields in specific columns; this was very common for machines using punched cards in the 1950s and early 1960s. Some assemblers have free-form syntax, with fields separated by delimiters, e.g., punctuation, white space. Some assemblers are hybrid, with, e.g., labels in a specific column and other fields separated by delimiters; this became more common than column-oriented syntax in the 1960s.
Terminology
A macro assembler is an assembler that includes a macroinstruction facility so that (parameterized) assembly language text can be represented by a name, and that name can be used to insert the expanded text into other code.
Open code refers to any assembler input outside of a macro definition.
A cross assembler (see also cross compiler) is an assembler that is run on a computer or operating system (the host system) of a different type from the system on which the resulting code is to run (the target system). Cross-assembling facilitates the development of programs for systems that do not have the resources to support software development, such as an embedded system or a microcontroller. In such a case, the resulting object code must be transferred to the target system, via read-only memory (ROM, EPROM, etc.), a programmer (when the read-only memory is integrated in the device, as in microcontrollers), or a data link using either an exact bit-by-bit copy of the object code or a text-based representation of that code (such as Intel hex or Motorola S-record).
A high-level assembler is a program that provides language abstractions more often associated with high-level languages, such as advanced control structures (IF/THEN/ELSE, DO CASE, etc.) and high-level abstract data types, including structures/records, unions, classes, and sets.
A microassembler is a program that helps prepare a microprogram to control the low level operation of a computer.
A meta-assembler is "a program that accepts the syntactic and semantic description of an assembly language, and generates an assembler for that language", or that accepts an assembler source file along with such a description and assembles the source file in accordance with that description. "Meta-Symbol" assemblers for the SDS 9 Series and SDS Sigma series of computers are meta-assemblers. Sperry Univac also provided a Meta-Assembler for the UNIVAC 1100/2200 series.
An inline assembler (or embedded assembler) is assembler code contained within a high-level language program. It is most often used in systems programs which need direct access to the hardware.
Key concepts
Assembler
An assembler program creates object code by translating combinations of mnemonics and syntax for operations and addressing modes into their numerical equivalents. This representation typically includes an operation code ("opcode") as well as other control bits and data. The assembler also calculates constant expressions and resolves symbolic names for memory locations and other entities. The use of symbolic references is a key feature of assemblers, saving tedious calculations and manual address updates after program modifications. Most assemblers also include macro facilities for performing textual substitution – e.g., to generate common short sequences of instructions as inline, instead of called subroutines.
Some assemblers may also be able to perform some simple types of instruction set-specific optimizations. One concrete example of this may be the ubiquitous x86 assemblers from various vendors. Most of them are able to perform jump-instruction replacements (long jumps replaced by short or relative jumps) in any number of passes, on request; this is called jump-sizing. Others may even do simple rearrangement or insertion of instructions, such as some assemblers for RISC architectures that can optimize instruction scheduling to exploit the CPU pipeline as efficiently as possible.
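As a sketch of what jump-sizing decides, in the NASM-style x86 syntax used elsewhere in this article (the label is illustrative), the assembler chooses between two encodings of the same logical jump:

    jmp short done      ; 2-byte form: opcode EB plus an 8-bit relative displacement
    jmp near done       ; 5-byte form (32-bit mode): opcode E9 plus a 32-bit displacement
done:

An optimizing assembler emits the short form whenever the target lies within −128 to +127 bytes of the next instruction, and falls back to the longer form otherwise.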
Assemblers have been available since the 1950s, as the first step above machine language and before high-level programming languages such as Fortran, Algol, COBOL and Lisp. There have also been several classes of translators and semi-automatic code generators with properties similar to both assembly and high-level languages, with Speedcode as perhaps one of the better-known examples.
There may be several assemblers with different syntax for a particular CPU or instruction set architecture. For instance, an instruction to add memory data to a register in an x86-family processor might be add eax,[ebx] in original Intel syntax, whereas this would be written addl (%ebx),%eax in the AT&T syntax used by the GNU Assembler. Despite different appearances, different syntactic forms generally generate the same numeric machine code. A single assembler may also have different modes in order to support variations in syntactic forms as well as their exact semantic interpretations (such as FASM-syntax, TASM-syntax, ideal mode, etc., in the special case of x86 assembly programming).
Number of passes
There are two types of assemblers based on how many passes through the source are needed (how many times the assembler reads the source) to produce the object file.
One-pass assemblers process the source code once. For symbols used before they are defined, the assembler will emit "errata" after the eventual definition, telling the linker or the loader to patch the locations where the as yet undefined symbols had been used.
Multi-pass assemblers create a table with all symbols and their values in the first passes, then use the table in later passes to generate code.
In both cases, the assembler must be able to determine the size of each instruction on the initial passes in order to calculate the addresses of subsequent symbols. This means that if the size of an operation referring to an operand defined later depends on the type or distance of the operand, the assembler will make a pessimistic estimate when first encountering the operation, and if necessary, pad it with one or more "no-operation" instructions in a later pass or the errata. In an assembler with peephole optimization, addresses may be recalculated between passes to allow replacing pessimistic code with code tailored to the exact distance from the target.
The original reason for the use of one-pass assemblers was memory size and speed of assembly – often a second pass would require storing the symbol table in memory (to handle forward references), rewinding and rereading the program source on tape, or rereading a deck of cards or punched paper tape. Later computers with much larger memories (especially disc storage), had the space to perform all necessary processing without such re-reading. The advantage of the multi-pass assembler is that the absence of errata makes the linking process (or the program load if the assembler directly produces executable code) faster.
Example: in the following code snippet, a one-pass assembler would be able to determine the address of the backward reference BKWD when assembling statement S2, but would not be able to determine the address of the forward reference FWD when assembling the branch statement S1; indeed, FWD may be undefined. A two-pass assembler would determine both addresses in pass 1, so they would be known when generating code in pass 2.
S1   B    FWD
     ...
FWD  EQU  *
     ...
BKWD EQU  *
     ...
S2   B    BKWD
High-level assemblers
More sophisticated high-level assemblers provide language abstractions such as:
High-level procedure/function declarations and invocations
Advanced control structures (IF/THEN/ELSE, SWITCH)
High-level abstract data types, including structures/records, unions, classes, and sets
Sophisticated macro processing (although available on ordinary assemblers since the late 1950s for, e.g., the IBM 700 series and IBM 7000 series, and since the 1960s for IBM System/360 (S/360), amongst other machines)
Object-oriented programming features such as classes, objects, abstraction, polymorphism, and inheritance
See Language design below for more details.
Assembly language
A program written in assembly language consists of a series of mnemonic processor instructions and meta-statements (known variously as declarative operations, directives, pseudo-instructions, pseudo-operations and pseudo-ops), comments and data. Assembly language instructions usually consist of an opcode mnemonic followed by an operand, which might be a list of data, arguments or parameters. Some instructions may be "implied", which means the data upon which the instruction operates is implicitly defined by the instruction itself—such an instruction does not take an operand. The resulting statement is translated by an assembler into machine language instructions that can be loaded into memory and executed.
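For instance, the following x86 instructions take no operand at all, because the data they act on is implied by the instruction itself (a minimal sketch):

CLC     ; clear the carry flag – the flag is implied, so no operand is written
STC     ; set the carry flag
CDQ     ; sign-extend EAX into EDX:EAX – both registers are implied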
For example, the instruction below tells an x86/IA-32 processor to move an immediate 8-bit value into a register. The binary code for this instruction is 10110 followed by a 3-bit identifier for which register to use. The identifier for the AL register is 000, so the following machine code loads the AL register with the data 01100001.
10110000 01100001
This binary computer code can be made more human-readable by expressing it in hexadecimal as follows.
B0 61
Here, B0 means "Move a copy of the following value into AL", and 61 is a hexadecimal representation of the value 01100001, which is 97 in decimal. Assembly language for the 8086 family provides the mnemonic MOV (an abbreviation of move) for instructions such as this, so the machine code above can be written as follows in assembly language, complete with an explanatory comment if required, after the semicolon. This is much easier to read and to remember.
MOV AL, 61h ; Load AL with 97 decimal (61 hex)
In some assembly languages (including this one) the same mnemonic, such as MOV, may be used for a family of related instructions for loading, copying and moving data, whether these are immediate values, values in registers, or memory locations pointed to by values in registers or by immediate (a.k.a. direct) addresses. Other assemblers may use separate opcode mnemonics such as L for "move memory to register", ST for "move register to memory", LR for "move register to register", MVI for "move immediate operand to memory", etc.
If the same mnemonic is used for different instructions, that means that the mnemonic corresponds to several different binary instruction codes, excluding data (e.g. the 61h in this example), depending on the operands that follow the mnemonic. For example, for the x86/IA-32 CPUs, the Intel assembly language syntax MOV AL, AH represents an instruction that moves the contents of register AH into register AL. The hexadecimal form of this instruction is:
88 E0
The first byte, 88h, identifies a move between a byte-sized register and either another register or memory, and the second byte, E0h, is encoded (with three bit-fields) to specify that both operands are registers, the source is AH, and the destination is AL.
In a case like this where the same mnemonic can represent more than one binary instruction, the assembler determines which instruction to generate by examining the operands. In the first example, the operand 61h is a valid hexadecimal numeric constant and is not a valid register name, so only the B0 instruction can be applicable. In the second example, the operand AH is a valid register name and not a valid numeric constant (hexadecimal, decimal, octal, or binary), so only the 88 instruction can be applicable.
Assembly languages are always designed so that this sort of lack of ambiguity is universally enforced by their syntax. For example, in the Intel x86 assembly language, a hexadecimal constant must start with a numeral digit, so that the hexadecimal number 'A' (equal to decimal ten) would be written as 0Ah or 0AH, not AH, specifically so that it cannot appear to be the name of register AH. (The same rule also prevents ambiguity with the names of registers BH, CH, and DH, as well as with any user-defined symbol that ends with the letter H and otherwise contains only characters that are hexadecimal digits, such as the word "BEACH".)
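The effect of this rule can be seen directly in the Intel syntax used above (a minimal sketch):

MOV CL, 0AH     ; an immediate move: loads CL with the constant ten (hexadecimal 0A)
MOV CL, AH      ; a register move: copies the contents of register AH into CL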
Returning to the original example, while the x86 opcode 10110000 (B0) copies an 8-bit value into the AL register, 10110001 (B1) moves it into CL and 10110010 (B2) does so into DL. Assembly language examples for these follow.
MOV AL, 1h ; Load AL with immediate value 1
MOV CL, 2h ; Load CL with immediate value 2
MOV DL, 3h ; Load DL with immediate value 3
The syntax of MOV can also be more complex as the following examples show.
MOV EAX, [EBX] ; Move the 4 bytes in memory at the address contained in EBX into EAX
MOV [ESI+EAX], CL ; Move the contents of CL into the byte at address ESI+EAX
MOV DS, DX ; Move the contents of DX into segment register DS
In each case, the MOV mnemonic is translated directly into one of the opcodes 88-8C, 8E, A0-A3, B0-BF, C6 or C7 by an assembler, and the programmer normally does not have to know or remember which.
Transforming assembly language into machine code is the job of an assembler, and the reverse can at least partially be achieved by a disassembler. Unlike high-level languages, there is a one-to-one correspondence between many simple assembly statements and machine language instructions. However, in some cases, an assembler may provide pseudoinstructions (essentially macros) which expand into several machine language instructions to provide commonly needed functionality. For example, for a machine that lacks a "branch if greater or equal" instruction, an assembler may provide a pseudoinstruction that expands to the machine's "set if less than" and "branch if zero (on the result of the set instruction)". Most full-featured assemblers also provide a rich macro language (discussed below) which is used by vendors and programmers to generate more complex code and data sequences. Since the information about pseudoinstructions and macros defined in the assembler environment is not present in the object program, a disassembler cannot reconstruct the macro and pseudoinstruction invocations but can only disassemble the actual machine instructions that the assembler generated from those abstract assembly-language entities. Likewise, since comments in the assembly language source file are ignored by the assembler and have no effect on the object code it generates, a disassembler is always completely unable to recover source comments.
Each computer architecture has its own machine language. Computers differ in the number and type of operations they support, in the different sizes and numbers of registers, and in the representations of data in storage. While most general-purpose computers are able to carry out essentially the same functionality, the ways they do so differ; the corresponding assembly languages reflect these differences.
Multiple sets of mnemonics or assembly-language syntax may exist for a single instruction set, typically instantiated in different assembler programs. In these cases, the most popular one is usually that supplied by the CPU manufacturer and used in its documentation.
Two examples of CPUs that have two different sets of mnemonics are the Intel 8080 family and the Intel 8086/8088. Because Intel claimed copyright on its assembly language mnemonics (on each page of their documentation published in the 1970s and early 1980s, at least), some companies that independently produced CPUs compatible with Intel instruction sets invented their own mnemonics. The Zilog Z80 CPU, an enhancement of the Intel 8080A, supports all the 8080A instructions plus many more; Zilog invented an entirely new assembly language, not only for the new instructions but also for all of the 8080A instructions. For example, where Intel uses the mnemonics MOV, MVI, LDA, STA, LXI, LDAX, STAX, LHLD, and SHLD for various data transfer instructions, the Z80 assembly language uses the mnemonic LD for all of them. A similar case is the NEC V20 and V30 CPUs, enhanced copies of the Intel 8086 and 8088, respectively. Like Zilog with the Z80, NEC invented new mnemonics for all of the 8086 and 8088 instructions, to avoid accusations of infringement of Intel's copyright. (It is questionable whether such copyrights can be valid, and later CPU companies such as AMD and Cyrix republished Intel's x86/IA-32 instruction mnemonics exactly with neither permission nor legal penalty.) It is doubtful whether in practice many people who programmed the V20 and V30 actually wrote in NEC's assembly language rather than Intel's; since any two assembly languages for the same instruction set architecture are isomorphic (somewhat like English and Pig Latin), there is no requirement to use a manufacturer's own published assembly language with that manufacturer's products.
"Hello, world!" on x86 Linux
In 32-bit assembly language for Linux on an x86 processor, "Hello, world!" can be printed like this.
section .text
global _start
_start:
mov edx,len ; length of string, third argument to write()
mov ecx,msg ; address of string, second argument to write()
mov ebx,1 ; file descriptor (standard output), first argument to write()
mov eax,4 ; system call number for write()
int 0x80 ; system call trap
mov ebx,0 ; exit code, first argument to exit()
mov eax,1 ; system call number for exit()
int 0x80 ; system call trap
section .data
msg db 'Hello, world!', 0xa
len equ $ - msg
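A sketch of how such a program might be built, assuming the NASM assembler and the GNU linker on a 32-bit x86 Linux system (file names are illustrative): nasm -f elf32 hello.asm assembles the source into the object file hello.o, and ld -m elf_i386 hello.o -o hello links it into an executable.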
Language design
Basic elements
There is a large degree of diversity in the way the authors of assemblers categorize statements and in the nomenclature that they use. In particular, some describe anything other than a machine mnemonic or extended mnemonic as a pseudo-operation (pseudo-op). A typical assembly language consists of 3 types of instruction statements that are used to define program operations:
Opcode mnemonics
Data definitions
Assembly directives
Opcode mnemonics and extended mnemonics
Instructions (statements) in assembly language are generally very simple, unlike those in high-level languages. Generally, a mnemonic is a symbolic name for a single executable machine language instruction (an opcode), and there is at least one opcode mnemonic defined for each machine language instruction. Each instruction typically consists of an operation or opcode plus zero or more operands. Most instructions refer to a single value or a pair of values. Operands can be immediate (value coded in the instruction itself), registers specified in the instruction or implied, or the addresses of data located elsewhere in storage. This is determined by the underlying processor architecture: the assembler merely reflects how this architecture works. Extended mnemonics are often used to specify a combination of an opcode with a specific operand, e.g., the System/360 assemblers use B as an extended mnemonic for BC with a mask of 15 and NOP ("NO OPeration" – do nothing for one step) for BC with a mask of 0.
Extended mnemonics are often used to support specialized uses of instructions, often for purposes not obvious from the instruction name. For example, many CPUs do not have an explicit NOP instruction, but do have instructions that can be used for the purpose. In 8086 CPUs the instruction xchg ax,ax is used for nop, with nop being a pseudo-opcode to encode the instruction xchg ax,ax. Some disassemblers recognize this and will decode the xchg ax,ax instruction as nop. Similarly, IBM assemblers for System/360 and System/370 use the extended mnemonics NOP and NOPR for BC and BCR with zero masks. For the SPARC architecture, these are known as synthetic instructions.
Some assemblers also support simple built-in macro-instructions that generate two or more machine instructions. For instance, with some Z80 assemblers the instruction ld hl,bc is recognized to generate ld l,c followed by ld h,b. These are sometimes known as pseudo-opcodes.
Mnemonics are arbitrary symbols; in 1985 the IEEE published Standard 694 for a uniform set of mnemonics to be used by all assemblers. The standard has since been withdrawn.
Data directives
There are instructions used to define data elements to hold data and variables. They define the type of data, the length and the alignment of data. These instructions can also define whether the data is available to outside programs (programs assembled separately) or only to the program in which the data section is defined. Some assemblers classify these as pseudo-ops.
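As a sketch in the NASM syntax used in the "Hello, world!" example above (the names and values are illustrative), typical data-definition and storage-reservation directives look like this:

section .data
count   dd 42               ; define a doubleword (32 bits) initialized to 42
ready   db 'ready', 0       ; define a zero-terminated string of bytes
section .bss
buffer  resb 64             ; reserve 64 uninitialized bytes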
Assembly directives
Assembly directives, also called pseudo-opcodes, pseudo-operations or pseudo-ops, are commands given to an assembler "directing it to perform operations other than assembling instructions". Directives affect how the assembler operates and "may affect the object code, the symbol table, the listing file, and the values of internal assembler parameters". Sometimes the term pseudo-opcode is reserved for directives that generate object code, such as those that generate data.
The names of pseudo-ops often start with a dot to distinguish them from machine instructions. Pseudo-ops can make the assembly of the program dependent on parameters input by a programmer, so that one program can be assembled in different ways, perhaps for different applications. Or, a pseudo-op can be used to manipulate presentation of a program to make it easier to read and maintain. Another common use of pseudo-ops is to reserve storage areas for run-time data and optionally initialize their contents to known values.
Symbolic assemblers let programmers associate arbitrary names (labels or symbols) with memory locations and various constants. Usually, every constant and variable is given a name so instructions can reference those locations by name, thus promoting self-documenting code. In executable code, the name of each subroutine is associated with its entry point, so any calls to a subroutine can use its name. Inside subroutines, GOTO destinations are given labels. Some assemblers support local symbols which are often lexically distinct from normal symbols (e.g., the use of "10$" as a GOTO destination).
Some assemblers, such as NASM, provide flexible symbol management, letting programmers manage different namespaces, automatically calculate offsets within data structures, and assign labels that refer to literal values or the result of simple computations performed by the assembler. Labels can also be used to initialize constants and variables with relocatable addresses.
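For instance, NASM's struc facility has the assembler compute field offsets within a data structure automatically (a minimal sketch; the structure and registers are illustrative):

struc point
    .x: resd 1              ; the assembler assigns offset 0 to point.x
    .y: resd 1              ; ... and offset 4 to point.y
endstruc

mov eax, [ebx + point.x]    ; the symbolic offsets become displacement constants
mov edx, [ebx + point.y]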
Assembly languages, like most other computer languages, allow comments to be added to program source code that will be ignored during assembly. Judicious commenting is essential in assembly language programs, as the meaning and purpose of a sequence of binary machine instructions can be difficult to determine. The "raw" (uncommented) assembly language generated by compilers or disassemblers is quite difficult to read when changes must be made.
Macros
Many assemblers support predefined macros, and others support programmer-defined (and repeatedly re-definable) macros involving sequences of text lines in which variables and constants are embedded. The macro definition is most commonly a mixture of assembler statements, e.g., directives, symbolic machine instructions, and templates for assembler statements. This sequence of text lines may include opcodes or directives. Once a macro has been defined its name may be used in place of a mnemonic. When the assembler processes such a statement, it replaces the statement with the text lines associated with that macro, then processes them as if they existed in the source code file (including, in some assemblers, expansion of any macros existing in the replacement text). Macros in this sense date to IBM autocoders of the 1950s.
Macro assemblers typically have directives to, e.g., define macros, define variables, set variables to the result of an arithmetic, logical or string expression, iterate, conditionally generate code. Some of those directives may be restricted to use within a macro definition, e.g., MEXIT in HLASM, while others may be permitted within open code (outside macro definitions), e.g., AIF and COPY in HLASM.
In assembly language, the term "macro" represents a more comprehensive concept than it does in some other contexts, such as the pre-processor in the C programming language, where its #define directive typically is used to create short single line macros. Assembler macro instructions, like macros in PL/I and some other languages, can be lengthy "programs" by themselves, executed by interpretation by the assembler during assembly.
Since macros can have 'short' names but expand to several or indeed many lines of code, they can be used to make assembly language programs appear to be far shorter, requiring fewer lines of source code, as with higher level languages. They can also be used to add higher levels of structure to assembly programs, optionally introduce embedded debugging code via parameters and other similar features.
Macro assemblers often allow macros to take parameters. Some assemblers include quite sophisticated macro languages, incorporating such high-level language elements as optional parameters, symbolic variables, conditionals, string manipulation, and arithmetic operations, all usable during the execution of a given macro, and allowing macros to save context or exchange information. Thus a macro might generate numerous assembly language instructions or data definitions, based on the macro arguments. This could be used to generate record-style data structures or "unrolled" loops, for example, or could generate entire algorithms based on complex parameters. For instance, a "sort" macro could accept the specification of a complex sort key and generate code crafted for that specific key, not needing the run-time tests that would be required for a general procedure interpreting the specification. An organization using assembly language that has been heavily extended using such a macro suite can be considered to be working in a higher-level language since such programmers are not working with a computer's lowest-level conceptual elements. Underlining this point, macros were used to implement an early virtual machine in SNOBOL4 (1967), which was written in the SNOBOL Implementation Language (SIL), an assembly language for a virtual machine. The target machine would translate this to its native code using a macro assembler. This allowed a high degree of portability for the time.
Macros were used to customize large scale software systems for specific customers in the mainframe era and were also used by customer personnel to satisfy their employers' needs by making specific versions of manufacturer operating systems. This was done, for example, by systems programmers working with IBM's Conversational Monitor System / Virtual Machine (VM/CMS) and with IBM's "real time transaction processing" add-ons, Customer Information Control System CICS, and ACP/TPF, the airline/financial system that began in the 1970s and still runs many large computer reservation systems (CRS) and credit card systems today.
It is also possible to use solely the macro processing abilities of an assembler to generate code written in completely different languages, for example, to generate a version of a program in COBOL using a pure macro assembler program containing lines of COBOL code inside assembly time operators instructing the assembler to generate arbitrary code. IBM OS/360 uses macros to perform system generation. The user specifies options by coding a series of assembler macros. Assembling these macros generates a job stream to build the system, including job control language and utility control statements.
This is because, as was realized in the 1960s, the concept of "macro processing" is independent of the concept of "assembly", the former being, in modern terms, more akin to text processing than to generating object code. The concept of macro processing appeared, and appears, in the C programming language, which supports "preprocessor instructions" to set variables and make conditional tests on their values. Unlike certain earlier macro processors inside assemblers, the C preprocessor is not Turing-complete because it lacks the ability to either loop or "go to", the latter allowing programs to loop.
Despite the power of macro processing, it fell into disuse in many high level languages (major exceptions being C, C++ and PL/I) while remaining a perennial for assemblers.
Macro parameter substitution is strictly by name: at macro processing time, the value of a parameter is textually substituted for its name. The most famous class of bugs resulting from this was the use of a parameter that itself was an expression and not a simple name when the macro writer expected a name. In the macro:
foo: macro a
load a*b
the intention was that the caller would provide the name of a variable, and the "global" variable or constant b would be used to multiply "a". If foo is called with the parameter a-c, the macro expansion load a-c*b occurs; because multiplication binds more tightly than subtraction, this computes a-(c*b) rather than the intended (a-c)*b. To avoid any possible ambiguity, users of macro processors can parenthesize formal parameters inside macro definitions, or callers can parenthesize the input parameters.
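In the notation of the example above, the defensive version of the macro definition would be:

foo: macro a
     load (a)*b

so that a call with the parameter a-c expands to load (a-c)*b, preserving the intended grouping.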
Support for structured programming
Packages of macros have been written providing structured programming elements to encode execution flow. The earliest example of this approach was in the Concept-14 macro set, originally proposed by Harlan Mills (March 1970), and implemented by Marvin Kessler at IBM's Federal Systems Division, which provided IF/ELSE/ENDIF and similar control flow blocks for OS/360 assembler programs. This was a way to reduce or eliminate the use of GOTO operations in assembly code, one of the main factors causing spaghetti code in assembly language. This approach was widely accepted in the early 1980s (the latter days of large-scale assembly language use). IBM's High Level Assembler Toolkit includes such a macro package.
Another design was A-Natural, a "stream-oriented" assembler for 8080/Z80 processors from Whitesmiths Ltd. (developers of the Unix-like Idris operating system, and what was reported to be the first commercial C compiler). The language was classified as an assembler because it worked with raw machine elements such as opcodes, registers, and memory references; but it incorporated an expression syntax to indicate execution order. Parentheses and other special symbols, along with block-oriented structured programming constructs, controlled the sequence of the generated instructions. A-natural was built as the object language of a C compiler, rather than for hand-coding, but its logical syntax won some fans.
There has been little apparent demand for more sophisticated assemblers since the decline of large-scale assembly language development. In spite of that, they are still being developed and applied in cases where resource constraints or peculiarities in the target system's architecture prevent the effective use of higher-level languages.
Assemblers with a strong macro engine allow structured programming via macros, such as the switch macro provided with the Masm32 package (this code is a complete program):
include \masm32\include\masm32rt.inc ; use the Masm32 library
.code
demomain:
REPEAT 20
switch rv(nrandom, 9) ; generate a number between 0 and 8
mov ecx, 7
case 0
print "case 0"
case ecx ; in contrast to most other programming languages,
print "case 7" ; the Masm32 switch allows "variable cases"
case 1 .. 3
.if eax==1
print "case 1"
.elseif eax==2
print "case 2"
.else
print "cases 1 to 3: other"
.endif
case 4, 6, 8
print "cases 4, 6 or 8"
default
mov ebx, 19 ; print 20 stars
.Repeat
print "*"
dec ebx
.Until Sign? ; loop until the sign flag is set
endsw
print chr$(13, 10)
ENDM
exit
end demomain
Use of assembly language
When the stored-program computer was introduced, programs were written in machine code, and loaded into the computer from punched paper tape or toggled directly into memory from console switches. Kathleen Booth "is credited with inventing assembly language" based on theoretical work she began in 1947, while working on the ARC2 at Birkbeck, University of London, following consultation by Andrew Booth (later her husband) with mathematician John von Neumann and physicist Herman Goldstine at the Institute for Advanced Study.
In late 1948, the Electronic Delay Storage Automatic Calculator (EDSAC) had an assembler (named "initial orders") integrated into its bootstrap program. It used one-letter mnemonics developed by David Wheeler, who is credited by the IEEE Computer Society as the creator of the first "assembler". Reports on the EDSAC introduced the term "assembly" for the process of combining fields into an instruction word. SOAP (Symbolic Optimal Assembly Program) was an assembly language for the IBM 650 computer written by Stan Poley in 1955.
Assembly languages eliminated much of the error-prone, tedious, and time-consuming first-generation programming needed with the earliest computers, freeing programmers from tedium such as remembering numeric codes and calculating addresses. They were once widely used for all sorts of programming. By the late 1950s their use had largely been supplanted by higher-level languages in the search for improved programming productivity. Today, assembly language is still used for direct hardware manipulation, access to specialized processor instructions, or to address critical performance issues. Typical uses are device drivers, low-level embedded systems, and real-time systems.
Numerous programs were written entirely in assembly language. The Burroughs MCP (1961) was the first operating system not developed entirely in assembly language; it was written in Executive Systems Problem Oriented Language (ESPOL), an Algol dialect. Many commercial applications were written in assembly language as well, including a large amount of the IBM mainframe software developed by large corporations. COBOL, FORTRAN and some PL/I eventually displaced assembly language, although a number of large organizations retained assembly-language application infrastructures well into the 1990s.
Assembly language was the primary development language for 8-bit home computers such as the Apple II, Atari 8-bit computers, ZX Spectrum, and Commodore 64, since interpreted BASIC on these systems offered neither maximum execution speed nor full use of the available hardware. Assembly language was also the default choice for programming 8-bit consoles such as the Atari 2600 and Nintendo Entertainment System.
Key software for IBM PC compatibles such as MS-DOS, Turbo Pascal, and the Lotus 1-2-3 spreadsheet was written in assembly language. As computer speed grew exponentially, assembly language became a tool for speeding up parts of programs, such as the rendering of Doom, rather than a dominant development language. In the 1990s, assembly language was used to maximise performance from systems such as the Sega Saturn, and as the primary language for arcade hardware using the TMS34010 integrated CPU/GPU such as Mortal Kombat and NBA Jam.
Current usage
There has been debate over the usefulness and performance of assembly language relative to high-level languages.
Although assembly language has specific niche uses where it is important (see below), there are other tools for optimization.
The TIOBE index of programming language popularity ranks assembly language at 11, ahead of Visual Basic, for example. Assembler can be used to optimize for speed or to optimize for size. In the case of speed optimization, modern optimizing compilers are claimed to render high-level languages into code that can run as fast as hand-written assembly, despite some counter-examples. The complexity of modern processors and memory sub-systems makes effective optimization increasingly difficult for compilers and assembly programmers alike. Increasing processor performance has meant that most CPUs sit idle most of the time, with delays caused by predictable bottlenecks such as cache misses, I/O operations and paging, making raw code execution speed a non-issue for many programmers.
There are still certain computer programming domains in which the use of assembly programming is more common:
Writing code for systems that have limited high-level language options, such as the Atari 2600, Commodore 64, and graphing calculators. Programs for these computers of the 1970s and 1980s are often written in the context of demoscene or retrogaming subcultures.
Code that must interact directly with the hardware, for example in device drivers and interrupt handlers.
Interrupt handlers in an embedded processor or DSP, where high-repetition interrupts require the fewest possible cycles per interrupt, such as an interrupt that occurs 1000 or 10000 times a second.
Programs that need to use processor-specific instructions not implemented in a compiler. A common example is the bitwise rotation instruction at the core of many encryption algorithms, as well as querying the parity of a byte or the 4-bit carry of an addition.
Stand-alone executables that are required to execute without recourse to the run-time components or libraries associated with a high-level language, such as the firmware for telephones, automobile fuel and ignition systems, air-conditioning control systems, and security systems.
Programs with performance-sensitive inner loops, where assembly language provides optimization opportunities that are difficult to achieve in a high-level language. For example, linear algebra with BLAS or discrete cosine transformation (e.g. SIMD assembly version from x264).
Programs that create vectorized functions for programs in higher-level languages such as C. In the higher-level language this is sometimes aided by compiler intrinsic functions which map directly to SIMD mnemonics, but nevertheless result in a one-to-one assembly conversion specific for the given vector processor.
Real-time programs such as simulations, flight navigation systems, and medical equipment. For example, in a fly-by-wire system, telemetry must be interpreted and acted upon within strict time constraints. Such systems must eliminate sources of unpredictable delays, which may be created by interpreted languages, automatic garbage collection, paging operations, or preemptive multitasking. Choosing assembly or lower-level languages for such systems gives programmers greater visibility and control over processing details.
Cryptographic algorithms that must always take strictly the same time to execute, preventing timing attacks.
Video encoders and decoders such as rav1e (an encoder for AV1) and dav1d (the reference decoder for AV1) contain assembly to leverage AVX2 and ARM Neon instructions when available.
Modifying and extending legacy code written for IBM mainframe computers.
Situations where complete control over the environment is required, in extremely high-security situations where nothing can be taken for granted.
Computer viruses, bootloaders, certain device drivers, or other items very close to the hardware or low-level operating system.
Instruction set simulators for monitoring, tracing and debugging where additional overhead is kept to a minimum.
Situations where no high-level language exists, on a new or specialized processor for which no cross compiler is available.
Reverse engineering and modifying program files such as:
existing binaries that may or may not have originally been written in a high-level language, for example when trying to recreate programs for which source code is not available or has been lost, or cracking copy protection of proprietary software.
Video games (also termed ROM hacking), which is possible via several methods. The most widely employed method is altering program code at the assembly language level.
Assembly language is still taught in most computer science and electronic engineering programs. Although few programmers today regularly work with assembly language as a tool, the underlying concepts remain important. Such fundamental topics as binary arithmetic, memory allocation, stack processing, character set encoding, interrupt processing, and compiler design would be hard to study in detail without a grasp of how a computer operates at the hardware level. Since a computer's behaviour is fundamentally defined by its instruction set, the logical way to learn such concepts is to study an assembly language. Most modern computers have similar instruction sets. Therefore, studying a single assembly language is sufficient to learn the basic concepts, recognize situations where the use of assembly language might be appropriate, and to see how efficient executable code can be created from high-level languages.
Typical applications
Assembly language is typically used in a system's boot code, the low-level code that initializes and tests the system hardware prior to booting the operating system, and is often stored in ROM. (The BIOS on IBM-compatible PC systems and on CP/M is an example.)
Assembly language is often used for low-level code, for instance for operating system kernels, which cannot rely on the availability of pre-existing system calls and must indeed implement them for the particular processor architecture on which the system will be running.
Some compilers translate high-level languages into assembly first before fully compiling, allowing the assembly code to be viewed for debugging and optimization purposes.
Some compilers for relatively low-level languages, such as Pascal or C, allow the programmer to embed assembly language directly in the source code (so called inline assembly). Programs using such facilities can then construct abstractions using different assembly language on each hardware platform. The system's portable code can then use these processor-specific components through a uniform interface.
Assembly language is useful in reverse engineering. Many programs are distributed only in machine code form, which is straightforward to translate into assembly language by a disassembler but more difficult to translate into a higher-level language through a decompiler. Tools such as the Interactive Disassembler make extensive use of disassembly for such a purpose. This technique is used by hackers to crack commercial software and by competitors to produce software with similar results.
Assembly language is used to enhance speed of execution, especially in early personal computers with limited processing power and RAM.
Assemblers can be used to generate blocks of data, with no high-level language overhead, from formatted and commented source code, to be used by other code.
See also
Compiler
Comparison of assemblers
Disassembler
Hexadecimal
Instruction set architecture
Little man computer – an educational computer model with a base-10 assembly language
Nibble
Typed assembly language
Notes
References
Further reading
External links
Assembly Language and Learning Assembly Language pages on WikiWikiWeb
Assembly Language Programming Examples
|
*Assembly language;Computer-related introductions in 1949;Embedded systems;Low-level programming languages;Programming language implementation;Programming languages created in 1949
|
https://en.wikipedia.org/wiki/Ambrosia
|
In the ancient Greek myths, ambrosia is the food or drink of the Greek gods, and is often depicted as conferring longevity or immortality upon whoever consumed it. It was brought to the gods in Olympus by doves and served either by Hebe or by Ganymede at the heavenly feast.
Ancient art sometimes depicted ambrosia as distributed by the nymph named Ambrosia, a nurse of Dionysus.
Definition
Ambrosia is very closely related to the gods' other form of sustenance, nectar. The two terms may not have originally been distinguished, though in Homer's poems nectar is usually the drink and ambrosia the food of the gods; it was with ambrosia that Hera "cleansed all defilement from her lovely flesh", and with ambrosia Athena prepared Penelope in her sleep, so that when she appeared for the final time before her suitors, the effects of years had been stripped away and they were inflamed with passion at the sight of her. On the other hand, in Alcman, nectar is the food, and in Sappho and Anaxandrides, ambrosia is the drink. A character in Aristophanes' Knights says, "I dreamed the goddess poured ambrosia over your head—out of a ladle." Both descriptions could be correct, as ambrosia could be a liquid considered a food (such as honey).
The consumption of ambrosia was typically reserved for divine beings. Upon his assumption into immortality on Olympus, Heracles is given ambrosia by Athena, while the hero Tydeus is denied the same thing when the goddess discovers him eating human brains. In one version of the myth of Tantalus, part of Tantalus' crime is that after tasting ambrosia himself, he attempts to steal some to give to other mortals. Those who consume ambrosia typically have ichor, not blood, in their veins.
Both nectar and ambrosia are fragrant, and may be used as perfume: in the Odyssey Menelaus and his men are disguised as seals in untanned seal skins, "and the deadly smell of the seal skins vexed us sore; but the goddess saved us; she brought ambrosia and put it under our nostrils." Homer speaks of ambrosial raiment, ambrosial locks of hair, even the gods' ambrosial sandals.
Among later writers, ambrosia has been so often used with generic meanings of "delightful liquid" that such late writers as Athenaeus, Paulus and Dioscurides employ it as a technical term in contexts of cookery, medicine, and botany. Pliny used the term in connection with different plants, as did early herbalists.
Additionally, some modern ethnomycologists, such as Danny Staples, identify ambrosia with the hallucinogenic mushroom Amanita muscaria: "it was the food of the gods, their ambrosia, and nectar was the pressed sap of its juices", Staples asserts.
W. H. Roscher thinks that both nectar and ambrosia were kinds of honey, in which case their power of conferring immortality would be due to the supposed healing and cleansing powers of honey, and because fermented honey (mead) preceded wine as an entheogen in the Aegean world; on some Minoan seals, goddesses were represented with bee faces (compare Merope and Melissa).
Etymology
The concept of an immortality drink is attested in at least two ancient Indo-European languages: Greek and Sanskrit. The Greek ἀμβροσία (ambrosia) is semantically linked to the Sanskrit amṛta, as both words denote a drink or food that gods use to achieve immortality. The two words appear to be derived from the same Indo-European form *ṇ-mṛ-tós, "un-dying" (n-: negative prefix from which the prefix a- in both Greek and Sanskrit are derived; mṛ: zero grade of *mer-, "to die"; and -to-: adjectival suffix). A semantically similar etymology exists for nectar, the beverage of the gods (Greek: νέκταρ néktar) presumed to be a compound of the PIE roots *nek-, "death", and -*tar, "overcoming".
Other examples in mythology
In one version of the story of the birth of Achilles, Thetis anoints the infant with ambrosia and passes the child through the fire to make him immortal but Peleus, appalled, stops her, leaving only his heel unimmortalised (Argonautica 4.869–879).
In the Iliad xvi, Apollo washes the black blood from the corpse of Sarpedon and anoints it with ambrosia, readying it for its dreamlike return to Sarpedon's native Lycia. Similarly, Thetis anoints the corpse of Patroclus in order to preserve it. Ambrosia and nectar are depicted as unguents (xiv. 170; xix. 38).
In the Odyssey, Calypso is described as having "spread a table with ambrosia and set it by Hermes, and mixed the rosy-red nectar." It is ambiguous whether he means the ambrosia itself is rosy-red, or if he is describing a rosy-red nectar Hermes drinks along with the ambrosia. Later, Circe mentions to Odysseus that a flock of doves are the bringers of ambrosia to Olympus.
In the Odyssey (ix.345–359), Polyphemus likens the wine given to him by Odysseus to ambrosia and nectar.
One of the impieties of Tantalus, according to Pindar, was that he offered to his guests the ambrosia of the Deathless Ones, a theft akin to that of Prometheus, Karl Kerenyi noted (in Heroes of the Greeks).
In the Homeric hymn to Aphrodite, the goddess uses "ambrosial bridal oil that she had ready perfumed."
In the story of Eros and Psyche as told by Apuleius, Psyche is given ambrosia upon her completion of the quests set by Aphrodite and her acceptance on Olympus. After she partakes, she and Eros are wed as gods.
In the Aeneid, Aeneas encounters his mother in an alternate, or illusory form. When she became her godly form "Her hair's ambrosia breathed a holy fragrance."
Ambrosia (nymph)
Lycurgus, king of Thrace, forbade the cult of Dionysus, whom he drove from Thrace, and attacked the gods' entourage when they celebrated the god. Among them was Ambrosia, who turned herself into a grapevine to hide from his wrath. Dionysus, enraged by the king's actions, drove him mad. In his fit of insanity he killed his son, whom he mistook for a stock of ivy, and then himself.
References
Sources
Clay, Jenny Strauss, "Immortal and ageless forever", The Classical Journal 77.2 (December 1981:pp. 112–117).
Ruck, Carl A.P. and Danny Staples, The World of Classical Myth 1994, p. 26 et seq.
Wright, F. A., "The Food of the Gods", The Classical Review 31.1, (February 1917:4–6).
External links
|
Achilles;Ancient Greek cuisine;Immortality;Metamorphoses;Mount Olympus;Mythological food and drink;Mythological medicines and drugs;Thetis
|
https://en.wikipedia.org/wiki/Amber
|
Amber is fossilized tree resin. It has been appreciated for its color and natural beauty since Neolithic times, and worked as a gemstone since antiquity. Amber is used in jewelry and as a healing agent in folk medicine.
There are five classes of amber, defined on the basis of their chemical constituents. Because it originates as a soft, sticky tree resin, amber sometimes contains animal and plant material as inclusions. Amber occurring in coal seams is also called resinite, and the term ambrite is applied to that found specifically within New Zealand coal seams.
Etymology
The English word amber derives from Arabic ʿanbar via Middle Latin ambar and Middle French ambre. The word referred to what is now known as ambergris (ambre gris or "gray amber"), a solid waxy substance derived from the sperm whale. The word, in its sense of "ambergris", was adopted in Middle English in the 14th century.
In the Romance languages, the sense of the word was extended to Baltic amber (fossil resin) from as early as the late 13th century. At first called white or yellow amber (ambre jaune), this meaning was adopted in English by the early 15th century. As the use of ambergris waned, this became the main sense of the word.
The two substances ("yellow amber" and "gray amber") conceivably became associated or confused because they both were found washed up on beaches. Ambergris is less dense than water and floats, whereas amber, though still less dense than stone, is denser than water and floats only in concentrated saline solution, such as strongly salty seawater.
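The flotation contrast above can be sketched numerically. The snippet below is illustrative only: the amber density range (1.06–1.10 g/cm³) is taken from later in this article, while the ambergris, brine, and stone figures are rough assumptions rather than values from the source.

```python
# Illustrative only: where do these materials settle in fresh water versus
# saturated brine? Densities in g/cm^3; amber's range (1.06-1.10) is from
# this article, the other figures are approximate assumptions.
DENSITIES = {
    "ambergris": 0.90,      # assumed: light enough to float in fresh water
    "amber": 1.08,          # mid-range of 1.06-1.10
    "quartz pebble": 2.65,  # assumed stand-in for "stone"
}

def floats_in(material: str, fluid_density: float) -> bool:
    """A body floats when its density is below the fluid's."""
    return DENSITIES[material] < fluid_density

for fluid, rho in (("fresh water", 1.00), ("saturated brine", 1.20)):
    for material in DENSITIES:
        state = "floats" if floats_in(material, rho) else "sinks"
        print(f"{material:13s} in {fluid:15s}: {state}")
```

Run as-is, this reproduces the behaviour described above: ambergris floats in both fluids, amber floats only in the brine, and stone sinks in both.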
The classical names for amber, Ancient Greek ἤλεκτρον (ēlektron) and one of its Latin names, electrum, are connected to a term ἠλέκτωρ (ēlektōr) meaning "beaming Sun". According to myth, when Phaëton, son of Helios (the Sun), was killed, his mourning sisters became poplar trees, and their tears became elektron, amber. The word elektron gave rise to the words electric, electricity, and their relatives because of amber's ability to bear a charge of static electricity.
Varietal names
A number of regional and varietal names have been applied to ambers over the centuries, including allingite, beckerite, gedanite, kochenite, krantzite, and stantienite.
History
Theophrastus discussed amber in the 4th century BCE, as did Pytheas, whose work On the Ocean is lost but was referenced by Pliny, according to whose Natural History: "Pytheas says that the Gutones, a people of Germany, inhabit the shores of an estuary of the Ocean called Mentonomon, their territory extending a distance of six thousand stadia; that, at one day's sail from this territory, is the Isle of Abalus, upon the shores of which, amber is thrown up by the waves in spring, it being an excretion of the sea in a concrete form; as, also, that the inhabitants use this amber by way of fuel, and sell it to their neighbors, the Teutones."
Earlier, Pliny says that Pytheas refers to a large island, three days' sail from the Scythian coast, called Balcia by Xenophon of Lampsacus (author of a fanciful travel book in Greek) and Basilia by Pytheas, a name generally equated with Abalus. Given the presence of amber, the island could have been Heligoland, Zealand, the shores of Gdańsk Bay, the Sambia Peninsula or the Curonian Lagoon, which were historically the richest sources of amber in northern Europe. There were well-established trade routes for amber connecting the Baltic with the Mediterranean (known as the "Amber Road"). Pliny states explicitly that the Germans exported amber to Pannonia, from where the Veneti distributed it onwards.
The ancient Italic peoples of southern Italy used to work amber; the National Archaeological Museum of Siritide (Museo Archeologico Nazionale della Siritide) at Policoro in the province of Matera (Basilicata) displays important surviving examples. It has been suggested that amber used in antiquity, as at Mycenae and in the prehistory of the Mediterranean, came from deposits in Sicily.
Pliny also cites the opinion of Nicias (c. 470–413 BCE), who attributed amber to the action of the Sun. Besides such fanciful explanations, according to which amber is "produced by the Sun", Pliny cites opinions that are well aware of its origin in tree resin, citing the native Latin name of succinum (sūcinum, from sucus, "juice"), an origin he describes in Book 37, section XI of Natural History.
He further states that amber is also found in Egypt and India, and he even refers to the electrostatic properties of amber, by saying that "in Syria the women make the whorls of their spindles of this substance, and give it the name of harpax [from ἁρπάζω, "to drag"] from the circumstance that it attracts leaves towards it, chaff, and the light fringe of tissues".
The Romans traded for amber from the shores of the southern Baltic at least as far back as the time of Nero.
Amber has a long history of use in China, with the first written record from 200 BCE. Early in the 19th century, the first reports of amber found in North America came from discoveries in New Jersey along Crosswicks Creek near Trenton, at Camden, and near Woodbury.
Composition and formation
Amber is heterogeneous in composition, but consists of several resinous bodies more or less soluble in alcohol, ether and chloroform, associated with an insoluble bituminous substance. Amber is a macromolecule formed by free radical polymerization of several precursors in the labdane family, for example, communic acid, communol, and biformene. These labdanes are diterpenes (C20H32) and trienes, equipping the organic skeleton with three alkene groups for polymerization. As amber matures over the years, more polymerization takes place as well as isomerization reactions, crosslinking and cyclization.
Most amber has a hardness between 2.0 and 2.5 on the Mohs scale, a refractive index of 1.5–1.6, a specific gravity between 1.06 and 1.10, and a melting point of 250–300 °C. Heated above about 200 °C, amber decomposes, yielding an oil of amber, and leaves a black residue which is known as "amber colophony", or "amber pitch"; when dissolved in oil of turpentine or in linseed oil this forms "amber varnish" or "amber lac".
Molecular polymerization, resulting from high pressures and temperatures produced by overlying sediment, transforms the resin first into copal. Sustained heat and pressure drives off terpenes and results in the formation of amber. For this to happen, the resin must be resistant to decay. Many trees produce resin, but in the majority of cases this deposit is broken down by physical and biological processes. Exposure to sunlight, rain, microorganisms, and extreme temperatures tends to disintegrate the resin. For the resin to survive long enough to become amber, it must be resistant to such forces or be produced under conditions that exclude them. Fossil resins from Europe fall into two categories, the Baltic ambers and another that resembles the Agathis group. Fossil resins from the Americas and Africa are closely related to the modern genus Hymenaea, while Baltic ambers are thought to be fossil resins from plants of the family Sciadopityaceae that once lived in north Europe.
The abnormal development of resin in living trees (succinosis) can result in the formation of amber. Impurities are quite often present, especially when the resin has dropped onto the ground, so the material may be useless except for varnish-making. Such impure amber is called firniss. Such inclusion of other substances can cause the amber to have an unexpected color. Pyrites may give a bluish color. Bony amber owes its cloudy opacity to numerous tiny bubbles inside the resin. However, so-called black amber is really a kind of jet. In darkly clouded and even opaque amber, inclusions can be imaged using high-energy, high-contrast, high-resolution X-rays.
Extraction and processing
Distribution and mining
Amber is globally distributed in or around all continents, mainly in rocks of Cretaceous age or younger. Historically, the coast west of Königsberg in Prussia was the world's leading source of amber. The first mentions of amber deposits there date back to the 12th century. Juodkrantė in Lithuania was established in the mid-19th century as a mining town of amber. About 90% of the world's extractable amber is still located in that area, which was transferred to the Russian Soviet Federative Socialist Republic of the USSR in 1946, becoming the Kaliningrad Oblast.
Pieces of amber torn from the seafloor are cast up by the waves and collected by hand, dredging, or diving. Elsewhere, amber is mined, both in open works and underground galleries. Nodules taken from the blue earth have an opaque crust that must be cleaned off, which can be done in revolving barrels containing sand and water. Erosion removes this crust from sea-worn amber. Dominican amber is mined through bell pitting, which is dangerous because of the risk of tunnel collapse.
An important source of amber is Kachin State in northern Myanmar, which has been a major source of amber in China for at least 1,800 years. Contemporary mining of this deposit has attracted attention for unsafe working conditions and its role in funding internal conflict in the country. Amber from the Rivne Oblast of Ukraine, referred to as Rivne amber, is mined illegally by organised crime groups, who deforest the surrounding areas and pump water into the sediments to extract the amber, causing severe environmental deterioration.
Treatment
The Vienna amber factories, which use pale amber to manufacture pipes and other smoking tools, turn it on a lathe and polish it with whitening and water or with rotten stone and oil. The final luster is given by polishing with flannel.
When gradually heated in an oil bath, amber becomes soft and flexible. Two pieces of amber may be united by smearing the surfaces with linseed oil, heating them, and then pressing them together while hot. Cloudy amber may be clarified in an oil bath, as the oil fills the numerous pores that cause the turbidity. Small fragments, formerly thrown away or used only for varnish, are now used on a large scale in the formation of "ambroid" or "pressed amber". The pieces are carefully heated with exclusion of air and then compressed into a uniform mass by intense hydraulic pressure, the softened amber being forced through holes in a metal plate. The product is extensively used for the production of cheap jewelry and articles for smoking. This pressed amber yields brilliant interference colors in polarized light.
Amber has often been imitated by other resins like copal and kauri gum, as well as by celluloid and even glass. Baltic amber is sometimes artificially colored, but is still called "true amber".
Appearance
Amber occurs in a range of different colors. As well as the usual yellow-orange-brown that is associated with the color "amber", amber can range from a whitish color through a pale lemon yellow, to brown and almost black. Other uncommon colors include red amber (sometimes known as "cherry amber"), green amber, and even blue amber, which is rare and highly sought after.
Yellow amber is a hard fossil resin from evergreen trees, and despite the name it can be translucent, yellow, orange, or brown colored. Known to the Iranians by the Pahlavi compound word kah-ruba (from kah "straw" plus rubay "attract, snatch", referring to its electrical properties), which entered Arabic as kahraba' or kahraba (which later became the Arabic word for electricity, كهرباء kahrabā), it too was called amber in Europe (Old French and Middle English ambre). Found along the southern shore of the Baltic Sea, yellow amber reached the Middle East and western Europe via trade. Its coastal acquisition may have been one reason yellow amber came to be designated by the same term as ambergris. Moreover, like ambergris, the resin could be burned as an incense. The resin's most popular use was, however, for ornamentation—easily cut and polished, it could be transformed into beautiful jewelry. Much of the most highly prized amber is transparent, in contrast to the very common cloudy amber and opaque amber. Opaque amber contains numerous minute bubbles. This kind of amber is known as "bony amber".
Although all Dominican amber is fluorescent, the rarest Dominican amber is blue amber. It turns blue in natural sunlight and any other partially or wholly ultraviolet light source. In long-wave UV light it has a very strong reflection, almost white. Only a small quantity is found per year, which makes it valuable and expensive.
Sometimes amber retains the form of drops and stalactites, just as it exuded from the ducts and receptacles of the injured trees. It is thought that, in addition to exuding onto the surface of the tree, amber resin also originally flowed into hollow cavities or cracks within trees, thereby leading to the development of large lumps of amber of irregular form.
Classification
Amber can be classified into several forms. Most fundamentally, there are two types of plant resin with the potential for fossilization. Terpenoids, produced by conifers and angiosperms, consist of ring structures formed of isoprene (C5H8) units. Phenolic resins are today only produced by angiosperms, and tend to serve functional uses. The extinct medullosans produced a third type of resin, which is often found as amber within their veins. The composition of resins is highly variable; each species produces a unique blend of chemicals which can be identified by the use of pyrolysis–gas chromatography–mass spectrometry. The overall chemical and structural composition is used to divide ambers into five classes. There is also a separate classification of amber gemstones, according to the way of production.
Class I
This class is by far the most abundant. It comprises labdatriene carboxylic acids such as communic or ozic acids. It is further split into three sub-classes. Classes Ia and Ib utilize regular labdanoid diterpenes (e.g. communic acid, communol, biformenes), while Ic uses enantio labdanoids (ozic acid, ozol, enantio biformenes).
Class Ia includes Succinite (= 'normal' Baltic amber) and Glessite. They have a communic acid base, and they also include much succinic acid. Baltic amber yields on dry distillation succinic acid, the proportion varying from about 3% to 8%, and being greatest in the pale opaque or bony varieties. The aromatic and irritating fumes emitted by burning amber are mainly from this acid. Baltic amber is distinguished by its yield of succinic acid, hence the name succinite. Succinite has a hardness between 2 and 3, which is greater than many other fossil resins. Its specific gravity varies from 1.05 to 1.10. It can be distinguished from other ambers via infrared spectroscopy through a specific carbonyl absorption peak. Infrared spectroscopy can detect the relative age of an amber sample. Succinic acid may not be an original component of amber but rather a degradation product of abietic acid.
Class Ib ambers are based on communic acid; however, they lack succinic acid.
Class Ic is mainly based on enantio-labdatrienonic acids, such as ozic and zanzibaric acids. Its most familiar representative is Dominican amber, which is mostly transparent and often contains a higher number of fossil inclusions. This has enabled the detailed reconstruction of the ecosystem of a long-vanished tropical forest. Resin from the extinct species Hymenaea protera is the source of Dominican amber and probably of most amber found in the tropics. It is not "succinite" but "retinite".
Class II
These ambers are formed from resins with a sesquiterpenoid base, such as cadinene.
Class III
These ambers are polystyrenes.
Class IV
Class IV is something of a catch-all: its ambers are not polymerized, but mainly consist of cedrene-based sesquiterpenoids.
Class V
Class V resins are considered to be produced by a pine or pine relative. They comprise a mixture of diterpenoid resins and n-alkyl compounds. Their main variety is Highgate copalite.
Geological record
The oldest amber recovered dates to the late Carboniferous period. Its chemical composition makes it difficult to match the amber to its producers – it is most similar to the resins produced by flowering plants; however, the first flowering plants appeared in the Early Cretaceous, about 200 million years after the oldest amber known to date, and they were not common until the Late Cretaceous. Amber becomes abundant long after the Carboniferous, in the Early Cretaceous, when it is found in association with insects. The oldest amber with arthropod inclusions comes from the Late Triassic (late Carnian 230 Ma) of Italy, where four microscopic (0.2–0.1 mm) mites, Triasacarus, Ampezzoa, Minyacarus and Cheirolepidoptus, and a poorly preserved nematoceran fly were found in millimetre-sized droplets of amber. The oldest amber with significant numbers of arthropod inclusions comes from Lebanon. This amber, referred to as Lebanese amber, is roughly 125–135 million years old and is considered of high scientific value, providing evidence of some of the oldest sampled ecosystems.
In Lebanon, more than 450 outcrops of Lower Cretaceous amber were discovered by Dany Azar, a Lebanese paleontologist and entomologist. Among these outcrops, 20 have yielded biological inclusions comprising the oldest representatives of several recent families of terrestrial arthropods. Even older Jurassic amber has been found recently in Lebanon as well. Many remarkable insects and spiders were recently discovered in the amber of Jordan including the oldest zorapterans, clerid beetles, umenocoleid roaches, and achiliid planthoppers.
Burmese amber from the Hukawng Valley in northern Myanmar is the only commercially exploited Cretaceous amber. Uranium–lead dating of zircon crystals associated with the deposit have given an estimated depositional age of approximately 99 million years ago. Over 1,300 species have been described from the amber, with over 300 in 2019 alone.
Baltic amber is found as irregular nodules in marine glauconitic sand, known as blue earth, occurring in Upper Eocene strata of Sambia in Prussia. It appears to have been partly derived from older Eocene deposits and it occurs also as a derivative phase in later formations, such as glacial drift. Relics of an abundant flora occur as inclusions trapped within the amber while the resin was yet fresh, suggesting relations with the flora of eastern Asia and the southern part of North America. Heinrich Göppert named the common amber-yielding pine of the Baltic forests Pinites succiniter, but as the wood does not seem to differ from that of the existing genus it has been also called Pinus succinifera. It is improbable that the production of amber was limited to a single species; and indeed a large number of conifers belonging to different genera are represented in the amber-flora.
Paleontological significance
Amber is a unique preservational mode, preserving otherwise unfossilizable parts of organisms; as such it is helpful in the reconstruction of ecosystems as well as organisms; the chemical composition of the resin, however, is of limited utility in reconstructing the phylogenetic affinity of the resin producer. Amber sometimes contains animals or plant matter that became caught in the resin as it was secreted. Insects, spiders and even their webs, annelids, frogs, crustaceans, bacteria and amoebae, marine microfossils, wood, flowers and fruit, hair, feathers and other small organisms have been recovered in Cretaceous ambers. Even an ammonite, Puzosia (Bhimaites), and marine gastropods have been found in Burmese amber.
The preservation of prehistoric organisms in amber forms a key plot point in Michael Crichton's 1990 novel Jurassic Park and the 1993 movie adaptation by Steven Spielberg. In the story, scientists are able to extract the preserved blood of dinosaurs from prehistoric mosquitoes trapped in amber, from which they genetically clone living dinosaurs. Scientifically this is as yet impossible, since no amber with fossilized mosquitoes has ever yielded preserved blood. Amber is, however, conducive to preserving DNA, since it dehydrates and thus stabilizes organisms trapped inside. One projection in 1999 estimated that DNA trapped in amber could last up to 100 million years, far beyond most estimates of around 1 million years in the most ideal conditions, although a later 2013 study was unable to extract DNA from insects trapped in much more recent Holocene copal. In 1938, 12-year-old David Attenborough (brother of Richard, who played John Hammond in Jurassic Park) was given by his adoptive sister a piece of amber containing prehistoric creatures; it would be the focus of his 2004 BBC documentary The Amber Time Machine.
Use
Amber has been used since prehistory (Solutrean) in the manufacture of jewelry and ornaments, and also in folk medicine.
Jewelry
Amber has been used as jewelry since the Stone Age, from 13,000 years ago. Amber ornaments have been found in Mycenaean tombs and elsewhere across Europe. To this day it is used in the manufacture of smoking and glassblowing mouthpieces. Amber's place in culture and tradition lends it a tourism value; Palanga Amber Museum is dedicated to the fossilized resin.
Historical medicinal uses
Amber has long been used in folk medicine for its purported healing properties. Amber and extracts were used from the time of Hippocrates in ancient Greece for a wide variety of treatments through the Middle Ages and up until the early twentieth century.
Amber necklaces are a traditional European remedy for colic or teething pain with purported analgesic properties of succinic acid, although there is no evidence that this is an effective remedy or delivery method. The American Academy of Pediatrics and the FDA have warned strongly against their use, as they present both a choking and a strangulation hazard.
Scent of amber and amber perfumery
In ancient China, it was customary to burn amber during large festivities. If amber is heated under the right conditions, oil of amber is produced, and in past times this was combined carefully with nitric acid to create "artificial musk" – a resin with a peculiar musky odor. Although when burned, amber does give off a characteristic "pinewood" fragrance, modern products, such as perfume, do not normally use actual amber because fossilized amber produces very little scent. In perfumery, scents referred to as "amber" are often created and patented to emulate the opulent golden warmth of the fossil.
The scent of amber was originally derived from emulating the scent of ambergris and/or the plant resin labdanum, but since sperm whales are endangered, the scent of amber is now largely derived from labdanum. The term "amber" is loosely used to describe a scent that is warm, musky, rich and honey-like, and also somewhat earthy. Benzoin is usually part of the recipe. Vanilla and cloves are sometimes used to enhance the aroma. "Amber" perfumes may be created using combinations of labdanum, benzoin resin, copal (a type of tree resin used in incense manufacture), vanilla, Dammara resin and/or synthetic materials.
In Arab Muslim tradition, popular scents include amber, jasmine, musk and oud (agarwood).
Imitation substances
Young resins used as imitations:
Kauri resin from Agathis australis trees in New Zealand.
The copals (subfossil resins): African and American (Colombian) copals from trees of the legume family Leguminosae (genus Hymenaea), which resemble amber of the Dominican or Mexican type (Class I of fossil resins), and copals from Manila (Indonesia) and from New Zealand, from trees of the genus Agathis (family Araucariaceae).
Other fossil resins: burmite in Burma, rumanite in Romania, and simetite in Sicily.
Other natural materials, such as cellulose or chitin.
Plastics and other man-made materials used as imitations:
Stained glass (inorganic material) and other ceramic materials
Celluloid
Cellulose nitrate (first obtained in 1833), a product of treating cellulose with a nitrating mixture.
Acetylcellulose (no longer in use).
Galalith or "artificial horn" (a condensation product of casein and formaldehyde); other trade names: Alladinite, Erinoid, Lactoid.
Casein, a conjugated protein formed from its precursor, caseinogen.
Resolane (phenolic resins or phenoplasts; no longer in use).
Bakelite resin (resol, phenolic resins); products from Africa are known under the misleading name "African amber".
Carbamide resins: melamine–formaldehyde and urea–formaldehyde resins.
Epoxy novolac (phenolic resins), unofficially called "antique amber"; no longer in use.
Polyesters with styrene (a Polish amber imitation). For example, unsaturated polyester resins (polymals) are produced by the Chemical Industrial Works "Organika" in Sarzyna, Poland, and estomal by the Laminopol firm. "Polybern", or stuck amber, is an artificial resin in which curled amber chips or small scraps are embedded. "African amber" (a polyester; synacryl is probably another name for the same resin) is produced by the Reichhold firm, as is the Styresol trademark alkyd resin (used in Russia; Reichhold, Inc. patent, 1948).
Polyethylene
Epoxy resins
Polystyrene and polystyrene-like polymers (vinyl polymers).
Acrylic resins (vinyl polymers), especially poly(methyl methacrylate) (PMMA; trade names Plexiglas, Metaplex).
See also
Ammolite
Illyrian amber jewellery
List of types of amber
Petrified wood
Pearl
Poly(methyl methacrylate)
Precious coral
Notes
References
Bibliography
External links
Farlang: many full-text historical references on amber, including Theophrastus and George Frederick Kunz, and a special on Baltic amber.
IPS Publications on amber inclusions: International Paleoentomological Society scientific articles on amber and its inclusions.
Webmineral on Amber: physical properties and mineralogical information.
Mindat Amber: image and locality information on amber.
NY Times: 40-million-year-old extinct bee in Dominican amber.
|
;Amorphous solids;Fossil resins;Traditional medicine
|
https://en.wikipedia.org/wiki/Absolute%20zero
|
Absolute zero is the coldest point on the thermodynamic temperature scale, a state at which the enthalpy and entropy of a cooled ideal gas reach their minimum value. The fundamental particles of nature have minimum vibrational motion, retaining only quantum mechanical, zero-point energy-induced particle motion. The theoretical temperature is determined by extrapolating the ideal gas law; by international agreement, absolute zero is taken as 0 kelvin (International System of Units), which is −273.15 degrees on the Celsius scale, and equals −459.67 degrees on the Fahrenheit scale (United States customary units or imperial units). The Kelvin and Rankine temperature scales set their zero points at absolute zero by definition.
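The fixed points above pin down the scale conversions. As a quick illustration (not part of the source article), the relations can be written out in a few lines of Python; the function names are arbitrary:

```python
# Scale relations stated above: 0 K = -273.15 degC = -459.67 degF, and the
# Kelvin and Rankine scales both place their zero at absolute zero.

def kelvin_to_celsius(t_k: float) -> float:
    return t_k - 273.15

def kelvin_to_fahrenheit(t_k: float) -> float:
    return t_k * 9.0 / 5.0 - 459.67

def kelvin_to_rankine(t_k: float) -> float:
    # Rankine uses Fahrenheit-sized degrees, zeroed at absolute zero.
    return t_k * 9.0 / 5.0

print(kelvin_to_celsius(0.0))      # -273.15
print(kelvin_to_fahrenheit(0.0))   # -459.67
print(kelvin_to_rankine(273.15))   # 491.67, i.e. water's freezing point
```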
It is commonly thought of as the lowest temperature possible, but it is not the lowest enthalpy state possible, because all real substances begin to depart from the ideal gas when cooled as they approach the change of state to liquid, and then to solid; and the sum of the enthalpy of vaporization (gas to liquid) and enthalpy of fusion (liquid to solid) exceeds the ideal gas's change in enthalpy to absolute zero. In the quantum-mechanical description, matter at absolute zero is in its ground state, the point of lowest internal energy.
The laws of thermodynamics show that absolute zero cannot be reached using only thermodynamic means, because the temperature of the substance being cooled approaches the temperature of the cooling agent asymptotically. Even a system at absolute zero, if it could somehow be achieved, would still possess quantum mechanical zero-point energy, the energy of its ground state at absolute zero; the kinetic energy of the ground state cannot be removed.
Scientists and technologists routinely achieve temperatures close to absolute zero, where matter exhibits quantum effects such as superconductivity, superfluidity, and Bose–Einstein condensation.
Thermodynamics near absolute zero
At temperatures near 0 K, nearly all molecular motion ceases and ΔS = 0 for any adiabatic process, where S is the entropy. In such a circumstance, pure substances can (ideally) form perfect crystals with no structural imperfections as T → 0. Max Planck's strong form of the third law of thermodynamics states the entropy of a perfect crystal vanishes at absolute zero. The original Nernst heat theorem makes the weaker and less controversial claim that the entropy change for any isothermal process approaches zero as T → 0: lim(T→0) ΔS = 0.
The implication is that the entropy of a perfect crystal approaches a constant value. An adiabat is a state with constant entropy, typically represented on a graph as a curve in a manner similar to isotherms and isobars.
The Nernst postulate identifies the isotherm T = 0 as coincident with the adiabat S = 0, although other isotherms and adiabats are distinct. As no two adiabats intersect, no other adiabat can intersect the T = 0 isotherm. Consequently no adiabatic process initiated at nonzero temperature can lead to zero temperature (cf. Callen, pp. 189–190).
A perfect crystal is one in which the internal lattice structure extends uninterrupted in all directions. The perfect order can be represented by translational symmetry along three (not usually orthogonal) axes. Every lattice element of the structure is in its proper place, whether it is a single atom or a molecular grouping. For substances that exist in two (or more) stable crystalline forms, such as diamond and graphite for carbon, there is a kind of chemical degeneracy. The question remains whether both can have zero entropy at T = 0 even though each is perfectly ordered.
Perfect crystals never occur in practice; imperfections, and even entire amorphous material inclusions, can and do get "frozen in" at low temperatures, so transitions to more stable states do not occur.
Using the Debye model, the specific heat and entropy of a pure crystal are proportional to T³, while the enthalpy and chemical potential are proportional to T⁴ (Guggenheim, p. 111). These quantities drop toward their T = 0 limiting values and approach them with zero slope. For the specific heats at least, the limiting value itself is definitely zero, as borne out by experiments to below 10 K. Even the less detailed Einstein model shows this curious drop in specific heats. In fact, all specific heats vanish at absolute zero, not just those of crystals. Likewise for the coefficient of thermal expansion. Maxwell's relations show that various other quantities also vanish. These phenomena were unanticipated.
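The T³ law is easy to evaluate numerically. The sketch below is illustrative rather than from the source: it uses the standard low-temperature Debye expression C_V ≈ (12π⁴/5) N_A k_B (T/θ_D)³, with θ_D = 343 K assumed as a typical Debye temperature (a value often quoted for copper):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
N_A = 6.02214076e23  # Avogadro constant, 1/mol

def debye_t3_molar_heat(temp_k: float, theta_d: float) -> float:
    """Low-temperature Debye molar heat capacity, J/(mol K), valid for T << theta_D."""
    return (12 * math.pi**4 / 5) * N_A * K_B * (temp_k / theta_d) ** 3

for t in (10.0, 5.0, 1.0, 0.1):
    print(f"T = {t:5.1f} K  C_V = {debye_t3_molar_heat(t, 343.0):.3e} J/(mol K)")
# Halving T cuts C_V by a factor of 8; C_V -> 0 with zero slope as T -> 0.
```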
Since the relation between changes in Gibbs free energy (G), the enthalpy (H) and the entropy is ΔG = ΔH − T ΔS,
thus, as T decreases, ΔG and ΔH approach each other (so long as ΔS is bounded). Experimentally, it is found that all spontaneous processes (including chemical reactions) result in a decrease in G as they proceed toward equilibrium. If ΔS and/or T are small, the condition ΔG < 0 may imply that ΔH < 0, which would indicate an exothermic reaction. However, this is not required; endothermic reactions can proceed spontaneously if the TΔS term is large enough.
Moreover, the slopes of the derivatives of ΔG and ΔH converge and are equal to zero at T = 0. This ensures that ΔG and ΔH are nearly the same over a considerable range of temperatures and justifies the approximate empirical Principle of Thomsen and Berthelot, which states that the equilibrium state to which a system proceeds is the one that evolves the greatest amount of heat, i.e., an actual process is the most exothermic one (Callen, pp. 186–187).
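A toy calculation shows the convergence of ΔG toward ΔH as T falls. The numbers below are arbitrary illustrative assumptions for a hypothetical reaction, not values from the source:

```python
DELTA_H = -50_000.0  # J/mol, assumed for illustration
DELTA_S = -100.0     # J/(mol K), assumed for illustration

def delta_g(temp_k: float) -> float:
    # The relation discussed above: dG = dH - T dS
    return DELTA_H - temp_k * DELTA_S

for t in (300.0, 100.0, 10.0, 1.0, 0.0):
    print(f"T = {t:6.1f} K   dG = {delta_g(t):>9.1f} J/mol   dH = {DELTA_H:.1f} J/mol")
# The T*dS term vanishes as T -> 0, so dG -> dH, as the text describes.
```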
One model that estimates the properties of an electron gas at absolute zero in metals is the Fermi gas. The electrons, being fermions, must be in different quantum states, which leads the electrons to get very high typical velocities, even at absolute zero. The maximum energy that electrons can have at absolute zero is called the Fermi energy. The Fermi temperature is defined as this maximum energy divided by the Boltzmann constant, and is on the order of 80,000 K for typical electron densities found in metals. For temperatures significantly below the Fermi temperature, the electrons behave in almost the same way as at absolute zero. This explains the failure of the classical equipartition theorem for metals that eluded classical physicists in the late 19th century.
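The ~80,000 K figure can be checked with the free-electron formulas E_F = ħ²(3π²n)^(2/3)/(2mₑ) and T_F = E_F/k_B. The snippet below is a back-of-the-envelope sketch; the electron density n ≈ 8.5 × 10²⁸ m⁻³ is an assumed value typical of copper, not a number from the source:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J s
M_E = 9.1093837015e-31  # electron mass, kg
K_B = 1.380649e-23      # Boltzmann constant, J/K

def fermi_temperature(n: float) -> float:
    """Fermi temperature (K) of a free-electron gas with number density n (1/m^3)."""
    e_fermi = HBAR**2 * (3 * math.pi**2 * n) ** (2 / 3) / (2 * M_E)
    return e_fermi / K_B

print(f"{fermi_temperature(8.5e28):.3g} K")  # ~8e4 K, i.e. on the order of 80,000 K
```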
Relation with Bose–Einstein condensate
A Bose–Einstein condensate (BEC) is a state of matter of a dilute gas of weakly interacting bosons confined in an external potential and cooled to temperatures very near absolute zero. Under such conditions, a large fraction of the bosons occupy the lowest quantum state of the external potential, at which point quantum effects become apparent on a macroscopic scale.
This state of matter was first predicted by Satyendra Nath Bose and Albert Einstein in 1924–1925. Bose first sent a paper to Einstein on the quantum statistics of light quanta (now called photons). Einstein was impressed, translated the paper from English to German and submitted it for Bose to the Zeitschrift für Physik, which published it. Einstein then extended Bose's ideas to material particles (or matter) in two other papers.
Seventy years later, in 1995, the first gaseous condensate was produced by Eric Cornell and Carl Wieman at the University of Colorado at Boulder NIST-JILA lab, using a gas of rubidium atoms cooled to 170 nanokelvins (nK).
In 2003, researchers at the Massachusetts Institute of Technology (MIT) achieved a temperature of 450 picokelvins (pK) in a BEC of sodium atoms. The associated black body (peak emittance) wavelength of 6.4 megameters is roughly the radius of Earth.
In 2021, University of Bremen physicists achieved a BEC with a temperature of only 38 picokelvins (pK), the current coldest temperature record.
Absolute temperature scales
Absolute, or thermodynamic, temperature is conventionally measured in kelvin (Celsius-scaled increments) and, increasingly rarely, in the Rankine scale (Fahrenheit-scaled increments). Absolute temperature measurement is uniquely determined by a multiplicative constant which specifies the size of the degree, so the ratios of two absolute temperatures, T₂/T₁, are the same in all scales. The most transparent definition of this standard comes from the Maxwell–Boltzmann distribution. It can also be found in Fermi–Dirac statistics (for particles of half-integer spin) and Bose–Einstein statistics (for particles of integer spin). All of these define the relative numbers of particles in a system as decreasing exponential functions of energy (at the particle level) over kT, with k representing the Boltzmann constant and T representing the temperature observed at the macroscopic level.
Negative temperatures
Temperatures that are expressed as negative numbers on the familiar Celsius or Fahrenheit scales are simply colder than the zero points of those scales. Certain systems can achieve truly negative temperatures; that is, their thermodynamic temperature (expressed in kelvins) can be of a negative quantity. A system with a truly negative temperature is not colder than absolute zero. Rather, a system with a negative temperature is hotter than any system with a positive temperature, in the sense that if a negative-temperature system and a positive-temperature system come in contact, heat flows from the negative to the positive-temperature system.
Most familiar systems cannot achieve negative temperatures because adding energy always increases their entropy. However, some systems have a maximum amount of energy that they can hold, and as they approach that maximum energy their entropy actually begins to decrease. Because temperature is defined by the relationship between energy and entropy, such a system's temperature becomes negative, even though energy is being added. As a result, the Boltzmann factor for states of systems at negative temperature increases rather than decreases with increasing state energy. Therefore, no complete system, i.e. including the electromagnetic modes, can have negative temperatures, since there is no highest energy state, so that the sum of the probabilities of the states would diverge for negative temperatures. However, for quasi-equilibrium systems (e.g. spins out of equilibrium with the electromagnetic field) this argument does not apply, and negative effective temperatures are attainable.
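The Boltzmann-factor argument can be made concrete with a two-level toy system. The sketch below is illustrative only; the level spacing of 10⁻²³ J and the ±1 K temperatures are arbitrary assumptions:

```python
import math

K_B = 1.380649e-23       # Boltzmann constant, J/K
E_LEVELS = (0.0, 1e-23)  # J; assumed two-level spacing

def populations(temp_k: float) -> list[float]:
    """Normalized Boltzmann populations exp(-E / (k_B T)) of the two levels."""
    weights = [math.exp(-e / (K_B * temp_k)) for e in E_LEVELS]
    total = sum(weights)
    return [w / total for w in weights]

print(populations(+1.0))  # ground state dominates: ordinary positive temperature
print(populations(-1.0))  # upper level dominates: inverted, "hotter" than any T > 0
```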
On 3 January 2013, physicists announced that for the first time they had created a quantum gas made up of potassium atoms with a negative temperature in motional degrees of freedom.
History
One of the first to discuss the possibility of an absolute minimal temperature was Robert Boyle. His 1665 New Experiments and Observations touching Cold articulated the dispute known as the primum frigidum. The concept was well known among naturalists of the time. Some contended an absolute minimum temperature occurred within earth (as one of the four classical elements), others within water, others air, and some more recently within nitre. But all of them seemed to agree that, "There is some body or other that is of its own nature supremely cold and by participation of which all other bodies obtain that quality."
Limit to the "degree of cold"
The question of whether there is a limit to the degree of coldness possible, and, if so, where the zero must be placed, was first addressed by the French physicist Guillaume Amontons in 1703, in connection with his improvements in the air thermometer. His instrument indicated temperatures by the height at which a certain mass of air sustained a column of mercury, the pressure, or "spring", of the air varying with temperature. Amontons therefore argued that the zero of his thermometer would be that temperature at which the spring of the air was reduced to nothing. He used a scale that marked the boiling point of water at +73 and the melting point of ice at +51½, so that the zero was equivalent to about −240 on the Celsius scale. Amontons held that absolute zero cannot be reached, so he never attempted to compute it explicitly. The value of −240 °C, or "431 divisions [in Fahrenheit's thermometer] below the cold of freezing water" was published by George Martine in 1740.
This close approximation to the modern value of −273.15 °C for the zero of the air thermometer was further improved upon in 1779 by Johann Heinrich Lambert, who observed that −270 °C might be regarded as absolute cold.
Values of this order for the absolute zero were not, however, universally accepted about this period. Pierre-Simon Laplace and Antoine Lavoisier, in their 1780 treatise on heat, arrived at values ranging from 1,500 to 3,000 below the freezing point of water, and thought that in any case it must be at least 600 below. John Dalton in his Chemical Philosophy gave ten calculations of this value, and finally adopted −3,000 °C as the natural zero of temperature.
Charles's law
From 1787 to 1802, it was determined by Jacques Charles (unpublished), John Dalton, and Joseph Louis Gay-Lussac that, at constant pressure, ideal gases expanded or contracted their volume linearly (Charles's law) by about 1/273 parts per degree Celsius of temperature's change up or down, between 0° and 100° C. This suggested that the volume of a gas cooled at about −273 °C would reach zero.
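The extrapolation is a one-line calculation. The snippet below sketches it with idealized data points generated from the 1/273-per-degree figure above; they are constructed for illustration, not historical measurements:

```python
# Two idealized constant-pressure observations of a gas volume:
t1, v1 = 0.0, 1.0                  # 0 degC, volume in arbitrary units
t2, v2 = 100.0, 1.0 + 100 / 273    # 100 degC, per the ~1/273-per-degree law

# Extrapolate the straight line down to zero volume:
slope = (v2 - v1) / (t2 - t1)
t_zero_volume = t1 - v1 / slope
print(f"{t_zero_volume:.1f} degC")  # about -273 degC, near the modern -273.15
```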
Lord Kelvin's work
After James Prescott Joule had determined the mechanical equivalent of heat, Lord Kelvin approached the question from an entirely different point of view, and in 1848 devised a scale of absolute temperature that was independent of the properties of any particular substance and was based on Carnot's theory of the Motive Power of Heat and data published by Henri Victor Regnault. It followed from the principles on which this scale was constructed that its zero was placed at −273 °C, at almost precisely the same point as the zero of the air thermometer, where the air volume would reach "nothing". This value was not immediately accepted; a range of alternative values, derived from laboratory measurements and observations of astronomical refraction, remained in use in the early 20th century.
The race to absolute zero
With a better theoretical understanding of absolute zero, scientists were eager to reach this temperature in the lab. By 1845, Michael Faraday had managed to liquefy most gases then known to exist, and he set a new record for the lowest temperature achieved. Faraday believed that certain gases, such as oxygen, nitrogen, and hydrogen, were permanent gases and could not be liquefied. Decades later, in 1873 Dutch theoretical scientist Johannes Diderik van der Waals demonstrated that these gases could be liquefied, but only under conditions of very high pressure and very low temperatures. In 1877, Louis Paul Cailletet in France and Raoul Pictet in Switzerland succeeded in producing the first droplets of liquid air. This was followed in 1883 by the production of liquid oxygen by the Polish professors Zygmunt Wróblewski and Karol Olszewski.
Scottish chemist and physicist James Dewar and Dutch physicist Heike Kamerlingh Onnes took on the challenge to liquefy the remaining gases, hydrogen and helium. In 1898, after 20 years of effort, Dewar was the first to liquefy hydrogen, setting a new low-temperature record. However, Kamerlingh Onnes, his rival, was the first to liquefy helium, in 1908, using several precooling stages and the Hampson–Linde cycle. He lowered the temperature to the boiling point of helium, about 4.2 K. By reducing the pressure of the liquid helium, he achieved an even lower temperature, near 1.5 K. These were the coldest temperatures achieved on Earth at the time, and his achievement earned him the Nobel Prize in 1913. Kamerlingh Onnes would continue to study the properties of materials at temperatures near absolute zero, describing superconductivity and superfluids for the first time.
Very low temperatures
The average temperature of the universe today is approximately 2.73 K, based on measurements of cosmic microwave background radiation. Standard models of the future expansion of the universe predict that the average temperature of the universe is decreasing over time. This temperature is calculated as the mean density of energy in space; it should not be confused with the mean electron temperature (total energy divided by particle count) which has increased over time.
Absolute zero cannot be achieved, although it is possible to reach temperatures close to it through the use of evaporative cooling, cryocoolers, dilution refrigerators, and nuclear adiabatic demagnetization. The use of laser cooling has produced temperatures of less than a billionth of a kelvin. At very low temperatures in the vicinity of absolute zero, matter exhibits many unusual properties, including superconductivity, superfluidity, and Bose–Einstein condensation. To study such phenomena, scientists have worked to obtain even lower temperatures.
In November 2000, nuclear spin temperatures below 100 picokelvins were reported for an experiment at the Helsinki University of Technology's Low Temperature Lab in Espoo, Finland. However, this was the temperature of one particular degree of freedom (a quantum property called nuclear spin), not the overall average thermodynamic temperature for all possible degrees of freedom.
In February 2003, the Boomerang Nebula was observed to have been releasing gases at high speed for the last 1,500 years. This has cooled it down to approximately 1 K, as deduced by astronomical observation, which is the lowest natural temperature ever recorded.
In November 2003, 90377 Sedna was discovered; it is one of the coldest known objects in the Solar System, with an extremely low average surface temperature, due to its extremely distant orbit of 903 astronomical units.
In May 2005, the European Space Agency proposed research in space to achieve femtokelvin temperatures.
In May 2006, the Institute of Quantum Optics at the University of Hannover gave details of technologies and benefits of femtokelvin research in space.
In January 2013, physicist Ulrich Schneider of the University of Munich in Germany reported to have achieved temperatures formally below absolute zero ("negative temperature") in gases. The gas is artificially forced out of equilibrium into a high potential energy state, which is, however, cold. When it then emits radiation it approaches the equilibrium, and can continue emitting despite reaching formal absolute zero; thus, the temperature is formally negative.
In September 2014, scientists in the CUORE collaboration at the Laboratori Nazionali del Gran Sasso in Italy cooled a copper vessel with a volume of one cubic meter to 0.006 K (6 mK) for 15 days, setting a record for the lowest temperature in the known universe over such a large contiguous volume.
In June 2015, experimental physicists at MIT cooled molecules in a gas of sodium potassium to a temperature of 500 nanokelvin, and it is expected to exhibit an exotic state of matter by cooling these molecules somewhat further.
In 2017, the Cold Atom Laboratory (CAL), an experimental instrument, was developed for launch to the International Space Station (ISS) in 2018. The instrument has created extremely cold conditions in the microgravity environment of the ISS, leading to the formation of Bose–Einstein condensates. In this space-based laboratory, temperatures in the picokelvin range are projected to be achievable, and this could further the exploration of unknown quantum mechanical phenomena and test some of the most fundamental laws of physics.
The current world record for effective temperatures was set in 2021 at 38 picokelvins through matter-wave lensing of rubidium Bose–Einstein condensates.
See also
Degenerate matter
Kelvin (unit of temperature)
Charles's law
Heat
International Temperature Scale of 1990
Orders of magnitude (temperature)
Thermodynamic temperature
Triple point
Ultracold atom
Kinetic energy
Entropy
Planck temperature and Hagedorn temperature, hypothetical upper limits to the thermodynamic temperature scale
References
Further reading
BIPM Mise en pratique - Kelvin - Appendix 2 - SI Brochure.
External links
"Absolute zero": a two part NOVA episode originally aired January 2008
"What is absolute zero?" Lansing State Journal
|
Cold;Cryogenics;Temperature
|
https://en.wikipedia.org/wiki/Adiabatic%20process
|
An adiabatic process is a type of thermodynamic process that occurs without transferring heat between the thermodynamic system and its environment. Unlike an isothermal process, an adiabatic process transfers energy to the surroundings only as work and/or mass flow. As a key concept in thermodynamics, the adiabatic process supports the theory that explains the first law of thermodynamics. The opposite term to "adiabatic" is diabatic.
Some chemical and physical processes occur too rapidly for energy to enter or leave the system as heat, allowing a convenient "adiabatic approximation". For example, the adiabatic flame temperature uses this approximation to calculate the upper limit of flame temperature by assuming combustion loses no heat to its surroundings.
In meteorology, adiabatic expansion and cooling of moist air, which can be triggered by winds flowing up and over a mountain for example, can cause the water vapor pressure to exceed the saturation vapor pressure. Expansion and cooling beyond the saturation vapor pressure is often idealized as a pseudo-adiabatic process whereby excess vapor instantly precipitates into water droplets. The change in temperature of air undergoing pseudo-adiabatic expansion differs from that of air undergoing adiabatic expansion because latent heat is released by precipitation.
Description
A process without transfer of heat to or from a system, so that Q = 0, is called adiabatic, and such a system is said to be adiabatically isolated. The simplifying assumption frequently made is that a process is adiabatic. For example, the compression of a gas within a cylinder of an engine is assumed to occur so rapidly that on the time scale of the compression process, little of the system's energy can be transferred out as heat to the surroundings. Even though the cylinders are not insulated and are quite conductive, that process is idealized to be adiabatic. The same can be said to be true for the expansion process of such a system.
The assumption of adiabatic isolation is useful and often combined with other such idealizations to calculate a good first approximation of a system's behaviour. For example, according to Laplace, when sound travels in a gas, there is no time for heat conduction in the medium, and so the propagation of sound is adiabatic. For such an adiabatic process, the modulus of elasticity (Young's modulus) can be expressed as E = γP, where γ is the ratio of specific heats at constant pressure and at constant volume (γ = C_P/C_V) and P is the pressure of the gas.
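Laplace's adiabatic treatment gives the sound speed c = √(γP/ρ). The check below is illustrative; the sea-level values γ = 1.4, P = 101325 Pa, and ρ = 1.225 kg/m³ for air are standard assumptions, not figures from this article:

```python
import math

def sound_speed(gamma: float, pressure: float, density: float) -> float:
    """Adiabatic (Laplace) speed of sound, m/s: sqrt(gamma * P / rho)."""
    return math.sqrt(gamma * pressure / density)

print(sound_speed(1.4, 101325.0, 1.225))  # ~340 m/s, close to observation
print(math.sqrt(101325.0 / 1.225))        # isothermal estimate, only ~288 m/s
# The shortfall of the isothermal estimate is why the adiabatic
# idealization is the right one for sound propagation.
```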
Various applications of the adiabatic assumption
For a closed system, one may write the first law of thermodynamics as ΔU = Q − W, where ΔU denotes the change of the system's internal energy, Q the quantity of energy added to it as heat, and W the work done by the system on its surroundings.
If the system has such rigid walls that work cannot be transferred in or out (W = 0), and the walls are not adiabatic and energy is added in the form of heat (Q > 0), and there is no phase change, then the temperature of the system will rise.
If the system has such rigid walls that pressure–volume work cannot be done, but the walls are adiabatic (Q = 0), and energy is added as isochoric (constant volume) work in the form of friction or the stirring of a viscous fluid within the system (W < 0), and there is no phase change, then the temperature of the system will rise.
If the system walls are adiabatic (Q = 0) but not rigid (W ≠ 0), and, in a fictive idealized process, energy is added to the system in the form of frictionless, non-viscous pressure–volume work (W < 0), and there is no phase change, then the temperature of the system will rise. Such a process is called an isentropic process and is said to be "reversible". Ideally, if the process were reversed the energy could be recovered entirely as work done by the system. If the system contains a compressible gas and is reduced in volume, the uncertainty of the position of the gas is reduced, and seemingly would reduce the entropy of the system, but the temperature of the system will rise as the process is isentropic (ΔS = 0). Should the work be added in such a way that friction or viscous forces are operating within the system, then the process is not isentropic, and if there is no phase change, then the temperature of the system will rise, the process is said to be "irreversible", and the work added to the system is not entirely recoverable in the form of work.
If the walls of a system are not adiabatic, and energy is transferred in as heat, entropy is transferred into the system with the heat. Such a process is neither adiabatic nor isentropic, having Q > 0 and ΔS > 0 according to the second law of thermodynamics.
Naturally occurring adiabatic processes are irreversible (entropy is produced).
The transfer of energy as work into an adiabatically isolated system can be imagined as being of two idealized extreme kinds. In one such kind, no entropy is produced within the system (no friction, viscous dissipation, etc.), and the work is only pressure–volume work (denoted by P dV). In nature, this ideal kind occurs only approximately because it demands an infinitely slow process and no sources of dissipation.
The other extreme kind of work is isochoric work (dV = 0), for which energy is added as work solely through friction or viscous dissipation within the system. A stirrer that transfers energy to a viscous fluid of an adiabatically isolated system with rigid walls, without phase change, will cause a rise in temperature of the fluid, but that work is not recoverable. Isochoric work is irreversible. The second law of thermodynamics observes that a natural process, of transfer of energy as work, always consists at least of isochoric work and often both of these extreme kinds of work. Every natural process, adiabatic or not, is irreversible, with ΔS > 0, as friction or viscosity are always present to some extent.
Adiabatic compression and expansion
The adiabatic compression of a gas causes a rise in temperature of the gas. Adiabatic expansion against pressure, or a spring, causes a drop in temperature. In contrast, free expansion is an isothermal process for an ideal gas.
Adiabatic compression occurs when the pressure of a gas is increased by work done on it by its surroundings, e.g., a piston compressing a gas contained within a cylinder and raising the temperature; in many practical situations, heat conduction through the walls is slow compared with the compression time. This finds practical application in diesel engines which rely on the lack of heat dissipation during the compression stroke to elevate the fuel vapor temperature sufficiently to ignite it.
Adiabatic compression occurs in the Earth's atmosphere when an air mass descends, for example, in a Katabatic wind, Foehn wind, or Chinook wind flowing downhill over a mountain range. When a parcel of air descends, the pressure on the parcel increases. Because of this increase in pressure, the parcel's volume decreases and its temperature increases as work is done on the parcel of air, thus increasing its internal energy, which manifests itself by a rise in the temperature of that mass of air. The parcel of air can only slowly dissipate the energy by conduction or radiation (heat), and to a first approximation it can be considered adiabatically isolated and the process an adiabatic process.
Adiabatic expansion occurs when the pressure on an adiabatically isolated system is decreased, allowing it to expand in size, thus causing it to do work on its surroundings. When the pressure applied on a parcel of gas is reduced, the gas in the parcel is allowed to expand; as the volume increases, the temperature falls as its internal energy decreases. Adiabatic expansion occurs in the Earth's atmosphere with orographic lifting and lee waves, and this can form pilei or lenticular clouds.
Due in part to adiabatic expansion in mountainous areas, snowfall infrequently occurs in some parts of the Sahara desert.
Adiabatic expansion does not have to involve a fluid. One technique used to reach very low temperatures (thousandths and even millionths of a degree above absolute zero) is via adiabatic demagnetisation, where the change in magnetic field on a magnetic material is used to provide adiabatic expansion. Also, the contents of an expanding universe can be described (to first order) as an adiabatically expanding fluid. (See heat death of the universe.)
Rising magma also undergoes adiabatic expansion before eruption, particularly significant in the case of magmas that rise quickly from great depths such as kimberlites.
In the Earth's convecting mantle (the asthenosphere) beneath the lithosphere, the mantle temperature is approximately an adiabat. The slight decrease in temperature with shallowing depth is due to the decrease in pressure the shallower the material is in the Earth.
Such temperature changes can be quantified using the ideal gas law, or the hydrostatic equation for atmospheric processes.
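For the atmospheric case, the quantification reduces to the dry adiabatic lapse rate g/c_p. The two constants below are standard values for Earth and dry air, assumed here for illustration rather than taken from this article:

```python
G = 9.81      # gravitational acceleration, m/s^2
C_P = 1005.0  # specific heat of dry air at constant pressure, J/(kg K)

lapse_rate = G / C_P        # K of cooling per metre of adiabatic ascent
print(lapse_rate * 1000.0)  # ~9.8 K per kilometre
```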
In practice, no process is truly adiabatic. Many processes rely on a large difference in time scales of the process of interest and the rate of heat dissipation across a system boundary, and thus are approximated by using an adiabatic assumption. There is always some heat loss, as no perfect insulators exist.
Ideal gas (reversible process)
The mathematical equation for an ideal gas undergoing a reversible (i.e., no entropy generation) adiabatic process can be represented by the polytropic process equation
P V^γ = constant,
where P is pressure, V is volume, and γ is the adiabatic index or heat capacity ratio defined as
γ = C_P / C_V = (f + 2) / f.
Here C_P is the specific heat for constant pressure, C_V is the specific heat for constant volume, and f is the number of degrees of freedom (3 for a monatomic gas, 5 for a diatomic gas or a gas of linear molecules such as carbon dioxide).
For a monatomic ideal gas, γ = 5/3, and for a diatomic gas (such as nitrogen and oxygen, the main components of air), γ = 7/5. Note that the above formula is only applicable to classical ideal gases (that is, gases far above absolute zero temperature) and not Bose–Einstein or Fermi gases.
One can also use the ideal gas law to rewrite the above relationship between P and V as
T V^(γ − 1) = constant and P^(1 − γ) T^γ = constant,
where T is the absolute or thermodynamic temperature.
Example of adiabatic compression
The compression stroke in a gasoline engine can be used as an example of adiabatic compression. The model assumptions are: the uncompressed volume of the cylinder is one litre (1 L = 1000 cm³ = 0.001 m³); the gas within is the air consisting of molecular nitrogen and oxygen only (thus a diatomic gas with 5 degrees of freedom, and so γ = 7/5); the compression ratio of the engine is 10:1 (that is, the 1 L volume of uncompressed gas is reduced to 0.1 L by the piston); and the uncompressed gas is at approximately room temperature and pressure (a warm room temperature of ~27 °C, or 300 K, and a pressure of 1 bar = 100 kPa, i.e. typical sea-level atmospheric pressure).
so the adiabatic constant for this example is about
The gas is now compressed to a 0.1 L (0.0001 m3) volume, which we assume happens quickly enough that no heat enters or leaves the gas through the walls. The adiabatic constant remains the same, but with the resulting pressure unknown
We can now solve for the final pressure
or 25.1 bar. This pressure increase is more than a simple 10:1 compression ratio would indicate; this is because the gas is not only compressed, but the work done to compress the gas also increases its internal energy, which manifests itself by a rise in the gas temperature and an additional rise in pressure above what would result from a simplistic calculation of 10 times the original pressure.
We can solve for the temperature of the compressed gas in the engine cylinder as well, using the ideal gas law, PV = nRT (n is amount of gas in moles and R the gas constant for that gas). Our initial conditions being 100 kPa of pressure, 1 L volume, and 300 K of temperature, our experimental constant (nR) is:
We know the compressed gas has = 0.1 L and = , so we can solve for temperature:
That is a final temperature of 753 K, or 479 °C, or 896 °F, well above the ignition point of many fuels. This is why a high-compression engine requires fuels specially formulated to not self-ignite (which would cause engine knocking when operated under these conditions of temperature and pressure), or a supercharger with an intercooler to provide a pressure boost with a lower temperature rise. A diesel engine operates under even more extreme conditions, with compression ratios of 16:1 or more being typical, in order to provide a very high gas pressure, which ensures immediate ignition of the injected fuel.
Adiabatic free expansion of a gas
For an adiabatic free expansion of an ideal gas, the gas is contained in an insulated container and then allowed to expand into a vacuum. Because there is no external pressure for the gas to expand against, the work done by or on the system is zero. Since this process does not involve any heat transfer or work, the first law of thermodynamics then implies that the net internal energy change of the system is zero. For an ideal gas, the temperature remains constant because the internal energy only depends on temperature in that case. Since, at constant temperature, the entropy of an ideal gas grows with the logarithm of its volume, the entropy increases in this case; therefore this process is irreversible.
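As a quick check of the entropy claim, using the standard ideal-gas entropy relation for $n$ moles expanding freely from volume $V_1$ to $V_2$ at constant temperature:
$$\Delta S = n R \ln\frac{V_2}{V_1} > 0 \quad \text{for } V_2 > V_1,$$
so, for example, doubling the volume produces $\Delta S = n R \ln 2$ even though no heat flows.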
Derivation of P–V relation for adiabatic compression and expansion
The definition of an adiabatic process is that heat transfer to the system is zero, $\delta Q = 0$. Then, according to the first law of thermodynamics,
$$dU + \delta W = \delta Q = 0, \quad \text{(a1)}$$
where $dU$ is the change in the internal energy of the system and $\delta W$ is work done by the system. Any work ($\delta W$) done must be done at the expense of internal energy $U$, since no heat $\delta Q$ is being supplied from the surroundings. Pressure–volume work $\delta W$ done by the system is defined as
$$\delta W = P \, dV. \quad \text{(a2)}$$
However, $P$ does not remain constant during an adiabatic process but instead changes along with $V$.
It is desired to know how the values of $dP$ and $dV$ relate to each other as the adiabatic process proceeds. For an ideal gas (recall ideal gas law $PV = nRT$) the internal energy is given by
$$U = \alpha n R T = \alpha P V, \quad \text{(a3)}$$
where $\alpha$ is the number of degrees of freedom divided by 2, $R$ is the universal gas constant and $n$ is the number of moles in the system (a constant).
Differentiating equation (a3) yields
$$dU = \alpha n R \, dT = \alpha \, d(PV) = \alpha (P \, dV + V \, dP). \quad \text{(a4)}$$
Equation (a4) is often expressed as $dU = n C_V \, dT$ because $C_V = \alpha R$.
Now substitute equations (a2) and (a4) into equation (a1) to obtain
$$-P \, dV = \alpha P \, dV + \alpha V \, dP,$$
factorize $-P \, dV$:
$$-(\alpha + 1) P \, dV = \alpha V \, dP,$$
and divide both sides by $PV$:
$$-(\alpha + 1) \frac{dV}{V} = \alpha \frac{dP}{P}.$$
After integrating the left and right sides from $V_1$ to $V_2$ and from $P_1$ to $P_2$ and changing the sides respectively,
$$\ln\frac{P_2}{P_1} = -\frac{\alpha + 1}{\alpha} \ln\frac{V_2}{V_1}.$$
Exponentiate both sides, substitute $\frac{\alpha + 1}{\alpha}$ with $\gamma$, the heat capacity ratio
$$\frac{P_2}{P_1} = \left(\frac{V_2}{V_1}\right)^{-\gamma},$$
and eliminate the negative sign to obtain
$$\frac{P_2}{P_1} = \left(\frac{V_1}{V_2}\right)^{\gamma}.$$
Therefore,
$$P_1 V_1^{\gamma} = P_2 V_2^{\gamma},$$
and
$$P V^{\gamma} = \text{constant}.$$
Derivation of discrete formula and work expression
The change in internal energy of a system, measured from state 1 to state 2, is equal to
$$\Delta U = \alpha n R \, \Delta T = \alpha n R (T_2 - T_1). \quad \text{(c1)}$$
At the same time, the work done by the pressure–volume changes as a result from this process, is equal to
$$W = \int_{V_1}^{V_2} P \, dV. \quad \text{(c2)}$$
Since we require the process to be adiabatic, the following equation needs to be true
$$\Delta U + W = 0. \quad \text{(c3)}$$
By the previous derivation,
$$P V^{\gamma} = \text{constant} = P_1 V_1^{\gamma}. \quad \text{(c4)}$$
Rearranging (c4) gives
$$P = P_1 \left(\frac{V_1}{V}\right)^{\gamma}.$$
Substituting this into (c2) gives
$$W = \int_{V_1}^{V_2} P_1 \left(\frac{V_1}{V}\right)^{\gamma} dV.$$
Integrating we obtain the expression for work,
$$W = P_1 V_1^{\gamma} \, \frac{V_2^{1-\gamma} - V_1^{1-\gamma}}{1 - \gamma}.$$
Substituting $\gamma = \frac{\alpha + 1}{\alpha}$ in the second term,
$$W = -\alpha P_1 V_1^{\gamma} \left(V_2^{-1/\alpha} - V_1^{-1/\alpha}\right).$$
Rearranging,
$$W = -\alpha P_1 V_1 \left(\left(\frac{V_2}{V_1}\right)^{-1/\alpha} - 1\right).$$
Using the ideal gas law and assuming a constant molar quantity (as often happens in practical cases),
$$W = -\alpha n R T_1 \left(\left(\frac{V_2}{V_1}\right)^{-1/\alpha} - 1\right).$$
By the continuous formula,
$$\frac{P_2}{P_1} = \left(\frac{V_2}{V_1}\right)^{-\gamma},$$
or
$$\left(\frac{P_2}{P_1}\right)^{-1/\gamma} = \frac{V_2}{V_1}.$$
Substituting into the previous expression for $W$,
$$W = -\alpha n R T_1 \left(\left(\frac{P_2}{P_1}\right)^{\frac{\gamma - 1}{\gamma}} - 1\right).$$
Substituting this expression and (c1) in (c3) gives
$$\alpha n R (T_2 - T_1) = \alpha n R T_1 \left(\left(\frac{P_2}{P_1}\right)^{\frac{\gamma - 1}{\gamma}} - 1\right).$$
Simplifying,
$$T_2 = T_1 \left(\frac{P_2}{P_1}\right)^{\frac{\gamma - 1}{\gamma}}.$$
Graphing adiabats
An adiabat is a curve of constant entropy in a P–V diagram. Some properties of adiabats on a P–V diagram are listed below. These properties may be read from the classical behaviour of ideal gases, except in the region where PV becomes small (low temperature), where quantum effects become important.
Every adiabat asymptotically approaches both the V axis and the P axis (just like isotherms).
Each adiabat intersects each isotherm exactly once.
An adiabat looks similar to an isotherm, except that during an expansion, an adiabat loses more pressure than an isotherm, so it has a steeper inclination (more vertical); see the slope comparison after this list.
If isotherms are concave towards the north-east direction (45° from V-axis), then adiabats are concave towards the east north-east (31° from V-axis).
If adiabats and isotherms are graphed at regular intervals of entropy and temperature, respectively (like altitude on a contour map), then as the eye moves towards the axes (towards the south-west), it sees the density of isotherms stay constant, but it sees the density of adiabats grow. The exception is very near absolute zero, where the density of adiabats drops sharply and they become rare (see Nernst's theorem).
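The steepness property noted above can be checked directly from the two ideal-gas relations already introduced (a one-line sketch):
$$PV = \text{const} \implies \frac{dP}{dV} = -\frac{P}{V} \ \text{(isotherm)}, \qquad PV^{\gamma} = \text{const} \implies \frac{dP}{dV} = -\gamma\,\frac{P}{V} \ \text{(adiabat)}.$$
Since $\gamma > 1$, the adiabat through any point falls more steeply than the isotherm through the same point.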
Etymology
The term adiabatic is an anglicization of the Greek term ἀδιάβατος "impassable" (used by Xenophon of rivers). It was used in the thermodynamic sense by Rankine (1866), and adopted by Maxwell in 1871 (explicitly attributing the term to Rankine).
The etymological origin corresponds here to an impossibility of transfer of energy as heat and of transfer of matter across the wall.
The Greek word ἀδιάβατος is formed from privative ἀ- ("not") and διαβατός, "passable", in turn deriving from διά ("through"), and βαῖνειν ("to walk, go, come").
Furthermore, in atmospheric thermodynamics, a diabatic process is one in which heat is exchanged. An adiabatic process is the opposite – a process in which no heat is exchanged.
Conceptual significance in thermodynamic theory
The adiabatic process has been important for thermodynamics since its early days. It was important in the work of Joule because it provided a way of nearly directly relating quantities of heat and work.
Energy can enter or leave a thermodynamic system enclosed by walls that prevent mass transfer only as heat or work. Therefore, a quantity of work in such a system can be related almost directly to an equivalent quantity of heat in a cycle of two limbs. The first limb is an isochoric adiabatic work process increasing the system's internal energy; the second, an isochoric and workless heat transfer returning the system to its original state. Accordingly, Rankine measured quantity of heat in units of work, rather than as a calorimetric quantity. In 1854, Rankine used a quantity that he called "the thermodynamic function" that later was called entropy, and at that time he wrote also of the "curve of no transmission of heat", which he later called an adiabatic curve. Besides its two isothermal limbs, Carnot's cycle has two adiabatic limbs.
For the foundations of thermodynamics, the conceptual importance of this was emphasized by Bryan, by Carathéodory, and by Born. The reason is that calorimetry presupposes a type of temperature as already defined before the statement of the first law of thermodynamics, such as one based on empirical scales. Such a presupposition involves making the distinction between empirical temperature and absolute temperature. Rather, the definition of absolute thermodynamic temperature is best left till the second law is available as a conceptual basis.
In the eighteenth century, the law of conservation of energy was not yet fully formulated or established, and the nature of heat was debated. One approach to these problems was to regard heat, measured by calorimetry, as a primary substance that is conserved in quantity. By the middle of the nineteenth century, it was recognized as a form of energy, and the law of conservation of energy was thereby also recognized. The view that eventually established itself, and is currently regarded as right, is that the law of conservation of energy is a primary axiom, and that heat is to be analyzed as consequential. In this light, heat cannot be a component of the total energy of a single body because it is not a state variable but, rather, a variable that describes a transfer between two bodies. The adiabatic process is important because it is a logical ingredient of this current view.
Divergent usages of the word adiabatic
This present article is written from the viewpoint of macroscopic thermodynamics, and the word adiabatic is used in this article in the traditional way of thermodynamics, introduced by Rankine. It is pointed out in the present article that, for example, if a compression of a gas is rapid, then there is little time for heat transfer to occur, even when the gas is not adiabatically isolated by a definite wall. In this sense, a rapid compression of a gas is sometimes approximately or loosely said to be adiabatic, though often far from isentropic.
Some authors, like Pippard, recommend using "adiathermal" to refer to processes where no heat exchange occurs (such as Joule expansion), and "adiabatic" for reversible quasi-static adiathermal processes (so that rapid compression of a gas is not "adiabatic"). Laidler has summarized the complicated etymology of "adiabatic".
Quantum mechanics and quantum statistical mechanics, however, use the word adiabatic in a very different sense, one that can at times seem almost opposite to the classical thermodynamic sense. In quantum theory, the word adiabatic can mean something perhaps near isentropic, or perhaps near quasi-static, but the usage of the word is very different between the two disciplines.
On the one hand, in quantum theory, if a perturbative element of compressive work is done almost infinitely slowly (that is to say quasi-statically), it is said to have been done adiabatically. The idea is that the shapes of the eigenfunctions change slowly and continuously, so that no quantum jump is triggered, and the change is virtually reversible. While the occupation numbers are unchanged, nevertheless there is a change in the energy levels of the one-to-one corresponding pre- and post-compression eigenstates. Thus a perturbative element of work has been done without heat transfer and without introduction of random change within the system.
On the other hand, in quantum theory, if a perturbative element of compressive work is done rapidly, it changes the occupation numbers and energies of the eigenstates in proportion to the transition moment integral and in accordance with time-dependent perturbation theory, as well as perturbing the functional form of the eigenstates themselves. In that theory, such a rapid change is said not to be adiabatic, and the contrary word diabatic is applied to it.
Recent research suggests that the power absorbed from the perturbation corresponds to the rate of these non-adiabatic transitions. This corresponds to the classical process of energy transfer in the form of heat, but with the relative time scales reversed in the quantum case. Quantum adiabatic processes occur over relatively long time scales, while classical adiabatic processes occur over relatively short time scales. Note also that the concept of 'heat' (in reference to the quantity of thermal energy transferred) breaks down at the quantum level, and the specific form of energy (typically electromagnetic) must be considered instead. The small or negligible absorption of energy from the perturbation in a quantum adiabatic process provides a good justification for identifying it as the quantum analogue of adiabatic processes in classical thermodynamics, and for the reuse of the term.
In classical thermodynamics, such a rapid change would still be called adiabatic because the system is adiabatically isolated, and there is no transfer of energy as heat. The strong irreversibility of the change, due to viscosity or other entropy production, does not impinge on this classical usage.
Thus for a mass of gas, in macroscopic thermodynamics, words are so used that a compression is sometimes loosely or approximately said to be adiabatic if it is rapid enough to avoid significant heat transfer, even if the system is not adiabatically isolated. But in quantum statistical theory, a compression is not called adiabatic if it is rapid, even if the system is adiabatically isolated in the classical thermodynamic sense of the term. The words are used differently in the two disciplines, as stated just above.
See also
Fire piston
Heat burst
Related physics topics
First law of thermodynamics
Entropy (classical thermodynamics)
Adiabatic conductivity
Adiabatic lapse rate
Total air temperature
Magnetic refrigeration
Berry phase
Related thermodynamic processes
Cyclic process
Isobaric process
Isenthalpic process
Isentropic process
Isochoric process
Isothermal process
Polytropic process
Quasistatic process
References
General
Nave, Carl Rod. "Adiabatic Processes". HyperPhysics.
Thorngren, Dr. Jane R. "Adiabatic Processes". Daphne – A Palomar College Web Server, 21 July 1995.
External links
Article in HyperPhysics Encyclopaedia
|
Atmospheric thermodynamics;Entropy;Thermodynamic processes
|
https://en.wikipedia.org/wiki/APL%20%28programming%20language%29
|
APL (named after the book A Programming Language) is a programming language developed in the 1960s by Kenneth E. Iverson. Its central datatype is the multidimensional array. It uses a large range of special graphic symbols to represent most functions and operators, leading to very concise code. It has been an important influence on the development of concept modeling, spreadsheets, functional programming, and computer math packages. It has also inspired several other programming languages.
History
Mathematical notation
A mathematical notation for manipulating arrays was developed by Kenneth E. Iverson, starting in 1957 at Harvard University. In 1960, he began work for IBM where he developed this notation with Adin Falkoff and published it in his book A Programming Language in 1962. The preface states its premise:
This notation was used inside IBM for short research reports on computer systems, such as the Burroughs B5000 and its stack mechanism when stack machines versus register machines were being evaluated by IBM for upcoming computers.
Iverson also used his notation in a draft of the chapter A Programming Language, written for a book he was writing with Fred Brooks, Automatic Data Processing, which would be published in 1963.
In 1979, Iverson received the Turing Award for his work on APL.
Development into a computer programming language
As early as 1962, the first attempt to use the notation to describe a complete computer system happened after Falkoff discussed with William C. Carter his work to standardize the instruction set for the machines that later became the IBM System/360 family.
In 1963, Herbert Hellerman, working at the IBM Systems Research Institute, implemented a part of the notation on an IBM 1620 computer, and it was used by students in a special high school course on calculating transcendental functions by series summation. Students tested their code in Hellerman's lab. This implementation of a part of the notation was called Personalized Array Translator (PAT).
In 1963, Falkoff, Iverson, and Edward H. Sussenguth Jr., all working at IBM, used the notation for a formal description of the IBM System/360 series machine architecture and functionality, which resulted in a paper published in IBM Systems Journal in 1964. After this was published, the team turned their attention to an implementation of the notation on a computer system. One of the motivations for this focus of implementation was the interest of John L. Lawrence who had new duties with Science Research Associates, an educational company bought by IBM in 1964. Lawrence asked Iverson and his group to help use the language as a tool to develop and use computers in education.
After Lawrence M. Breed and Philip S. Abrams of Stanford University joined the team at IBM Research, they continued their prior work on an implementation programmed in FORTRAN IV for a part of the notation which had been done for the IBM 7090 computer running on the IBSYS operating system. This work was finished in late 1965 and later named IVSYS (for Iverson system). The basis of this implementation was described in detail by Abrams in a Stanford University Technical Report, "An Interpreter for Iverson Notation" in 1966. The academic aspect of this was formally supervised by Niklaus Wirth. Like Hellerman's PAT system earlier, this implementation omitted the APL character set, but used special English reserved words for functions and operators. The system was later adapted for a time-sharing system and, by November 1966, it had been reprogrammed for the IBM System/360 Model 50 computer running in a time-sharing mode and was used internally at IBM.
Hardware
A key development in the ability to use APL effectively, before the wide use of cathode-ray tube (CRT) terminals, was the development of a special IBM Selectric typewriter interchangeable typing element with all the special APL characters on it. This was used on paper printing terminal workstations using the Selectric typewriter and typing element mechanism, such as the IBM 1050 and IBM 2741 terminal. Keycaps could be placed over the normal keys to show which APL characters would be entered and typed when that key was struck. For the first time, a programmer could type in and see proper APL characters as used in Iverson's notation and not be forced to use awkward English keyword representations of them. Falkoff and Iverson had the special APL Selectric typing elements, 987 and 988, designed in late 1964, although no APL computer system was available to use them. Iverson cited Falkoff as the inspiration for the idea of using an IBM Selectric typing element for the APL character set.
Many APL symbols, even with the APL characters on the Selectric typing element, still had to be typed in by over-striking two extant element characters. An example is the grade up character, which had to be made from a delta (shift-H) and a Sheffer stroke (shift-M). This was necessary because the APL character set was much larger than the 88 characters allowed on the typing element, even when letters were restricted to upper-case (capitals).
Commercial availability
The first APL interactive login and creation of an APL workspace was in 1966 by Larry Breed using an IBM 1050 terminal at the IBM Mohansic Labs near Thomas J. Watson Research Center, the home of APL, in Yorktown Heights, New York.
IBM was chiefly responsible for introducing APL to the marketplace. The first publicly available version of APL was released in 1968 for the IBM 1130. IBM provided APL\1130 for free but without liability or support. It would run in as little as 8k 16-bit words of memory, and used a dedicated 1 megabyte hard disk.
APL gained its foothold on mainframe timesharing systems from the late 1960s through the early 1980s, in part because it would support multiple users on lower-specification systems that had no dynamic address translation hardware. Additional improvements in performance for selected IBM System/370 mainframe systems included the APL Assist Microcode, in which some support for APL execution was included in the processor's firmware, as distinct from being implemented entirely by higher-level software. Somewhat later, as suitably performing hardware finally became available in the mid- to late-1980s, many users migrated their applications to the personal computer environment.
Early IBM APL interpreters for IBM 360 and IBM 370 hardware implemented their own multi-user management instead of relying on the host services, thus they were their own timesharing systems. First introduced for use at IBM in 1966, the APL\360 system was a multi-user interpreter. The ability to programmatically communicate with the operating system for information and setting interpreter system variables was done through special privileged "I-beam" functions, using both monadic and dyadic operations.
In 1973, IBM released APL.SV, which was a continuation of the same product, but which offered shared variables as a means to access facilities outside of the APL system, such as operating system files. In the mid-1970s, the IBM mainframe interpreter was even adapted for use on the IBM 5100 desktop computer, which had a small CRT and an APL keyboard, when most other small computers of the time only offered BASIC. In the 1980s, the VSAPL program product enjoyed wide use with Conversational Monitor System (CMS), Time Sharing Option (TSO), VSPC, MUSIC/SP, and CICS users.
In 1973–1974, Patrick E. Hagerty directed the implementation of the University of Maryland APL interpreter for the 1100 line of the Sperry UNIVAC 1100/2200 series mainframe computers. In 1974, student Alan Stebbens was assigned the task of implementing an internal function. Xerox APL was available from June 1975 for Xerox 560 and Sigma 6, 7, and 9 mainframes running CP-V and for Honeywell CP-6.
In the 1960s and 1970s, several timesharing firms arose that sold APL services using modified versions of the IBM APL\360 interpreter. In North America, the better-known ones were IP Sharp Associates, Scientific Time Sharing Corporation (STSC), Time Sharing Resources (TSR), and The Computer Company (TCC). CompuServe also entered the market in 1978 with an APL interpreter based on a modified version of Digital Equipment Corp and Carnegie Mellon's, which ran on DEC's KI and KL 36-bit machines. CompuServe's APL was available both to its commercial market and the consumer information service. With the advent first of less expensive mainframes such as the IBM 4300, and later the personal computer, by the mid-1980s the timesharing industry was all but gone.
Sharp APL was available from IP Sharp Associates, first as a timesharing service in the 1960s, and later as a program product starting around 1979. Sharp APL was an advanced APL implementation with many language extensions, such as packages (the ability to put one or more objects into a single variable), a file system, nested arrays, and shared variables.
APL interpreters were available from other mainframe and mini-computer manufacturers also, notably Burroughs, Control Data Corporation (CDC), Data General, Digital Equipment Corporation (DEC), Harris, Hewlett-Packard (HP), Siemens, Xerox and others.
Garth Foster of Syracuse University sponsored regular meetings of the APL implementers' community at Syracuse's Minnowbrook Conference Center in Blue Mountain Lake, New York. In later years, Eugene McDonnell organized similar meetings at the Asilomar Conference Grounds near Monterey, California, and at Pajaro Dunes near Watsonville, California. The SIGAPL special interest group of the Association for Computing Machinery continues to support the APL community.
Microcomputers
On microcomputers, which became available from the mid-1970s onwards, BASIC became the dominant programming language. Nevertheless, some microcomputers provided APL instead – the first being the Intel 8008-based MCM/70 which was released in 1974 and which was primarily used in education. Another machine of this time was the VideoBrain Family Computer, released in 1977, which was supplied with its dialect of APL called APL/S.
The Commodore SuperPET, introduced in 1981, included an APL interpreter developed by the University of Waterloo.
In 1976, Bill Gates claimed in his Open Letter to Hobbyists that Microsoft Corporation was implementing APL for the Intel 8080 and Motorola 6800 but had "very little incentive to make [it] available to hobbyists" because of software piracy. It was never released.
APL2
Starting in the early 1980s, IBM APL development, under the leadership of Jim Brown, implemented a new version of the APL language that contained as its primary enhancement the concept of nested arrays, where an array can contain other arrays, and new language features which facilitated integrating nested arrays into program workflow. Ken Iverson, no longer in control of the development of the APL language, left IBM and joined I. P. Sharp Associates, where one of his major contributions was directing the evolution of Sharp APL to be more in accord with his vision. APL2 was first released for CMS and TSO in 1984. The APL2 Workstation edition (Windows, OS/2, AIX, Linux, and Solaris) followed later.
As other vendors were busy developing APL interpreters for new hardware, notably Unix-based microcomputers, APL2 was almost always the standard chosen for new APL interpreter developments. Even today, most APL vendors or their users cite APL2 compatibility as a selling point for those products. IBM cites its use for problem solving, system design, prototyping, engineering and scientific computations, expert systems, for teaching mathematics and other subjects, visualization and database access.
Modern implementations
Various implementations of APL by APLX, Dyalog, et al., include extensions for object-oriented programming, support for .NET, XML-array conversion primitives, graphing, operating system interfaces, and lambda calculus expressions. Freeware versions include GNU APL for Linux and NARS2000 for Windows (which also runs on Linux under Wine). Both of these are fairly complete versions of APL2 with various language extensions.
Derivative languages
APL has formed the basis of, or influenced, the following languages:
A and A+, an alternative APL, the latter with graphical extensions.
FP, a functional programming language.
Ivy, an interpreter for an APL-like language developed by Rob Pike, and which uses ASCII as input.
J, which was also designed by Iverson, and which uses ASCII with digraphs instead of special symbols.
K, a proprietary variant of APL developed by Arthur Whitney.
MATLAB, a numerical computation tool.
Nial, a high-level array programming language with a functional programming notation.
Polymorphic Programming Language, an interactive, extensible language with a similar base language.
S, a statistical programming language (usually now seen in the open-source version known as R).
Snap!, a low-code block-based programming language, born as an extended reimplementation of Scratch.
Speakeasy, a numerical computing interactive environment.
Wolfram Language, the programming language of Mathematica.
Language characteristics
Character set
APL has been criticized and praised for its choice of a unique character set. In the 1960s and 1970s, few terminal devices or even displays could reproduce the APL character set. The most popular ones employed the IBM Selectric print mechanism used with a special APL type element. One of the early APL line terminals (line-mode operation only, not full screen) was the Texas Instruments TI Model 745 with the full APL character set, which featured half and full duplex telecommunications modes, for interacting with an APL time-sharing service or remote mainframe to run a remote computer job, remote job entry (RJE).
Over time, with the universal use of high-quality graphic displays, printing devices and Unicode support, the APL character font problem has largely been eliminated. However, entering APL characters requires the use of input method editors, keyboard mappings, virtual/on-screen APL symbol sets, or easy-reference printed keyboard cards which can frustrate beginners accustomed to other programming languages. With beginners who have no prior experience with other programming languages, a study involving high school students found that typing and using APL characters did not hinder the students in any measurable way.
In defense of APL, it requires fewer characters to type, and keyboard mappings become memorized over time. Special APL keyboards are also made and in use today, as are freely downloadable fonts for operating systems such as Microsoft Windows. The reported productivity gains assume that one spends enough time working in the language to make it worthwhile to memorize the symbols, their semantics, keyboard mappings, and many idioms for common tasks.
Design
Unlike traditionally structured programming languages, APL code is typically structured as chains of monadic or dyadic functions, and operators acting on arrays. APL has many nonstandard primitives (functions and operators) that are indicated by a single symbol or a combination of a few symbols. All primitives are defined to have the same precedence, and always associate to the right. Thus, APL is read or best understood from right-to-left.
Early APL implementations (circa 1970 or so) had no programming loop control flow structures, such as do or while loops, and if-then-else constructs. Instead, they used array operations, and use of structured programming constructs was often unneeded, since an operation could be performed on a full array in one statement. For example, the iota function (⍳) can replace for-loop iteration: ⍳N when applied to a scalar positive integer yields a one-dimensional array (vector), 1 2 3 ... N. Later APL implementations generally include comprehensive control structures, so that data structure and program control flow can be clearly and cleanly separated.
The APL environment is called a workspace. In a workspace the user can define programs and data, i.e., the data values exist also outside the programs, and the user can also manipulate the data without having to define a program. In the examples below, the APL interpreter first types six spaces before awaiting the user's input. Its own output starts in column one.
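A minimal interactive sketch of the conventions just described (standard APL with the default index origin of 1): the six leading spaces are the interpreter's input prompt, its results start in column one, expressions evaluate from right to left, and the iota example needs no loop.
      2+3×4
14
      2×⍳5
2 4 6 8 10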
The user can save the workspace with all values, programs, and execution status.
APL uses a set of non-ASCII symbols, which are an extension of traditional arithmetic and algebraic notation. Having single character names for single instruction, multiple data (SIMD) vector functions is one way that APL enables compact formulation of algorithms for data transformation such as computing Conway's Game of Life in one line of code. In nearly all versions of APL, it is theoretically possible to express any computable function in one expression, that is, in one line of code.
Due to the unusual character set, many programmers use special keyboards with APL keytops to write APL code. Although there are various ways to write APL code using only ASCII characters, in practice it is almost never done. (This may be thought to support Iverson's thesis about notation as a tool of thought.) Most if not all modern implementations use standard keyboard layouts, with special mappings or input method editors to access non-ASCII characters. Historically, the APL font has been distinctive, with uppercase italic alphabetic characters and upright numerals and symbols. Most vendors continue to display the APL character set in a custom font.
Advocates of APL claim that the examples of so-called write-only code (badly written and almost incomprehensible code) are almost invariably examples of poor programming practice or novice mistakes, which can occur in any language. Advocates also claim that they are far more productive with APL than with more conventional computer languages, and that working software can be implemented in far less time and with far fewer programmers than using other technology.
They also may claim that because it is compact and terse, APL lends itself well to larger-scale software development and complexity, because the number of lines of code can be reduced greatly. Many APL advocates and practitioners also view standard programming languages such as COBOL and Java as being comparatively tedious. APL is often found where time-to-market is important, such as with trading systems.
Terminology
APL makes a clear distinction between functions and operators. Functions take arrays (variables or constants or expressions) as arguments, and return arrays as results. Operators (similar to higher-order functions) take functions or arrays as arguments, and derive related functions. For example, the sum function is derived by applying the reduction operator to the addition function. Applying the same reduction operator to the maximum function (which returns the larger of two numbers) derives a function which returns the largest of a group (vector) of numbers. In the J language, Iverson substituted the terms verb for function and adverb or conjunction for operator.
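As a small sketch of the operator mechanism described above (standard APL), the reduction operator / applied to addition derives the sum function, and applied to maximum derives the largest-of-vector function:
      +/ 1 2 3 4
10
      ⌈/ 3 1 4 1 5
5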
APL also identifies those features built into the language, and represented by a symbol, or a fixed combination of symbols, as primitives. Most primitives are either functions or operators. Coding APL is largely a process of writing non-primitive functions and (in some versions of APL) operators. However a few primitives are considered to be neither functions nor operators, most noticeably assignment.
Some words used in APL literature have meanings that differ from those in both mathematics and the generality of computer science.
Syntax
APL has explicit representations of functions, operators, and syntax, thus providing a basis for the clear and explicit statement of extended facilities in the language, and tools to experiment on them.
Examples
Hello, world
This displays "Hello, world":
'Hello, world'
A design theme in APL is to define default actions in some cases that would produce syntax errors in most other programming languages.
The 'Hello, world' string constant above displays, because display is the default action on any expression for which no action is specified explicitly (e.g. assignment, function parameter).
Exponentiation
Another example of this theme is that exponentiation in APL is written as 2*3, which indicates raising 2 to the power 3 (this would be written as 2^3 or 2**3 in some languages, or relegated to a function call such as pow(2, 3) in others). Many languages use * to signify multiplication, as in 2*3, but APL uses 2×3 for multiplication. However, if no base is specified (as with the statement *3 in APL, or 2** in other languages), most programming languages would treat this as a syntax error. APL, however, assumes the missing base to be the natural logarithm constant e, and interprets *3 as e^3.
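A brief interactive sketch of both forms (standard APL; the second line applies the monadic form, e raised to the power 1):
      2*3
8
      *1
2.718281828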
Simple statistics
Suppose that X is an array of numbers. Then (+/X)÷⍴X gives its average. Reading right-to-left, ⍴X gives the number of elements in X, and since ÷ is a dyadic operator, the term to its left is required as well. It is surrounded by parentheses since otherwise X would be taken (so that the summation would be of X÷⍴X, that is, each element of X divided by the number of elements in X), and +/X gives the sum of the elements of X. Building on this, the following expression computes standard deviation:
((+/((X - (+/X)÷⍴X)*2))÷⍴X)*0.5
Naturally, one would define this expression as a function for repeated use rather than rewriting it each time. Further, since assignment is an operator, it can appear within an expression, so the following would place suitable values into T, AV and SD:
SD←((+/((X - AV←(T←+/X)÷⍴X)*2))÷⍴X)*0.5
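For instance (a small sketch; the result assumes the five-element vector shown):
      X←1 2 3 4 5
      (+/X)÷⍴X
3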
Pick 6 lottery numbers
This following immediate-mode expression generates a typical set of Pick 6 lottery numbers: six pseudo-random integers ranging from 1 to 40, guaranteed non-repeating, and displays them sorted in ascending order:
x[⍋x←6?40]
The above does a lot, concisely, although it may seem complex to a new APLer. It combines the following APL functions (also called primitives and glyphs):
The first to be executed (APL executes from rightmost to leftmost) is dyadic function ? (named deal when dyadic) that returns a vector consisting of a select number (left argument: 6 in this case) of random integers ranging from 1 to a specified maximum (right argument: 40 in this case), which, if said maximum ≥ vector length, is guaranteed to be non-repeating; thus, generate/create 6 random integers ranging from 1 to 40.
This vector is then assigned (←) to the variable x, because it is needed later.
This vector is then sorted in ascending order by a monadic ⍋ function, which has as its right argument everything to the right of it up to the next unbalanced close-bracket or close-parenthesis. The result of ⍋ is the indices that will put its argument into ascending order.
Then the output of ⍋ is used to index the variable x, which we saved earlier for this purpose, thereby selecting its items in ascending sequence.
Since there is no function to the left of the left-most x to tell APL what to do with the result, it simply outputs it to the display (on a single line, separated by spaces) without needing any explicit instruction to do that.
? also has a monadic equivalent called roll, which simply returns one random integer between 1 and its sole operand [to the right of it], inclusive. Thus, a role-playing game program might use the expression ?20 to roll a twenty-sided die.
Prime numbers
The following expression finds all prime numbers from 1 to R. In both time and space, the calculation complexity is O(R²) (in Big O notation).
(~R∊R∘.×R)/R←1↓⍳R
Executed from right to left, this means:
Iota ⍳ creates a vector containing integers from 1 to R (if R= 6 at the start of the program, ⍳R is 1 2 3 4 5 6)
Drop first element of this vector (↓ function), i.e., 1. So 1↓⍳R is 2 3 4 5 6
Set R to the new vector (←, assignment primitive), i.e., 2 3 4 5 6
The / replicate operator is dyadic (binary) and the interpreter first evaluates its left argument (fully in parentheses):
Generate outer product of R multiplied by R, i.e., a matrix that is the multiplication table of R by R (∘.× operator), i.e.,
 4  6  8 10 12
 6  9 12 15 18
 8 12 16 20 24
10 15 20 25 30
12 18 24 30 36
Build a vector the same length as R with 1 in each place where the corresponding number in R is in the outer product matrix (∈, set inclusion or element of or Epsilon operator), i.e., 0 0 1 0 1
Logically negate (not) values in the vector (change zeros to ones and ones to zeros) (∼, logical not or Tilde operator), i.e., 1 1 0 1 0
Select the items in R for which the corresponding element is 1 (/ replicate operator), i.e., 2 3 5
(This assumes the APL origin is 1, i.e., indices start with 1. APL can be set to use 0 as the origin, so that ⍳6 is 0 1 2 3 4 5, which is convenient for some calculations.)
Sorting
The following expression sorts a word list stored in matrix X according to word length:
X[⍋X+.≠' ';]
Reading right-to-left, X+.≠' ' is an inner product that counts, for each row of X, the characters that differ from blank (i.e., each word's length); ⍋ grades those counts into ascending order; and indexing X by that grade vector (with ; selecting all columns) returns the rows sorted by word length.
Game of Life
The following function "life", written in Dyalog APL, takes a Boolean matrix and calculates the new generation according to Conway's Game of Life. It demonstrates the power of APL to implement a complex algorithm in very little code, but understanding it requires some advanced knowledge of APL (as the same program would in many languages).
life ← {⊃1 ⍵ ∨.∧ 3 4 = +/ +⌿ ¯1 0 1 ∘.⊖ ¯1 0 1 ⌽¨ ⊂⍵}
HTML tags removal
In the following example, also Dyalog, the first line assigns some HTML code to a variable txt and then uses an APL expression to remove all the HTML tags:
txt←'<html><body><p>This is <em>emphasized</em> text.</p></body></html>'
{⍵ /⍨ ~{⍵∨≠\⍵}⍵∊'<>'} txt
This is emphasized text.
Naming
APL derives its name from the initials of Iverson's book A Programming Language, even though the book describes Iverson's mathematical notation, rather than the implemented programming language described in this article. The name is used only for actual implementations, starting with APL\360.
Adin Falkoff coined the name in 1966 during the implementation of APL\360 at IBM:
APL is occasionally re-interpreted as Array Programming Language or Array Processing Language, thereby making APL into a backronym.
Logo
There has always been cooperation between APL vendors, and joint conferences were held on a regular basis from 1969 until 2010. At such conferences, APL merchandise was often handed out, featuring APL motifs or collection of vendor logos. Common were apples (as a pun on the similarity in pronunciation of apple and APL) and the code snippet ⍺⋆⎕, which are the symbols produced by the classic APL keyboard layout when holding the APL modifier key and typing "APL".
Despite all these community efforts, no universal vendor-agnostic logo for the programming language emerged. As popular programming languages increasingly established recognisable logos, with Fortran getting one in 2020, the British APL Association launched a campaign in the second half of 2021 to establish such a logo for APL, and after a community election and multiple rounds of feedback, a logo was chosen in May 2022.
Use
APL is used for many purposes including financial and insurance applications, artificial intelligence, neural networks, and robotics. It has been argued that APL is a calculation tool and not a programming language; its symbolic nature and array capabilities have made it popular with domain experts and data scientists who do not have or require the skills of a computer programmer.
APL is well suited to image manipulation and computer animation, where graphic transformations can be encoded as matrix multiplications. One of the first commercial computer graphics houses, Digital Effects, produced an APL graphics product named Visions, which was used to create television commercials and animation for the 1982 film Tron. Latterly, the Stormwind boating simulator uses APL to implement its core logic, its interfacing to the rendering pipeline middleware and a major part of its physics engine.
Today, APL remains in use in a wide range of commercial and scientific applications, for example investment management, asset management, health care, and DNA profiling.
Notable implementations
APL\360
The first implementation of APL using recognizable APL symbols was APL\360 which ran on the IBM System/360, and was completed in November 1966 though at that time remained in use only within IBM. In 1973 its implementors, Larry Breed, Dick Lathwell and Roger Moore, were awarded the Grace Murray Hopper Award from the Association for Computing Machinery (ACM). It was given "for their work in the design and implementation of APL\360, setting new standards in simplicity, efficiency, reliability and response time for interactive systems."
In 1975, the IBM 5100 microcomputer offered APL\360 as one of two built-in ROM-based interpreted languages for the computer, complete with a keyboard and display that supported all the special symbols used in the language.
Significant developments to APL\360 included CMS/APL, which made use of the virtual storage capabilities of CMS and APLSV, which introduced shared variables, system variables and system functions. It was subsequently ported to the IBM System/370 and VSPC platforms until its final release in 1983, after which it was replaced by APL2.
APL\1130
In 1968, APL\1130 became the first publicly available APL system, created by IBM for the IBM 1130. It became the most popular IBM Type-III Library software that IBM released.
APL*Plus and Sharp APL
APL*Plus and Sharp APL are versions of APL\360 with added business-oriented extensions such as data formatting and facilities to store APL arrays in external files. They were jointly developed by two companies, employing various members of the original IBM APL\360 development team.
The two companies were I. P. Sharp Associates (IPSA), an APL\360 services company formed in 1964 by Ian Sharp, Roger Moore and others, and STSC, a time-sharing and consulting service company formed in 1969 by Lawrence Breed and others. Together the two developed APL*Plus and thereafter continued to work together but develop APL separately as APL*Plus and Sharp APL. STSC ported APL*Plus to many platforms with versions being made for the VAX 11, PC and UNIX, whereas IPSA took a different approach to the arrival of the personal computer and made Sharp APL available on this platform using additional PC-XT/360 hardware. In 1993, Soliton Incorporated was formed to support Sharp APL and it developed Sharp APL into SAX (Sharp APL for Unix). APL*Plus continues as APL2000 APL+Win.
In 1985, Ian Sharp, and Dan Dyer of STSC, jointly received the Kenneth E. Iverson Award for Outstanding Contribution to APL.
APL2
APL2 was a significant re-implementation of APL by IBM which was developed from 1971 and first released in 1984. It provides many additions to the language, of which the most notable is nested (non-rectangular) array support. The entire APL2 Products and Services Team was awarded the Iverson Award in 2007.
In 2021, IBM sold APL2 to Log-On Software, who develop and sell the product as Log-On APL2.
APLGOL
In 1972, APLGOL was released as an experimental version of APL that added structured programming language constructs to the language framework. New statements were added for interstatement control, conditional statement execution, and statement structuring, as well as statements to clarify the intent of the algorithm. It was implemented for Hewlett-Packard in 1977.
Dyalog APL
Dyalog APL was first released by British company Dyalog Ltd. in 1983 and is available for AIX, Linux (including on the Raspberry Pi), macOS and Microsoft Windows platforms. It is based on APL2, with extensions to support object-oriented programming, functional programming, and tacit programming. Licences are free for personal/non-commercial use.
In 1995, two of the development team – John Scholes and Peter Donnelly – were awarded the Iverson Award for their work on the interpreter. Gitte Christensen and Morten Kromberg were joint recipients of the Iverson Award in 2016.
NARS2000
NARS2000 is an open-source APL interpreter written by Bob Smith, a prominent APL developer and implementor from STSC in the 1970s and 1980s. NARS2000 contains advanced features and new datatypes and runs natively on Microsoft Windows, and other platforms under Wine. It is named after a development tool from the 1980s, NARS (Nested Arrays Research System).
APLX
APLX is a cross-platform dialect of APL, based on APL2 and with several extensions, which was first released by British company MicroAPL in 2002. Although no longer in development or on commercial sale, it is now available free of charge from Dyalog.
York APL
York APL was developed at York University, Ontario, around 1968, running on IBM 360 mainframes. One notable difference between it and APL\360 was that it defined the "shape" (⍴) of a scalar as 1 whereas APL\360 defined it as the more mathematically correct 0; this made it easier to write functions that acted the same with scalars and vectors.
GNU APL
GNU APL is a free implementation of Extended APL as specified in ISO/IEC 13751:2001 and is thus an implementation of APL2. It runs on Linux, macOS, several BSD dialects, and on Windows (either using Cygwin for full support of all its system functions or as a native 64-bit Windows binary with some of its system functions missing). GNU APL uses Unicode internally and can be scripted. It was written by Jürgen Sauermann.
Richard Stallman, founder of the GNU Project, was an early adopter of APL, using it to write a text editor as a high school student in the summer of 1969.
Interpretation and compilation of APL
APL is traditionally an interpreted language, having language characteristics such as weak variable typing not well suited to compilation. However, with arrays as its core data structure it provides opportunities for performance gains through parallelism, parallel computing, massively parallel applications, and very-large-scale integration (VLSI), and from the outset APL has been regarded as a high-performance language – for example, it was noted for the speed with which it could perform complicated matrix operations "because it operates on arrays and performs operations like matrix inversion internally".
Nevertheless, APL is rarely purely interpreted and compilation or partial compilation techniques that are, or have been, used include the following:
Idiom recognition
Most APL interpreters support idiom recognition and evaluate common idioms as single operations. For example, by evaluating the idiom BV/⍳⍴A as a single operation (where BV is a Boolean vector and A is an array), the creation of two intermediate arrays is avoided.
Optimised bytecode
Weak typing in APL means that a name may reference an array (of any datatype), a function or an operator. In general, the interpreter cannot know in advance which form it will be and must therefore perform analysis, syntax checking etc. at run-time. However, in certain circumstances, it is possible to deduce in advance what type a name is expected to reference and then generate bytecode which can be executed with reduced run-time overhead. This bytecode can also be optimised using compilation techniques such as constant folding or common subexpression elimination. The interpreter will execute the bytecode when present and when any assumptions which have been made are met. Dyalog APL includes support for optimised bytecode.
Compilation
Compilation of APL has been the subject of research and experiment since the language first became available; the first compiler is considered to be the Burroughs APL-700 which was released around 1971. In order to be able to compile APL, language limitations have to be imposed. APEX is a research APL compiler which was written by Robert Bernecky and is available under the GNU General Public License.
The STSC APL Compiler is a hybrid of a bytecode optimiser and a compiler – it enables compilation of functions to machine code provided that its sub-functions and globals are declared, but the interpreter is still used as a runtime library and to execute functions which do not meet the compilation requirements.
Standards
APL has been standardized by the American National Standards Institute (ANSI) working group X3J10 and International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC), ISO/IEC Joint Technical Committee 1 Subcommittee 22 Working Group 3. The Core APL language is specified in ISO 8485:1989, and the Extended APL language is specified in ISO/IEC 13751:2001.
Video
– a 1974 talk show style interview with the original developers of APL.
– a 1975 live demonstration of APL by Professor Bob Spence, Imperial College London.
– a 2009 tutorial by John Scholes of Dyalog Ltd. which implements Conway's Game of Life in a single line of APL.
– a 2009 introduction to APL by Graeme Robertson.
Online resources
TryAPL.org, an online APL primer
APL2C, a source of links to APL compilers
|
;.NET programming languages;Array programming languages;Articles with example code;Command shells;Dynamic programming languages;Dynamically typed programming languages;Functional languages;Homoiconic programming languages;IBM software;Programming languages;Programming languages created in 1964;Programming languages with an ISO standard
|
https://en.wikipedia.org/wiki/AWK
|
AWK () is a domain-specific language designed for text processing and typically used as a data extraction and reporting tool. Like sed and grep, it is a filter, and it is a standard feature of most Unix-like operating systems.
The AWK language is a data-driven scripting language consisting of a set of actions to be taken against streams of textual data – either run directly on files or used as part of a pipeline – for purposes of extracting or transforming text, such as producing formatted reports. The language extensively uses the string datatype, associative arrays (that is, arrays indexed by key strings), and regular expressions. While AWK has a limited intended application domain and was especially designed to support one-liner programs, the language is Turing-complete, and even the early Bell Labs users of AWK often wrote well-structured large AWK programs.
AWK was created at Bell Labs in the 1970s, and its name is derived from the surnames of its authors: Alfred Aho (author of egrep), Peter Weinberger (who worked on tiny relational databases), and Brian Kernighan. The acronym is pronounced the same as the name of the bird species auk, which is illustrated on the cover of The AWK Programming Language. When written in all lowercase letters, as awk, it refers to the Unix or Plan 9 program that runs scripts written in the AWK programming language.
History
According to Brian Kernighan, one of the goals of AWK was to have a tool that would easily manipulate both numbers and strings. AWK was also inspired by Marc Rochkind's programming language that was used to search for patterns in input data, and was implemented using yacc.
As one of the early tools to appear in Version 7 Unix, AWK added computational features to a Unix pipeline besides the Bourne shell, the only scripting language available in a standard Unix environment. It is one of the mandatory utilities of the Single UNIX Specification, and is required by the Linux Standard Base specification.
In 1983, AWK was one of several UNIX tools available for Charles River Data Systems' UNOS operating system under Bell Laboratories license.
AWK was significantly revised and expanded in 1985–88, resulting in the GNU AWK implementation written by Paul Rubin, Jay Fenlason, and Richard Stallman, released in 1988. GNU AWK may be the most widely deployed version because it is included with GNU-based Linux packages. GNU AWK has been maintained solely by Arnold Robbins since 1994. Brian Kernighan's nawk (New AWK) source was first released in 1993 without publicity, and has been released publicly since the late 1990s; many BSD systems use it to avoid the GPL license.
AWK was preceded by sed (1974). Both were designed for text processing. They share the line-oriented, data-driven paradigm, and are particularly suited to writing one-liner programs, due to the implicit main loop and current line variables. The power and terseness of early AWK programs – notably the powerful regular expression handling and conciseness due to implicit variables, which facilitate one-liners – together with the limitations of AWK at the time, were important inspirations for the Perl language (1987). In the 1990s, Perl became very popular, competing with AWK in the niche of Unix text-processing languages.
Structure of AWK programs
An AWK program is a series of pattern action pairs, written as:
condition { action }
condition { action }
...
where condition is typically an expression and action is a series of commands. The input is split into records, where by default records are separated by newline characters so that the input is split into lines. The program tests each record against each of the conditions in turn, and executes the action for each expression that is true. Either the condition or the action may be omitted. The condition defaults to matching every record. The default action is to print the record. This is the same pattern-action structure as sed.
In addition to a simple AWK expression, such as foo == 1 or /^foo/, the condition can be BEGIN or END causing the action to be executed before or after all records have been read, or pattern1, pattern2 which matches the range of records starting with a record that matches pattern1 up to and including the record that matches pattern2 before again trying to match against pattern1 on subsequent lines.
In addition to normal arithmetic and logical operators, AWK expressions include the tilde operator, ~, which matches a regular expression against a string. As handy syntactic sugar, /regexp/ without using the tilde operator matches against the current record; this syntax derives from sed, which in turn inherited it from the ed editor, where / is used for searching. This syntax of using slashes as delimiters for regular expressions was subsequently adopted by Perl and ECMAScript, and is now common. The tilde operator was also adopted by Perl.
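As a brief sketch of these pattern forms (standard AWK; the input layout and the /err/, /^start/, and /^stop/ patterns are illustrative assumptions):
BEGIN { n = 0 }                         # runs once, before any input is read
$2 ~ /err/ { n++ }                      # tilde operator: regex match against field 2
/^start/,/^stop/ { print }              # range pattern: prints records between the markers
END { print n " matching records" }     # runs once, after all input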
Commands
AWK commands are the statements that are substituted for action in the examples above. AWK commands can include function calls, variable assignments, calculations, or any combination thereof. AWK contains built-in support for many functions; many more are provided by the various flavors of AWK. Also, some flavors support the inclusion of dynamically linked libraries, which can also provide more functions.
The print command
The print command is used to output text. The output text is always terminated with a predefined string called the output record separator (ORS) whose default value is a newline. The simplest form of this command is:
print
This displays the contents of the current record. In AWK, records are broken down into fields, and these can be displayed separately:
print $1
Displays the first field of the current record
print $1, $3
Displays the first and third fields of the current record, separated by a predefined string called the output field separator (OFS) whose default value is a single space character
Although these fields ($X) may bear resemblance to variables (the $ symbol indicates variables in the usual Unix shells and in Perl), they actually refer to the fields of the current record. A special case, $0, refers to the entire record. In fact, the commands "print" and "print $0" are identical in functionality.
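For instance (the input is invented for illustration), given the record alpha beta gamma, $1 is "alpha", $2 is "beta", $NF is "gamma", and $0 is the entire record, so the following two one-line programs produce identical output:
{ print }
{ print $0 }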
The print command can also display the results of calculations and/or function calls:
/regex_pattern/ {
# Actions to perform in the event the record (line) matches the above regex_pattern
print 3+2
print foobar(3)
print foobar(variable)
print sin(3-2)
}
Output may be sent to a file:
/regex_pattern/ {
# Actions to perform in the event the record (line) matches the above regex_pattern
print "expression" > "file name"
}
or through a pipe:
/regex_pattern/ {
# Actions to perform in the event the record (line) matches the above regex_pattern
print "expression" | "command"
}
Built-in variables
AWK's built-in variables include the field variables: $1, $2, $3, and so on ($0 represents the entire record). They hold the text or values in the individual text-fields in a record.
Other variables include:
NR: Number of Records. Keeps a current count of the number of input records read so far from all data files. It starts at zero, but is never automatically reset to zero.
FNR: File Number of Records. Keeps a current count of the number of input records read so far in the current file. This variable is automatically reset to zero each time a new file is started.
NF: Number of Fields. Contains the number of fields in the current input record. The last field in the input record can be designated by $NF, the 2nd-to-last field by $(NF-1), the 3rd-to-last field by $(NF-2), etc.
FILENAME: Contains the name of the current input-file.
FS: Field Separator. Contains the "field separator" used to divide fields in the input record. The default, "white space", allows any sequence of space and tab characters. FS can be reassigned with another character or character sequence to change the field separator.
RS: Record Separator. Stores the current "record separator" character. Since, by default, an input line is the input record, the default record separator character is a "newline".
OFS: Output Field Separator. Stores the "output field separator", which separates the fields when awk prints them. The default is a "space" character.
ORS: Output Record Separator. Stores the "output record separator", which separates the output records when awk prints them. The default is a "newline" character.
OFMT: Output Format. Stores the format for numeric output. The default format is "%.6g".
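Combining several of these variables, a short illustrative sketch (the file name and input are invented) prints each line prefixed by its file name and per-file line number, joining the output fields with ": ":
BEGIN { OFS = ": " }          # output fields will be joined with ": "
{ print FILENAME, FNR, $0 }   # e.g. "data.txt: 3: third line of data.txt"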
Variables and syntax
Variable names can use any of the characters [A-Za-z0-9_], with the exception of language keywords, and cannot begin with a digit. The operators + - * / represent addition, subtraction, multiplication, and division, respectively. To concatenate strings, simply place two variables (or string constants) next to each other. A space in between is optional if string constants are involved, but two variable names placed adjacent to each other require a space in between. Double quotes delimit string constants. Statements need not end with semicolons. Finally, comments can be added to programs by using #, either as the first character on a line or after a command or sequence of commands.
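A minimal sketch of these rules (the names and values are invented):
BEGIN {
    x = 42              # assignment; variables need no declaration
    msg = "value: " x   # concatenation by adjacency; the number is coerced to a string
    print msg           # prints "value: 42"
}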
User-defined functions
In a format similar to C, function definitions consist of the keyword function, the function name, argument names and the function body. Here is an example of a function.
function add_three(number) {
return number + 3
}
This function can be invoked as follows:
(pattern) {
print add_three(36)     # Outputs 39
}
Functions can have variables that are in the local scope. The names of these are added to the end of the argument list, though values for these should be omitted when calling the function. It is conventional to add some whitespace in the argument list before the local variables, to indicate where the parameters end and the local variables begin.
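A small sketch of this convention (the function and variable names are invented): i and total are locals, set off from the real parameter by extra whitespace and omitted at the call site:
function sum_fields(n,    i, total) {   # n is a parameter; i and total are locals
    for (i = 1; i <= n; i++)
        total += $i
    return total
}
{ print sum_fields(NF) }                # prints the sum of the numeric values of the fields on each line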
Examples
Hello, World!
Here is the customary "Hello, World!" program written in AWK:
BEGIN {
print "Hello, world!"
exit
}
Print lines longer than 80 characters
Print all lines longer than 80 characters. The default action is to print the current line.
length($0) > 80
Count words
Count words in the input and print the number of lines, words, and characters (like wc):
{
words += NF
chars += length + 1 # add one to account for the newline character at the end of each record (line)
}
END { print NR, words, chars }
As there is no pattern for the first line of the program, every line of input matches by default, so the increment actions are executed for every line. words += NF is shorthand for words = words + NF.
Sum last word
{ s += $NF }
END { print s + 0 }
s is incremented by the numeric value of $NF, which is the last word on the line as defined by AWK's field separator (by default, white-space). NF is the number of fields in the current line, e.g. 4. Since $4 is the value of the fourth field, $NF is the value of the last field in the line regardless of how many fields this line has, or whether it has more or fewer fields than surrounding lines. $ is actually a unary operator with the highest operator precedence. (If the line has no fields, then NF is 0, $0 is the whole line, which in this case is empty apart from possible white-space, and so has the numeric value 0.)
At the end of the input, the END pattern matches, so s is printed. However, since there may have been no lines of input at all, in which case no value has ever been assigned to s, s will be an empty string by default. Adding zero to a variable is an AWK idiom for coercing it from a string to a numeric value. This results from AWK's arithmetic operators, like addition, implicitly casting their operands to numbers before computation as required. (Similarly, concatenating a variable with an empty string coerces from a number to a string, e.g., s "". Note, there is no operator to concatenate strings, they are just placed adjacently.) On an empty input, the coercion in { print s + 0 } causes the program to print 0, whereas with just the action { print s }, an empty line would be printed.
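The two coercion idioms can be seen side by side in this self-contained sketch (the values are invented):
BEGIN {
    s = ""        # like a variable that was never assigned
    print s + 0   # numeric coercion: prints 0
    n = 5
    t = n ""      # string coercion, by concatenating the empty string
    print t "!"   # prints 5!
}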
Match a range of input lines
NR % 4 == 1, NR % 4 == 3 { printf "%6d %s\n", NR, $0 }
The action statement prints each matched line preceded by its line number. The printf function emulates the standard C printf and works similarly to the print command described above. The pattern to match, however, works as follows: NR is the number of records, typically lines of input, AWK has so far read, i.e. the current line number, starting at 1 for the first line of input. % is the modulo operator. NR % 4 == 1 is true for the 1st, 5th, 9th, etc., lines of input. Likewise, NR % 4 == 3 is true for the 3rd, 7th, 11th, etc., lines of input. The range pattern is false until the first part matches, on line 1, and then remains true up to and including when the second part matches, on line 3. It then stays false until the first part matches again on line 5.
Thus, the program prints lines 1, 2, and 3, skips line 4, then prints lines 5, 6, and 7, and so on. For each matched line, it prints the line number (in a six-character-wide field) followed by the line contents. For example, when executed on this input:
Rome
Florence
Milan
Naples
Turin
Venice
The previous program prints:
     1 Rome
     2 Florence
     3 Milan
     5 Turin
     6 Venice
Printing the initial or the final part of a file
As a special case, when the first part of a range pattern is constantly true, e.g. 1, the range will start at the beginning of the input. Similarly, if the second part is constantly false, e.g. 0, the range will continue until the end of input. For example,
/^--cut here--$/, 0
prints lines of input from the first line matching the regular expression ^--cut here--$, that is, a line containing only the phrase "--cut here--", to the end.
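Conversely, a sketch that prints only the initial part of a file, up to and including the first such marker line (NR == 1 is used as the first part so that the range begins on the first line and cannot restart once it closes):
NR == 1, /^--cut here--$/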
Calculate word frequencies
Word frequency using associative arrays:
BEGIN {
FS="[^a-zA-Z]+"
}
{
for (i=1; i<=NF; i++)
words[tolower($i)]++
}
END {
for (i in words)
print i, words[i]
}
The BEGIN block sets the field separator to any sequence of non-alphabetic characters. Separators can be regular expressions. After that, we get to a bare action, which performs the action on every input line. In this case, for every field on the line, we add one to the number of times that word, first converted to lowercase, appears. Finally, in the END block, we print the words with their frequencies. The line
for (i in words)
creates a loop that goes through the array words, setting i to each subscript of the array. This is different from most languages, where such a loop goes through each value in the array. The loop thus prints out each word followed by its frequency count. tolower was an addition to the One True awk (see below) made after the book was published.
Match pattern from command line
This program can be represented in several ways. The first one uses the Bourne shell to make a shell script that does everything. It is the shortest of these methods:
#!/bin/sh
pattern="$1"
shift
awk '/'"$pattern"'/ { print FILENAME ":" $0 }' "$@"
The $pattern in the awk command is not protected by single quotes, so that the shell expands the variable; it needs to be put in double quotes to properly handle patterns containing spaces. A pattern by itself in the usual way checks whether the whole line ($0) matches. FILENAME contains the current file name. awk has no explicit concatenation operator; two adjacent strings are concatenated. $0 expands to the original unchanged input line.
There are alternate ways of writing this. This shell script accesses the environment directly from within awk:
#!/bin/sh
export pattern="$1"
shift
awk '$0 ~ ENVIRON["pattern"] { print FILENAME ":" $0 }' "$@"
This is a shell script that uses ENVIRON, an array introduced in a newer version of the One True awk after the book was published. The subscript of ENVIRON is the name of an environment variable; its result is the variable's value. This is like the getenv function in various standard libraries and POSIX. The shell script makes an environment variable pattern containing the first argument, then drops that argument and has awk look for the pattern in each file.
~ checks to see if its left operand matches its right operand; !~ is its inverse. A regular expression is just a string and can be stored in variables.
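Because a dynamic regular expression is just a string, it can be assembled at run time; a brief sketch (the pattern and variable name are invented):
BEGIN { re = "^(foo|bar)" }       # build the pattern as an ordinary string
$0 ~ re   { print "match: " $0 }
$0 !~ re  { print "no match: " $0 }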
The next way uses command-line variable assignment, in which an argument to awk can be seen as an assignment to a variable:
#!/bin/sh
pattern="$1"
shift
awk '$0 ~ pattern { print FILENAME ":" $0 }' pattern="$pattern" "$@"
Or you can use the -v var=value command-line option (e.g. awk -v pattern="$pattern" ...).
Finally, this can be written in pure awk, without help from a shell and without the need to know too much about the implementation of the awk script (as the command-line variable assignment version requires), but it is a bit lengthy:
BEGIN {
pattern = ARGV[1]
for (i = 1; i < ARGC; i++) # remove first argument
ARGV[i] = ARGV[i + 1]
ARGC--
if (ARGC == 1) { # the pattern was the only thing, so force read from standard input (used by book)
ARGC = 2
ARGV[1] = "-"
}
}
$0 ~ pattern { print FILENAME ":" $0 }
The BEGIN block is necessary not only to extract the first argument, but also to prevent it from being interpreted as a filename after the BEGIN block ends. ARGC, the number of arguments, is always guaranteed to be ≥ 1, as ARGV[0] is the name of the command that executed the script, most often the string "awk". ARGV[ARGC] is the empty string, "". # initiates a comment that extends to the end of the line.
Note the if block. awk checks whether it should read from standard input only once, before it runs the program. This means that
awk 'prog'
works only because the absence of filenames is checked before prog is run. If ARGC is explicitly set to 1 so that there are no arguments, awk will simply quit, because it concludes there are no more input files. Therefore, reading from standard input must be requested explicitly with the special filename -.
Self-contained AWK scripts
On Unix-like operating systems self-contained AWK scripts can be constructed using the shebang syntax.
For example, a script that sends the content of a given file to standard output may be built by creating a file named print.awk with the following content:
#!/usr/bin/awk -f
{ print $0 }
It can be invoked with: ./print.awk <filename>
The -f tells awk that the argument that follows is the file to read the AWK program from, which is the same flag that is used in sed. Since they are often used for one-liners, both these programs default to executing a program given as a command-line argument, rather than a separate file.
Versions and implementations
AWK was originally written in 1977 and distributed with Version 7 Unix.
In 1985, its authors started expanding the language, most significantly by adding user-defined functions. The language is described in the book The AWK Programming Language, published in 1988, and its implementation was made available in releases of UNIX System V. To avoid confusion with the incompatible older version, this version was sometimes called "new awk" or nawk. This implementation was released under a free software license in 1996 and is still maintained by Brian Kernighan.
Old versions of Unix, such as UNIX/32V, included awkcc, which converted AWK to C. Kernighan wrote a program to turn awk into C++; its state is not known.
BWK awk, also known as nawk, refers to the version by Brian Kernighan. It has been dubbed the "One True AWK" because of the use of the term in association with the book that originally described the language, and because Kernighan was one of the original authors of AWK. FreeBSD refers to this version as one-true-awk. This version also has features not in the book, such as tolower and ENVIRON, which are explained above; see the FIXES file in the source archive for details. This version is used by, for example, Android, FreeBSD, NetBSD, OpenBSD, macOS, and illumos. Brian Kernighan and Arnold Robbins are the main contributors to a source repository for nawk.
gawk (GNU awk) is another free-software implementation and the only implementation that makes serious progress in implementing internationalization and localization and TCP/IP networking. It was written before the original implementation became freely available. It includes its own debugger, and its profiler enables the user to make measured performance enhancements to a script. It also enables the user to extend functionality with shared libraries. Some Linux distributions include gawk as their default AWK implementation. As of version 5.2 (September 2022), gawk includes a persistent memory feature that can remember script-defined variables and functions from one invocation of a script to the next and pass data between unrelated scripts, as described in the Persistent-Memory gawk User Manual.
gawk-csv. The CSV extension of gawk provides facilities for inputting and outputting CSV formatted data.
mawk is a very fast AWK implementation by Mike Brennan based on a bytecode interpreter.
libmawk is a fork of mawk, allowing applications to embed multiple parallel instances of awk interpreters.
awka (whose front end is written atop the mawk program) is another translator of AWK scripts into C code. When compiled, statically including the author's libawka.a, the resulting executables are considerably sped up and, according to the author's tests, compare very well with other versions of AWK, Perl, or Tcl. Small scripts will turn into programs of 160–170 kB.
tawk (Thompson AWK) is an AWK compiler for Solaris, DOS, OS/2, and Windows, previously sold by Thompson Automation Software (which has ceased its activities).
Jawk is a project to implement AWK in Java, hosted on SourceForge. Extensions to the language are added to provide access to Java features within AWK scripts (i.e., Java threads, sockets, collections, etc.).
xgawk is a fork of gawk that extends gawk with dynamically loadable libraries. The XMLgawk extension was integrated into the official GNU Awk release 4.1.0.
QSEAWK is an embedded AWK interpreter implementation included in the QSE library that provides embedding application programming interface (API) for C and C++.
libfawk is a very small, function-only, reentrant, embeddable interpreter written in C.
BusyBox includes an AWK implementation written by Dmitry Zakharov. This is a very small implementation suitable for embedded systems.
CLAWK by Michael Parker provides an AWK implementation in Common Lisp, based upon the regular expression library of the same author.
goawk is an AWK implementation in Go with a few convenience extensions, by Ben Hoyt, hosted on GitHub.
The gawk manual has a list of more AWK implementations.
|
1977 software;Cross-platform software;Domain-specific programming languages;Free and open source interpreters;Pattern matching programming languages;Plan 9 commands;Programming languages created in 1977;Scripting languages;Standard Unix programs;Text-oriented programming languages;Unix SUS2008 utilities;Unix text processing utilities
|
https://en.wikipedia.org/wiki/Apollo%20program
|
The Apollo program, also known as Project Apollo, was the United States human spaceflight program led by NASA, which successfully landed the first humans on the Moon in 1969. Apollo followed Project Mercury, which put the first Americans in space. It was conceived in 1960, during President Dwight D. Eisenhower's administration, as a three-person spacecraft. Apollo was later dedicated to President John F. Kennedy's national goal for the 1960s of "landing a man on the Moon and returning him safely to the Earth", announced in an address to Congress on May 25, 1961. It was the third American human spaceflight program to fly, preceded by Project Gemini, conceived in 1961 to extend spaceflight capability in support of Apollo.
Kennedy's goal was accomplished on the Apollo 11 mission when astronauts Neil Armstrong and Buzz Aldrin landed their Apollo Lunar Module (LM) on July 20, 1969, and walked on the lunar surface, while Michael Collins remained in lunar orbit in the command and service module (CSM), and all three landed safely on Earth in the Pacific Ocean on July 24. Five subsequent Apollo missions also landed astronauts on the Moon, the last, Apollo 17, in December 1972. In these six spaceflights, twelve people walked on the Moon.
Apollo ran from 1961 to 1972, with the first crewed flight in 1968. It encountered a major setback in 1967 when an Apollo 1 cabin fire killed the entire crew during a prelaunch test. After the first successful landing, sufficient flight hardware remained for nine follow-on landings with a plan for extended lunar geological and astrophysical exploration. Budget cuts forced the cancellation of three of these. Five of the remaining six missions achieved successful landings, but the Apollo 13 landing had to be aborted after an oxygen tank exploded en route to the Moon, crippling the CSM. The crew barely managed a safe return to Earth by using the lunar module as a "lifeboat" on the return journey. Apollo used the Saturn family of rockets as launch vehicles, which were also used for an Apollo Applications Program, which consisted of Skylab, a space station that supported three crewed missions in 1973–1974, and the Apollo–Soyuz Test Project, a joint United States-Soviet Union low Earth orbit mission in 1975.
Apollo set several major human spaceflight milestones. It stands alone in sending crewed missions beyond low Earth orbit. Apollo 8 was the first crewed spacecraft to orbit another celestial body, and Apollo 11 was the first crewed spacecraft to land humans on one.
Overall, the Apollo program returned 842 pounds (382 kg) of lunar rocks and soil to Earth, greatly contributing to the understanding of the Moon's composition and geological history. The program laid the foundation for NASA's subsequent human spaceflight capability and funded construction of its Johnson Space Center and Kennedy Space Center. Apollo also spurred advances in many areas of technology incidental to rocketry and human spaceflight, including avionics, telecommunications, and computers.
Name
The program was named after Apollo, the Greek god of light, music, and the Sun, by NASA manager Abe Silverstein, who later said, "I was naming the spacecraft like I'd name my baby." Silverstein chose the name at home one evening, early in 1960, because he felt "Apollo riding his chariot across the Sun was appropriate to the grand scale of the proposed program".
The context of this was that, at its beginning, the program focused mainly on developing an advanced crewed spacecraft, the Apollo command and service module, to succeed the Mercury program. A lunar landing became the focus of the program only in 1961; thereafter, Project Gemini instead followed the Mercury program to test and study advanced crewed spaceflight technology.
Background
Origin and spacecraft feasibility studies
The Apollo program was conceived during the Eisenhower administration in early 1960, as a follow-up to Project Mercury. While the Mercury capsule could support only one astronaut on a limited Earth orbital mission, Apollo would carry three. Possible missions included ferrying crews to a space station, circumlunar flights, and eventual crewed lunar landings.
In July 1960, NASA Deputy Administrator Hugh L. Dryden announced the Apollo program to industry representatives at a series of Space Task Group conferences. Preliminary specifications were laid out for a spacecraft with a mission module cabin separate from the command module (piloting and reentry cabin), and a propulsion and equipment module. On August 30, a feasibility study competition was announced, and on October 25, three study contracts were awarded to General Dynamics/Convair, General Electric, and the Glenn L. Martin Company. Meanwhile, NASA performed its own in-house spacecraft design studies led by Maxime Faget, to serve as a gauge to judge and monitor the three industry designs.
Political pressure builds
In November 1960, John F. Kennedy was elected president after a campaign that promised American superiority over the Soviet Union in the fields of space exploration and missile defense. Up to the election of 1960, Kennedy had been speaking out against the "missile gap" that he and many other senators said had developed between the Soviet Union and the United States due to the inaction of President Eisenhower. Beyond military power, Kennedy used aerospace technology as a symbol of national prestige, pledging to make the US not "first but, first and, first if, but first period". Despite Kennedy's rhetoric, he did not immediately come to a decision on the status of the Apollo program once he became president. He knew little about the technical details of the space program, and was put off by the massive financial commitment required by a crewed Moon landing. When Kennedy's newly appointed NASA Administrator James E. Webb requested a 30 percent budget increase for his agency, Kennedy supported an acceleration of NASA's large booster program but deferred a decision on the broader issue.
On April 12, 1961, Soviet cosmonaut Yuri Gagarin became the first person to fly in space, reinforcing American fears about being left behind in a technological competition with the Soviet Union. At a meeting of the US House Committee on Science and Astronautics one day after Gagarin's flight, many congressmen pledged their support for a crash program aimed at ensuring that America would catch up. Kennedy was circumspect in his response to the news, refusing to make a commitment on America's response to the Soviets.
On April 20, Kennedy sent a memo to Vice President Lyndon B. Johnson, asking Johnson to look into the status of America's space program, and into programs that could offer NASA the opportunity to catch up. Johnson responded approximately one week later, concluding that "we are neither making maximum effort nor achieving results necessary if this country is to reach a position of leadership." His memo concluded that a crewed Moon landing was far enough in the future that it was likely the United States would achieve it first.
On May 25, 1961, twenty days after the first American crewed spaceflight Freedom 7, Kennedy proposed the crewed Moon landing in a Special Message to the Congress on Urgent National Needs.
NASA expansion
At the time of Kennedy's proposal, only one American had flown in space—less than a month earlier—and NASA had not yet sent an astronaut into orbit. Even some NASA employees doubted whether Kennedy's ambitious goal could be met. By 1963, Kennedy even came close to agreeing to a joint US-USSR Moon mission, to eliminate duplication of effort.
With the clear goal of a crewed landing replacing the more nebulous goals of space stations and circumlunar flights, NASA decided that, in order to make progress quickly, it would discard the feasibility study designs of Convair, GE, and Martin, and proceed with Faget's command and service module design. The mission module was determined to be useful only as an extra room, and therefore unnecessary. They used Faget's design as the specification for another competition for spacecraft procurement bids in October 1961. On November 28, 1961, it was announced that North American Aviation had won the contract, although its bid was not rated as good as the Martin proposal. Webb, Dryden and Robert Seamans chose it in preference due to North American's longer association with NASA and its predecessor.
Landing humans on the Moon by the end of 1969 required the most sudden burst of technological creativity, and the largest commitment of resources ($25 billion) ever made by any nation in peacetime. At its peak, the Apollo program employed 400,000 people and required the support of over 20,000 industrial firms and universities.
On July 1, 1960, NASA established the Marshall Space Flight Center (MSFC) in Huntsville, Alabama. MSFC designed the heavy lift-class Saturn launch vehicles, which would be required for Apollo.
Manned Spacecraft Center
It became clear that managing the Apollo program would exceed the capabilities of Robert R. Gilruth's Space Task Group, which had been directing the nation's crewed space program from NASA's Langley Research Center. So Gilruth was given authority to grow his organization into a new NASA center, the Manned Spacecraft Center (MSC). A site was chosen in Houston, Texas, on land donated by Rice University, and Administrator Webb announced the conversion on September 19, 1961. It was also clear NASA would soon outgrow its practice of controlling missions from its Cape Canaveral Air Force Station launch facilities in Florida, so a new Mission Control Center would be included in the MSC.
In September 1962, by which time two Project Mercury astronauts had orbited the Earth, Gilruth had moved his organization to rented space in Houston, and construction of the MSC facility was under way, Kennedy visited Rice to reiterate his challenge in a famous speech.
The MSC was completed in September 1963. It was renamed by the United States Congress in honor of Lyndon B. Johnson soon after his death in 1973.
Launch Operations Center
It also became clear that Apollo would outgrow the Canaveral launch facilities in Florida. The two newest launch complexes were already being built for the Saturn I and IB rockets at the northernmost end: LC-34 and LC-37. But an even bigger facility would be needed for the mammoth rocket required for the crewed lunar mission, so land acquisition was started in July 1961 for a Launch Operations Center (LOC) immediately north of Canaveral at Merritt Island. The design, development and construction of the center was conducted by Kurt H. Debus, a member of Wernher von Braun's original V-2 rocket engineering team. Debus was named the LOC's first Director. Construction began in November 1962. Following Kennedy's death, President Johnson issued an executive order on November 29, 1963, to rename the LOC and Cape Canaveral in honor of Kennedy.
The LOC included Launch Complex 39, a Launch Control Center, and a Vertical Assembly Building (VAB), in which the space vehicle (launch vehicle and spacecraft) would be assembled on a mobile launcher platform and then moved by a crawler-transporter to one of several launch pads. Although at least three pads were planned, only two, designated A and B, were completed in October 1965. The LOC also included an Operations and Checkout Building (OCB), where Gemini and Apollo spacecraft were initially received prior to being mated to their launch vehicles. The Apollo spacecraft could be tested in two vacuum chambers capable of simulating atmospheric pressure at high altitudes, nearly a vacuum.
Organization
Administrator Webb realized that in order to keep Apollo costs under control, he had to develop greater project management skills in his organization, so he recruited George E. Mueller for a high management job. Mueller accepted, on the condition that he have a say in NASA reorganization necessary to effectively administer Apollo. Webb then worked with Associate Administrator (later Deputy Administrator) Seamans to reorganize the Office of Manned Space Flight (OMSF). On July 23, 1963, Webb announced Mueller's appointment as Deputy Associate Administrator for Manned Space Flight, to replace then Associate Administrator D. Brainerd Holmes on his retirement effective September 1. Under Webb's reorganization, the directors of the Manned Spacecraft Center (Gilruth), Marshall Space Flight Center (von Braun), and the Launch Operations Center (Debus) reported to Mueller.
Based on his industry experience on Air Force missile projects, Mueller realized some skilled managers could be found among high-ranking officers in the U.S. Air Force, so he got Webb's permission to recruit General Samuel C. Phillips, who gained a reputation for his effective management of the Minuteman program, as OMSF program controller. Phillips's superior officer Bernard A. Schriever agreed to loan Phillips to NASA, along with a staff of officers under him, on the condition that Phillips be made Apollo Program Director. Mueller agreed, and Phillips managed Apollo from January 1964, until it achieved the first human landing in July 1969, after which he returned to Air Force duty.
Charles Fishman, in One Giant Leap, estimated the number of people and organizations involved in the Apollo program as "410,000 men and women at some 20,000 different companies contributed to the effort".
Choosing a mission mode
Once Kennedy had defined a goal, the Apollo mission planners were faced with the challenge of designing a spacecraft that could meet it while minimizing risk to human life, limiting cost, and not exceeding limits in possible technology and astronaut skill. Four possible mission modes were considered:
Direct Ascent: The spacecraft would be launched as a unit and travel directly to the lunar surface, without first going into lunar orbit. An Earth return ship would land all three astronauts atop a descent propulsion stage, which would be left on the Moon. This design would have required development of the extremely powerful Saturn C-8 or Nova launch vehicle to carry a payload to the Moon.
Earth Orbit Rendezvous (EOR): Multiple rocket launches (up to 15 in some plans) would carry parts of the Direct Ascent spacecraft and propulsion units for translunar injection (TLI). These would be assembled into a single spacecraft in Earth orbit.
Lunar Surface Rendezvous: Two spacecraft would be launched in succession. The first, an automated vehicle carrying propellant for the return to Earth, would land on the Moon, to be followed some time later by the crewed vehicle. Propellant would have to be transferred from the automated vehicle to the crewed vehicle.
Lunar Orbit Rendezvous (LOR): This turned out to be the winning configuration, which achieved the goal with Apollo 11 on July 20, 1969: a single Saturn V launched a spacecraft composed of an Apollo command and service module, which remained in orbit around the Moon, and a two-stage Apollo Lunar Module, which was flown by two astronauts to the surface, flown back to dock with the command module, and then discarded. Landing the smaller spacecraft on the Moon, and returning an even smaller part of it to lunar orbit, minimized the total mass to be launched from Earth; but this was the last method initially considered, because of the perceived risk of rendezvous and docking.
In early 1961, direct ascent was generally the mission mode in favor at NASA. Many engineers feared that rendezvous and docking, maneuvers that had not been attempted in Earth orbit, would be nearly impossible in lunar orbit. LOR advocates including John Houbolt at Langley Research Center emphasized the important weight reductions that were offered by the LOR approach. Throughout 1960 and 1961, Houbolt campaigned for the recognition of LOR as a viable and practical option. Bypassing the NASA hierarchy, he sent a series of memos and reports on the issue to Associate Administrator Robert Seamans; while acknowledging that he spoke "somewhat as a voice in the wilderness", Houbolt pleaded that LOR should not be discounted in studies of the question.
Seamans's establishment of an ad hoc committee headed by his special technical assistant Nicholas E. Golovin in July 1961, to recommend a launch vehicle to be used in the Apollo program, represented a turning point in NASA's mission mode decision. This committee recognized that the chosen mode was an important part of the launch vehicle choice, and recommended in favor of a hybrid EOR-LOR mode. Its consideration of LOR—as well as Houbolt's ceaseless work—played an important role in publicizing the workability of the approach. In late 1961 and early 1962, members of the Manned Spacecraft Center began to come around to support LOR, including the newly hired deputy director of the Office of Manned Space Flight, Joseph Shea, who became a champion of LOR. The engineers at Marshall Space Flight Center (MSFC), who were heavily invested in direct ascent, took longer to become convinced of its merits, but their conversion was announced by Wernher von Braun at a briefing on June 7, 1962.
But even after NASA reached internal agreement, it was far from smooth sailing. Kennedy's science advisor Jerome Wiesner, who had expressed his opposition to human spaceflight to Kennedy before the President took office and had opposed the decision to land people on the Moon, hired Golovin, who had left NASA, to chair his own "Space Vehicle Panel" – ostensibly to monitor, but actually to second-guess, NASA's decisions on the Saturn V launch vehicle and LOR. The panel forced Shea, Seamans, and even Webb to defend themselves, delaying the formal announcement to the press until July 11, 1962, and forcing Webb to hedge the decision as "tentative".
Wiesner kept up the pressure, even making the disagreement public during a two-day September visit by the President to Marshall Space Flight Center. Wiesner blurted out "No, that's no good" in front of the press, during a presentation by von Braun. Webb jumped in and defended von Braun, until Kennedy ended the squabble by stating that the matter was "still subject to final review". Webb held firm and issued a request for proposal to candidate Lunar Excursion Module (LEM) contractors. Wiesner finally relented, unwilling to settle the dispute once and for all in Kennedy's office, because of the President's involvement with the October Cuban Missile Crisis, and fear of Kennedy's support for Webb. NASA announced the selection of Grumman as the LEM contractor in November 1962.
The LOR method had the advantage of allowing the lander spacecraft to be used as a "lifeboat" in the event of a failure of the command ship. Contemporary documents show this possibility was discussed both before and after the mode was chosen. A 1964 MSC study concluded, "The LM [as lifeboat]... was finally dropped, because no single reasonable CSM failure could be identified that would prohibit use of the SPS." Ironically, just such a failure happened on Apollo 13, when an oxygen tank explosion left the CSM without electrical power. The lunar module provided propulsion, electrical power, and life support to get the crew home safely.
Spacecraft
Faget's preliminary Apollo design employed a cone-shaped command module, supported by one of several service modules providing propulsion and electrical power, sized appropriately for the space station, cislunar, and lunar landing missions. Once Kennedy's Moon landing goal became official, detailed design began of a command and service module (CSM) in which the crew would spend the entire direct-ascent mission and lift off from the lunar surface for the return trip, after being soft-landed by a larger landing propulsion module. The final choice of lunar orbit rendezvous changed the CSM's role to the translunar ferry used to transport the crew, along with a new spacecraft, the Lunar Excursion Module (LEM, later shortened to LM (Lunar Module) but still pronounced "lem"), which would take two individuals to the lunar surface and return them to the CSM.
Command and service module
The command module (CM) was the conical crew cabin, designed to carry three astronauts from launch to lunar orbit and back to an Earth ocean landing. It was the only component of the Apollo spacecraft to survive without major configuration changes as the program evolved from the early Apollo study designs. Its exterior was covered with an ablative heat shield, and had its own reaction control system (RCS) engines to control its attitude and steer its atmospheric entry path. Parachutes were carried to slow its descent to splashdown. The module was tall, in diameter, and weighed approximately .
A cylindrical service module (SM) supported the command module, with a service propulsion engine and an RCS with propellants, and a fuel cell power generation system with liquid hydrogen and liquid oxygen reactants. A high-gain S-band antenna was used for long-distance communications on the lunar flights. On the extended lunar missions, an orbital scientific instrument package was carried. The service module was discarded just before reentry. The module was long and in diameter. The initial lunar flight version weighed approximately fully fueled, while a later version designed to carry a lunar orbit scientific instrument package weighed just over .
North American Aviation won the contract to build the CSM, and also the second stage of the Saturn V launch vehicle for NASA. Because the CSM design was started early, before the selection of lunar orbit rendezvous, the service propulsion engine was sized to lift the CSM off the Moon, and thus was oversized to about twice the thrust required for translunar flight. Also, there was no provision for docking with the lunar module. A 1964 program definition study concluded that the initial design should be continued as Block I, which would be used for early testing, while Block II, the actual lunar spacecraft, would incorporate the docking equipment and take advantage of the lessons learned in Block I development.
Apollo Lunar Module
The Apollo Lunar Module (LM) was designed to descend from lunar orbit to land two astronauts on the Moon and take them back to orbit to rendezvous with the command module. Not designed to fly through the Earth's atmosphere or return to Earth, its fuselage was designed totally without aerodynamic considerations and was of an extremely lightweight construction. It consisted of separate descent and ascent stages, each with its own engine. The descent stage contained storage for the descent propellant, surface stay consumables, and surface exploration equipment. The ascent stage contained the crew cabin, ascent propellant, and a reaction control system. The initial LM model weighed approximately , and allowed surface stays up to around 34 hours. An extended lunar module (ELM) weighed over , and allowed surface stays of more than three days. The contract for design and construction of the lunar module was awarded to Grumman Aircraft Engineering Corporation, and the project was overseen by Thomas J. Kelly.
Launch vehicles
Before the Apollo program began, Wernher von Braun and his team of rocket engineers had started work on plans for very large launch vehicles, the Saturn series, and the even larger Nova series. In the midst of these plans, von Braun was transferred from the Army to NASA and was made Director of the Marshall Space Flight Center. The initial direct ascent plan to send the three-person Apollo command and service module directly to the lunar surface, on top of a large descent rocket stage, would require a Nova-class launcher, with a lunar payload capability of over . The June 11, 1962, decision to use lunar orbit rendezvous enabled the Saturn V to replace the Nova, and the MSFC proceeded to develop the Saturn rocket family for Apollo.
Since Apollo, like Mercury, used more than one launch vehicle for space missions, NASA used spacecraft-launch vehicle combination series numbers: AS-10x for Saturn I, AS-20x for Saturn IB, and AS-50x for Saturn V (compare Mercury-Redstone 3, Mercury-Atlas 6) to designate and plan all missions, rather than numbering them sequentially as in Project Gemini. This was changed by the time human flights began.
Little Joe II
Since Apollo, like Mercury, would require a launch escape system (LES) in case of a launch failure, a relatively small rocket was required for qualification flight testing of this system. A rocket bigger than the Little Joe used by Mercury would be required, so the Little Joe II was built by General Dynamics/Convair. After an August 1963 qualification test flight, four LES test flights (A-001 through 004) were made at the White Sands Missile Range between May 1964 and January 1966.
Saturn I
Saturn I, the first US heavy lift launch vehicle, was initially planned to launch partially equipped CSMs in low Earth orbit tests. The S-I first stage burned RP-1 with liquid oxygen (LOX) oxidizer in eight clustered Rocketdyne H-1 engines, to produce of thrust. The S-IV second stage used six liquid hydrogen-fueled Pratt & Whitney RL-10 engines with of thrust. The S-V third stage flew inactively on Saturn I four times.
The first four Saturn I test flights were launched from LC-34, with only the first stage live, carrying dummy upper stages filled with water. The first flight with a live S-IV was launched from LC-37. This was followed by five launches of boilerplate CSMs (designated AS-101 through AS-105) into orbit in 1964 and 1965. The last three of these further supported the Apollo program by also carrying Pegasus satellites, which verified the safety of the translunar environment by measuring the frequency and severity of micrometeorite impacts.
In September 1962, NASA planned to launch four crewed CSM flights on the Saturn I from late 1965 through 1966, concurrent with Project Gemini. The payload capacity would have severely limited the systems which could be included, so the decision was made in October 1963 to use the uprated Saturn IB for all crewed Earth orbital flights.
Saturn IB
The Saturn IB was an upgraded version of the Saturn I. The S-IB first stage increased the thrust to by uprating the H-1 engine. The second stage replaced the S-IV with the S-IVB-200, powered by a single J-2 engine burning liquid hydrogen fuel with LOX, to produce of thrust. A restartable version of the S-IVB was used as the third stage of the Saturn V. The Saturn IB could send over into low Earth orbit, sufficient for a partially fueled CSM or the LM. Saturn IB launch vehicles and flights were designated with an AS-200 series number, "AS" indicating "Apollo Saturn" and the "2" indicating the second member of the Saturn rocket family.
Saturn V
Saturn V launch vehicles and flights were designated with an AS-500 series number, "AS" indicating "Apollo Saturn" and the "5" indicating Saturn V. The three-stage Saturn V was designed to send a fully fueled CSM and LM to the Moon. It was in diameter and stood tall with its lunar payload. Its capability grew to for the later advanced lunar landings. The S-IC first stage burned RP-1/LOX for a rated thrust of , which was upgraded to . The second and third stages burned liquid hydrogen; the third stage was a modified version of the S-IVB, with thrust increased to and capability to restart the engine for translunar injection after reaching a parking orbit.
Astronauts
NASA's director of flight crew operations during the Apollo program was Donald K. "Deke" Slayton, one of the original Mercury Seven astronauts who was medically grounded in September 1962 due to a heart murmur. Slayton was responsible for making all Gemini and Apollo crew assignments.
Thirty-two astronauts were assigned to fly missions in the Apollo program. Twenty-four of these left Earth's orbit and flew around the Moon between December 1968 and December 1972 (three of them twice). Half of the 24 walked on the Moon's surface, though none of them returned to it after landing once. One of the moonwalkers was a trained geologist. Of the 32, Gus Grissom, Ed White, and Roger Chaffee were killed during a ground test in preparation for the Apollo 1 mission.
The Apollo astronauts were chosen from the Project Mercury and Gemini veterans, plus two later astronaut groups. All missions were commanded by Gemini or Mercury veterans. Crews on all development flights (except the Earth orbit CSM development flights) through the first two landings on Apollo 11 and Apollo 12 included at least two (sometimes three) Gemini veterans. Harrison Schmitt, a geologist, was the first NASA scientist astronaut to fly in space, and landed on the Moon on the last mission, Apollo 17. Schmitt participated in the lunar geology training of all of the Apollo landing crews.
NASA awarded all 32 of these astronauts its highest honor, the Distinguished Service Medal, given for "distinguished service, ability, or courage", and personal "contribution representing substantial progress to the NASA mission". The medals were awarded posthumously to Grissom, White, and Chaffee in 1969, then to the crews of all missions from Apollo 8 onward. The crew that flew the first Earth orbital test mission Apollo 7, Walter M. Schirra, Donn Eisele, and Walter Cunningham, were awarded the lesser NASA Exceptional Service Medal, because of discipline problems with the flight director's orders during their flight. In October 2008, the NASA Administrator decided to award them the Distinguished Service Medals. For Schirra and Eisele, this was posthumously.
Lunar mission profile
The first lunar landing mission was planned to proceed:
Profile variations
The first three lunar missions (Apollo 8, Apollo 10, and Apollo 11) used a free return trajectory, keeping a flight path coplanar with the lunar orbit, which would allow a return to Earth in case the SM engine failed to make lunar orbit insertion. Landing site lighting conditions on later missions dictated a lunar orbital plane change, which required a course change maneuver soon after TLI, and eliminated the free-return option.
After Apollo 12 placed the second of several seismometers on the Moon, the jettisoned LM ascent stages on Apollo 12 and later missions were deliberately crashed on the Moon at known locations to induce vibrations in the Moon's structure. The only exceptions were the Apollo 13 LM, which burned up in the Earth's atmosphere, and Apollo 16, where a loss of attitude control after jettison prevented a targeted impact.
As another active seismic experiment, the S-IVBs on Apollo 13 and subsequent missions were deliberately crashed on the Moon instead of being sent to solar orbit.
Starting with Apollo 13, descent orbit insertion was to be performed using the service module engine instead of the LM engine, in order to allow a greater fuel reserve for landing. This was actually done for the first time on Apollo 14, since the Apollo 13 mission was aborted before landing.
Development history
Uncrewed flight tests
Two Block I CSMs were launched from LC-34 on suborbital flights in 1966 with the Saturn IB. The first, AS-201 launched on February 26, reached an altitude of and splashed down downrange in the Atlantic Ocean. The second, AS-202 on August 25, reached altitude and was recovered downrange in the Pacific Ocean. These flights validated the service module engine and the command module heat shield.
A third Saturn IB test, AS-203 launched from pad 37, went into orbit to support design of the S-IVB upper stage restart capability needed for the Saturn V. It carried a nose cone instead of the Apollo spacecraft, and its payload was the unburned liquid hydrogen fuel, the behavior of which engineers measured with temperature and pressure sensors, and a TV camera. This flight occurred on July 5, before AS-202, which was delayed because of problems getting the Apollo spacecraft ready for flight.
Preparation for crewed flight
Two crewed orbital Block I CSM missions were planned: AS-204 and AS-205. The Block I crew positions were titled Command Pilot, Senior Pilot, and Pilot. The Senior Pilot would assume navigation duties, while the Pilot would function as a systems engineer. The astronauts would wear a modified version of the Gemini spacesuit.
After an uncrewed LM test flight AS-206, a crew would fly the first Block II CSM and LM in a dual mission known as AS-207/208, or AS-278 (each spacecraft would be launched on a separate Saturn IB). The Block II crew positions were titled Commander, Command Module Pilot, and Lunar Module Pilot. The astronauts would begin wearing a new Apollo A6L spacesuit, designed to accommodate lunar extravehicular activity (EVA). The traditional visor helmet was replaced with a clear "fishbowl" type for greater visibility, and the lunar surface EVA suit would include a water-cooled undergarment.
Deke Slayton, the grounded Mercury astronaut who became director of flight crew operations for the Gemini and Apollo programs, selected the first Apollo crew in January 1966, with Grissom as Command Pilot, White as Senior Pilot, and rookie Donn F. Eisele as Pilot. But Eisele dislocated his shoulder twice aboard the KC-135 weightlessness training aircraft and had to undergo surgery on January 27. Slayton replaced him with Chaffee. NASA announced the final crew selection for AS-204 on March 21, 1966, with the backup crew consisting of Gemini veterans James McDivitt and David Scott, with rookie Russell L. "Rusty" Schweickart. Mercury/Gemini veteran Wally Schirra, Eisele, and rookie Walter Cunningham were announced on September 29 as the prime crew for AS-205.
In December 1966, the AS-205 mission was canceled, since the validation of the CSM would be accomplished on the 14-day first flight, and AS-205 would have been devoted to space experiments and would have contributed no new engineering knowledge about the spacecraft. Its Saturn IB was allocated to the dual mission, now redesignated AS-205/208 or AS-258, planned for August 1967. McDivitt, Scott, and Schweickart were promoted to the prime AS-258 crew, and Schirra, Eisele, and Cunningham were reassigned as the Apollo 1 backup crew.
Program delays
The spacecraft for the AS-202 and AS-204 missions were delivered by North American Aviation to the Kennedy Space Center with long lists of equipment problems which had to be corrected before flight; these delays caused the launch of AS-202 to slip behind AS-203, and eliminated hopes the first crewed mission might be ready to launch as soon as November 1966, concurrently with the last Gemini mission. Eventually, the planned AS-204 flight date was pushed to February 21, 1967.
North American Aviation was prime contractor not only for the Apollo CSM, but for the Saturn V S-II second stage as well, and delays in this stage pushed the first uncrewed Saturn V flight AS-501 from late 1966 to November 1967. (The initial assembly of AS-501 had to use a dummy spacer spool in place of the stage.)
The problems with North American were severe enough in late 1965 to cause Manned Space Flight Administrator George Mueller to appoint program director Samuel Phillips to head a "tiger team" to investigate North American's problems and identify corrections. Phillips documented his findings in a December 19 letter to NAA president Lee Atwood, with a strongly worded letter by Mueller, and also gave a presentation of the results to Mueller and Deputy Administrator Robert Seamans. Meanwhile, Grumman was also encountering problems with the Lunar Module, eliminating hopes it would be ready for crewed flight in 1967, not long after the first crewed CSM flights.
Apollo 1 fire
Grissom, White, and Chaffee decided to name their flight Apollo 1 as a motivational focus on the first crewed flight. They trained and conducted tests of their spacecraft at North American, and in the altitude chamber at the Kennedy Space Center. A "plugs-out" test was planned for January, which would simulate a launch countdown on LC-34 with the spacecraft transferring from pad-supplied to internal power. If successful, this would be followed by a more rigorous countdown simulation test closer to the February 21 launch, with both spacecraft and launch vehicle fueled.
The plugs-out test began on the morning of January 27, 1967, and immediately was plagued with problems. First, the crew noticed a strange odor in their spacesuits which delayed the sealing of the hatch. Then, communications problems frustrated the astronauts and forced a hold in the simulated countdown. During this hold, an electrical fire began in the cabin and spread quickly in the high pressure, 100% oxygen atmosphere. Pressure rose high enough from the fire that the cabin inner wall burst, allowing the fire to erupt onto the pad area and frustrating attempts to rescue the crew. The astronauts were asphyxiated before the hatch could be opened.
NASA immediately convened an accident review board, overseen by both houses of Congress. While the determination of responsibility for the accident was complex, the review board concluded that "deficiencies existed in command module design, workmanship and quality control". At the insistence of NASA Administrator Webb, North American removed Harrison Storms as command module program manager. Webb also reassigned Apollo Spacecraft Program Office (ASPO) Manager Joseph Francis Shea, replacing him with George Low.
To remedy the causes of the fire, changes were made in the Block II spacecraft and operational procedures, the most important of which were use of a nitrogen/oxygen mixture instead of pure oxygen before and during launch, and removal of flammable cabin and space suit materials. The Block II design already called for replacement of the Block I plug-type hatch cover with a quick-release, outward-opening door. NASA discontinued the crewed Block I program, using the Block I spacecraft only for uncrewed Saturn V flights. Crew members would also exclusively wear modified, fire-resistant A7L Block II space suits, and would be designated by the Block II titles, regardless of whether an LM was present on the flight or not.
Uncrewed Saturn V and LM tests
On April 24, 1967, Mueller published an official Apollo mission numbering scheme, using sequential numbers for all flights, crewed or uncrewed. The sequence would start with Apollo 4 to cover the first three uncrewed flights, while retiring the Apollo 1 designation to honor the crew, per their widows' wishes.
In September 1967, Mueller approved a sequence of mission types which had to be successfully accomplished in order to achieve the crewed lunar landing. Each step had to be successfully accomplished before the next ones could be performed, and it was unknown how many tries of each mission would be necessary; therefore letters were used instead of numbers. The A missions were uncrewed Saturn V validation; B was uncrewed LM validation using the Saturn IB; C was crewed CSM Earth orbit validation using the Saturn IB; D was the first crewed CSM/LM flight (this replaced AS-258, using a single Saturn V launch); E would be a higher Earth orbit CSM/LM flight; F would be the first lunar mission, testing the LM in lunar orbit but without landing (a "dress rehearsal"); and G would be the first crewed landing. The list of types was extended to cover follow-on lunar exploration: H missions would be lunar landings, I missions lunar orbital surveys, and J missions extended-stay lunar landings.
The delay in the CSM caused by the fire enabled NASA to catch up on human-rating the LM and Saturn V. Apollo 4 (AS-501) was the first uncrewed flight of the Saturn V, carrying a Block I CSM on November 9, 1967. The capability of the command module's heat shield to survive a trans-lunar reentry was demonstrated by using the service module engine to ram it into the atmosphere at higher than the usual Earth-orbital reentry speed.
Apollo 5 (AS-204) was the first uncrewed test flight of the LM in Earth orbit, launched from pad 37 on January 22, 1968, by the Saturn IB that would have been used for Apollo 1. The LM engines were successfully test-fired and restarted, despite a computer programming error which cut short the first descent stage firing. The ascent engine was fired in abort mode, known as a "fire-in-the-hole" test, where it was lit simultaneously with jettison of the descent stage. Although Grumman wanted a second uncrewed test, George Low decided the next LM flight would be crewed.
This was followed on April 4, 1968, by Apollo 6 (AS-502), which carried a CSM and a LM Test Article as ballast. The intent of this mission was to achieve trans-lunar injection, followed closely by a simulated direct-return abort, using the service module engine to achieve another high-speed reentry. The Saturn V experienced pogo oscillation, a problem caused by non-steady engine combustion, which damaged fuel lines in the second and third stages. Two S-II engines shut down prematurely, but the remaining engines were able to compensate. The damage to the third-stage engine was more severe, preventing it from restarting for trans-lunar injection. Mission controllers were able to use the service module engine to essentially repeat the flight profile of Apollo 4. Based on the good performance of Apollo 6 and the identification of satisfactory fixes for its problems, NASA declared the Saturn V ready to fly crews, canceling a third uncrewed test.
Crewed development missions
Apollo 7, launched from LC-34 on October 11, 1968, was the C mission, crewed by Schirra, Eisele, and Cunningham. It was an 11-day Earth-orbital flight which tested the CSM systems.
Apollo 8 was planned to be the D mission in December 1968, crewed by McDivitt, Scott and Schweickart, launched on a Saturn V instead of two Saturn IBs. In the summer it had become clear that the LM would not be ready in time. Rather than waste the Saturn V on another simple Earth-orbiting mission, ASPO Manager George Low suggested the bold step of sending Apollo 8 to orbit the Moon instead, deferring the D mission to the next mission in March 1969, and eliminating the E mission. This would keep the program on track. The Soviet Union had sent two tortoises, mealworms, wine flies, and other lifeforms around the Moon on September 15, 1968, aboard Zond 5, and it was believed they might soon repeat the feat with human cosmonauts. The decision was not announced publicly until successful completion of Apollo 7. Gemini veterans Frank Borman and Jim Lovell, and rookie William Anders captured the world's attention by making ten lunar orbits in 20 hours, transmitting television pictures of the lunar surface on Christmas Eve, and returning safely to Earth.
The following March, LM flight, rendezvous and docking were successfully demonstrated in Earth orbit on Apollo 9, and Schweickart tested the full lunar EVA suit with its portable life support system (PLSS) outside the LM. The F mission was successfully carried out on Apollo 10 in May 1969 by Gemini veterans Thomas P. Stafford, John Young and Eugene Cernan. Stafford and Cernan took the LM to within 8.4 nautical miles (15.6 km) of the lunar surface.
The G mission was achieved on Apollo 11 in July 1969 by an all-Gemini veteran crew consisting of Neil Armstrong, Michael Collins and Buzz Aldrin. Armstrong and Aldrin performed the first landing at the Sea of Tranquility at 20:17:40 UTC on July 20, 1969. They spent a total of 21 hours, 36 minutes on the surface, of which 2 hours, 31 minutes were spent outside the spacecraft, walking on the surface, taking photographs, collecting material samples, and deploying automated scientific instruments, while continuously sending black-and-white television back to Earth. The astronauts returned safely on July 24.
Production lunar landings
In November 1969, Charles "Pete" Conrad became the third person to step onto the Moon, which he did while speaking more informally than Armstrong had.
Conrad and rookie Alan L. Bean made a precision landing of Apollo 12 within walking distance of the Surveyor 3 uncrewed lunar probe, which had landed in April 1967 on the Ocean of Storms. The command module pilot was Gemini veteran Richard F. Gordon Jr. Conrad and Bean carried the first lunar surface color television camera, but it was damaged when accidentally pointed into the Sun. They made two EVAs totaling 7 hours and 45 minutes. On one, they walked to the Surveyor, photographed it, and removed some parts which they returned to Earth.
The contracted batch of 15 Saturn Vs was enough for lunar landing missions through Apollo 20. Shortly after Apollo 11, NASA publicized a preliminary list of eight more planned landing sites after Apollo 12, with plans to increase the mass of the CSM and LM for the last five missions, along with the payload capacity of the Saturn V. These final missions would combine the I and J types in the 1967 list, allowing the CMP to operate a package of lunar orbital sensors and cameras while his companions were on the surface, and allowing them to stay on the Moon for over three days. These missions would also carry the Lunar Roving Vehicle (LRV) increasing the exploration area and allowing televised liftoff of the LM. Also, the Block II spacesuit was revised for the extended missions to allow greater flexibility and visibility for driving the LRV.
The success of the first two landings allowed the remaining missions to be crewed with a single veteran as commander and two rookies. Apollo 13 launched Lovell, Jack Swigert, and Fred Haise in April 1970, headed for the Fra Mauro formation. But two days out, a liquid oxygen tank exploded, disabling the service module and forcing the crew to use the LM as a "lifeboat" to return to Earth. Another NASA review board was convened to determine the cause, which turned out to be a combination of damage to the tank in the factory and a subcontractor not making a tank component according to updated design specifications. Apollo was grounded again for the remainder of 1970 while the oxygen tank was redesigned and an extra one was added.
Mission cutbacks
About the time of the first landing in 1969, it was decided to use an existing Saturn V to launch the Skylab orbital laboratory pre-built on the ground, replacing the original plan to construct it in orbit from several Saturn IB launches; this eliminated Apollo 20. NASA's yearly budget began to shrink in light of the successful landing, and the agency also had to make funds available for the development of the upcoming Space Shuttle. By 1971, the decision was made to also cancel missions 18 and 19. The two unused Saturn Vs ultimately became museum exhibits, displayed at the John F. Kennedy Space Center on Merritt Island, Florida; the George C. Marshall Space Flight Center in Huntsville, Alabama; the Michoud Assembly Facility in New Orleans, Louisiana; and the Lyndon B. Johnson Space Center in Houston, Texas.
The cutbacks forced mission planners to reassess the original planned landing sites in order to achieve the most effective geological sample and data collection from the remaining four missions. Apollo 15 had been planned to be the last of the H series missions, but since there would be only two subsequent missions left, it was changed to the first of three J missions.
Apollo 13's Fra Mauro mission was reassigned to Apollo 14, commanded in February 1971 by Mercury veteran Alan Shepard, with Stuart Roosa and Edgar Mitchell. This time the mission was successful. Shepard and Mitchell spent 33 hours and 31 minutes on the surface, and completed two EVAs totaling 9 hours 24 minutes, a record for the longest EVA by a lunar crew at the time.
In August 1971, just after conclusion of the Apollo 15 mission, President Richard Nixon proposed canceling the two remaining lunar landing missions, Apollo 16 and 17. Office of Management and Budget Deputy Director Caspar Weinberger was opposed to this, and persuaded Nixon to keep the remaining missions.
Extended missions
Apollo 15 was launched on July 26, 1971, with David Scott, Alfred Worden and James Irwin. Scott and Irwin landed on July 30 near Hadley Rille, and spent just under two days, 19 hours on the surface. In over 18 hours of EVA, they collected about 77 kilograms (170 lb) of lunar material.
Apollo 16 landed in the Descartes Highlands on April 20, 1972. The crew was commanded by John Young, with Ken Mattingly and Charles Duke. Young and Duke spent just under three days on the surface, with a total of over 20 hours EVA.
Apollo 17 was the last mission of the Apollo program, landing in the Taurus–Littrow region in December 1972. Eugene Cernan commanded Ronald E. Evans and NASA's first scientist-astronaut, geologist Harrison H. Schmitt. Schmitt was originally scheduled for Apollo 18, but the lunar geological community lobbied for his inclusion on the final lunar landing. Cernan and Schmitt stayed on the surface for just over three days and spent just over 23 hours of total EVA.
Canceled missions
Several further missions were planned but were canceled before details were finalized.
Mission summary
Source: Apollo by the Numbers: A Statistical Reference (Orloff 2004).
Samples returned
The Apollo program returned over 382 kg (842 lb) of lunar rocks and soil to the Lunar Receiving Laboratory in Houston. Today, 75% of the samples are stored at the Lunar Sample Laboratory Facility built in 1979.
The rocks collected from the Moon are extremely old compared to rocks found on Earth, as measured by radiometric dating techniques. They range in age from about 3.2 billion years for the basaltic samples derived from the lunar maria, to about 4.6 billion years for samples derived from the highlands crust. As such, they represent samples from a very early period in the development of the Solar System, that are largely absent on Earth. One important rock found during the Apollo Program is dubbed the Genesis Rock, retrieved by astronauts David Scott and James Irwin during the Apollo 15 mission. This anorthosite rock is composed almost exclusively of the calcium-rich feldspar mineral anorthite, and is believed to be representative of the highland crust. A geochemical component called KREEP was discovered by Apollo 12, which has no known terrestrial counterpart. KREEP and the anorthositic samples have been used to infer that the outer portion of the Moon was once completely molten (see lunar magma ocean).
Almost all the rocks show evidence of impact process effects. Many samples appear to be pitted with micrometeoroid impact craters, something never seen on Earth rocks, because incoming micrometeoroids burn up in Earth's thick atmosphere. Many show signs of being subjected to high-pressure shock waves that are generated during impact events. Some of the returned samples are of impact melt (materials melted near an impact crater). All samples returned from the Moon are highly brecciated as a result of being subjected to multiple impact events.
From analyses of the composition of the returned lunar samples, it is now believed that the Moon was created through the impact of a large astronomical body with Earth.
Costs
Apollo cost $25.4 billion, or approximately $257 billion in 2023 dollars using improved cost-analysis methods.
Of this amount, $20.2 billion was spent on the design, development, and production of the Saturn family of launch vehicles, the Apollo spacecraft, spacesuits, scientific experiments, and mission operations. The cost of constructing and operating Apollo-related ground facilities, such as the NASA human spaceflight centers and the global tracking and data acquisition network, added an additional $5.2 billion.
The amount grows to $28 billion ($280 billion adjusted) if the costs for related projects such as Project Gemini and the robotic Ranger, Surveyor, and Lunar Orbiter programs are included.
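For readers who want to reproduce the inflation arithmetic, a rough sketch follows. The multiplier is derived from the totals quoted above ($25.4 billion nominal versus roughly $257 billion in 2023 dollars); it is an approximation for illustration, not an official NASA deflator:

```python
# Sketch: scale Apollo's nominal (1960s-70s) dollars to 2023 dollars using
# the overall ratio quoted above. The multiplier is derived from this
# article's figures, not from an official price index.

NOMINAL_TOTAL = 25.4e9          # dollars, as reported to Congress in 1973
ADJUSTED_TOTAL_2023 = 257e9     # dollars, per the improved cost analysis

MULTIPLIER = ADJUSTED_TOTAL_2023 / NOMINAL_TOTAL   # roughly 10x

def to_2023_dollars(nominal):
    """Approximate a 1960s-70s nominal cost in 2023 dollars."""
    return nominal * MULTIPLIER

# Apply the same multiplier to the article's component figures:
print(f"Spacecraft and launch vehicles: ${to_2023_dollars(20.2e9) / 1e9:.0f}B")
print(f"Ground facilities and tracking: ${to_2023_dollars(5.2e9) / 1e9:.0f}B")
print(f"Average cost per mission:       ${to_2023_dollars(445e6) / 1e9:.1f}B")
```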
NASA reported its official cost breakdown to Congress in the spring of 1973.
Accurate estimates of human spaceflight costs were difficult in the early 1960s, as the capability was new and management experience was lacking. Preliminary cost analysis by NASA estimated $7–12 billion for a crewed lunar landing effort. NASA Administrator James Webb increased this estimate to $20 billion before reporting it to Vice President Johnson in April 1961.
Project Apollo was a massive undertaking, representing the largest research and development project in peacetime. At its peak, it employed over 400,000 people across NASA and its contractors around the country and accounted for more than half of NASA's total spending in the 1960s. After the first Moon landing, public and political interest waned, including that of President Nixon, who wanted to rein in federal spending. NASA's budget could not sustain Apollo missions, which cost on average $445 million each, while simultaneously developing the Space Shuttle. The final fiscal year of Apollo funding was 1973.
Apollo Applications Program
Looking beyond the crewed lunar landings, NASA investigated several post-lunar applications for Apollo hardware. The Apollo Extension Series (Apollo X) proposed up to 30 flights to Earth orbit, using the space in the Spacecraft Lunar Module Adapter (SLA) to house a small orbital laboratory (workshop). Astronauts would continue to use the CSM as a ferry to the station. This study was followed by the design of a larger orbital workshop, to be built in orbit from an empty S-IVB Saturn upper stage, and grew into the Apollo Applications Program (AAP). The workshop was to be supplemented by the Apollo Telescope Mount, which could be attached to the ascent stage of the lunar module via a rack. The most ambitious plan called for using an empty S-IVB as an interplanetary spacecraft for a Venus fly-by mission.
The S-IVB orbital workshop was the only one of these plans to make it off the drawing board. Dubbed Skylab, it was assembled on the ground rather than in space, and launched in 1973 using the two lower stages of a Saturn V. It was equipped with an Apollo Telescope Mount. Skylab's last crew departed the station on February 8, 1974, and the station itself re-entered the atmosphere in 1979 after development of the Space Shuttle was delayed too long to save it.
The Apollo–Soyuz program also used Apollo hardware for the first spaceflight flown jointly by two nations, paving the way for future cooperation with other nations in the Space Shuttle and International Space Station programs.
Recent observations
In 2008, Japan Aerospace Exploration Agency's SELENE probe observed evidence of the halo surrounding the Apollo 15 Lunar Module blast crater while orbiting above the lunar surface.
Beginning in 2009, NASA's robotic Lunar Reconnaissance Orbiter, while orbiting above the Moon, photographed the remnants of the Apollo program left on the lunar surface, and each site where crewed Apollo flights landed. All of the U.S. flags left on the Moon during the Apollo missions were found to still be standing, with the exception of the one left during the Apollo 11 mission, which was blown over during that mission's lift-off from the lunar surface; the degree to which these flags retain their original colors remains unknown. The flags cannot be seen through a telescope from Earth.
In a November 16, 2009, editorial, The New York Times commented on the legacy of the Apollo program.
Legacy
Science and engineering
The Apollo program has been described as the greatest technological achievement in human history. Apollo stimulated many areas of technology, leading to over 1,800 spinoff products as of 2015, including advances in the development of cordless power tools, fireproof materials, heart monitors, solar panels, digital imaging, and the use of liquid methane as fuel. The flight computer design used in both the lunar and command modules was, along with the Polaris and Minuteman missile systems, the driving force behind early research into integrated circuits (ICs). By 1963, Apollo was using 60 percent of the United States' production of ICs. The crucial difference between the requirements of Apollo and the missile programs was Apollo's much greater need for reliability. While the Navy and Air Force could work around reliability problems by deploying more missiles, the political and financial cost of failure of an Apollo mission was unacceptably high.
Technologies and techniques required for Apollo were developed by Project Gemini. The Apollo project was enabled by NASA's adoption of new advances in semiconductor electronic technology, including metal–oxide–semiconductor field-effect transistors (MOSFETs) in the Interplanetary Monitoring Platform (IMP) and silicon integrated circuit chips in the Apollo Guidance Computer (AGC).
Cultural impact
The crew of Apollo 8 sent the first live televised pictures of the Earth and the Moon back to Earth, and read from the creation story in the Book of Genesis, on Christmas Eve 1968. An estimated one-quarter of the population of the world saw—either live or delayed—the Christmas Eve transmission during the ninth orbit of the Moon, and an estimated one-fifth of the population of the world watched the live transmission of the Apollo 11 moonwalk.
The Apollo program also affected environmental activism in the 1970s due to photos taken by the astronauts. The most well known include Earthrise, taken by William Anders on Apollo 8, and The Blue Marble, taken by the Apollo 17 astronauts. The Blue Marble was released during a surge in environmentalism, and became a symbol of the environmental movement as a depiction of Earth's frailty, vulnerability, and isolation amid the vast expanse of space.
According to The Economist, Apollo succeeded in accomplishing President Kennedy's goal of taking on the Soviet Union in the Space Race by accomplishing a singular and significant achievement, to demonstrate the superiority of the free-market system. The publication noted the irony that in order to achieve the goal, the program required the organization of tremendous public resources within a vast, centralized government bureaucracy.
Apollo 11 broadcast data restoration project
Prior to Apollo 11's 40th anniversary in 2009, NASA searched for the original videotapes of the mission's live televised moonwalk. After an exhaustive three-year search, it was concluded that the tapes had probably been erased and reused. A new digitally remastered version of the best available broadcast television footage was released instead.
Depictions on film
Documentaries
Numerous documentary films cover the Apollo program and the Space Race, including:
Footprints on the Moon (1969)
Moonwalk One (1970)
The Greatest Adventure (1978)
For All Mankind (1989)
Moon Shot (1994 miniseries)
"Moon" from the BBC miniseries The Planets (1999)
Magnificent Desolation: Walking on the Moon 3D (2005)
The Wonder of It All (2007)
In the Shadow of the Moon (2007)
When We Left Earth: The NASA Missions (2008 miniseries)
Moon Machines (2008 miniseries)
James May on the Moon (2009)
NASA's Story (2009 miniseries)
Apollo 11 (2019)
Chasing the Moon (2019 miniseries)
Docudramas
Some missions have been dramatized:
Apollo 13 (1995)
Apollo 11 (1996)
From the Earth to the Moon (1998)
The Dish (2000)
Space Race (2005)
Moonshot (2009)
First Man (2018)
Fictional
The Apollo program has been the focus of several works of fiction, including:
Apollo 18 (2011), a horror film released to negative reviews.
Men in Black 3 (2012), a science fiction comedy in which Agent J (Will Smith) travels back to the Apollo 11 launch in 1969 to ensure that a global protection system is launched into space.
For All Mankind (2019), TV series depicting an alternate history in which the Soviet Union was the first country to successfully land a man on the Moon.
Indiana Jones and the Dial of Destiny (2023), the fifth Indiana Jones film, in which Jürgen Voller, an ex-Nazi NASA scientist involved with the Apollo program, seeks to travel through time. The New York City parade for the Apollo 11 crew figures as a plot point.
See also
Apollo 11 in popular culture
Apollo Lunar Surface Experiments Package
Exploration of the Moon
Leslie Cantwell collection
List of artificial objects on the Moon
List of crewed spacecraft
List of missions to the Moon
Soviet crewed lunar programs
Stolen and missing Moon rocks
Artemis Program
Citations
Sources
NASA reports
Apollo Program Summary Report (PDF), NASA, JSC-09423, April 1975
NASA History Series Publications
Project Apollo Drawings and Technical Diagrams at the NASA History Program Office
The Apollo Lunar Surface Journal edited by Eric M. Jones and Ken Glover
The Apollo Flight Journal by W. David Woods, et al.
Multimedia
NASA Apollo Program images and videos
Apollo Image Archive at Arizona State University
Audio recording and transcript of President John F. Kennedy, NASA administrator James Webb, et al., discussing the Apollo agenda (White House Cabinet Room, November 21, 1962)
The Project Apollo Archive by Kipp Teague is a large repository of Apollo images, videos, and audio recordings
The Project Apollo Archive on Flickr
Apollo Image Atlas—almost 25,000 lunar images, Lunar and Planetary Institute
The short film The Time of Apollo (1975) is available for free viewing and download at the National Archives.
Apollo (11, 13 and 17) in real time multimedia project
|
;1960s in the United States;1970s in the United States;Articles containing video clips;Engineering projects;Exploration of the Moon;Human spaceflight programs;NASA programs;Space program of the United States
|
https://en.wikipedia.org/wiki/Aspirin
|
Aspirin is the genericized trademark for acetylsalicylic acid (ASA), a nonsteroidal anti-inflammatory drug (NSAID) used to reduce pain, fever, and inflammation, and as an antithrombotic. Specific inflammatory conditions that aspirin is used to treat include Kawasaki disease, pericarditis, and rheumatic fever.
Aspirin is also used long-term to help prevent further heart attacks, ischaemic strokes, and blood clots in people at high risk. For pain or fever, effects typically begin within 30 minutes. Aspirin works similarly to other NSAIDs but also suppresses the normal functioning of platelets.
One common adverse effect is an upset stomach. More significant side effects include stomach ulcers, stomach bleeding, and worsening asthma. Bleeding risk is greater among those who are older, drink alcohol, take other NSAIDs, or are on other blood thinners. Aspirin is not recommended in the last part of pregnancy. It is not generally recommended in children with infections because of the risk of Reye syndrome. High doses may result in ringing in the ears.
A precursor to aspirin found in the bark of the willow tree (genus Salix) has been used for its health effects for at least 2,400 years. In 1853, chemist Charles Frédéric Gerhardt treated the medicine sodium salicylate with acetyl chloride to produce acetylsalicylic acid for the first time. Over the next 50 years, other chemists, mostly at the German company Bayer, established the chemical structure and devised more efficient production methods. Felix Hoffmann (or Arthur Eichengrün) of Bayer was the first to produce acetylsalicylic acid in a pure, stable form in 1897. By 1899, Bayer had dubbed this drug Aspirin and was selling it globally.
Aspirin is available without medical prescription as a proprietary or generic medication in most jurisdictions. It is one of the most widely used medications globally, with an estimated 40,000 tonnes (50 to 120 billion pills) consumed each year, and is on the World Health Organization's List of Essential Medicines. In 2022, it was the 36th most commonly prescribed medication in the United States, with more than 16 million prescriptions.
Brand vs. generic name
In 1897, scientists at the Bayer company began studying acetylsalicylic acid as a less-irritating replacement medication for common salicylate medicines. By 1899, Bayer had named it "Aspirin" and was selling it around the world.
Aspirin's popularity grew over the first half of the 20th century, leading to competition between many brands and formulations. The word Aspirin was Bayer's brand name; however, its rights to the trademark were lost or sold in many countries. The name is ultimately a blend of the prefix a- (for acetyl), spir (for Spiraea, the meadowsweet plant genus from which the salicylic acid was originally derived), and -in, a common suffix for drugs at the end of the 19th century.
Chemical properties
Aspirin decomposes rapidly in solutions of ammonium acetate or the acetates, carbonates, citrates, or hydroxides of the alkali metals. It is stable in dry air, but gradually hydrolyses in contact with moisture to acetic and salicylic acids. In a solution with alkalis, the hydrolysis proceeds rapidly and the clear solutions formed may consist entirely of acetate and salicylate.
Like flour mills, factories producing aspirin tablets must control the amount of the powder that becomes airborne inside the building, because the powder-air mixture can be explosive. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit in the United States of 5 mg/m3 (time-weighted average). In 1989, the US Occupational Safety and Health Administration (OSHA) set a legal permissible exposure limit for aspirin of 5 mg/m3, but this was vacated by the AFL-CIO v. OSHA decision in 1993.
Synthesis
The synthesis of aspirin is classified as an esterification reaction. Salicylic acid is treated with acetic anhydride, an acid derivative, causing a chemical reaction that turns salicylic acid's hydroxyl group into an ester group (R-OH → R-OCOCH3). This process yields aspirin and acetic acid, which is considered a byproduct of this reaction. Small amounts of sulfuric acid (and occasionally phosphoric acid) are almost always used as a catalyst. This method is commonly demonstrated in undergraduate teaching labs.
Reaction between acetic acid and salicylic acid can also form aspirin, but this esterification reaction is reversible and the presence of water can lead to hydrolysis of the aspirin, so an anhydrous reagent is preferred.
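Since the text notes this synthesis is a standard teaching-lab exercise, here is a sketch of the accompanying yield arithmetic. The molar masses are standard values; the 2.00 g starting mass and the recovered mass are illustrative assumptions:

```python
# Theoretical-yield arithmetic for the esterification above, as done in
# teaching labs. Salicylic acid is the limiting reagent (acetic anhydride
# is used in excess), and the mole ratio is 1:1.

M_SALICYLIC = 138.12   # g/mol, salicylic acid (C7H6O3)
M_ASPIRIN = 180.16     # g/mol, acetylsalicylic acid (C9H8O4)

def theoretical_yield(salicylic_g):
    """Each mole of salicylic acid gives at most one mole of aspirin."""
    moles = salicylic_g / M_SALICYLIC
    return moles * M_ASPIRIN

start = 2.00                        # g salicylic acid (illustrative)
theo = theoretical_yield(start)     # about 2.61 g aspirin
recovered = 2.20                    # g crude product (illustrative)

print(f"Theoretical yield: {theo:.2f} g")
print(f"Percent yield:     {100 * recovered / theo:.0f}%")
```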
Reaction mechanism
Formulations containing high concentrations of aspirin often smell like vinegar because aspirin can decompose through hydrolysis in moist conditions, yielding salicylic and acetic acids.
Physical properties
Aspirin, an acetyl derivative of salicylic acid, is a white, crystalline, weakly acidic substance that melts at 136 °C (277 °F) and decomposes around 140 °C (284 °F). Its acid dissociation constant (pKa) is 3.5 at 25 °C (77 °F).
Polymorphism
Polymorphism is the ability of a substance to form more than one crystal structure. Until 2005, there was only one proven polymorph of aspirin (form I), though the existence of another polymorph had been debated since the 1960s, and one report from 1981 noted that when aspirin is crystallized in the presence of aspirin anhydride, its diffractogram shows weak additional peaks. Though at the time this was dismissed as mere impurity, it was, in retrospect, form II aspirin.
Form II was reported in 2005, found after attempted co-crystallization of aspirin and levetiracetam from hot acetonitrile. Pure form II aspirin can be prepared by seeding the batch with 15% by weight of aspirin anhydride.
In form I, pairs of aspirin molecules form centrosymmetric dimers through the acetyl groups, with the (acidic) methyl protons hydrogen-bonded to the carbonyl groups. In form II, each aspirin molecule forms the same hydrogen bonds, but with two neighbouring molecules instead of one. With respect to the hydrogen bonds formed by the carboxylic acid groups, both polymorphs form identical dimer structures. The aspirin polymorphs contain identical 2-dimensional sections and are therefore more precisely described as polytypes.
Form III was reported in 2015 by compressing Form I above 2 GPa, but it reverts to form I when pressure is removed. Form IV was reported in 2017, which is stable at ambient conditions.
Mechanism of action
Discovery of the mechanism
In 1971, British pharmacologist John Robert Vane, then employed by the Royal College of Surgeons in London, showed that aspirin suppressed the production of prostaglandins and thromboxanes. For this discovery, he was awarded the 1982 Nobel Prize in Physiology or Medicine, jointly with Sune Bergström and Bengt Ingemar Samuelsson.
Prostaglandins and thromboxanes
Aspirin's ability to suppress the production of prostaglandins and thromboxanes is due to its irreversible inactivation of the cyclooxygenase (COX; officially known as prostaglandin-endoperoxide synthase, PTGS) enzyme required for prostaglandin and thromboxane synthesis. Aspirin acts as an acetylating agent where an acetyl group is covalently attached to a serine residue in the active site of the COX enzyme (suicide inhibition). This makes aspirin different from other NSAIDs (such as diclofenac and ibuprofen), which are reversible inhibitors.
Low-dose aspirin use irreversibly blocks the formation of thromboxane A2 in platelets, which inhibits platelet aggregation for the lifetime of the affected platelet (8–9 days). This antithrombotic property makes aspirin useful for reducing the incidence of heart attacks in people who have had a heart attack, unstable angina, ischemic stroke or transient ischemic attack. A dose of 40 mg of aspirin a day is able to inhibit a large proportion of acutely provoked maximum thromboxane A2 release, while affecting prostaglandin I2 synthesis little; however, higher doses of aspirin are required to attain further inhibition.
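A toy model can illustrate why this irreversible inhibition outlasts the drug itself: platelet function returns only as new, uninhibited platelets replace old ones. In the sketch below, the roughly 8.5-day lifetime is taken from the range above, while the constant-turnover, linear-recovery assumption is a deliberate simplification:

```python
# Toy model of recovery after the last aspirin dose. COX-1 inhibition is
# irreversible, so inhibited platelets never recover; assuming the ~8-9 day
# platelet lifetime quoted above, roughly 12% of the pool turns over daily.

PLATELET_LIFETIME_DAYS = 8.5
daily_turnover = 1 / PLATELET_LIFETIME_DAYS   # fraction of pool replaced per day

inhibited = 1.0   # fraction of platelets with acetylated COX-1 at day 0
for day in range(1, 9):
    # New, functional platelets dilute the inhibited pool each day.
    inhibited = max(0.0, inhibited - daily_turnover)
    print(f"Day {day} after last dose: {inhibited:.0%} of platelets still inhibited")
```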
Prostaglandins, a type of local hormone produced in the body, have diverse effects, including the transmission of pain information to the brain, modulation of the hypothalamic thermostat, and inflammation. Thromboxanes are responsible for the aggregation of the platelets that form blood clots. Heart attacks are caused primarily by blood clots, and low doses of aspirin are seen as an effective medical intervention to prevent a second acute myocardial infarction.
COX-1 and COX-2 inhibition
At least two different types of cyclooxygenases, COX-1 and COX-2, are acted on by aspirin. Aspirin irreversibly inhibits COX-1 and modifies the enzymatic activity of COX-2. COX-2 normally produces prostanoids, most of which are proinflammatory. Aspirin-modified COX-2 (aka prostaglandin-endoperoxide synthase 2 or PTGS2) produces epi-lipoxins, most of which are anti-inflammatory. Newer NSAID drugs, COX-2 inhibitors (coxibs), have been developed to inhibit only COX-2, with the intent to reduce the incidence of gastrointestinal side effects.
Several COX-2 inhibitors, such as rofecoxib (Vioxx), have been withdrawn from the market, after evidence emerged that COX-2 inhibitors increase the risk of heart attack and stroke. Endothelial cells lining the microvasculature in the body are proposed to express COX-2, and, by selectively inhibiting COX-2, prostaglandin production (specifically, PGI2; prostacyclin) is downregulated with respect to thromboxane levels, as COX-1 in platelets is unaffected. Thus, the protective anticoagulative effect of PGI2 is removed, increasing the risk of thrombus and associated heart attacks and other circulatory problems.
Furthermore, aspirin, while inhibiting the ability of COX-2 to form pro-inflammatory products such as the prostaglandins, converts this enzyme's activity from a prostaglandin-forming cyclooxygenase to a lipoxygenase-like enzyme: aspirin-treated COX-2 metabolizes a variety of polyunsaturated fatty acids to hydroperoxy products which are then further metabolized to specialized proresolving mediators such as the aspirin-triggered lipoxins (15-epi-lipoxin A4/B4), aspirin-triggered resolvins, and aspirin-triggered maresins. These mediators possess potent anti-inflammatory activity. It is proposed that this aspirin-triggered transition of COX-2 from cyclooxygenase to lipoxygenase activity and the consequential formation of specialized proresolving mediators contributes to the anti-inflammatory effects of aspirin.
Additional mechanisms
Aspirin has been shown to have at least three additional modes of action. It uncouples oxidative phosphorylation in cartilaginous (and hepatic) mitochondria, by diffusing from the inner membrane space as a proton carrier back into the mitochondrial matrix, where it ionizes once again to release protons. Aspirin buffers and transports the protons. When high doses are given, it may actually cause fever, owing to the heat released from the electron transport chain, as opposed to the antipyretic action of aspirin seen with lower doses. In addition, aspirin induces the formation of NO-radicals in the body, which have been shown in mice to have an independent mechanism of reducing inflammation. This reduced leukocyte adhesion is an important step in the immune response to infection; however, evidence is insufficient to show that aspirin helps to fight infection. More recent data also suggest salicylic acid and its derivatives modulate signalling through NF-κB. NF-κB, a transcription factor complex, plays a central role in many biological processes, including inflammation.
Aspirin is readily broken down in the body to salicylic acid, which itself has anti-inflammatory, antipyretic, and analgesic effects. In 2012, salicylic acid was found to activate AMP-activated protein kinase, which has been suggested as a possible explanation for some of the effects of both salicylic acid and aspirin. The acetyl portion of the aspirin molecule has its own targets. Acetylation of cellular proteins is a well-established phenomenon in the regulation of protein function at the post-translational level. Aspirin is able to acetylate several other targets in addition to COX isoenzymes. These acetylation reactions may explain many hitherto unexplained effects of aspirin.
Formulations
Aspirin is produced in many formulations, with some differences in effect. In particular, aspirin can cause gastrointestinal bleeding, and formulations are sought which deliver the benefits of aspirin while mitigating harmful bleeding. Formulations may be combined (e.g., buffered + vitamin C).
Tablets, typically of about 75–100 mg and 300–320 mg of immediate-release aspirin (IR-ASA).
Dispersible tablets.
Enteric-coated tablets.
Buffered formulations containing aspirin with one of many buffering agents.
Formulations of aspirin with vitamin C (ASA-VitC)
A phospholipid-aspirin complex liquid formulation, PL-ASA; the phospholipid coating is being trialled to determine whether it causes less gastrointestinal damage.
Pharmacokinetics
Acetylsalicylic acid is a weak acid, and very little of it is ionized in the stomach after oral administration. Acetylsalicylic acid is quickly absorbed through the cell membrane in the acidic conditions of the stomach. The higher pH and larger surface area of the small intestine cause aspirin to be absorbed more slowly there, as more of it is ionized. Owing to the formation of concretions, aspirin is absorbed much more slowly during overdose, and blood plasma concentrations can continue to rise for up to 24 hours after ingestion.
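The pH dependence described above follows from the Henderson-Hasselbalch relation. Below is a short sketch using the pKa of 3.5 quoted under Physical properties; the example pH values for the stomach and small intestine are typical textbook figures, not from this article:

```python
# Fraction of a weak acid in its ionized (membrane-impermeable) form at a
# given pH, from the Henderson-Hasselbalch equation. With pKa 3.5, aspirin
# is mostly un-ionized (readily absorbed) in stomach acid and almost fully
# ionized at intestinal pH.

PKA = 3.5  # aspirin's acid dissociation constant (from the text)

def fraction_ionized(ph):
    """Ionized fraction = 1 / (1 + 10^(pKa - pH))."""
    return 1 / (1 + 10 ** (PKA - ph))

for label, ph in [("stomach", 2.0), ("small intestine", 6.5)]:
    f = fraction_ionized(ph)
    print(f"{label} (pH {ph}): {f:.1%} ionized, {1 - f:.1%} un-ionized")
```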
About 50–80% of salicylate in the blood is bound to human serum albumin, while the rest remains in the active, ionized state; protein binding is concentration-dependent. Saturation of binding sites leads to more free salicylate and increased toxicity. The volume of distribution is 0.1–0.2 L/kg. Acidosis increases the volume of distribution because of enhancement of tissue penetration of salicylates.
As much as 80% of therapeutic doses of salicylic acid is metabolized in the liver. Conjugation with glycine forms salicyluric acid, and conjugation with glucuronic acid forms two different glucuronide conjugates: the conjugate with the acetyl group intact is referred to as the acyl glucuronide, and the deacetylated conjugate is the phenolic glucuronide. These metabolic pathways have only a limited capacity. Small amounts of salicylic acid are also hydroxylated to gentisic acid. With large salicylate doses, the kinetics switch from first-order to zero-order, as metabolic pathways become saturated and renal excretion becomes increasingly important.
Salicylates are excreted mainly by the kidneys as salicyluric acid (75%), free salicylic acid (10%), salicylic phenolic glucuronide (10%), acyl glucuronides (5%), gentisic acid (< 1%), and 2,3-dihydroxybenzoic acid. When small doses (less than 250 mg in an adult) are ingested, all pathways proceed by first-order kinetics, with an elimination half-life of about 2.0 h to 4.5 h. When higher doses of salicylate are ingested (more than 4 g), the half-life becomes much longer (15 h to 30 h), because the biotransformation pathways concerned with the formation of salicyluric acid and salicyl phenolic glucuronide become saturated. Renal excretion of salicylic acid becomes increasingly important as the metabolic pathways become saturated, because it is extremely sensitive to changes in urinary pH. A 10- to 20-fold increase in renal clearance occurs when urine pH is increased from 5 to 8. The use of urinary alkalinization exploits this particular aspect of salicylate elimination. It was found that short-term aspirin use in therapeutic doses might precipitate reversible acute kidney injury when the patient was ill with glomerulonephritis or cirrhosis. Aspirin was found to be contraindicated for some patients with chronic kidney disease and some children with congestive heart failure.
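At therapeutic doses, the first-order elimination described above amounts to simple exponential decay, sketched below. The 3-hour half-life is a midpoint of the 2.0–4.5 h range quoted; the starting concentration is illustrative, and the model deliberately ignores the zero-order kinetics seen at higher doses:

```python
import math

# First-order salicylate elimination at therapeutic doses:
# C(t) = C0 * exp(-k * t), with k = ln(2) / half-life.

HALF_LIFE_H = 3.0                  # hours, midpoint of the 2.0-4.5 h range
k = math.log(2) / HALF_LIFE_H      # elimination rate constant, 1/h

def concentration(c0, t_hours):
    """Plasma concentration after t hours of first-order decay."""
    return c0 * math.exp(-k * t_hours)

c0 = 30.0  # mg/L, illustrative starting plasma level
for t in (0, 3, 6, 12):
    print(f"t = {t:2d} h: {concentration(c0, t):5.1f} mg/L")
```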
History
Medicines made from willow and other salicylate-rich plants appear in clay tablets from ancient Sumer as well as the Ebers Papyrus from ancient Egypt. Hippocrates referred to the use of salicylic tea to reduce fevers around 400 BC, and willow bark preparations were part of the pharmacopoeia of Western medicine in classical antiquity and the Middle Ages. Willow bark extract became recognized for its specific effects on fever, pain, and inflammation in the mid-eighteenth century after the Rev Edward Stone of Chipping Norton, Oxfordshire, noticed that the bitter taste of willow bark resembled the taste of the bark of the cinchona tree, known as "Peruvian bark", which was used successfully in Peru to treat a variety of ailments. Stone experimented with preparations of powdered willow bark on people in Chipping Norton for five years and found it to be as effective as Peruvian bark and a cheaper domestic version. In 1763, he sent a report of his findings to the Royal Society in London. By the nineteenth century, pharmacists were experimenting with and prescribing a variety of chemicals related to salicylic acid, the active component of willow extract.
In 1853, chemist Charles Frédéric Gerhardt treated sodium salicylate with acetyl chloride to produce acetylsalicylic acid for the first time; in the second half of the 19th century, other academic chemists established the compound's chemical structure and devised more efficient methods of synthesis. In 1897, scientists at the drug and dye firm Bayer began investigating acetylsalicylic acid as a less-irritating replacement for standard common salicylate medicines, and identified a new way to synthesize it. That year, Felix Hoffmann (or Arthur Eichengrün) of Bayer was the first to produce acetylsalicylic acid in a pure, stable form.
Salicylic acid had been extracted in 1838 from the herb meadowsweet, whose German name, Spirsäure, was the basis for naming the newly synthesized drug, which, by 1899, Bayer was selling globally. The word Aspirin was Bayer's brand name, rather than the generic name of the drug; however, Bayer's rights to the trademark were lost or sold in many countries. Aspirin's popularity grew over the first half of the 20th century, leading to fierce competition with the proliferation of aspirin brands and products.
Aspirin's popularity declined after the development of acetaminophen/paracetamol in 1956 and ibuprofen in 1962. In the 1960s and 1970s, John Vane and others discovered the basic mechanism of aspirin's effects, while clinical trials and other studies from the 1960s to the 1980s established aspirin's efficacy as an anti-clotting agent that reduces the risk of clotting diseases. The initial large studies on the use of low-dose aspirin to prevent heart attacks that were published in the 1970s and 1980s helped spur reform in clinical research ethics and guidelines for human subject research and US federal law, and are often cited as examples of clinical trials that included only men, but from which people drew general conclusions that did not hold true for women.
Aspirin sales revived considerably in the last decades of the 20th century, and remain strong in the 21st century with widespread use as a preventive treatment for heart attacks and strokes.
Trademark
Bayer lost its trademark for aspirin in the United States and some other countries in actions taken between 1918 and 1921 because it had failed to use the name for its own product correctly and had for years allowed the use of "Aspirin" by other manufacturers without defending the intellectual property rights. Aspirin is a generic trademark in many countries. Aspirin, with a capital "A", remains a registered trademark of Bayer in Germany, Canada, Mexico, and in over 80 other countries, for acetylsalicylic acid in all markets, but using different packaging and physical aspects for each.
Compendial status
United States Pharmacopeia
British Pharmacopoeia
Medical use
Aspirin is used in the treatment of a number of conditions, including fever, pain, rheumatic fever, and inflammatory conditions, such as rheumatoid arthritis, pericarditis, and Kawasaki disease. Lower doses of aspirin have also been shown to reduce the risk of death from a heart attack, or the risk of stroke in people who are at high risk or who have cardiovascular disease, but not in elderly people who are otherwise healthy. There is evidence that aspirin is effective at preventing colorectal cancer, though the mechanisms of this effect are unclear.
Pain
Aspirin is an effective analgesic for acute pain, although it is generally considered inferior to ibuprofen because aspirin is more likely to cause gastrointestinal bleeding. Aspirin is generally ineffective for pain caused by muscle cramps, bloating, gastric distension, or acute skin irritation. As with other NSAIDs, combinations of aspirin and caffeine provide slightly greater pain relief than aspirin alone. Effervescent formulations of aspirin relieve pain faster than aspirin in tablets, which makes them useful for the treatment of migraines. Topical aspirin may be effective for treating some types of neuropathic pain.
Aspirin, either by itself or in a combined formulation, effectively treats certain types of headache, but its efficacy may be questionable for others. Secondary headaches, meaning those caused by another disorder or trauma, should be promptly treated by a medical provider. Among primary headaches, the International Classification of Headache Disorders distinguishes between tension headache (the most common), migraine, and cluster headache. Aspirin or other over-the-counter analgesics are widely recognized as effective for the treatment of tension headaches. Aspirin, especially as a component of an aspirin/paracetamol/caffeine combination, is considered a first-line therapy in the treatment of migraine, and comparable to lower doses of sumatriptan. It is most effective at stopping migraines when they are first beginning.
Fever
Like its ability to control pain, aspirin's ability to control fever is due to its action on the prostaglandin system through its irreversible inhibition of COX. Although aspirin's use as an antipyretic in adults is well established, many medical societies and regulatory agencies, including the American Academy of Family Physicians, the American Academy of Pediatrics, and the Food and Drug Administration, strongly advise against using aspirin for the treatment of fever in children because of the risk of Reye syndrome, a rare but often fatal illness associated with the use of aspirin or other salicylates in children during episodes of viral or bacterial infection. Because of the risk of Reye syndrome in children, in 1986, the US Food and Drug Administration (FDA) required labeling on all aspirin-containing medications advising against its use in children and teenagers.
Inflammation
Aspirin is used as an anti-inflammatory agent for both acute and long-term inflammation, as well as for the treatment of inflammatory diseases, such as rheumatoid arthritis.
Heart attacks and strokes
Aspirin is an important part of the treatment of those who have had a heart attack. It is generally not recommended for routine use by people with no other health problems, including those over the age of 70.
The 2009 Antithrombotic Trialists' Collaboration meta-analysis, published in The Lancet, evaluated the efficacy and safety of low-dose aspirin in secondary prevention. In those with prior ischaemic stroke or acute myocardial infarction, daily low-dose aspirin was associated with a 19% relative risk reduction of serious cardiovascular events (non-fatal myocardial infarction, non-fatal stroke, or vascular death). This came at the expense of a 0.19% absolute risk increase in gastrointestinal bleeding; however, the benefits were judged to outweigh the bleeding hazard in this case. Data from earlier trials have suggested that weight-based dosing of aspirin has greater benefits in primary prevention of cardiovascular outcomes. However, more recent trials were not able to replicate similar outcomes using low-dose aspirin in low body weight (<70 kg) in the specific subsets of the population studied, i.e., elderly and diabetic patients, and more evidence is required to study the effect of high-dose aspirin in high body weight (≥70 kg).
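The distinction between relative and absolute risk in these figures is easy to misread, so here is the arithmetic spelled out. The 19% relative risk reduction and 0.19% absolute bleeding increase come from the text above; the 8% baseline event risk is an illustrative assumption for a high-risk secondary-prevention population:

```python
# Relative vs. absolute risk arithmetic for the figures quoted above.
# NNT = number needed to treat to prevent one event;
# NNH = number needed to harm (one extra GI bleed).

baseline_risk = 0.08    # assumed risk of a serious vascular event (illustrative)
rrr = 0.19              # relative risk reduction on aspirin (from the text)
ari_bleed = 0.0019      # absolute risk increase in GI bleeding (from the text)

arr = baseline_risk * rrr   # absolute risk reduction, here 1.52 percentage points
nnt = 1 / arr               # ~66 patients treated per event prevented
nnh = 1 / ari_bleed         # ~526 patients treated per extra bleed

print(f"Absolute risk reduction: {arr:.2%} -> NNT ~ {nnt:.0f}")
print(f"Absolute bleed increase: {ari_bleed:.2%} -> NNH ~ {nnh:.0f}")
```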
After percutaneous coronary interventions (PCIs), such as the placement of a coronary artery stent, a U.S. Agency for Healthcare Research and Quality guideline recommends that aspirin be taken indefinitely. Frequently, aspirin is combined with an ADP receptor inhibitor, such as clopidogrel, prasugrel, or ticagrelor, to prevent blood clots; this is called dual antiplatelet therapy (DAPT). United States and European Union guidance on the duration of DAPT followed the CURE and PRODIGY studies. In 2020, a systematic review and network meta-analysis by Khan et al. showed promising benefits of short-term (<6 months) DAPT followed by a P2Y12 inhibitor in selected patients, as well as benefits of extended-term (>12 months) DAPT in high-risk patients. Overall, the optimal duration of DAPT after PCIs should be personalized by weighing each patient's risk of ischemic events against their risk of bleeding events, with consideration of multiple patient-related and procedure-related factors. Moreover, aspirin should be continued indefinitely after DAPT is complete.
Evidence on the use of aspirin for primary prevention of cardiovascular disease is conflicting and inconsistent: recommendations have shifted away from the broad endorsements of decades past, as newer trials referenced in clinical guidelines show less benefit from adding aspirin alongside antihypertensive and cholesterol-lowering therapies. The ASCEND study demonstrated that in high-bleeding-risk diabetics with no prior cardiovascular disease, there is no overall clinical benefit (12% decrease in risk of ischaemic events vs. 29% increase in GI bleeding) of low-dose aspirin in preventing serious vascular events over a period of 7.4 years. Similarly, the results of the ARRIVE study showed no benefit of the same dose of aspirin in reducing the time to first cardiovascular outcome in patients with moderate risk of cardiovascular disease over a period of five years. Aspirin has also been suggested as a component of a polypill for prevention of cardiovascular disease. Complicating the use of aspirin for prevention is the phenomenon of aspirin resistance: for people who are resistant, aspirin's efficacy is reduced. Some authors have suggested testing regimens to identify people who are resistant to aspirin.
As of 2022, the United States Preventive Services Task Force (USPSTF) determined that there was a "small net benefit" for patients aged 40–59 with a 10% or greater 10-year cardiovascular disease (CVD) risk, and "no net benefit" for patients aged over 60. Determining the net benefit was based on balancing the risk reduction of taking aspirin for heart attacks and ischaemic strokes with the increased risk of gastrointestinal bleeding, intracranial bleeding, and hemorrhagic strokes. Their recommendations state that age changes the risk of the medicine: the magnitude of the benefit comes from starting at a younger age, while the risk of bleeding, though small, increases with age, particularly for adults over 60, and can be compounded by other risk factors such as diabetes and a history of gastrointestinal bleeding. As a result, the USPSTF suggests that "people ages 40 to 59 who are at higher risk for CVD should decide with their clinician whether to start taking aspirin; people 60 or older should not start taking aspirin to prevent a first heart attack or stroke." Primary prevention guidelines published in 2019 by the American College of Cardiology and the American Heart Association state that clinicians might consider aspirin for patients aged 40–69 with a higher risk of atherosclerotic CVD and without an increased bleeding risk, while stating they would not recommend aspirin for patients aged over 70 or adults of any age with an increased bleeding risk. They state a CVD risk estimation and a risk discussion should be done before starting on aspirin, and that aspirin should be used "infrequently in the routine primary prevention of (atherosclerotic CVD) because of lack of net benefit". As of 2021, the European Society of Cardiology made similar recommendations: considering aspirin specifically for patients aged less than 70 at high or very high CVD risk, without any clear contraindications, on a case-by-case basis weighing both ischemic risk and bleeding risk.
Cancer prevention
Aspirin may reduce the overall risk of both getting cancer and dying from cancer. There is substantial evidence for lowering the risk of colorectal cancer (CRC), but aspirin must be taken for at least 10–20 years to see this benefit. It may also slightly reduce the risk of endometrial cancer and prostate cancer.
Some conclude that the benefits outweigh the risks due to bleeding in those at average risk; others find it unclear whether the benefits are greater than the risk. Given this uncertainty, the 2007 United States Preventive Services Task Force (USPSTF) guidelines on this topic recommended against the use of aspirin for prevention of CRC in people with average risk. Nine years later, however, the USPSTF issued a grade B recommendation for the use of low-dose aspirin (75 to 100 mg/day) "for the primary prevention of CVD [cardiovascular disease] and CRC in adults 50 to 59 years of age who have a 10% or greater 10-year CVD risk, are not at increased risk for bleeding, have a life expectancy of at least 10 years, and are willing to take low-dose aspirin daily for at least 10 years".
A meta-analysis of studies through 2019 found an association between taking aspirin and lower risk of cancer of the colorectum, esophagus, and stomach.
In 2021, the United States Preventive Services Task Force raised questions about the use of aspirin in cancer prevention. It notes the results of the 2018 ASPREE (Aspirin in Reducing Events in the Elderly) Trial, in which the risk of cancer-related death was higher in the aspirin-treated group than in the placebo group.
In 2025, a group of scientists at the University of Cambridge found that aspirin stimulates the immune system to reduce cancer metastasis. They found that a protein called ARHGEF1 suppresses the T cells required for attacking metastatic cancer cells. Aspirin appeared to counteract this suppression by targeting a clotting factor called thromboxane A2 (TXA2), which activates ARHGEF1, thus preventing it from suppressing the T cells. The researchers called the discovery a "Eureka moment". It was reported that the findings could lead to more targeted use of aspirin in cancer treatment. They also cautioned that, because of aspirin's potential side effects, people should not self-medicate with it for this purpose until clinical trials have been conducted.
Psychiatry
Bipolar disorder
Aspirin, along with several other agents with anti-inflammatory properties, has been repurposed as an add-on treatment for depressive episodes in subjects with bipolar disorder in light of the possible role of inflammation in the pathogenesis of severe mental disorders. A 2022 systematic review concluded that aspirin exposure reduced the risk of depression in a pooled cohort of three studies (HR 0.624, 95% CI: 0.0503, 1.198, P=0.033). However, further high-quality, longer-duration, double-blind randomized controlled trials (RCTs) are needed to determine whether aspirin is an effective add-on treatment for bipolar depression. Thus, notwithstanding the biological rationale, the clinical perspectives of aspirin and anti-inflammatory agents in the treatment of bipolar depression remain uncertain.
Dementia
Although cohort and longitudinal studies suggest that low-dose aspirin may reduce the incidence of dementia, numerous randomized controlled trials have not validated this.
Schizophrenia
Some researchers have speculated the anti-inflammatory effects of aspirin may be beneficial for schizophrenia. Small trials have been conducted but evidence remains lacking.
Other uses
Aspirin is a first-line treatment for the fever and joint-pain symptoms of acute rheumatic fever. The therapy often lasts for one to two weeks, and is rarely indicated for longer periods. After fever and pain have subsided, the aspirin is no longer necessary, since it does not decrease the incidence of heart complications and residual rheumatic heart disease. Naproxen has been shown to be as effective as aspirin and less toxic, but due to the limited clinical experience, naproxen is recommended only as a second-line treatment.
Along with rheumatic fever, Kawasaki disease remains one of the few indications for aspirin use in children, despite a lack of high-quality evidence for its effectiveness.
Low-dose aspirin supplementation has moderate benefits when used for prevention of pre-eclampsia. This benefit is greater when started in early pregnancy.
Aspirin has also demonstrated anti-tumoral effects, via inhibition of the PTTG1 gene, which is often overexpressed in tumors.
Resistance
For some people, aspirin does not have as strong an effect on platelets as for others, an effect known as aspirin resistance or insensitivity. One study has suggested women are more likely to be resistant than men; a different, aggregate study of 2,930 people found 28% were resistant.
A study in 100 Italian people found, of the apparent 31% aspirin-resistant subjects, only 5% were truly resistant, and the others were noncompliant.
Another study of 400 healthy volunteers found no subjects who were truly resistant, but some had "pseudoresistance, reflecting delayed and reduced drug absorption".
Meta-analyses and systematic reviews have concluded that laboratory-confirmed aspirin resistance confers increased rates of poorer outcomes in cardiovascular and neurovascular diseases. Although the majority of research has concerned cardiovascular and neurovascular disease, there is emerging research into the risk of aspirin resistance after orthopaedic surgery, where aspirin is used for venous thromboembolism prophylaxis. Aspirin resistance in orthopaedic surgery, specifically after total hip and knee arthroplasties, is of interest because risk factors for aspirin resistance are also risk factors for venous thromboembolism and for the osteoarthritis that commonly leads to total hip or knee arthroplasty. Some of these risk factors include obesity, advancing age, diabetes mellitus, dyslipidemia and inflammatory diseases.
Dosages
Adult aspirin tablets are produced in standardised sizes, which vary slightly from country to country, for example 300 mg in Britain and 325 mg in the United States. Smaller doses are based on these standards, e.g., 75 mg and 81 mg tablets. The 81 mg tablets are commonly called "baby aspirin" or "baby-strength", because they were originally, but are no longer, intended to be administered to infants and children. The slight difference in dosage between the 75 mg and 81 mg tablets has no medical significance. The dose required for benefit appears to depend on a person's weight: for those weighing less than 70 kg (154 lb), low dose is effective for preventing cardiovascular disease; for patients above this weight, higher doses are required.
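The weight dependence is easier to see as dose per kilogram: a fixed low-dose tablet delivers quite different exposures across body sizes. Below is a minimal sketch, where the 70 kg threshold comes from the trials discussed above and the example weights are illustrative; this illustrates the claim and is not dosing guidance:

```python
# Dose-per-kilogram arithmetic for a fixed 81 mg tablet: the same tablet
# delivers a meaningfully different mg/kg exposure across body sizes.
# The 70 kg threshold is from the trials discussed in this article.

TABLET_MG = 81
WEIGHT_THRESHOLD_KG = 70

for weight_kg in (55, 70, 95):
    mg_per_kg = TABLET_MG / weight_kg
    note = ("low dose expected effective" if weight_kg < WEIGHT_THRESHOLD_KG
            else "higher dose may be required")
    print(f"{weight_kg:3d} kg: {mg_per_kg:.2f} mg/kg -> {note}")
```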
In general, for adults, doses are taken four times a day for fever or arthritis, with doses near the maximal daily dose used historically for the treatment of rheumatic fever. For the prevention of myocardial infarction (MI) in someone with documented or suspected coronary artery disease, much lower doses are taken once daily.
March 2009 recommendations from the US Preventive Services Task Force (USPSTF) on the use of aspirin for the primary prevention of coronary heart disease encourage men aged 45–79 and women aged 55–79 to use aspirin when the potential benefit of a reduction in MI for men, or stroke for women, outweighs the potential harm of an increase in gastrointestinal hemorrhage. The Women's Health Initiative (WHI) study of postmenopausal women found that aspirin was associated with a 25% lower risk of death from cardiovascular disease and a 14% lower risk of death from any cause, with no significant difference between 81 mg and 325 mg doses. The 2021 ADAPTABLE study likewise showed no significant difference in cardiovascular events or major bleeding between 81 mg and 325 mg doses of aspirin in patients (both men and women) with established cardiovascular disease.
Low-dose aspirin use was also associated with a trend toward lower risk of cardiovascular events, and lower aspirin doses (75 or 81mg/day) may optimize efficacy and safety for people requiring aspirin for long-term prevention.
In children with Kawasaki disease, aspirin is taken at dosages based on body weight, initially four times a day for up to two weeks and then at a lower dose once daily for a further six to eight weeks.
Adverse effects
In October 2020, the US Food and Drug Administration (FDA) required the drug label to be updated for all nonsteroidal anti-inflammatory medications to describe the risk of kidney problems in unborn babies that result in low amniotic fluid. The FDA recommends avoiding NSAIDs from 20 weeks of pregnancy onward. One exception is the use of low-dose (81 mg) aspirin at any point in pregnancy under the direction of a health care professional.
Contraindications
Aspirin should not be taken by people who are allergic to ibuprofen or naproxen, or who have salicylate intolerance or a more generalized drug intolerance to NSAIDs, and caution should be exercised in those with asthma or NSAID-precipitated bronchospasm. Owing to its effect on the stomach lining, manufacturers recommend that people with peptic ulcers, mild diabetes, or gastritis seek medical advice before using aspirin. Even if none of these conditions is present, the risk of stomach bleeding increases when aspirin is taken with alcohol or warfarin. People with hemophilia or other bleeding tendencies should not take aspirin or other salicylates. Aspirin is known to cause hemolytic anemia in people who have the genetic disease glucose-6-phosphate dehydrogenase deficiency, particularly in large doses and depending on the severity of the disease. Use of aspirin during dengue fever is not recommended owing to increased bleeding tendency.

Aspirin taken for two or more days at doses of 325 mg or less per day increases the odds of a gout attack by 81%, and at doses of 100 mg or less per day by 91%. This effect may be worsened by high-purine diets, diuretics, and kidney disease, but is eliminated by the urate-lowering drug allopurinol. Daily low-dose aspirin does not appear to worsen kidney function, and in people with moderate chronic kidney disease (CKD) but no established cardiovascular disease, aspirin may reduce cardiovascular risk without significantly increasing the risk of bleeding.

Aspirin should not be given to children or adolescents under the age of 16 to control cold or influenza symptoms, as this has been linked with Reye syndrome.
Gastrointestinal
Aspirin increases the risk of upper gastrointestinal bleeding. Enteric coating is intended to prevent release of aspirin in the stomach and so reduce gastric harm, but it does not reduce the risk of gastrointestinal bleeding, and enteric-coated aspirin may be less effective at reducing blood-clot risk. Combining aspirin with other NSAIDs has been shown to further increase the risk of gastrointestinal bleeding, as does using aspirin in combination with clopidogrel or warfarin.
The blockade of COX-1 by aspirin apparently results in the upregulation of COX-2 as part of a gastric defense. There is no clear evidence that simultaneous use of a COX-2 inhibitor with aspirin increases the risk of gastrointestinal injury.
"Buffering" is an additional method used with the intent to mitigate gastrointestinal bleeding, such as by preventing aspirin from concentrating in the walls of the stomach, although the benefits of buffered aspirin are disputed. Almost any buffering agent used in antacids can be used; Bufferin, for example, uses magnesium oxide. Other preparations use calcium carbonate. Gas-forming agents in effervescent tablet and powder formulations can also double as a buffering agent, one example being sodium bicarbonate, used in Alka-Seltzer.
Taking vitamin C with aspirin has been investigated as a method of protecting the stomach lining. In trials, vitamin C-releasing aspirin (ASA-VitC) or a buffered aspirin formulation containing vitamin C was found to cause less stomach damage than aspirin alone.
Retinal vein occlusion
It is a widespread habit among eye specialists (ophthalmologists) to prescribe aspirin as an add-on medication for patients with retinal vein occlusion (RVO), such as central retinal vein occlusion (CRVO) and branch retinal vein occlusion (BRVO). The reason for this widespread use is aspirin's proven effectiveness in major systemic venous thrombotic disorders, together with the assumption that it may be similarly beneficial in the various types of retinal vein occlusion.
However, a large-scale investigation based on data from nearly 700 patients showed "that aspirin or other antiplatelet aggregating agents or anticoagulants adversely influence the visual outcome in patients with CRVO and hemi-CRVO, without any evidence of protective or beneficial effect". Several expert groups, including the Royal College of Ophthalmologists, have recommended against the use of antithrombotic drugs (including aspirin) for patients with RVO.
Central effects
Large doses of salicylate, a metabolite of aspirin, cause temporary tinnitus (ringing in the ears), based on experiments in rats, via actions on the arachidonic acid and NMDA receptor cascades.
Reye syndrome
Reye syndrome, a rare but severe illness characterized by acute encephalopathy and fatty liver, can occur when children or adolescents are given aspirin for a fever or other illness or infection. From 1981 to 1997, 1207 cases of Reye syndrome in people younger than 18 were reported to the US Centers for Disease Control and Prevention (CDC). Of these, 93% reported being ill in the three weeks preceding the onset of Reye syndrome, most commonly with a respiratory infection, chickenpox, or diarrhea. Salicylates were detectable in 81.9% of children for whom test results were reported. After the association between Reye syndrome and aspirin was reported, and safety measures to prevent it (including a Surgeon General's warning, and changes to the labeling of aspirin-containing drugs) were implemented, aspirin use by children declined considerably in the United States, as did the number of reported cases of Reye syndrome; a similar decline was found in the United Kingdom after warnings against pediatric aspirin use were issued. The US Food and Drug Administration recommends that aspirin (or aspirin-containing products) not be given to anyone under the age of 12 who has a fever, and the UK National Health Service recommends that children under 16 years of age not take aspirin unless it is on the advice of a doctor.
Skin
For a small number of people, taking aspirin can result in symptoms including hives, swelling, and headache. Aspirin can exacerbate symptoms among those with chronic hives, or create acute symptoms of hives. These responses can be due to allergic reactions to aspirin, or more often to its inhibition of the COX-1 enzyme. Skin reactions may also be linked to systemic contraindications, as seen with NSAID-precipitated bronchospasm, or to atopy.
Aspirin and other NSAIDs, such as ibuprofen, may delay the healing of skin wounds. Earlier findings from two small, low-quality trials suggested a benefit of aspirin (alongside compression therapy) on venous leg ulcer healing time and ulcer size; however, larger, more recent studies of higher quality have been unable to corroborate these outcomes.
Other adverse effects
Aspirin can induce swelling of skin tissues in some people. In one study, angioedema appeared one to six hours after ingesting aspirin in some of the participants. However, when aspirin was taken alone it did not cause angioedema in these people; angioedema appeared only when aspirin had been taken in combination with another NSAID.
Aspirin causes an increased risk of cerebral microbleeds, which appear on MRI scans as hypointense (dark) patches of 5 to 10 mm or smaller.
A study of a group with a mean aspirin dosage of 270 mg per day estimated an average absolute risk increase in intracerebral hemorrhage (ICH) of 12 events per 10,000 persons. In comparison, the estimated absolute risk reduction was 137 events per 10,000 persons for myocardial infarction and 39 events per 10,000 persons for ischemic stroke. In cases where ICH has already occurred, aspirin use results in higher mortality, with a dose of about 250 mg per day resulting in a relative risk of death within three months of around 2.5 (95% confidence interval 1.3 to 4.6).
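The net effect implied by these absolute risk figures can be checked with simple arithmetic. The Python snippet below is a back-of-the-envelope calculation over the numbers quoted above, not a clinical model, and the variable names are illustrative.

```python
# Worked arithmetic for the absolute risk figures quoted above,
# all expressed per 10,000 persons.
ich_increase = 12        # additional intracerebral hemorrhages caused
mi_reduction = 137       # myocardial infarctions prevented
stroke_reduction = 39    # ischemic strokes prevented

net_prevented = mi_reduction + stroke_reduction - ich_increase
print(f"Net events prevented per 10,000 persons: {net_prevented}")  # -> 164
```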
Aspirin and other NSAIDs can cause abnormally high blood levels of potassium by inducing a hyporeninemic hypoaldosteronism state via inhibition of prostaglandin synthesis; however, these agents do not typically cause hyperkalemia by themselves in the setting of normal renal function and euvolemic state.
Use of low-dose aspirin before a surgical procedure has been associated with an increased risk of bleeding events in some patients; however, ceasing aspirin prior to surgery has also been associated with an increase in major adverse cardiac events. An analysis of multiple studies found a three-fold increase in adverse events such as myocardial infarction in patients who ceased aspirin prior to surgery. The analysis found that the risk depends on the type of surgery being performed and the patient's indication for aspirin use.
In July 2015, the US Food and Drug Administration (FDA) strengthened warnings of increased heart attack and stroke risk associated with nonsteroidal anti-inflammatory drugs (NSAID). Aspirin is an NSAID but is not affected by the revised warnings.
Overdose
Aspirin overdose can be acute or chronic. In acute poisoning, a single large dose is taken; in chronic poisoning, higher-than-normal doses are taken over a period of time. Acute overdose has a mortality rate of 2%. Chronic overdose is more commonly lethal, with a mortality rate of 25%; chronic overdose may be especially severe in children. Toxicity is managed with a number of potential treatments, including activated charcoal, intravenous dextrose and normal saline, sodium bicarbonate, and dialysis. The diagnosis of poisoning usually involves measurement of plasma salicylate, the active metabolite of aspirin, by automated spectrophotometric methods. Plasma salicylate levels generally range from 30 to 100 mg/L after usual therapeutic doses, 50–300 mg/L in people taking high doses, and 700–1400 mg/L following acute overdose. Salicylate is also produced by exposure to bismuth subsalicylate, methyl salicylate, and sodium salicylate.
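The quoted plasma salicylate ranges can be expressed as a simple banding function. The Python sketch below is illustrative only; because the quoted bands overlap, its cut-offs are simplifying assumptions rather than a clinical algorithm, and the function name is hypothetical.

```python
# Illustrative banding of plasma salicylate levels (mg/L) using the
# ranges quoted above; the cut-offs are simplifying assumptions.
def salicylate_band(level_mg_l: float) -> str:
    if level_mg_l <= 100:
        return "within usual therapeutic range (30-100 mg/L)"
    if level_mg_l <= 300:
        return "within high-dose therapy range (50-300 mg/L)"
    if level_mg_l < 700:
        return "above therapeutic ranges, below quoted overdose range"
    return "at or above quoted acute overdose range (700-1400 mg/L)"

for level in (80, 250, 900):
    print(level, "mg/L:", salicylate_band(level))
```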
Interactions
Aspirin is known to interact with other drugs. For example, acetazolamide and ammonium chloride are known to enhance the intoxicating effect of salicylates, and alcohol increases the gastrointestinal bleeding associated with these types of drugs. Aspirin is known to displace a number of drugs from protein-binding sites in the blood, including the antidiabetic drugs tolbutamide and chlorpropamide, warfarin, methotrexate, phenytoin, probenecid, valproic acid (with which it also interferes by inhibiting beta oxidation, an important part of valproate metabolism), and other NSAIDs. Corticosteroids may also reduce the concentration of aspirin. Other NSAIDs, such as ibuprofen and naproxen, may reduce the antiplatelet effect of aspirin, although limited evidence suggests this may not result in a reduced cardioprotective effect. Analgesic doses of aspirin decrease the urinary sodium loss induced by spironolactone; however, this does not reduce the antihypertensive effects of spironolactone, and antiplatelet doses of aspirin are deemed too small to produce an interaction with spironolactone. Aspirin is known to compete with penicillin G for renal tubular secretion, and may also inhibit the absorption of vitamin C.
Research
The ISIS-2 trial demonstrated that aspirin, at doses of 160 mg daily for one month, decreased mortality among participants with suspected myocardial infarction by 21% in the first five weeks. A single daily dose of 324 mg of aspirin for 12 weeks had a highly protective effect against acute myocardial infarction and death in men with unstable angina.
Bipolar disorder
Aspirin has been repurposed as an add-on treatment for depressive episodes in people with bipolar disorder. However, meta-analytic evidence is based on very few studies and does not suggest any efficacy of aspirin in the treatment of bipolar depression. Thus, notwithstanding the biological rationale, the clinical prospects of aspirin and anti-inflammatory agents in the treatment of bipolar depression remain uncertain.
Infectious diseases
Several studies have investigated the anti-infective properties of aspirin in bacterial, viral and parasitic infections. Aspirin has been demonstrated to limit platelet activation induced by Staphylococcus aureus and Enterococcus faecalis and to reduce streptococcal adhesion to heart valves. In patients with tuberculous meningitis, the addition of aspirin reduced the risk of new cerebral infarction (RR 0.52, 95% CI 0.29–0.92). A role for aspirin against bacterial and fungal biofilms is also supported by growing evidence.
Cancer prevention
Evidence from observational studies on the effect of aspirin in breast cancer prevention is conflicting, and a randomized controlled trial showed that aspirin had no significant effect in reducing breast cancer; further studies are therefore needed to clarify the effect of aspirin in cancer prevention.
In gardening
There are anecdotal reports that aspirin can improve the growth and resistance of plants, though most research has involved salicylic acid instead of aspirin.
Veterinary medicine
Aspirin is sometimes used in veterinary medicine as an anticoagulant or to relieve pain associated with musculoskeletal inflammation or osteoarthritis. Aspirin should be given to animals only under the direct supervision of a veterinarian, as adverse effects—including gastrointestinal issues—are common. An aspirin overdose in any species may result in salicylate poisoning, characterized by hemorrhaging, seizures, coma, and even death.
Dogs are better able to tolerate aspirin than cats are. Cats metabolize aspirin slowly because they lack the ability to form the glucuronide conjugates that aid in its excretion, making it potentially toxic if dosing is not spaced out properly. No clinical signs of toxicosis occurred when cats were given 25 mg/kg of aspirin every 48 hours for four weeks, but the recommended dose for relief of pain and fever and for treating blood-clotting diseases in cats is 10 mg/kg every 48 hours, to allow time for metabolization.
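The 10 mg/kg figure translates directly into a per-animal dose. The short Python sketch below works the arithmetic for a hypothetical cat; it is illustrative only, not veterinary advice, and the function name is an assumption.

```python
# Worked example of the feline dosing figure quoted above
# (10 mg/kg every 48 hours); illustrative only, not veterinary advice.
def feline_aspirin_dose_mg(weight_kg: float, mg_per_kg: float = 10.0) -> float:
    """Total dose per 48-hour interval for a cat of the given weight."""
    return weight_kg * mg_per_kg

print(feline_aspirin_dose_mg(4.0), "mg every 48 hours")  # 40.0 mg for a 4 kg cat
```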
|
;1897 in Germany;1897 in science;Acetate esters;Acetylsalicylic acids;Antiplatelet drugs;Brands that became generic;Chemical substances for emergency medicine;Commercialization of traditional medicines;Covalent inhibitors;Drugs developed by Bayer;Equine medications;German inventions;Hepatotoxins;Nonsteroidal anti-inflammatory drugs;Salicylic acids;Salicylyl esters;Wikipedia medicine articles ready to translate;World Health Organization essential medicines
|
https://en.wikipedia.org/wiki/Acupuncture
|
Acupuncture is a form of alternative medicine and a component of traditional Chinese medicine (TCM) in which thin needles are inserted into the body. Acupuncture is a pseudoscience; the theories and practices of TCM are not based on scientific knowledge, and it has been characterized as quackery.
A range of acupuncture variants, originating in different philosophies, exists, and techniques vary depending on the country in which acupuncture is performed. However, it can be divided into two main foundational philosophical applications and approaches: the first is the modern standardized form called eight principles TCM, and the second is an older system based on the ancient Daoist wuxing, better known in the West as the five elements or phases. Acupuncture is most often used to attempt pain relief, though acupuncturists say that it can also be used for a wide range of other conditions. Acupuncture is typically used in combination with other forms of treatment.
The global acupuncture market was worth US$24.55 billion in 2017. The market was led by Europe with a 32.7% share, followed by Asia-Pacific with a 29.4% share and the Americas with a 25.3% share. It was estimated in 2021 that the industry would reach a market size of US$55 billion by 2023.
The conclusions of trials and systematic reviews of acupuncture generally provide no good evidence of benefits, which suggests that it is not an effective method of healthcare. Acupuncture is generally safe when done by appropriately trained practitioners using clean needle techniques and single-use needles. When properly delivered, it has a low rate of mostly minor adverse effects. When accidents and infections do occur, they are associated with neglect on the part of the practitioner, particularly in the application of sterile techniques. A review conducted in 2013 stated that reports of infection transmission increased significantly in the preceding decade. The most frequently reported adverse events were pneumothorax and infections. Since serious adverse events continue to be reported, it is recommended that acupuncturists be trained sufficiently to reduce the risk.
Scientific investigation has not found any histological or physiological evidence for traditional Chinese concepts such as qi, meridians, and acupuncture points, and many modern practitioners no longer support the existence of qi or meridians, which was a major part of early belief systems. Acupuncture is believed to have originated around 100 BC in China, around the time The Inner Classic of Huang Di (Huangdi Neijing) was published, though some experts suggest it could have been practiced earlier. Over time, conflicting claims and belief systems emerged about the effect of lunar, celestial and earthly cycles, yin and yang energies, and a body's "rhythm" on the effectiveness of treatment. Acupuncture fluctuated in popularity in China due to changes in the country's political leadership and the preferential use of rationalism or scientific medicine. Acupuncture spread first to Korea in the 6th century AD, then to Japan through medical missionaries, and then to Europe, beginning with France. In the 20th century, as it spread to the United States and Western countries, spiritual elements of acupuncture that conflicted with scientific knowledge were sometimes abandoned in favor of simply tapping needles into acupuncture points.
Clinical practice
Acupuncture is a form of alternative medicine. It is used most commonly for pain relief, though it is also used to treat a wide range of conditions. Acupuncture is generally only used in combination with other forms of treatment. For example, the American Society of Anesthesiologists states it may be considered in the treatment of nonspecific, noninflammatory low back pain only in conjunction with conventional therapy.
Acupuncture is the insertion of thin needles into the skin. According to the Mayo Foundation for Medical Education and Research (Mayo Clinic), a typical session entails lying still while approximately five to twenty needles are inserted; in the majority of cases, the needles are left in place for ten to twenty minutes. Acupuncture can be associated with the application of heat, pressure, or laser light. Classically, acupuncture is individualized and based on philosophy and intuition, not on scientific research. There is also a non-invasive therapy, developed in early 20th-century Japan, that uses an elaborate set of instruments other than needles for the treatment of children.
Clinical practice varies depending on the country. A comparison of the average number of patients treated per hour found significant differences between China (10) and the United States (1.2). Chinese herbs are often used. There is a diverse range of acupuncture approaches, involving different philosophies. Although various techniques of acupuncture practice have emerged, the method used in traditional Chinese medicine (TCM) seems to be the most widely adopted in the US. Traditional acupuncture involves needle insertion, moxibustion, and cupping therapy, and may be accompanied by other procedures such as feeling the pulse and other parts of the body and examining the tongue. Traditional acupuncture involves the belief that a "life force" (qi) circulates within the body in lines called meridians. The main methods practiced in the UK are TCM and Western medical acupuncture. The term Western medical acupuncture is used to indicate an adaptation of TCM-based acupuncture which focuses less on TCM. The Western medical acupuncture approach involves using acupuncture after a medical diagnosis. Limited research has compared the contrasting acupuncture systems used in various countries for determining different acupuncture points, and thus there is no defined standard for acupuncture points.
In traditional acupuncture, the acupuncturist decides which points to treat by observing and questioning the patient to make a diagnosis according to the tradition used. In TCM, the four diagnostic methods are: inspection, auscultation and olfaction, inquiring, and palpation. Inspection focuses on the face and particularly on the tongue, including analysis of the tongue size, shape, tension, color and coating, and the absence or presence of teeth marks around the edge. Auscultation and olfaction involve listening for particular sounds, such as wheezing, and observing body odor. Inquiring involves focusing on the "seven inquiries": chills and fever; perspiration; appetite, thirst and taste; defecation and urination; pain; sleep; and menses and leukorrhea. Palpation is focusing on feeling the body for tender points and feeling the pulse.
Needles
The most common mechanism of stimulation of acupuncture points employs penetration of the skin by thin metal needles, which are manipulated manually or further stimulated electrically (electroacupuncture). Acupuncture needles are typically made of stainless steel, making them flexible and preventing them from rusting or breaking. Needles are usually disposed of after each use to prevent contamination; reusable needles should be sterilized between applications. In many areas, including the State of California, only sterile, single-use acupuncture needles are allowed. Needle length varies, with shorter needles used near the face and eyes and longer needles in areas with thicker tissues; needle diameter also varies, with thicker needles used on more robust patients. Thinner needles may be flexible and require tubes for insertion. The tip of the needle should not be made too sharp, to prevent breakage, although blunt needles cause more pain.
Apart from the usual filiform needle, other needle types include three-edged needles and the Nine Ancient Needles. Japanese acupuncturists use extremely thin needles that are used superficially, sometimes without penetrating the skin, and surrounded by a guide tube (a 17th-century invention adopted in China and the West). Korean acupuncture uses copper needles and has a greater focus on the hand.
Needling technique
Insertion
The skin is sterilized and needles are inserted, frequently with a plastic guide tube. Needles may be manipulated in various ways, including spinning, flicking, or moving up and down relative to the skin. Since most pain is felt in the superficial layers of the skin, a quick insertion of the needle is recommended. Often the needles are stimulated by hand in order to cause a dull, localized, aching sensation that is called de qi, as well as "needle grasp," a tugging feeling felt by the acupuncturist and generated by a mechanical interaction between the needle and skin. Acupuncture can be painful. The acupuncturist's skill level may influence the painfulness of the needle insertion; a sufficiently skilled practitioner may be able to insert the needles without causing any pain.
De qi sensation
De qi ("arrival of qi") refers to a claimed sensation of numbness, distension, or electrical tingling at the needling site. If these sensations are not observed, then inaccurate location of the acupoint, improper depth of needle insertion, or inadequate manual manipulation is blamed. If de qi is not immediately observed upon needle insertion, various manual manipulation techniques are often applied to promote it (such as "plucking", "shaking" or "trembling").
Once de qi is observed, techniques might be used which attempt to "influence" it; for example, by certain manipulation the de qi can allegedly be conducted from the needling site towards more distant sites of the body. Other techniques aim at "tonifying" or "sedating" qi. The former techniques are used in deficiency patterns, the latter in excess patterns. De qi is more important in Chinese acupuncture, while Western and Japanese patients may not consider it a necessary part of the treatment.
Related practices
Acupressure, a non-invasive form of bodywork, uses physical pressure applied to acupressure points by the hand or elbow, or with various devices.
Acupuncture is often accompanied by moxibustion, the burning of cone-shaped preparations of moxa (made from dried mugwort) on or near the skin, often but not always near or on an acupuncture point. Traditionally, acupuncture was used to treat acute conditions while moxibustion was used for chronic diseases. Moxibustion could be direct (the cone was placed directly on the skin and allowed to burn the skin, producing a blister and eventually a scar), or indirect (either a cone of moxa was placed on a slice of garlic, ginger or other vegetable, or a cylinder of moxa was held above the skin, close enough to either warm or burn it).
Cupping therapy is an ancient Chinese form of alternative medicine in which a local suction is created on the skin; practitioners believe this mobilizes blood flow in order to promote healing.
Tui na is a TCM method of attempting to stimulate the flow of qi by various bare-handed techniques that do not involve needles.
Electroacupuncture is a form of acupuncture in which acupuncture needles are attached to a device that generates continuous electric pulses (this has been described as "essentially transdermal electrical nerve stimulation [TENS] masquerading as acupuncture").
Fire needle acupuncture also known as fire needling is a technique which involves quickly inserting a flame-heated needle into areas on the body.
Sonopuncture is a stimulation of the body similar to acupuncture using sound instead of needles. This may be done using purpose-built transducers to direct a narrow ultrasound beam to a depth of 6–8 centimetres at acupuncture meridian points on the body. Alternatively, tuning forks or other sound emitting devices are used.
Acupuncture point injection is the injection of various substances (such as drugs, vitamins or herbal extracts) into acupoints. This technique combines traditional acupuncture with injection of what is often an effective dose of an approved pharmaceutical drug, and proponents claim that it may be more effective than either treatment alone, especially for the treatment of some kinds of chronic pain. However, a 2016 review found that most published trials of the technique were of poor value due to methodology issues and larger trials would be needed to draw useful conclusions.
Auriculotherapy, commonly known as ear acupuncture, auricular acupuncture, or auriculoacupuncture, is considered to date back to ancient China. It involves inserting needles to stimulate points on the outer ear. The modern approach was developed in France during the early 1950s. There is no scientific evidence that it can cure disease; the evidence of effectiveness is negligible.
Scalp acupuncture, developed in Japan, is based on reflexological considerations regarding the scalp.
Koryo hand acupuncture, developed in Korea, centers around assumed reflex zones of the hand. Medical acupuncture attempts to integrate reflexological concepts, the trigger point model, and anatomical insights (such as dermatome distribution) into acupuncture practice, and emphasizes a more formulaic approach to acupuncture point location.
Cosmetic acupuncture is the use of acupuncture in an attempt to reduce wrinkles on the face.
Bee venom acupuncture is a treatment approach of injecting purified, diluted bee venom into acupoints.
Veterinary acupuncture is the use of acupuncture on domesticated animals.
Efficacy
Many thousands of papers have been published on the efficacy of acupuncture for the treatment of various adult health conditions, but there is no robust evidence that it is beneficial for anything except shoulder pain and fibromyalgia. For Science-Based Medicine, Steven Novella wrote that the overall pattern of evidence was reminiscent of that for homeopathy, compatible with the hypothesis that most, if not all, benefits were due to the placebo effect, and strongly suggestive that acupuncture had no beneficial therapeutic effects at all.
Harriet Hall noted that, according to Edzard Ernst, systematic reviews agree that acupuncture works for neck pain but not for other kinds of pain, a discrepancy that makes the whole enterprise suspicious.
Research methodology and challenges
Sham acupuncture and research
It is difficult but not impossible to design rigorous research trials for acupuncture. Due to acupuncture's invasive nature, one of the major challenges in efficacy research is in the design of an appropriate placebo control group. For efficacy studies to determine whether acupuncture has specific effects, "sham" forms of acupuncture where the patient, practitioner, and analyst are blinded seem the most acceptable approach. Sham acupuncture uses non-penetrating needles or needling at non-acupuncture points, e.g. inserting needles on meridians not related to the specific condition being studied, or in places not associated with meridians. The under-performance of acupuncture in such trials may indicate that therapeutic effects are due entirely to non-specific effects, or that the sham treatments are not inert, or that systematic protocols yield less than optimal treatment.
A 2014 review in Nature Reviews Cancer found that "contrary to the claimed mechanism of redirecting the flow of qi through meridians, researchers usually find that it generally does not matter where the needles are inserted, how often (that is, no dose-response effect is observed), or even if needles are actually inserted. In other words, "sham" or "placebo" acupuncture generally produces the same effects as "real" acupuncture and, in some cases, does better." A 2013 meta-analysis found little evidence that the effectiveness of acupuncture on pain (compared to sham) was modified by the location of the needles, the number of needles used, the experience or technique of the practitioner, or by the circumstances of the sessions. The same analysis also suggested that the number of needles and sessions is important, as greater numbers improved the outcomes of acupuncture compared to non-acupuncture controls. There has been little systematic investigation of which components of an acupuncture session may be important for any therapeutic effect, including needle placement and depth, type and intensity of stimulation, and number of needles used. The research seems to suggest that needles do not need to stimulate the traditionally specified acupuncture points or penetrate the skin to attain an anticipated effect (e.g. psychosocial factors).
Because responses to "sham" acupuncture are observed, for example in osteoarthritis, placebo treatment might be considered for use in the elderly; placebos, however, have usually been regarded as deception and thus unethical. Some physicians and ethicists have suggested circumstances in which placebos may be applicable, such as when they might present the theoretical advantage of an inexpensive treatment without adverse reactions or interactions with drugs or other medications. As the evidence for most types of alternative medicine such as acupuncture is far from strong, the use of alternative medicine in regular healthcare can present an ethical question.
Using the principles of evidence-based medicine to research acupuncture is controversial and has produced different results. Some research suggests acupuncture can alleviate pain, but the majority of research indicates that acupuncture's effects are mainly due to placebo. Evidence suggests that any benefits of acupuncture are short-lasting. There is insufficient evidence to support use of acupuncture compared to mainstream medical treatments. Acupuncture is not better than mainstream treatment in the long term.
The use of acupuncture has been criticized owing to there being little scientific evidence for explicit effects, or the mechanisms for its supposed effectiveness, for any condition that is discernible from placebo. Acupuncture has been called "theatrical placebo", and David Gorski argues that when acupuncture proponents advocate "harnessing of placebo effects" or work on developing "meaningful placebos", they essentially concede it is little more than that.
Publication bias
Publication bias is cited as a concern in the reviews of randomized controlled trials of acupuncture. A 1998 review of studies on acupuncture found that trials originating in China, Japan, Hong Kong, and Taiwan were uniformly favourable to acupuncture, as were ten out of eleven studies conducted in Russia. A 2011 assessment of the quality of randomized controlled trials on traditional Chinese medicine, including acupuncture, concluded that the methodological quality of most such trials (including randomization, experimental control, and blinding) was generally poor, particularly for trials published in Chinese journals (though the quality of acupuncture trials was better than the trials testing traditional Chinese medicine remedies). The study also found that trials published in non-Chinese journals tended to be of higher quality. Chinese authors use more Chinese studies, which have been demonstrated to be uniformly positive. A 2012 review of 88 systematic reviews of acupuncture published in Chinese journals found that less than half of these reviews reported testing for publication bias, and that the majority of these reviews were published in journals with impact factors of zero. A 2015 study comparing pre-registered records of acupuncture trials with their published results found that it was uncommon for such trials to be registered before the trial began. This study also found that selective reporting of results and changing outcome measures to obtain statistically significant results was common in this literature.
Scientist Steven Salzberg identifies acupuncture and Chinese medicine generally as a focus for "fake medical journals" such as the Journal of Acupuncture and Meridian Studies and Acupuncture in Medicine.
Safety
Adverse events
Acupuncture is generally safe when administered by an experienced, appropriately trained practitioner using clean-needle technique and sterile single-use needles. When improperly delivered it can cause adverse effects. Accidents and infections are associated with infractions of sterile technique or neglect on the part of the practitioner. To reduce the risk of serious adverse events after acupuncture, acupuncturists should be trained sufficiently. A 2009 overview of Cochrane reviews found acupuncture is not effective for a wide range of conditions. People with serious spinal disease, such as cancer or infection, are not good candidates for acupuncture. Contraindications to acupuncture (conditions that should not be treated with acupuncture) include coagulopathy disorders (e.g. hemophilia and advanced liver disease), warfarin use, severe psychiatric disorders (e.g. psychosis), and skin infections or skin trauma (e.g. burns). Further, electroacupuncture should be avoided at the spot of implanted electrical devices (such as pacemakers).
A 2011 systematic review of systematic reviews (internationally and without language restrictions) found that serious complications following acupuncture continue to be reported. Between 2000 and 2009, ninety-five cases of serious adverse events, including five deaths, were reported. Many such events are not inherent to acupuncture but are due to malpractice of acupuncturists. This might be why such complications have not been reported in surveys of adequately trained acupuncturists. Most such reports originate from Asia, which may reflect the large number of treatments performed there or a relatively higher number of poorly trained Asian acupuncturists. Many serious adverse events were reported from developed countries. These included Australia, Austria, Canada, Croatia, France, Germany, Ireland, the Netherlands, New Zealand, Spain, Sweden, Switzerland, the UK, and the US. The number of adverse effects reported from the UK appears particularly unusual, which may indicate less under-reporting in the UK than other countries. Reports included 38 cases of infections and 42 cases of organ trauma. The most frequent adverse events included pneumothorax, and bacterial and viral infections.
A 2013 review found (without restrictions regarding publication date, study type or language) 295 cases of infections; mycobacterium was the pathogen in at least 96%. Likely sources of infection include towels, hot packs or boiling tank water, and reusing reprocessed needles. Possible sources of infection include contaminated needles, reusing personal needles, a person's skin containing mycobacterium, and reusing needles at various sites in the same person. Although acupuncture is generally considered a safe procedure, a 2013 review stated that the reports of infection transmission increased significantly in the prior decade, including those of mycobacterium. Although it is recommended that practitioners of acupuncture use disposable needles, the reuse of sterilized needles is still permitted. It is also recommended that thorough control practices for preventing infection be implemented and adapted.
English-language
A 2013 systematic review of English-language case reports found that serious adverse events associated with acupuncture are rare, but that acupuncture is not without risk. Between 2000 and 2011 the English-language literature from 25 countries and regions reported 294 adverse events. The majority of the reported adverse events were relatively minor, and the incidences were low. For example, a prospective survey of 34,000 acupuncture treatments found no serious adverse events and 43 minor ones, a rate of 1.3 per 1000 interventions. Another survey of 97,733 acupuncture patients found a 7.1% rate of minor adverse events and five serious ones. The most common adverse effect observed was infection (e.g. mycobacterium), and the majority of infections were bacterial in nature, caused by skin contact at the needling site. Infection has also resulted from skin contact with unsterilized equipment or with dirty towels in an unhygienic clinical setting. Other adverse complications included five reported cases of spinal cord injuries (e.g. migrating broken needles or needling too deeply), four brain injuries, four peripheral nerve injuries, five heart injuries, seven other organ and tissue injuries, bilateral hand edema, epithelioid granuloma, pseudolymphoma, argyria, pustules, pancytopenia, and scarring due to hot-needle technique. Adverse reactions from acupuncture, which are unusual and uncommon in typical acupuncture practice, included syncope, galactorrhoea, bilateral nystagmus, pyoderma gangrenosum, hepatotoxicity, eruptive lichen planus, and spontaneous needle migration.
A 2013 systematic review found 31 cases of vascular injuries caused by acupuncture, three of them fatal: two deaths from pericardial tamponade and one from an aortoduodenal fistula. The same review found that vascular injuries were rare; bleeding and pseudoaneurysm were the most prevalent. A 2011 systematic review (without restriction in time or language), aiming to summarize all reported cases of cardiac tamponade after acupuncture, found 26 cases resulting in 14 deaths, with little doubt about causation in most fatal instances. The same review concluded that cardiac tamponade was a serious, usually fatal, though theoretically avoidable complication following acupuncture, and urged training to minimize risk.
A 2012 review found that a number of adverse events were reported after acupuncture in the UK's National Health Service (NHS), 95% of which were not severe, though miscategorization and under-reporting may alter the total figures. From January 2009 to December 2011, 468 safety incidents were recognized within the NHS organizations. The adverse events recorded included retained needles (31%), dizziness (30%), loss of consciousness/unresponsive (19%), falls (4%), bruising or soreness at needle site (2%), pneumothorax (1%) and other adverse side effects (12%). Acupuncture practitioners should know, and be prepared to be responsible for, any substantial harm from treatments. Some acupuncture proponents argue that the long history of acupuncture suggests it is safe. However, there is an increasing literature on adverse events (e.g. spinal-cord injury).
Acupuncture seems to be safe in people getting anticoagulants, assuming needles are used at the correct location and depth, but studies are required to verify these findings.
Chinese, Korean, and Japanese-language
A 2010 systematic review of the Chinese-language literature found numerous acupuncture-related adverse events, including pneumothorax, fainting, subarachnoid hemorrhage, and infection as the most frequent, and cardiovascular injuries, subarachnoid hemorrhage, pneumothorax, and recurrent cerebral hemorrhage as the most serious, most of which were due to improper technique. Between 1980 and 2009, the Chinese-language literature reported 479 adverse events. Prospective surveys show that mild, transient acupuncture-associated adverse events ranged from 6.71% to 15%. In a study with 190,924 patients, the prevalence of serious adverse events was roughly 0.024%. Another study showed a rate of adverse events requiring specific treatment of 2.2%, 4,963 incidences among 229,230 patients. Infections, mainly hepatitis, after acupuncture are reported often in English-language research, though are rarely reported in Chinese-language research, making it plausible that acupuncture-associated infections have been underreported in China. Infections were mostly caused by poor sterilization of acupuncture needles. Other adverse events included spinal epidural hematoma (in the cervical, thoracic and lumbar spine), chylothorax, injuries of abdominal organs and tissues, injuries in the neck region, injuries to the eyes, including orbital hemorrhage, traumatic cataract, injury of the oculomotor nerve and retinal puncture, hemorrhage to the cheeks and the hypoglottis, peripheral motor-nerve injuries and subsequent motor dysfunction, local allergic reactions to metal needles, stroke, and cerebral hemorrhage after acupuncture.
A causal link between acupuncture and the adverse events cardiac arrest, pyknolepsy, shock, fever, cough, thirst, aphonia, leg numbness, and sexual dysfunction remains uncertain. The same review concluded that acupuncture can be considered inherently safe when practiced by properly trained practitioners, but the review also stated there is a need to find effective strategies to minimize the health risks. Between 1999 and 2010, the Korean-language literature contained reports of 1104 adverse events. Between the 1980s and 2002, the Japanese-language literature contained reports of 150 adverse events.
Children and pregnancy
Although acupuncture has been practiced for thousands of years in China, its use in pediatrics in the United States did not become common until the early 2000s. In 2007, the National Health Interview Survey (NHIS) conducted by the National Center For Health Statistics (NCHS) estimated that approximately 150,000 children had received acupuncture treatment for a variety of conditions.
In 2008, a study determined that the use of acupuncture-needle treatment on children was "questionable" due to the possibility of adverse side-effects and the differences in pain manifestation between children and adults. The study also included warnings against practicing acupuncture on infants, as well as on children who are over-fatigued, very weak, or have over-eaten.
When used on children, acupuncture is considered safe when administered by well-trained, licensed practitioners using sterile needles; however, a 2011 review found there was limited research to draw definite conclusions about the overall safety of pediatric acupuncture. The same review found 279 adverse events, 25 of them serious. The adverse events were mostly mild in nature (e.g., bruising or bleeding). The prevalence of mild adverse events ranged from 10.1% to 13.5%, an estimated 168 incidences among 1,422 patients. On rare occasions adverse events were serious (e.g. cardiac rupture or hemoptysis); many might have been a result of substandard practice. The incidence of serious adverse events was 5 per one million, which included children and adults.
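The mild-event figures quoted above can be cross-checked with a one-line calculation. The snippet below simply verifies that 168 events among 1,422 patients falls within the reported prevalence range; the variable names are illustrative.

```python
# Arithmetic check of the pediatric figures quoted above:
# 168 mild adverse events among 1,422 patients.
mild_events, patients = 168, 1422
print(f"{mild_events / patients:.1%}")  # ~11.8%, inside the reported 10.1-13.5% range
```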
When used during pregnancy, the majority of adverse events caused by acupuncture were mild and transient, with few serious adverse events. The most frequent mild adverse event was needling or unspecified pain, followed by bleeding. Although two deaths (one stillbirth and one neonatal death) were reported, there was a lack of acupuncture-associated maternal mortality. Limiting the evidence to cases rated certain, probable or possible in the causality evaluation, the estimated incidence of adverse events following acupuncture in pregnant women was 131 per 10,000.
Although acupuncture is not contraindicated in pregnant women, some specific acupuncture points are particularly sensitive to needle insertion; these spots, as well as the abdominal region, should be avoided during pregnancy.
Moxibustion and cupping
Four adverse events associated with moxibustion were bruising, burns and cellulitis, spinal epidural abscess, and large superficial basal cell carcinoma. Ten adverse events were associated with cupping. The minor ones were keloid scarring, burns, and bullae; the serious ones were acquired hemophilia A, stroke following cupping on the back and neck, factitious panniculitis, reversible cardiac hypertrophy, and iron deficiency anemia.
Risk of forgoing conventional medical care
As with other alternative medicines, unethical or naïve practitioners may induce patients to exhaust financial resources by pursuing ineffective treatment. Professional ethics codes set by accrediting organizations such as the National Certification Commission for Acupuncture and Oriental Medicine require practitioners to make "timely referrals to other health care professionals as may be appropriate." Stephen Barrett states that there is a "risk that an acupuncturist whose approach to diagnosis is not based on scientific concepts will fail to diagnose a dangerous condition".
Conceptual basis
Traditional
Acupuncture is a substantial part of traditional Chinese medicine (TCM). Early acupuncture beliefs relied on concepts that are common in TCM, such as a life-force energy called qi. Qi was believed to flow from the body's primary organs (zang-fu organs) to the "superficial" body tissues of the skin, muscles, tendons, bones, and joints, through channels called meridians. Acupuncture points where needles are inserted are mainly (but not always) found at locations along the meridians. Acupuncture points not found along a meridian are called extraordinary points, and those with no designated site are called "A-shi" points.
In TCM, disease is generally perceived as a disharmony or imbalance in energies such as yin, yang, qi, xuĕ, zàng-fǔ, meridians, and of the interaction between the body and the environment. Therapy is based on which "pattern of disharmony" can be identified. For example, some diseases are believed to be caused by meridians being invaded with an excess of wind, cold, and damp. In order to determine which pattern is at hand, practitioners examine things like the color and shape of the tongue, the relative strength of pulse-points, the smell of the breath, the quality of breathing, or the sound of the voice. TCM and its concept of disease does not strongly differentiate between the cause and effect of symptoms.
Purported scientific basis
Many within the scientific community consider acupuncture to be quackery and pseudoscience, having no effect other than as "theatrical placebo". David Gorski has argued that of all forms of quackery, acupuncture has perhaps gained most acceptance among physicians and institutions. Academics Massimo Pigliucci and Maarten Boudry describe acupuncture as a "borderlands science" lying between science and pseudoscience.
A 2015 paper by several professors states that acupuncture has "no credible or respectable place in medicine", because it is often considered to be pseudoscience or quackery.
Rationalizations of traditional medicine
It is a generally held belief within the acupuncture community that acupuncture points and meridian structures are special conduits for electrical signals, but no research has established any consistent anatomical structure or function for either acupuncture points or meridians. Human tests to determine whether electrical continuity was significantly different near meridians than other places in the body have been inconclusive. Scientific research has not supported the existence of qi, meridians, or yin and yang. A Nature editorial described TCM as "fraught with pseudoscience", with the majority of its treatments having no logical mechanism of action. Quackwatch states that "TCM theory and practice are not based upon the body of knowledge related to health, disease, and health care that has been widely accepted by the scientific community. TCM practitioners disagree among themselves about how to diagnose patients and which treatments should go with which diagnoses. Even if they could agree, the TCM theories are so nebulous that no amount of scientific study will enable TCM to offer rational care." Academic discussions of acupuncture still make reference to pseudoscientific concepts such as qi and meridians despite the lack of scientific evidence.
Release of endorphins or adenosine
Some modern practitioners support the use of acupuncture to treat pain, but have abandoned the use of qi, meridians, yin, yang and other mystical energies as explanatory frameworks. The use of qi as an explanatory framework has been decreasing in China, even as it becomes more prominent in discussions of acupuncture in the US.
Many acupuncturists attribute pain relief to the release of endorphins when needles penetrate, but no longer support the idea that acupuncture can affect a disease. Some studies suggest acupuncture causes a series of events within the central nervous system, and that it is possible to inhibit acupuncture's analgesic effects with the opioid antagonist naloxone. Mechanical deformation of the skin by acupuncture needles appears to result in the release of adenosine. The anti-nociceptive effect of acupuncture may be mediated by the adenosine A1 receptor. A 2014 review in Nature Reviews Cancer analyzed mouse studies that suggested acupuncture relieves pain via the local release of adenosine, which then triggered nearby A1 receptors. The review found that in those studies, because acupuncture "caused more tissue damage and inflammation relative to the size of the animal in mice than in humans, such studies unnecessarily muddled a finding that local inflammation can result in the local release of adenosine with analgesic effect."
History
Origins
Acupuncture, along with moxibustion, is one of the oldest practices of traditional Chinese medicine. Most historians believe the practice began in China, though there are some conflicting narratives on when it originated. Academics David Ramey and Paul Buell said the exact date acupuncture was founded depends on the extent to which dating of ancient texts can be trusted and the interpretation of what constitutes acupuncture.
Acupressure therapy was prevalent in India. Once Buddhism spread to China, the acupressure therapy was also integrated into common medical practice in China and it came to be known as acupuncture. The major points of Indian acupressure and Chinese acupuncture are similar to each other.
According to an article in Rheumatology, the first documentation of an "organized system of diagnosis and treatment" for acupuncture was in Inner Classic of Huang Di (Huangdi Neijing) from about 100 BC. Gold and silver needles found in the tomb of Liu Sheng from around 100 BC are believed to be the earliest archaeological evidence of acupuncture, though it is unclear if that was their purpose. According to Plinio Prioreschi, the earliest known historical record of acupuncture is the Shiji ("Records of the Grand Historian"), written by a historian around 100 BC. It is believed that this text was documenting what was established practice at that time.
Alternative theories
The 5,000-year-old mummified body of Ötzi the Iceman was found with 15 groups of tattoos, many of which were located at points on the body where acupuncture needles are used for abdominal or lower back problems. Evidence from the body suggests Ötzi had these conditions. This has been cited as evidence that practices similar to acupuncture may have been practised elsewhere in Eurasia during the early Bronze Age; however, The Oxford Handbook of the History of Medicine calls this theory "speculative". It is considered unlikely that acupuncture was practised before 2000 BC.
Acupuncture may have been practised during the Neolithic era, near the end of the Stone Age, using sharpened stones called Bian shi. Many Chinese texts from later eras refer to sharp stones called "plen", which means "stone probe", that may have been used for acupuncture purposes. The ancient Chinese medical text, Huangdi Neijing, indicates that sharp stones were believed at-the-time to cure illnesses at or near the body's surface, perhaps because of the short depth a stone could penetrate. However, it is more likely that stones were used for other medical purposes, such as puncturing a growth to drain its pus. The Mawangdui texts, which are believed to be from the 2nd century BC, mention the use of pointed stones to open abscesses, and moxibustion, but not for acupuncture. It is also speculated that these stones may have been used for bloodletting, due to the ancient Chinese belief that illnesses were caused by demons within the body that could be killed or released. It is likely bloodletting was an antecedent to acupuncture.
According to historians Lu Gwei-djen and Joseph Needham, there is substantial evidence that acupuncture may have begun around 600 BC. Some hieroglyphs and pictographs from that era suggest acupuncture and moxibustion were practised. However, historians Lu and Needham said it was unlikely a needle could be made out of the materials available in China during this time period. It is possible that bronze was used for early acupuncture needles. Tin, copper, gold and silver are also possibilities, though they are considered less likely, or to have been used in fewer cases. If acupuncture was practised during the Shang dynasty (1766 to 1122 BC), organic materials like thorns, sharpened bones, or bamboo may have been used. Once methods for producing steel were discovered, it would replace all other materials, since it could be used to create a very fine but sturdy needle. Lu and Needham noted that all the ancient materials that could have been used for acupuncture, and which often produce archaeological evidence, such as sharpened bones, bamboo or stones, were also used for other purposes. An article in Rheumatology said that the absence of any mention of acupuncture in documents found in the tomb of Mawangdui from 198 BC suggests that acupuncture was not practised by that time.
Belief systems
Several different and sometimes conflicting belief systems emerged regarding acupuncture. This may have been the result of competing schools of thought. Some ancient texts referred to using acupuncture to cause bleeding, while others mixed the ideas of blood-letting and spiritual ch'i energy. Over time, the focus shifted from blood to the concept of puncturing specific points on the body, and eventually to balancing Yin and Yang energies as well. According to David Ramey, no single "method or theory" was ever predominantly adopted as the standard. At the time, scientific knowledge of medicine was not yet developed, especially because in China dissection of the deceased was forbidden, preventing the development of basic anatomical knowledge.
It is not certain when specific acupuncture points were introduced, but the autobiography of Bian Que from around 400–500 BC references inserting needles at designated areas. Bian Que believed there was a single acupuncture point at the top of one's skull that he called the point "of the hundred meetings." Texts dated to be from 156 to 186 BC document early beliefs in channels of life force energy called meridians that would later be an element in early acupuncture beliefs.
Ramey and Buell said the "practice and theoretical underpinnings" of modern acupuncture were introduced in The Yellow Emperor's Classic (Huangdi Neijing) around 100 BC. It introduced the concept of using acupuncture to manipulate the flow of life energy (qi) in a network of meridian (channels) in the body. The network concept was made up of acu-tracts, such as a line down the arms, where it said acupoints were located. Some of the sites acupuncturists use needles at today still have the same names as those given to them by the Yellow Emperor's Classic. Numerous additional documents were published over the centuries introducing new acupoints. By the 4th century AD, most of the acupuncture sites in use today had been named and identified.
Early development in China
Establishment and growth
In the first half of the 1st century AD, acupuncturists began promoting the belief that acupuncture's effectiveness was influenced by the time of day or night, the lunar cycle, and the season. The 'science of the yin-yang cycles' was a set of beliefs that curing diseases relied on the alignment of both heavenly and earthly forces that were attuned to cycles like that of the sun and moon. There were several different belief systems that relied on a number of celestial and earthly bodies or elements that rotated and only became aligned at certain times. According to Needham and Lu, these "arbitrary predictions" were depicted by acupuncturists in complex charts and through a set of special terminology.
Acupuncture needles during this period were much thicker than most modern ones and often resulted in infection. Infection is caused by a lack of sterilization, but at that time it was believed to be caused by use of the wrong needle, or needling in the wrong place, or at the wrong time. Later, many needles were heated in boiling water, or in a flame. Sometimes needles were used while they were still hot, creating a cauterizing effect at the injection site. Nine needles were recommended in the Great Compendium of Acupuncture and Moxibustion from 1601, which may have been because of an ancient Chinese belief that nine was a magic number.
Other belief systems were based on the idea that the human body operated on a rhythm and acupuncture had to be applied at the right point in the rhythm to be effective. In some cases a lack of balance between Yin and Yang was believed to be the cause of disease.
In the 1st century AD, many of the first books about acupuncture were published and recognized acupuncturist experts began to emerge. The Zhen Jiu Jia Yi Jing, which was published in the mid-3rd century, became the oldest acupuncture book that is still in existence in the modern era. Other books like the Yu Gui Zhen Jing, written by the Director of Medical Services for China, were also influential during this period, but were not preserved. In the mid 7th century, Sun Simiao published acupuncture-related diagrams and charts that established standardized methods for finding acupuncture sites on people of different sizes and categorized acupuncture sites in a set of modules.
Acupuncture became more established in China as improvements in paper led to the publication of more acupuncture books. The Imperial Medical Service and the Imperial Medical College, which both supported acupuncture, became more established and created medical colleges in every province. The public was also exposed to stories about royal figures being cured of their diseases by prominent acupuncturists. By the time the Great Compendium of Acupuncture and Moxibustion was published during the Ming dynasty (1368–1644 AD), most of the acupuncture practices used in the modern era had been established.
Decline
By the end of the Song dynasty (1279 AD), acupuncture had lost much of its status in China. It became rarer in the following centuries, and was associated with less prestigious professions like alchemy, shamanism, midwifery and moxibustion. Additionally, by the 18th century, scientific rationality was becoming more popular than traditional superstitious beliefs. By 1757 a book documenting the history of Chinese medicine called acupuncture a "lost art". Its decline was attributed in part to the popularity of prescriptions and medications, as well as its association with the lower classes.
In 1822, the Chinese Emperor signed a decree excluding the practice of acupuncture from the Imperial Medical Institute. He said it was unfit for practice by gentlemen-scholars. In China acupuncture was increasingly associated with lower-class, illiterate practitioners. It was restored for a time, but banned again in 1929 in favor of science-based medicine. Although acupuncture declined in China during this time period, it was also growing in popularity in other countries.
International expansion
Korea is believed to be the first country in Asia that acupuncture spread to outside of China. Within Korea there is a legend that acupuncture was developed by emperor Dangun, though it is more likely to have been brought into Korea from a Chinese colonial prefecture in 514 AD. Acupuncture use was commonplace in Korea by the 6th century. It spread to Vietnam in the 8th and 9th centuries. As Vietnam began trading with Japan and China around the 9th century, it was influenced by their acupuncture practices as well. China and Korea sent "medical missionaries" that spread traditional Chinese medicine to Japan, starting around 219 AD. In 553, several Korean and Chinese citizens were appointed to re-organize medical education in Japan and they incorporated acupuncture as part of that system. Japan later sent students back to China and established acupuncture as one of five divisions of the Chinese State Medical Administration System.
Acupuncture began to spread to Europe in the second half of the 17th century. Around this time the surgeon-general of the Dutch East India Company met Japanese and Chinese acupuncture practitioners and later encouraged Europeans to further investigate it. He published the first in-depth description of acupuncture for the European audience and created the term "acupuncture" in his 1683 work De Acupunctura. France was an early adopter among the West due to the influence of Jesuit missionaries, who brought the practice to French clinics in the 16th century. The French doctor Louis Berlioz (the father of the composer Hector Berlioz) is usually credited with being the first to experiment with the procedure in Europe in 1810, before publishing his findings in 1816.
By the 19th century, acupuncture had become commonplace in many areas of the world. Americans and Britons began showing interest in acupuncture in the early 19th century, although interest waned by mid-century. Western practitioners abandoned acupuncture's traditional beliefs in spiritual energy, pulse diagnosis, and the cycles of the moon, sun or the body's rhythm. Diagrams of the flow of spiritual energy, for example, conflicted with the West's own anatomical diagrams. The West instead adopted a new set of ideas for acupuncture based on tapping needles into nerves. In Europe it was speculated that acupuncture may allow or prevent the flow of electricity in the body, as electrical pulses were found to make a frog's leg twitch after death.
The West eventually created a belief system based on Travell trigger points that were believed to inhibit pain. They were in the same locations as China's spiritually identified acupuncture points, but under a different nomenclature. The first elaborate Western treatise on acupuncture was published in 1683 by Willem ten Rhijne.
Modern era
In China, the popularity of acupuncture rebounded in 1949 when Mao Zedong took power and sought to unite China behind traditional cultural values. It was also during this time that many Eastern medical practices were consolidated under the name traditional Chinese medicine (TCM).
New practices were adopted in the 20th century, such as using a cluster of needles, electrified needles, or leaving needles inserted for up to a week. A lot of emphasis developed on using acupuncture on the ear. Acupuncture research organizations such as the International Society of Acupuncture were founded in the 1940s and 1950s and acupuncture services became available in modern hospitals. China, where acupuncture was believed to have originated, was increasingly influenced by Western medicine. Meanwhile, acupuncture grew in popularity in the US. The US Congress created the Office of Alternative Medicine in 1992 and the National Institutes of Health (NIH) declared support for acupuncture for some conditions in November 1997. In 1999, the National Center for Complementary and Alternative Medicine was created within the NIH. Acupuncture became the most popular alternative medicine in the US.
Politicians from the Chinese Communist Party said acupuncture was superstitious and conflicted with the party's commitment to science. Communist Party Chairman Mao Zedong later reversed this position, arguing that the practice was based on scientific principles. During the Cultural Revolution, disbelief in acupuncture anesthesia was subjected to ruthless political repression.
In 1971, New York Times reporter James Reston published an article on his acupuncture experiences in China, which led to more investigation of and support for acupuncture. The US President Richard Nixon visited China in 1972. During one part of the visit, the delegation was shown a patient undergoing major surgery while fully awake, ostensibly receiving acupuncture rather than anesthesia. Later it was found that the patients selected for the surgery had both a high pain tolerance and received heavy indoctrination before the operation; these demonstration cases were also frequently receiving morphine surreptitiously through an intravenous drip that observers were told contained only fluids and nutrients. One patient receiving open heart surgery while awake was ultimately found to have received a combination of three powerful sedatives as well as large injections of a local anesthetic into the wound. After the National Institute of Health expressed support for acupuncture for a limited number of conditions, adoption in the US grew further. In 1972 the first legal acupuncture center in the US was established in Washington DC and in 1973 the American Internal Revenue Service allowed acupuncture to be deducted as a medical expense.
In 2006, a BBC documentary Alternative Medicine filmed a patient undergoing open heart surgery allegedly under acupuncture-induced anesthesia. It was later revealed that the patient had been given a cocktail of anesthetics.
In 2010, UNESCO inscribed "acupuncture and moxibustion of traditional Chinese medicine" on the UNESCO Intangible Cultural Heritage List following China's nomination.
Adoption
Acupuncture is most heavily practiced in China and is popular in the US, Australia, and Europe. In Switzerland, acupuncture has become the most frequently used alternative medicine since 2004. In the United Kingdom, a total of 4 million acupuncture treatments were administered in 2009. Acupuncture is used in most pain clinics and hospices in the UK. An estimated 1 in 10 adults in Australia used acupuncture in 2004. In Japan, it is estimated that 25 percent of the population will try acupuncture at some point, though in most cases it is not covered by public health insurance. Users of acupuncture in Japan are more likely to be elderly and to have a limited education. Approximately half of users surveyed indicated a likelihood to seek such remedies in the future, while 37% did not. Less than one percent of the US population reported having used acupuncture in the early 1990s. By the early 2010s, more than 14 million Americans reported having used acupuncture as part of their health care.
In the US, acupuncture is increasingly used at academic medical centers, and is usually offered through CAM centers or anesthesia and pain management services. Examples include those at Harvard University, Stanford University, Johns Hopkins University, and UCLA. CDC clinical practice guidelines from 2022 list acupuncture among the types of complementary and alternative medicines physicians should consider in preference to opioid prescription for certain kinds of pain.
The use of acupuncture in Germany increased by 20% in 2007, after the German acupuncture trials supported its efficacy for certain uses. In 2011, there were more than one million users, and insurance companies have estimated that two-thirds of German users are women. As a result of the trials, German public health insurers began to cover acupuncture for chronic low back pain and osteoarthritis of the knee, but not tension headache or migraine. This decision was based in part on socio-political reasons. Some insurers in Germany chose to stop reimbursement of acupuncture because of the trials. For other conditions, insurers in Germany were not convinced that acupuncture had adequate benefits over usual care or sham treatments. Pointing to the strong results in the placebo group, researchers refused to accept a placebo therapy as efficient.
Regulation
There are various government and trade association regulatory bodies for acupuncture in the United Kingdom, the United States, Saudi Arabia, Australia, New Zealand, Japan, Canada, and in European countries and elsewhere. The World Health Organization recommends that an acupuncturist receive 200 hours of specialized training if they are a physician and 2,500 hours for non-physicians before being licensed or certified; many governments have adopted similar standards.
In Hong Kong, the practice of acupuncture is regulated by the Chinese Medicine Council, which was formed in 1999 by the Legislative Council. It includes a licensing exam, registration, and degree courses approved by the board. Canada has acupuncture licensing programs in the provinces of British Columbia, Ontario, Alberta and Quebec; standards set by the Chinese Medicine and Acupuncture Association of Canada are used in provinces without government regulation. Regulation in the US began in the 1970s in California, which was eventually followed by every state but Wyoming and Idaho. Licensing requirements vary greatly from state to state. The needles used in acupuncture are regulated in the US by the Food and Drug Administration. In some states acupuncture is regulated by a board of medical examiners, while in others by the board of licensing, health or education.
In Japan, acupuncturists are licensed by the Minister of Health, Labour and Welfare after passing an examination and graduating from a technical school or university. In Australia, the Chinese Medicine Board of Australia regulates acupuncture, among other Chinese medical traditions, and restricts the use of titles like 'acupuncturist' to registered practitioners only. In New Zealand, acupuncture was included in the governmental Accident Compensation Corporation (ACC) Act in 1990. This inclusion granted qualified and professionally registered acupuncturists the ability to provide subsidised care and treatment to citizens, residents, and temporary visitors for work- or sports-related injuries that occurred within New Zealand. The two bodies for the regulation of acupuncture and attainment of ACC treatment provider status in New Zealand are Acupuncture NZ and The New Zealand Acupuncture Standards Authority. At least 28 countries in Europe have professional associations for acupuncturists. In France, the Académie Nationale de Médecine (National Academy of Medicine) has regulated acupuncture since 1955.
Bibliography
|
;Alternative medicine;Chinese inventions;Energy therapies;Pain management;Pseudoscience;Traditional Chinese medicine
|
https://en.wikipedia.org/wiki/Kolmogorov%20complexity
|
In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity of an object, such as a piece of text, is the length of a shortest computer program (in a predetermined programming language) that produces the object as output. It is a measure of the computational resources needed to specify the object, and is also known as algorithmic complexity, Solomonoff–Kolmogorov–Chaitin complexity, program-size complexity, descriptive complexity, or algorithmic entropy. It is named after Andrey Kolmogorov, who first published on the subject in 1963, and it is a generalization of classical information theory.
The notion of Kolmogorov complexity can be used to state and prove impossibility results akin to Cantor's diagonal argument, Gödel's incompleteness theorem, and Turing's halting problem.
In particular, no program P computing a lower bound for each text's Kolmogorov complexity can return a value essentially larger than P's own length (see the section on Chaitin's incompleteness theorem below); hence no single program can compute the exact Kolmogorov complexity for infinitely many texts. Kolmogorov complexity is the length of the ultimately compressed version of a file (i.e., anything which can be put in a computer). Formally, it is the length of a shortest program from which the file can be reconstructed. While Kolmogorov complexity is uncomputable, various approaches have been proposed and reviewed.
Definition
Intuition
Consider the following two strings of 32 lowercase letters and digits:
abababababababababababababababab , and
4c1j5b2p0cv4w1x8rx2y39umgw5q85s7
The first string has a short English-language description, namely "write ab 16 times", which consists of 17 characters. The second one has no obvious simple description (using the same character set) other than writing down the string itself, i.e., "write 4c1j5b2p0cv4w1x8rx2y39umgw5q85s7" which has 38 characters. Hence the operation of writing the first string can be said to have "less complexity" than writing the second.
More formally, the complexity of a string is the length of the shortest possible description of the string in some fixed universal description language (the sensitivity of complexity relative to the choice of description language is discussed below). It can be shown that the Kolmogorov complexity of any string cannot be more than a few bytes larger than the length of the string itself. Strings like the abab example above, whose Kolmogorov complexity is small relative to the string's size, are not considered to be complex.
The Kolmogorov complexity can be defined for any mathematical object, but for simplicity the scope of this article is restricted to strings. We must first specify a description language for strings. Such a description language can be based on any computer programming language, such as Lisp, Pascal, or Java. If P is a program which outputs a string x, then P is a description of x. The length of the description is just the length of P as a character string, multiplied by the number of bits in a character (e.g., 7 for ASCII).
We could, alternatively, choose an encoding for Turing machines, where an encoding is a function which associates to each Turing Machine M a bitstring <M>. If M is a Turing Machine which, on input w, outputs string x, then the concatenated string <M> w is a description of x. For theoretical analysis, this approach is more suited for constructing detailed formal proofs and is generally preferred in the research literature. In this article, an informal approach is discussed.
Any string s has at least one description. For example, the second string above is output by the pseudo-code:
function GenerateString2()
return "4c1j5b2p0cv4w1x8rx2y39umgw5q85s7"
whereas the first string is output by the (much shorter) pseudo-code:
function GenerateString1()
return "ab" × 16
If a description d(s) of a string s is of minimal length (i.e., using the fewest bits), it is called a minimal description of s, and the length of d(s) (i.e. the number of bits in the minimal description) is the Kolmogorov complexity of s, written K(s). Symbolically,
K(s) = |d(s)|.
The length of the shortest description will depend on the choice of description language; but the effect of changing languages is bounded (a result called the invariance theorem).
Plain Kolmogorov complexity C
There are two definitions of Kolmogorov complexity: plain and prefix-free. The plain complexity is the minimal description length of any program, denoted C(x); the prefix-free complexity is the minimal description length of any program encoded in a prefix-free code, denoted K(x). The plain complexity is more intuitive, but the prefix-free complexity is easier to study.
By default, all equations hold only up to an additive constant. For example, f(x) = g(x) really means that f(x) = g(x) + O(1), that is, there exists a constant c such that |f(x) − g(x)| ≤ c for all x.
Let U be a computable function mapping finite binary strings to binary strings. It is a universal function if, and only if, for any computable f, we can encode the function in a "program" s_f, such that U(s_f x) = f(x) for all x. We can think of U as a program interpreter, which takes in an initial segment describing the program, followed by data that the program should process.
One problem with plain complexity is that C(xy) ≤ C(x) + C(y) fails in general, because intuitively speaking, there is no general way to tell where to divide an output string just by looking at the concatenated string. We can divide it by specifying the length of x or y, but that would take O(log(min(|x|, |y|))) extra symbols. Indeed, for any c > 0 there exist x, y such that C(xy) ≥ C(x) + C(y) + c.
Typically, inequalities with plain complexity have a term like O(min(ln x, ln y)) on one side, whereas the same inequalities with prefix-free complexity have only O(1).
The main problem with plain complexity is that there is something extra sneaked into a program. A program not only represents something with its code, but also represents its own length. In particular, a program of length n may additionally encode the number n simply by its own length, smuggling in about log2 n extra bits of information. Stated in another way, it is as if we were using a termination symbol to denote where a word ends, and so we are not using 2 symbols, but 3. To fix this defect, we introduce the prefix-free Kolmogorov complexity.
Prefix-free Kolmogorov complexity K
A prefix-free code is a subset of {0, 1}* such that given any two different words in the set, neither is a prefix of the other. The benefit of a prefix-free code is that we can build a machine that reads words from the code forward in one direction, and as soon as it reads the last symbol of the word, it knows that the word is finished, and does not need to backtrack or use a termination symbol.
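For a concrete example, here is a hedged Python sketch of the Elias gamma code, a standard prefix-free code for the positive integers (chosen here only for illustration; any prefix-free code would do):

def elias_gamma(n):
    # Encode n >= 1 as (b - 1) zeros followed by the b binary digits of n.
    bits = bin(n)[2:]
    return "0" * (len(bits) - 1) + bits

codes = [elias_gamma(n) for n in range(1, 65)]
# Prefix-free property: no codeword is a prefix of a different codeword,
# so a one-directional reader always knows where a word ends.
assert all(not b.startswith(a) for a in codes for b in codes if a != b)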
Define a prefix-free Turing machine to be a Turing machine that comes with a prefix-free code, such that the Turing machine can read any string from the code in one direction, and stop reading as soon as it reads the last symbol. Afterwards, it may compute on a work tape and write to a write tape, but it cannot move its read-head anymore.
This gives us the following formal way to describe K.
Fix a prefix-free universal Turing machine, with three tapes: a read tape infinite in one direction, a work tape infinite in two directions, and a write tape infinite in one direction.
The machine can read from the read tape in one direction only (no backtracking), and write to the write tape in one direction only. It can read and write the work tape in both directions.
The work tape and write tape start with all zeros. The read tape starts with an input prefix code, followed by all zeros.
Let S be the prefix-free code on {0, 1}*, used by the universal Turing machine.
Note that some universal Turing machines may not be programmable with prefix codes. We must pick only a prefix-free universal Turing machine.
The prefix-free complexity of a string x is the length of the shortest prefix code p that makes the machine output x:

K(x) = min { |p| : p ∈ S, U(p) = x }
Invariance theorem
Informal treatment
There are some description languages which are optimal, in the following sense: given any description of an object in a description language, said description may be used in the optimal description language with a constant overhead. The constant depends only on the languages involved, not on the description of the object, nor the object being described.
Here is an example of an optimal description language. A description will have two parts:
The first part describes another description language.
The second part is a description of the object in that language.
In more technical terms, the first part of a description is a computer program (specifically: a compiler for the object's language, written in the description language), with the second part being the input to that computer program which produces the object as output.
The invariance theorem follows: Given any description language L, the optimal description language is at least as efficient as L, with some constant overhead.
Proof: Any description D in L can be converted into a description in the optimal language by first describing L as a computer program P (part 1), and then using the original description D as input to that program (part 2). The total length of this new description D′ is (approximately):
|D′ | = |P| + |D|
The length of P is a constant that doesn't depend on D. So, there is at most a constant overhead, regardless of the object described. Therefore, the optimal language is universal up to this additive constant.
A more formal treatment
Theorem: If K1 and K2 are the complexity functions relative to Turing complete description languages L1 and L2, then there is a constant c – which depends only on the languages L1 and L2 chosen – such that
∀s. −c ≤ K1(s) − K2(s) ≤ c.
Proof: By symmetry, it suffices to prove that there is some constant c such that for all strings s
K1(s) ≤ K2(s) + c.
Now, suppose there is a program in the language L1 which acts as an interpreter for L2:
function InterpretLanguage(string p)
where p is a program in L2. The interpreter is characterized by the following property:
Running InterpretLanguage on input p returns the result of running p.
Thus, if P is a program in L2 which is a minimal description of s, then InterpretLanguage(P) returns the string s. The length of this description of s is the sum of
The length of the program InterpretLanguage, which we can take to be the constant c.
The length of P which by definition is K2(s).
This proves the desired upper bound.
History and context
Algorithmic information theory is the area of computer science that studies Kolmogorov complexity and other complexity measures on strings (or other data structures).
The concept and theory of Kolmogorov Complexity is based on a crucial theorem first discovered by Ray Solomonoff, who published it in 1960, describing it in "A Preliminary Report on a General Theory of Inductive Inference" as part of his invention of algorithmic probability. He gave a more complete description in his 1964 publications, "A Formal Theory of Inductive Inference," Part 1 and Part 2 in Information and Control.
Andrey Kolmogorov later independently published this theorem in Problems Inform. Transmission in 1965. Gregory Chaitin also presents this theorem in J. ACM – Chaitin's paper was submitted October 1966 and revised in December 1968, and cites both Solomonoff's and Kolmogorov's papers.
The theorem says that, among algorithms that decode strings from their descriptions (codes), there exists an optimal one. This algorithm, for all strings, allows codes as short as allowed by any other algorithm up to an additive constant that depends on the algorithms, but not on the strings themselves. Solomonoff used this algorithm and the code lengths it allows to define a "universal probability" of a string on which inductive inference of the subsequent digits of the string can be based. Kolmogorov used this theorem to define several functions of strings, including complexity, randomness, and information.
When Kolmogorov became aware of Solomonoff's work, he acknowledged Solomonoff's priority. For several years, Solomonoff's work was better known in the Soviet Union than in the Western World. The general consensus in the scientific community, however, was to associate this type of complexity with Kolmogorov, who was concerned with randomness of a sequence, while Algorithmic Probability became associated with Solomonoff, who focused on prediction using his invention of the universal prior probability distribution. The broader area encompassing descriptional complexity and probability is often called Kolmogorov complexity. The computer scientist Ming Li considers this an example of the Matthew effect: "...to everyone who has, more will be given..."
There are several other variants of Kolmogorov complexity or algorithmic information. The most widely used one is based on self-delimiting programs, and is mainly due to Leonid Levin (1974).
An axiomatic approach to Kolmogorov complexity based on Blum axioms (Blum 1967) was introduced by Mark Burgin in the paper presented for publication by Andrey Kolmogorov.
In the late 1990s and early 2000s, methods developed to approximate Kolmogorov complexity relied on popular compression algorithms like LZW, which made it difficult or impossible to provide any estimate for short strings until a method based on algorithmic probability was introduced, offering the only alternative to compression-based methods.
Basic results
We write K(x, y) to be K((x, y)), where (x, y) means some fixed way to code for a tuple of strings x and y.
Inequalities
We omit additive factors of O(1) throughout this section.
Theorem. C(x) ≤ K(x) ≤ C(x) + 2 log2 C(x).

Proof. Take any program for the universal Turing machine used to define plain complexity, and convert it to a prefix-free program by first coding the length of the program in binary, then converting the length to a prefix-free coding. For example, suppose the program has length 9 (binary 1001); then we can convert it as follows:

1001 → 11 00 00 11 → 11 00 00 11 01

where we double each digit, then add a termination code. The prefix-free universal Turing machine can then read in any program for the other machine as follows: first the simulator for the other machine, then the prefix-free coded length, then the program itself. The first part programs the machine to simulate the other machine, and is a constant overhead O(1). The second part has length at most 2 log2 C(x) + O(1). The third part has length C(x).
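The length-prefixing trick in this proof can be sketched in Python (an illustrative toy, assuming bit strings represented as text; a real universal machine is of course not included):

def self_delimit(program):
    # Code the program's length in binary, double each digit, append the
    # pair "01" as a terminator, then the program itself.
    length_bits = bin(len(program))[2:]                 # e.g. 9 -> "1001"
    doubled = "".join(bit * 2 for bit in length_bits)   # "1001" -> "11000011"
    return doubled + "01" + program

def read_program(stream):
    # Doubled pairs are "00" or "11", so "01" unambiguously ends the length field.
    i, length_bits = 0, ""
    while stream[i:i + 2] != "01":
        length_bits += stream[i]
        i += 2
    n = int(length_bits, 2)
    return stream[i + 2 : i + 2 + n]

p = "110010011"                                         # an arbitrary 9-bit "program"
assert read_program(self_delimit(p) + "10110") == p     # trailing input is left unread

The overhead is two bits per digit of the length plus the terminator, which is where the 2 log2 C(x) term in the theorem comes from.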
Theorem: There exists c such that for all x, C(x) ≤ |x| + c. More succinctly, C(x) ≤ |x|. Similarly, K(x) ≤ |x| + 2 log2 |x|, and K(x | |x|) ≤ |x|.
Proof. For the plain complexity, just write a program that simply copies the input to the output. For the prefix-free complexity, we need to first describe the length of the string, before writing out the string itself.
Theorem. (extra information bounds, subadditivity) K(x | y) ≤ K(x) ≤ K(x, y) and K(x, y) ≤ K(x) + K(y | x) ≤ K(x) + K(y).
Note that there is no general way to compare K(xy) with K(x, y), or with K(x) or K(y): there are strings such that the whole string xy is easy to describe, but its substrings are very hard to describe.
Theorem. (symmetry of information) K(x, y) = K(x | y, K(y)) + K(y) = K(y | x, K(x)) + K(x).
Proof. One side is simple. For the other side, with ≥, we need to use a counting argument (page 38).
Theorem. (information non-increase) For any computable function f, we have K(f(x)) ≤ K(x) + K(f).
Proof. Program the Turing machine to read two subsequent programs, one describing the function and one describing the string. Then run both programs on the work tape to produce f(x), and write it out.
Uncomputability of Kolmogorov complexity
A naive attempt at a program to compute K
At first glance it might seem trivial to write a program which can compute K(s) for any s, such as the following:
function KolmogorovComplexity(string s)
for i = 1 to infinity:
for each string p of length exactly i
if isValidProgram(p) and evaluate(p) == s
return i
This program iterates through all possible programs (by iterating through all possible strings and only considering those which are valid programs), starting with the shortest. Each program is executed to find the result produced by that program, comparing it to the input s. If the result matches then the length of the program is returned.
However this will not work because some of the programs p tested will not terminate, e.g. if they contain infinite loops. There is no way to avoid all of these programs by testing them in some way before executing them due to the non-computability of the halting problem.
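What is computable is a step-bounded variant of the same search, sketched below; here run is a hypothetical interpreter (an assumption of this sketch) that returns a program's output, or None if it has not halted within max_steps steps. The result is only an upper bound, related to the time-bounded complexity discussed later, never K(s) itself:

from itertools import product

def bounded_complexity(target, run, alphabet="01", max_len=20, max_steps=10_000):
    # Try programs in order of increasing length; the step budget sidesteps
    # (but does not solve) the halting problem.
    for length in range(1, max_len + 1):
        for chars in product(alphabet, repeat=length):
            if run("".join(chars), max_steps) == target:
                return length      # an upper bound on K(target)
    return None                    # nothing short enough halted within the budget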
What is more, no program at all can compute the function K, be it ever so sophisticated. This is proven in the following.
Formal proof of uncomputability of K
Theorem: There exist strings of arbitrarily large Kolmogorov complexity. Formally: for each natural number n, there is a string s with K(s) ≥ n.
Proof: Otherwise all of the infinitely many possible finite strings could be generated by the finitely many programs with a complexity below n bits.
Theorem: K is not a computable function. In other words, there is no program which takes any string s as input and produces the integer K(s) as output.
The following proof by contradiction uses a simple Pascal-like language to denote programs; for sake of proof simplicity assume its description (i.e. an interpreter) to have a length of 1400000 bits.
Assume for contradiction there is a program
function KolmogorovComplexity(string s)
which takes as input a string s and returns K(s). All programs are of finite length so, for sake of proof simplicity, assume it to be 7000000000 bits.
Now, consider the following program of length 1288 bits:
function GenerateComplexString()
for i = 1 to infinity:
for each string s of length exactly i
if KolmogorovComplexity(s) ≥ 8000000000
return s
Using KolmogorovComplexity as a subroutine, the program tries every string, starting with the shortest, until it returns a string with Kolmogorov complexity at least 8000000000 bits, i.e. a string that cannot be produced by any program shorter than 8000000000 bits. However, the overall length of the above program that produced s is only 7001401288 bits (7000000000 + 1400000 + 1288), which is a contradiction. (If the code of KolmogorovComplexity is shorter, the contradiction remains. If it is longer, the constant used in GenerateComplexString can always be changed appropriately.)
The above proof uses a contradiction similar to that of the Berry paradox: "The smallest positive integer that cannot be defined in fewer than twenty English words". It is also possible to show the non-computability of K by reduction from the non-computability of the halting problem H, since K and H are Turing-equivalent.
There is a corollary, humorously called the "full employment theorem" in the programming language community, stating that there is no perfect size-optimizing compiler.
Chain rule for Kolmogorov complexity
The chain rule for Kolmogorov complexity states that there exists a constant c such that for all X and Y:
K(X,Y) = K(X) + K(Y|X) + c*max(1,log(K(X,Y))).
It states that the shortest program that reproduces X and Y is no more than a logarithmic term larger than a program to reproduce X and a program to reproduce Y given X. Using this statement, one can define an analogue of mutual information for Kolmogorov complexity.
Compression
It is straightforward to compute upper bounds for K(s) – simply compress the string s with some method, implement the corresponding decompressor in the chosen language, concatenate the decompressor to the compressed string, and measure the length of the resulting string – concretely, the size of a self-extracting archive in the given language.
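A minimal sketch of this in Python, with a real compressor (zlib stands in for "compressed string plus decompressor"; the constant length of the decompressor itself is ignored, so this is an upper bound only up to an additive constant):

import zlib

def k_upper_bound(s):
    # Compressed length in bytes: an upper bound on K(s), up to the
    # constant cost of a zlib decompressor in the chosen language.
    return len(zlib.compress(s, 9))

print(k_upper_bound(b"ab" * 500))                          # small: highly regular input
print(k_upper_bound(b"4c1j5b2p0cv4w1x8rx2y39umgw5q85s7"))  # roughly its own length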
A string s is compressible by a number c if it has a description whose length does not exceed |s| − c bits. This is equivalent to saying that K(s) ≤ |s| − c. Otherwise, s is incompressible by c. A string incompressible by 1 is said to be simply incompressible – by the pigeonhole principle, which applies because every compressed string maps to only one uncompressed string, incompressible strings must exist, since there are 2^n bit strings of length n, but only 2^n − 1 shorter strings, that is, strings of length less than n (i.e. with length 0, 1, ..., n − 1).
For the same reason, most strings are complex in the sense that they cannot be significantly compressed – their K(s) is not much smaller than |s|, the length of s in bits. To make this precise, fix a value of n. There are 2^n bitstrings of length n. The uniform probability distribution on the space of these bitstrings assigns exactly equal weight 2^(−n) to each string of length n.
Theorem: With the uniform probability distribution on the space of bitstrings of length n, the probability that a string is incompressible by c is at least 1 − 2^(−c+1) + 2^(−n).
To prove the theorem, note that the number of descriptions of length not exceeding n − c is given by the geometric series:
1 + 2 + 2^2 + ... + 2^(n−c) = 2^(n−c+1) − 1.
There remain at least
2^n − 2^(n−c+1) + 1
bitstrings of length n that are incompressible by c. To determine the probability, divide by 2n.
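The counting can be checked numerically; for example, with n = 20 and c = 3 (a quick sanity check of the bound, not part of the proof):

n, c = 20, 3
descriptions = 2 ** (n - c + 1) - 1          # 1 + 2 + ... + 2**(n - c)
incompressible = 2 ** n - descriptions       # strings left uncovered by any description
fraction = incompressible / 2 ** n
assert fraction >= 1 - 2 ** (-c + 1) + 2 ** (-n) - 1e-12
print(fraction)                              # about 0.75: most strings are incompressible by 3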
Chaitin's incompleteness theorem
By the above theorem (), most strings are complex in the sense that they cannot be described in any significantly "compressed" way. However, it turns out that the fact that a specific string is complex cannot be formally proven, if the complexity of the string is above a certain threshold. The precise formalization is as follows. First, fix a particular axiomatic system S for the natural numbers. The axiomatic system has to be powerful enough so that, to certain assertions A about complexity of strings, one can associate a formula FA in S. This association must have the following property:
If FA is provable from the axioms of S, then the corresponding assertion A must be true. This "formalization" can be achieved based on a Gödel numbering.
Theorem: There exists a constant L (which only depends on S and on the choice of description language) such that there does not exist a string s for which the statement
K(s) ≥ L (as formalized in S)
can be proven within S.
Proof Idea: The proof of this result is modeled on a self-referential construction used in Berry's paradox. We first obtain a program which enumerates the proofs within S, and we specify a procedure P which takes as an input an integer L and prints the strings x which are within proofs within S of the statement K(x) ≥ L. By then setting L greater than the length of this procedure P, we obtain a contradiction: the statement K(x) ≥ L asserts that no program of length less than L prints x, yet x was printed by the procedure P, whose description (together with the input L) is shorter than L. So it is not possible for the proof system S to prove K(x) ≥ L for L arbitrarily large, in particular, for L larger than the length of the procedure P (which is finite).
Proof:
We can find an effective enumeration of all the formal proofs in S by some procedure
function NthProof(int n)
which takes as input n and outputs some proof. This function enumerates all proofs. Some of these are proofs for formulas we do not care about here, since every possible proof in the language of S is produced for some n. Some of these are complexity formulas of the form K(s) ≥ n where s and n are constants in the language of S. There is a procedure
function NthProofProvesComplexityFormula(int n)
which determines whether the nth proof actually proves a complexity formula K(s) ≥ L. The strings s, and the integer L in turn, are computable by procedure:
function StringNthProof(int n)
function ComplexityLowerBoundNthProof(int n)
Consider the following procedure:
function GenerateProvablyComplexString(int n)
for i = 1 to infinity:
if NthProofProvesComplexityFormula(i) and ComplexityLowerBoundNthProof(i) ≥ n
return StringNthProof(i)
Given an n, this procedure tries every proof until it finds a string and a proof in the formal system S of the formula K(s) ≥ L for some L ≥ n; if no such proof exists, it loops forever.
Finally, consider the program consisting of all these procedure definitions, and a main call:
GenerateProvablyComplexString(n0)
where the constant n0 will be determined later on. The overall program length can be expressed as U+log2(n0), where U is some constant and log2(n0) represents the length of the integer value n0, under the reasonable assumption that it is encoded in binary digits. We will choose n0 to be greater than the program length, that is, such that n0 > U+log2(n0). This is clearly true for n0 sufficiently large, because the left hand side grows linearly in n0 whilst the right hand side grows logarithmically in n0 up to the fixed constant U.
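The existence of such an n0 is easy to check numerically; for an assumed (hypothetical) constant U, the least suitable n0 sits barely above U:

import math

U = 1_000_000   # hypothetical program-length constant, in bits
n0 = next(n for n in range(1, 2 ** 30) if n > U + math.log2(n))
print(n0)       # 1000020: log2(n0) grows so slowly that n0 barely exceeds U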
Then no proof of the form "K(s)≥L" with L≥n0 can be obtained in S, as can be seen by an indirect argument:
If ComplexityLowerBoundNthProof(i) could return a value ≥ n0, then the loop inside GenerateProvablyComplexString would eventually terminate, and that procedure would return a string s such that K(s) ≥ n0. However, s was produced by a program of length only U+log2(n0) < n0, so its Kolmogorov complexity is less than n0.
This is a contradiction, Q.E.D.
As a consequence, the above program, with the chosen value of n0, must loop forever.
Similar ideas are used to prove the properties of Chaitin's constant.
Minimum message length
The minimum message length principle of statistical and inductive inference and machine learning was developed by C.S. Wallace and D.M. Boulton in 1968. MML is Bayesian (i.e. it incorporates prior beliefs) and information-theoretic. It has the desirable properties of statistical invariance (i.e. the inference transforms with a re-parametrisation, such as from polar coordinates to Cartesian coordinates), statistical consistency (i.e. even for very hard problems, MML will converge to any underlying model) and efficiency (i.e. the MML model will converge to any true underlying model about as quickly as is possible). C.S. Wallace and D.L. Dowe (1999) showed a formal connection between MML and algorithmic information theory (or Kolmogorov complexity).
Kolmogorov randomness
Kolmogorov randomness defines a string (usually of bits) as being random if and only if every computer program that can produce that string is at least as long as the string itself. To make this precise, a universal computer (or universal Turing machine) must be specified, so that "program" means a program for this universal machine. A random string in this sense is "incompressible" in that it is impossible to "compress" the string into a program that is shorter than the string itself. For every universal computer, there is at least one algorithmically random string of each length. Whether a particular string is random, however, depends on the specific universal computer that is chosen. This is because a universal computer can have a particular string hard-coded in itself, and a program running on this universal computer can then simply refer to this hard-coded string using a short sequence of bits (i.e. much shorter than the string itself).
This definition can be extended to define a notion of randomness for infinite sequences from a finite alphabet. These algorithmically random sequences can be defined in three equivalent ways. One way uses an effective analogue of measure theory; another uses effective martingales. The third way defines an infinite sequence to be random if the prefix-free Kolmogorov complexity of its initial segments grows quickly enough — there must be a constant c such that the complexity of an initial segment of length n is always at least n−c. This definition, unlike the definition of randomness for a finite string, is not affected by which universal machine is used to define prefix-free Kolmogorov complexity.
Relation to entropy
For dynamical systems, entropy rate and algorithmic complexity of the trajectories are related by a theorem of Brudno, that the equality K(x; T) = h(T) holds for almost all x.
It can be shown that for the output of Markov information sources, Kolmogorov complexity is related to the entropy of the information source. More precisely, the Kolmogorov complexity of the output of a Markov information source, normalized by the length of the output, converges almost surely (as the length of the output goes to infinity) to the entropy of the source.
Theorem. (Theorem 14.2.5) The conditional Kolmogorov complexity of a binary string x1...xn containing k ones satisfies K(x1...xn | n) ≤ n·H_b(k/n) + 2 log2 n + c, where H_b is the binary entropy function (not to be confused with the entropy rate).
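A hedged empirical illustration of this bound, with zlib standing in for a near-minimal description (the compressor codes one sample per byte, so it lands somewhat above n·H_b, as expected):

import math, random, zlib

def binary_entropy(p):
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

n, p = 100_000, 0.1
random.seed(0)
bits = bytes(int(random.random() < p) for _ in range(n))   # biased coin, one byte per bit
print(8 * len(zlib.compress(bits, 9)), "compressed bits")  # same order as the bound below
print(round(n * binary_entropy(p)), "= n * H_b(p) bits")   # about 46900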
Halting problem
Computing the Kolmogorov complexity function is Turing-equivalent to deciding the halting problem.
If we have a halting oracle, then the Kolmogorov complexity of a string can be computed by simply trying every halting program, in lexicographic order, until one of them outputs the string.
The other direction is much more involved. It shows that given a Kolmogorov complexity function K, we can construct a function f such that f(n) ≥ BB(n) for all large n, where BB is the Busy Beaver shift function (also denoted as S(n)). By modifying the function at lower values of n, we get an upper bound on BB, which solves the halting problem.
Consider this program f, which takes input n and uses K.
List all strings of length at most 2n.
For each such string x, enumerate (prefix-free) programs of length K(x) until one of them outputs x. Record its runtime t_x.
Output the largest t_x.
We prove by contradiction that f(n) ≥ BB(n) for all large n.
Let BB be a Busy Beaver program of length n. Consider this (prefix-free) program, which takes no input:
Run the program BB, and record its runtime T.
Generate all programs with length at most 2n. Run every one of them for up to T steps. Note the outputs of those that have halted.
Output the string with the lowest lexicographic order that has not been output by any of those.
Let the string output by the program be x.
The program has length n + 2 log2 n + c, where n comes from the length of the Busy Beaver BB, 2 log2 n comes from using the (prefix-free) Elias delta code for the number n, and c comes from the rest of the program. Therefore, K(x) ≤ n + 2 log2 n + c ≤ 2n for all big n. Further, since there are only so many possible programs with length at most 2n, we have |x| ≤ 2n by the pigeonhole principle.
By assumption, f(n) < BB(n), so every string of length at most 2n has a minimal program with runtime at most f(n) < BB(n). Thus, the string x has a minimal program with runtime less than BB(n). Further, that program has length at most 2n. This contradicts how x was constructed.
Universal probability
Fix a universal Turing machine U, the same one used to define the (prefix-free) Kolmogorov complexity. Define the (prefix-free) universal probability of a string x to be

P(x) = Σ 2^(−|p|), summed over all programs p for which U(p) = x.

In other words, it is the probability that, given a uniformly random binary stream as input, the universal Turing machine would halt after reading a certain prefix of the stream, and output x.
Note. U(p) = x does not mean that the input stream is p, but that the universal Turing machine would halt at some point after reading the initial segment p, without reading any further input, and that, when it halts, it has written x to the output tape.
Theorem. (Theorem 14.11.1) K(x) = −log2 P(x) + O(1).
Conditional versions
The conditional Kolmogorov complexity K(x | y) of two strings x and y is, roughly speaking, defined as the Kolmogorov complexity of x given y as an auxiliary input to the procedure.
There is also a length-conditional complexity K(x | |x|), which is the complexity of x given the length of x as known/input.
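A rough, hedged way to estimate conditional complexity with a real compressor, in the spirit of compression-based similarity measures: approximate K(x | y) by the extra cost of compressing x once y has already been seen.

import zlib

def approx_conditional_k(x, y):
    # Extra compressed bytes needed for x given y; only a crude proxy,
    # limited by zlib's window, not a true conditional complexity.
    return len(zlib.compress(y + x, 9)) - len(zlib.compress(y, 9))

x = b"abracadabra" * 100
print(approx_conditional_k(x, x))    # small: x adds almost nothing given itself
print(approx_conditional_k(x, b""))  # larger: the unconditional cost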
Time-bounded complexity
Time-bounded Kolmogorov complexity is a modified version of Kolmogorov complexity where the space of programs to be searched for a solution is confined to only programs that can run within some pre-defined number of steps. It is hypothesised that the possibility of the existence of an efficient algorithm for determining approximate time-bounded Kolmogorov complexity is related to the question of whether true one-way functions exist.
See also
Berry paradox
Code golf
Data compression
Descriptive complexity theory
Grammar induction
Inductive reasoning
Kolmogorov structure function
Levenshtein distance
Manifold hypothesis
Solomonoff's theory of inductive inference
Sample entropy
Notes
References
Further reading
External links
The Legacy of Andrei Nikolaevich Kolmogorov
Chaitin's online publications
Solomonoff's IDSIA page
Generalizations of algorithmic information by J. Schmidhuber
Tromp's lambda calculus computer model offers a concrete definition of K()
Universal AI based on Kolmogorov Complexity by M. Hutter:
David Dowe's Minimum Message Length (MML) and Occam's razor pages.
|
*;Computability theory;Computational complexity theory;Data compression;Descriptive complexity;Measures of complexity
|
https://en.wikipedia.org/wiki/Ambergris
|
Ambergris, also spelled ambergrease, or grey amber, is a solid, waxy, flammable substance of a dull grey or blackish colour produced in the digestive system of sperm whales. Freshly produced ambergris has a marine, fecal odor. It acquires a sweet, earthy scent as it ages, commonly likened to the fragrance of isopropyl alcohol without the vaporous chemical astringency.
Ambergris has been highly valued by perfume makers as a fixative that allows the scent to last much longer, although it has been mostly replaced by synthetic ambroxide. It is sometimes used in cooking.
Dogs are attracted to the smell of ambergris and are sometimes used by ambergris searchers.
Etymology
The English word amber derives from Middle Persian ʾmbl, traveling via Arabic (), Middle Latin ambar, and Middle French ambre to be adopted in Middle English in the 14th century.
The word "ambergris" comes from the Old French ambre gris or "grey amber". The addition of "grey" came about when, in the Romance languages, the sense of the word "amber" was extended to Baltic amber (fossil resin), as white or yellow amber (ambre jaune), from as early as the late 13th century. This fossilized resin subsequently became the dominant (and now exclusive) sense of "amber", leaving "ambergris" as the word for the whale secretion.
The archaic alternate spelling "ambergrease" arose as an eggcorn from the phonetic pronunciation of "ambergris," encouraged by the substance's waxy texture.
Formation
Ambergris is formed from a secretion of the bile duct in the intestines of the sperm whale, and can be found floating on the sea or washed up on coastlines. It is sometimes found in the abdomens of dead sperm whales. Because the beaks of giant squids have been discovered within lumps of ambergris, scientists have hypothesized that the substance is produced by the whale's gastrointestinal tract to ease the passage of hard, sharp objects that it may have eaten.
Ambergris is passed like fecal matter. It is speculated that an ambergris mass too large to be passed through the intestines is expelled via the mouth, but this remains under debate. Another theory states that an ambergris mass is formed when the colon of a whale is enlarged by a blockage from intestinal worms and cephalopod parts resulting in the death of the whale and the mass being excreted into the sea. Ambergris takes years to form. Christopher Kemp, the author of Floating Gold: A Natural (and Unnatural) History of Ambergris, says that it is only produced by sperm whales, and only by an estimated one percent of them. Ambergris is rare; once expelled by a whale, it often floats for years before making landfall. The slim chances of finding ambergris and the legal ambiguity involved led perfume makers away from ambergris, and led chemists on a quest to find viable alternatives.
Ambergris is found primarily in the Atlantic Ocean and on the coasts of South Africa; Brazil; Madagascar; the East Indies; The Maldives; China; Japan; India; Australia; New Zealand; and the Molucca Islands. Most commercially collected ambergris comes from the Bahamas in the Atlantic, particularly New Providence. In 2021, fishermen found a 127 kg (280-pound) piece of ambergris off the coast of Yemen, valued at US$1.5 million. Fossilised ambergris from 1.75 million years ago has also been found.
Physical properties
Ambergris is found in lumps of various shapes and sizes, usually weighing from about 15 grams to 50 kilograms or more. When initially expelled by or removed from the whale, the fatty precursor of ambergris is pale white in color (sometimes streaked with black), soft, with a strong fecal smell. Following months to years of photodegradation and oxidation in the ocean, this precursor gradually hardens, developing a dark grey or black color, a crusty and waxy texture, and a peculiar odor that is at once sweet, earthy, marine, and animalic. Its scent has been generally described as a vastly richer and smoother version of isopropanol without its stinging harshness. In this developed condition, ambergris has a specific gravity ranging from 0.780 to 0.926 (meaning it floats in water). It melts at about 62 °C to a fatty, yellow resinous liquid; and at 100 °C it is volatilised into a white vapor. It is soluble in ether, and in volatile and fixed oils.
Chemical properties
Ambergris is relatively nonreactive to acid. White crystals of a terpenoid known as ambrein, discovered by Leopold Ružička and Fernand Lardon in 1946, can be separated from ambergris by heating raw ambergris in alcohol, then allowing the resulting solution to cool. Breakdown of the relatively scentless ambrein through oxidation produces ambroxide and ambrinol, the main odor components of ambergris.
Ambroxide is now produced synthetically and used extensively in the perfume industry.
Applications
Ambergris has been mostly known for its use in creating perfume and fragrance much like musk. Perfumes based on ambergris still exist.
Ambergris has historically been used in food and drink. A serving of eggs and ambergris was reportedly King Charles II of England's favorite dish. A recipe for Rum Shrub liqueur from the mid 19th century called for a thread of ambergris to be added to rum, almonds, cloves, cassia, and the peel of oranges in making a cocktail from The English and Australian Cookery Book. It has been used as a flavoring agent in Turkish coffee and in hot chocolate in 18th century Europe. The substance is considered an aphrodisiac in some cultures.
Ancient Egyptians burned ambergris as incense, while in modern Egypt ambergris is used for scenting cigarettes. The ancient Chinese called the substance "dragon's spittle fragrance". During the Black Death in Europe, people believed that carrying a ball of ambergris could help prevent them from contracting plague. This was because the fragrance covered the smell of the air which was believed to be a cause of plague.
During the Middle Ages, Europeans used ambergris as a medication for headaches, colds, epilepsy, and other ailments.
Legality
From the 18th to the mid-19th century, the whaling industry prospered. By some reports, nearly 50,000 whales, including sperm whales, were killed each year. Throughout the 19th century, "millions of whales were killed for their oil, whalebone, and ambergris" to fuel profits, and they soon became endangered as a species as a result. Due to studies showing that the whale populations were being threatened, the International Whaling Commission instituted a moratorium on commercial whaling in 1982. Although ambergris is not harvested from whales, many countries also ban the trade of ambergris as part of the more general ban on the hunting and exploitation of whales.
Urine, faeces, and ambergris (that has been naturally excreted by a sperm whale) are waste products not considered parts or derivatives of a CITES species and are therefore not covered by the provisions of the convention.
Countries where ambergris trade is illegal include:
Australia – Under federal law, the export and import of ambergris for commercial purposes is banned by the Environment Protection and Biodiversity Conservation Act 1999. The various states and territories have additional laws regarding ambergris.
United States – The possession and trade of ambergris is prohibited by the Endangered Species Act of 1973.
India – Sale or possession is illegal under the Wild Life (Protection) Act, 1972.
Countries where trade of ambergris is legal include:
United Kingdom
France
Switzerland
Maldives
References
Further reading
montalvoeascinciasdonossotempo.blogspot, accessed 21 August 2015
External links
Natural History Magazine Article (from 1933): Floating Gold – The Romance of Ambergris
Ambergris – A Pathfinder and Annotated Bibliography
On the chemistry and ethics of Ambergris
Pathologist finds €500,000 ‘floating gold’ in dead whale in Canary Islands
|
Animal glandular products;Natural products;Perfume ingredients;Traditional medicine;Whale products
|
https://en.wikipedia.org/wiki/Acetylene
|
Acetylene (systematic name: ethyne) is a chemical compound with the formula C2H2 and structure H−C≡C−H. It is a hydrocarbon and the simplest alkyne. This colorless gas is widely used as a fuel and a chemical building block. It is unstable in its pure form and thus is usually handled as a solution. Pure acetylene is odorless, but commercial grades usually have a marked odor due to impurities such as divinyl sulfide and phosphine.
As an alkyne, acetylene is unsaturated because its two carbon atoms are bonded together in a triple bond. The carbon–carbon triple bond places all four atoms in the same straight line, with CCH bond angles of 180°. The triple bond in acetylene results in a high energy content that is released when acetylene is burned.
Discovery
Acetylene was discovered in 1836 by Edmund Davy, who identified it as a "new carburet of hydrogen". It was an accidental discovery while attempting to isolate potassium metal. By heating potassium carbonate with carbon at very high temperatures, he produced a residue of what is now known as potassium carbide (K2C2), which reacted with water to release the new gas. It was rediscovered in 1860 by French chemist Marcellin Berthelot, who coined the name acétylène. Berthelot's empirical formula for acetylene (C4H2), as well as the alternative name "quadricarbure d'hydrogène" (hydrogen quadricarbide), were incorrect because many chemists at that time used the wrong atomic mass for carbon (6 instead of 12). Berthelot was able to prepare this gas by passing vapours of organic compounds (methanol, ethanol, etc.) through a red-hot tube and collecting the effluent. He also found that acetylene was formed by sparking electricity through mixed cyanogen and hydrogen gases. Berthelot later obtained acetylene directly by passing hydrogen between the poles of a carbon arc.
Preparation
Partial combustion of hydrocarbons
Since the 1950s, acetylene has mainly been manufactured by the partial combustion of methane in the US, much of the EU, and many other countries.
It is a recovered side product in production of ethylene by cracking of hydrocarbons. Approximately 400,000 tonnes were produced by this method in 1983. Its presence in ethylene is usually undesirable because of its explosive character and its ability to poison Ziegler–Natta catalysts. It is selectively hydrogenated into ethylene, usually using Pd–Ag catalysts.
Dehydrogenation of alkanes
The heavier alkanes in petroleum and natural gas are cracked into lighter molecules, which are dehydrogenated at high temperature. In the limiting case, methane itself is converted to acetylene and hydrogen:
2 CH4 → C2H2 + 3 H2
This last reaction is implemented in the process of anaerobic decomposition of methane by microwave plasma.
Carbochemical method
Acetylene was first produced by Edmund Davy in 1836, via potassium carbide.
Acetylene was historically produced by hydrolysis (reaction with water) of calcium carbide:
CaC2 + 2 H2O → Ca(OH)2 + C2H2
This reaction was discovered by Friedrich Wöhler in 1862, but a suitable commercial-scale production method, which allowed acetylene to be put into wider use, was not found until 1892, by the Canadian inventor Thomas Willson while he was searching for a viable commercial production method for aluminum.
As late as the early 21st century, China, Japan, and Eastern Europe produced acetylene primarily by this method.
The use of this technology has since declined worldwide, with the notable exception of China and its emphasis on coal-based chemical industry, as of 2013. Elsewhere, oil has increasingly supplanted coal as the chief source of reduced carbon.
Calcium carbide production requires high temperatures, ~2000 °C, necessitating the use of an electric arc furnace. In the US, this process was an important part of the late-19th century revolution in chemistry enabled by the massive hydroelectric power project at Niagara Falls.
Bonding
In terms of valence bond theory, in each carbon atom the 2s orbital hybridizes with one 2p orbital, forming two sp hybrid orbitals; the other two 2p orbitals remain unhybridized. One sp hybrid orbital from each carbon overlaps end-to-end with that of the other carbon to form a strong σ bond between the carbons, while the outward-pointing sp orbital on each carbon bonds to a hydrogen atom, also by a σ bond. The unhybridized 2p orbitals form a pair of weaker π bonds.
Since acetylene is a linear symmetrical molecule, it possesses the D∞h point group.
Physical properties
Changes of state
At atmospheric pressure, acetylene cannot exist as a liquid and does not have a melting point. The triple point on the phase diagram corresponds to the melting point (−80.8 °C) at the minimal pressure at which liquid acetylene can exist (1.27 atm). At temperatures below the triple point, solid acetylene can change directly to the vapour (gas) by sublimation. The sublimation point at atmospheric pressure is −84.0 °C.
Other
At room temperature, the solubility of acetylene in acetone is 27.9 g per kg. For the same amount of dimethylformamide (DMF), the solubility is 51 g. At 20.26 bar, the solubility increases to 689.0 and 628.0 g for acetone and DMF, respectively. These solvents are used in pressurized gas cylinders.
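These solubility figures are the basis of cylinder design: pressurizing the solvent multiplies its capacity roughly 25-fold. A minimal Python sketch of the storage capacity they imply (the 10 kg solvent charge is a hypothetical example, not a standard cylinder size):

```python
# Acetylene dissolved per cylinder, from the solubility figures quoted above.
solubility_g_per_kg = {
    ("acetone", "1 atm"): 27.9,
    ("DMF", "1 atm"): 51.0,
    ("acetone", "20.26 bar"): 689.0,
    ("DMF", "20.26 bar"): 628.0,
}

solvent_charge_kg = 10.0  # hypothetical charge of solvent in one cylinder

for (solvent, pressure), s in solubility_g_per_kg.items():
    dissolved_kg = s * solvent_charge_kg / 1000.0
    print(f"{solvent} at {pressure}: ~{dissolved_kg:.2f} kg acetylene dissolved")

print(689.0 / 27.9)  # ≈ 24.7-fold capacity gain for acetone under pressure
```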
Applications
Welding
Approximately 20% of acetylene is supplied by the industrial gases industry for oxyacetylene gas welding and cutting due to the high temperature of the flame. Combustion of acetylene with oxygen produces a flame of over 3,300 °C, releasing 11.8 kJ/g. Oxygen with acetylene is the hottest-burning common gas mixture; acetylene produces the third-hottest natural chemical flame, after those of dicyanoacetylene and cyanogen. Oxy-acetylene welding was a popular welding process in previous decades, but the development and advantages of arc-based welding processes have made oxy-fuel welding nearly extinct for many applications, and acetylene usage for welding has dropped significantly. On the other hand, oxy-acetylene welding equipment is quite versatile – not only because the torch is preferred for some sorts of iron or steel welding (as in certain artistic applications), but also because it lends itself easily to brazing, braze-welding, metal heating (for annealing or tempering, bending or forming), the loosening of corroded nuts and bolts, and other applications. Bell Canada cable-repair technicians still use portable acetylene-fuelled torch kits as a soldering tool for sealing lead sleeve splices in manholes and in some aerial locations. Oxyacetylene welding may also be used in areas where electricity is not readily accessible, and oxyacetylene cutting is used in many metal fabrication shops. For use in welding and cutting, the working pressures must be controlled by a regulator, since above about 100 kPa gauge (15 psig), if subjected to a shockwave (caused, for example, by a flashback), acetylene decomposes explosively into hydrogen and carbon.
Chemicals
Acetylene is useful for many processes, but few are conducted on a commercial scale.
One of the major chemical applications is ethynylation of formaldehyde. Acetylene adds to aldehydes and ketones to form α-ethynyl alcohols; with formaldehyde, the reaction gives 1,4-butynediol, with propargyl alcohol as a by-product:
HC≡CH + 2 CH2O → HOCH2C≡CCH2OH
Copper acetylide is used as the catalyst.
In addition to ethynylation, acetylene reacts with carbon monoxide to give acrylic acid or acrylic esters. Metal catalysts are required. These derivatives form products such as acrylic fibers, glasses, paints, resins, and polymers. Except in China, use of acetylene as a chemical feedstock declined by 70% from 1965 to 2007, owing to cost and environmental considerations. In China, acetylene remains a major precursor to vinyl chloride.
Historical uses
Prior to the widespread use of petrochemicals, coal-derived acetylene was a building block for several industrial chemicals. Thus acetylene can be hydrated to give acetaldehyde, which in turn can be oxidized to acetic acid. Processes leading to acrylates were also commercialized. Almost all of these processes became obsolete with the availability of petroleum-derived ethylene and propylene.
Niche applications
In 1881, the Russian chemist Mikhail Kucherov described the hydration of acetylene to acetaldehyde using catalysts such as mercury(II) bromide. Before the advent of the Wacker process, this reaction was conducted on an industrial scale.
The polymerization of acetylene with Ziegler–Natta catalysts produces polyacetylene films. Polyacetylene, a chain of CH centres with alternating single and double bonds, was one of the first discovered organic semiconductors. Its reaction with iodine produces a highly electrically conducting material. Although such materials found no practical use themselves, these discoveries led to the development of organic semiconductors, as recognized by the Nobel Prize in Chemistry in 2000 to Alan J. Heeger, Alan G. MacDiarmid, and Hideki Shirakawa.
In the 1920s, pure acetylene was experimentally used as an inhalation anesthetic.
Acetylene is sometimes used for carburization (that is, hardening) of steel when the object is too large to fit into a furnace.
Acetylene is used to volatilize carbon in radiocarbon dating. The carbonaceous material in an archeological sample is treated with lithium metal in a small specialized research furnace to form lithium carbide (also known as lithium acetylide). The carbide can then be reacted with water, as usual, to form acetylene gas to feed into a mass spectrometer to measure the isotopic ratio of carbon-14 to carbon-12.
Acetylene combustion produces a strong, bright light, and the ubiquity of carbide lamps drove much acetylene commercialization in the early 20th century. Common applications included coastal lighthouses, street lights, and automobile and mining headlamps. In most of these applications, direct combustion is a fire hazard, and so acetylene has been replaced, first by incandescent lighting and many years later by low-power/high-lumen LEDs. Nevertheless, acetylene lamps remain in limited use in remote or otherwise inaccessible areas and in countries with a weak or unreliable central electric grid.
Natural occurrence
The energy richness of the C≡C triple bond and the rather high solubility of acetylene in water make it a suitable substrate for bacteria, provided an adequate source is available. A number of bacteria living on acetylene have been identified. The enzyme acetylene hydratase catalyzes the hydration of acetylene to give acetaldehyde:
C2H2 + H2O → CH3CHO
Acetylene is a moderately common chemical in the universe, often associated with the atmospheres of gas giants. One curious discovery of acetylene is on Enceladus, a moon of Saturn. Natural acetylene is believed to form from catalytic decomposition of long-chain hydrocarbons at high temperatures. Since such temperatures are highly unlikely on such a small, distant body, this discovery is potentially suggestive of catalytic reactions within that moon, making it a promising site to search for prebiotic chemistry.
Reactions
Vinylation reactions
In vinylation reactions, H−X compounds add across the triple bond. Alcohols and phenols add to acetylene to give vinyl ethers. Thiols give vinyl thioethers. Similarly, vinylpyrrolidone and vinylcarbazole are produced industrially by vinylation of 2-pyrrolidone and carbazole.
The hydration of acetylene is a vinylation reaction, but the resulting vinyl alcohol isomerizes to acetaldehyde. The reaction is catalyzed by mercury salts. This reaction once was the dominant technology for acetaldehyde production, but it has been displaced by the Wacker process, which affords acetaldehyde by oxidation of ethylene, a cheaper feedstock. A similar situation applies to the conversion of acetylene to the valuable vinyl chloride by hydrochlorination vs the oxychlorination of ethylene.
Vinyl acetate is used instead of acetylene for some vinylations, which are more accurately described as transvinylations. Higher esters of vinyl acetate have been used in the synthesis of vinyl formate.
Organometallic chemistry
Acetylene and its derivatives (2-butyne, diphenylacetylene, etc.) form complexes with transition metals. Its bonding to the metal is somewhat similar to that of ethylene complexes. These complexes are intermediates in many catalytic reactions such as alkyne trimerisation to benzene, tetramerization to cyclooctatetraene, and carbonylation to hydroquinone:
under basic conditions at elevated temperature and pressure.
Metal acetylides, species containing the M−C≡C− unit, are also common. Copper(I) acetylide and silver acetylide can be formed in aqueous solutions with ease due to a favorable solubility equilibrium.
Acid-base reactions
With a pKa of 25, acetylene can be deprotonated by a superbase to form an acetylide, for example with sodium amide:
HC≡CH + NaNH2 → HC≡CNa + NH3
Various organometallic and inorganic reagents are effective.
Hydrogenation
Acetylene can be semihydrogenated to ethylene, providing a feedstock for a variety of polyethylene plastics. Halogens add to the triple bond.
Safety and handling
Acetylene is not especially toxic, but when generated from calcium carbide (CaC2), it can contain toxic impurities such as traces of phosphine and arsine, which give it a distinct garlic-like smell. It is also highly flammable, as are most light hydrocarbons, hence its use in welding. Its most singular hazard is associated with its intrinsic instability, especially when it is pressurized: under certain conditions acetylene can react in an exothermic addition-type reaction to form a number of products, typically benzene and/or vinylacetylene, possibly in addition to carbon and hydrogen. Although it is stable at normal pressures and temperatures, it can explode if subjected to pressures as low as 15 psig; the safe limit for acetylene is therefore 101 kPa gauge, or 15 psig. Additionally, if initiated by intense heat or a shockwave, acetylene can decompose explosively if the absolute pressure of the gas exceeds about 200 kPa. It is therefore supplied and stored dissolved in acetone or dimethylformamide (DMF), contained in a gas cylinder with a porous filling, which renders it safe to transport and use, given proper handling. Acetylene cylinders should be used in the upright position to avoid withdrawing acetone during use.
Information on safe storage of acetylene in upright cylinders is provided by the OSHA, Compressed Gas Association, United States Mine Safety and Health Administration (MSHA), EIGA, and other agencies.
Copper catalyses the decomposition of acetylene, and as a result acetylene should not be transported in copper pipes.
Cylinders should be stored in an area segregated from oxidizers to avoid exacerbated reaction in case of fire or leakage. Acetylene cylinders should not be stored in confined spaces, enclosed vehicles, garages, or buildings, to avoid unintended leakage leading to an explosive atmosphere. In the US, the National Electrical Code (NEC) requires consideration for hazardous areas, including those where acetylene may be released during accidents or leaks. Considerations may include electrical classification and the use of listed Group A electrical components. Further information on determining the areas requiring special consideration is in NFPA 497. In Europe, ATEX also requires consideration for hazardous areas where flammable gases may be released during accidents or leaks.
References
External links
Acetylene Production Plant and Detailed Process
Acetylene at Chemistry Comes Alive!
Movie explaining acetylene formation from calcium carbide and the explosive limits forming fire hazards
Calcium Carbide & Acetylene at The Periodic Table of Videos (University of Nottingham)
CDC – NIOSH Pocket Guide to Chemical Hazards – Acetylene
|
;Alkynes;Explosive gases;Fuel gas;Industrial gases;Synthetic fuel technologies;Welding
|
https://en.wikipedia.org/wiki/Acre
|
The acre is a unit of land area used in the British imperial and the United States customary systems. It is traditionally defined as the area of one chain by one furlong (66 by 660 feet), which is exactly equal to 10 square chains, 1/640 of a square mile, 4,840 square yards, or 43,560 square feet, and approximately 4,047 m2, or about 40% of a hectare. Based upon the international yard and pound agreement of 1959, an acre may be declared as exactly 4,046.8564224 square metres. The acre is sometimes abbreviated ac, but is usually spelled out as the word "acre".
Traditionally, in the Middle Ages, an acre was conceived of as the area of land that could be ploughed by one man using a team of eight oxen in one day. The acre is still a statutory measure in the United States. Both the international acre and the US survey acre are in use, but they differ by only four parts per million. The most common use of the acre is to measure tracts of land. The acre is used in many established and former Commonwealth of Nations countries by custom. In a few, it continues as a statute measure, although not since 2010 in the UK, and not for decades in Australia, New Zealand, and South Africa. In many places where it is not a statute measure, it is still lawful to "use for trade" if given as supplementary information and is not used for land registration.
Description
One acre equals 1/640 (0.0015625) square mile, 4,840 square yards, 43,560 square feet, or about 4,047 square metres (see below). While all modern variants of the acre contain 4,840 square yards, there are alternative definitions of a yard, so the exact size of an acre depends upon the particular yard on which it is based. Originally, an acre was understood as a strip of land forty perches (660 ft, or 1 furlong) long and four perches (66 ft) wide; this may have also been understood as an approximation of the amount of land a yoke of oxen could plough in one day (a furlong being "a furrow long"). A square enclosing one acre is approximately 69.57 yards, or 208 feet 9 inches, on a side. As a unit of measure, an acre has no prescribed shape; any area of 43,560 square feet is an acre.
US survey acres
In the international yard and pound agreement of 1959, the United States and five countries of the Commonwealth of Nations defined the international yard to be exactly 0.9144 metre. The US authorities decided that, while the refined definition would apply nationally in all other respects, the US survey foot (and thus the survey acre) would continue 'until such a time as it becomes desirable and expedient to readjust [it]'. By inference, an "international acre" may be calculated as exactly 4,046.8564224 square metres, but it does not have a basis in any international agreement.
Both the international acre and the US survey acre contain 1/640 of a square mile or 4,840 square yards, but alternative definitions of a yard are used (see survey foot and survey yard), so the exact size of an acre depends upon the yard upon which it is based. The US survey acre is about 4,046.872 square metres; its exact value (62,726,400,000/15,499,969 m2) is based on an inch defined by 1 metre = 39.37 inches exactly, as established by the Mendenhall Order of 1893. Surveyors in the United States use both international and survey feet, and consequently, both varieties of acre.
Since the difference between the US survey acre and international acre (0.016 square metres, 160 square centimetres or 24.8 square inches), is only about a quarter of the size of an A4 sheet or US letter, it is usually not important which one is being discussed. Areas are seldom measured with sufficient accuracy for the different definitions to be detectable. In October 2019, the US National Geodetic Survey and the National Institute of Standards and Technology announced their joint intent to end the "temporary" continuance of the US survey foot, mile, and acre units (as permitted by their 1959 decision, above), with effect from the end of 2022.
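Both acre values, and the small difference between them, follow directly from the two foot definitions (0.3048 m for the international foot, 1200/3937 m for the survey foot). A short Python check using exact rational arithmetic:

```python
from fractions import Fraction

SQ_FT_PER_ACRE = 43560  # 66 ft x 660 ft

intl_foot_m = Fraction(3048, 10000)   # international foot, exact by definition
survey_foot_m = Fraction(1200, 3937)  # US survey foot (1 m = 39.37 in exactly)

intl_acre_m2 = SQ_FT_PER_ACRE * intl_foot_m ** 2
survey_acre_m2 = SQ_FT_PER_ACRE * survey_foot_m ** 2

print(float(intl_acre_m2))                   # 4046.8564224 exactly
print(float(survey_acre_m2))                 # ≈ 4046.8726098...
print(float(survey_acre_m2 - intl_acre_m2))  # ≈ 0.0162 m2, i.e. ~160 cm2
```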
Spanish acre
The Puerto Rican cuerda () is sometimes called the "Spanish acre" in the continental United States.
Use
The acre is commonly used in many current and former Commonwealth countries by custom, and in a few it continues as a statute measure. These include Antigua and Barbuda, American Samoa, The Bahamas, Belize, the British Virgin Islands, Canada, the Cayman Islands, Dominica, the Falkland Islands, Grenada, Ghana, Guam, the Northern Mariana Islands, Jamaica, Montserrat, Samoa, Saint Lucia, St. Helena, St. Kitts and Nevis, St. Vincent and the Grenadines, Turks and Caicos, the United Kingdom, the United States, and the US Virgin Islands.
Republic of Ireland
In the Republic of Ireland, the hectare is legally used under European units of measurement directives; however, the acre (the same standard statute as used in the UK, not the old Irish acre, which was of a different size) is still widely used, especially in agriculture.
Indian subcontinent
In India, residential plots are measured in square feet or square metres, while agricultural land is measured in acres. In Sri Lanka, the division of an acre into 160 perches or 4 roods is common. In Pakistan, residential plots are measured in marlas and kanals (20 marlas = 1 kanal = 605 sq yards), while open or agricultural land is measured in acres (8 kanals = 1 acre) and murabbas (25 acres = 1 murabba = 200 kanals).
United Kingdom
Its use as a primary unit for trade in the United Kingdom ceased to be permitted from 1 October 1995, due to the 1994 amendment of the Weights and Measures Act, when it was replaced by the hectare; its use as a supplementary unit continues to be permitted indefinitely. This was with the exemption of land registration, which records the sale and possession of land; in 2010 HM Land Registry ended its exemption. The measure is still used to communicate with the public, and informally (without contractual force) by the farming and property industries.
Equivalence to other units of area
1 international acre is equal to the following metric units:
0.40468564224 hectare (A square with 100 m sides has an area of 1 hectare.)
4,046.8564224 square metres (or a square with approximately 63.61 m sides)
1 United States survey acre is equal to:
0.404687261 hectare
4,046.87261 square metres (1 square kilometre is equal to 247.105 acres)
1 acre (both variants) is equal to the following customary units:
66 feet × 660 feet (43,560 square feet)
10 square chains (1 chain = 66 feet = 22 yards = 4 rods = 100 links)
1 acre is approximately 208.71 feet × 208.71 feet (a square)
4,840 square yards
43,560 square feet
160 perches. A perch is equal to a square rod (1 square rod is 0.00625 acre)
4 roods
A furlong by a chain (furlong = 220 yards, chain = 22 yards)
40 rods by 4 rods, 160 rods2 (historically fencing was often sold in 40 rod lengths)
1/640 (0.0015625) square mile (1 square mile is equal to 640 acres)
Perhaps the easiest way for US residents to envision an acre is as a rectangle measuring 88 yards by 55 yards (1/10 of 880 yards by 1/16 of 880 yards), about the size of a standard American football field. To be more exact, one acre is 90.75% of a 100-yd-long by 53.33-yd-wide American football field (without the end zones). The full field, including the end zones, covers about 1.32 acres. For residents of other countries, the acre might be envisioned as rather more than half of a football pitch.
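These comparisons are direct arithmetic consequences of the 4,840-square-yard definition, as a short Python check shows (53.33 yd is the rounded field width used above):

```python
SQ_YD_PER_ACRE = 4840

print(88 * 55)                         # 4840: an 88 yd x 55 yd rectangle is one acre
print(SQ_YD_PER_ACRE / (100 * 53.33))  # ≈ 0.9075: 90.75% of the field, no end zones
print(120 * 53.33 / SQ_YD_PER_ACRE)    # ≈ 1.32 acres for the full field
```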
Historical origin
The word acre is derived from Norman French, attested for the first time in a text from Fécamp in 1006 with the meaning "agrarian measure". Acre dates back to the Old Scandinavian akr, "cultivated field, ploughed land", which is perpetuated in Icelandic and Faroese akur "field (of wheat)", Norwegian and Swedish åker, and Danish ager "field", and is cognate with German Acker, Dutch akker, Latin ager, Sanskrit ájra, and Greek agrós (ἀγρός). In English, an obsolete variant spelling was aker. According to the Act on the Composition of Yards and Perches, dating from around 1300, an acre is "40 perches [rods] in length and four in breadth", meaning 220 yards by 22 yards. Traditionally, an acre was roughly the amount of land tillable by a yoke of oxen in one day.
Before the enactment of the metric system, many countries in Europe used their own official acres. In France, the traditional unit of area was the arpent carré, a measure based on the Roman system of land measurement.
The acre was used only in Normandy (and neighbouring places outside its traditional borders), but its value varied greatly across Normandy, ranging from 3,632 to 9,725 square metres, with 8,172 square metres being the most frequent value. Even within a single pays of Normandy, for instance the pays de Caux, farmers (still in the 20th century) distinguished between the grande acre (68 ares, 66 centiares) and the petite acre (56 to 65 ca). The Normandy acre was usually divided into 4 vergées (roods) and 160 square perches, like the English acre.
The Normandy acre was equal to 1.6 arpents, the unit of area more commonly used in Northern France outside of Normandy. In Canada, the Paris arpent used in Quebec before the metric system was adopted is sometimes called the "French acre" in English, even though the Paris arpent and the Normandy acre were two very different units of area in ancient France (the Paris arpent became the unit of area of French Canada, whereas the Normandy acre was never used in French Canada).
In Germany, the Netherlands, and Eastern Europe the traditional unit of area was the morgen. Like the acre, the morgen was a unit of ploughland, representing a strip that could be ploughed by one man and an ox or horse in a morning. There were many variants of the morgen, differing between the different German territories. It was also used in Old Prussia, in the Balkans, Norway, and Denmark. Statutory values for the acre were enacted in England, and subsequently the United Kingdom, by acts of:
Edward I
Edward III
Henry VIII
George IV
Queen Victoria – the British Weights and Measures Act of 1878 defined it as containing 4,840 square yards.
Historically, the size of farms and landed estates in the United Kingdom was usually expressed in acres (or acres, roods, and perches), even if the number of acres was so large that it might conveniently have been expressed in square miles. For example, a certain landowner might have been said to own 32,000 acres of land, not 50 square miles of land.
The acre is related to the square mile, with 640 acres making up one square mile. One mile is 5,280 feet (1,760 yards). In western Canada and the western United States, divisions of land area were typically based on the square mile, and fractions thereof. If the square mile is divided into quarters, each quarter has a side length of 1/2 mile (880 yards) and is 1/4 square mile in area, or 160 acres. These subunits are typically then again divided into quarters, with each side being 1/4 mile long, and each being 1/16 of a square mile in area, or 40 acres. In the United States, farmland was typically divided as such, and the phrase "the back 40" refers to the 40-acre parcel at the back of the farm. Most of the Canadian Prairie Provinces and the US Midwest are on square-mile grids for surveying purposes.
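A minimal Python sketch of this repeated quartering, using only the figures above (halving the side length quarters the area):

```python
acres = 640.0        # one section = one square mile
side_yards = 1760.0  # one mile

for name in ("section", "quarter section", "quarter-quarter (the 'back 40')"):
    print(f"{name}: {side_yards:.0f} yd on a side, {acres:.0f} acres")
    side_yards /= 2  # halve the side...
    acres /= 4       # ...and the area drops by a factor of four
```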
Legacy units
Customary acre – The customary acre was roughly similar to the Imperial acre, but it was subject to considerable local variation similar to the variation in carucates, virgates, bovates, nooks, and farundels. These may have been multiples of the customary acre, rather than the statute acre.
Builder's acre = an even 40,000 square feet (about 200 by 200 feet), used in US real-estate development to simplify the math and for marketing. It is nearly 10% smaller than a survey acre, and the discrepancy has led to lawsuits alleging misrepresentation.
Feddan – Middle Eastern unit of area, equal to about 4,200 m2 in Egypt.
Scottish acre = 1.3 Imperial acres (5,080 m2, an obsolete Scottish measurement)
Irish acre = 7,840 square yards
Cheshire acre = 10,240 square yards
Stremma or Greek acre ≈ 10,000 square Greek feet, but now set at exactly 1,000 square metres (a similar unit was the zeugarion)
Dunam or Turkish acre ≈ 1,600 square Turkish paces, but now set at exactly 1,000 square metres (a similar unit was the çift)
Actus quadratus or Roman acre ≈ 14,400 square Roman feet (about 1,260 square metres)
God's Acre – a synonym for a churchyard.
Long acre – the grass strip on either side of a road that may be used for illicit grazing.
Town acre was a term used in early 19th century in the planning of towns on a grid plan, such as Adelaide, South Australia and Wellington, New Plymouth and Nelson in New Zealand. The land was divided into plots of an Imperial acre, and these became known as town acres.
See also
Acre-foot – used in US to measure a large water volume
Anthropic units
Arpent – used in Louisiana to measure length and area
Conversion of units
Jugerum – Roman unit of area
Morgen ("morning") – normally of a Tagwerk ("day work") of ploughing with an ox
Mu – Chinese acre
Public Land Survey System
Quarter acre
Section (United States land surveying)
Spanish units of measurement
Notes
References
External links
The Units of Measurement Regulations 1995 (United Kingdom)
|
Customary units of measurement in the United States;Imperial units;Surveying;Units of area
|
https://en.wikipedia.org/wiki/Adenosine%20triphosphate
|
Adenosine triphosphate (ATP) is a nucleoside triphosphate that provides energy to drive and support many processes in living cells, such as muscle contraction, nerve impulse propagation, and chemical synthesis. Found in all known forms of life, it is often referred to as the "molecular unit of currency" for intracellular energy transfer.
When consumed in a metabolic process, ATP converts either to adenosine diphosphate (ADP) or to adenosine monophosphate (AMP). Other processes regenerate ATP. It is also a precursor to DNA and RNA, and is used as a coenzyme. An average adult human processes around 50 kilograms (about 100 moles) daily.
From the perspective of biochemistry, ATP is classified as a nucleoside triphosphate, which indicates that it consists of three components: a nitrogenous base (adenine), the sugar ribose, and the triphosphate.
Structure
ATP consists of three parts: a sugar (ribose), the nitrogenous base adenine, and a triphosphate group. More specifically, the adenine is attached by its 9-nitrogen atom to the 1′ carbon atom of the ribose, which in turn is attached at its 5′ carbon atom to the triphosphate group. In its many reactions related to metabolism, the adenine and sugar groups remain unchanged, but the triphosphate is converted to di- and monophosphate, giving respectively the derivatives ADP and AMP. The three phosphoryl groups are labeled as alpha (α), beta (β), and, for the terminal phosphate, gamma (γ).
In neutral solution, ionized ATP exists mostly as ATP4−, with a small proportion of ATP3−.
Metal cation binding
Polyanionic and featuring a potentially chelating polyphosphate group, ATP binds metal cations with high affinity; the binding constant for Mg2+ is particularly high. The binding of a divalent cation, almost always magnesium, strongly affects the interaction of ATP with various proteins. Due to the strength of the ATP–Mg2+ interaction, ATP exists in the cell mostly as a complex with Mg2+ bonded to the phosphate oxygen centers.
A second magnesium ion is critical for ATP binding in the kinase domain, and the presence of Mg2+ regulates kinase activity. From an RNA world perspective, it is interesting that ATP can carry a Mg2+ ion that catalyzes RNA polymerization.
Chemical properties
Salts of ATP can be isolated as colorless solids.
ATP is stable in aqueous solutions between pH 6.8 and 7.4 (in the absence of catalysts). At more extreme pH levels, it rapidly hydrolyses to ADP and phosphate. Living cells maintain the ratio of ATP to ADP at a point ten orders of magnitude from equilibrium, with ATP concentrations fivefold higher than the concentration of ADP. In the context of biochemical reactions, the P-O-P bonds are frequently referred to as high-energy bonds.
Reactive aspects
The hydrolysis of ATP into ADP and inorganic phosphate
ATP(aq) + H2O(l) → ADP(aq) + [HPO4]2−(aq) + H+(aq)
releases about 20.5 kJ/mol of enthalpy. This may differ under physiological conditions if the reactant and products are not exactly in these ionization states. The values of the free energy released by cleaving either a phosphate (Pi) or a pyrophosphate (PPi) unit from ATP at standard state concentrations of 1 mol/L at pH 7 are:
ATP + H2O → ADP + Pi ΔG°' = −30.5 kJ/mol (−7.3 kcal/mol)
ATP + H2O → AMP + PPi ΔG°' = −45.6 kJ/mol (−10.9 kcal/mol)
These abbreviated equations at a pH near 7 can be written more explicitly (R = adenosyl):
[RO-P(O)2-O-P(O)2-O-PO3]4− + H2O → [RO-P(O)2-O-PO3]3− + [HPO4]2− + H+
[RO-P(O)2-O-P(O)2-O-PO3]4− + H2O → [RO-PO3]2− + [HO3P-O-PO3]3− + H+
At cytoplasmic conditions, where the ADP/ATP ratio is 10 orders of magnitude from equilibrium, the ΔG is around −57 kJ/mol.
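The jump from −30.5 kJ/mol at standard state to roughly −57 kJ/mol in the cytoplasm follows from the relation ΔG = ΔG°′ + RT ln([ADP][Pi]/[ATP]). A minimal Python sketch; the concentrations below are illustrative assumptions chosen to show the effect, not values from this article:

```python
import math

R = 8.314e-3  # gas constant, kJ/(mol*K)
T = 310.0     # approximate body temperature, K
dG0 = -30.5   # standard transformed free energy of ATP hydrolysis, kJ/mol

# Illustrative (assumed) cytoplasmic concentrations, mol/L:
ATP, ADP, Pi = 5e-3, 5e-5, 5e-3

dG = dG0 + R * T * math.log(ADP * Pi / ATP)
print(f"cytoplasmic dG ~ {dG:.0f} kJ/mol")  # ≈ −56, near the −57 figure quoted
```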
Along with pH, the free energy change of ATP hydrolysis is also associated with Mg2+ concentration, from ΔG°' = −35.7 kJ/mol at a Mg2+ concentration of zero, to ΔG°' = −31 kJ/mol at [Mg2+] = 5 mM. Higher concentrations of Mg2+ decrease free energy released in the reaction due to binding of Mg2+ ions to negatively charged oxygen atoms of ATP at pH 7.
Production from AMP and ADP
Production, aerobic conditions
A typical intracellular concentration of ATP may be 1–10 μmol per gram of tissue in a variety of eukaryotes. The dephosphorylation of ATP and rephosphorylation of ADP and AMP occur repeatedly in the course of aerobic metabolism.
ATP can be produced by a number of distinct cellular processes; the three main pathways in eukaryotes are (1) glycolysis, (2) the citric acid cycle/oxidative phosphorylation, and (3) beta-oxidation. The overall process of oxidizing glucose to carbon dioxide, the combination of pathways 1 and 2, known as cellular respiration, produces about 30 equivalents of ATP from each molecule of glucose.
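The figure of about 30 ATP per glucose can be reproduced with simple bookkeeping. The sketch below uses commonly assumed P/O ratios (2.5 ATP per NADH and 1.5 per FADH2), which are textbook approximations rather than values stated here:

```python
P_O_NADH, P_O_FADH2 = 2.5, 1.5  # assumed ATP yields per reduced carrier

substrate_level = 2 + 2  # net glycolytic ATP + citric-acid-cycle GTP (per glucose)
nadh = 2 + 2 + 6         # glycolysis + pyruvate dehydrogenase + citric acid cycle
fadh2 = 2                # citric acid cycle

total = substrate_level + nadh * P_O_NADH + fadh2 * P_O_FADH2
print(total)  # 32.0; shuttle costs for cytosolic NADH bring this down toward ~30
```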
ATP production by a non-photosynthetic aerobic eukaryote occurs mainly in the mitochondria, which comprise nearly 25% of the volume of a typical cell.
Glycolysis
In glycolysis, glucose and glycerol are metabolized to pyruvate. Glycolysis generates two equivalents of ATP through substrate-level phosphorylation catalyzed by two enzymes, phosphoglycerate kinase (PGK) and pyruvate kinase. Two equivalents of nicotinamide adenine dinucleotide (NADH) are also produced, which can be oxidized via the electron transport chain and result in the generation of additional ATP by ATP synthase. The pyruvate generated as an end-product of glycolysis is a substrate for the Krebs cycle.
Glycolysis is viewed as consisting of two phases with five steps each. In Phase 1, "the preparatory phase", glucose is converted to 2 d-glyceraldehyde-3-phosphate (g3p). One ATP is invested in Step 1, and another ATP is invested in Step 3; Steps 1 and 3 of glycolysis are referred to as "priming steps". In Phase 2, two equivalents of g3p are converted to two pyruvates. In Step 7, two ATP are produced, and in Step 10, two further equivalents of ATP are produced. In Steps 7 and 10, ATP is generated from ADP. A net of two ATPs is formed in the glycolysis cycle. Glycolysis then feeds into the citric acid cycle, which produces additional equivalents of ATP.
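The step accounting above reduces to a four-entry ledger; a minimal Python check of the net yield:

```python
# ATP change per glucose at the numbered glycolysis steps described above
ledger = {1: -1, 3: -1, 7: +2, 10: +2}  # steps 7 and 10 run twice (two g3p)
print(sum(ledger.values()))             # net +2 ATP per glucose
```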
Regulation
In glycolysis, hexokinase is directly inhibited by its product, glucose-6-phosphate, and pyruvate kinase is inhibited by ATP itself. The main control point for the glycolytic pathway is phosphofructokinase (PFK), which is allosterically inhibited by high concentrations of ATP and activated by high concentrations of AMP. The inhibition of PFK by ATP is unusual since ATP is also a substrate in the reaction catalyzed by PFK; the active form of the enzyme is a tetramer that exists in two conformations, only one of which binds the second substrate fructose-6-phosphate (F6P). The protein has two binding sites for ATP – the active site is accessible in either protein conformation, but ATP binding to the inhibitor site stabilizes the conformation that binds F6P poorly. A number of other small molecules can compensate for the ATP-induced shift in equilibrium conformation and reactivate PFK, including cyclic AMP, ammonium ions, inorganic phosphate, and fructose-1,6- and -2,6-bisphosphate.
Citric acid cycle
In the mitochondrion, pyruvate is oxidized by the pyruvate dehydrogenase complex to the acetyl group, which is fully oxidized to carbon dioxide by the citric acid cycle (also known as the Krebs cycle). Every "turn" of the citric acid cycle produces two molecules of carbon dioxide, one equivalent of guanosine triphosphate (GTP) through substrate-level phosphorylation catalyzed by succinyl-CoA synthetase as succinyl-CoA is converted to succinate, three equivalents of NADH, and one equivalent of FADH2. NADH and FADH2 are recycled (to NAD+ and FAD, respectively) by oxidative phosphorylation, generating additional ATP. The oxidation of NADH results in the synthesis of 2–3 equivalents of ATP, and the oxidation of one FADH2 yields 1–2 equivalents of ATP. The majority of cellular ATP is generated by this process. Although the citric acid cycle itself does not involve molecular oxygen, it is an obligately aerobic process because O2 is used to recycle the NADH and FADH2. In the absence of oxygen, the citric acid cycle ceases.
The generation of ATP by the mitochondrion from cytosolic NADH relies on the malate-aspartate shuttle (and to a lesser extent, the glycerol-phosphate shuttle) because the inner mitochondrial membrane is impermeable to NADH and NAD+. Instead of transferring the generated NADH, a malate dehydrogenase enzyme converts oxaloacetate to malate, which is translocated to the mitochondrial matrix. Another malate dehydrogenase-catalyzed reaction occurs in the opposite direction, producing oxaloacetate and NADH from the newly transported malate and the mitochondrion's interior store of NAD+. A transaminase converts the oxaloacetate to aspartate for transport back across the membrane and into the intermembrane space.
In oxidative phosphorylation, the passage of electrons from NADH and FADH2 through the electron transport chain releases the energy to pump protons out of the mitochondrial matrix and into the intermembrane space. This pumping generates a proton motive force that is the net effect of a pH gradient and an electric potential gradient across the inner mitochondrial membrane. Flow of protons down this potential gradient – that is, from the intermembrane space to the matrix – yields ATP by ATP synthase; three ATP are produced per full rotation of the enzyme's rotor.
Although oxygen consumption appears fundamental for the maintenance of the proton motive force, in the event of oxygen shortage (hypoxia), intracellular acidosis (mediated by enhanced glycolytic rates and ATP hydrolysis) contributes to mitochondrial membrane potential and directly drives ATP synthesis.
Most of the ATP synthesized in the mitochondria will be used for cellular processes in the cytosol; thus it must be exported from its site of synthesis in the mitochondrial matrix. ATP outward movement is favored by the membrane's electrochemical potential because the cytosol has a relatively positive charge compared to the relatively negative matrix. Exporting one ATP costs 1 H+, and producing one ATP costs about 3 H+; therefore, making and exporting one ATP requires about 4 H+. The inner membrane contains an antiporter, the ADP/ATP translocase, which is an integral membrane protein used to exchange newly synthesized ATP in the matrix for ADP in the intermembrane space.
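This proton arithmetic also connects to the ATP-per-NADH yields quoted earlier. A sketch assuming roughly 10 H+ pumped per NADH oxidized (an assumed textbook figure, not stated in this article):

```python
h_synthesis = 3          # H+ through ATP synthase per ATP (from the text above)
h_export = 1             # H+ per ATP exported to the cytosol
h_pumped_per_nadh = 10   # assumed protons pumped per NADH oxidized

print(h_pumped_per_nadh / (h_synthesis + h_export))  # 2.5 ATP delivered per NADH
```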
Regulation
The citric acid cycle is regulated mainly by the availability of key substrates, particularly the ratio of NAD+ to NADH and the concentrations of calcium, inorganic phosphate, ATP, ADP, and AMP. Citrate – the ion that gives its name to the cycle – is a feedback inhibitor of citrate synthase and also inhibits PFK, providing a direct link between the regulation of the citric acid cycle and glycolysis.
Beta oxidation
In the presence of air and various cofactors and enzymes, fatty acids are converted to acetyl-CoA. The pathway is called beta-oxidation. Each cycle of beta-oxidation shortens the fatty acid chain by two carbon atoms and produces one equivalent each of acetyl-CoA, NADH, and FADH2. The acetyl-CoA is metabolized by the citric acid cycle to generate ATP, while the NADH and FADH2 are used by oxidative phosphorylation to generate ATP. Dozens of ATP equivalents are generated by the beta-oxidation of a single long acyl chain.
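The "dozens of ATP equivalents" can be made concrete with a worked example for palmitate (C16). Every per-unit yield below is a textbook approximation (the same assumed P/O ratios as above, ~10 ATP per acetyl-CoA, ~2 ATP equivalents for activation), not a figure from this article:

```python
carbons = 16                  # palmitate
cycles = carbons // 2 - 1     # 7 rounds of beta-oxidation
acetyl_coa = carbons // 2     # 8 acetyl-CoA produced

atp = (acetyl_coa * 10        # citric acid cycle + oxidative phosphorylation
       + cycles * 2.5         # one NADH per round
       + cycles * 1.5         # one FADH2 per round
       - 2)                   # activation to palmitoyl-CoA costs ~2 ATP equivalents
print(atp)                    # ≈ 106 ATP per palmitate
```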
Regulation
In oxidative phosphorylation, the key control point is the reaction catalyzed by cytochrome c oxidase, which is regulated by the availability of its substrate – the reduced form of cytochrome c. The amount of reduced cytochrome c available is directly related to the amounts of other substrates:
1/2 NADH + cyt c(oxidized) + ADP + Pi ⇌ 1/2 NAD+ + cyt c(reduced) + ATP
which directly implies this equation:
[cyt c(reduced)]/[cyt c(oxidized)] = ([NADH]/[NAD+])^(1/2) × ([ADP][Pi]/[ATP]) × Keq
Thus, a high ratio of [NADH] to [NAD+] or a high ratio of [ADP] [Pi] to [ATP] imply a high amount of reduced cytochrome c and a high level of cytochrome c oxidase activity. An additional level of regulation is introduced by the transport rates of ATP and NADH between the mitochondrial matrix and the cytoplasm.
Ketosis
Ketone bodies can be used as fuels, yielding 22 ATP and 2 GTP molecules per acetoacetate molecule when oxidized in the mitochondria. Ketone bodies are transported from the liver to other tissues, where acetoacetate and beta-hydroxybutyrate can be reconverted to acetyl-CoA to produce reducing equivalents (NADH and FADH2), via the citric acid cycle. Ketone bodies cannot be used as fuel by the liver, because the liver lacks the enzyme β-ketoacyl-CoA transferase, also called thiolase. Acetoacetate in low concentrations is taken up by the liver and undergoes detoxification through the methylglyoxal pathway which ends with lactate. Acetoacetate in high concentrations is absorbed by cells other than those in the liver and enters a different pathway via 1,2-propanediol. Though the pathway follows a different series of steps requiring ATP, 1,2-propanediol can be turned into pyruvate.
Production, anaerobic conditions
Fermentation is the metabolism of organic compounds in the absence of air. It involves substrate-level phosphorylation in the absence of a respiratory electron transport chain.
The equation for the reaction of glucose to form lactic acid is:
C6H12O6 + 2 ADP + 2 Pi → 2 CH3CH(OH)COOH + 2 ATP + 2 H2O
Anaerobic respiration is respiration in the absence of O2. Prokaryotes can utilize a variety of electron acceptors, including nitrate, sulfate, and carbon dioxide. In anaerobic organisms and prokaryotes, different pathways result in ATP. ATP is produced in the chloroplasts of green plants in a process similar to oxidative phosphorylation, called photophosphorylation.
ATP replenishment by nucleoside diphosphate kinases
ATP can also be synthesized through several so-called "replenishment" reactions catalyzed by the enzyme families of nucleoside diphosphate kinases (NDKs), which use other nucleoside triphosphates as a high-energy phosphate donor, and the ATP:guanido-phosphotransferase family.
ATP production during photosynthesis
In plants, ATP is synthesized in the thylakoid membrane of the chloroplast. The process is called photophosphorylation. The "machinery" is similar to that in mitochondria except that light energy is used to pump protons across a membrane to produce a proton-motive force. ATP synthase then ensues exactly as in oxidative phosphorylation. Some of the ATP produced in the chloroplasts is consumed in the Calvin cycle, which produces triose sugars.
ATP recycling
The total quantity of ATP in the human body is about 0.1 mol/L. The majority of ATP is recycled from ADP by the aforementioned processes. Thus, at any given time, the total amount of ATP + ADP remains fairly constant.
The energy used by human cells in an adult requires the hydrolysis of 100 to 150 mol/L of ATP daily, which means a human will typically use the equivalent of their body weight in ATP over the course of the day. Each equivalent of ATP is recycled 1000–1500 times during a single day, corresponding to roughly 9×10^20 molecules per second.
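The turnover figures quoted above are mutually consistent, as a quick Python check shows (the article's per-litre units cancel in the recycling ratio):

```python
AVOGADRO = 6.022e23
total_atp = 0.1        # total ATP quantity, as stated above
seconds_per_day = 86400

for daily_mol in (100, 150):
    recycles = daily_mol / total_atp
    rate = daily_mol * AVOGADRO / seconds_per_day
    print(f"{daily_mol} mol/day -> {recycles:.0f} recycles/day, "
          f"~{rate:.1e} molecules/s")
# 100 -> 1000 recycles/day, ~7.0e+20/s; 150 -> 1500 recycles/day, ~1.0e+21/s,
# bracketing the quoted ~9e20 molecules per second
```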
Biochemical functions
Cellular energy production
The conversion of ATP to ADP is the principal mechanism for energy supply in biological processes. Energy is produced in cells when the terminal phosphate group in an ATP molecule is removed from the chain to produce adenosine diphosphate (ADP) when water hydrolyzes ATP:
ATP + H2O → ADP + [HPO4]2− + H+ + energy
Removing one further phosphate group from ADP, to produce adenosine monophosphate (AMP), releases additional energy.
Intracellular signaling
ATP is involved in signal transduction by serving as substrate for kinases, enzymes that transfer phosphate groups. Kinases are the most common ATP-binding proteins. They share a small number of common folds. Phosphorylation of a protein by a kinase can activate a cascade such as the mitogen-activated protein kinase cascade.
ATP is also a substrate of adenylate cyclase, most commonly in G protein-coupled receptor signal transduction pathways and is transformed to second messenger, cyclic AMP, which is involved in triggering calcium signals by the release of calcium from intracellular stores. This form of signal transduction is particularly important in brain function, although it is involved in the regulation of a multitude of other cellular processes.
DNA and RNA synthesis
ATP is one of four monomers required in the synthesis of RNA. The process is promoted by RNA polymerases. A similar process occurs in the formation of DNA, except that ATP is first converted to the deoxyribonucleotide dATP. Like many condensation reactions in nature, DNA replication and DNA transcription also consume ATP.
Amino acid activation in protein synthesis
Aminoacyl-tRNA synthetase enzymes consume ATP in the attachment of amino acids to tRNA, forming aminoacyl-tRNA complexes. Aminoacyl transferase binds AMP-amino acid to tRNA. The coupling reaction proceeds in two steps:
aa + ATP ⟶ aa-AMP + PPi
aa-AMP + tRNA ⟶ aa-tRNA + AMP
The amino acid is coupled to the penultimate nucleotide at the 3′-end of the tRNA (the A in the sequence CCA) via an ester bond.
ATP binding cassette transporter
Transporting chemicals out of a cell against a gradient is often associated with ATP hydrolysis. Transport is mediated by ATP binding cassette transporters. The human genome encodes 48 ABC transporters, which are used for exporting drugs, lipids, and other compounds.
Extracellular signalling and neurotransmission
Cells secrete ATP to communicate with other cells in a process called purinergic signalling. ATP serves as a neurotransmitter in many parts of the nervous system, modulates ciliary beating, and affects vascular oxygen supply, among other roles. ATP is either secreted directly across the cell membrane through channel proteins or is pumped into vesicles which then fuse with the membrane. Cells detect ATP using the purinergic receptor proteins P2X and P2Y. ATP has been shown to be a critically important signalling molecule for microglia–neuron interactions in the adult brain, as well as during brain development. Furthermore, tissue-injury-induced ATP signalling is a major factor in rapid microglial phenotype changes.
Muscle contraction
ATP fuels muscle contractions. Muscle contractions are regulated by signaling pathways, with different muscle types regulated by specific pathways and stimuli based on their particular function. In all muscle types, however, contraction is performed by the proteins actin and myosin.
ATP is initially bound to myosin. When ATPase hydrolyzes the bound ATP into ADP and inorganic phosphate, myosin is positioned in a way that it can bind to actin. Myosin bound by ADP and Pi forms cross-bridges with actin and the subsequent release of ADP and Pi releases energy as the power stroke. The power stroke causes actin filament to slide past the myosin filament, shortening the muscle and causing a contraction. Another ATP molecule can then bind to myosin, releasing it from actin and allowing this process to repeat.
Protein solubility
ATP has recently been proposed to act as a biological hydrotrope and has been shown to affect proteome-wide solubility.
Abiogenic origins
Acetyl phosphate (AcP), a precursor to ATP, can readily be synthesized at modest yields from thioacetate at pH 7 and 20 °C, and at pH 8 and 50 °C, although acetyl phosphate is less stable in warmer temperatures and alkaline conditions than in cooler and acidic-to-neutral conditions. It is unable to promote polymerization of ribonucleotides and amino acids, being capable only of phosphorylation of organic compounds. It was shown that AcP can promote aggregation and stabilization of AMP in the presence of Na+, and that aggregation of nucleotides could promote polymerization above 75 °C in the absence of Na+. It is possible that polymerization promoted by AcP could occur at mineral surfaces. It was shown that ADP can only be phosphorylated to ATP by AcP, while other nucleoside triphosphates were not phosphorylated by AcP. This might explain why all lifeforms use ATP to drive biochemical reactions.
ATP analogues
Biochemistry laboratories often use in vitro studies to explore ATP-dependent molecular processes. ATP analogs are also used in X-ray crystallography to determine a protein structure in complex with ATP, often together with other substrates.
Enzyme inhibitors of ATP-dependent enzymes such as kinases are needed to examine the binding sites and transition states involved in ATP-dependent reactions.
Most useful ATP analogs cannot be hydrolyzed as ATP would be; instead, they trap the enzyme in a structure closely related to the ATP-bound state. Adenosine 5′-(γ-thiotriphosphate) is an extremely common ATP analog in which one of the gamma-phosphate oxygens is replaced by a sulfur atom; this anion is hydrolyzed at a dramatically slower rate than ATP itself and functions as an inhibitor of ATP-dependent processes. In crystallographic studies, hydrolysis transition states are modeled by the bound vanadate ion.
Caution is warranted in interpreting the results of experiments using ATP analogs, since some enzymes can hydrolyze them at appreciable rates at high concentration.
Medical use
ATP is used intravenously for some heart-related conditions.
History
ATP was discovered in 1929 from muscle tissue by Karl Lohmann and Jendrassik and, independently, by Cyrus Fiske and Yellapragada Subba Rao of Harvard Medical School, both teams competing against each other to find an assay for phosphorus.
It was proposed to be the intermediary between energy-yielding and energy-requiring reactions in cells by Fritz Albert Lipmann in 1941. He played a major role in establishing that ATP is the energy currency of a cell.
It was first synthesized in the laboratory by Alexander Todd in 1948, and he was awarded the Nobel Prize in Chemistry in 1957 partly for this work.
The 1978 Nobel Prize in Chemistry was awarded to Peter Dennis Mitchell for the discovery of the chemiosmotic mechanism of ATP synthesis.
The 1997 Nobel Prize in Chemistry was divided, one half jointly to Paul D. Boyer and John E. Walker "for their elucidation of the enzymatic mechanism underlying the synthesis of adenosine triphosphate (ATP)" and the other half to Jens C. Skou "for the first discovery of an ion-transporting enzyme, Na+, K+ -ATPase."
See also
Adenosine-tetraphosphatase
Adenosine methylene triphosphate
ATPases
ATP test
Creatine
Cyclic adenosine monophosphate (cAMP)
Nucleotide exchange factor
Phosphagen
References
External links
ATP bound to proteins in the PDB
ScienceAid: Energy ATP and Exercise
PubChem entry for Adenosine Triphosphate
KEGG entry for Adenosine Triphosphate
ATP 3D model
|
Adenosine receptor agonists;Cellular respiration;Coenzymes;Ergogenic aids;Exercise physiology;Neurotransmitters;Nucleotides;Phosphate esters;Purinergic signalling;Purines;Substances discovered in the 1920s
|
https://en.wikipedia.org/wiki/Antibiotic
|
An antibiotic is a type of antimicrobial substance active against bacteria. It is the most important type of antibacterial agent for fighting bacterial infections, and antibiotic medications are widely used in the treatment and prevention of such infections. They may either kill or inhibit the growth of bacteria. A limited number of antibiotics also possess antiprotozoal activity. Antibiotics are not effective against viruses such as the ones which cause the common cold or influenza. Drugs which inhibit growth of viruses are termed antiviral drugs or antivirals. Antibiotics are also not effective against fungi. Drugs which inhibit growth of fungi are called antifungal drugs.
Sometimes, the term antibiotic—literally "opposing life", from the Greek roots ἀντι anti, "against" and βίος bios, "life"—is broadly used to refer to any substance used against microbes, but in the usual medical usage, antibiotics (such as penicillin) are those produced naturally (by one microorganism fighting another), whereas non-antibiotic antibacterials (such as sulfonamides and antiseptics) are fully synthetic. However, both classes have the same effect of killing or preventing the growth of microorganisms, and both are included in antimicrobial chemotherapy. "Antibacterials" include bactericides, bacteriostatics, antibacterial soaps, and chemical disinfectants, whereas antibiotics are an important class of antibacterials used more specifically in medicine and sometimes in livestock feed.
Antibiotics have been used since ancient times. Many civilizations used topical application of moldy bread, with many references to its beneficial effects arising from ancient Egypt, Nubia, China, Serbia, Greece, and Rome. The first person to directly document the use of molds to treat infections was John Parkinson (1567–1650). Antibiotics revolutionized medicine in the 20th century. Synthetic antibiotic chemotherapy as a science and development of antibacterials began in Germany with Paul Ehrlich in the late 1880s. Alexander Fleming (1881–1955) discovered modern day penicillin in 1928, the widespread use of which proved significantly beneficial during wartime. The first sulfonamide and the first systemically active antibacterial drug, Prontosil, was developed by a research team led by Gerhard Domagk in 1932 or 1933 at the Bayer Laboratories of the IG Farben conglomerate in Germany. However, the effectiveness and easy access to antibiotics have also led to their overuse and some bacteria have evolved resistance to them. Antimicrobial resistance (AMR), a naturally occurring process, is driven largely by the misuse and overuse of antimicrobials. Yet, at the same time, many people around the world do not have access to essential antimicrobials. The World Health Organization has classified AMR as a widespread "serious threat [that] is no longer a prediction for the future, it is happening right now in every region of the world and has the potential to affect anyone, of any age, in any country". Each year, nearly 5 million deaths are associated with AMR globally. Global deaths attributable to AMR numbered 1.27 million in 2019.
Etymology
The term 'antibiosis', meaning "against life", was introduced by the French bacteriologist Jean Paul Vuillemin as a descriptive name of the phenomenon exhibited by these early antibacterial drugs. Antibiosis was first described in 1877 in bacteria when Louis Pasteur and Robert Koch observed that an airborne bacillus could inhibit the growth of Bacillus anthracis. These drugs were later renamed antibiotics by Selman Waksman, an American microbiologist, in 1947.
The term antibiotic was first used in 1942 by Selman Waksman and his collaborators in journal articles to describe any substance produced by a microorganism that is antagonistic to the growth of other microorganisms in high dilution. This definition excluded substances that kill bacteria but that are not produced by microorganisms (such as gastric juices and hydrogen peroxide). It also excluded synthetic antibacterial compounds such as the sulfonamides. In current usage, the term "antibiotic" is applied to any medication that kills bacteria or inhibits their growth, regardless of whether that medication is produced by a microorganism or not.
The term "antibiotic" derives from anti + βιωτικός (biōtikos), "fit for life, lively", which comes from βίωσις (biōsis), "way of life", and that from βίος (bios), "life". The term "antibacterial" derives from Greek ἀντί (anti), "against" + βακτήριον (baktērion), diminutive of βακτηρία (baktēria), "staff, cane", because the first bacteria to be discovered were rod-shaped.
Usage
Medical uses
Antibiotics are used to treat or prevent bacterial infections, and sometimes protozoan infections. (Metronidazole is effective against a number of parasitic diseases). When an infection is suspected of being responsible for an illness but the responsible pathogen has not been identified, an empiric therapy is adopted. This involves the administration of a broad-spectrum antibiotic based on the signs and symptoms presented and is initiated pending laboratory results that can take several days.
When the responsible pathogenic microorganism is already known or has been identified, definitive therapy can be started. This will usually involve the use of a narrow-spectrum antibiotic. The choice of antibiotic given will also be based on its cost. Identification is critically important as it can reduce the cost and toxicity of the antibiotic therapy and also reduce the possibility of the emergence of antimicrobial resistance. To avoid surgery, antibiotics may be given for non-complicated acute appendicitis.
Antibiotics may be given as a preventive measure and this is usually limited to at-risk populations such as those with a weakened immune system (particularly in HIV cases to prevent pneumonia), those taking immunosuppressive drugs, cancer patients, and those having surgery. Their use in surgical procedures is to help prevent infection of incisions. They have an important role in dental antibiotic prophylaxis where their use may prevent bacteremia and consequent infective endocarditis. Antibiotics are also used to prevent infection in cases of neutropenia particularly cancer-related.
The use of antibiotics for secondary prevention of coronary heart disease is not supported by current scientific evidence, and may actually increase cardiovascular mortality, all-cause mortality and the occurrence of stroke.
Routes of administration
There are many different routes of administration for antibiotic treatment. Antibiotics are usually taken by mouth. In more severe cases, particularly deep-seated systemic infections, antibiotics can be given intravenously or by injection. Where the site of infection is easily accessed, antibiotics may be given topically, in the form of eye drops onto the conjunctiva for conjunctivitis or ear drops for ear infections and acute cases of swimmer's ear. Topical use is also one of the treatment options for some skin conditions, including acne and cellulitis. Advantages of topical application include achieving a high and sustained concentration of antibiotic at the site of infection, reducing the potential for systemic absorption and toxicity, and reducing the total volume of antibiotic required, thereby also lowering the risk of antibiotic misuse. Topical antibiotics applied over certain types of surgical wounds have been reported to reduce the risk of surgical site infections. However, there are certain general causes for concern with topical administration of antibiotics: some systemic absorption of the antibiotic may occur, the quantity of antibiotic applied is difficult to dose accurately, and there is the possibility of local hypersensitivity reactions or contact dermatitis. It is recommended to administer antibiotics as soon as possible, especially in life-threatening infections, and many emergency departments stock antibiotics for this purpose.
Global consumption
Antibiotic consumption varies widely between countries. The WHO report on surveillance of antibiotic consumption, published in 2018, analysed 2015 data from 65 countries, measured in defined daily doses (DDD) per 1,000 inhabitants per day. Mongolia had the highest consumption, at a rate of 64.4, and Burundi the lowest, at 4.4. Amoxicillin and amoxicillin/clavulanic acid were the most frequently consumed antibiotics.
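To make the metric concrete, here is a minimal sketch in Python of how a consumption figure in DDD per 1,000 inhabitants per day can be computed; the tonnage, population, and the amoxicillin DDD value of 1.5 g are illustrative assumptions for the example, not WHO data.

def ddd_per_1000_per_day(total_grams: float, ddd_grams: float,
                         population: int, days: int = 365) -> float:
    """Convert total consumption of one antibiotic into DDD per 1,000 inhabitants per day."""
    total_ddd = total_grams / ddd_grams          # number of standard daily doses consumed
    return total_ddd / (population / 1000) / days

# Hypothetical example: 30 tonnes of amoxicillin (assumed DDD = 1.5 g)
# consumed in one year by a country of 5 million people.
rate = ddd_per_1000_per_day(total_grams=30_000_000, ddd_grams=1.5,
                            population=5_000_000, days=365)
print(f"{rate:.1f} DDD per 1,000 inhabitants per day")  # ~11.0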
Side effects
Antibiotics are screened for any negative effects before their approval for clinical use, and are usually considered safe and well tolerated. However, some antibiotics have been associated with a wide range of adverse side effects, ranging from mild to very severe depending on the type of antibiotic used, the microbes targeted, and the individual patient. Side effects may reflect the pharmacological or toxicological properties of the antibiotic or may involve hypersensitivity or allergic reactions. Adverse effects range from fever and nausea to major allergic reactions, including photodermatitis and anaphylaxis.
Common side effects of oral antibiotics include diarrhea, caused by disruption of the species composition of the intestinal flora, which can lead, for example, to overgrowth of pathogenic bacteria such as Clostridioides difficile. Taking probiotics during the course of antibiotic treatment can help prevent antibiotic-associated diarrhea. Antibacterials can also affect the vaginal flora and may lead to overgrowth of yeast species of the genus Candida in the vulvo-vaginal area. Additional side effects can result from interaction with other drugs, such as the possibility of tendon damage from the administration of a quinolone antibiotic with a systemic corticosteroid.
Some antibiotics may also damage the mitochondrion, a bacteria-derived organelle found in eukaryotic cells, including human cells. Mitochondrial damage causes oxidative stress in cells and has been suggested as a mechanism for side effects from fluoroquinolones. Some antibiotics are also known to affect chloroplasts.
Interactions
Birth control pills
There are few well-controlled studies on whether antibiotic use increases the risk of oral contraceptive failure. The majority of studies indicate antibiotics do not interfere with birth control pills; clinical studies suggest the failure rate of contraceptive pills caused by antibiotics is very low (about 1%). Situations that may increase the risk of oral contraceptive failure include non-compliance (missed pills), vomiting, and diarrhea, as well as gastrointestinal disorders and interpatient variability in oral contraceptive absorption that affect ethinylestradiol serum levels. Women with menstrual irregularities may be at higher risk of failure and should be advised to use backup contraception during antibiotic treatment and for one week after its completion. If patient-specific risk factors for reduced oral contraceptive efficacy are suspected, backup contraception is recommended.
In cases where antibiotics have been suggested to affect the efficacy of birth control pills, such as for the broad-spectrum antibiotic rifampicin, these cases may be due to an increase in the activity of hepatic enzymes, causing increased breakdown of the pill's active ingredients. Effects on the intestinal flora, which might result in reduced absorption of estrogens in the colon, have also been suggested, but such suggestions have been inconclusive and controversial. Clinicians have recommended that extra contraceptive measures be applied during therapies using antibiotics that are suspected to interact with oral contraceptives. More studies on the possible interactions between antibiotics and birth control pills (oral contraceptives) are required, as well as careful assessment of patient-specific risk factors for potential oral contraceptive pill failure, prior to dismissing the need for backup contraception.
Alcohol
Interactions between alcohol and certain antibiotics may occur and may cause side effects and decreased effectiveness of antibiotic therapy. While moderate alcohol consumption is unlikely to interfere with many common antibiotics, there are specific types of antibiotics with which alcohol consumption may cause serious side effects. Therefore, potential risks of side effects and effectiveness depend on the type of antibiotic administered.
Antibiotics such as metronidazole, tinidazole, cephamandole, latamoxef, cefoperazone, cefmenoxime, and furazolidone cause a disulfiram-like chemical reaction with alcohol by inhibiting its breakdown by acetaldehyde dehydrogenase, which may result in vomiting, nausea, and shortness of breath. In addition, the efficacy of doxycycline and erythromycin succinate may be reduced by alcohol consumption. Other effects of alcohol on antibiotic activity include altered activity of the liver enzymes that break down the antibiotic compound.
Pharmacodynamics
The successful outcome of antimicrobial therapy with antibacterial compounds depends on several factors. These include host defense mechanisms, the location of infection, and the pharmacokinetic and pharmacodynamic properties of the antibacterial. The bactericidal activity of antibacterials may depend on the bacterial growth phase, and it often requires ongoing metabolic activity and division of bacterial cells. These findings are based on laboratory studies, and antibacterials have also been shown to eliminate bacterial infection in clinical settings. Since the activity of antibacterials frequently depends on concentration, in vitro characterization of antibacterial activity commonly includes the determination of the minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) of an antibacterial.
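As a rough illustration of how an MIC is read off a twofold broth-dilution series, consider the following Python sketch; the concentrations and growth readings are hypothetical, and this is not a laboratory protocol.

def minimum_inhibitory_concentration(series):
    """Return the lowest concentration (mg/L) showing no visible growth.

    `series` is a list of (concentration_mg_per_L, growth_observed) pairs.
    """
    inhibiting = [conc for conc, growth in series if not growth]
    return min(inhibiting) if inhibiting else None  # None: MIC above the tested range

# Twofold dilutions of a hypothetical antibacterial against one isolate:
dilution_series = [(0.25, True), (0.5, True), (1.0, True),
                   (2.0, False), (4.0, False), (8.0, False)]
print(minimum_inhibitory_concentration(dilution_series))  # 2.0 mg/L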
To predict clinical outcome, the antimicrobial activity of an antibacterial is usually combined with its pharmacokinetic profile, and several pharmacological parameters are used as markers of drug efficacy.
Combination therapy
In important infectious diseases, including tuberculosis, combination therapy (i.e., the concurrent application of two or more antibiotics) has been used to delay or prevent the emergence of resistance. In acute bacterial infections, antibiotics as part of combination therapy are prescribed for their synergistic effects to improve treatment outcome, as the combined effect of both antibiotics is better than their individual effect. Fosfomycin has the highest number of synergistic combinations among antibiotics and is almost always used as a partner drug. Methicillin-resistant Staphylococcus aureus infections may be treated with a combination therapy of fusidic acid and rifampicin. Antibiotics used in combination may also be antagonistic, and the combined effects of the two antibiotics may be less than if one of the antibiotics were given as a monotherapy. For example, chloramphenicol and tetracyclines are antagonists to penicillins. However, this can vary depending on the species of bacteria. In general, combinations of a bacteriostatic antibiotic and a bactericidal antibiotic are antagonistic.
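Synergy and antagonism between two antibiotics are commonly quantified with the fractional inhibitory concentration (FIC) index from a checkerboard assay; the Python sketch below uses the conventional cut-offs (synergy at FICI ≤ 0.5, antagonism at FICI > 4) with hypothetical MIC values.

def fic_index(mic_a_alone, mic_a_combo, mic_b_alone, mic_b_combo):
    """FICI = MIC_A(combination)/MIC_A(alone) + MIC_B(combination)/MIC_B(alone)."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(fici):
    # Conventional cut-offs: synergy <= 0.5, antagonism > 4, otherwise no interaction.
    if fici <= 0.5:
        return "synergy"
    if fici > 4:
        return "antagonism"
    return "no interaction"

# Hypothetical checkerboard result: each drug's MIC drops eightfold in combination.
fici = fic_index(mic_a_alone=8, mic_a_combo=1, mic_b_alone=4, mic_b_combo=0.5)
print(fici, interpret(fici))  # 0.25 synergy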
In addition to combining one antibiotic with another, antibiotics are sometimes co-administered with resistance-modifying agents. For example, β-lactam antibiotics may be used in combination with β-lactamase inhibitors, such as clavulanic acid or sulbactam, when a patient is infected with a β-lactamase-producing strain of bacteria.
Classes
Antibiotics are commonly classified based on their mechanism of action, chemical structure, or spectrum of activity. Most target bacterial functions or growth processes. Those that target the bacterial cell wall (penicillins and cephalosporins) or the cell membrane (polymyxins), or interfere with essential bacterial enzymes (rifamycins, lipiarmycins, quinolones, and sulfonamides) have bactericidal activities, killing the bacteria. Protein synthesis inhibitors (macrolides, lincosamides, and tetracyclines) are usually bacteriostatic, inhibiting further growth (with the exception of bactericidal aminoglycosides). Further categorization is based on their target specificity. "Narrow-spectrum" antibiotics target specific types of bacteria, such as gram-negative or gram-positive, whereas broad-spectrum antibiotics affect a wide range of bacteria. Following a 40-year break in discovering classes of antibacterial compounds, four new classes of antibiotics were introduced to clinical use in the late 2000s and early 2010s: cyclic lipopeptides (such as daptomycin), glycylcyclines (such as tigecycline), oxazolidinones (such as linezolid), and lipiarmycins (such as fidaxomicin).
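The classification above lends itself to a simple lookup structure; the Python sketch below mirrors some of the classes named in this section and is illustrative rather than exhaustive.

ANTIBIOTIC_CLASSES = {
    # class name:      (mechanism target,      typical activity, per the text above)
    "penicillins":     ("cell wall",           "bactericidal"),
    "cephalosporins":  ("cell wall",           "bactericidal"),
    "polymyxins":      ("cell membrane",       "bactericidal"),
    "quinolones":      ("essential enzymes",   "bactericidal"),
    "sulfonamides":    ("essential enzymes",   "bactericidal"),
    "macrolides":      ("protein synthesis",   "bacteriostatic"),
    "tetracyclines":   ("protein synthesis",   "bacteriostatic"),
    "aminoglycosides": ("protein synthesis",   "bactericidal"),
}

def classes_by_target(target: str):
    """List the classes above that act on a given bacterial target."""
    return [name for name, (tgt, _) in ANTIBIOTIC_CLASSES.items() if tgt == target]

print(classes_by_target("protein synthesis"))  # macrolides, tetracyclines, aminoglycosides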
Production
With advances in medicinal chemistry, most modern antibacterials are semisynthetic modifications of various natural compounds. These include, for example, the beta-lactam antibiotics, which include the penicillins (produced by fungi in the genus Penicillium), the cephalosporins, and the carbapenems. Compounds that are still isolated from living organisms are the aminoglycosides, whereas other antibacterials—for example, the sulfonamides, the quinolones, and the oxazolidinones—are produced solely by chemical synthesis. Many antibacterial compounds are relatively small molecules with a molecular weight of less than 1000 daltons.
Since the first pioneering efforts of Howard Florey and Ernst Chain in 1939, the importance of antibiotics, including antibacterials, to medicine has led to intense research into producing antibacterials at large scales. Following screening of antibacterials against a wide range of bacteria, production of the active compounds is carried out using fermentation, usually in strongly aerobic conditions.
Resistance
Antimicrobial resistance (AMR or AR) is a naturally occurring process that is driven largely by the misuse and overuse of antimicrobials. Yet, at the same time, many people around the world do not have access to essential antimicrobials. The emergence of antibiotic-resistant bacteria is a common phenomenon, mainly caused by such overuse and misuse, and represents a threat to health globally. Each year, nearly 5 million deaths are associated with AMR globally.
Emergence of resistance often reflects evolutionary processes that take place during antibiotic therapy. The antibiotic treatment may select for bacterial strains with physiologically or genetically enhanced capacity to survive high doses of antibiotics. Under certain conditions, it may result in preferential growth of resistant bacteria, while growth of susceptible bacteria is inhibited by the drug. For example, antibacterial selection for strains having previously acquired antibacterial-resistance genes was demonstrated in 1943 by the Luria–Delbrück experiment. Antibiotics such as penicillin and erythromycin, which used to have a high efficacy against many bacterial species and strains, have become less effective, due to the increased resistance of many bacterial strains.
Resistance may take the form of biodegradation of pharmaceuticals, such as sulfamethazine-degrading soil bacteria introduced to sulfamethazine through medicated pig feces.
The survival of bacteria often results from an inheritable resistance, but the growth of resistance to antibacterials also occurs through horizontal gene transfer. Horizontal transfer is more likely to happen in locations of frequent antibiotic use.
Antibacterial resistance may impose a biological cost, thereby reducing fitness of resistant strains, which can limit the spread of antibacterial-resistant bacteria, for example, in the absence of antibacterial compounds. Additional mutations, however, may compensate for this fitness cost and can aid the survival of these bacteria.
Paleontological data show that both antibiotics and antibiotic resistance are ancient: the compounds and the resistance mechanisms that counter them long predate clinical use. Useful antibiotic targets are those for which mutations negatively impact bacterial reproduction or viability.
Several molecular mechanisms of antibacterial resistance exist. Intrinsic antibacterial resistance may be part of the genetic makeup of bacterial strains. For example, an antibiotic target may be absent from the bacterial genome. Acquired resistance results from a mutation in the bacterial chromosome or the acquisition of extra-chromosomal DNA. Antibacterial-producing bacteria have evolved resistance mechanisms that have been shown to be similar to, and may have been transferred to, antibacterial-resistant strains. The spread of antibacterial resistance often occurs through vertical transmission of mutations during growth and by genetic recombination of DNA by horizontal genetic exchange. For instance, antibacterial resistance genes can be exchanged between different bacterial strains or species via plasmids that carry these resistance genes. Plasmids that carry several different resistance genes can confer resistance to multiple antibacterials. Cross-resistance to several antibacterials may also occur when a resistance mechanism encoded by a single gene conveys resistance to more than one antibacterial compound.
Antibacterial-resistant strains and species, sometimes referred to as "superbugs", now contribute to the emergence of diseases that were, for a while, well controlled. For example, emergent bacterial strains causing tuberculosis that are resistant to previously effective antibacterial treatments pose many therapeutic challenges. Every year, nearly half a million new cases of multidrug-resistant tuberculosis (MDR-TB) are estimated to occur worldwide. Another example is NDM-1, a newly identified enzyme conveying bacterial resistance to a broad range of beta-lactam antibacterials. The United Kingdom's Health Protection Agency has stated that "most isolates with NDM-1 enzyme are resistant to all standard intravenous antibiotics for treatment of severe infections." On 26 May 2016, an E. coli "superbug" was identified in the United States resistant to colistin, "the last line of defence" antibiotic.
In recent years, even anaerobic bacteria, historically considered less concerning in terms of resistance, have demonstrated high rates of antibiotic resistance, particularly Bacteroides, for which resistance rates to penicillin have been reported to exceed 90%.
Misuse
Per The ICU Book, "The first rule of antibiotics is to try not to use them, and the second rule is try not to use too many of them." Inappropriate antibiotic treatment and overuse of antibiotics have contributed to the emergence of antibiotic-resistant bacteria. However, potential harm from antibiotics extends beyond selection of antimicrobial resistance, and their overuse is associated with adverse effects for patients themselves, seen most clearly in critically ill patients in intensive care units. Self-prescribing of antibiotics is an example of misuse. Many antibiotics are frequently prescribed to treat symptoms or diseases that do not respond to antibiotics or that are likely to resolve without treatment. Also, incorrect or suboptimal antibiotics are prescribed for certain bacterial infections. The overuse of antibiotics, like penicillin and erythromycin, has been associated with emerging antibiotic resistance since the 1950s. Widespread usage of antibiotics in hospitals has also been associated with increases in bacterial strains and species that no longer respond to treatment with the most common antibiotics.
Common forms of antibiotic misuse include excessive use of prophylactic antibiotics in travelers and failure of medical professionals to prescribe the correct dosage of antibiotics on the basis of the patient's weight and history of prior use. Other forms of misuse include failure to take the entire prescribed course of the antibiotic, incorrect dosage and administration, or failure to rest for sufficient recovery. A common example of inappropriate antibiotic treatment is the prescription of antibiotics to treat viral infections, such as the common cold, against which they are ineffective. One study on respiratory tract infections found that "physicians were more likely to prescribe antibiotics to patients who appeared to expect them". Multifactorial interventions aimed at both physicians and patients can reduce inappropriate prescription of antibiotics. The lack of rapid point-of-care diagnostic tests, particularly in resource-limited settings, is considered one of the drivers of antibiotic misuse.
Several organizations concerned with antimicrobial resistance are lobbying to eliminate the unnecessary use of antibiotics. The issues of misuse and overuse of antibiotics have been addressed by the formation of the US Interagency Task Force on Antimicrobial Resistance. This task force aims to actively address antimicrobial resistance, and is coordinated by the US Centers for Disease Control and Prevention, the Food and Drug Administration (FDA), and the National Institutes of Health, as well as other US agencies. A non-governmental organization campaign group is Keep Antibiotics Working. In France, an "Antibiotics are not automatic" government campaign started in 2002 and led to a marked reduction of unnecessary antibiotic prescriptions, especially in children.
The emergence of antibiotic resistance has prompted restrictions on their use in the UK in 1970 (Swann report 1969), and the European Union has banned the use of antibiotics as growth-promotional agents since 2003. Moreover, several organizations (including the World Health Organization, the National Academy of Sciences, and the U.S. Food and Drug Administration) have advocated restricting the amount of antibiotic use in food animal production. However, commonly there are delays in regulatory and legislative actions to limit the use of antibiotics, attributable partly to resistance against such regulation by industries using or selling antibiotics, and to the time required for research to test causal links between their use and resistance to them. Two federal bills (S.742 and H.R. 2562) aimed at phasing out nontherapeutic use of antibiotics in US food animals were proposed, but have not passed. These bills were endorsed by public health and medical organizations, including the American Holistic Nurses' Association, the American Medical Association, and the American Public Health Association.
Despite pledges by food companies and restaurants to reduce or eliminate meat that comes from animals treated with antibiotics, the purchase of antibiotics for use on farm animals has been increasing every year.
There has been extensive use of antibiotics in animal husbandry. In the United States, the question of emergence of antibiotic-resistant bacterial strains due to use of antibiotics in livestock was raised by the US Food and Drug Administration (FDA) in 1977. In March 2012, the United States District Court for the Southern District of New York, ruling in an action brought by the Natural Resources Defense Council and others, ordered the FDA to revoke approvals for the use of antibiotics in livestock, which violated FDA regulations.
Studies have shown that common misconceptions about the effectiveness and necessity of antibiotics to treat common mild illnesses contribute to their overuse.
Other forms of antibiotic-associated harm include anaphylaxis, drug toxicity (most notably kidney and liver damage), and super-infections with resistant organisms. Antibiotics are also known to affect mitochondrial function, and this may contribute to the bioenergetic failure of immune cells seen in sepsis. They also alter the microbiome of the gut, lungs, and skin, which may be associated with adverse effects such as Clostridioides difficile-associated diarrhoea. Whilst antibiotics can clearly be lifesaving in patients with bacterial infections, their overuse, especially in patients where infections are hard to diagnose, can lead to harm via multiple mechanisms.
History
Before the early 20th century, treatments for infections were based primarily on medicinal folklore. Mixtures with antimicrobial properties that were used in treatments of infections were described over 2,000 years ago. Many ancient cultures, including the ancient Egyptians and ancient Greeks, used specially selected mold and plant materials to treat infections. Nubian mummies studied in the 1990s were found to contain significant levels of tetracycline. The beer brewed at that time was conjectured to have been the source.
The use of antibiotics in modern medicine began with the discovery of synthetic antibiotics derived from dyes. Various essential oils have been shown to have antimicrobial properties, and the plants from which these oils are derived may also be used as niche antimicrobial agents.
Synthetic antibiotics derived from dyes
Synthetic antibiotic chemotherapy as a science and development of antibacterials began in Germany with Paul Ehrlich in the late 1880s. Ehrlich noted certain dyes would colour human, animal, or bacterial cells, whereas others did not. He then proposed the idea that it might be possible to create chemicals that would act as a selective drug that would bind to and kill bacteria without harming the human host. After screening hundreds of dyes against various organisms, in 1907, he discovered a medicinally useful drug, the first synthetic antibacterial organoarsenic compound salvarsan, now called arsphenamine.
This heralded the era of antibacterial treatment that was begun with the discovery of a series of arsenic-derived synthetic antibiotics by both Alfred Bertheim and Ehrlich in 1907. Ehrlich and Bertheim had experimented with various chemicals derived from dyes to treat trypanosomiasis in mice and spirochaeta infection in rabbits. While their early compounds were too toxic, Ehrlich and Sahachiro Hata, a Japanese bacteriologist working with Ehrlich in the quest for a drug to treat syphilis, achieved success with the 606th compound in their series of experiments. In 1910, Ehrlich and Hata announced their discovery, which they called drug "606", at the Congress for Internal Medicine at Wiesbaden. The Hoechst company began to market the compound toward the end of 1910 under the name Salvarsan, now known as arsphenamine. The drug was used to treat syphilis in the first half of the 20th century. In 1908, Ehrlich received the Nobel Prize in Physiology or Medicine for his contributions to immunology. Hata was nominated for the Nobel Prize in Chemistry in 1911 and for the Nobel Prize in Physiology or Medicine in 1912 and 1913.
The first sulfonamide and the first systemically active antibacterial drug, Prontosil, was developed by a research team led by Gerhard Domagk in 1932 or 1933 at the Bayer Laboratories of the IG Farben conglomerate in Germany, for which Domagk received the 1939 Nobel Prize in Physiology or Medicine. Sulfanilamide, the active drug of Prontosil, was not patentable as it had already been in use in the dye industry for some years. Prontosil had a relatively broad effect against Gram-positive cocci, but not against enterobacteria. Research was stimulated apace by its success. The discovery and development of this sulfonamide drug opened the era of antibacterials.
Penicillin and other natural antibiotics
Observations about the growth of some microorganisms inhibiting the growth of other microorganisms have been reported since the late 19th century. These observations of antibiosis between microorganisms led to the discovery of natural antibacterials. Louis Pasteur observed, "if we could intervene in the antagonism observed between some bacteria, it would offer perhaps the greatest hopes for therapeutics".
In 1874, physician Sir William Roberts noted that cultures of the mould Penicillium glaucum that is used in the making of some types of blue cheese did not display bacterial contamination.
In 1895, the Italian physician Vincenzo Tiberio published a paper on the antibacterial power of some extracts of mold.
In 1897, doctoral student Ernest Duchesne submitted a dissertation, "Contribution to the study of vital competition in micro-organisms: antagonism between moulds and microbes", the first known scholarly work to consider the therapeutic capabilities of moulds resulting from their anti-microbial activity. In his thesis, Duchesne proposed that bacteria and moulds engage in a perpetual battle for survival. Duchesne observed that E. coli was eliminated by Penicillium glaucum when they were both grown in the same culture. He also observed that when he inoculated laboratory animals with lethal doses of typhoid bacilli together with Penicillium glaucum, the animals did not contract typhoid. Duchesne's army service after getting his degree prevented him from doing any further research. Duchesne died of tuberculosis, a disease now treated by antibiotics.
In 1928, Sir Alexander Fleming postulated the existence of penicillin, a molecule produced by certain moulds that kills or stops the growth of certain kinds of bacteria. Fleming was working on a culture of disease-causing bacteria when he noticed the spores of a green mold, Penicillium rubens, in one of his culture plates. He observed that the presence of the mould killed or prevented the growth of the bacteria. Fleming postulated that the mould must secrete an antibacterial substance, which he named penicillin in 1928. Fleming believed that its antibacterial properties could be exploited for chemotherapy. He initially characterised some of its biological properties, and attempted to use a crude preparation to treat some infections, but he was unable to pursue its further development without the aid of trained chemists.
Ernst Chain, Howard Florey and Edward Abraham succeeded in purifying the first penicillin, penicillin G, in 1942, but it did not become widely available outside the Allied military before 1945. Later, Norman Heatley developed the back extraction technique for efficiently purifying penicillin in bulk. The chemical structure of penicillin was first proposed by Abraham in 1942 and then later confirmed by Dorothy Crowfoot Hodgkin in 1945. Purified penicillin displayed potent antibacterial activity against a wide range of bacteria and had low toxicity in humans. Furthermore, its activity was not inhibited by biological constituents such as pus, unlike the synthetic sulfonamides discussed above. The development of penicillin led to renewed interest in the search for antibiotic compounds with similar efficacy and safety. For the successful development of penicillin as a therapeutic drug, which Fleming had accidentally discovered but could not develop himself, Chain and Florey shared the 1945 Nobel Prize in Medicine with Fleming.
Florey credited René Dubos with pioneering the approach of deliberately and systematically searching for antibacterial compounds, which had led to the discovery of gramicidin and had revived Florey's research in penicillin. In 1939, coinciding with the start of World War II, Dubos had reported the discovery of the first naturally derived antibiotic, tyrothricin, a compound of 20% gramicidin and 80% tyrocidine, from Bacillus brevis. It was one of the first commercially manufactured antibiotics and was very effective in treating wounds and ulcers during World War II. Gramicidin, however, could not be used systemically because of toxicity. Tyrocidine also proved too toxic for systemic usage. Research results obtained during that period were not shared between the Axis and the Allied powers during World War II, and access remained limited during the Cold War.
Late 20th century
During the mid-20th century, the number of new antibiotic substances introduced for medical use increased significantly. From 1935 to 1968, 12 new classes were launched. However, after this, the number of new classes dropped markedly, with only two new classes introduced between 1969 and 2003.
Antibiotic pipeline
Both the WHO and the Infectious Disease Society of America report that the weak antibiotic pipeline does not match bacteria's increasing ability to develop resistance. The Infectious Disease Society of America report noted that the number of new antibiotics approved for marketing per year had been declining and identified seven antibiotics against the Gram-negative bacilli then in phase 2 or phase 3 clinical trials. However, these drugs did not address the entire spectrum of resistance of Gram-negative bacilli. According to the WHO, fifty-one new therapeutic entities (antibiotics, including combinations) were in phase 1–3 clinical trials as of May 2017. Antibiotics targeting multidrug-resistant Gram-positive pathogens remain a high priority.
A few antibiotics have received marketing authorization in the last seven years. The cephalosporin ceftaroline and the lipoglycopeptides oritavancin and telavancin have been approved for the treatment of acute bacterial skin and skin structure infection and community-acquired bacterial pneumonia. The lipoglycopeptide dalbavancin and the oxazolidinone tedizolid have also been approved for the treatment of acute bacterial skin and skin structure infection. The first in a new class of narrow-spectrum macrocyclic antibiotics, fidaxomicin, has been approved for the treatment of C. difficile colitis. Newly approved cephalosporin–β-lactamase inhibitor combinations include ceftazidime-avibactam and ceftolozane-tazobactam for complicated urinary tract infection and intra-abdominal infection.
Possible improvements include clarification of clinical trial regulations by the FDA. Furthermore, appropriate economic incentives could persuade pharmaceutical companies to invest in this endeavor. In the US, the Antibiotic Development to Advance Patient Treatment (ADAPT) Act was introduced with the aim of fast-tracking the drug development of antibiotics to combat the growing threat of 'superbugs'. Under this Act, the FDA can approve antibiotics and antifungals treating life-threatening infections based on smaller clinical trials. The CDC will monitor the use of antibiotics and the emerging resistance, and publish the data. The FDA antibiotics labeling process, 'Susceptibility Test Interpretive Criteria for Microbial Organisms' or 'breakpoints', will provide accurate data to healthcare professionals. According to Allan Coukell, senior director for health programs at The Pew Charitable Trusts, "By allowing drug developers to rely on smaller datasets, and clarifying FDA's authority to tolerate a higher level of uncertainty for these drugs when making a risk/benefit calculation, ADAPT would make the clinical trials more feasible."
Replenishing the antibiotic pipeline and developing other new therapies
Because antibiotic-resistant bacterial strains continue to emerge and spread, there is a constant need to develop new antibacterial treatments. Current strategies include traditional chemistry-based approaches such as natural product-based drug discovery, newer chemistry-based approaches such as drug design, traditional biology-based approaches such as immunoglobulin therapy, and experimental biology-based approaches such as phage therapy, fecal microbiota transplants, antisense RNA-based treatments, and CRISPR-Cas9-based treatments.
Natural product-based antibiotic discovery
Most of the antibiotics in current use are natural products or natural product derivatives, and bacterial, fungal, plant and animal extracts are being screened in the search for new antibiotics. Organisms may be selected for testing based on ecological, ethnomedical, genomic, or historical rationales. Medicinal plants, for example, are screened on the basis that they are used by traditional healers to prevent or cure infection and may therefore contain antibacterial compounds. Also, soil bacteria are screened on the basis that, historically, they have been a very rich source of antibiotics (with 70 to 80% of antibiotics in current use derived from the actinomycetes).
In addition to screening natural products for direct antibacterial activity, they are sometimes screened for the ability to suppress antibiotic resistance and antibiotic tolerance. For example, some secondary metabolites inhibit drug efflux pumps, thereby increasing the concentration of antibiotic able to reach its cellular target and decreasing bacterial resistance to the antibiotic. Natural products known to inhibit bacterial efflux pumps include the alkaloid lysergol, the carotenoids capsanthin and capsorubin, and the flavonoids rotenone and chrysin. Other natural products, this time primary metabolites rather than secondary metabolites, have been shown to eradicate antibiotic tolerance. For example, glucose, mannitol, and fructose reduce antibiotic tolerance in Escherichia coli and Staphylococcus aureus, rendering them more susceptible to killing by aminoglycoside antibiotics.
Natural products may be screened for the ability to suppress bacterial virulence factors too. Virulence factors are molecules, cellular structures and regulatory systems that enable bacteria to evade the body's immune defenses (e.g. urease, staphyloxanthin), move towards, attach to, and/or invade human cells (e.g. type IV pili, adhesins, internalins), coordinate the activation of virulence genes (e.g. quorum sensing), and cause disease (e.g. exotoxins). Examples of natural products with antivirulence activity include the flavonoid epigallocatechin gallate (which inhibits listeriolysin O), the quinone tetrangomycin (which inhibits staphyloxanthin), and the sesquiterpene zerumbone (which inhibits Acinetobacter baumannii motility).
Immunoglobulin therapy
Antibodies (anti-tetanus immunoglobulin) have been used in the treatment and prevention of tetanus since the 1910s, and this approach continues to be a useful way of controlling bacterial diseases. The monoclonal antibody bezlotoxumab, for example, has been approved by the US FDA and EMA for recurrent Clostridioides difficile infection, and other monoclonal antibodies are in development (e.g. AR-301 for the adjunctive treatment of S. aureus ventilator-associated pneumonia). Antibody treatments act by binding to and neutralizing bacterial exotoxins and other virulence factors.
Phage therapy
Phage therapy is under investigation as a method of treating antibiotic-resistant strains of bacteria. Phage therapy involves infecting bacterial pathogens with viruses. Bacteriophages have extremely specific host ranges, limited to particular bacteria; thus, unlike antibiotics, they do not disturb the host organism's intestinal microbiota. Bacteriophages, also known as phages, infect and kill bacteria primarily during lytic cycles. Phages insert their DNA into the bacterium, where it is transcribed and used to make new phages, after which the cell will lyse, releasing new phage that are able to infect and destroy further bacteria of the same strain. The high specificity of phage protects "good" bacteria from destruction.
Some disadvantages to the use of bacteriophages also exist, however. Bacteriophages may harbour virulence factors or toxic genes in their genomes and, prior to use, it may be prudent to identify genes with similarity to known virulence factors or toxins by genomic sequencing. In addition, the oral and IV administration of phages for the eradication of bacterial infections poses a much higher safety risk than topical application. Also, there is the additional concern of uncertain immune responses to these large antigenic cocktails.
There are considerable regulatory hurdles that must be cleared for such therapies. Despite numerous challenges, the use of bacteriophages as a replacement for antimicrobial agents against MDR pathogens that no longer respond to conventional antibiotics remains an attractive option.
Fecal microbiota transplants
Fecal microbiota transplants involve transferring the full intestinal microbiota from a healthy human donor (in the form of stool) to patients with C. difficile infection. Although this procedure has not been officially approved by the US FDA, its use is permitted under some conditions in patients with antibiotic-resistant C. difficile infection. Cure rates are around 90%, and work is underway to develop stool banks, standardized products, and methods of oral delivery. Fecal microbiota transplantation has also been used more recently for inflammatory bowel diseases.
Antisense RNA-based treatments
Antisense RNA-based treatment (also known as gene silencing therapy) involves (a) identifying bacterial genes that encode essential proteins (e.g. the Pseudomonas aeruginosa genes acpP, lpxC, and rpsJ), (b) synthesizing single-stranded RNA that is complementary to the mRNA encoding these essential proteins, and (c) delivering the single-stranded RNA to the infection site using cell-penetrating peptides or liposomes. The antisense RNA then hybridizes with the bacterial mRNA and blocks its translation into the essential protein. Antisense RNA-based treatment has been shown to be effective in in vivo models of P. aeruginosa pneumonia.
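Step (b) above amounts to taking the reverse complement of the target mRNA; the Python sketch below illustrates this with a made-up sequence fragment, not a real acpP, lpxC, or rpsJ transcript.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antisense(mrna: str) -> str:
    """Reverse complement of an mRNA sequence, i.e. the antisense strand written 5'->3'."""
    return "".join(COMPLEMENT[base] for base in reversed(mrna))

target_mrna = "AUGGCUAAAGUUCGC"   # hypothetical fragment of an essential gene's mRNA
print(antisense(target_mrna))      # GCGAACUUUAGCCAU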
In addition to silencing essential bacterial genes, antisense RNA can be used to silence bacterial genes responsible for antibiotic resistance. For example, antisense RNA has been developed that silences the S. aureus mecA gene (the gene that encodes modified penicillin-binding protein 2a and renders S. aureus strains methicillin-resistant). Antisense RNA targeting mecA mRNA has been shown to restore the susceptibility of methicillin-resistant staphylococci to oxacillin in both in vitro and in vivo studies.
CRISPR-Cas9-based treatments
In the early 2000s, a system was discovered that enables bacteria to defend themselves against invading viruses. The system, known as CRISPR-Cas9, consists of (a) an enzyme that destroys DNA (the nuclease Cas9) and (b) the DNA sequences of previously encountered viral invaders (CRISPR). These viral DNA sequences enable the nuclease to target foreign (viral) rather than self (bacterial) DNA.
Although the function of CRISPR-Cas9 in nature is to protect bacteria, the DNA sequences in the CRISPR component of the system can be modified so that the Cas9 nuclease targets bacterial resistance genes or bacterial virulence genes instead of viral genes. The modified CRISPR-Cas9 system can then be administered to bacterial pathogens using plasmids or bacteriophages. This approach has successfully been used to silence antibiotic resistance and reduce the virulence of enterohemorrhagic E. coli in an in vivo model of infection.
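The retargeting idea can be sketched as choosing 20-nucleotide spacers from a resistance gene that are immediately followed by the "NGG" protospacer-adjacent motif (PAM) that Cas9 requires on target DNA. In the Python sketch below, the gene sequence is a made-up fragment, not a real resistance gene.

def candidate_spacers(gene: str, spacer_len: int = 20):
    """Yield (position, spacer) pairs whose downstream 3 bases form an NGG PAM."""
    for i in range(len(gene) - spacer_len - 2):
        pam = gene[i + spacer_len: i + spacer_len + 3]
        if pam[1:] == "GG":                      # N-G-G: any base, then two Gs
            yield i, gene[i: i + spacer_len]

hypothetical_resistance_gene = "ATGCCGTTAGACCTGAAATTCGGATCCTTGGCACGTAGGCTAAGGTCCAATGG"
for pos, spacer in candidate_spacers(hypothetical_resistance_gene):
    print(pos, spacer)   # e.g. 0 ATGCCGTTAGACCTGAAATT (followed by the PAM CGG)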
Reducing the selection pressure for antibiotic resistance
In addition to developing new antibacterial treatments, it is important to reduce the selection pressure for the emergence and spread of antimicrobial resistance (AMR), such as antibiotic resistance. Strategies to accomplish this include well-established infection control measures such as infrastructure improvement (e.g. less crowded housing), better sanitation (e.g. safe drinking water and food), better use of vaccines and vaccine development, other approaches such as antibiotic stewardship, and experimental approaches such as the use of prebiotics and probiotics to prevent infection. Antibiotic cycling, in which clinicians alternate antibiotics to treat microbial diseases, has been proposed, but recent studies have found such strategies to be ineffective against antibiotic resistance.
Vaccines
Vaccines are an essential part of the response to reduce AMR as they prevent infections, reduce the use and overuse of antimicrobials, and slow the emergence and spread of drug-resistant pathogens. Vaccination either excites or reinforces the immune competence of a host to ward off infection, leading to the activation of macrophages, the production of antibodies, inflammation, and other classic immune reactions. Antibacterial vaccines have been responsible for a drastic reduction in global bacterial diseases.
|
;Anti-infective agents
|
https://en.wikipedia.org/wiki/Allotropy
|
Allotropy or allotropism is the property of some chemical elements to exist in two or more different forms, in the same physical state, known as allotropes of the element. Allotropes are different structural modifications of an element: the atoms of the element are bonded together in different manners.
For example, the allotropes of carbon include diamond (the carbon atoms are bonded together to form a cubic lattice of tetrahedra), graphite (the carbon atoms are bonded together in sheets of a hexagonal lattice), graphene (single sheets of graphite), and fullerenes (the carbon atoms are bonded together in spherical, tubular, or ellipsoidal formations).
The term allotropy is used for elements only, not for compounds. The more general term, used for any compound, is polymorphism, although its use is usually restricted to solid materials such as crystals. Allotropy refers only to different forms of an element within the same physical phase (the state of matter, such as a solid, liquid or gas). The differences between these states of matter would not alone constitute examples of allotropy. Allotropes of chemical elements are frequently referred to as polymorphs or as phases of the element.
For some elements, allotropes have different molecular formulae or different crystalline structures, as well as a difference in physical phase; for example, two allotropes of oxygen (dioxygen, O2, and ozone, O3) can both exist in the solid, liquid and gaseous states. Other elements do not maintain distinct allotropes in different physical phases; for example, phosphorus has numerous solid allotropes, which all revert to the same P4 form when melted to the liquid state.
History
The concept of allotropy was originally proposed in 1840 by the Swedish scientist Baron Jöns Jakob Berzelius (1779–1848). The term is derived from the Greek allos ("other") and tropos ("manner, form"). After the acceptance of Avogadro's hypothesis in 1860, it was understood that elements could exist as polyatomic molecules, and two allotropes of oxygen were recognized as O2 and O3. In the early 20th century, it was recognized that other cases such as carbon were due to differences in crystal structure.
By 1912, Ostwald noted that the allotropy of elements is just a special case of the phenomenon of polymorphism known for compounds, and proposed that the terms allotrope and allotropy be abandoned and replaced by polymorph and polymorphism. Although many other chemists have repeated this advice, IUPAC and most chemistry texts still favour the usage of allotrope and allotropy for elements only.
Differences in properties of an element's allotropes
Allotropes are different structural forms of the same element and can exhibit quite different physical properties and chemical behaviours. The change between allotropic forms is triggered by the same forces that affect other structures, i.e., pressure, light, and temperature. Therefore, the stability of particular allotropes depends on particular conditions. For instance, iron changes from a body-centered cubic structure (ferrite) to a face-centered cubic structure (austenite) above 912 °C, and tin undergoes a modification known as tin pest from a metallic form to a semimetallic form below 13.2 °C (55.8 °F). As an example of allotropes having different chemical behaviour, ozone (O3) is a much stronger oxidizing agent than dioxygen (O2).
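As a toy illustration of temperature-dependent allotropy, the Python sketch below maps a temperature to the corresponding iron allotrope at ambient pressure, using the transition temperatures quoted in this article (912 °C and 1,394 °C); real phase behaviour also depends on pressure and composition.

def iron_allotrope(temp_c: float) -> str:
    """Iron allotrope at ambient pressure, valid below the melting point (~1,538 °C)."""
    if temp_c < 912:
        return "alpha iron (ferrite, body-centered cubic)"
    if temp_c < 1394:
        return "gamma iron (austenite, face-centered cubic)"
    return "delta iron (body-centered cubic)"

print(iron_allotrope(25))     # ferrite
print(iron_allotrope(1000))   # austenite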
List of allotropes
Typically, elements capable of variable coordination number and/or oxidation states tend to exhibit greater numbers of allotropic forms. Another contributing factor is the ability of an element to catenate.
Examples of allotropes include:
Non-metals
Metalloids
Metals
Among the metallic elements that occur in nature in significant quantities (56 up to U, without Tc and Pm), almost half (27) are allotropic at ambient pressure: Li, Be, Na, Ca, Ti, Mn, Fe, Co, Sr, Y, Zr, Sn, La, Ce, Pr, Nd, Sm, Gd, Tb, Dy, Yb, Hf, Tl, Th, Pa and U. Some phase transitions between allotropic forms of technologically relevant metals are those of Ti at 882 °C, Fe at 912 °C and 1,394 °C, Co at 422 °C, Zr at 863 °C, Sn at 13 °C and U at 668 °C and 776 °C.
Lanthanides and actinides
Cerium, samarium, dysprosium and ytterbium have three allotropes.
Praseodymium, neodymium, gadolinium and terbium have two allotropes.
Plutonium has six distinct solid allotropes under "normal" pressures. Their densities vary within a ratio of some 4:3, which vastly complicates all kinds of work with the metal (particularly casting, machining, and storage). A seventh plutonium allotrope exists at very high pressures. The transuranium metals Np, Am, and Cm are also allotropic.
Promethium, americium, berkelium and californium have three allotropes each.
Nanoallotropes
In 2017, the concept of nanoallotropy was proposed. Nanoallotropes, or allotropes of nanomaterials, are nanoporous materials that have the same chemical composition (e.g., Au), but differ in their architecture at the nanoscale (that is, on a scale 10 to 100 times the dimensions of individual atoms). Such nanoallotropes may help create ultra-small electronic devices and find other industrial applications. The different nanoscale architectures translate into different properties, as was demonstrated for surface-enhanced Raman scattering performed on several different nanoallotropes of gold. A two-step method for generating nanoallotropes was also created.
See also
Isomer
Polymorphism (materials science)
External links
Allotropes – Chemistry Encyclopedia
|
;Chemistry;Inorganic chemistry;Physical chemistry
|
https://en.wikipedia.org/wiki/Archimedes
|
Archimedes of Syracuse was an Ancient Greek mathematician, physicist, engineer, astronomer, and inventor from the ancient city of Syracuse in Sicily. Although few details of his life are known, based on his surviving work, he is considered one of the leading scientists in classical antiquity, and one of the greatest mathematicians of all time. Archimedes anticipated modern calculus and analysis by applying the concept of infinitesimals and the method of exhaustion to derive and rigorously prove many geometrical theorems, including the area of a circle, the surface area and volume of a sphere, the area of an ellipse, the area under a parabola, the volume of a segment of a paraboloid of revolution, the volume of a segment of a hyperboloid of revolution, and the area of a spiral.
Archimedes' other mathematical achievements include deriving an approximation of pi (), defining and investigating the Archimedean spiral, and devising a system using exponentiation for expressing very large numbers. He was also one of the first to apply mathematics to physical phenomena, working on statics and hydrostatics. Archimedes' achievements in this area include a proof of the law of the lever, the widespread use of the concept of center of gravity, and the enunciation of the law of buoyancy known as Archimedes' principle. In astronomy, he made measurements of the apparent diameter of the Sun and the size of the universe. He is also said to have built a planetarium device that demonstrated the movements of the known celestial bodies, and may have been a precursor to the Antikythera mechanism. He is also credited with designing innovative machines, such as his screw pump, compound pulleys, and defensive war machines to protect his native Syracuse from invasion.
Archimedes died during the siege of Syracuse, when he was killed by a Roman soldier despite orders that he should not be harmed. Cicero describes visiting Archimedes' tomb, which was surmounted by a sphere and a cylinder that Archimedes requested be placed there to represent his most valued mathematical discovery.
Unlike his inventions, Archimedes' mathematical writings were little known in antiquity. Alexandrian mathematicians read and quoted him, but the first comprehensive compilation was not made until the 6th century AD, by Isidore of Miletus in Byzantine Constantinople, while Eutocius' commentaries on Archimedes' works in the same century opened them to wider readership for the first time. In the Middle Ages, Archimedes' work was translated into Arabic in the 9th century and then into Latin in the 12th century, and was an influential source of ideas for scientists during the Renaissance and in the Scientific Revolution. The discovery in 1906 of previously lost works by Archimedes in the Archimedes Palimpsest has also provided new insights into how he obtained mathematical results.
Biography
The details of Archimedes' life are obscure; a biography of Archimedes mentioned by Eutocius was allegedly written by his friend Heraclides Lembus, but this work has been lost, and modern scholarship is doubtful that it was written by Heraclides to begin with.
Based on a statement by the Byzantine Greek scholar John Tzetzes that Archimedes lived for 75 years before his death in 212 BC, Archimedes is estimated to have been born c. 287 BC in the seaport city of Syracuse, Sicily, at that time a self-governing colony in Magna Graecia. In the Sand-Reckoner, Archimedes gives his father's name as Phidias, an astronomer about whom nothing else is known; Plutarch wrote in his Parallel Lives that Archimedes was related to King Hiero II, the ruler of Syracuse, although Cicero and Silius Italicus suggest he was of humble origin. It is also unknown whether he ever married or had children, or if he ever visited Alexandria, Egypt, during his youth; though his surviving written works, addressed to Dositheus of Pelusium, a student of the Alexandrian astronomer Conon of Samos, and to the head librarian Eratosthenes of Cyrene, suggest that he maintained collegial relations with scholars based there. In the preface to On Spirals addressed to Dositheus, Archimedes says that "many years have elapsed since Conon's death." Conon of Samos lived c. 280–220 BC, suggesting that Archimedes may have been an older man when writing some of his works.
Golden wreath
Another problem Archimedes is credited with solving in the service of Hiero II is the "wreath problem." According to Vitruvius, writing about two centuries after Archimedes' death, King Hiero II of Syracuse had commissioned a golden wreath for a temple to the immortal gods, and had supplied pure gold to be used by the goldsmith. However, the king had begun to suspect that the goldsmith had substituted some cheaper silver and kept some of the pure gold for himself, and, unable to make the smith confess, asked Archimedes to investigate. Later, while stepping into a bath, Archimedes allegedly noticed that the level of the water in the tub rose further the lower he sank and, realizing that this effect could be used to determine the golden crown's volume, was so excited that he took to the streets naked, having forgotten to dress, crying "Eureka!" ("I have found [it]!"). According to Vitruvius, Archimedes then took a lump of gold and a lump of silver that were each equal in weight to the wreath, and, placing each in the bathtub, showed that the wreath displaced more water than the gold and less than the silver, demonstrating that the wreath was gold mixed with silver.
A different account is given in the Carmen de Ponderibus, an anonymous 5th century Latin didactic poem on weights and measures once attributed to the grammarian Priscian. In this poem, the lumps of gold and silver were placed on the scales of a balance, and then the entire apparatus was immersed in water; the difference in density between the gold and the silver, or between the gold and the crown, causes the scale to tip accordingly. Unlike the more famous bathtub account given by Vitruvius, this poetic account uses the hydrostatics principle now known as Archimedes' principle that is found in his treatise On Floating Bodies, where a body immersed in a fluid experiences a buoyant force equal to the weight of the fluid it displaces. Galileo Galilei, who invented a hydrostatic balance in 1586 inspired by Archimedes' work, considered it "probable that this method is the same that Archimedes followed, since, besides being very accurate, it is based on demonstrations found by Archimedes himself."
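The hydrostatic reasoning can be made concrete with a short computation: by Archimedes' principle, the volume of water an immersed object displaces equals the object's own volume, so its density follows from mass divided by displacement. In the Python sketch below, the wreath's mass and displacement are invented for illustration; the gold and silver densities are approximate modern values.

GOLD_DENSITY = 19.3    # g/cm^3, approximate modern value
SILVER_DENSITY = 10.5  # g/cm^3, approximate modern value

def density(mass_g: float, displaced_water_cm3: float) -> float:
    """Density from mass and displaced water volume (which equals the object's volume)."""
    return mass_g / displaced_water_cm3

# Hypothetical 1,000 g wreath displacing 60 cm^3 of water:
wreath = density(1000, 60)   # ~16.7 g/cm^3, between silver and gold
print("adulterated" if wreath < GOLD_DENSITY else "consistent with pure gold")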
Launching the Syracusia
A large part of Archimedes' work in engineering probably arose from fulfilling the needs of his home city of Syracuse. Athenaeus of Naucratis in his Deipnosophistae quotes a certain Moschion for a description of how King Hiero II commissioned the design of a huge ship, the Syracusia, which is said to have been the largest ship built in classical antiquity and, according to Moschion's account, was launched by Archimedes. Plutarch tells a slightly different account, relating that Archimedes boasted to Hiero that he was able to move any large weight, at which point Hiero challenged him to move a ship. These accounts contain many fantastic details that are historically implausible, and their authors provide conflicting reports of how the task was accomplished: Plutarch states that Archimedes constructed a block-and-tackle pulley system, while Hero of Alexandria attributed the same boast to Archimedes' invention of the baroulkos, a kind of windlass. Pappus of Alexandria attributed this feat, instead, to Archimedes' use of mechanical advantage, the principle of leverage to lift objects that would otherwise have been too heavy to move, attributing to him the oft-quoted remark: "Give me a place to stand on, and I will move the Earth."
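The principle behind the quoted boast is the law of the lever, which Archimedes proved in On the Equilibrium of Planes: at balance, force times distance from the fulcrum is equal on both sides (F1 d1 = F2 d2). A minimal Python sketch with illustrative numbers:

def balancing_force(load_n: float, load_arm_m: float, effort_arm_m: float) -> float:
    """Force needed at `effort_arm_m` from the fulcrum to balance `load_n` at `load_arm_m`."""
    return load_n * load_arm_m / effort_arm_m

# A 10,000 N load 0.5 m from the fulcrum, with effort applied 50 m away:
print(balancing_force(10_000, 0.5, 50))   # 100.0 N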
Athenaeus, likely garbling the details of Hero's account of the baroulkos, also mentions that Archimedes used a "screw" in order to remove any potential water leaking through the hull of the Syracusia. Although this device is sometimes referred to as Archimedes' screw, it likely predates him by a significant amount, and none of his closest contemporaries who describe its use (Philo of Byzantium, Strabo, and Vitruvius) credit him with its use.
War machines
The greatest reputation Archimedes earned during antiquity was for the defense of his city from the Romans during the Siege of Syracuse. According to Plutarch, Archimedes had constructed war machines for Hiero II, but had never been given an opportunity to use them during Hiero's lifetime. In 214 BC, however, during the Second Punic War, when Syracuse switched allegiances from Rome to Carthage and the Roman army under Marcus Claudius Marcellus attempted to take the city, Archimedes allegedly personally oversaw the use of these war machines in the defense of the city, greatly delaying the Romans, who were only able to capture the city after a long siege. Three different historians, Plutarch, Livy, and Polybius, provide testimony about these war machines, describing improved catapults and cranes that dropped heavy pieces of lead on the Roman ships or that used an iron claw to lift them out of the water before dropping them back in so that they sank.
A much more improbable account, not found in any of the three earliest accounts (Plutarch, Polybius, or Livy) describes how Archimedes used "burning mirrors" to focus the sun's rays onto the attacking Roman ships, setting them on fire. The earliest account to mention ships being set on fire, by the 2nd century CE satirist Lucian of Samosata, does not mention mirrors, and only says the ships were set on fire by artificial means, which may imply that burning projectiles were used. The first author to mention mirrors is Galen, writing later in the same century. Nearly four hundred years after Lucian and Galen, Anthemius, despite skepticism, tried to reconstruct Archimedes' hypothetical reflector geometry. The purported device, sometimes called "Archimedes' heat ray", has been the subject of an ongoing debate about its credibility since the Renaissance. René Descartes rejected it as false, while modern researchers have attempted to recreate the effect using only the means that would have been available to Archimedes, with mixed results.
Death
There are several divergent accounts of Archimedes' death during the sack of Syracuse after it fell to the Romans. The oldest account, from Livy, says that, while drawing figures in the dust, Archimedes was killed by a Roman soldier who did not know he was Archimedes. According to Plutarch, the soldier demanded that Archimedes come with him, but Archimedes declined, saying that he had to finish working on the problem, and the soldier killed him with his sword. Another story from Plutarch has Archimedes carrying mathematical instruments before being killed because a soldier thought they were valuable items. Another Roman writer, Valerius Maximus (fl. 30 AD), wrote in Memorable Doings and Sayings that Archimedes, "protecting the dust with his hands, said 'I beg of you, do not disturb this'" as the soldier killed him; this is similar to the last words now commonly attributed to him, "Do not disturb my circles," which otherwise do not appear in any ancient sources.
Marcellus was reportedly angered by Archimedes' death, as he considered him a valuable scientific asset (he called Archimedes "a geometrical Briareus") and had ordered that he should not be harmed. Cicero (106–43 BC) mentions that Marcellus brought to Rome two planetariums built by Archimedes, which showed the motion of the Sun, Moon and five planets; one of these he donated to the Temple of Virtue in Rome, and the other he allegedly kept as his only personal loot from Syracuse. Pappus of Alexandria reports on a now lost treatise by Archimedes, On Sphere-Making, which may have dealt with the construction of these mechanisms. Constructing mechanisms of this kind would have required a sophisticated knowledge of differential gearing, which was once thought to have been beyond the range of the technology available in ancient times, but the discovery in 1902 of the Antikythera mechanism, another ancient Greek geared device designed for a similar purpose, has confirmed that devices of this kind were known to the ancient Greeks, with some scholars regarding Archimedes' device as a precursor.
While serving as a quaestor in Sicily, Cicero himself found what was presumed to be Archimedes' tomb near the Agrigentine gate in Syracuse, in a neglected condition and overgrown with bushes. Cicero had the tomb cleaned up and was able to see the carving and read some of the verses that had been added as an inscription. The tomb carried a sculpture illustrating Archimedes' favorite mathematical proof, that the volume and surface area of the sphere are two-thirds that of an enclosing cylinder including its bases.
Mathematics
While he is often regarded as a designer of mechanical devices, Archimedes also made contributions to the field of mathematics, both in applying the techniques of his predecessors to obtain new results, and developing new methods of his own.
Method of exhaustion
In Quadrature of the Parabola, Archimedes states that a certain proposition in Euclid's Elements demonstrating that the area of a circle is proportional to the square of its diameter was proven using a lemma now known as the Archimedean property: that "the excess by which the greater of two unequal regions exceeds the lesser, if added to itself, can exceed any given bounded region." Prior to Archimedes, Eudoxus of Cnidus and other earlier mathematicians applied this lemma, in a technique now referred to as the "method of exhaustion," to find the volume of a tetrahedron, cylinder, cone, and sphere, for which proofs are given in book XII of Euclid's Elements.
In Measurement of a Circle, Archimedes employed this method to show that the area of a circle is the same as that of a right triangle whose base and height are equal to the circle's radius and circumference. He then approximated the ratio of the circumference to the diameter, the value of π, by drawing a regular hexagon outside a circle and a smaller regular hexagon inside the circle, and progressively doubling the number of sides of each regular polygon, calculating the length of a side of each polygon at each step. As the number of sides increases, each polygon becomes a more accurate approximation of the circle. After four such steps, when the polygons had 96 sides each, he was able to determine that the value of π lay between 3 1/7 (approx. 3.1429) and 3 10/71 (approx. 3.1408), consistent with its actual value of approximately 3.1416. In the same treatise, he also gives the value of the square root of 3 as lying between 265/153 (approximately 1.7320261) and 1351/780 (approximately 1.7320512), which he may have derived from a similar method.
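The side-doubling step is equivalent, in modern terms, to updating the circumscribed and inscribed semiperimeters by a harmonic and then a geometric mean. The following Python sketch is a reconstruction under that assumption (the function name is illustrative); it uses floating-point arithmetic where Archimedes used conservatively rounded fractions, so its bounds come out slightly tighter than his published ones:

```python
from math import sqrt

def archimedes_pi(doublings=4):
    """Bound pi between the semiperimeters of inscribed (b) and
    circumscribed (a) regular polygons around a unit circle,
    starting from hexagons and doubling the number of sides."""
    a = 2 * sqrt(3)  # circumscribed hexagon semiperimeter
    b = 3.0          # inscribed hexagon semiperimeter
    sides = 6
    for _ in range(doublings):
        a = 2 * a * b / (a + b)  # harmonic mean: circumscribed 2n-gon
        b = sqrt(a * b)          # geometric mean: inscribed 2n-gon
        sides *= 2
    return sides, b, a           # b < pi < a

print(archimedes_pi())  # (96, 3.14103..., 3.14271...)
```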
In Quadrature of the Parabola, Archimedes used this technique to prove that the area enclosed by a parabola and a straight line is 4/3 times the area of a corresponding inscribed triangle, expressing the solution to the problem as an infinite geometric series with the common ratio 1/4:
If the first term in this series is the area of the triangle, then the second is the sum of the areas of two triangles whose bases are the two smaller secant lines, and whose third vertex is where the line that is parallel to the parabola's axis and that passes through the midpoint of the base intersects the parabola, and so on. This proof uses a variation of the series 1 + 1/4 + 1/16 + 1/64 + ..., which sums to 4/3.
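In modern notation, the series in question is the geometric series with common ratio 1/4, and its sum follows from the standard closed form:

$$1 + \frac{1}{4} + \frac{1}{4^2} + \frac{1}{4^3} + \cdots = \sum_{n=0}^{\infty} \frac{1}{4^n} = \frac{1}{1 - \tfrac{1}{4}} = \frac{4}{3}.$$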
He also used this technique in order to measure the surface areas of a sphere and cone, to calculate the area of an ellipse, and to find the area contained within an Archimedean spiral.
Mechanical method
In addition to building on the works of earlier mathematicians with the method of exhaustion, Archimedes also pioneered a novel technique using the law of the lever in order to measure the area and volume of shapes using physical means. He first gives an outline of this proof in Quadrature of the Parabola alongside the geometric proof, but he gives a fuller explanation in The Method of Mechanical Theorems. According to Archimedes, he proved the results in his mathematical treatises first using this method, and then worked backwards, applying the method of exhaustion only after he had already calculated an approximate value for the answer.
Large numbers
Archimedes also developed methods for representing large numbers.
In The Sand Reckoner, Archimedes devised a system of counting based on the myriad, the Greek term for the number 10,000, in order to name a number greater than the number of grains of sand needed to fill the universe. He proposed a number system using powers of a myriad of myriads (100 million, i.e., 10,000 × 10,000) and concluded that the number of grains of sand required to fill the universe would be 8 vigintillion, or 8 × 10⁶³. In doing so, he demonstrated that mathematics could represent arbitrarily large numbers.
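As a rough modern paraphrase of the bookkeeping (the function and the iterative search below are illustrative conveniences, not Archimedes' own procedure), numbers of the k-th order in his system run up to (10⁸)ᵏ, and the sand count falls in the eighth order:

```python
def archimedean_order(n):
    """Return k such that n is a number of the k-th order in the
    Sand Reckoner's system, where the k-th order runs up to (10**8)**k."""
    myriad_myriad = 10**8  # a myriad of myriads
    k, bound = 1, myriad_myriad
    while n >= bound:
        k += 1
        bound *= myriad_myriad
    return k

print(archimedean_order(8 * 10**63))  # 8: an eighth-order number
```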
In the Cattle Problem, Archimedes challenges the mathematicians at the Library of Alexandria to count the numbers of cattle in the Herd of the Sun, which involves solving a number of simultaneous Diophantine equations. There is a more difficult version of the problem in which some of the answers are required to be square numbers; the answer to this version is a very large number, approximately 7.760271 × 10²⁰⁶⁵⁴⁴.
Archimedean solids
In a lost work described by Pappus of Alexandria, Archimedes also proved that there are exactly thirteen semiregular polyhedra.
Writings
Archimedes made his work known through correspondence with the mathematicians in Alexandria; his treatises were originally written in Doric Greek, the dialect of ancient Syracuse.
Surviving works
The following are ordered chronologically based on new terminological and historical criteria set by Knorr (1978) and Sato (1986).
Measurement of a Circle
This is a short work consisting of three propositions. It is written in the form of a correspondence with Dositheus of Pelusium, who was a student of Conon of Samos. In Proposition II, Archimedes gives an approximation of the value of pi, showing that it is greater than 3 10/71 (3.1408...) and less than 3 1/7 (3.1428...).
The Sand Reckoner
In this treatise, also known as Psammites, Archimedes finds a number that is greater than the grains of sand needed to fill the universe. This book mentions the heliocentric theory of the Solar System proposed by Aristarchus of Samos, as well as contemporary ideas about the size of the Earth and the distance between various celestial bodies, and attempts to measure the apparent diameter of the Sun. By using a system of numbers based on powers of the myriad, Archimedes concludes that the number of grains of sand required to fill the universe is 8 × 10⁶³ in modern notation. The introductory letter states that Archimedes' father was an astronomer named Phidias. The Sand Reckoner is the only surviving work in which Archimedes discusses his views on astronomy.
Archimedes discusses astronomical measurements of the Earth, Sun, and Moon, as well as Aristarchus' heliocentric model of the universe, in the Sand-Reckoner. Without the use of either trigonometry or a table of chords, Archimedes determines the Sun's apparent diameter by first describing the procedure and instrument used to make observations (a straight rod with pegs or grooves), applying correction factors to these measurements, and finally giving the result in the form of upper and lower bounds to account for observational error.
Ptolemy, quoting Hipparchus, also references Archimedes' solstice observations in the Almagest. This would make Archimedes the first known Greek to have recorded multiple solstice dates and times in successive years.
On the Equilibrium of Planes
There are two books to On the Equilibrium of Planes: the first contains seven postulates and fifteen propositions, while the second book contains ten propositions. In the first book, Archimedes proves the law of the lever, which states that magnitudes are in equilibrium at distances reciprocally proportional to their weights.
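In modern notation, weights w₁ and w₂ placed at distances d₁ and d₂ from the fulcrum are in equilibrium exactly when their moments agree:

$$w_1 d_1 = w_2 d_2, \qquad \text{equivalently} \qquad \frac{w_1}{w_2} = \frac{d_2}{d_1}.$$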
Earlier descriptions of the principle of the lever are found in a work by Euclid and in the Mechanical Problems, belonging to the Peripatetic school of the followers of Aristotle, the authorship of which has been attributed by some to Archytas.
Archimedes uses the principles derived to calculate the areas and centers of gravity of various geometric figures including triangles, parallelograms and parabolas.
Quadrature of the Parabola
In this work of 24 propositions addressed to Dositheus, Archimedes proves that the area enclosed by a parabola and a straight line is 4/3 the area of a triangle with equal base and height. He achieves this by two different methods: first by applying the law of the lever, and second by summing an infinite geometric series with common ratio 1/4.
On the Sphere and Cylinder
In this two-volume treatise addressed to Dositheus, Archimedes obtains the result of which he was most proud, namely the relationship between a sphere and a circumscribed cylinder of the same height and diameter. The volume is 4/3πr³ for the sphere, and 2πr³ for the cylinder. The surface area is 4πr² for the sphere, and 6πr² for the cylinder (including its two bases), where r is the radius of the sphere and cylinder.
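Writing the result in modern notation makes the relationship explicit: for a cylinder of radius r and height 2r, both the volume ratio and the surface-area ratio of sphere to cylinder equal 2/3:

$$\frac{V_{\text{sphere}}}{V_{\text{cylinder}}} = \frac{\tfrac{4}{3}\pi r^{3}}{2\pi r^{3}} = \frac{2}{3}, \qquad \frac{A_{\text{sphere}}}{A_{\text{cylinder}}} = \frac{4\pi r^{2}}{6\pi r^{2}} = \frac{2}{3}.$$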
On Spirals
This work of 28 propositions is also addressed to Dositheus. The treatise defines what is now called the Archimedean spiral. It is the locus of points corresponding to the locations over time of a point moving away from a fixed point with a constant speed along a line which rotates with constant angular velocity. Equivalently, in modern polar coordinates (r, θ), it can be described by the equation r = a + bθ with real numbers a and b.
This is an early example of a mechanical curve (a curve traced by a moving point) considered by a Greek mathematician.
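A short Python sketch (the function name and sampling choices are assumptions for illustration) that generates Cartesian points of the curve from the polar equation; successive turnings of the spiral stay a constant 2πb apart:

```python
import math

def spiral_points(a=0.0, b=1.0, turns=3, steps=300):
    """Sample (x, y) points of the Archimedean spiral r = a + b*theta."""
    points = []
    for i in range(steps + 1):
        theta = 2 * math.pi * turns * i / steps
        r = a + b * theta
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

print(spiral_points()[:2])  # starts at the fixed point when a = 0
```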
On Conoids and Spheroids
This is a work in 32 propositions addressed to Dositheus. In this treatise Archimedes calculates the areas and volumes of sections of cones, spheres, and paraboloids.
On Floating Bodies
There are two books of On Floating Bodies. In the first book, Archimedes spells out the law of equilibrium of fluids and proves that water will adopt a spherical form around a center of gravity.
This may have been an attempt at explaining the theory of contemporary Greek astronomers such as Eratosthenes that the Earth is round. The fluids described by Archimedes are not self-gravitating, since he assumes the existence of a point towards which all things fall in order to derive the spherical shape.
Archimedes' principle of buoyancy is given in this work, stated as follows:
Any body wholly or partially immersed in fluid experiences an upthrust equal to, but opposite in direction to, the weight of the fluid displaced.
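A minimal numeric illustration of the principle, assuming SI units and example values that are not from the treatise:

```python
RHO_WATER = 1000.0  # kg/m^3, fresh water (assumed)
G = 9.81            # m/s^2, standard gravity (assumed)

def buoyant_force(submerged_volume_m3, fluid_density=RHO_WATER):
    """Upthrust equals the weight of the fluid displaced."""
    return fluid_density * submerged_volume_m3 * G

# A 2-litre (0.002 m^3) object fully submerged in water:
print(buoyant_force(0.002))  # ~19.6 N, directed upward
```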
In the second part, he calculates the equilibrium positions of sections of paraboloids. This was probably an idealization of the shapes of ships' hulls. Some of his sections float with the base under water and the summit above water, similar to the way that icebergs float.
Ostomachion
Also known as Loculus of Archimedes or Archimedes' Box, this is a dissection puzzle similar to a Tangram, and the treatise describing it was found in more complete form in the Archimedes Palimpsest. Archimedes calculates the areas of the 14 pieces which can be assembled to form a square. Reviel Netz of Stanford University argued in 2003 that Archimedes was attempting to determine how many ways the pieces could be assembled into the shape of a square. Netz calculates that the pieces can be made into a square 17,152 ways. The number of arrangements is 536 when solutions that are equivalent by rotation and reflection are excluded. The puzzle represents an example of an early problem in combinatorics.
The origin of the puzzle's name is unclear; it has been suggested that it is taken from the Ancient Greek word for "throat" or "gullet", stomachos. Ausonius calls the puzzle Ostomachion, a Greek compound word formed from the roots of osteon ("bone") and machē ("fight").
The cattle problem
In this work, addressed to Eratosthenes and the mathematicians in Alexandria, Archimedes challenges them to count the numbers of cattle in the Herd of the Sun, which involves solving a number of simultaneous Diophantine equations. Gotthold Ephraim Lessing discovered this work in a Greek manuscript consisting of a 44-line poem in the Herzog August Library in Wolfenbüttel, Germany in 1773. There is a more difficult version of the problem in which some of the answers are required to be square numbers. A. Amthor first solved this version of the problem in 1880, and the answer is a very large number, approximately 7.760271 × 10²⁰⁶⁵⁴⁴.
The Method of Mechanical Theorems
As with The Cattle Problem, The Method of Mechanical Theorems was written in the form of a letter to Eratosthenes in Alexandria.
In this work Archimedes uses a novel method, an early form of Cavalieri's principle, to rederive the results of the treatises sent to Dositheus (Quadrature of the Parabola, On the Sphere and Cylinder, On Spirals, On Conoids and Spheroids) that he had previously proved by the method of exhaustion. Using the law of the lever that he applied in On the Equilibrium of Planes, he first finds the center of gravity of an object, then reasons geometrically from there to derive its area or volume more easily. Archimedes states that he used this method to discover the results in the treatises sent to Dositheus before proving them more rigorously with the method of exhaustion, remarking that it is useful to know that a result is true before proving it rigorously, much as Eudoxus of Cnidus was aided in proving that the volume of a cone is one-third that of the corresponding cylinder by Democritus' earlier assertion that a pyramid has one-third the volume of a prism with the same base and height.
This treatise was thought lost until the discovery of the Archimedes Palimpsest in 1906.
Apocryphal works
Archimedes' Book of Lemmas or Liber Assumptorum is a treatise with 15 propositions on the nature of circles. The earliest known copy of the text is in Arabic. T. L. Heath and Marshall Clagett argued that it cannot have been written by Archimedes in its current form, since it quotes Archimedes, suggesting modification by another author. The Lemmas may be based on an earlier work by Archimedes that is now lost.
Other questionable attributions to Archimedes' work include the Latin poem Carmen de ponderibus et mensuris (4th or 5th century), which describes the use of a hydrostatic balance to solve the problem of the crown, and the 12th-century text Mappae clavicula, which contains instructions on how to perform assaying of metals by calculating their specific gravities.
Lost works
Many written works by Archimedes have not survived or are extant only in heavily edited fragments. Pappus of Alexandria mentions On Sphere-Making, as well as a work on semiregular polyhedra and another work on spirals, while Theon of Alexandria quotes a remark about refraction from the Catoptrica. Principles, addressed to Zeuxippus, explained the number system used in The Sand Reckoner; other lost works include On Balances and On Centers of Gravity.
Scholars in the medieval Islamic world also attribute to Archimedes a formula for calculating the area of a triangle from the lengths of its sides, today known as Heron's formula after its first known appearance in the work of Heron of Alexandria in the 1st century AD; it may have been proven earlier in a work of Archimedes that is now lost.
Archimedes Palimpsest
In 1906, the Danish professor Johan Ludvig Heiberg visited Constantinople to examine a 174-page goatskin parchment of prayers, written in the 13th century, after reading a short transcription published seven years earlier by Papadopoulos-Kerameus. He confirmed that it was indeed a palimpsest, a document with text that had been written over an erased older work. Palimpsests were created by scraping the ink from existing works and reusing them, a common practice in the Middle Ages, as vellum was expensive. The older works in the palimpsest were identified by scholars as 10th-century copies of previously lost treatises by Archimedes. The palimpsest holds seven treatises, including the only surviving copy of On Floating Bodies in the original Greek. It is the only known source of The Method of Mechanical Theorems, referred to by Suidas and thought to have been lost forever. Stomachion was also discovered in the palimpsest, with a more complete analysis of the puzzle than had been found in previous texts.
The treatises in the Archimedes Palimpsest include:
On the Equilibrium of Planes
On Spirals
Measurement of a Circle
On the Sphere and Cylinder
On Floating Bodies
The Method of Mechanical Theorems
Stomachion
Speeches by the 4th century BC politician Hypereides
A commentary on Aristotle's Categories
Other works
The parchment spent hundreds of years in a monastery library in Constantinople before being sold to a private collector in the 1920s. On 29 October 1998, it was sold at auction to an anonymous buyer for a total of $2.2 million. The palimpsest was stored at the Walters Art Museum in Baltimore, Maryland, where it was subjected to a range of modern tests, including the use of ultraviolet and X-ray light to read the overwritten text. It has since returned to its anonymous owner.
Legacy
Archimedes is sometimes called the father of mathematics and of mathematical physics, and historians of science and mathematics almost universally agree that he was the finest mathematician of antiquity.
Classical antiquity
The reputation that Archimedes had for mechanical inventions in classical antiquity is well-documented; Athenaeus recounts in his Deipnosophistae how Archimedes supervised the construction of the largest known ship in antiquity, the Syracusia, while Apuleius talks about his work in catoptrics. Plutarch had claimed that Archimedes disdained mechanics and focused primarily on pure geometry, but this is generally considered by modern scholarship to be a mischaracterization, fabricated to bolster Plutarch's own Platonist values rather than an accurate presentation of Archimedes; and, unlike his inventions, Archimedes' mathematical writings were little known in antiquity outside of the works of Alexandrian mathematicians. The first comprehensive compilation was not made until the 6th century AD, by Isidore of Miletus in Byzantine Constantinople, while Eutocius' commentaries on Archimedes' works earlier in the same century opened them to wider readership for the first time.
Middle ages
Archimedes' work was translated into Arabic by Thābit ibn Qurra (836–901 AD), and into Latin via Arabic by Gerard of Cremona (c. 1114–1187). Direct Greek to Latin translations were later done by William of Moerbeke (c. 1215–1286) and Iacobus Cremonensis (c. 1400–1453).
Renaissance and early modern Europe
During the Renaissance, the editio princeps (first edition) of Archimedes' works in Greek and Latin was published in Basel in 1544 by Johann Herwagen; it proved an influential source of ideas for scientists during the Renaissance and again in the 17th century.
Leonardo da Vinci repeatedly expressed admiration for Archimedes, and attributed his invention Architonnerre to Archimedes. Galileo Galilei called him "superhuman" and "my master", while Christiaan Huygens said, "I think Archimedes is comparable to no one", consciously emulating him in his early work. Gottfried Wilhelm Leibniz said, "He who understands Archimedes and Apollonius will admire less the achievements of the foremost men of later times".
Italian numismatist and archaeologist Filippo Paruta (1552–1629) and Leonardo Agostini (1593–1676) reported on a bronze coin in Sicily with the portrait of Archimedes on the obverse and a cylinder and sphere with the monogram ARMD in Latin on the reverse. Although the coin is now lost and its date is not precisely known, Ivo Schneider described the reverse as "a sphere resting on a base – probably a rough image of one of the planetaria created by Archimedes," and suggested it might have been minted in Rome for Marcellus who "according to ancient reports, brought two spheres of Archimedes with him to Rome".
In modern mathematics
Gauss's heroes were Archimedes and Newton, and Moritz Cantor, who studied under Gauss at the University of Göttingen, reported that he once remarked in conversation that "there had been only three epoch-making mathematicians: Archimedes, Newton, and Eisenstein". Likewise, Alfred North Whitehead said that "in the year 1500 Europe knew less than Archimedes who died in the year 212 BC." The historian of mathematics Reviel Netz, echoing Whitehead's proclamation on Plato and philosophy, said that "Western science is but a series of footnotes to Archimedes," calling him "the most important scientist who ever lived." Eric Temple Bell wrote that "Any list of the three "greatest" mathematicians of all history would include the name of Archimedes. The other two usually associated with him are Newton and Gauss. Some, considering the relative wealth—or poverty—of mathematics and physical science in the respective ages in which these giants lived, and estimating their achievements against the background of their times, would put Archimedes first."
The discovery in 1906 of previously lost works by Archimedes in the Archimedes Palimpsest has provided new insights into how he obtained mathematical results.
The Fields Medal for outstanding achievement in mathematics carries a portrait of Archimedes, along with a carving illustrating his proof on the sphere and the cylinder. The inscription around the head of Archimedes is a quote attributed to 1st century AD poet Manilius, which reads in Latin: Transire suum pectus mundoque potiri ("Rise above oneself and grasp the world").
Cultural influence
The world's first seagoing steamship with a screw propeller was the SS Archimedes, which was launched in 1839 and named in honor of Archimedes and his work on the screw.
Archimedes has also appeared on postage stamps issued by East Germany (1973), Greece (1983), Italy (1983), Nicaragua (1971), San Marino (1982), and Spain (1963).
The exclamation of Eureka! attributed to Archimedes is the state motto of California. In this instance, the word refers to the discovery of gold near Sutter's Mill in 1848 which sparked the California gold rush.
There is a crater on the Moon named Archimedes in his honor, as well as a lunar mountain range, the Montes Archimedes.
See also
Concepts
Arbelos
Archimedean point
Archimedes' axiom
Archimedes number
Archimedes paradox
Archimedean solid
Archimedes' twin circles
Methods of computing square roots
Salinon
Steam cannon
People
Zhang Heng
Notes
Footnotes
Citations
Ancient testimony
Plutarch, Life of Marcellus
Modern sources
Further reading
Clagett, Marshall. 1964–1984. Archimedes in the Middle Ages 1–5. Madison, WI: University of Wisconsin Press.
Clagett, Marshall. 1970. "Archimedes". In Charles Coulston Gillispie, ed. Dictionary of Scientific Biography. Vol. 1 (Abailard–Berg). New York: Charles Scribner's Sons.
Gow, Mary. 2005. Archimedes: Mathematical Genius of the Ancient World. Enslow Publishing.
Hasan, Heather. 2005. Archimedes: The Father of Mathematics. Rosen Central.
Netz, Reviel. 2004–2017. The Works of Archimedes: Translation and Commentary. 1–2. Cambridge University Press. Vol. 1: "The Two Books on the Sphere and the Cylinder". Vol. 2: "On Spirals".
Netz, Reviel, and William Noel. 2007. The Archimedes Codex. Orion Publishing Group.
Pickover, Clifford A. 2008. Archimedes to Hawking: Laws of Science and the Great Minds Behind Them. Oxford University Press.
Simms, Dennis L. 1995. Archimedes the Engineer. Continuum International Publishing Group.
Stein, Sherman. 1999. Archimedes: What Did He Do Besides Cry Eureka?. Mathematical Association of America.
|
;210s BC deaths;280s BC births;3rd-century BC Greek mathematicians;3rd-century BC Greek writers;3rd-century BC Syracusans;Ancient Greek engineers;Ancient Greek geometers;Ancient Greek inventors;Ancient Greek murder victims;Ancient Greek physicists;Ancient Syracusans;Buoyancy;Doric Greek writers;Fluid dynamicists;Hellenistic-era philosophers;Mathematicians from Sicily;People from Syracuse, Sicily;Scientists from Sicily;Sicilian Greeks;Year of birth uncertain;Year of death uncertain
|
https://en.wikipedia.org/wiki/Anthemius%20of%20Tralles
|
Anthemius of Tralles (Medieval Greek: Anthémios o Trallianós; c. 474 – between 533 and 558) was a Byzantine Greek from Tralles who worked as a geometer and architect in Constantinople, the capital of the Byzantine Empire. With Isidore of Miletus, he designed the Hagia Sophia for Justinian I.
Life
Anthemius was one of the five sons of Stephanus of Tralles, a physician. His brothers were Dioscorus, Alexander, Olympius, and Metrodorus. Dioscorus followed his father's profession in Tralles; Alexander did so in Rome and became one of the most celebrated medical men of his time; Olympius became a noted lawyer; and Metrodorus worked as a grammarian in Constantinople.
Anthemius was said to have annoyed his neighbor Zeno in two ways: first, by engineering a miniature earthquake, sending steam through leather tubes he had fixed among the joists and flooring of Zeno's parlor while Zeno was entertaining friends; and second, by simulating thunder and lightning and flashing intolerable light into Zeno's eyes from a slightly hollowed mirror. In addition to his familiarity with steam, some dubious authorities credited Anthemius with a knowledge of gunpowder or other explosive compounds.
Mathematics
Anthemius was a capable mathematician. In his treatise On Burning Mirrors, intended to facilitate the construction of surfaces that reflect light to a single point, he described the string construction of the ellipse and assumed a property of ellipses not found in Apollonius of Perga's Conics: the equality of the angles subtended at a focus by two tangents drawn from a point. His work also includes the first practical use of the directrix: having given the focus and a double ordinate, he used the focus and directrix to obtain any number of points on a parabola. This work was later known to Arab mathematicians such as Alhazen.
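In modern terms, the focus–directrix property says that every point of a parabola is equidistant from the focus and the directrix. The sketch below (the parameterization and names are assumptions for illustration) generates points this way and checks the defining property:

```python
import math

def parabola_points(p=1.0, xs=None):
    """Points of the parabola x**2 = 4*p*y, which has focus (0, p)
    and directrix y = -p."""
    if xs is None:
        xs = [i / 10 for i in range(-20, 21)]
    points = [(x, x * x / (4 * p)) for x in xs]
    # Defining property: distance to focus == distance to directrix.
    for x, y in points:
        assert math.isclose(math.hypot(x, y - p), y + p)
    return points

print(parabola_points()[:3])
```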
Eutocius of Ascalon's commentary on Apollonius's Conics was dedicated to Anthemius.
Architecture
As an architect, Anthemius is best known for his work designing the Hagia Sophia. He was commissioned with Isidore of Miletus by Justinian I shortly after the earlier church on the site burned down in 532 but died early on in the project. He is also said to have repaired the flood defenses at Daras.
Editions of On Burning-Glasses
Notes
References
|
470s births;5th-century Byzantine scientists;5th-century Byzantine writers;5th-century mathematicians;6th-century Byzantine scientists;6th-century Byzantine writers;6th-century architects;6th-century deaths;6th-century mathematicians;Byzantine architects;Greek Christians;Hagia Sophia;Justinian I;People from Tralles
|
https://en.wikipedia.org/wiki/Adaptive%20radiation
|
In evolutionary biology, adaptive radiation is a process in which organisms diversify rapidly from an ancestral species into a multitude of new forms, particularly when a change in the environment makes new resources available, alters biotic interactions or opens new environmental niches. Starting with a single ancestor, this process results in the speciation and phenotypic adaptation of an array of species exhibiting different morphological and physiological traits. The prototypical example of adaptive radiation is finch speciation on the Galapagos ("Darwin's finches"), but examples are known from around the world.
Characteristics
Four features can be used to identify an adaptive radiation:
A common ancestry of component species: specifically a recent ancestry. Note that this is not the same as a monophyly in which all descendants of a common ancestor are included.
A phenotype-environment correlation: a significant association between environments and the morphological and physiological traits used to exploit those environments.
Trait utility: the performance or fitness advantages of trait values in their corresponding environments.
Rapid speciation: presence of one or more bursts in the emergence of new species around the time that ecological and phenotypic divergence is underway.
Conditions
Adaptive radiations are thought to be triggered by an ecological opportunity or a new adaptive zone. Sources of ecological opportunity can be the loss of antagonists (competitors or predators), the evolution of a key innovation, or dispersal to a new environment. Any one of these ecological opportunities has the potential to result in an increase in population size and relaxed stabilizing (constraining) selection. As genetic diversity is positively correlated with population size, the expanded population will have more genetic diversity compared to the ancestral population. With reduced stabilizing selection, phenotypic diversity can also increase. In addition, intraspecific competition will increase, promoting divergent selection to use a wider range of resources. This ecological release provides the potential for ecological speciation and thus adaptive radiation.
Occupying a new environment might take place under the following conditions:
A new habitat has opened up: a volcano, for example, can create new ground in the middle of the ocean. This is the case in places like Hawaii and the Galapagos. For aquatic species, the formation of a large new lake habitat could serve the same purpose; the tectonic movement that formed the East African Rift, ultimately leading to the creation of the Rift Valley Lakes, is an example of this. An extinction event could effectively achieve this same result, opening up niches that were previously occupied by species that no longer exist.
This new habitat is relatively isolated. When a volcano erupts on the mainland and destroys an adjacent forest, it is likely that the terrestrial plant and animal species that used to live in the destroyed region will recolonize without evolving greatly. However, if a newly formed habitat is isolated, the species that colonize it will likely be somewhat random and uncommon arrivals.
The new habitat has a wide availability of niche space. The rare colonist can only adaptively radiate into as many forms as there are niches.
Relationship between mass-extinctions and mass adaptive radiations
A 2020 study found there to be no direct causal relationship between the proportionally most comparable mass radiations and extinctions in terms of "co-occurrence of species", substantially challenging the hypothesis of "creative mass extinctions".
Examples
Darwin's finches
Darwin's finches on the Galapagos Islands are a model system for the study of adaptive radiation. Today represented by approximately 15 species, Darwin's finches are Galapagos endemics famously adapted for a specialized feeding behavior (although one species, the Cocos finch (Pinaroloxias inornata), is not found in the Galapagos but on the island of Cocos south of Costa Rica). Darwin's finches are not actually finches in the true sense, but are members of the tanager family Thraupidae, and are derived from a single ancestor that arrived in the Galapagos from mainland South America perhaps just 3 million years ago. Excluding the Cocos finch, each species of Darwin's finch is generally widely distributed in the Galapagos and fills the same niche on each island. For the ground finches, this niche is a diet of seeds, and they have thick bills to facilitate the consumption of these hard materials. The ground finches are further specialized to eat seeds of a particular size: the large ground finch (Geospiza magnirostris) is the largest species of Darwin's finch and has the thickest beak for breaking open the toughest seeds, the small ground finch (Geospiza fuliginosa) has a smaller beak for eating smaller seeds, and the medium ground finch (Geospiza fortis) has a beak of intermediate size for optimal consumption of intermediately sized seeds (relative to G. magnirostris and G. fuliginosa). There is some overlap: for example, the most robust medium ground finches could have beaks larger than those of the smallest large ground finches. Because of this overlap, it can be difficult to tell the species apart by eye, though their songs differ. These three species often occur sympatrically, and during the rainy season in the Galapagos when food is plentiful, they specialize little and eat the same, easily accessible foods. It was not well-understood why their beaks were so adapted until Peter and Rosemary Grant studied their feeding behavior in the long dry season, and discovered that when food is scarce, the ground finches use their specialized beaks to eat the seeds that they are best suited to eat and thus avoid starvation.
The other finches in the Galapagos are similarly uniquely adapted for their particular niche. The cactus finches (Geospiza sp.) have somewhat longer beaks than the ground finches that serve the dual purpose of allowing them to feed on Opuntia cactus nectar and pollen while these plants are flowering, but on seeds during the rest of the year. The warbler-finches (Certhidea sp.) have short, pointed beaks for eating insects. The woodpecker finch (Camarhynchus pallidus) has a slender beak which it uses to pick at wood in search of insects; it also uses small sticks to reach insect prey inside the wood, making it one of the few animals that use tools.
The mechanism by which the finches initially diversified is still an area of active research. One proposition is that the finches were able to have a non-adaptive, allopatric speciation event on separate islands in the archipelago, such that when they reconverged on some islands, they were able to maintain reproductive isolation. Once they occurred in sympatry, niche specialization was favored so that the different species competed less directly for resources. This second, sympatric event was adaptive radiation.
Cichlids of the African Great Lakes
The haplochromine cichlid fishes in the Great Lakes of the East African Rift (particularly in Lake Tanganyika, Lake Malawi, and Lake Victoria) form the most speciose modern example of adaptive radiation. These lakes are believed to be home to about 2,000 different species of cichlid, spanning a wide range of ecological roles and morphological characteristics. Cichlids in these lakes fill nearly all of the roles typically filled by many fish families, including those of predators, scavengers, and herbivores, with varying dentitions and head shapes to match their dietary habits. In each case, the radiation events are only a few million years old, making the high level of speciation particularly remarkable. Several factors could be responsible for this diversity: the availability of a multitude of niches probably favored specialization, as few other fish taxa are present in the lakes (meaning that sympatric speciation was the most probable mechanism for initial specialization). Also, continual changes in the water level of the lakes during the Pleistocene (which often turned the largest lakes into several smaller ones) could have created the conditions for secondary allopatric speciation.
Tanganyika cichlids
Lake Tanganyika is the site from which nearly all the cichlid lineages of East Africa (including both riverine and lake species) originated. Thus, the species in the lake constitute a single adaptive radiation event but do not form a single monophyletic clade. Lake Tanganyika is also the least speciose of the three largest African Great Lakes, with only around 200 species of cichlid; however, these cichlids are more morphologically divergent and ecologically distinct than their counterparts in lakes Malawi and Victoria, an artifact of Lake Tanganyika's older cichlid fauna. Lake Tanganyika itself is believed to have formed 9–12 million years ago, putting a recent cap on the age of the lake's cichlid fauna. Many of Tanganyika's cichlids live very specialized lifestyles. The giant or emperor cichlid (Boulengerochromis microlepis) is a piscivore often ranked the largest of all cichlids (though it competes for this title with South America's Cichla temensis, the speckled peacock bass). It is thought that giant cichlids spawn only a single time, breeding in their third year and defending their young until they reach a large size, before dying of starvation some time thereafter. The three species of Altolamprologus are also piscivores, but with laterally compressed bodies and thick scales enabling them to chase prey into thin cracks in rocks without damaging their skin. Plecodus straeleni has evolved large, strangely curved teeth that are designed to scrape scales off of the sides of other fish, scales being its main source of food. Gnathochromis permaxillaris possesses a large mouth with a protruding upper lip, and feeds by opening this mouth downward onto the sandy lake bottom, sucking in small invertebrates. A number of Tanganyika's cichlids are shell-brooders, meaning that mating pairs lay and fertilize their eggs inside of empty shells on the lake bottom. Lamprologus callipterus is a unique egg-brooding species, with 15 cm-long males amassing collections of shells and guarding them in the hopes of attracting females (about 6 cm in length) to lay eggs in these shells. These dominant males must defend their territories from three types of rival: (1) other dominant males looking to steal shells; (2) younger, "sneaker" males looking to fertilize eggs in a dominant male's territory; and (3) tiny, 2–4 cm "parasitic dwarf" males that also attempt to rush in and fertilize eggs in the dominant male's territory. These parasitic dwarf males never grow to the size of dominant males, and the male offspring of dominant and parasitic dwarf males grow with 100% fidelity into the form of their fathers. A number of other highly specialized Tanganyika cichlids exist aside from these examples, including those adapted for life in open lake water up to 200m deep.
Malawi cichlids
The cichlids of Lake Malawi constitute a "species flock" of up to 1000 endemic species. Only seven cichlid species in Lake Malawi are not a part of the species flock: the Eastern happy (Astatotilapia calliptera), the sungwa (Serranochromis robustus), and five tilapia species (genera Oreochromis and Coptodon). All of the other cichlid species in the lake are descendants of a single original colonist species, which itself was descended from Tanganyikan ancestors. The common ancestor of Malawi's species flock is believed to have reached the lake 3.4 million years ago at the earliest, making Malawi cichlids' diversification into their present numbers particularly rapid. Malawi's cichlids span a similarly broad range of feeding behaviors to those of Tanganyika, but also show signs of a much more recent origin. For example, all members of the Malawi species flock are mouth-brooders, meaning the female keeps her eggs in her mouth until they hatch; in almost all species, the eggs are also fertilized in the female's mouth, and in a few species, the females continue to guard their fry in their mouth after they hatch. Males of most species display predominantly blue coloration when mating. However, a number of particularly divergent species are known from Malawi, including the piscivorous Nimbochromis livingstonii, which lies on its side in the substrate until small cichlids, perhaps drawn to its broken white patterning, come to inspect the predator - at which point they are swiftly eaten.
Victoria's cichlids
Lake Victoria's cichlids are also a species flock, once composed of some 500 or more species. The deliberate introduction of the Nile Perch (Lates niloticus) in the 1950s proved disastrous for Victoria cichlids, and the collective biomass of the Victoria cichlid species flock has decreased substantially and an unknown number of species have become extinct. However, the original range of morphological and behavioral diversity seen in the lake's cichlid fauna is still mostly present today, if endangered. These again include cichlids specialized for niches across the trophic spectrum, as in Tanganyika and Malawi, but again, there are standouts. Victoria is famously home to many piscivorous cichlid species, some of which feed by sucking the contents out of mouthbrooding females' mouths. Victoria's cichlids constitute a far younger radiation than even that of Lake Malawi, with estimates of the age of the flock ranging from 200,000 years to as little as 14,000.
Adaptive radiation in Hawaii
Hawaii has served as the site of a number of adaptive radiation events, owing to its isolation, recent origin, and large land area. The three most famous examples of these radiations are presented below, though insects like the Hawaiian drosophilid flies and Hyposmocoma moths have also undergone adaptive radiation.
Hawaiian honeycreepers
The Hawaiian honeycreepers form a large, highly morphologically diverse species group of birds that began radiating in the early days of the Hawaiian archipelago. While today only 17 species are known to persist in Hawaii (3 more may or may not be extinct), there were more than 50 species prior to Polynesian colonization of the archipelago (between 18 and 21 species have gone extinct since the discovery of the islands by westerners). The Hawaiian honeycreepers are known for their beaks, which are specialized to satisfy a wide range of dietary needs: for example, the beak of the ʻakiapōlāʻau (Hemignathus wilsoni) is characterized by a short, sharp lower mandible for scraping bark off of trees, and the much longer, curved upper mandible is used to probe the wood underneath for insects. Meanwhile, the ʻiʻiwi (Drepanis coccinea) has a very long curved beak for reaching nectar deep in Lobelia flowers. An entire clade of Hawaiian honeycreepers, the tribe Psittirostrini, is composed of thick-billed, mostly seed-eating birds, like the Laysan finch (Telespiza cantans). In at least some cases, similar morphologies and behaviors appear to have evolved convergently among the Hawaiian honeycreepers; for example, the short, pointed beaks of Loxops and Oreomystis evolved separately despite once forming the justification for lumping the two genera together. The Hawaiian honeycreepers are believed to have descended from a single common ancestor some 15 to 20 million years ago, though estimates range as low as 3.5 million years.
Hawaiian silverswords
Adaptive radiation is not a strictly vertebrate phenomenon, and examples are also known from among plants. The most famous example of adaptive radiation in plants is quite possibly the Hawaiian silverswords, named for alpine desert-dwelling Argyroxiphium species with long, silvery leaves that live for up to 20 years before growing a single flowering stalk and then dying. The Hawaiian silversword alliance consists of twenty-eight species of Hawaiian plants which, aside from the namesake silverswords, includes trees, shrubs, vines, cushion plants, and more. The silversword alliance is believed to have originated in Hawaii no more than 6 million years ago, making this one of Hawaii's youngest adaptive radiation events. This means that the silverswords evolved on Hawaii's modern high islands, and descended from a single common ancestor that arrived on Kauai from western North America. The closest modern relatives of the silverswords today are California tarweeds of the family Asteraceae.
Hawaiian lobelioids
Hawaii is also the site of a separate major floral adaptive radiation event: the Hawaiian lobelioids. The Hawaiian lobelioids are significantly more speciose than the silverswords, perhaps because they have been present in Hawaii for so much longer: they descended from a single common ancestor that arrived in the archipelago up to 15 million years ago. Today the Hawaiian lobelioids form a clade of over 125 species, including succulents, trees, shrubs, epiphytes, etc. Many species have been lost to extinction, and many of the surviving species are endangered.
Caribbean anoles
Anole lizards are distributed broadly in the New World, from the Southeastern US to South America. With over 400 species currently recognized, often placed in a single genus (Anolis), they constitute one of the largest radiation events among all lizards. Anole radiation on the mainland has largely been a process of speciation, and is not adaptive to any great degree, but anoles on each of the Greater Antilles (Cuba, Hispaniola, Puerto Rico, and Jamaica) have adaptively radiated in separate, convergent ways. On each of these islands, anoles have evolved with such a consistent set of morphological adaptations that each species can be assigned to one of six "ecomorphs": trunk–ground, trunk–crown, grass–bush, crown–giant, twig, and trunk. Take for example crown–giants from each of these islands: the Cuban Anolis luteogularis, Hispaniola's Anolis ricordii, Puerto Rico's Anolis cuvieri, and Jamaica's Anolis garmani (Cuba and Hispaniola are both home to more than one species of crown–giant). These anoles are all large, canopy-dwelling species with large heads and large lamellae (scales on the undersides of the fingers and toes that are important for traction in climbing), and yet none of these species are particularly closely related and appear to have evolved these similar traits independently. The same can be said of the other five ecomorphs across the Caribbean's four largest islands. Much like in the case of the cichlids of the three largest African Great Lakes, each of these islands is home to its own convergent Anolis adaptive radiation event.
Other examples
Presented above are the most well-documented examples of modern adaptive radiation, but other examples are known. Populations of three-spined sticklebacks have repeatedly diverged and evolved into distinct ecotypes. On Madagascar, birds of the family Vangidae are marked by very distinct beak shapes to suit their ecological roles. Madagascan mantellid frogs have radiated into forms that mirror other tropical frog faunas, with the brightly colored mantellas (Mantella) having evolved convergently with the Neotropical poison dart frogs of Dendrobatidae, while the arboreal Boophis species are the Madagascan equivalent of tree frogs and glass frogs. The pseudoxyrhophiine snakes of Madagascar have evolved into fossorial, arboreal, terrestrial, and semi-aquatic forms that converge with the colubroid faunas in the rest of the world. These Madagascan examples are significantly older than most of the other examples presented here: Madagascar's fauna has been evolving in isolation since the island split from India some 88 million years ago, and the Mantellidae originated around 50 mya. Older examples are known: the K-Pg extinction event, which caused the disappearance of the dinosaurs and most other reptilian megafauna 65 million years ago, is seen as having triggered a global adaptive radiation event that created the mammal diversity that exists today. The Cambrian explosion is another example, in which vacant niches left by the extinction of the Ediacaran biota during the end-Ediacaran mass extinction were filled by the emergence of new phyla.
References
Further reading
Wilson, E. et al. Life on Earth, by Wilson, E.; Eisner, T.; Briggs, W.; Dickerson, R.; Metzenberg, R.; O'Brien, R.; Susman, M.; Boggs, W. (Sinauer Associates, Inc., Publishers, Stamford, Connecticut), c 1974. Chapters: The Multiplication of Species; Biogeography, pp 824–877. 40 Graphs, w species pictures, also Tables, Photos, etc. Includes Galápagos Islands, Hawaii, and Australia subcontinent, (plus St. Helena Island, etc.).
Leakey, Richard. The Origin of Humankind—on adaptive radiation in biology and human evolution, pp. 28–32, 1994, Orion Publishing.
Grant, P.R. 1999. The ecology and evolution of Darwin's Finches. Princeton University Press, Princeton, NJ.
Mayr, Ernst. 2001. What evolution is. Basic Books, New York, NY.
Gavrilets, S. and A. Vose. 2009. Dynamic patterns of adaptive radiation: evolution of mating preferences. In Butlin, R.K., J. Bridle, and D. Schluter (eds) Speciation and Patterns of Diversity, Cambridge University Press, page. 102–126.
Pinto, Gabriel, Luke Mahler, Luke J. Harmon, and Jonathan B. Losos. "Testing the Island Effect in Adaptive Radiation: Rates and Patterns of Morphological Diversification in Caribbean and Mainland Anolis Lizards." NCBI (2008): n. pag. Web. 28 Oct. 2014.
Schluter, Dolph. The ecology of adaptive radiation. Oxford University Press, 2000.
|
Evolutionary biology terminology;Speciation
|
https://en.wikipedia.org/wiki/Agarose%20gel%20electrophoresis
|
Agarose gel electrophoresis is a method of gel electrophoresis used in biochemistry, molecular biology, genetics, and clinical chemistry to separate a mixed population of macromolecules such as DNA, RNA, or proteins in a matrix of agarose, one of the two main components of agar. The proteins may be separated by charge and/or size (isoelectric focusing agarose electrophoresis is essentially size independent), and the DNA and RNA fragments by length. Biomolecules are separated by applying an electric field that moves the charged molecules through the agarose matrix, where they are sieved by size.
Agarose gel is easy to cast, has relatively fewer charged groups, and is particularly suitable for separating DNA of the size range most often encountered in laboratories, which accounts for the popularity of its use. The separated DNA may be viewed with stain, most commonly under UV light, and the DNA fragments can be extracted from the gel with relative ease. Most agarose gels used are between 0.7% and 2% agarose dissolved in a suitable electrophoresis buffer.
Properties of agarose gel
Agarose gel is a three-dimensional matrix formed of helical agarose molecules in supercoiled bundles that are aggregated into three-dimensional structures with channels and pores through which biomolecules can pass. The 3-D structure is held together with hydrogen bonds and can therefore be disrupted by heating back to a liquid state. The melting temperature is different from the gelling temperature: depending on the source, agarose gel has a gelling temperature of 35–42 °C and a melting temperature of 85–95 °C. Low-melting and low-gelling agaroses made through chemical modifications are also available.
Agarose gel has large pore size and good gel strength, making it suitable as an anticonvection medium for the electrophoresis of DNA and large protein molecules. The pore size of a 1% gel has been estimated from 100 nm to 200–500 nm, and its gel strength allows gels as dilute as 0.15% to form a slab for gel electrophoresis. Low-concentration gels (0.1–0.2%) however are fragile and therefore hard to handle. Agarose gel has lower resolving power than polyacrylamide gel for DNA but has a greater range of separation, and is therefore used for DNA fragments of usually 50–20,000 bp in size. The limit of resolution for standard agarose gel electrophoresis is around 750 kb, but resolution of over 6 Mb is possible with pulsed field gel electrophoresis (PFGE). It can also be used to separate large proteins, and it is the preferred matrix for the gel electrophoresis of particles with effective radii larger than 5–10 nm. A 0.9% agarose gel has pores large enough for the entry of bacteriophage T4.
The agarose polymer contains charged groups, in particular pyruvate and sulfate. These negatively charged groups create a flow of water in the opposite direction to the movement of DNA in a process called electroendosmosis (EEO), and can therefore retard the movement of DNA and cause blurring of bands. Higher concentration gels would have higher electroendosmotic flow. Low EEO agarose is therefore generally preferred for use in agarose gel electrophoresis of nucleic acids, but high EEO agarose may be used for other purposes. The lower sulfate content of low EEO agarose, particularly low-melting point (LMP) agarose, is also beneficial in cases where the DNA extracted from gel is to be used for further manipulation as the presence of contaminating sulfates may affect some subsequent procedures, such as ligation and PCR. Zero EEO agaroses however are undesirable for some applications as they may be made by adding positively charged groups and such groups can affect subsequent enzyme reactions. Electroendosmosis is a reason agarose is used in preference to agar as the agaropectin component in agar contains a significant amount of negatively charged sulfate and carboxyl groups. The removal of agaropectin in agarose substantially reduces the EEO, as well as reducing the non-specific adsorption of biomolecules to the gel matrix. However, for some applications such as the electrophoresis of serum proteins, a high EEO may be desirable, and agaropectin may be added in the gel used.
Migration of nucleic acids in agarose gel
Factors affecting migration of nucleic acid in gel
A number of factors can affect the migration of nucleic acids: the dimension of the gel pores (gel concentration), size of DNA being electrophoresed, the voltage used, the ionic strength of the buffer, and the concentration of intercalating dye such as ethidium bromide if used during electrophoresis.
Smaller molecules travel faster than larger molecules in gel, and double-stranded DNA moves at a rate that is inversely proportional to the logarithm of the number of base pairs. This relationship however breaks down with very large DNA fragments, and separation of very large DNA fragments requires the use of pulsed field gel electrophoresis (PFGE), which applies alternating current from different directions and the large DNA fragments are separated as they reorient themselves with the changing field.
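This semi-log relationship is what makes sizing an unknown fragment against a ladder of known standards possible: fit migration distance against log₁₀(size) for the ladder, then invert the fit. A sketch with invented ladder values, purely for illustration:

```python
import math

# (fragment size in bp, migration distance in cm): illustrative values only
ladder = [(10000, 1.0), (5000, 1.9), (2000, 3.1), (1000, 4.0), (500, 4.9)]

# Least-squares fit of distance = m * log10(bp) + c
xs = [math.log10(bp) for bp, _ in ladder]
ys = [d for _, d in ladder]
n = len(ladder)
x_bar, y_bar = sum(xs) / n, sum(ys) / n
m_num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
m_den = sum((x - x_bar) ** 2 for x in xs)
m = m_num / m_den
c = y_bar - m * x_bar

def estimate_size_bp(distance_cm):
    """Invert the fit: bp = 10 ** ((distance - c) / m)."""
    return 10 ** ((distance_cm - c) / m)

print(round(estimate_size_bp(2.5)))  # estimated size of a band at 2.5 cm
```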
For standard agarose gel electrophoresis, larger molecules are resolved better using a low concentration gel while smaller molecules separate better at high concentration gel. Higher concentration gels, however, require longer run times (sometimes days).
The movement of the DNA may be affected by the conformation of the DNA molecule, for example, supercoiled DNA usually moves faster than relaxed DNA because it is tightly coiled and hence more compact. In a normal plasmid DNA preparation, multiple forms of DNA may be present. Gel electrophoresis of the plasmids would normally show the negatively supercoiled form as the main band, while nicked DNA (open circular form) and the relaxed closed circular form appears as minor bands. The rate at which the various forms move however can change using different electrophoresis conditions, and the mobility of larger circular DNA may be more strongly affected than linear DNA by the pore size of the gel.
Ethidium bromide, which intercalates into circular DNA, can change the charge, length, and superhelicity of the DNA molecule, and therefore its presence in the gel during electrophoresis can affect its movement. For example, the positive charge of ethidium bromide can reduce the DNA movement by 15%. Agarose gel electrophoresis can be used to resolve circular DNA with different supercoiling topology.
DNA damage due to increased cross-linking will also reduce electrophoretic DNA migration in a dose-dependent way.
The rate of migration of the DNA is proportional to the voltage applied, i.e. the higher the voltage, the faster the DNA moves. The resolution of large DNA fragments however is lower at high voltage. The mobility of DNA may also change in an unsteady field – in a field that is periodically reversed, the mobility of DNA of a particular size may drop significantly at a particular cycling frequency. This phenomenon can result in band inversion in field inversion gel electrophoresis (FIGE), whereby larger DNA fragments move faster than smaller ones.
Migration anomalies
"Smiley" gels - this edge effect is caused when the voltage applied is too high for the gel concentration used.
Overloading of DNA - loading too much DNA into a well slows down the migration of the DNA fragments.
Contamination - the presence of impurities, such as salts or proteins, can affect the movement of the DNA.
Mechanism of migration and separation
The negative charge of its phosphate backbone moves the DNA towards the positively charged anode during electrophoresis. However, the migration of DNA molecules in solution, in the absence of a gel matrix, is independent of molecular weight during electrophoresis. The gel matrix is therefore responsible for the separation of DNA by size during electrophoresis, and a number of models exist to explain the mechanism of separation of biomolecules in gel matrix. A widely accepted one is the Ogston model, which treats the polymer matrix as a sieve: a globular protein or a random-coil DNA moves through the interconnected pores, the movement of larger molecules is more likely to be impeded and slowed down by collisions with the gel matrix, and molecules of different sizes can therefore be separated in this sieving process.
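One common quantitative statement of this sieving picture is the Ferguson relationship, in which the logarithm of the mobility falls linearly with gel concentration; a sketch, with notation that is ours rather than the source's:

```latex
% Ferguson relationship for Ogston-type sieving (illustrative):
%   \mu   = electrophoretic mobility in the gel
%   \mu_0 = free-solution mobility (no gel matrix)
%   K_R   = retardation coefficient, which grows with molecular size
%   T     = gel concentration (% w/v)
\log \mu = \log \mu_0 - K_R T
```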
The Ogston model however breaks down for large molecules whereby the pores are significantly smaller than the size of the molecule. For DNA molecules of size greater than 1 kb, a reptation model (or its variants) is most commonly used. This model assumes that the DNA can crawl in a "snake-like" fashion (hence "reptation") through the pores as an elongated molecule. A biased reptation model applies at higher electric field strength, whereby the leading end of the molecule becomes strongly biased in the forward direction and pulls the rest of the molecule along. Real-time fluorescence microscopy of stained molecules, however, has shown more subtle dynamics during electrophoresis, with the DNA showing considerable elasticity as it alternately stretches in the direction of the applied field and then contracts into a ball, or becomes hooked into a U-shape when it gets caught on the polymer fibres.
General procedure
The details of an agarose gel electrophoresis experiment may vary depending on methods, but most follow a general procedure.
Casting of gel
The gel is prepared by dissolving the agarose powder in an appropriate buffer, such as TAE or TBE, to be used in electrophoresis. The agarose is dispersed in the buffer and heated to near boiling point, without allowing it to boil. The melted agarose is allowed to cool sufficiently before pouring the solution into a cast, as the cast may warp or crack if the agarose solution is too hot. A comb is placed in the cast to create wells for loading samples, and the gel should be completely set before use.
The concentration of gel affects the resolution of DNA separation. The agarose gel is composed of microscopic pores through which the molecules travel, and there is an inverse relationship between the pore size of the agarose gel and the concentration – pore size decreases as the density of agarose fibers increases. High gel concentration improves separation of smaller DNA molecules, while lowering gel concentration permits large DNA molecules to be separated. The process allows fragments ranging from 50 base pairs to several megabases to be separated depending on the gel concentration used. The concentration is expressed as weight of agarose per volume of buffer (w/v); a 1% gel, for example, contains 1 g of agarose per 100 ml of buffer. For standard agarose gel electrophoresis, a 0.8% gel gives good separation or resolution of large 5–10 kb DNA fragments, while a 2% gel gives good resolution for small 0.2–1 kb fragments. A 1% gel is often used for standard electrophoresis. High-percentage gels are often brittle and may not set evenly, while low-percentage gels (0.1–0.2%) are fragile and not easy to handle. Low-melting-point (LMP) agarose gels are also more fragile than normal agarose gels. Low-melting-point agarose may be used on its own or together with standard agarose for the separation and isolation of DNA. PFGE and FIGE are often done with high-percentage agarose gels.
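A minimal sketch of the percentage arithmetic for casting a gel; the function name and example volumes are ours, chosen only for illustration:

```python
# A p% (w/v) gel contains p grams of agarose per 100 ml of buffer.
def agarose_mass_g(percent_wv: float, buffer_volume_ml: float) -> float:
    """Grams of agarose to dissolve in the given volume of buffer."""
    return percent_wv / 100.0 * buffer_volume_ml

# A standard 1% gel in 50 ml of buffer needs 0.5 g of agarose;
# a 2% gel for small fragments in the same volume needs 1.0 g.
print(agarose_mass_g(1.0, 50.0))  # 0.5
print(agarose_mass_g(2.0, 50.0))  # 1.0
```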
Loading of samples
Once the gel has set, the comb is removed, leaving wells where DNA samples can be loaded. Loading buffer is mixed with the DNA sample before the mixture is loaded into the wells. The loading buffer contains a dense compound, which may be glycerol, sucrose, or Ficoll, that raises the density of the sample so that the DNA sample sinks to the bottom of the well. If the DNA sample contains residual ethanol after its preparation, it may float out of the well. The loading buffer also includes colored dyes, such as xylene cyanol and bromophenol blue, which are used to monitor the progress of the electrophoresis. The DNA samples are loaded using a pipette.
Electrophoresis
Agarose gel electrophoresis is most commonly done horizontally in a submerged ("subaqueous") mode, whereby the slab gel is completely submerged in buffer during electrophoresis. It is also possible, but less common, to perform the electrophoresis vertically, as well as horizontally with the gel raised on agarose legs using an appropriate apparatus. The buffer used in the gel is the same as the running buffer in the electrophoresis tank, which is why electrophoresis in the submerged mode is possible with agarose gel.
For optimal resolution of DNA greater than 2 kb in size in standard gel electrophoresis, 5 to 8 V/cm is recommended, where the distance in cm refers to the distance between the electrodes; the recommended voltage setting is therefore 5 to 8 multiplied by the inter-electrode distance in cm. Voltage may also be limited by the fact that it heats the gel and may cause the gel to melt if it is run at high voltage for a prolonged period, especially for LMP agarose gels. Too high a voltage may also reduce resolution and cause band streaking for large DNA molecules. Too low a voltage may lead to broadening of bands for small DNA fragments due to dispersion and diffusion.
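The voltage arithmetic just described is simple enough to sketch; the function name and tank dimension below are assumptions for illustration:

```python
# Power-supply setting = target field strength (V/cm) multiplied by the
# distance between the electrodes, per the convention described above.
def applied_voltage(field_v_per_cm: float, electrode_distance_cm: float) -> float:
    """Power-supply voltage needed for a given field strength."""
    return field_v_per_cm * electrode_distance_cm

# For a tank whose electrodes are 20 cm apart, 5-8 V/cm corresponds
# to a setting of 100-160 V.
print(applied_voltage(5, 20))  # 100
print(applied_voltage(8, 20))  # 160
```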
Since DNA is not visible in natural light, the progress of the electrophoresis is monitored using colored dyes. Xylene cyanol (light blue) comigrates with large DNA fragments, while bromophenol blue (dark blue) comigrates with the smaller fragments. Less commonly used dyes include cresol red and Orange G, which migrate ahead of bromophenol blue. A DNA marker is also run alongside the samples for estimation of the molecular weight of the DNA fragments. Note, however, that the size of a circular DNA such as a plasmid cannot be accurately gauged using standard markers unless it has been linearized by restriction digest; alternatively, a supercoiled DNA marker may be used.
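In practice, fragment sizes are read off a standard curve built from the marker lane: over a limited range, migration distance is roughly linear in the logarithm of fragment size, so a least-squares line suffices. A minimal sketch with made-up marker values:

```python
import math

# Hypothetical marker lane as (fragment size in bp, migration distance in cm).
marker = [(500, 7.2), (1000, 5.9), (2000, 4.6), (4000, 3.3), (8000, 2.0)]

# Least-squares fit of: distance = intercept + slope * log10(size)
xs = [math.log10(bp) for bp, _ in marker]
ys = [d for _, d in marker]
n = len(marker)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
intercept = ybar - slope * xbar

def estimate_size_bp(distance_cm: float) -> float:
    """Invert the standard curve to estimate the size of an unknown band."""
    return 10 ** ((distance_cm - intercept) / slope)

print(round(estimate_size_bp(5.2)))  # ~1452 bp for these made-up values
```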
Staining and visualization
DNA as well as RNA are normally visualized by staining with ethidium bromide, which intercalates between the base pairs of the DNA and fluoresces under UV light. The intercalation depends on the concentration of DNA, and thus a band of high intensity indicates a higher amount of DNA compared to a band of lower intensity. The ethidium bromide may be added to the agarose solution before it gels, or the DNA gel may be stained after electrophoresis. Destaining of the gel is not necessary but may produce better images. Other methods of staining are available; examples are MIDORI Green, SYBR Green, GelRed, methylene blue, brilliant cresyl blue, Nile blue sulfate, and crystal violet. SYBR Green, GelRed and other similar commercial products are sold as safer alternatives to ethidium bromide, which has been shown to be mutagenic in the Ames test, although its carcinogenicity has not actually been established. SYBR Green requires the use of a blue-light transilluminator. DNA stained with crystal violet can be viewed under natural light without a UV transilluminator, which is an advantage, but the bands produced may be faint.
When stained with ethidium bromide, the gel is viewed with an ultraviolet (UV) transilluminator. The UV light excites the electrons within the aromatic ring of ethidium bromide, and once they return to the ground state, light is released, making the DNA–ethidium bromide complex fluoresce. Standard transilluminators use wavelengths of 302/312 nm (UV-B); however, exposure of DNA to UV radiation for as little as 45 seconds can damage the DNA and affect subsequent procedures, for example reducing the efficiency of transformation, in vitro transcription, and PCR. Exposure of DNA to UV radiation should therefore be limited. Using a higher wavelength of 365 nm (UV-A range) causes less damage to the DNA but also produces much weaker fluorescence with ethidium bromide. Where multiple wavelengths can be selected in the transilluminator, the shorter wavelength can be used to capture images, while the longer wavelength should be used if it is necessary to work on the gel for any extended period of time.
The transilluminator apparatus may also contain image capture devices, such as a digital or polaroid camera, that allow an image of the gel to be taken or printed.
For gel electrophoresis of protein, the bands may be visualised with Coomassie or silver stains.
Downstream procedures
The separated DNA bands are often used for further procedures, and a DNA band may be cut out of the gel as a slice, dissolved and purified. Contaminants however may affect some downstream procedures such as PCR, and low melting point agarose may be preferred in some cases as it contains fewer of the sulfates that can affect some enzymatic reactions. The gels may also be used for blotting techniques.
Buffers
In general, the ideal buffer should have good conductivity, produce less heat and have a long life. There are a number of buffers used for agarose electrophoresis; common ones for nucleic acids include tris/acetate/EDTA (TAE) and tris/borate/EDTA (TBE). The buffers used contain EDTA to inactivate many nucleases, which require divalent cations for their function. The borate in TBE buffer can be problematic as borate can polymerize and/or interact with cis diols such as those found in RNA. TAE has the lowest buffering capacity but provides the best resolution for larger DNA; this means a lower voltage and a longer run time, but a better product.
Many other buffers have been proposed, e.g. lithium borate (LB), isoelectric histidine, and pKa-matched Good's buffers; in most cases the purported rationale is lower current (less heat) and/or matched ion mobilities, which leads to longer buffer life. Tris-phosphate buffer has a high buffering capacity but cannot be used if the DNA extracted is to be used in a phosphate-sensitive reaction. LB is relatively new and is ineffective in resolving fragments larger than 5 kbp; however, with its low conductivity, a much higher voltage can be used (up to 35 V/cm), which means a shorter analysis time for routine electrophoresis. As low as a one base pair size difference can be resolved in a 3% agarose gel with an extremely low conductivity medium (1 mM lithium borate).
Other buffer systems may be used in specific applications; for example, barbituric acid–sodium barbiturate or tris-barbiturate buffers may be used in agarose gel electrophoresis of proteins, such as in the detection of abnormal distributions of serum proteins.
Applications
Estimation of the size of DNA molecules following digestion with restriction enzymes, e.g., in restriction mapping of cloned DNA.
Estimation of the DNA concentration by comparing the intensity of the nucleic acid band with the corresponding band of the size marker.
Analysis of products of a polymerase chain reaction (PCR), e.g., in molecular genetic diagnosis or genetic fingerprinting
Separation of DNA fragments for extraction and purification.
Separation of restricted genomic DNA prior to Southern transfer, or of RNA prior to Northern transfer.
Separation of proteins, for example, screening of protein abnormalities in clinical chemistry.
Agarose gels are easily cast and handled compared to other matrices and nucleic acids are not chemically altered during electrophoresis. Samples are also easily recovered. After the experiment is finished, the resulting gel can be stored in a plastic bag in a refrigerator.
Electrophoresis is performed in buffer solutions to reduce pH changes due to the electric field, which is important because the charge of DNA and RNA depends on pH, but running for too long can exhaust the buffering capacity of the solution. Further, different preparations of genetic material may not migrate consistently with each other, for morphological or other reasons.
|
Articles containing video clips;Biological techniques and tools;Electrophoresis;Molecular biology;Polymerase chain reaction
|
https://en.wikipedia.org/wiki/Antimicrobial%20resistance
|
Antimicrobial resistance (AMR or AR) occurs when microbes evolve mechanisms that protect them from antimicrobials, which are drugs used to treat infections. This resistance affects all classes of microbes, including bacteria (antibiotic resistance), viruses (antiviral resistance), parasites (antiparasitic resistance), and fungi (antifungal resistance). Together, these adaptations fall under the AMR umbrella, posing significant challenges to healthcare worldwide. Misuse and improper management of antimicrobials are primary drivers of this resistance, though it can also occur naturally through genetic mutations and the spread of resistant genes.
Antibiotic resistance, a significant AMR subset, enables bacteria to survive antibiotic treatment, complicating infection management and treatment options. Resistance arises through spontaneous mutation, horizontal gene transfer, and increased selective pressure from antibiotic overuse, both in medicine and agriculture, which accelerates resistance development.
The burden of AMR is immense, with nearly 5 million annual deaths associated with resistant infections. Infections from AMR microbes are more challenging to treat and often require costly alternative therapies that may have more severe side effects. Preventive measures, such as using narrow-spectrum antibiotics and improving hygiene practices, aim to reduce the spread of resistance. Microbes resistant to multiple drugs are termed multidrug-resistant (MDR) and are sometimes called superbugs.
The World Health Organization (WHO) claims that AMR is one of the top global public health and development threats, estimating that bacterial AMR was directly responsible for 1.27 million global deaths in 2019 and contributed to 4.95 million deaths. Moreover, the WHO and other international bodies warn that AMR could lead to up to 10 million deaths annually by 2050 unless actions are taken. Global initiatives, such as calls for international AMR treaties, emphasize coordinated efforts to limit misuse, fund research, and provide access to necessary antimicrobials in developing nations. However, the COVID-19 pandemic redirected resources and scientific attention away from AMR, intensifying the challenge.
Definition
Antimicrobial resistance means a microorganism's resistance to an antimicrobial drug that was once able to treat an infection by that microorganism. A person cannot become resistant to antibiotics. Resistance is a property of the microbe, not a person or other organism infected by a microbe. All types of microbes can develop drug resistance. Thus, there are antibiotic, antifungal, antiviral and antiparasitic resistance.
Antibiotic resistance is a subset of antimicrobial resistance. This more specific resistance is linked to bacteria and is broken down into two further subsets, microbiological and clinical. Microbiological resistance is the most common and arises from genes, mutated or inherited, that allow the bacteria to resist the killing mechanism of certain antibiotics. Clinical resistance is shown through the failure of many therapeutic techniques, where bacteria that are normally susceptible to a treatment become resistant after surviving it. In both cases of acquired resistance, the bacteria can pass the genetic catalyst for resistance through horizontal gene transfer: conjugation, transduction, or transformation. This allows the resistance to spread across the same species of pathogen or even to similar bacterial pathogens.
Overview
A WHO report released in April 2014 stated, "this serious threat is no longer a prediction for the future, it is happening right now in every region of the world and has the potential to affect anyone, of any age, in any country. Antibiotic resistance—when bacteria change so antibiotics no longer work in people who need them to treat infections—is now a major threat to public health."
Each year, nearly 5 million deaths are associated with AMR globally. Global deaths attributable to AMR numbered 1.27 million in 2019. That same year, AMR may have contributed to 5 million deaths, and one in five people who died due to AMR were children under five years old.
In 2018, WHO considered antibiotic resistance to be one of the biggest threats to global health, food security and development. Deaths attributable to AMR vary by area:
The European Centre for Disease Prevention and Control calculated that in 2015 there were 671,689 infections in the EU and European Economic Area caused by antibiotic-resistant bacteria, resulting in 33,110 deaths. Most were acquired in healthcare settings. In 2019 there were 133,000 deaths caused by AMR.
Causes
AMR is driven largely by the misuse and overuse of antimicrobials. Yet, at the same time, many people around the world do not have access to essential antimicrobials. This leads to microbes either evolving a defense against drugs used to treat them, or to certain strains of microbes that have a natural resistance to antimicrobials becoming much more prevalent than the ones that are easily defeated with medication. While antimicrobial resistance does occur naturally over time, the use of antimicrobial agents in a variety of settings, both within the healthcare industry and outside of it, has led to antimicrobial resistance becoming increasingly prevalent.
Although many microbes develop resistance to antibiotics over time through natural mutation, overprescribing and inappropriate prescription of antibiotics have accelerated the problem. It is possible that as many as 1 in 3 prescriptions written for antibiotics are unnecessary. Every year, approximately 154 million prescriptions for antibiotics are written. Of these, up to 46 million are unnecessary or inappropriate for the condition that the patient has. Microbes may naturally develop resistance through genetic mutations that occur during cell division, and although random mutations are rare, many microbes reproduce frequently and rapidly, increasing the chances of members of the population acquiring a mutation that increases resistance. Many individuals stop taking antibiotics when they begin to feel better. When this occurs, it is possible that the microbes that are less susceptible to treatment still remain in the body. If these microbes are able to continue to reproduce, this can lead to an infection by bacteria that are less susceptible or even resistant to an antibiotic.
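As a quick arithmetic check, the prescription figures above agree with the "1 in 3" estimate:

```latex
\frac{46\ \text{million unnecessary}}{154\ \text{million written}} \approx 0.30 \approx \tfrac{1}{3}
```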
Natural occurrence
AMR is a naturally occurring process. Antimicrobial resistance can evolve naturally due to continued exposure to antimicrobials. Natural selection means that organisms that are able to adapt to their environment survive and continue to produce offspring. As a result, the types of microorganisms that are able to survive over time, under continued attack by certain antimicrobial agents, will naturally become more prevalent in the environment, while those without this resistance will die out.
Some contemporary antimicrobial resistance has also evolved naturally, before the clinical use of antimicrobials in humans. For instance, methicillin resistance evolved in Staphylococcus aureus as a pathogen of hedgehogs, possibly as a co-evolutionary adaptation to hedgehogs that are infected by a dermatophyte that naturally produces antibiotics. Also, many soil fungi and bacteria are natural competitors, and the original antibiotic penicillin discovered by Alexander Fleming rapidly lost clinical effectiveness in treating humans; furthermore, none of the other natural penicillins (F, K, N, X, O, U1 or U6) are currently in clinical use.
Antimicrobial resistance can be acquired from other microbes through swapping genes in a process termed horizontal gene transfer. This means that once a gene for resistance to an antibiotic appears in a microbial community, it can then spread to other microbes in the community, potentially moving from a non-disease causing microbe to a disease-causing microbe. This process is heavily driven by the natural selection processes that happen during antibiotic use or misuse.
Over time, most of the strains of bacteria and infections present will be of the type resistant to the antimicrobial agent being used to treat them, making this agent ineffective against most microbes. With the increased use of antimicrobial agents, this natural process is accelerated.
Self-medication
In the vast majority of countries, antibiotics can only be prescribed by a doctor and supplied by a pharmacy. Self-medication by consumers is defined as "the taking of medicines on one's own initiative or on another person's suggestion, who is not a certified medical professional", and it has been identified as one of the primary reasons for the evolution of antimicrobial resistance. Self-medication with antibiotics is an unsuitable way of using them but a common practice in resource-constrained countries. The practice exposes individuals to the risk of bacteria that have developed antimicrobial resistance. Many people resort to this out of necessity, when access to a physician is unavailable, or when patients have a limited amount of time or money to see a doctor. This increased access makes it extremely easy to obtain antimicrobials. An example is India, where in the state of Punjab 73% of the population resorted to treating their minor health issues and chronic illnesses through self-medication.
Self-medication is higher outside the hospital environment, and this is linked to higher use of antibiotics, with the majority of antibiotics being used in the community rather than hospitals. The prevalence of self-medication in low- and middle-income countries (LMICs) ranges from 8.1% to 93%. Accessibility, affordability, and conditions of health facilities, as well as the health-seeking behavior, are factors that influence self-medication in low- and middle-income countries. Two significant issues with self-medication are the lack of knowledge of the public on, firstly, the dangerous effects of certain antimicrobials (for example ciprofloxacin which can cause tendonitis, tendon rupture and aortic dissection) and, secondly, broad microbial resistance and when to seek medical care if the infection is not clearing. In order to determine the public's knowledge and preconceived notions on antibiotic resistance, a screening of 3,537 articles published in Europe, Asia, and North America was done. Of the 55,225 total people surveyed in the articles, 70% had heard of antibiotic resistance previously, but 88% of those people thought it referred to some type of physical change in the human body.
Clinical misuse
Clinical misuse by healthcare professionals is another contributor to increased antimicrobial resistance. Studies done in the US show that the indication for treatment with antibiotics, the choice of the agent used, and the duration of therapy were incorrect in up to 50% of the cases studied. In 2010 and 2011, about a third of antibiotic prescriptions in outpatient settings in the United States were not necessary. Another study, in an intensive care unit in a major hospital in France, showed that 30% to 60% of prescribed antibiotics were unnecessary. These inappropriate uses of antimicrobial agents promote the evolution of antimicrobial resistance by selecting for bacteria with genetic alterations that confer resistance.
According to research conducted in the US that aimed to evaluate physicians' attitudes and knowledge on antimicrobial resistance in ambulatory settings, only 63% of those surveyed reported antibiotic resistance as a problem in their local practices, while 23% reported the aggressive prescription of antibiotics as necessary to avoid failing to provide adequate care. This demonstrates that many doctors underestimate the impact that their own prescribing habits have on antimicrobial resistance as a whole. It also confirms that some physicians may be overly cautious and prescribe antibiotics for medical or legal reasons, even when clinical indications for use of these medications are not always confirmed. This can lead to unnecessary antimicrobial use, a pattern which may have worsened during the COVID-19 pandemic.
Studies have shown that common misconceptions about the effectiveness and necessity of antibiotics to treat common mild illnesses contribute to their overuse.
The veterinary medical system is also important to the conversation about antibiotic use. Veterinary oversight is required by law for all medically important antibiotics. Veterinarians use the pharmacokinetic/pharmacodynamic (PK/PD) approach to ensure that the correct dose of a drug is delivered to the correct place at the correct time.
Pandemics, disinfectants and healthcare systems
Increased antibiotic use during the early waves of the COVID-19 pandemic may exacerbate this global health challenge. Moreover, pandemic burdens on some healthcare systems may contribute to antibiotic-resistant infections. The use of disinfectants such as alcohol-based hand sanitizers, and antiseptic hand wash may also have the potential to increase antimicrobial resistance. Extensive use of disinfectants can lead to mutations that induce antimicrobial resistance. On the other hand, "increased hand hygiene, decreased international travel, and decreased elective hospital procedures may have reduced AMR pathogen selection and spread in the short term" during the COVID-19 pandemic.
A 2024 United Nations High-Level Meeting on AMR has pledged to reduce deaths associated with bacterial AMR by 10% over the next six years. In their first major declaration on the issue since 2016, global leaders also committed to raising $100 million to update and implement AMR action plans. However, the final draft of the declaration omitted an earlier target to reduce antibiotic use in animals by 30% by 2030, due to opposition from meat-producing countries and the farming industry. Critics argue this omission is a major weakness, as livestock accounts for around 73% of global sales of antimicrobial agents, including antibiotics, antivirals, and antiparasitics.
Environmental pollution
Considering the complex interactions between humans, animals and the environment, it is also important to consider the environmental aspects of and contributors to antimicrobial resistance. Although there are still some knowledge gaps in understanding the mechanisms and transmission pathways, environmental pollution is considered a significant contributor to antimicrobial resistance. Important contributors include antibiotic residues, industrial effluents, agricultural runoff, heavy metals, biocides and pesticides, and sewage and wastewater, which create reservoirs of resistance genes and resistant bacteria and facilitate their transfer to human pathogens. Unused or expired antibiotics, if not disposed of properly, can enter water systems and soil. Discharge from pharmaceutical manufacturing and other industrial companies can also introduce antibiotics and other chemicals into the environment. These factors create selective pressure for resistant bacteria. Antibiotics used in livestock and aquaculture can contaminate soil and water, which promotes resistance in environmental microbes. Heavy metals such as zinc, copper and mercury, as well as biocides and pesticides, can co-select for antibiotic resistance, accelerating its spread. Inadequate treatment of sewage and wastewater allows resistant bacteria and genes to spread through water systems.
Food production
Livestock
The antimicrobial resistance crisis also extends to the food industry, specifically to food-producing animals. With an ever-increasing human population, there is constant pressure to intensify productivity in many agricultural sectors, including the production of meat as a source of protein. Antibiotics are fed to livestock to act as growth supplements and as a preventive measure to decrease the likelihood of infections.
Farmers typically use antibiotics in animal feed to improve growth rates and prevent infections, even though antibiotics are intended to treat infections rather than prevent them. 80% of antibiotic use in the U.S. is for agricultural purposes, and about 70% of these antibiotics are medically important. Overusing antibiotics gives bacteria time to adapt, so that higher doses or even stronger antibiotics are needed to combat the infection. Though antibiotics for growth promotion were banned throughout the EU in 2006, 40 countries worldwide still use antibiotics to promote growth.
This can result in the transfer of resistant bacterial strains into the food that humans eat, causing potentially fatal transfer of disease. While the practice of using antibiotics as growth promoters does result in better yields and meat products, it is a major issue and needs to be decreased in order to prevent antimicrobial resistance. Though the evidence linking antimicrobial usage in livestock to antimicrobial resistance is limited, the World Health Organization Advisory Group on Integrated Surveillance of Antimicrobial Resistance strongly recommended the reduction of use of medically important antimicrobials in livestock. Additionally, the Advisory Group stated that such antimicrobials should be expressly prohibited for both growth promotion and disease prevention in food producing animals.
By mapping antimicrobial consumption in livestock globally, it was predicted that in 228 countries there would be a total 67% increase in consumption of antibiotics by livestock by 2030. In some countries such as Brazil, Russia, India, China, and South Africa it is predicted that a 99% increase will occur. Several countries have restricted the use of antibiotics in livestock, including Canada, China, Japan, and the US. These restrictions are sometimes associated with a reduction of the prevalence of antimicrobial resistance in humans.
In the United States, the Veterinary Feed Directive went into effect in 2017, requiring that all medically important antibiotics to be used in feed or water for food animal species have a veterinary feed directive (VFD) or a prescription.
Pesticides
Most pesticides protect crops against insects and weeds, but in some cases antimicrobial pesticides are used to protect against various microorganisms such as bacteria, viruses, fungi, algae, and protozoa. The overuse of many pesticides in an effort to obtain higher crop yields has resulted in many of these microbes evolving a tolerance to these antimicrobial agents. Currently there are over 4,000 antimicrobial pesticides registered with the US Environmental Protection Agency (EPA) and sold to market, showing the widespread use of these agents. It is estimated that for every single meal a person consumes, 0.3 g of pesticides is used, as 90% of all pesticide use is in agriculture. A majority of these products are used to help defend against the spread of infectious diseases and thereby protect public health. However, it is also estimated that less than 0.1% of the antimicrobial agents applied actually reach their targets, leaving over 99% of all pesticides used available to contaminate other resources. In soil, air, and water these antimicrobial agents are able to spread, coming into contact with more microorganisms and leading these microbes to evolve mechanisms to tolerate and further resist pesticides. The use of antifungal azole pesticides that drive environmental azole resistance has been linked to azole-resistance cases in the clinical setting. The same issues confront the novel antifungal classes (e.g. orotomides), which are again being used in both the clinic and agriculture.
Wild birds
Wildlife, including wild and migratory birds, serves as a reservoir for zoonotic disease and antimicrobial-resistant organisms. Birds are a key link in the transmission of zoonotic diseases to human populations. By the same token, increased contact between wild birds and human populations (including domesticated animals) has increased the amount of antimicrobial resistance (AMR) in bird populations. The introduction of AMR to wild birds positively correlates with human pollution and increased human contact. Additionally, wild birds can participate in horizontal gene transfer with bacteria, leading to the transmission of antibiotic-resistance genes (ARGs).
For simplicity, wild bird populations can be divided into two major categories, wild sedentary birds and wild migrating birds. Wild sedentary bird exposure to AMR is through increased contact with densely populated areas, human waste, domestic animals, and domestic animal/livestock waste. Wild migrating birds interact with sedentary birds in different environments along their migration route. This increases the rate and diversity of AMR across varying ecosystems.
Neglect of wildlife in the global discussions surrounding health security and AMR creates large barriers to true AMR surveillance. The surveillance of antimicrobial-resistant organisms in wild birds is a potential metric for the rate of AMR in the environment. This surveillance also allows for further investigation into the transmission routes between different ecosystems and human populations (including domesticated animals and livestock). Such information gathered from wild bird biomes can help identify patterns of disease transmission and better target interventions. These targeted interventions can inform the use of antimicrobial agents and reduce the persistence of multidrug-resistant organisms.
Gene transfer from ancient microorganisms
Permafrost is a term used to refer to any ground that has remained frozen for two years or more, with the oldest known examples continuously frozen for around 700,000 years. In recent decades, permafrost has been rapidly thawing due to climate change. The cold preserves any organic matter inside the permafrost, and it is possible for microorganisms to resume their life functions once it thaws. While some common pathogens such as influenza, smallpox or the bacteria associated with pneumonia have failed to survive intentional attempts to revive them, more cold-adapted microorganisms such as anthrax, or several ancient plant and amoeba viruses, have successfully survived prolonged thaw.
Some scientists have argued that the inability of known causative agents of contagious diseases to survive being frozen and thawed makes this threat unlikely. Instead, there have been suggestions that when modern pathogenic bacteria interact with the ancient ones, they may, through horizontal gene transfer, pick up genetic sequences which are associated with antimicrobial resistance, exacerbating an already difficult issue. Antibiotics to which permafrost bacteria have displayed at least some resistance include chloramphenicol, streptomycin, kanamycin, gentamicin, tetracycline, spectinomycin and neomycin. However, other studies show that resistance levels in ancient bacteria to modern antibiotics remain lower than in the contemporary bacteria from the active layer of thawed ground above them, which may mean that this risk is "no greater" than from any other soil.
Prevention
There have been increasing public calls for global collective action to address the threat, including a proposal for an international treaty on antimicrobial resistance. Further detail and attention is still needed in order to recognize and measure trends in resistance on the international level; the idea of a global tracking system has been suggested but implementation has yet to occur. A system of this nature would provide insight to areas of high resistance as well as information necessary for evaluating programs, introducing interventions and other changes made to fight or reverse antibiotic resistance.
Duration of antimicrobials
Delaying or minimizing the use of antibiotics for certain conditions may help safely reduce their use. Antimicrobial treatment duration should be based on the infection and other health problems a person may have. For many infections once a person has improved there is little evidence that stopping treatment causes more resistance. Some, therefore, feel that stopping early may be reasonable in some cases. Other infections, however, do require long courses regardless of whether a person feels better.
Delaying antibiotics for ailments such as a sore throat and otitis media, for example, may result in no difference in the rate of complications compared with immediate antibiotics. When treating respiratory tract infections, clinical judgement is required as to the appropriate treatment (delayed or immediate antibiotic use).
The study, "Shorter and Longer Antibiotic Durations for Respiratory Infections: To Fight Antimicrobial Resistance—A Retrospective Cross-Sectional Study in a Secondary Care Setting in the UK," highlights the urgency of reevaluating antibiotic treatment durations amidst the global challenge of antimicrobial resistance (AMR). It investigates the effectiveness of shorter versus longer antibiotic regimens for respiratory tract infections (RTIs) in a UK secondary care setting, emphasizing the need for evidence-based prescribing practices to optimize patient outcomes and combat AMR.
Monitoring and mapping
There are multiple national and international monitoring programs for drug-resistant threats, including methicillin-resistant Staphylococcus aureus (MRSA), vancomycin-resistant S. aureus (VRSA), extended spectrum beta-lactamase (ESBL) producing Enterobacterales, vancomycin-resistant Enterococcus (VRE), and multidrug-resistant Acinetobacter baumannii (MRAB).
ResistanceOpen is an online global map of antimicrobial resistance developed by HealthMap which displays aggregated data on antimicrobial resistance from publicly available and user submitted data. The website can display data for a radius from a location. Users may submit data from antibiograms for individual hospitals or laboratories. European data is from the EARS-Net (European Antimicrobial Resistance Surveillance Network), part of the ECDC. ResistanceMap is a website by the Center for Disease Dynamics, Economics & Policy and provides data on antimicrobial resistance on a global level.
The WHO's AMR global action plan also recommends antimicrobial resistance surveillance in animals. Initial steps in the EU for establishing the veterinary counterpart EARS-Vet (EARS-Net for veterinary medicine) have been made. AMR data from pets in particular is scarce, but needed to support antibiotic stewardship in veterinary medicine.
By comparison there is a lack of national and international monitoring programs for antifungal resistance.
Limiting antimicrobial use in humans
Antimicrobial stewardship programmes appear useful in reducing rates of antimicrobial resistance. Such programmes also provide pharmacists with the knowledge to educate patients, for example that antibiotics will not work against a virus.
Excessive antimicrobial use has become one of the top contributors to the evolution of antimicrobial resistance. Since the beginning of the antimicrobial era, antimicrobials have been used to treat a wide range of infectious diseases, and overuse of antimicrobials has become the primary cause of rising levels of antimicrobial resistance. The main problem is that doctors are willing to prescribe antimicrobials to ill-informed individuals who believe that antimicrobials can cure nearly all illnesses, including viral infections like the common cold. In an analysis of drug prescriptions, 36% of individuals with a cold or an upper respiratory infection (both usually viral in origin) were given prescriptions for antibiotics. These prescriptions accomplished nothing other than increasing the risk of further evolution of antibiotic-resistant bacteria. Using antimicrobials without prescription is another driving force leading to the overuse of antibiotics to self-treat diseases like the common cold, cough, fever, and dysentery, resulting in an epidemic of antibiotic resistance in countries like Bangladesh and risking its spread around the globe. Introducing strict antibiotic stewardship in the outpatient setting to reduce inappropriate prescribing of antibiotics may reduce emerging bacterial resistance.
The WHO AWaRe (Access, Watch, Reserve) guidance and antibiotic book has been introduced to guide antibiotic choice for the 30 most common infections in adults and children to reduce inappropriate prescribing in primary care and hospitals. Narrow-spectrum antibiotics are preferred due to their lower resistance potential, and broad-spectrum antibiotics are only recommended for people with more severe symptoms. Some antibiotics are more likely to confer resistance, so are kept as reserve antibiotics in the AWaRe book.
Various diagnostic strategies have been employed to prevent the overuse of antifungal therapy in the clinic, proving to be a safe alternative to empirical antifungal therapy and thus underpinning antifungal stewardship schemes.
At the hospital level
Antimicrobial stewardship teams in hospitals encourage optimal use of antimicrobials. The goals of antimicrobial stewardship are to help practitioners pick the right drug at the right dose and duration of therapy while preventing misuse and minimizing the development of resistance. Stewardship interventions may reduce the length of stay by an average of slightly over 1 day while not increasing the risk of death. Dispensing to discharged in-house patients the exact number of antibiotic units necessary to complete an ongoing treatment can reduce antibiotic leftovers within the community, since standard antibiotic packages from community pharmacies may contain more units than a course requires.
At the primary care level
Given the volume of care provided in primary care (general practice), recent strategies have focused on reducing unnecessary antimicrobial prescribing in this setting. Simple interventions, such as written information explaining when taking antibiotics is not necessary, for example in common infections of the upper respiratory tract, have been shown to reduce antibiotic prescribing. Various tools are also available to help professionals decide if prescribing antimicrobials is necessary.
Parental expectations, driven by worry for their children's health, can influence how often children are prescribed antibiotics. Parents often rely on their clinician for advice and reassurance. However, a lack of plain-language information and inadequate time for consultation negatively impact this relationship; in effect, parents often base their expectations on past experience rather than on reassurance from the clinician. Adequate time for consultation and plain-language information can help parents make informed decisions and avoid unnecessary antibiotic use.
Parents play a critical role in reducing unnecessary antibiotic use, particularly during cold and flu season when children frequently experience respiratory illnesses. Many of these illnesses are caused by viruses, such as colds or the flu, which antibiotics cannot treat. Misusing antibiotics in these situations not only fails to benefit the child but also contributes to the emergence of antibiotic-resistant bacteria, posing a broader public health threat. To address parental concerns and reduce inappropriate prescribing, healthcare providers can offer plain-language explanations about the difference between bacterial and viral infections, alongside clear guidance on managing viral illnesses without antibiotics. Vaccinations also play a vital role in reducing the incidence of serious bacterial infections that may require antibiotic treatment, thereby helping to preserve the effectiveness of existing antibiotics. Schools further amplify the spread of infections due to close contact and shared surfaces, underscoring the importance of hygiene practices like regular handwashing, covering coughs, and staying home when unwell. These preventive measures not only reduce the need for antibiotics but also lower the overall risk of resistant bacteria spreading within communities.
The prescriber should closely adhere to the five rights of drug administration: the right patient, the right drug, the right dose, the right route, and the right time. Microbiological samples should be taken for culture and sensitivity testing before treatment when indicated, and treatment potentially changed based on the susceptibility report. Health workers and pharmacists can help tackle antibiotic resistance by enhancing infection prevention and control; only prescribing and dispensing antibiotics when they are truly needed; and prescribing and dispensing the right antibiotic(s) to treat the illness. A unit-dose system implemented in community pharmacies can also reduce antibiotic leftovers in households. Despite these measures, written guideline interventions prompting prescribers to take a history and provide advice, and the knowledge of pharmacists and non-pharmacists, may not reduce the sales of non-prescription antimicrobial drugs in community pharmacies, drugstores, and other medicine outlets.
At the individual level
People can help tackle resistance by using antibiotics only when they have a bacterial infection and the antibiotics are prescribed by a doctor; by completing the full prescription even when feeling better; and by never sharing antibiotics with others or using leftover prescriptions. Taking antibiotics when not needed will not help the user, but instead gives bacteria the opportunity to adapt and leaves the user with the side effects that accompany the particular antibiotic. The CDC recommends following these behaviors to avoid such negative effects and to keep the community safe from the spread of drug-resistant bacteria. Practicing basic infection prevention, such as good hygiene, also helps to prevent the spread of antibiotic-resistant bacteria.
Country examples
The Netherlands has the lowest rate of antibiotic prescribing in the OECD, at a rate of 11.4 defined daily doses (DDD) per 1,000 people per day in 2011. The defined daily dose (DDD) is a statistical measure of drug consumption, defined by the World Health Organization (WHO); a worked example of the calculation is sketched after these country figures.
Germany and Sweden also have lower prescribing rates, with Sweden's rate having been declining since 2007.
Greece, France and Belgium have high prescribing rates for antibiotics of more than 28 DDD.
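As a sketch of how the DDD metric above is computed: consumption is reported as DDDs per 1,000 inhabitants per day. Every number below is a hypothetical assumption chosen only to show the arithmetic, not data from the source; the WHO assigns the actual DDD value for each drug.

```python
# Hypothetical inputs for the DDD-per-1,000-inhabitants-per-day calculation.
total_grams = 45_000_000   # grams of the drug dispensed in a year (hypothetical)
ddd_grams = 1.5            # WHO-defined daily dose for this drug, in grams (assumed)
population = 10_000_000    # population covered (hypothetical)
days = 365

# (number of DDDs consumed) spread over person-days, scaled to 1,000 people
ddd_per_1000_per_day = (total_grams / ddd_grams) / (population * days) * 1000
print(f"{ddd_per_1000_per_day:.1f} DDD per 1,000 inhabitants per day")  # 8.2
```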
Water, sanitation, hygiene
Infectious disease control through improved water, sanitation and hygiene (WASH) infrastructure needs to be included in the antimicrobial resistance (AMR) agenda. The "Interagency Coordination Group on Antimicrobial Resistance" stated in 2018 that "the spread of pathogens through unsafe water results in a high burden of gastrointestinal disease, increasing even further the need for antibiotic treatment." This is particularly a problem in developing countries where the spread of infectious diseases caused by inadequate WASH standards is a major driver of antibiotic demand. Growing usage of antibiotics together with persistent infectious disease levels have led to a dangerous cycle in which reliance on antimicrobials increases while the efficacy of drugs diminishes. The proper use of infrastructure for water, sanitation and hygiene (WASH) can result in a 47–72 percent decrease of diarrhea cases treated with antibiotics depending on the type of intervention and its effectiveness. A reduction of the diarrhea disease burden through improved infrastructure would result in large decreases in the number of diarrhea cases treated with antibiotics. This was estimated as ranging from 5 million in Brazil to up to 590 million in India by the year 2030. The strong link between increased consumption and resistance indicates that this will directly mitigate the accelerating spread of AMR. Sanitation and water for all by 2030 is Goal Number 6 of the Sustainable Development Goals.
An increase in hand washing compliance by hospital staff results in decreased rates of resistant organisms.
Water supply and sanitation infrastructure in health facilities offer significant co-benefits for combatting AMR, and investment should be increased. There is much room for improvement: WHO and UNICEF estimated in 2015 that globally 38% of health facilities did not have a source of water, nearly 19% had no toilets and 35% had no water and soap or alcohol-based hand rub for handwashing.
Industrial wastewater treatment
Manufacturers of antimicrobials need to improve the treatment of their wastewater (by using industrial wastewater treatment processes) to reduce the release of residues into the environment.
Limiting antimicrobial use in animals and farming
It is established that the use of antibiotics in animal husbandry can give rise to resistance, among bacteria found in food animals, to the antibiotics being administered (through injections or medicated feeds). For this reason, only antimicrobials that are deemed "not clinically relevant" are used in these practices.
Unlike resistance to antibacterials, antifungal resistance can be driven by arable farming; currently there is no regulation on the use of similar antifungal classes in agriculture and in the clinic.
Recent studies have shown that the prophylactic use of "non-priority" or "non-clinically relevant" antimicrobials in feeds can potentially, under certain conditions, lead to co-selection of environmental AMR bacteria with resistance to medically important antibiotics. The possibility for co-selection of AMR resistances in the food chain pipeline may have far-reaching implications for human health.
Country examples
Europe
In 1997, European Union health ministers voted to ban avoparcin, and in 1999 they banned four additional antibiotics used to promote animal growth. In 2006 a ban on the use of antibiotics in European feed, with the exception of two antibiotics in poultry feeds, became effective. In Scandinavia, there is evidence that the ban has led to a lower prevalence of antibiotic resistance in (nonhazardous) animal bacterial populations. As of 2004, several European countries had documented a decline of antimicrobial resistance in humans through limiting the use of antimicrobials in agriculture and food industries, without jeopardizing animal health or incurring economic cost.
United States
The United States Department of Agriculture (USDA) and the Food and Drug Administration (FDA) collect data on antibiotic use in humans and in a more limited fashion in animals. About 80% of antibiotic use in the U.S. is for agriculture purposes, and about 70% of these are medically important. This gives reason for concern about the antibiotic resistance crisis in the U.S. and more reason to monitor it. The FDA first determined in 1977 that there is evidence of emergence of antibiotic-resistant bacterial strains in livestock. The long-established practice of permitting OTC sales of antibiotics (including penicillin and other drugs) to lay animal owners for administration to their own animals nonetheless continued in all states.
In 2000, the FDA announced their intention to revoke approval of fluoroquinolone use in poultry production because of substantial evidence linking it to the emergence of fluoroquinolone-resistant Campylobacter infections in humans. Legal challenges from the food animal and pharmaceutical industries delayed the final decision to do so until 2006. Fluoroquinolones have been banned from extra-label use in food animals in the USA since 2007. However, they remain widely used in companion and exotic animals.
Global action plans and awareness
At the 79th United Nations General Assembly High-Level Meeting on AMR on 26 September 2024, world leaders approved a political declaration committing to a clear set of targets and actions, including reducing the estimated 4.95 million human deaths associated with bacterial AMR annually by 10% by 2030.
The increasing interconnectedness of the world and the fact that new classes of antibiotics have not been developed and approved for more than 25 years highlight the extent to which antimicrobial resistance is a global health challenge. A global action plan to tackle the growing problem of resistance to antibiotics and other antimicrobial medicines was endorsed at the Sixty-eighth World Health Assembly in May 2015. One of the key objectives of the plan is to improve awareness and understanding of antimicrobial resistance through effective communication, education and training. This global action plan developed by the World Health Organization was created to combat the issue of antimicrobial resistance and was guided by the advice of countries and key stakeholders. The WHO's global action plan is composed of five key objectives that can be targeted through different means, and represents countries coming together to solve a major problem that can have future health consequences. These objectives are as follows:
improve awareness and understanding of antimicrobial resistance through effective communication, education and training.
strengthen the knowledge and evidence base through surveillance and research.
reduce the incidence of infection through effective sanitation, hygiene and infection prevention measures.
optimize the use of antimicrobial medicines in human and animal health.
develop the economic case for sustainable investment that takes account of the needs of all countries and to increase investment in new medicines, diagnostic tools, vaccines and other interventions.
Steps towards progress
React based in Sweden has produced informative material on AMR for the general public.
Videos are being produced for the general public to generate interest and awareness.
The Irish Department of Health published a National Action Plan on Antimicrobial Resistance in October 2017. The Strategy for the Control of Antimicrobial Resistance in Ireland (SARI), launched in 2001, developed Guidelines for Antimicrobial Stewardship in Hospitals in Ireland in conjunction with the Health Protection Surveillance Centre; these were published in 2009. Following their publication, a public information campaign 'Action on Antibiotics' was launched to highlight the need for a change in antibiotic prescribing. Despite this, antibiotic prescribing remains high, with variance in adherence to guidelines.
The United Kingdom published a 20-year vision for antimicrobial resistance that sets out the goal of containing and controlling AMR by 2040. The vision is supplemented by a 5-year action plan running from 2019 to 2024, building on the previous action plan (2013–2018).
The World Health Organization has published the 2024 Bacterial Priority Pathogens List, which covers 15 families of antibiotic-resistant bacterial pathogens. Notable among these are gram-negative bacteria resistant to last-resort antibiotics, drug-resistant Mycobacterium tuberculosis, and other high-burden resistant pathogens such as Salmonella, Shigella, Neisseria gonorrhoeae, Pseudomonas aeruginosa, and Staphylococcus aureus. The inclusion of these pathogens in the list underscores their global impact in terms of burden, as well as issues related to transmissibility, treatability, and prevention options. It also reflects the R&D pipeline of new treatments and emerging resistance trends.
Antibiotic Awareness Week
The World Health Organization has promoted the first World Antibiotic Awareness Week running from 16 to 22 November 2015. The aim of the week is to increase global awareness of antibiotic resistance. It also wants to promote the correct usage of antibiotics across all fields in order to prevent further instances of antibiotic resistance.
World Antibiotic Awareness Week has been held every November since 2015. For 2017, the Food and Agriculture Organization of the United Nations (FAO), the World Health Organization (WHO) and the World Organisation for Animal Health (OIE) jointly called for responsible use of antibiotics in humans and animals to reduce the emergence of antibiotic resistance.
United Nations
In 2016 the Secretary-General of the United Nations convened the Interagency Coordination Group (IACG) on Antimicrobial Resistance. The IACG worked with international organizations and experts in human, animal, and plant health to create a plan to fight antimicrobial resistance. Their report released in April 2019 highlights the seriousness of antimicrobial resistance and the threat it poses to world health. It suggests five recommendations for member states to follow in order to tackle this increasing threat. The IACG recommendations are as follows:
Accelerate progress in countries
Innovate to secure the future
Collaborate for more effective action
Invest for a sustainable response
Strengthen accountability and global governance
One Health Approach
The One Health approach recognizes that human, animal, and environmental health are interconnected in the development and spread of antimicrobial resistance (AMR). Key strategies include:
Integrated Surveillance
Monitoring antibiotic use and resistance trends across human medicine, agriculture, and environmental sectors.
For example, 73% of the world's antibiotics are used in livestock, often for non-therapeutic purposes like growth promotion.
Policy Interventions
Banning non-therapeutic antibiotics in agriculture (e.g., European Union's 2006 growth promoter ban).
Incentivizing development of new antibiotics and alternatives (e.g., vaccines, bacteriophages).
Environmental Mitigation
Reducing pharmaceutical waste in water systems and soil through improved waste management.
Addressing resistance genes in wastewater from hospitals, farms, and drug manufacturing sites.
Mechanisms and organisms
Bacteria
The five main mechanisms by which bacteria exhibit resistance to antibiotics are:
Drug inactivation or modification: for example, enzymatic deactivation of penicillin G in some penicillin-resistant bacteria through the production of β-lactamases. Drugs may also be chemically modified through the addition of functional groups by transferase enzymes; for example, acetylation, phosphorylation, or adenylation are common resistance mechanisms to aminoglycosides. Acetylation is the most widely used mechanism and can affect a number of drug classes.
Alteration of target or binding site: for example, alteration of PBP, the binding target site of penicillins, in MRSA and other penicillin-resistant bacteria. Another protective mechanism found among bacterial species is ribosomal protection proteins. These proteins protect the bacterial cell from antibiotics that target the cell's ribosomes to inhibit protein synthesis. The mechanism involves the binding of the ribosomal protection proteins to the ribosomes of the bacterial cell, changing their conformation. This allows the ribosomes to continue synthesizing proteins essential to the cell while preventing antibiotics from binding to them and inhibiting protein synthesis.
Alteration of metabolic pathway: for example, some sulfonamide-resistant bacteria do not require para-aminobenzoic acid (PABA), an important precursor for the synthesis of folic acid and nucleic acids in bacteria inhibited by sulfonamides; instead, like mammalian cells, they turn to using preformed folic acid.
Reduced drug accumulation: by decreasing drug permeability or increasing active efflux (pumping out) of the drugs across the cell surface. These multidrug efflux pumps within the cellular membrane of certain bacterial species are used to pump antibiotics out of the cell before they are able to do any damage. They are often activated by a specific substrate associated with an antibiotic, as in fluoroquinolone resistance.
Ribosome splitting and recycling: for example, drug-mediated stalling of the ribosome by lincomycin and erythromycin can be reversed by a heat shock protein found in Listeria monocytogenes, a homologue of HflX from other bacteria. Liberation of the ribosome from the drug allows further translation and consequent resistance to the drug.
Several types of pathogens have developed resistance over time.
The six pathogens causing most deaths associated with resistance are Escherichia coli, Staphylococcus aureus, Klebsiella pneumoniae, Streptococcus pneumoniae, Acinetobacter baumannii, and Pseudomonas aeruginosa. They were responsible for 929,000 deaths attributable to resistance and 3.57 million deaths associated with resistance in 2019.
Penicillinase-producing Neisseria gonorrhoeae, resistant to penicillin, emerged in 1976; azithromycin-resistant Neisseria gonorrhoeae followed in 2011.
In gram-negative bacteria, plasmid-mediated resistance genes produce proteins that can bind to DNA gyrase, protecting it from the action of quinolones. Finally, mutations at key sites in DNA gyrase or topoisomerase IV can decrease their binding affinity to quinolones, decreasing the drug's effectiveness.
Some bacteria are naturally resistant to certain antibiotics; for example, gram-negative bacteria are resistant to most β-lactam antibiotics due to the presence of β-lactamase. Antibiotic resistance can also be acquired as a result of either genetic mutation or horizontal gene transfer. Although mutations are rare, with spontaneous mutations in the pathogen genome occurring at a rate of about 1 in 10⁵ to 1 in 10⁸ per chromosomal replication, the fact that bacteria reproduce at a high rate allows the effect to be significant. Given that lifespans and production of new generations can be on a timescale of mere hours, a new (de novo) mutation in a parent cell can quickly become an inherited mutation of widespread prevalence, resulting in the microevolution of a fully resistant colony. However, chromosomal mutations also confer a fitness cost. For example, a ribosomal mutation may protect a bacterial cell by changing the binding site of an antibiotic but may result in a slower growth rate. Moreover, some adaptive mutations can propagate not only through inheritance but also through horizontal gene transfer. The most common mechanism of horizontal gene transfer is the transfer of plasmids carrying antibiotic resistance genes between bacteria of the same or different species via conjugation. However, bacteria can also acquire resistance through transformation, as in the uptake by Streptococcus pneumoniae of naked fragments of extracellular DNA that contain streptomycin resistance genes; through transduction, as in the bacteriophage-mediated transfer of tetracycline resistance genes between strains of S. pyogenes; or through gene transfer agents, which are particles produced by the host cell that resemble bacteriophage structures and are capable of transferring DNA.
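The interaction between rare mutations and large, fast-growing populations can be made concrete with a back-of-the-envelope calculation. The following sketch estimates the expected number of de novo resistance-mutation events in a culture; the parameter values are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope estimate of de novo resistance mutants arising in a
# growing culture. All parameter values are illustrative assumptions.
import math

mutation_rate = 1e-8       # resistance mutations per chromosomal replication
final_population = 1e9     # cells in roughly 1 mL of dense culture

# Growing from one cell to N cells takes about N - 1 replications, so the
# expected number of independent resistance-mutation events is:
expected_mutants = mutation_rate * (final_population - 1)

# Poisson approximation for the chance that at least one resistant cell arises:
p_at_least_one = 1 - math.exp(-expected_mutants)

print(f"Expected mutation events: {expected_mutants:.1f}")
print(f"P(at least one resistant cell): {p_at_least_one:.4f}")
```

Even with the low mutation rate assumed here, a single millilitre of dense culture is all but certain to contain at least one resistant cell, which is why antibiotic exposure can select for resistance so quickly.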
Antibiotic resistance can be introduced artificially into a microorganism through laboratory protocols, sometimes used as a selectable marker to examine the mechanisms of gene transfer or to identify individuals that absorbed a piece of DNA that included the resistance gene and another gene of interest.
Recent findings show no necessity of large populations of bacteria for the appearance of antibiotic resistance. Small populations of Escherichia coli in an antibiotic gradient can become resistant. Any heterogeneous environment with respect to nutrient and antibiotic gradients may facilitate antibiotic resistance in small bacterial populations. Researchers hypothesize that the mechanism of resistance evolution is based on four SNP mutations in the genome of E. coli produced by the gradient of antibiotic.
In one study, which has implications for space microbiology, a non-pathogenic strain, E. coli MG1655, was exposed to trace levels of the broad-spectrum antibiotic chloramphenicol under simulated microgravity (LSMMG, or Low Shear Modeled Microgravity) for over 1,000 generations. The adapted strain acquired resistance not only to chloramphenicol but also cross-resistance to other antibiotics. In contrast, the same strain adapted for over 1,000 generations under LSMMG without antibiotic exposure did not acquire any such resistance. Thus, irrespective of where it is used, the use of an antibiotic would likely result in persistent resistance to that antibiotic, as well as cross-resistance to other antimicrobials.
In recent years, the emergence and spread of β-lactamases called carbapenemases has become a major health crisis. One such carbapenemase is New Delhi metallo-beta-lactamase 1 (NDM-1), an enzyme that makes bacteria resistant to a broad range of beta-lactam antibiotics. The most common bacteria that make this enzyme are gram-negative such as E. coli and Klebsiella pneumoniae, but the gene for NDM-1 can spread from one strain of bacteria to another by horizontal gene transfer.
Viruses
Specific antiviral drugs are used to treat some viral infections. These drugs prevent viruses from reproducing by inhibiting essential stages of the virus's replication cycle in infected cells. Antivirals are used to treat HIV, hepatitis B, hepatitis C, influenza, herpes viruses including varicella zoster virus, cytomegalovirus and Epstein–Barr virus. With each virus, some strains have become resistant to the administered drugs.
Antiviral drugs typically target key components of viral reproduction; for example, oseltamivir targets influenza neuraminidase, while guanosine analogs inhibit viral DNA polymerase. Resistance to antivirals is thus acquired through mutations in the genes that encode the protein targets of the drugs.
Resistance to HIV antivirals is problematic, and even multi-drug resistant strains have evolved. One source of resistance is that many current HIV drugs, including NRTIs and NNRTIs, target reverse transcriptase; however, HIV-1 reverse transcriptase is highly error prone and thus mutations conferring resistance arise rapidly. Resistant strains of the HIV virus emerge rapidly if only one antiviral drug is used. Using three or more drugs together, termed combination therapy, has helped to control this problem, but new drugs are needed because of the continuing emergence of drug-resistant HIV strains.
Fungi
Infections by fungi are a cause of high morbidity and mortality in immunocompromised persons, such as those with HIV/AIDS, tuberculosis or receiving chemotherapy. The fungi Candida, Cryptococcus neoformans and Aspergillus fumigatus cause most of these infections and antifungal resistance occurs in all of them. Multidrug resistance in fungi is increasing because of the widespread use of antifungal drugs to treat infections in immunocompromised individuals and the use of some agricultural antifungals. Antifungal resistant disease is associated with increased mortality.
Some fungi (e.g. Candida krusei, which is intrinsically resistant to fluconazole) exhibit intrinsic resistance to certain antifungal drugs or classes, whereas other species develop antifungal resistance under external pressures. Antifungal resistance is a One Health concern, driven by multiple extrinsic factors, including extensive fungicide use, overuse of clinical antifungals, environmental change and host factors.
In the United States, fluconazole-resistant Candida species and azole resistance in Aspergillus fumigatus have been highlighted as a growing threat.
More than 20 species of Candida can cause candidiasis infection, the most common of which is Candida albicans. Candida yeasts normally inhabit the skin and mucous membranes without causing infection. However, overgrowth of Candida can lead to candidiasis. Some Candida species (e.g. Candida glabrata) are becoming resistant to first-line and second-line antifungal agents such as echinocandins and azoles.
The emergence of Candida auris as a potential human pathogen that sometimes exhibits multi-class antifungal drug resistance is concerning and has been associated with several outbreaks globally. The WHO has released a priority fungal pathogen list, including pathogens with antifungal resistance.
The identification of antifungal resistance is undermined by the limitations of classical diagnosis of infection: where a culture is lacking, susceptibility testing cannot be performed. National and international surveillance schemes for fungal disease and antifungal resistance are limited, hampering understanding of the disease burden and associated resistance. The application of molecular testing to identify genetic markers associated with resistance may improve the identification of antifungal resistance, but the diversity of resistance-associated mutations is increasing across the fungal species causing infection. In addition, a number of resistance mechanisms depend on the up-regulation of selected genes (for instance, efflux pumps) rather than defined mutations that are amenable to molecular detection.
Due to the limited number of antifungals in clinical use and the increasing global incidence of antifungal resistance, using the existing antifungals in combination might be beneficial in some cases but further research is needed. Similarly, other approaches that might help to combat the emergence of antifungal resistance could rely on the development of host-directed therapies such as immunotherapy or vaccines.
Parasites
The protozoan parasites that cause the diseases malaria, trypanosomiasis, toxoplasmosis, cryptosporidiosis and leishmaniasis are important human pathogens.
Malarial parasites that are resistant to currently available drugs are common, and this has led to increased efforts to develop new drugs. Resistance to recently developed drugs such as artemisinin has also been reported. The problem of drug resistance in malaria has driven efforts to develop vaccines.
Trypanosomes are parasitic protozoa that cause African trypanosomiasis and Chagas disease (American trypanosomiasis). There are no vaccines to prevent these infections so drugs such as pentamidine and suramin, benznidazole and nifurtimox are used to treat infections. These drugs are effective but infections caused by resistant parasites have been reported.
Leishmaniasis is caused by protozoa and is an important public health problem worldwide, especially in sub-tropical and tropical countries. Drug resistance has "become a major concern".
Global and genomic data
In 2022, genomic epidemiologists reported results from a global survey of antimicrobial resistance via genomic wastewater-based epidemiology, finding large regional variations, providing maps, and suggesting that resistance genes are also passed between microbial species that are not closely related. The WHO provides the Global Antimicrobial Resistance and Use Surveillance System (GLASS) reports, which summarize annual data (e.g. for 2020) on international AMR and include an interactive dashboard.
Epidemiology
United Kingdom
Public Health England reported that the total number of antibiotic-resistant infections in England rose by 9% from 55,812 in 2017 to 60,788 in 2018, but antibiotic consumption had fallen by 9% from 20.0 to 18.2 defined daily doses per 1,000 inhabitants per day between 2014 and 2018.
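Figures like these are standardised as defined daily doses (DDD) per 1,000 inhabitants per day, a rate that can be computed directly from aggregate dispensing data. A minimal sketch of the calculation, using hypothetical input values:

```python
# Convert aggregate antibiotic consumption into the standard surveillance
# metric: defined daily doses (DDD) per 1,000 inhabitants per day.
# Input values below are hypothetical, for illustration only.

total_ddds_dispensed = 373_000_000   # DDDs dispensed nationally in one year
population = 56_000_000              # inhabitants
days = 365

ddd_rate = total_ddds_dispensed / (population * days) * 1000
print(f"{ddd_rate:.1f} DDD per 1,000 inhabitants per day")  # -> 18.2
```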
United States
The Centers for Disease Control and Prevention has reported that more than 2.8 million antibiotic-resistant infections occur in the United States each year. However, in 2019 overall deaths from antibiotic-resistant infections decreased by 18%, and deaths in hospitals decreased by 30%.
The COVID-19 pandemic reversed much of the progress made on attenuating the effects of antibiotic resistance, resulting in more antibiotic use, more resistant infections, and less data on preventive action. Hospital-onset infections and deaths both increased by 15% in 2020, and significantly higher rates of infection were reported for four of six types of healthcare-associated infections.
History
The 1950s to 1970s represented the golden age of antibiotic discovery, when numerous new classes of antibiotics were discovered to treat previously incurable diseases such as tuberculosis and syphilis. Since then, however, the discovery of new classes of antibiotics has been almost nonexistent, a situation that is especially problematic given the resilience bacteria have shown over time and the continued misuse and overuse of antibiotics in treatment.
As early as 1940, in a letter to the editor of Nature, Abraham and Chain identified the enzyme penicillinase as responsible for the deactivation of penicillin in penicillin-resistant bacteria. This discovery was the first step in understanding the mechanisms of microbial resistance to β-lactam antibiotics. The phenomenon of antimicrobial resistance caused by overuse of antibiotics was predicted as early as 1945 by Alexander Fleming, who said: "The time may come when penicillin can be bought by anyone in the shops. Then there is the danger that the ignorant man may easily under-dose himself and by exposing his microbes to nonlethal quantities of the drug make them resistant." Without new and stronger antibiotics, an era in which common infections and minor injuries can kill, and in which complex procedures such as surgery and chemotherapy become too risky, is a very real possibility. Antimicrobial resistance can lead to epidemics of enormous proportions if preventive actions are not taken. Already today, antimicrobial resistance leads to longer hospital stays, higher medical costs, and increased mortality.
Society and culture
Innovation policy
Since the mid-1980s pharmaceutical companies have invested in medications for cancer or chronic disease that have greater potential to make money and have "de-emphasized or dropped development of antibiotics". On 20 January 2016 at the World Economic Forum in Davos, Switzerland, more than "80 pharmaceutical and diagnostic companies" from around the world called for "transformational commercial models" at a global level to spur research and development on antibiotics and on the "enhanced use of diagnostic tests that can rapidly identify the infecting organism". A number of countries are considering or implementing delinked payment models for new antimicrobials whereby payment is based on value rather than volume of drug sales. This offers the opportunity to pay for valuable new drugs even if they are reserved for use in relatively rare drug resistant infections.
Legal frameworks
Some global health scholars have argued that a global, legal framework is needed to prevent and control antimicrobial resistance. For instance, binding global policies could be used to create antimicrobial use standards, regulate antibiotic marketing, and strengthen global surveillance systems. Ensuring compliance of involved parties is a challenge. Global antimicrobial resistance policies could take lessons from the environmental sector by adopting strategies that have made international environmental agreements successful in the past such as: sanctions for non-compliance, assistance for implementation, majority vote decision-making rules, an independent scientific panel, and specific commitments.
United States
For the United States 2016 budget, U.S. President Barack Obama proposed to nearly double the amount of federal funding to "combat and prevent" antibiotic resistance, to more than $1.2 billion. Many international funding agencies, such as USAID, DFID, SIDA and the Gates Foundation, have pledged money for developing strategies to counter antimicrobial resistance.
On 27 March 2015, the White House released a comprehensive plan to address the increasing need for agencies to combat the rise of antibiotic-resistant bacteria. The Task Force for Combating Antibiotic-Resistant Bacteria developed the National Action Plan for Combating Antibiotic-Resistant Bacteria, with the intent of providing a roadmap to guide the US in the antibiotic resistance challenge and with hopes of saving many lives. The plan outlines steps to be taken by the federal government over the following five years to prevent and contain outbreaks of antibiotic-resistant infections; maintain the efficacy of antibiotics already on the market; and help to develop future diagnostics, antibiotics, and vaccines.
The Action Plan was developed around five goals, with focuses on strengthening health care, public health, veterinary medicine, agriculture, food safety, and research and manufacturing. These goals, as listed by the White House, are as follows:
Slow the Emergence of Resistant Bacteria and Prevent the Spread of Resistant Infections
Strengthen National One-Health Surveillance Efforts to Combat Resistance
Advance Development and use of Rapid and Innovative Diagnostic Tests for Identification and Characterization of Resistant Bacteria
Accelerate Basic and Applied Research and Development for New Antibiotics, Other Therapeutics, and Vaccines
Improve International Collaboration and Capacities for Antibiotic Resistance Prevention, Surveillance, Control and Antibiotic Research and Development
The following goals were set to be met by 2020:
Establishment of antimicrobial stewardship programs within acute care hospital settings
Reduction of inappropriate antibiotic prescription and use by at least 50% in outpatient settings and 20% in inpatient settings
Establishment of State Antibiotic Resistance (AR) Prevention Programs in all 50 states
Elimination of the use of medically important antibiotics for growth promotion in food-producing animals.
Current Status of AMR in the U.S.
As of 2023, antimicrobial resistance (AMR) remains a significant public health threat in the United States. According to the Centers for Disease Control and Prevention's 2023 Report on Antibiotic Resistance Threats, over 2.8 million antibiotic-resistant infections occur in the U.S. each year, leading to at least 35,000 deaths annually. Among the most concerning resistant pathogens are Carbapenem-resistant Enterobacteriaceae (CRE), Methicillin-resistant Staphylococcus aureus (MRSA), and Clostridioides difficile (C. diff), all of which continue to be responsible for severe healthcare-associated infections (HAIs).
The COVID-19 pandemic led to a significant disruption in healthcare, with an increase in the use of antibiotics during the treatment of viral infections. This rise in antibiotic prescribing, coupled with overwhelmed healthcare systems, contributed to a resurgence in AMR during the pandemic years. A 2021 CDC report identified a sharp increase in HAIs caused by resistant pathogens in COVID-19 patients, a trend that has persisted into 2023. Recent data suggest that although antibiotic use has decreased since the pandemic, some resistant pathogens remain prevalent in healthcare settings.
The CDC has also expanded its Get Ahead of Sepsis campaign in 2023, focusing on raising awareness of AMR's role in sepsis and promoting the judicious use of antibiotics in both healthcare and community settings. This initiative has reached millions through social media, healthcare facilities, and public health outreach, aiming to educate the public on the importance of preventing infections and reducing antibiotic misuse.
Policies
According to the World Health Organization, policymakers can help tackle resistance by strengthening resistance-tracking and laboratory capacity and by regulating and promoting the appropriate use of medicines. Policymakers and industry can also help by fostering innovation and the research and development of new tools, and by promoting cooperation and information sharing among all stakeholders.
The U.S. government continues to prioritize AMR mitigation through policy and legislation. In 2023, the National Action Plan for Combating Antibiotic-Resistant Bacteria (CARB) 2023–2028 was released, outlining strategic objectives for reducing antibiotic-resistant infections, advancing infection prevention, and accelerating research on new antibiotics. The plan also emphasizes the importance of improving antibiotic stewardship across healthcare, agriculture, and veterinary settings.
Furthermore, the PASTEUR Act (Pioneering Antimicrobial Subscriptions to End Upsurging Resistance) has gained momentum in Congress. If passed, the bill would create a subscription-based payment model to incentivize the development of new antimicrobial drugs, while supporting antimicrobial stewardship programs to reduce the misuse of existing antibiotics. This legislation is considered a critical step toward addressing the economic barriers to developing new antimicrobials.
Policy evaluation
Measuring the costs and benefits of strategies to combat AMR is difficult and policies may only have effects in the distant future. In other infectious diseases this problem has been addressed by using mathematical models. More research is needed to understand how AMR develops and spreads so that mathematical modelling can be used to anticipate the likely effects of different policies.
Further research
Rapid testing and diagnostics
Distinguishing infections requiring antibiotics from self-limiting ones is clinically challenging. In order to guide appropriate use of antibiotics and prevent the evolution and spread of antimicrobial resistance, diagnostic tests that provide clinicians with timely, actionable results are needed.
Acute febrile illness is a common reason for seeking medical care worldwide and a major cause of morbidity and mortality. In areas with decreasing malaria incidence, many febrile patients are inappropriately treated for malaria, and in the absence of a simple diagnostic test to identify alternative causes of fever, clinicians presume that a non-malarial febrile illness is most likely a bacterial infection, leading to inappropriate use of antibiotics. Multiple studies have shown that the use of malaria rapid diagnostic tests without reliable tools to distinguish other fever causes has resulted in increased antibiotic use.
Antimicrobial susceptibility testing (AST) can facilitate a precision medicine approach to treatment by helping clinicians to prescribe more effective and targeted antimicrobial therapy. However, with traditional phenotypic AST it can take 12 to 48 hours to obtain a result, owing to the time taken for organisms to grow on or in culture media. Rapid testing, made possible by innovations in molecular diagnostics, is defined as "being feasible within an 8-h working shift". Several commercial Food and Drug Administration-approved assays are available that can detect AMR genes from a variety of specimen types. Progress has been slow for a range of reasons, including cost and regulation. Genotypic AMR characterisation methods are, however, increasingly used in combination with machine learning algorithms in research to help better predict phenotypic AMR from organism genotype.
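As a rough illustration of that genotype-to-phenotype idea, the sketch below trains a simple classifier on binary gene presence/absence profiles. The gene names, data, labels, and model choice are fabricated for demonstration and do not represent any published pipeline:

```python
# Toy illustration: predict phenotypic resistance from resistance-gene
# presence/absence. Data and gene names are fabricated for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

genes = ["blaCTX-M", "mecA", "vanA", "aac(6')-Ib"]  # example AMR gene markers

# Rows: isolates; columns: gene detected (1) or absent (0).
X = np.array([
    [1, 0, 0, 1],
    [0, 0, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
])
# Phenotypic AST result for one drug: resistant (1) or susceptible (0).
y = np.array([1, 0, 1, 1, 1, 0])

model = LogisticRegression().fit(X, y)

new_isolate = np.array([[1, 0, 0, 0]])  # only blaCTX-M detected
print("P(resistant):", model.predict_proba(new_isolate)[0, 1])
```

Research systems use far richer genomic features and much larger training sets, but the structure of the problem, mapping detected genetic markers to a predicted susceptibility phenotype, is the same.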
Optical techniques such as phase contrast microscopy in combination with single-cell analysis are another powerful method of monitoring bacterial growth. In 2017, scientists from Uppsala University in Sweden published a method that applies principles of microfluidics and cell tracking to monitor bacterial response to antibiotics in less than 30 minutes of overall manipulation time. This invention was awarded the £8 million Longitude Prize on AMR in 2024. More recently, this platform has been advanced by coupling the microfluidic chip with optical tweezing in order to isolate bacteria with an altered phenotype directly from the analytical matrix.
Rapid diagnostic methods have also been trialled as antimicrobial stewardship interventions to influence the healthcare drivers of AMR. Serum procalcitonin measurement has been shown to reduce mortality rate, antimicrobial consumption and antimicrobial-related side-effects in patients with respiratory infections, but its impact on AMR has not yet been demonstrated. Similarly, point-of-care serum testing of the inflammatory biomarker C-reactive protein has been shown to influence antimicrobial prescribing rates in this patient cohort, but further research is required to demonstrate an effect on rates of AMR. Clinical investigations to rule out bacterial infection are often performed for children with acute respiratory infections; currently it is unclear whether rapid viral testing affects antibiotic use in children.
Vaccines
Vaccines are an essential part of the response to reduce AMR as they prevent infections, reduce the use and overuse of antimicrobials, and slow the emergence and spread of drug-resistant pathogens.
Microorganisms usually do not develop resistance to vaccines because vaccines reduce the spread of the infection and target the pathogen in multiple ways in the same host, and possibly in different ways between different hosts. Furthermore, if the use of vaccines increases, there is evidence that antibiotic-resistant strains of pathogens will decrease; the need for antibiotics will naturally decrease as vaccines prevent infection before it occurs. A 2024 report by the WHO finds that vaccines against 24 pathogens could reduce the number of antibiotics needed by 22%, or 2.5 billion defined daily doses, globally every year. If vaccines could be rolled out against all the evaluated pathogens, they could save a third of the hospital costs associated with AMR. Vaccinated people have fewer infections and are protected against potential complications from secondary infections that may need antimicrobial medicines or require admission to hospital. However, there are well-documented cases of vaccine resistance, although these are usually much less of a problem than antimicrobial resistance.
While theoretically promising, antistaphylococcal vaccines have shown limited efficacy, because of immunological variation between Staphylococcus species, and the limited duration of effectiveness of the antibodies produced. Development and testing of more effective vaccines is underway.
Two registrational trials have evaluated vaccine candidates in active immunization strategies against S. aureus infection. In a phase II trial, a bivalent vaccine of capsular proteins 5 and 8 was tested in 1,804 hemodialysis patients with a primary fistula or synthetic graft vascular access. A protective effect against S. aureus bacteremia was seen 40 weeks after vaccination, but not at 54 weeks. Based on these results, a second trial was conducted, which failed to show efficacy.
Merck tested V710, a vaccine targeting IsdB, in a blinded randomized trial in patients undergoing median sternotomy. The trial was terminated after a higher rate of multiorgan system failure–related deaths was found in the V710 recipients. Vaccine recipients who developed S. aureus infection were five times more likely to die than control recipients who developed S. aureus infection.
Numerous investigators have suggested that a multiple-antigen vaccine would be more effective, but the lack of biomarkers defining human protective immunity keeps these proposals logical but strictly hypothetical.
Antibody therapy
Antibody therapy is a promising approach against antimicrobial resistance. Monoclonal antibodies (mAbs) target bacterial virulence factors, aiding in bacterial destruction through various mechanisms. Three FDA-approved antibodies target B. anthracis and C. difficile toxins. Innovative strategies include DSTA4637S, an antibody-antibiotic conjugate, and MEDI3902, a bispecific antibody targeting Pseudomonas aeruginosa components.
Alternating therapy
Alternating therapy is a proposed method in which two or three antibiotics are taken in rotation, rather than taking just one antibiotic, so that bacteria resistant to one antibiotic are killed when the next antibiotic is taken. Studies have found that this method reduces the rate at which antibiotic-resistant bacteria emerge in vitro, relative to taking a single drug for the entire duration.
Studies have found that bacteria that evolve antibiotic resistance towards one group of antibiotics may become more sensitive to others. This phenomenon can be used to select against resistant bacteria using an approach termed collateral sensitivity cycling, which has recently been found to be relevant in developing treatment strategies for chronic infections caused by Pseudomonas aeruginosa. Despite its promise, large-scale clinical and experimental studies have revealed only limited evidence for the effectiveness of antibiotic cycling across various pathogens.
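The intuition behind such cycling strategies can be illustrated with a toy model of two subpopulations with opposite (collateral) sensitivities; all rates below are invented for illustration and are not calibrated to any real pathogen or drug pair:

```python
# Toy model of alternating two antibiotics against a mixed population.
# Subpopulation A is resistant to drug 1 but collaterally sensitive to
# drug 2, and vice versa for subpopulation B. Rates are illustrative only.

pop = {"A": 1e6, "B": 1e6}
growth = 1.5   # per-cycle growth factor when a subpopulation is not killed
kill = 0.01    # surviving fraction under an effective drug

for cycle in range(6):
    drug = 1 if cycle % 2 == 0 else 2
    if drug == 1:          # drug 1 kills B, A grows
        pop["A"] *= growth
        pop["B"] *= kill
    else:                  # drug 2 kills A, B grows
        pop["A"] *= kill
        pop["B"] *= growth
    print(f"cycle {cycle + 1} (drug {drug}): A={pop['A']:.0f}, B={pop['B']:.0f}")
```

In this idealised setting, neither subpopulation can grow back faster than the alternating drug kills it, so both decline over successive cycles; real infections are far messier, which is consistent with the mixed clinical evidence noted above.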
Development of new drugs
Since the discovery of antibiotics, research and development (R&D) efforts have provided new drugs in time to treat bacteria that became resistant to older antibiotics, but in the 2000s there has been concern that development has slowed enough that seriously ill people may run out of treatment options. Another concern is that practitioners may become reluctant to perform routine surgeries because of the increased risk of harmful infection. Backup treatments can have serious side-effects; for example, antibiotics like aminoglycosides (such as amikacin, gentamicin, kanamycin, streptomycin, etc.) used for the treatment of drug-resistant tuberculosis and cystic fibrosis can cause respiratory disorders, deafness and kidney failure.
The potential crisis at hand is the result of a marked decrease in industry research and development. Poor financial investment in antibiotic research has exacerbated the situation. The pharmaceutical industry has little incentive to invest in antibiotics because of the high risk and because the potential financial returns are less likely to cover the cost of development than for other pharmaceuticals. In 2011, Pfizer, one of the last major pharmaceutical companies developing new antibiotics, shut down its primary research effort, citing poor shareholder returns relative to drugs for chronic illnesses. However, small and medium-sized pharmaceutical companies are still active in antibiotic drug research. In particular, apart from classical synthetic chemistry methodologies, researchers have developed a combinatorial synthetic biology platform at the single-cell level, in a high-throughput screening manner, to diversify novel lanthipeptides.
In the years since 2010, there has been a significant change in the way new antimicrobial agents are discovered and developed, principally via the formation of public-private funding initiatives. These include CARB-X, which focuses on nonclinical and early-phase development of novel antibiotics, vaccines and rapid diagnostics; Novel Gram Negative Antibiotic (GNA-NOW), which is part of the EU's Innovative Medicines Initiative; and the Replenishing and Enabling the Pipeline for Anti-infective Resistance Impact Fund (REPAIR). Later-stage clinical development is supported by the AMR Action Fund, which in turn is supported by multiple investors with the aim of developing 2–4 new antimicrobial agents by 2030. The delivery of these trials is facilitated by national and international networks supported by the Clinical Research Network of the National Institute for Health and Care Research (NIHR), the European Clinical Research Alliance in Infectious Diseases (ECRAID) and the recently formed ADVANCE-ID, a clinical research network based in Asia. The Global Antibiotic Research and Development Partnership (GARDP) is generating new evidence on global AMR threats such as neonatal sepsis, the treatment of serious bacterial infections and sexually transmitted infections, as well as addressing global access to new and strategically important antibacterial drugs.
The discovery and development of new antimicrobial agents has been facilitated by regulatory advances, which have been principally led by the European Medicines Agency (EMA) and the Food and Drug Administration (FDA). These processes are increasingly aligned although important differences remain and drug developers must prepare separate documents. New development pathways have been developed to help with the approval of new antimicrobial agents that address unmet needs such as the Limited Population Pathway for Antibacterial and Antifungal Drugs (LPAD). These new pathways are required because of difficulties in conducting large definitive phase III clinical trials in a timely way.
Some of the economic impediments to the development of new antimicrobial agents have been addressed by innovative reimbursement schemes that delink payment for antimicrobials from volume-based sales. In the UK, a market entry reward scheme has been pioneered by the National Institute for Health and Care Excellence (NICE), whereby an annual subscription fee is paid for use of strategically valuable antimicrobial agents; cefiderocol and ceftazidime-avibactam are the first agents to be used in this manner, and the scheme is a potential blueprint for comparable programs in other countries.
The available classes of antifungal drugs are still limited but as of 2021 novel classes of antifungals are being developed and are undergoing various stages of clinical trials to assess performance.
Scientists have started using advanced computational approaches with supercomputers for the development of new antibiotic derivatives to deal with antimicrobial resistance.
Biomaterials
Using antibiotic-free alternatives in bone infection treatment may help decrease the use of antibiotics and thus antimicrobial resistance. The bone regeneration material bioactive glass S53P4 has been shown to effectively inhibit the growth of up to 50 clinically relevant bacteria, including MRSA and MRSE.
Nanomaterials
In recent decades, copper and silver nanomaterials have demonstrated appealing features for the development of a new family of antimicrobial agents. Nanoparticles (1–100 nm) show unique properties and promise as antimicrobial agents against resistant bacteria. Silver (AgNPs) and gold (AuNPs) nanoparticles are extensively studied; they disrupt bacterial cell membranes and interfere with protein synthesis. Zinc oxide (ZnO NPs), copper (CuNPs), and silica (SiNPs) nanoparticles also exhibit antimicrobial properties. However, high synthesis costs, potential toxicity, and instability pose challenges. To overcome these, biological synthesis methods and combination therapies with other antimicrobials are being explored. Enhanced biocompatibility and targeting are also under investigation to improve efficacy.
Rediscovery of ancient treatments
Similar to the situation in malaria therapy, where successful treatments based on ancient recipes have been found, there has already been some success in finding and testing ancient drugs and other treatments that are effective against AMR bacteria.
Computational community surveillance
One of the key tools identified by the WHO and others for the fight against rising antimicrobial resistance is improved surveillance of the spread and movement of AMR genes through different communities and regions. Advances in high-throughput DNA sequencing, an outgrowth of the Human Genome Project, have made it possible to determine the individual microbial genes in a sample. Along with the availability of databases of known antimicrobial resistance genes, such as the Comprehensive Antibiotic Resistance Database (CARD) and ResFinder, this allows the identification of all the antimicrobial resistance genes within the sample, the so-called "resistome". In doing so, a profile of these genes within a community or environment can be determined, providing insights into how antimicrobial resistance is spreading through a population and allowing for the identification of resistance that is of concern.
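At its core, resistome profiling is a lookup of the genes detected in a sample against a reference catalogue of known resistance genes. The following sketch shows that matching step only; the file name, format, and gene identifiers are hypothetical, and real workflows use dedicated tools built around databases such as CARD or ResFinder:

```python
# Minimal resistome lookup: intersect genes detected in a sample with a
# reference catalogue of known AMR genes. File name, two-column TSV format,
# and gene identifiers are hypothetical, for illustration only.
import csv

def load_catalogue(path):
    """Map gene identifier -> drug class from a two-column TSV file."""
    catalogue = {}
    with open(path, newline="") as fh:
        for gene, drug_class in csv.reader(fh, delimiter="\t"):
            catalogue[gene] = drug_class
    return catalogue

def profile_resistome(detected_genes, catalogue):
    """Return the subset of detected genes with a known resistance annotation."""
    return {g: catalogue[g] for g in detected_genes if g in catalogue}

catalogue = load_catalogue("amr_catalogue.tsv")
detected = {"blaNDM-1", "tetM", "gyrA", "rpoB"}  # genes called from sequencing
print(profile_resistome(detected, catalogue))
```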
Phage therapy
Phage therapy is the therapeutic use of bacteriophages to treat pathogenic bacterial infections. Phage therapy has many potential applications in human medicine as well as dentistry, veterinary science, and agriculture.
Phage therapy relies on the use of naturally occurring bacteriophages to infect and lyse bacteria at the site of infection in a host. Due to current advances in genetics and biotechnology, these bacteriophages can potentially be engineered to treat specific infections. Phages can be bioengineered to target multidrug-resistant bacterial infections, and their use has the added benefit of sparing the beneficial bacteria in the human body. Phages destroy bacterial cell walls and membranes through the use of lytic proteins, which kill bacteria by making many holes from the inside out. Bacteriophages can even digest the biofilm that many bacteria develop to protect themselves from antibiotics, allowing the phages to infect and kill bacteria effectively. Bioengineering can play a role in creating successful bacteriophages.
Understanding the mutual interactions and evolutions of bacterial and phage populations in the environment of a human or animal body is essential for rational phage therapy.
Bacteriophages are used against antibiotic-resistant bacteria in Georgia (at the George Eliava Institute) and at one institute in Wrocław, Poland. Bacteriophage cocktails are common drugs sold over the counter in pharmacies in some eastern countries. In Belgium, four patients with severe musculoskeletal infections received bacteriophage therapy with concomitant antibiotics. After a single course of phage therapy, no recurrence of infection occurred and no severe side-effects related to the therapy were detected.
|
;Evolutionary biology;Global issues;Health disasters;Pharmaceuticals policy;Veterinary medicine
|
https://en.wikipedia.org/wiki/Antigen
|
In immunology, an antigen (Ag) is a molecule, moiety, foreign particulate matter, or an allergen, such as pollen, that can bind to a specific antibody or T-cell receptor. The presence of antigens in the body may trigger an immune response.
Antigens can be proteins, peptides (amino acid chains), polysaccharides (chains of simple sugars), lipids, or nucleic acids. Antigens exist on normal cells, cancer cells, parasites, viruses, fungi, and bacteria.
Antigens are recognized by antigen receptors, including antibodies and T-cell receptors. Diverse antigen receptors are made by cells of the immune system so that each cell has a specificity for a single antigen. Upon exposure to an antigen, only the lymphocytes that recognize that antigen are activated and expanded, a process known as clonal selection. In most cases, antibodies are antigen-specific, meaning that an antibody can only react to and bind one specific antigen; in some instances, however, antibodies may cross-react to bind more than one antigen. The reaction between an antigen and an antibody is called the antigen-antibody reaction.
Antigens can originate either from within the body ("self-protein" or "self antigens") or from the external environment ("non-self"). The immune system identifies and attacks "non-self" external antigens. Antibodies usually do not react with self-antigens due to negative selection of T cells in the thymus and B cells in the bone marrow. The diseases in which antibodies react with self antigens and damage the body's own cells are called autoimmune diseases.
Vaccines are examples of antigens in an immunogenic form, which are intentionally administered to a recipient to induce the memory function of the adaptive immune system towards antigens of the pathogen invading that recipient. The vaccine for seasonal influenza is a common example.
Etymology
Paul Ehrlich coined the term antibody in his side-chain theory at the end of the 19th century. In 1899, Ladislas Deutsch (László Detre) named the hypothetical substances halfway between bacterial constituents and antibodies "antigenic or immunogenic substances". He originally believed those substances to be precursors of antibodies, just as a zymogen is a precursor of an enzyme. But by 1903 he understood that an antigen induces the production of immune bodies (antibodies) and wrote that the word antigen is a contraction of antisomatogen. The Oxford English Dictionary indicates that the logical construction should be "anti(body)-gen".
The term originally referred to a substance that acts as an antibody generator.
Terminology
Epitope – the distinct surface features of an antigen, its antigenic determinant. Antigenic molecules, normally "large" biological polymers, usually present surface features that can act as points of interaction for specific antibodies. Any such feature constitutes an epitope. Most antigens have the potential to be bound by multiple antibodies, each of which is specific to one of the antigen's epitopes. Using the "lock and key" metaphor, the antigen can be seen as a string of keys (epitopes), each of which matches a different lock (antibody). Different antibody idiotypes each have distinctly formed complementarity-determining regions.
Allergen – A substance capable of causing an allergic reaction. The (detrimental) reaction may result after exposure via ingestion, inhalation, injection, or contact with skin.
Superantigen – A class of antigens that cause non-specific activation of T-cells, resulting in polyclonal T-cell activation and massive cytokine release.
Tolerogen – A substance that invokes a specific immune non-responsiveness due to its molecular form. If its molecular form is changed, a tolerogen can become an immunogen.
Immunoglobulin-binding protein – Proteins such as protein A, protein G, and protein L that are capable of binding to antibodies at positions outside of the antigen-binding site. While antigens are the "target" of antibodies, immunoglobulin-binding proteins "attack" antibodies.
T-dependent antigen – Antigens that require the assistance of T cells to induce the formation of specific antibodies.
T-independent antigen – Antigens that stimulate B cells directly.
Immunodominant antigens – Antigens that dominate (over all others from a pathogen) in their ability to produce an immune response. T cell responses typically are directed against a relatively few immunodominant epitopes, although in some cases (e.g., infection with the malaria pathogen Plasmodium spp.) it is dispersed over a relatively large number of parasite antigens.
Antigen-presenting cells present antigens in the form of peptides on histocompatibility molecules. The T cells selectively recognize the antigens; depending on the antigen and the type of the histocompatibility molecule, different types of T cells will be activated. For T-cell receptor (TCR) recognition, the peptide must be processed into small fragments inside the cell and presented by a major histocompatibility complex (MHC). The antigen cannot elicit the immune response without the help of an immunologic adjuvant. Similarly, the adjuvant component of vaccines plays an essential role in the activation of the innate immune system.
An immunogen is an antigen substance (or adduct) that is able to trigger a humoral (antibody-mediated) or cell-mediated immune response. It first initiates an innate immune response, which then causes the activation of the adaptive immune response. An antigen binds the highly variable immunoreceptor products (B-cell receptor or T-cell receptor) once these have been generated. Immunogens are those antigens, termed immunogenic, that are capable of inducing an immune response.
At the molecular level, an antigen can be characterized by its ability to bind to an antibody's paratopes. Different antibodies have the potential to discriminate among specific epitopes present on the antigen surface. A hapten is a small molecule that can only induce an immune response when attached to a larger carrier molecule, such as a protein. Antigens can be proteins, polysaccharides, lipids, nucleic acids or other biomolecules. This includes parts (coats, capsules, cell walls, flagella, fimbriae, and toxins) of bacteria, viruses, and other microorganisms. Non-microbial non-self antigens can include pollen, egg white, and proteins from transplanted tissues and organs or on the surface of transfused blood cells.
Sources
Antigens can be classified according to their source.
Exogenous antigens
Exogenous antigens are antigens that have entered the body from the outside, for example, by inhalation, ingestion or injection. The immune system's response to exogenous antigens is often subclinical. By endocytosis or phagocytosis, exogenous antigens are taken into the antigen-presenting cells (APCs) and processed into fragments. APCs then present the fragments to T helper cells (CD4+) by the use of class II histocompatibility molecules on their surface. Some T cells are specific for the peptide:MHC complex. They become activated and start to secrete cytokines, substances that activate cytotoxic T lymphocytes (CTL), antibody-secreting B cells, macrophages and other cells.
Some antigens start out as exogenous and later become endogenous (for example, intracellular viruses). Intracellular antigens can be returned to circulation upon the destruction of the infected cell.
Endogenous antigens
Endogenous antigens are generated within normal cells as a result of normal cell metabolism, or because of viral or intracellular bacterial infection. The fragments are then presented on the cell surface in the complex with MHC class I molecules. If activated cytotoxic CD8+ T cells recognize them, the T cells secrete various toxins that cause the lysis or apoptosis of the infected cell. In order to keep the cytotoxic cells from killing cells just for presenting self-proteins, the cytotoxic cells (self-reactive T cells) are deleted as a result of tolerance (negative selection). Endogenous antigens include xenogenic (heterologous), autologous and idiotypic or allogenic (homologous) antigens. Sometimes antigens are part of the host itself in an autoimmune disease.
Autoantigens
An autoantigen is usually a self-protein or protein complex (and sometimes DNA or RNA) that is recognized by the immune system of patients with a specific autoimmune disease. Under normal conditions, these self-proteins should not be the target of the immune system, but in autoimmune diseases, their associated T cells are not deleted and instead attack.
Neoantigens
Neoantigens are those that are entirely absent from the normal human genome. As compared with nonmutated self-proteins, neoantigens are of relevance to tumor control, as the quality of the T cell pool that is available for these antigens is not affected by central T cell tolerance. Technology to systematically analyze T cell reactivity against neoantigens became available only recently. Neoantigens can be directly detected and quantified.
Viral antigens
For virus-associated tumors, such as cervical cancer and a subset of head and neck cancers, epitopes derived from viral open reading frames contribute to the pool of neoantigens.
Tumor antigens
Tumor antigens are those antigens that are presented by MHC class I or MHC class II molecules on the surface of tumor cells. Antigens found only on such cells are called tumor-specific antigens (TSAs) and generally result from a tumor-specific mutation. More common are antigens that are presented by tumor cells and normal cells, called tumor-associated antigens (TAAs). Cytotoxic T lymphocytes that recognize these antigens may be able to destroy tumor cells.
Tumor antigens can appear on the surface of the tumor in the form of, for example, a mutated receptor, in which case they are recognized by B cells.
For human tumors without a viral etiology, novel peptides (neo-epitopes) are created by tumor-specific DNA alterations.
Process
A large fraction of human tumor mutations are effectively patient-specific. Therefore, neoantigens may also be based on individual tumor genomes. Deep-sequencing technologies can identify mutations within the protein-coding part of the genome (the exome) and predict potential neoantigens. In mouse models, potential MHC-binding peptides were predicted for all novel protein sequences. The resulting set of potential neoantigens was used to assess T cell reactivity. Exome-based analyses have been exploited in a clinical setting to assess reactivity in patients treated by either tumor-infiltrating lymphocyte (TIL) cell therapy or checkpoint blockade. Neoantigen identification has been successful for multiple experimental model systems and human malignancies.
The false-negative rate of cancer exome sequencing is low; that is, the majority of neoantigens occur within exonic sequence with sufficient coverage. However, the vast majority of mutations within expressed genes do not produce neoantigens that are recognized by autologous T cells.
As of 2015, mass spectrometry resolution was insufficient to exclude many false positives from the pool of peptides that may be presented by MHC molecules. Instead, algorithms are used to identify the most likely candidates. These algorithms consider factors such as the likelihood of proteasomal processing, transport into the endoplasmic reticulum, affinity for the relevant MHC class I alleles, and gene expression or protein translation levels.
The majority of human neoantigens identified in unbiased screens display a high predicted MHC binding affinity. Minor histocompatibility antigens, a conceptually similar antigen class, are also correctly identified by MHC binding algorithms. Another potential filter examines whether the mutation is expected to improve MHC binding. The nature of the central TCR-exposed residues of MHC-bound peptides is associated with peptide immunogenicity.
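To make the filtering logic concrete, the sketch below applies two screening criteria of the kind such algorithms use, a predicted MHC binding-affinity cutoff and a minimal gene-expression threshold, to a list of candidate peptides. The peptides, scores, and thresholds are illustrative assumptions rather than outputs of a real prediction tool (an IC50 below 500 nM is a commonly used convention for calling a peptide a "binder"):

```python
# Toy neoantigen candidate filter: keep mutant peptides that are predicted
# MHC binders and come from sufficiently expressed genes. All values are
# illustrative and not derived from any real prediction tool.

candidates = [
    # (peptide, predicted IC50 in nM, gene expression in TPM)
    ("KLDETFFKV", 42.0, 35.0),
    ("SYLDSGIHF", 1200.0, 80.0),   # weak predicted binder -> filtered out
    ("AMFWSVPTV", 310.0, 0.2),     # barely expressed gene -> filtered out
    ("GLYDGMEHL", 88.0, 12.0),
]

AFFINITY_CUTOFF_NM = 500.0   # common convention: IC50 < 500 nM = "binder"
EXPRESSION_CUTOFF_TPM = 1.0  # assumed minimal expression threshold

shortlist = [
    (pep, ic50, tpm)
    for pep, ic50, tpm in candidates
    if ic50 < AFFINITY_CUTOFF_NM and tpm >= EXPRESSION_CUTOFF_TPM
]

# Rank surviving candidates by predicted affinity (lower IC50 = stronger).
for pep, ic50, tpm in sorted(shortlist, key=lambda c: c[1]):
    print(f"{pep}: IC50={ic50} nM, expression={tpm} TPM")
```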
Nativity
A native antigen is an antigen that is not yet processed by an APC to smaller parts. T cells cannot bind native antigens, but require that they be processed by APCs, whereas B cells can be activated by native ones.
Antigenic specificity
Antigenic specificity is the ability of the host cells to recognize an antigen specifically as a unique molecular entity and distinguish it from another with exquisite precision. Antigen specificity is due primarily to the side-chain conformations of the antigen. It is measurable and need not be linear or of a rate-limited step or equation. Both T cells and B cells are cellular components of adaptive immunity.
See also
References
|
;Biomolecules;Immune system
|
https://en.wikipedia.org/wiki/Alexander%20technique
|
The Alexander technique, named after its developer Frederick Matthias Alexander (1869–1955), is an alternative therapy based on the idea that poor posture causes a range of health problems. The American National Center for Complementary and Integrative Health classifies it as a "psychological and physical" complementary approach to health when used "together with" mainstream conventional medicine.
Alexander began developing his technique's principles in the 1890s to address his own voice loss during public speaking. He credited his method with allowing him to pursue his passion for performing Shakespearean recitations.
Proponents and teachers of the Alexander technique believe the technique can address a variety of health conditions, but there is a lack of research to support the claims. The UK National Health Service and the National Institute for Health and Care Excellence (NICE) cite evidence that the Alexander technique may be helpful for long-term back pain and for long-term neck pain, and that it could help people cope with Parkinson's disease. Both the American health-insurance company Aetna and the Australian Department of Health have conducted reviews and concluded that there is insufficient evidence for the technique's health claims to warrant insurance coverage.
Method
The Alexander technique is most commonly taught in a series of private lessons, which may last from 30 minutes to an hour. The number of lessons varies widely, depending on the student's needs and level of interest. Students are often performers, such as actors, dancers, musicians, athletes and public speakers, people who work on computers, or those who are in frequent pain for other reasons. Instructors observe their students and provide both verbal and gentle manual guidance to help students learn how to move with better poise and less strain. Sessions include chair work – often in front of a mirror – during which the instructor guides the student while the student stands, sits and walks, learning to move efficiently while maintaining a comfortable relationship between the head, neck, and spine; they also include table work or physical manipulation.
In the United Kingdom, there is no regulation for who can offer Alexander technique services. Professional organisations do exist, however, typically offering three-year courses to people becoming instructors.
History
The Alexander technique is based on the personal observations of Frederick Matthias Alexander (1869–1955). Alexander's career as an actor was hampered by recurrent bouts of laryngitis, but he found he could overcome it by focusing on his discomfort and tension, and relaxing. Alexander also thought posture could be improved if a person became more conscious of their bodily movements.
While on a recital tour in New Zealand (1895), Alexander came to believe in the wider significance of improved carriage for overall physical functioning, although evidence from his own publications appears to indicate it happened less systematically and over a long period of time.
Alexander did not originally conceive of his technique as therapy, but it has become a form of alternative medicine.
When considering how to classify the Alexander technique in relation to mainstream medicine, some sources describe it as alternative and/or complementary, depending on whether it is used alone or with mainstream methods. The American National Center for Complementary and Integrative Health classifies it as a "psychological and physical" complementary approach to health when used with mainstream methods. When used "in place of" conventional medicine, it is considered "alternative".
Influence
The American philosopher and educator John Dewey became impressed with the Alexander technique after his headaches, neck pains, blurred vision, and stress symptoms largely improved during the time he used Alexander's advice to change his posture. In 1923, Dewey wrote the introduction to Alexander's Constructive Conscious Control of the Individual.
Fritz Perls, who originated Gestalt therapy, credited Alexander as an inspiration for his psychological work.
Uses
The Alexander technique is used as a therapy for stress-related chronic conditions. It does not attempt to cure the underlying cause, but to teach people how to avoid bad habits which might exacerbate their condition.
The technique is used as an alternative treatment to improve both voice and posture for people in the performing arts. It has been on the curriculum of prominent Western performing arts institutions.
According to Alexander technique instructor Michael J. Gelb, people tend to study the Alexander technique for reasons of personal development.
Health effects
The UK National Health Service says that advocates of the Alexander technique made claims for it that were not supported by evidence, but that there was evidence suggesting that it might help with chronic back or neck pain. According to the NHS, the Alexander technique may be of benefit for people with Parkinson's disease. The National Institute for Health and Care Excellence (NICE) guidelines state that people with Parkinson's disease who are experiencing balance or motor function problems should consider the Alexander technique along with disease-specific physiotherapy. There is limited evidence for chronic pain, stammering, and balance skills in older people. There was no good evidence of benefit for other conditions including asthma, headaches, osteoarthritis, difficulty sleeping, and stress.
A 2012 Cochrane systematic review found that there is no good evidence that the Alexander technique is effective for treating asthma, and randomized clinical trials are needed in order to assess the effectiveness of this type of treatment approach.
A 2014 review published in BMC Complementary and Alternative Medicine, which focused on "the evidence for the effectiveness of AT sessions on musicians' performance, anxiety, respiratory function and posture", concluded that "evidence from RCTs and CTs suggests that AT sessions may improve performance anxiety in musicians. Effects on music performance, respiratory function and posture yet remain inconclusive."
A 2015 review, conducted for the Australian Department of Health in order to determine what services the Australian government should pay for, examined clinical trials published to date and found that "overall, the evidence was limited by the small number of participants in the intervention arms, wide confidence intervals or a lack of replication of results." It concluded that "the Alexander technique may improve short-term pain and disability in people with low back pain, but the longer-term effects remain uncertain. For all other clinical conditions, the effectiveness of the Alexander technique was deemed to be uncertain, due to insufficient evidence." It also noted that "evidence for the safety of Alexander Technique was lacking, with most trials not reporting on this outcome." Subsequently, in 2017, the Australian government named the Alexander technique as a practice that would not qualify for insurance subsidy, saying this step would "ensure taxpayer funds are expended appropriately and not directed to therapies lacking evidence".
A review by Aetna last updated in 2021 stated: "Aetna considers the following alternative medicine interventions experimental and investigational because there is inadequate evidence in the peer-reviewed published medical literature of their effectiveness." The Alexander technique is included in that list.
See also
Nikolai Bernstein
George E. Coghill
Motor skill consolidation
Neutral spine
Psychomotor learning
References
External links
|
Alternative medicine;Mind–body interventions;Postural awareness techniques;Somatics
|
https://en.wikipedia.org/wiki/Apparent%20magnitude
|
Apparent magnitude (m) is a measure of the brightness of a star, astronomical object or other celestial objects like artificial satellites. Its value depends on its intrinsic luminosity, its distance, and any extinction of the object's light caused by interstellar dust along the line of sight to the observer.
Unless stated otherwise, the word magnitude in astronomy usually refers to a celestial object's apparent magnitude. The magnitude scale likely dates to before the ancient Roman astronomer Claudius Ptolemy, whose star catalog popularized the system by listing stars from 1st magnitude (brightest) to 6th magnitude (dimmest). The modern scale was mathematically defined to closely match this historical system by Norman Pogson in 1856.
The scale is reverse logarithmic: the brighter an object is, the lower its magnitude number. A difference of 1.0 in magnitude corresponds to a brightness ratio of $\sqrt[5]{100}$, or about 2.512. For example, a magnitude 2.0 star is 2.512 times as bright as a magnitude 3.0 star, 6.31 times as bright as a magnitude 4.0 star, and 100 times as bright as a magnitude 7.0 star.
The brightest astronomical objects have negative apparent magnitudes: for example, Venus at −4.2 or Sirius at −1.46. The faintest stars visible with the naked eye on the darkest night have apparent magnitudes of about +6.5, though this varies depending on a person's eyesight and with altitude and atmospheric conditions. The apparent magnitudes of known objects range from the Sun at −26.832 to objects in deep Hubble Space Telescope images of magnitude +31.5.
The measurement of apparent magnitude is called photometry. Photometric measurements are made in the ultraviolet, visible, or infrared wavelength bands using standard passband filters belonging to photometric systems such as the UBV system or the Strömgren uvbyβ system. Measurement in the V-band may be referred to as the apparent visual magnitude.
Absolute magnitude is a related quantity which measures the luminosity that a celestial object emits, rather than its apparent brightness when observed, and is expressed on the same reverse logarithmic scale. Absolute magnitude is defined as the apparent magnitude that a star or object would have if it were observed from a distance of 10 parsecs (32.6 light-years). Therefore, it is of greater use in stellar astrophysics since it refers to a property of a star regardless of how close it is to Earth. But in observational astronomy and popular stargazing, references to "magnitude" are understood to mean apparent magnitude.
Amateur astronomers commonly express the darkness of the sky in terms of limiting magnitude, i.e. the apparent magnitude of the faintest star they can see with the naked eye. This can be useful as a way of monitoring the spread of light pollution.
Apparent magnitude is technically a measure of illuminance, which can also be measured in photometric units such as lux.
History
The scale used to indicate magnitude originates in the Hellenistic practice of dividing stars visible to the naked eye into six magnitudes. The brightest stars in the night sky were said to be of first magnitude (m = 1), whereas the faintest were of sixth magnitude (m = 6), which is the limit of human visual perception (without the aid of a telescope). Each grade of magnitude was considered twice the brightness of the following grade (a logarithmic scale), although that ratio was subjective as no photodetectors existed. This rather crude scale for the brightness of stars was popularized by Ptolemy in his Almagest and is generally believed to have originated with Hipparchus. This cannot be proved or disproved because Hipparchus's original star catalogue is lost. The only preserved text by Hipparchus himself (a commentary to Aratus) clearly documents that he did not have a system to describe brightness with numbers: He always uses terms like "big" or "small", "bright" or "faint" or even descriptions such as "visible at full moon".
In 1856, Norman Robert Pogson formalized the system by defining a first magnitude star as a star that is 100 times as bright as a sixth-magnitude star, thereby establishing the logarithmic scale still in use today. This implies that a star of magnitude m is about 2.512 times as bright as a star of magnitude m + 1. This figure, the fifth root of 100, became known as Pogson's ratio. The 1884 Harvard Photometry and 1886 Potsdamer Durchmusterung star catalogs popularized Pogson's ratio, and eventually it became a de facto standard in modern astronomy to describe differences in brightness.
Defining and calibrating what magnitude 0.0 means is difficult, and different types of measurements which detect different kinds of light (possibly by using filters) have different zero points. Pogson's original 1856 paper defined magnitude 6.0 to be the faintest star the unaided eye can see, but the true limit for faintest possible visible star varies depending on the atmosphere and how high a star is in the sky. The Harvard Photometry used an average of 100 stars close to Polaris to define magnitude 5.0. Later, the Johnson UBV photometric system defined multiple types of photometric measurements with different filters, where magnitude 0.0 for each filter is defined to be the average of six stars with the same spectral type as Vega. This was done so the color index of these stars would be 0. Although this system is often called "Vega normalized", Vega is slightly dimmer than the six-star average used to define magnitude 0.0, meaning Vega's magnitude is normalized to 0.03 by definition.
With the modern magnitude systems, brightness is described using Pogson's ratio. In practice, magnitude numbers rarely go above 30 before stars become too faint to detect. While Vega is close to magnitude 0, there are four brighter stars in the night sky at visible wavelengths (and more at infrared wavelengths) as well as the bright planets Venus, Mars, and Jupiter, and since brighter means smaller magnitude, these must be described by negative magnitudes. For example, Sirius, the brightest star of the celestial sphere, has a magnitude of −1.4 in the visible. Negative magnitudes for other very bright astronomical objects can be found in the table below.
Astronomers have developed other photometric zero point systems as alternatives to Vega normalized systems. The most widely used is the AB magnitude system, in which photometric zero points are based on a hypothetical reference spectrum having constant flux per unit frequency interval, rather than using a stellar spectrum or blackbody curve as the reference. The AB magnitude zero point is defined such that an object's AB and Vega-based magnitudes will be approximately equal in the V filter band. However, the AB magnitude system is defined assuming an idealized detector measuring only one wavelength of light, while real detectors accept energy from a range of wavelengths.
Measurement
Precision measurement of magnitude (photometry) requires calibration of the photographic or (usually) electronic detection apparatus. This generally involves contemporaneous observation, under identical conditions, of standard stars whose magnitude using that spectral filter is accurately known. Moreover, as the amount of light actually received by a telescope is reduced due to transmission through the Earth's atmosphere, the airmasses of the target and calibration stars must be taken into account. Typically one would observe a few different stars of known magnitude which are sufficiently similar. Calibrator stars close in the sky to the target are favoured (to avoid large differences in the atmospheric paths). If those stars have somewhat different zenith angles (altitudes) then a correction factor as a function of airmass can be derived and applied to the airmass at the target's position. Such calibration obtains the brightness as would be observed from above the atmosphere, where apparent magnitude is defined.
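This airmass correction can be sketched in a few lines of Python (a minimal sketch; all names and the sample data are illustrative, not drawn from any particular survey): a linear extinction law is fitted to standard stars observed at different airmasses and then used to reduce a target's instrumental magnitude to its above-atmosphere value.

```python
import numpy as np

# Instrumental minus catalogue magnitudes for standard stars, with their
# airmasses. Illustrative numbers only: delta_m = zero_point + k * airmass.
airmass = np.array([1.05, 1.30, 1.62, 2.10])
delta_m = np.array([0.262, 0.295, 0.338, 0.401])

# Least-squares fit of the linear extinction law.
k, zero_point = np.polyfit(airmass, delta_m, 1)

def above_atmosphere(m_inst, X):
    """Correct an instrumental magnitude observed at airmass X to the
    magnitude that would be measured above the atmosphere."""
    return m_inst - zero_point - k * X

print(f"extinction coefficient k = {k:.3f} mag/airmass")
print(above_atmosphere(12.84, 1.41))  # hypothetical target observation
```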
The apparent magnitude scale in astronomy reflects the received power of stars and not their amplitude. Newcomers should consider using the relative brightness measure in astrophotography to adjust exposure times between stars. Apparent magnitude also integrates over the entire object, regardless of its focus, and this needs to be taken into account when scaling exposure times for objects with significant apparent size, like the Sun, Moon and planets. For example, directly scaling the exposure time from the Moon to the Sun works because they are approximately the same size in the sky. However, scaling the exposure from the Moon to Saturn would result in an overexposure if the image of Saturn takes up a smaller area on your sensor than the Moon did (at the same magnification, or more generally, f/#).
Calculations
The dimmer an object appears, the higher the numerical value given to its magnitude, with a difference of 5 magnitudes corresponding to a brightness factor of exactly 100. Therefore, the magnitude $m$, in the spectral band $x$, would be given by
$m_x = -5\log_{100}\left(\frac{F_x}{F_{x,0}}\right),$
which is more commonly expressed in terms of common (base-10) logarithms as
$m_x = -2.5\log_{10}\left(\frac{F_x}{F_{x,0}}\right),$
where $F_x$ is the observed irradiance using spectral filter $x$, and $F_{x,0}$ is the reference flux (zero-point) for that photometric filter. Since an increase of 5 magnitudes corresponds to a decrease in brightness by a factor of exactly 100, each magnitude increase implies a decrease in brightness by the factor $\sqrt[5]{100}\approx 2.512$ (Pogson's ratio). Inverting the above formula, a magnitude difference $m_1 - m_2 = \Delta m$ implies a brightness factor of
$\frac{F_2}{F_1} = 100^{\frac{\Delta m}{5}} = 10^{0.4\,\Delta m} \approx 2.512^{\Delta m}.$
Example: Sun and Moon
What is the ratio in brightness between the Sun and the full Moon?
The apparent magnitude of the Sun is −26.832 (brighter), and the mean magnitude of the full Moon is −12.74 (dimmer).
Difference in magnitude:
$\Delta m = -12.74 - (-26.832) = 14.09.$
Brightness factor:
$\frac{F_\text{Sun}}{F_\text{Moon}} = 100^{\frac{14.09}{5}} \approx 432\,513.$
The Sun appears to be approximately 400,000 times as bright as the full Moon.
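The arithmetic of this example is easy to verify in code; the following minimal Python sketch uses only Pogson's relation and the magnitudes quoted above.

```python
def brightness_ratio(m_dim, m_bright):
    """Brightness factor corresponding to a magnitude difference,
    via Pogson's relation: 100 ** (delta_m / 5)."""
    return 100 ** ((m_dim - m_bright) / 5)

# Sun (-26.832) versus full Moon (-12.74):
print(brightness_ratio(-12.74, -26.832))  # ~4.3e5, i.e. roughly 400,000
```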
Magnitude addition
Sometimes one might wish to add brightness. For example, photometry on closely separated double stars may only be able to produce a measurement of their combined light output. To find the combined magnitude of that double star knowing only the magnitudes of the individual components, this can be done by adding the brightnesses (in linear units) corresponding to each magnitude:
$10^{-0.4\,m_f} = 10^{-0.4\,m_1} + 10^{-0.4\,m_2}.$
Solving for $m_f$ yields
$m_f = -2.5\log_{10}\left(10^{-0.4\,m_1} + 10^{-0.4\,m_2}\right),$
where $m_f$ is the resulting magnitude after adding the brightnesses referred to by $m_1$ and $m_2$.
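A minimal sketch of this calculation in Python; as a check, combining magnitudes +0.01 and +1.35 (the two bright components of Alpha Centauri) reproduces the blended magnitude of about −0.27.

```python
import math

def combined_magnitude(m1, m2):
    """Magnitude of the summed light of two sources: convert each
    magnitude to linear flux, add the fluxes, convert back."""
    flux = 10 ** (-0.4 * m1) + 10 ** (-0.4 * m2)
    return -2.5 * math.log10(flux)

print(combined_magnitude(0.01, 1.35))  # about -0.27
```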
Apparent bolometric magnitude
While magnitude generally refers to a measurement in a particular filter band corresponding to some range of wavelengths, the apparent or absolute bolometric magnitude (mbol) is a measure of an object's apparent or absolute brightness integrated over all wavelengths of the electromagnetic spectrum (also known as the object's irradiance or power, respectively). The zero point of the apparent bolometric magnitude scale is based on the definition that an apparent bolometric magnitude of 0 mag is equivalent to a received irradiance of 2.518×10⁻⁸ watts per square metre (W·m⁻²).
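In code form, this zero point gives a one-line conversion from received irradiance to apparent bolometric magnitude; the sketch below checks it against the solar constant (about 1361 W/m², a standard round value).

```python
import math

F0 = 2.518e-8  # zero-point irradiance for m_bol = 0, in W / m^2

def apparent_bolometric_magnitude(irradiance_w_m2):
    """Apparent bolometric magnitude from received irradiance."""
    return -2.5 * math.log10(irradiance_w_m2 / F0)

# The solar constant gives the Sun's apparent m_bol of about -26.8:
print(apparent_bolometric_magnitude(1361))
```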
Absolute magnitude
While apparent magnitude is a measure of the brightness of an object as seen by a particular observer, absolute magnitude is a measure of the intrinsic brightness of an object. Flux decreases with distance according to an inverse-square law, so the apparent magnitude of a star depends on both its absolute brightness and its distance (and any extinction). For example, a star at one distance will have the same apparent magnitude as a star four times as bright at twice that distance. In contrast, the intrinsic brightness of an astronomical object does not depend on the distance of the observer or any extinction.
The absolute magnitude M of a star or astronomical object is defined as the apparent magnitude it would have as seen from a distance of 10 parsecs. The absolute magnitude of the Sun is 4.83 in the V band (visual), 4.68 in the Gaia satellite's G band (green) and 5.48 in the B band (blue).
In the case of a planet or asteroid, the absolute magnitude H rather means the apparent magnitude it would have if it were one astronomical unit (AU) from both the observer and the Sun, and fully illuminated at maximum opposition (a configuration that is only theoretically achievable, with the observer situated on the surface of the Sun).
Standard reference values
The magnitude scale is a reverse logarithmic scale. A common misconception is that the logarithmic nature of the scale is because the human eye itself has a logarithmic response. In Pogson's time this was thought to be true (see Weber–Fechner law), but it is now believed that the response is a power law.
Magnitude is complicated by the fact that light is not monochromatic. The sensitivity of a light detector varies according to the wavelength of the light, and the way it varies depends on the type of light detector. For this reason, it is necessary to specify how the magnitude is measured for the value to be meaningful. For this purpose the UBV system is widely used, in which the magnitude is measured in three different wavelength bands: U (centred at about 350 nm, in the near ultraviolet), B (about 435 nm, in the blue region) and V (about 555 nm, in the middle of the human visual range in daylight). The V band was chosen for spectral purposes and gives magnitudes closely corresponding to those seen by the human eye. When an apparent magnitude is discussed without further qualification, the V magnitude is generally understood.
Because cooler stars, such as red giants and red dwarfs, emit little energy in the blue and UV regions of the spectrum, their power is often under-represented by the UBV scale. Indeed, some L and T class stars have an estimated magnitude of well over 100, because they emit extremely little visible light, but are strongest in infrared.
Measures of magnitude need cautious treatment and it is extremely important to measure like with like. On early 20th century and older orthochromatic (blue-sensitive) photographic film, the relative brightnesses of the blue supergiant Rigel and the red supergiant Betelgeuse irregular variable star (at maximum) are reversed compared to what human eyes perceive, because this archaic film is more sensitive to blue light than it is to red light. Magnitudes obtained from this method are known as photographic magnitudes, and are now considered obsolete.
For objects within the Milky Way with a given absolute magnitude, 5 is added to the apparent magnitude for every tenfold increase in the distance to the object. For objects at very great distances (far beyond the Milky Way), this relationship must be adjusted for redshifts and for non-Euclidean distance measures due to general relativity.
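A short sketch of this five-magnitudes-per-decade rule, using a hypothetical Sun-like star of absolute magnitude 4.83:

```python
import math

def apparent_from_absolute(M, d_parsecs):
    """Apparent magnitude of an object of absolute magnitude M seen
    from d parsecs, ignoring extinction: m = M + 5 (log10 d - 1)."""
    return M + 5 * (math.log10(d_parsecs) - 1)

# Each factor of ten in distance adds exactly 5 magnitudes:
for d in (10, 100, 1000):
    print(d, apparent_from_absolute(4.83, d))  # 4.83, 9.83, 14.83
```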
For planets and other Solar System bodies, the apparent magnitude is derived from its phase curve and the distances to the Sun and observer.
List of apparent magnitudes
Some of the listed magnitudes are approximate. Telescope sensitivity depends on observing time, optical bandpass, and interfering light from scattering and airglow.
See also
Angular diameter
Distance modulus
List of nearest bright stars
List of nearest stars
Luminosity
Surface brightness
References
|
Logarithmic scales of measurement;Observational astronomy
|
https://en.wikipedia.org/wiki/Absolute%20magnitude
|
In astronomy, absolute magnitude (M) is a measure of the luminosity of a celestial object on an inverse logarithmic astronomical magnitude scale; the more luminous (intrinsically bright) an object, the lower its magnitude number. An object's absolute magnitude is defined to be equal to the apparent magnitude that the object would have if it were viewed from a distance of exactly 10 parsecs (32.6 light-years), without extinction (or dimming) of its light due to absorption by interstellar matter and cosmic dust. By hypothetically placing all objects at a standard reference distance from the observer, their luminosities can be directly compared among each other on a magnitude scale. For Solar System bodies that shine in reflected light, a different definition of absolute magnitude (H) is used, based on a standard reference distance of one astronomical unit.
Absolute magnitudes of stars generally range from approximately −10 to +20. The absolute magnitudes of galaxies can be much lower (brighter).
The more luminous an object, the smaller the numerical value of its absolute magnitude. A difference of 5 magnitudes between the absolute magnitudes of two objects corresponds to a ratio of 100 in their luminosities, and a difference of n magnitudes in absolute magnitude corresponds to a luminosity ratio of 100^(n/5). For example, a star of absolute magnitude MV = 3.0 would be 100 times as luminous as a star of absolute magnitude MV = 8.0 as measured in the V filter band. The Sun has absolute magnitude MV = +4.83. Highly luminous objects can have negative absolute magnitudes: for example, the Milky Way galaxy has an absolute B magnitude of about −20.8.
As with all astronomical magnitudes, the absolute magnitude can be specified for different wavelength ranges corresponding to specified filter bands or passbands; for stars a commonly quoted absolute magnitude is the absolute visual magnitude, which uses the visual (V) band of the spectrum (in the UBV photometric system). Absolute magnitudes are denoted by a capital M, with a subscript representing the filter band used for measurement, such as MV for absolute magnitude in the V band.
An object's absolute bolometric magnitude (Mbol) represents its total luminosity over all wavelengths, rather than in a single filter band, as expressed on a logarithmic magnitude scale. To convert from an absolute magnitude in a specific filter band to absolute bolometric magnitude, a bolometric correction (BC) is applied.
Stars and galaxies
In stellar and galactic astronomy, the standard distance is 10 parsecs (about 32.616 light-years, 308.57 petameters or 308.57 trillion kilometres). A star at 10 parsecs has a parallax of 0.1″ (100 milliarcseconds). Galaxies (and other extended objects) are much larger than 10 parsecs; their light is radiated over an extended patch of sky, and their overall brightness cannot be directly observed from relatively short distances, but the same convention is used. A galaxy's magnitude is defined by measuring all the light radiated over the entire object, treating that integrated brightness as the brightness of a single point-like or star-like source, and computing the magnitude of that point-like source as it would appear if observed at the standard 10 parsecs distance. Consequently, the absolute magnitude of any object equals the apparent magnitude it would have if it were 10 parsecs away.
Some stars visible to the naked eye have such a low absolute magnitude that they would appear bright enough to outshine the planets and cast shadows if they were at 10 parsecs from the Earth. Examples include Rigel (−7.8), Deneb (−8.4), Naos (−6.2), and Betelgeuse (−5.8). For comparison, Sirius has an absolute magnitude of only 1.4, which is still brighter than the Sun, whose absolute visual magnitude is 4.83. The Sun's absolute bolometric magnitude is set arbitrarily, usually at 4.75.
Absolute magnitudes of stars generally range from approximately −10 to +20. The absolute magnitudes of galaxies can be much lower (brighter). For example, the giant elliptical galaxy M87 has an absolute magnitude of −22 (i.e. as bright as about 60,000 stars of magnitude −10). Some active galactic nuclei (quasars like CTA-102) can reach absolute magnitudes in excess of −32, making them the most luminous persistent objects in the observable universe, although these objects can vary in brightness over astronomically short timescales. At the extreme end, the optical afterglow of the gamma ray burst GRB 080319B reached, according to one paper, an absolute r magnitude brighter than −38 for a few tens of seconds.
Apparent magnitude
The Greek astronomer Hipparchus established a numerical scale to describe the brightness of each star appearing in the sky. The brightest stars in the sky were assigned an apparent magnitude m = 1, and the dimmest stars visible to the naked eye are assigned m = 6. The difference between them corresponds to a factor of 100 in brightness. For objects within the immediate neighborhood of the Sun, the absolute magnitude M and apparent magnitude m from any distance d (in parsecs, with 1 pc = 3.2616 light-years) are related by
$100^{\frac{m-M}{5}} = \frac{F_{10}}{F} = \left(\frac{d}{10\,\mathrm{pc}}\right)^{2},$
where $F$ is the radiant flux measured at distance $d$ (in parsecs), $F_{10}$ the radiant flux measured at distance $10\,\mathrm{pc}$. Using the common logarithm, the equation can be written as
$M = m - 5\log_{10}(d_\mathrm{pc}) + 5 = m - 5\left(\log_{10}d_\mathrm{pc} - 1\right),$
where it is assumed that extinction from gas and dust is negligible. Typical extinction rates within the Milky Way galaxy are 1 to 2 magnitudes per kiloparsec, when dark clouds are taken into account.
For objects at very large distances (outside the Milky Way) the luminosity distance (distance defined using luminosity measurements) must be used instead of , because the Euclidean approximation is invalid for distant objects. Instead, general relativity must be taken into account. Moreover, the cosmological redshift complicates the relationship between absolute and apparent magnitude, because the radiation observed was shifted into the red range of the spectrum. To compare the magnitudes of very distant objects with those of local objects, a K correction might have to be applied to the magnitudes of the distant objects.
The absolute magnitude can also be written in terms of the apparent magnitude $m$ and stellar parallax $p$ (in arcseconds):
$M = m + 5\left(\log_{10}p + 1\right),$
or using apparent magnitude $m$ and distance modulus $\mu$:
$M = m - \mu.$
Examples
Rigel has a visual magnitude of 0.12 and distance of about 860 light-years (264 parsecs):
$M_\mathrm{V} = 0.12 - 5\left(\log_{10}264 - 1\right) = -7.0.$
Vega has a parallax of 0.129″, and an apparent magnitude of 0.03:
$M_\mathrm{V} = 0.03 + 5\left(\log_{10}0.129 + 1\right) = +0.6.$
The Black Eye Galaxy has a visual magnitude of 9.36 and a distance modulus of 31.06:
$M_\mathrm{V} = 9.36 - 31.06 = -21.7.$
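The three worked examples can be reproduced with the formulas above; the helper names in this Python sketch are illustrative, not a standard API.

```python
import math

def abs_mag_from_distance(m, d_pc):
    """M = m - 5 (log10 d - 1), with d in parsecs."""
    return m - 5 * (math.log10(d_pc) - 1)

def abs_mag_from_parallax(m, p_arcsec):
    """M = m + 5 (log10 p + 1), with parallax in arcseconds."""
    return m + 5 * (math.log10(p_arcsec) + 1)

def abs_mag_from_distance_modulus(m, mu):
    """M = m - mu."""
    return m - mu

print(abs_mag_from_distance(0.12, 860 / 3.2616))   # Rigel: about -7.0
print(abs_mag_from_parallax(0.03, 0.129))          # Vega: about +0.6
print(abs_mag_from_distance_modulus(9.36, 31.06))  # Black Eye Galaxy: -21.7
```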
Bolometric magnitude
The absolute bolometric magnitude ($M_\mathrm{bol}$) takes into account electromagnetic radiation at all wavelengths. It includes radiation that goes unobserved due to instrumental passband, the Earth's atmospheric absorption, and extinction by interstellar dust. It is defined based on the luminosity of the stars. In the case of stars with few observations, it must be computed assuming an effective temperature.
Classically, the difference in bolometric magnitude is related to the luminosity ratio according to:
$M_\mathrm{bol,\star} - M_\mathrm{bol,\odot} = -2.5\log_{10}\left(\frac{L_\star}{L_\odot}\right),$
which makes by inversion:
$\frac{L_\star}{L_\odot} = 10^{0.4\left(M_\mathrm{bol,\odot} - M_\mathrm{bol,\star}\right)},$
where
$L_\odot$ is the Sun's luminosity (bolometric luminosity)
$L_\star$ is the star's luminosity (bolometric luminosity)
$M_\mathrm{bol,\odot}$ is the bolometric magnitude of the Sun
$M_\mathrm{bol,\star}$ is the bolometric magnitude of the star.
In August 2015, the International Astronomical Union passed Resolution B2 defining the zero points of the absolute and apparent bolometric magnitude scales in SI units for power (watts) and irradiance (W/m2), respectively. Although bolometric magnitudes had been used by astronomers for many decades, there had been systematic differences in the absolute magnitude-luminosity scales presented in various astronomical references, and no international standardization. This led to systematic differences in bolometric correction scales. Combined with incorrect assumed absolute bolometric magnitudes for the Sun, this could lead to systematic errors in estimated stellar luminosities (and other stellar properties, such as radii or ages, which rely on stellar luminosity to be calculated).
Resolution B2 defines an absolute bolometric magnitude scale where $M_\mathrm{bol} = 0$ corresponds to luminosity $L_0 = 3.0128\times10^{28}$ W, with the zero point luminosity $L_0$ set such that the Sun (with nominal luminosity $3.828\times10^{26}$ W) corresponds to absolute bolometric magnitude $M_\mathrm{bol,\odot} = 4.74$. Placing a radiation source (e.g. star) at the standard distance of 10 parsecs, it follows that the zero point of the apparent bolometric magnitude scale corresponds to irradiance $f_0 = 2.518\times10^{-8}$ W·m⁻². Using the IAU 2015 scale, the nominal total solar irradiance ("solar constant") measured at 1 astronomical unit (1361 W·m⁻²) corresponds to an apparent bolometric magnitude of the Sun of $m_\mathrm{bol,\odot} = -26.832$.
Following Resolution B2, the relation between a star's absolute bolometric magnitude and its luminosity is no longer directly tied to the Sun's (variable) luminosity:
$M_\mathrm{bol} = -2.5\log_{10}\left(\frac{L_\star}{L_0}\right),$
where
$L_\star$ is the star's luminosity (bolometric luminosity) in watts
$L_0 = 3.0128\times10^{28}$ W is the zero point luminosity
$M_\mathrm{bol}$ is the bolometric magnitude of the star
The new IAU absolute magnitude scale permanently disconnects the scale from the variable Sun. However, on this SI power scale, the nominal solar luminosity corresponds closely to $M_\mathrm{bol} = 4.75$, a value that was commonly adopted by astronomers before the 2015 IAU resolution.
The luminosity of the star in watts can be calculated as a function of its absolute bolometric magnitude as:
$L_\star = L_0\,10^{-0.4\,M_\mathrm{bol}},$
using the variables as defined previously.
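A sketch of the Resolution B2 conversion in both directions; the only constant needed is the IAU zero-point luminosity.

```python
import math

L0 = 3.0128e28  # IAU zero-point luminosity, in watts

def luminosity_from_Mbol(M_bol):
    """Luminosity in watts from absolute bolometric magnitude."""
    return L0 * 10 ** (-0.4 * M_bol)

def Mbol_from_luminosity(L_watts):
    """Absolute bolometric magnitude from luminosity in watts."""
    return -2.5 * math.log10(L_watts / L0)

print(luminosity_from_Mbol(4.74))      # ~3.828e26 W, the nominal solar luminosity
print(Mbol_from_luminosity(3.828e26))  # ~4.74
```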
Solar System bodies (H)
For planets and asteroids, a definition of absolute magnitude that is more meaningful for non-stellar objects is used. The absolute magnitude, commonly called H, is defined as the apparent magnitude that the object would have if it were one astronomical unit (AU) from both the Sun and the observer, and in conditions of ideal solar opposition (an arrangement that is impossible in practice). Because Solar System bodies are illuminated by the Sun, their brightness varies as a function of illumination conditions, described by the phase angle. This relationship is referred to as the phase curve. The absolute magnitude is the brightness at phase angle zero, an arrangement known as opposition, from a distance of one AU.
Apparent magnitude
The absolute magnitude $H$ can be used to calculate the apparent magnitude $m$ of a body. For an object reflecting sunlight, $H$ and $m$ are connected by the relation
$m = H + 5\log_{10}\left(\frac{d_{BS}\,d_{BO}}{d_0^{2}}\right) - 2.5\log_{10}q(\alpha),$
where $\alpha$ is the phase angle, the angle between the body–Sun and body–observer lines. $q(\alpha)$ is the phase integral (the integration of reflected light; a number in the 0 to 1 range).
By the law of cosines, we have:
$\cos\alpha = \frac{d_{BO}^{2} + d_{BS}^{2} - d_{OS}^{2}}{2\,d_{BO}\,d_{BS}}.$
Distances:
$d_{BO}$ is the distance between the body and the observer
$d_{BS}$ is the distance between the body and the Sun
$d_{OS}$ is the distance between the observer and the Sun
$d_0$, a unit conversion factor, is the constant 1 AU, the average distance between the Earth and the Sun
Approximations for phase integral
The value of depends on the properties of the reflecting surface, in particular on its roughness. In practice, different approximations are used based on the known or assumed properties of the surface. The surfaces of terrestrial planets are generally more difficult to model than those of gaseous planets, the latter of which have smoother visible surfaces.
Planets as diffuse spheres
Planetary bodies can be approximated reasonably well as ideal diffuse reflecting spheres. Let $\alpha$ be the phase angle in degrees, then
$q(\alpha) = \frac{2}{3}\left(\left(1 - \frac{\alpha}{180^\circ}\right)\cos\alpha + \frac{1}{\pi}\sin\alpha\right).$
A full-phase diffuse sphere reflects two-thirds as much light as a diffuse flat disk of the same diameter. A quarter phase ($\alpha = 90^\circ$) has $\frac{1}{\pi}$ as much light as full phase ($\alpha = 0^\circ$).
By contrast, a diffuse disk reflector model is simply $q(\alpha) = 1$, which isn't realistic, but it does represent the opposition surge for rough surfaces that reflect more uniform light back at low phase angles.
The definition of the geometric albedo $p$, a measure for the reflectivity of planetary surfaces, is based on the diffuse disk reflector model. The absolute magnitude $H$, diameter $D$ (in kilometers) and geometric albedo $p$ of a body are related by
$H = 5\log_{10}\left(\frac{1329\ \mathrm{km}}{D\sqrt{p}}\right),$
or equivalently,
$D = \frac{1329\ \mathrm{km}}{\sqrt{p}}\,10^{-0.2H}.$
Example: The Moon's absolute magnitude $H$ can be calculated from its diameter $D = 3474\ \mathrm{km}$ and geometric albedo $p = 0.113$:
$H = 5\log_{10}\left(\frac{1329}{3474\sqrt{0.113}}\right) = +0.28.$
We have $d_{BS} = 1\ \mathrm{AU}$, $d_{BO} = 384\,400\ \mathrm{km} = 0.00257\ \mathrm{AU}$.
At quarter phase, $q(\alpha) \approx \frac{2}{3\pi}$ (according to the diffuse reflector model), this yields an apparent magnitude of $m = +0.28 + 5\log_{10}(1 \times 0.00257) - 2.5\log_{10}\left(\frac{2}{3\pi}\right) \approx -10.99.$ The actual value is somewhat lower than that, $m = -10.0$. This is not a good approximation, because the phase curve of the Moon is too complicated for the diffuse reflector model. A more accurate formula is given in the following section.
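The example can be checked numerically with the diffuse-sphere model above; the function names in this sketch are illustrative.

```python
import math

def phase_integral_sphere(alpha_deg):
    """q(alpha) for an ideal diffuse (Lambertian) sphere."""
    a = math.radians(alpha_deg)
    return (2 / 3) * ((1 - alpha_deg / 180) * math.cos(a)
                      + (1 / math.pi) * math.sin(a))

def H_from_diameter_albedo(D_km, p):
    """Absolute magnitude H from diameter (km) and geometric albedo."""
    return 5 * math.log10(1329 / (D_km * math.sqrt(p)))

H = H_from_diameter_albedo(3474, 0.113)  # about +0.28
d_BS, d_BO = 1.0, 0.00257                # distances in AU
m = H + 5 * math.log10(d_BS * d_BO) - 2.5 * math.log10(phase_integral_sphere(90))
print(H, m)                              # about +0.28 and -10.99
```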
More advanced models
Because Solar System bodies are never perfect diffuse reflectors, astronomers use different models to predict apparent magnitudes based on known or assumed properties of the body. For planets, approximations for the correction term in the formula for $m$ have been derived empirically, to match observations at different phase angles. The approximations recommended by the Astronomical Almanac are (with $\alpha$ in degrees):
Here is the effective inclination of Saturn's rings (their tilt relative to the observer), which as seen from Earth varies between 0° and 27° over the course of one Saturn orbit, and is a small correction term depending on Uranus' sub-Earth and sub-solar latitudes. is the Common Era year. Neptune's absolute magnitude is changing slowly due to seasonal effects as the planet moves along its 165-year orbit around the Sun, and the approximation above is only valid after the year 2000. For some circumstances, like for Venus, no observations are available, and the phase curve is unknown in those cases. The formula for the Moon is only applicable to the near side of the Moon, the portion that is visible from the Earth.
Example 1: On 1 January 2019, Venus was from the Sun, and from Earth, at a phase angle of (near quarter phase). Under full-phase conditions, Venus would have been visible at Accounting for the high phase angle, the correction term above yields an actual apparent magnitude of This is close to the value of predicted by the Jet Propulsion Laboratory.
Example 2: At first quarter phase, the approximation for the Moon gives With that, the apparent magnitude of the Moon is close to the expected value of about . At last quarter, the Moon is about 0.06 mag fainter than at first quarter, because that part of its surface has a lower albedo.
Earth's albedo varies by a factor of 6, from 0.12 in the cloud-free case to 0.76 in the case of altostratus cloud. The absolute magnitude in the table corresponds to an albedo of 0.434. Due to the variability of the weather, Earth's apparent magnitude cannot be predicted as accurately as that of most other planets.
Asteroids
If an object has an atmosphere, it reflects light more or less isotropically in all directions, and its brightness can be modelled as a diffuse reflector. Bodies with no atmosphere, like asteroids or moons, tend to reflect light more strongly to the direction of the incident light, and their brightness increases rapidly as the phase angle approaches 0°. This rapid brightening near opposition is called the opposition effect. Its strength depends on the physical properties of the body's surface, and hence it differs from asteroid to asteroid.
In 1985, the IAU adopted the semi-empirical HG-system, based on two parameters $H$ and $G$ called absolute magnitude and slope, to model the opposition effect for the ephemerides published by the Minor Planet Center:
$m = H + 5\log_{10}\left(\frac{d_{BS}\,d_{BO}}{d_0^{2}}\right) - 2.5\log_{10}q(\alpha),$
where
the phase integral is $q(\alpha) = (1 - G)\,\Phi_1(\alpha) + G\,\Phi_2(\alpha)$ and
$\Phi_i(\alpha) = \exp\left(-A_i\left(\tan\frac{\alpha}{2}\right)^{B_i}\right)$ for $i = 1$ or $2$, with $A_1 = 3.332$, $A_2 = 1.862$, $B_1 = 0.631$, and $B_2 = 1.218$.
This relation is valid for phase angles $\alpha < 120^\circ$, and works best when $\alpha < 20^\circ$.
The slope parameter $G$ relates to the surge in brightness, typically 0.3 mag, when the object is near opposition. It is known accurately only for a small number of asteroids, hence for most asteroids a value of $G = 0.15$ is assumed. In rare cases, $G$ can be negative. An example is 101955 Bennu, with $G = -0.08$.
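A sketch of an H–G magnitude prediction using the standard coefficients quoted above; the asteroid parameters in the example are invented for illustration.

```python
import math

def hg_apparent_magnitude(H, G, alpha_deg, d_BS, d_BO, d0=1.0):
    """Predicted apparent magnitude in the IAU H-G system.
    Distances in AU; valid for phase angles below 120 degrees."""
    a = math.radians(alpha_deg)
    phi1 = math.exp(-3.332 * math.tan(a / 2) ** 0.631)
    phi2 = math.exp(-1.862 * math.tan(a / 2) ** 1.218)
    q = (1 - G) * phi1 + G * phi2
    return H + 5 * math.log10(d_BS * d_BO / d0 ** 2) - 2.5 * math.log10(q)

# A hypothetical asteroid with H = 15 and the default slope G = 0.15,
# seen at 20 degrees phase, 2.0 AU from the Sun and 1.2 AU from Earth:
print(hg_apparent_magnitude(15.0, 0.15, 20.0, 2.0, 1.2))  # about 17.9
```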
In 2012, the HG-system was officially replaced by an improved system with three parameters $H$, $G_1$ and $G_2$, which produces more satisfactory results if the opposition effect is very small or restricted to very small phase angles. However, as of 2022, this $H G_1 G_2$ system has not been adopted by either the Minor Planet Center or the Jet Propulsion Laboratory.
The apparent magnitude of asteroids varies as they rotate, on time scales of seconds to weeks depending on their rotation period, by up to or more. In addition, their absolute magnitude can vary with the viewing direction, depending on their axial tilt. In many cases, neither the rotation period nor the axial tilt are known, limiting the predictability. The models presented here do not capture those effects.
Cometary magnitudes
The brightness of comets is given separately as total magnitude ($m_1$, the brightness integrated over the entire visible extent of the coma) and nuclear magnitude ($m_2$, the brightness of the core region alone). Both are different scales from the magnitude scale used for planets and asteroids, and cannot be used for a size comparison with an asteroid's absolute magnitude $H$.
The activity of comets varies with their distance from the Sun. Their brightness can be approximated as
$m_1 = M_1 + 2.5\,K_1\log_{10}\left(\frac{d_{BS}}{d_0}\right) + 5\log_{10}\left(\frac{d_{BO}}{d_0}\right)$
$m_2 = M_2 + 2.5\,K_2\log_{10}\left(\frac{d_{BS}}{d_0}\right) + 5\log_{10}\left(\frac{d_{BO}}{d_0}\right),$
where $m_{1,2}$ are the total and nuclear apparent magnitudes of the comet, respectively, $M_{1,2}$ are its "absolute" total and nuclear magnitudes, $d_{BS}$ and $d_{BO}$ are the body–Sun and body–observer distances, $d_0$ is the astronomical unit, and $K_{1,2}$ are the slope parameters characterising the comet's activity. For $K = 2$, this reduces to the formula for a purely reflecting body (showing no cometary activity).
For example, the lightcurve of comet C/2011 L4 (PANSTARRS) can be approximated by $M_1 = 5.41$, $K_1 = 3.69$. On the day of its perihelion passage, 10 March 2013, comet PANSTARRS was 0.302 AU from the Sun and 1.109 AU from Earth. The total apparent magnitude $m_1$ is predicted to have been +0.8 at that time. The Minor Planet Center gives a value close to that, $m_1 = +0.5$.
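The prediction can be recomputed from the approximation above; this sketch uses the lightcurve parameters quoted for comet PANSTARRS.

```python
import math

def comet_total_magnitude(M1, K1, d_BS, d_BO, d0=1.0):
    """Total apparent magnitude of a comet; distances in AU."""
    return M1 + 2.5 * K1 * math.log10(d_BS / d0) + 5 * math.log10(d_BO / d0)

# C/2011 L4 (PANSTARRS) at perihelion, 10 March 2013:
print(comet_total_magnitude(5.41, 3.69, 0.302, 1.109))  # about +0.8
```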
The absolute magnitude of any given comet can vary dramatically. It can change as the comet becomes more or less active over time or if it undergoes an outburst. This makes it difficult to use the absolute magnitude for a size estimate. When comet 289P/Blanpain was discovered in 1819, its absolute magnitude was estimated as . It was subsequently lost and was only rediscovered in 2003. At that time, its absolute magnitude had decreased to , and it was realised that the 1819 apparition coincided with an outburst. 289P/Blanpain reached naked eye brightness (5–8 mag) in 1819, even though it is the comet with the smallest nucleus that has ever been physically characterised, and usually doesn't become brighter than 18 mag.
For some comets that have been observed at heliocentric distances large enough to distinguish between light reflected from the coma, and light from the nucleus itself, an absolute magnitude analogous to that used for asteroids has been calculated, allowing the sizes of their nuclei to be estimated.
Meteors
For a meteor, the standard distance for measurement of magnitudes is at an altitude of 100 km at the observer's zenith.
|
Observational astronomy
|
https://en.wikipedia.org/wiki/Alpha%20Centauri
|
Alpha Centauri (α Cen, or Alpha Cen) is a star system in the southern constellation of Centaurus. It consists of three stars: Rigil Kentaurus (Alpha Centauri A), Toliman (Alpha Centauri B), and Proxima Centauri (Alpha Centauri C). Proxima Centauri is the closest star to the Sun at 4.2465 light-years (ly), which is 1.3020 pc.
Rigil Kentaurus and Toliman are Sun-like stars (class G and K, respectively) that together form the binary star system . To the naked eye, these two main components appear to be a single star with an apparent magnitude of −0.27. It is the brightest star in the constellation and the third-brightest in the night sky, outshone by only Sirius and Canopus.
Rigil Kentaurus has 1.1 times the mass and 1.5 times the luminosity of the Sun, while Toliman is smaller and cooler, at about 0.9 times the Sun's mass and less than half its luminosity. The pair orbit around a common centre with an orbital period of 79 years. Their elliptical orbit is eccentric, so that the distance between A and B varies from 35.6 astronomical units (AU), or about the distance between Pluto and the Sun, to 11.2 AU, or about the distance between Saturn and the Sun. One astronomical unit is the distance from Earth to the Sun, 150 million kilometers.
Proxima Centauri is a small faint red dwarf (class M). Though not visible to the naked eye, Proxima Centauri is the closest star to the Sun at a distance of 4.25 light-years, slightly closer than Alpha Centauri AB at 4.37 light-years. The distance between Proxima Centauri and Alpha Centauri AB is about 0.21 light-years (13,000 AU), equivalent to about 430 times the radius of Neptune's orbit.
Proxima Centauri has one confirmed planet: Proxima b, an Earth-sized planet in the habitable zone (though it is unlikely to be habitable); one candidate planet, Proxima d, a sub-Earth which orbits very closely to the star; and the controversial Proxima c, a mini-Neptune 1.5 astronomical units away. Rigil Kentaurus may have a Neptune-sized planet in the habitable zone, though it is not yet known with certainty to be planetary in nature and could be an artifact of the discovery mechanism. Toliman has no known planets.
Etymology and nomenclature
α Centauri (Latinised to Alpha Centauri) is the system's designation given by J. Bayer in 1603. It belongs to the constellation Centaurus, named after the part human, part horse creature in Greek mythology; Heracles accidentally wounded the centaur and placed him in the sky after his death. Alpha Centauri marks the right front hoof of the Centaur. The common name Rigil Kentaurus is a Latinisation of the Arabic translation Rijl al-Qinṭūrus, meaning "the Foot of the Centaur". Qinṭūrus is the Arabic transliteration of the Greek (Kentaurus). The name is frequently abbreviated to Rigil Kent () or even Rigil, though the latter name is better known for Rigel ( Orionis).
An alternative name found in European sources, Toliman, is an approximation of the Arabic aẓ-Ẓalīmān (in older transcription, aṭ-Ṭhalīmān), meaning 'the (two male) Ostriches', an appellation Zakariya al-Qazwini had applied to the pair of stars Lambda and Mu Sagittarii; it was often unclear on old star maps which name was intended to go with which star (or stars), and the referents changed over time. The name Toliman originates with Jacob Golius' 1669 edition of Al-Farghani's Compendium. Tolimân is Golius' Latinisation of the Arabic name "the ostriches", the name of an asterism of which Alpha Centauri formed the main star.
was discovered in 1915 by Robert T. A. Innes, who suggested that it be named Proxima Centaurus. The name Proxima Centauri later became more widely used and is now listed by the International Astronomical Union (IAU) as the approved proper name; it is frequently abbreviated to Proxima.
In 2016, the Working Group on Star Names of the IAU, having decided to attribute proper names to individual component stars rather than to multiple systems, approved the name Rigil Kentaurus as being restricted to Alpha Centauri A and the name Proxima Centauri for Alpha Centauri C. On 10 August 2018, the IAU approved the name Toliman for Alpha Centauri B.
Other names
During the 19th century, the northern amateur popularist E.H. Burritt used the now-obscure name Bungula (). Its origin is not known, but it may have been coined from the Greek letter beta () and Latin 'hoof', originally for Beta Centauri (the other hoof).
In Chinese astronomy, Nán Mén, meaning Southern Gate, refers to an asterism consisting of Alpha Centauri and Epsilon Centauri. Consequently, the Chinese name for Alpha Centauri itself is Nán Mén Èr, the Second Star of the Southern Gate.
To the Indigenous Boorong people of northwestern Victoria in Australia, Alpha Centauri and Beta Centauri are Bermbermgle, two brothers noted for their courage and destructiveness, who speared and killed Tchingal "The Emu" (the Coalsack Nebula). The form in Wotjobaluk is Bram-bram-bult.
Observation
To the naked eye, Alpha Centauri A and B appear to be a single star, the brightest in the southern constellation of Centaurus. Their apparent angular separation varies over about 80 years between 2 and 22 arcseconds (the naked eye has a resolution of 60 arcsec), but through much of the orbit, both are easily resolved in binoculars or small telescopes. At −0.27 apparent magnitude (combined from the A and B magnitudes of +0.01 and +1.35), Alpha Centauri is a first-magnitude star and is fainter only than Sirius and Canopus. It is the outer star of The Pointers or The Southern Pointers, so called because the line through Beta Centauri (Hadar/Agena), some 4.5° west, points to the constellation Crux—the Southern Cross. The Pointers easily distinguish the true Southern Cross from the fainter asterism known as the False Cross.
South of about 29° South latitude, Alpha Centauri is circumpolar and never sets below the horizon. North of about 29° N latitude, Alpha Centauri never rises. Alpha Centauri lies close to the southern horizon when viewed from latitude 29° N to the equator (close to Hermosillo and Chihuahua City in Mexico; Galveston, Texas; Ocala, Florida; and Lanzarote, the Canary Islands of Spain), but only for a short time around its culmination. The star culminates each year at local midnight on 24 April and at local 9 p.m. on 8 June.
As seen from Earth, Proxima Centauri is 2.2° southwest of Alpha Centauri AB; this separation is about four times the angular diameter of the full Moon. Proxima Centauri appears as a deep-red star of a typical apparent magnitude of 11.1 in a sparsely populated star field, requiring moderately sized telescopes to be seen. Listed as V645 Cen in the General Catalogue of Variable Stars, version 4.2, this UV Ceti star or "flare star" can unexpectedly brighten rapidly by as much as 0.6 magnitude at visual wavelengths, then fade after only a few minutes. Some amateur and professional astronomers regularly monitor the star for outbursts using either optical or radio telescopes. In August 2015, the largest recorded flares of the star occurred, with the star becoming 8.3 times brighter than normal on 13 August, in the B band (blue light region).
Observational history
Alpha Centauri is listed in the 2nd-century star catalog appended to Ptolemy's Almagest. Ptolemy gave its ecliptic coordinates, but surviving texts differ in the value they record for the ecliptic latitude, which has in any case decreased by a fraction of a degree since Ptolemy's time due to proper motion. In Ptolemy's time, Alpha Centauri was visible from Alexandria, Egypt, but, due to precession, its declination is now about −60°50′, and it can no longer be seen at that latitude. English explorer Robert Hues brought Alpha Centauri to the attention of European observers in his 1592 work Tractatus de Globis, along with Canopus and Achernar, noting:
The binary nature of Alpha Centauri AB was recognized in December 1689 by Jean Richaud, while observing a passing comet from his station in Puducherry. Alpha Centauri was only the third binary star to be discovered, preceded by Mizar AB and Acrux.
The large proper motion of Alpha Centauri AB was discovered by Manuel John Johnson, observing from Saint Helena, who informed Thomas Henderson at the Royal Observatory, Cape of Good Hope of it. The parallax of Alpha Centauri was subsequently determined by Henderson from many exacting positional observations of the AB system between April 1832 and May 1833. He withheld his results, however, because he suspected they were too large to be true, but eventually published them in 1839 after Bessel released his own accurately determined parallax for 61 Cygni in 1838. For this reason, Alpha Centauri is sometimes considered as the second star to have its distance measured because Henderson's work was not fully acknowledged at first. (The distance of Alpha Centauri from the Earth is now reckoned at 4.396 light-years, or about 41.6 trillion kilometres.)
John Herschel made the first micrometrical observations in 1834. Since the early 20th century, measures have been made with photographic plates.
By 1926, William Stephen Finsen calculated the approximate orbit elements close to those now accepted for this system. All future positions are now sufficiently accurate for visual observers to determine the relative places of the stars from a binary star ephemeris. Others, like D. Pourbaix (2002), have regularly refined the precision of new published orbital elements.
Robert T. A. Innes discovered Proxima Centauri in 1915 by blinking photographic plates taken at different times during a proper motion survey. These showed large proper motion and parallax similar in both size and direction to those of Alpha Centauri AB, which suggested that Proxima Centauri is part of the Alpha Centauri system and slightly closer to Earth than Alpha Centauri AB. As a result, Innes concluded that Proxima Centauri was the closest star to Earth yet discovered.
Location and motion
Alpha Centauri may be inside the G-cloud of the Local Bubble, and its nearest known system is the binary brown dwarf system Luhman 16, at a distance of 3.6 light-years.
Historical distance estimates
{| class="wikitable sortable mw-collapsible"
|+ Alpha Centauri AB historical distance estimates
|-
! rowspan="2" | Source
! rowspan="2" |Year
! rowspan="2" |Subject!! rowspan="2" | Parallax (mas) !! colspan="3" | Distance !! rowspan="2" | References
|-
!parsecs !! light-years !! petametres
|-
| H. Henderson || 1839 || AB || || || 2.81 ± 0.53 || ||
|-
| T. Henderson
|1842
|AB
|
| 1.10 ± 0.15
| 3.57 ± 0.5
|
|
|-
| Maclear
|1851
|AB
|
|
|
| 32.4 ± 2.5
|
|-
| Moesta
|1868
|AB
|
|
|
|
|
|-
| Gill & Elkin
|1885
|AB
|
|
|
|
|
|-
| Roberts
|1895
|AB
|
| 1.32 ± 0.2
| 4.29 ± 0.65
|
|
|-
| Woolley et al.
|1970
|AB
|
|
|
|
|
|-
| Gliese & Jahreiß
|1991
|AB
|
|
|
|
|
|-
| van Altena et al.
| 1995
| AB
|
|
|
|
|
|-
| Perryman et al.
| 1997
| AB
|
|
|
|
|-
| Söderhjelm
| 1999
| AB
|
|
|
|
|
|-
|rowspan="2"| van Leeuwen
|rowspan="2"| 2007
| A
|
|
|
|
|
|-
| B
|
|
|
| 37.5 ± 2.5
|
|-
| RECONS TOP100
|2012
|AB
|
|
|
|
|
|}
Kinematics
All components of display significant proper motion against the background sky. Over centuries, this causes their apparent positions to slowly change. Proper motion was unknown to ancient astronomers. Most assumed that the stars were permanently fixed on the celestial sphere, as stated in the works of the philosopher Aristotle. In 1718, Edmond Halley found that some stars had significantly moved from their ancient astrometric positions.
In the 1830s, Thomas Henderson discovered the true distance to by analysing his many astrometric mural circle observations. He then realised this system also likely had a high proper motion. In this case, the apparent stellar motion was found using Nicolas Louis de Lacaille's astrometric observations of 1751–1752, by the observed differences between the two measured positions in different epochs.
Calculated proper motion of the centre of mass for Alpha Centauri AB is about 3620 mas/y (milliarcseconds per year) toward the west and 694 mas/y toward the north, giving an overall motion of 3686 mas/y in a direction 11° north of west. The motion of the centre of mass is about 6.1 arcmin each century, or 1.02° each millennium. At the system's distance, the speed in the western direction is about 23 km/s and in the northerly direction about 4.4 km/s. Using spectroscopy, the mean radial velocity has been determined to be around 22 km/s towards the Solar System. This gives a speed with respect to the Sun of about 32 km/s, very close to the peak in the distribution of speeds of nearby stars.
Since is almost exactly in the plane of the Milky Way as viewed from Earth, many stars appear behind it. In early May 2028, will pass between the Earth and a distant red star, when there is a 45% probability that an Einstein ring will be observed. Other conjunctions will also occur in the coming decades, allowing accurate measurement of proper motions and possibly giving information on planets.
Predicted future changes
Based on the system's common proper motion and radial velocities, will continue to change its position in the sky significantly and will gradually brighten. For example, in about 6,200 CE, α Centauri's true motion will cause an extremely rare first-magnitude stellar conjunction with Beta Centauri, forming a brilliant optical double star in the southern sky. It will then pass just north of the Southern Cross or Crux, before moving northwest and up towards the present celestial equator and away from the galactic plane. By about 26,700 CE, in the present-day constellation of Hydra, will reach perihelion at away, though later calculations suggest that this will occur in 27,000 AD. At its nearest approach, α Centauri will attain a maximum apparent magnitude of −0.86, comparable to present-day magnitude of Canopus, but it will still not surpass that of Sirius, which will brighten incrementally over the next 60,000 years, and will continue to be the brightest star as seen from Earth (other than the Sun) for the next 210,000 years.
Stellar system
Alpha Centauri is a triple star system, with its two main stars, A and B, together comprising a binary component. The AB designation, or older A×B, denotes the mass centre of a main binary system relative to companion star(s) in a multiple star system. AB-C refers to the component of Proxima Centauri in relation to the central binary, being the distance between the centre of mass and the outlying companion. Because the distance between Proxima (C) and either of Alpha Centauri A or B is similar, the AB binary system is sometimes treated as a single gravitational object.
Orbital properties
The A and B components of Alpha Centauri have an orbital period of 79.762 years. Their orbit is moderately eccentric, as it has an eccentricity of almost 0.52; their closest approach or periastron is 11.2 AU, or about the distance between the Sun and Saturn; and their furthest separation or apastron is 35.6 AU, about the distance between the Sun and Pluto. The most recent periastron was in August 1955 and the next will occur in May 2035; the most recent apastron was in May 1995 and will next occur in 2075.
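Periastron and apastron follow from the semi-major axis and eccentricity of a Keplerian orbit; in this sketch the semi-major axis is inferred from the quoted extremes rather than taken from a catalogue.

```python
# For a Keplerian orbit: r_min = a (1 - e), r_max = a (1 + e).
a_AU = 23.4   # semi-major axis implied by the quoted extremes, in AU
e = 0.52      # orbital eccentricity

periastron = a_AU * (1 - e)  # ~11.2 AU, about the Sun-Saturn distance
apastron = a_AU * (1 + e)    # ~35.6 AU, about the Sun-Pluto distance
print(periastron, apastron)
```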
Viewed from Earth, the apparent orbit of A and B means that their separation and position angle (PA) are in continuous change throughout their projected orbit. Observed stellar positions in 2019 are separated by 4.92 arcsec through the PA of 337.1°, increasing to 5.49 arcsec through 345.3° in 2020. The closest recent approach was in February 2016, at 4.0 arcsec through the PA of 300°. The observed maximum separation of these stars is about 22 arcsec, while the minimum distance is 1.7 arcsec. The widest separation occurred during February 1976, and the next will be in January 2056.
Alpha Centauri C is about 13,000 AU (0.21 light-years) from Alpha Centauri AB, equivalent to about 5% of the distance between Alpha Centauri AB and the Sun. Until 2017, measurements of its small speed and its trajectory were of too little accuracy and duration in years to determine whether it is bound to Alpha Centauri AB or unrelated.
Radial velocity measurements made in 2017 were precise enough to show that Proxima Centauri and Alpha Centauri AB are gravitationally bound. The orbital period of Proxima Centauri is approximately 550,000 years, with an eccentricity of 0.5, much more eccentric than Mercury's. Proxima Centauri comes within about 4,300 AU of AB at periastron, and its apastron occurs at about 13,000 AU.
Physical properties
Asteroseismic studies, chromospheric activity, and stellar rotation (gyrochronology) are all consistent with the Alpha Centauri system being similar in age to, or slightly older than, the Sun. Asteroseismic analyses that incorporate tight observational constraints on the stellar parameters for the Alpha Centauri stars have yielded age estimates of Gyr, Gyr, 6.4 Gyr, and Gyr. Age estimates for the stars based on chromospheric activity (Calcium H & K emission) yield whereas gyrochronology yields Gyr. Stellar evolution theory implies both stars are slightly older than the Sun at 5 to 6 billion years, as derived by their mass and spectral characteristics.
From the orbital elements, the total mass of Alpha Centauri AB is about twice that of the Sun. The average individual stellar masses are about 1.1 and 0.9 solar masses, respectively, though slightly different masses have also been quoted in recent years. Alpha Centauri A and B have absolute magnitudes of +4.38 and +5.71, respectively.
Alpha Centauri AB System
Alpha Centauri A
Alpha Centauri A, also known as Rigil Kentaurus, is the principal member, or primary, of the binary system. It is a solar-like main-sequence star with a similar yellowish colour, whose stellar classification is spectral type G2-V; it is about 10% more massive than the Sun, with a radius about 22% larger. When considered among the individual brightest stars in the night sky, it is the fourth-brightest at an apparent magnitude of +0.01, being slightly fainter than Arcturus at an apparent magnitude of −0.05.
The type of magnetic activity on Alpha Centauri A is comparable to that of the Sun, showing coronal variability due to star spots, as modulated by the rotation of the star. However, since 2005 the activity level has fallen into a deep minimum that might be similar to the Sun's historical Maunder Minimum. Alternatively, it may have a very long stellar activity cycle and is slowly recovering from a minimum phase.
Alpha Centauri B
Alpha Centauri B, also known as Toliman, is the secondary star of the binary system. It is a main-sequence star of spectral type K1-V, making it more orange in colour than Alpha Centauri A; it has around 90% of the mass of the Sun and a 14% smaller diameter. Although it has a lower luminosity than A, Alpha Centauri B emits more energy in the X-ray band. Its light curve varies on a short time scale, and there has been at least one observed flare. It is more magnetically active than Alpha Centauri A, showing an activity cycle of about 8 years compared to 11 years for the Sun, and has about half the minimum-to-peak variation in coronal luminosity of the Sun. This cycle was recently re-estimated based on more than 20 years of high-resolution spectroscopic observations of the Ca II H&K lines. Alpha Centauri B has an apparent magnitude of +1.35, slightly dimmer than Mimosa.
Alpha Centauri C
Alpha Centauri C, better known as Proxima Centauri, is a small main-sequence red dwarf of spectral class M6-Ve. It has an absolute magnitude of +15.60, over 20,000 times fainter than the Sun. Its mass is calculated to be . It is the closest star to the Sun but is too faint to be visible to the naked eye.
Planetary system
The Alpha Centauri system as a whole has two confirmed planets, both of them around Proxima Centauri. While other planets have been claimed to exist around all of the stars, none of the discoveries have been confirmed.
Planets of Proxima Centauri
Proxima Centauri b is a terrestrial planet discovered in 2016 by astronomers at the European Southern Observatory (ESO). It has an estimated minimum mass of 1.17 Earth masses and orbits approximately 0.049 AU from Proxima Centauri, placing it in the star's habitable zone.
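The habitable-zone placement can be sanity-checked with the inverse-square law; a minimal sketch, assuming a luminosity of about 0.0017 L☉ for Proxima Centauri (a commonly cited value, not stated in this article):

```python
# Stellar flux at the planet, relative to Earth's insolation:
#   S / S_Earth = (L / L_Sun) / (a / AU)^2
L_STAR = 0.0017   # Proxima Centauri's luminosity in solar units (assumed)
A_ORBIT = 0.049   # Proxima b's orbital distance in AU (from the text)

s_rel = L_STAR / A_ORBIT**2
print(f"Proxima b receives roughly {s_rel:.0%} of Earth's insolation")  # ~71%
```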
The discovery of Proxima Centauri c was formally published in 2020; it could be a super-Earth or mini-Neptune. It has a mass of roughly 7 Earth masses and orbits about from Proxima Centauri with a period of . In June 2020, a possible direct imaging detection of the planet hinted at the presence of a large ring system. However, a 2022 study disputed the existence of this planet.
A 2020 paper refining Proxima b's mass excludes the presence of extra companions with masses above at periods shorter than 50 days, but the authors detected a radial-velocity curve with a periodicity of 5.15 days, suggesting the presence of a planet with a mass of about . This planet, Proxima Centauri d, was detected in 2022.
Planets of Alpha Centauri A
In 2021, a candidate planet named Candidate 1 (or C1) was detected around Alpha Centauri A, thought to orbit at approximately with a period of about one year, and to have a mass between that of Neptune and one-half that of Saturn, though it may be a dust disk or an artifact. The possibility of C1 being a background star has been ruled out. If this candidate is confirmed, the temporary name C1 will most likely be replaced with the scientific designation Alpha Centauri Ab in accordance with current naming conventions.
GO Cycle 1 observations with the James Webb Space Telescope (JWST) were planned to search for planets around Alpha Centauri A, along with observations of Epsilon Muscae. The coronagraphic observations, which occurred on July 26 and 27, 2023, were failures, with follow-up observations planned for March 2024. Pre-launch estimates predicted that JWST would be able to find planets with a radius of 5 Earth radii at . Multiple observations every 3–6 months could push the limit down to 3 Earth radii. Post-launch estimates based on observations of HIP 65426 b find that JWST will be able to find planets even closer to Alpha Centauri A and could find a 5-Earth-radius planet at . Candidate 1 has an estimated radius between and orbits at . It is therefore likely within the reach of JWST observations.
Planets of Alpha Centauri B
The first claim of a planet around Alpha Centauri B was that of Alpha Centauri Bb in 2012, which was proposed to be an Earth-mass planet in a 3.2-day orbit. This was refuted in 2015 when the apparent planet was shown to be an artifact of the way the radial velocity data was processed.
A search for transits of planet Bb was conducted with the Hubble Space Telescope from 2013 to 2014. This search detected one potential transit-like event, which could be associated with a different planet with a radius around . This planet would most likely orbit Alpha Centauri B with an orbital period of 20.4 days or less, with only a 5% chance of its having a longer orbit; the median of the likely orbits is 12.4 days. Its orbit would likely have an eccentricity of 0.24 or less. It could have lakes of molten lava and would be far too close to Alpha Centauri B to harbour life. If confirmed, this planet might be called Alpha Centauri Bc. However, that name has not been used in the literature, as it is not a claimed discovery.
Hypothetical planets
Additional planets may exist in the Alpha Centauri system, either orbiting Alpha Centauri A or Alpha Centauri B individually, or in large orbits around Alpha Centauri AB. Because both stars are fairly similar to the Sun (for example, in age and metallicity), astronomers have been especially interested in making detailed searches for planets in the Alpha Centauri system. Several established planet-hunting teams have used various radial velocity or star transit methods in their searches around these two bright stars. All the observational studies have so far failed to find evidence for brown dwarfs or gas giants.
In 2009, computer simulations showed that a planet might have been able to form near the inner edge of Alpha Centauri B's habitable zone. Certain special assumptions, such as considering that the Alpha Centauri pair may have initially formed with a wider separation and later moved closer to each other (as might be possible if they formed in a dense star cluster), would permit an accretion-friendly environment farther from the star. Bodies around Alpha Centauri A would be able to orbit at slightly farther distances due to its stronger gravity. In addition, the lack of any brown dwarfs or gas giants in close orbits around Alpha Centauri makes the likelihood of terrestrial planets greater than otherwise. A theoretical study indicates that a radial velocity analysis might detect a hypothetical planet of in Alpha Centauri B's habitable zone.
Radial velocity measurements of Alpha Centauri B made with the High Accuracy Radial Velocity Planet Searcher spectrograph were sufficiently sensitive to detect a planet within the habitable zone of the star (i.e. with an orbital period P = 200 days), but no planets were detected.
Current estimates place the probability of finding an Earth-like planet around Alpha Centauri at roughly 75%. The observational thresholds for planet detection in the habitable zones by the radial velocity method are currently (2017) estimated to be about for Alpha Centauri A, for Alpha Centauri B, and for Proxima Centauri.
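These thresholds follow from the standard radial-velocity semi-amplitude expression; a sketch for a circular, edge-on orbit (the ~0.9 M☉ mass used below for Alpha Centauri B is the figure given earlier in this article):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_EARTH = 5.972e24   # kg

def rv_semi_amplitude(m_planet, m_star, period_s, e=0.0, sin_i=1.0):
    """Stellar reflex semi-amplitude K in m/s (standard two-body result,
    valid for m_planet << m_star)."""
    return ((2 * math.pi * G / period_s) ** (1 / 3) * m_planet * sin_i
            / (m_star ** (2 / 3) * math.sqrt(1 - e ** 2)))

# Earth-mass planet in a 200-day habitable-zone orbit around ~0.9 M_sun Alpha Centauri B
K = rv_semi_amplitude(M_EARTH, 0.9 * M_SUN, 200 * 86400)
print(f"K = {K:.3f} m/s")  # on the order of 0.1 m/s, hence the precision demands
```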
Early computer-generated models of planetary formation predicted the existence of terrestrial planets around both Alpha Centauri A and B, but most recent numerical investigations have shown that the gravitational pull of the companion star renders the accretion of planets difficult. Despite these difficulties, given the similarities to the Sun in spectral type, age, and the probable stability of orbits, it has been suggested that this stellar system could hold one of the best possibilities for harbouring extraterrestrial life on a potential planet.
In the Solar System, it was once thought that Jupiter and Saturn were probably crucial in perturbing comets into the inner Solar System, providing the inner planets with a source of water and various other ices. However, since isotope measurements of the deuterium to hydrogen (D/H) ratio in comets Halley, Hyakutake, Hale–Bopp, 2002T7, and Tuttle yield values approximately twice that of Earth's oceanic water, more recent models and research predict that less than 10% of Earth's water was supplied from comets. In the Alpha Centauri system, Proxima Centauri may have influenced the planetary disk as the system was forming, enriching the area around Alpha Centauri with volatile materials. This would be discounted if, for example, A happened to have gas giants orbiting it (or vice versa), or if A and B themselves were able to perturb comets into each other's inner systems, as Jupiter and Saturn presumably have done in the Solar System. Such icy bodies probably also reside in the Oort clouds of other planetary systems. When they are influenced gravitationally by either the gas giants or disruptions by passing nearby stars, many of these icy bodies then travel star-wards. Such ideas also apply to the close approach of Alpha Centauri or other stars to the Solar System, when, in the distant future, the Oort cloud might be disrupted enough to increase the number of active comets.
To be in the habitable zone, a planet around Alpha Centauri A would have an orbital radius of between about 1.2 and so as to have similar planetary temperatures and conditions for liquid water to exist. For the slightly less luminous and cooler Alpha Centauri B, the habitable zone is between about 0.7 and .
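These radii follow from scaling Earth's insolation to each star's luminosity. In a commonly used first-order approximation (using the visual luminosities estimated above, roughly 1.5 L☉ for A and 0.44 L☉ for B, and neglecting spectral-type corrections):

$$ r_{\mathrm{HZ}} \approx \sqrt{L_\ast / L_\odot}\ \mathrm{AU}, $$

which gives $\sqrt{1.5} \approx 1.2$ AU for A and $\sqrt{0.44} \approx 0.7$ AU for B, matching the inner distances quoted here.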
With the goal of finding evidence of such planets, both Proxima Centauri and Alpha Centauri AB were among the listed "Tier-1" target stars for NASA's Space Interferometry Mission (SIM). Detecting planets as small as three Earth masses or smaller within two AU of a "Tier-1" target would have been possible with this new instrument. The SIM mission, however, was cancelled due to financial issues in 2010.
Circumstellar discs
Based on observations between 2007 and 2012, a study found a slight excess of emission in the 24 μm (mid/far-infrared) band surrounding the system, which may be interpreted as evidence for a sparse circumstellar disc or dense interplanetary dust. The total mass was estimated to be a small fraction of the mass of the Moon, or 10–100 times the mass of the Solar System's zodiacal cloud. If such a disc existed around both stars, A's disc would likely be stable to , and B's disc would likely be stable to . This would put A's disc entirely within the frost line, and a small part of B's outer disc just outside.
View from this system
The sky from Alpha Centauri would appear much as it does from the Earth, except that Centaurus's brightest star, Alpha Centauri itself, would be absent from the constellation. The Sun would appear as a white star of apparent magnitude +0.5, roughly the same as the average brightness of Betelgeuse from Earth. It would lie at the antipodal point of Alpha Centauri's current right ascension and declination, in eastern Cassiopeia, easily outshining all the rest of the stars in the constellation. With the placement of the Sun east of the magnitude 3.4 star Epsilon Cassiopeiae, nearly in front of the Heart Nebula, the "W" line of stars of Cassiopeia would have a "/W" shape.
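The Sun's quoted magnitude of +0.5 can be recovered with the distance modulus; a minimal sketch, assuming the Sun's absolute visual magnitude is about +4.83 and a distance of about 1.34 parsecs (both assumed reference values, not stated in this section):

```python
import math

M_SUN = 4.83   # absolute visual magnitude of the Sun (assumed reference value)
D_PC = 1.34    # Sun-Alpha Centauri distance in parsecs (assumed, ~4.37 ly)

# Distance modulus: m = M + 5 * log10(d / 10 pc)
m = M_SUN + 5 * math.log10(D_PC / 10)
print(f"Apparent magnitude of the Sun seen from Alpha Centauri: {m:+.1f}")  # about +0.5
```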
The placement of other nearby stars would also change noticeably. Sirius, at 9.2 light-years from the system, would still be the brightest star in the night sky, with a magnitude of −1.2, but would be located in Orion, less than a degree away from Betelgeuse. Procyon, which would be slightly farther away than it is from the Sun, would move into the middle of Gemini, outshining Pollux.
A planet around either A or B would see the other star as a very bright secondary. For example, an Earth-like planet at from A (with a revolution period of 1.34 years) would get Sun-like illumination from its primary, while B would appear 5.7–8.6 magnitudes dimmer (−21.0 to −18.2), 190–2,700 times dimmer than the primary but still 150–2,100 times brighter than the full Moon. Conversely, an Earth-like planet at from B (with a revolution period of 0.63 years) would get nearly Sun-like illumination from its primary, while A would appear 4.6–7.3 magnitudes dimmer (−22.1 to −19.4), 70 to 840 times dimmer than the primary but still 470–5,700 times brighter than the full Moon.
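The "times dimmer" and "times brighter" factors here are magnitude differences converted to flux ratios via the standard relation

$$ \frac{F_1}{F_2} = 10^{\,0.4\,(m_2 - m_1)}; $$

for example, a 5.7-magnitude difference corresponds to a factor of $10^{0.4 \times 5.7} \approx 190$, and an 8.6-magnitude difference to a factor of about 2,700.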
Proxima Centauri would appear dim as one of many stars, being magnitude 4.5 at its current distance, and magnitude 2.6 at periastron.
Future exploration
Alpha Centauri is an obvious first target for crewed or robotic interstellar exploration. Using current spacecraft technologies, crossing the distance between the Sun and Alpha Centauri would take several millennia, though nuclear pulse propulsion or laser light-sail technology, as considered in the Breakthrough Starshot program, could cut the journey to about 20 years. An objective of such a mission would be to make a fly-by of, and possibly photograph, planets that might exist in the system. Proxima Centauri b, announced by the European Southern Observatory (ESO) in August 2016, would be a target for the Starshot program.
In 2017, NASA released a mission concept that would send a spacecraft to Alpha Centauri in 2069, timed to coincide with the 100th anniversary of the first crewed lunar landing in 1969. Even at 10% of the speed of light (about 108 million km/h), which NASA experts say may be possible, the spacecraft would take 44 years to reach the system, arriving around the year 2113, and its signals would then take another four years to reach Earth, around the year 2117. The concept received no further funding or development.
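The timeline arithmetic is easy to check; a minimal sketch using the 4.37 light-year distance given earlier in this article:

```python
DISTANCE_LY = 4.37   # Sun-Alpha Centauri distance in light-years
SPEED_C = 0.10       # cruise speed as a fraction of the speed of light
LAUNCH_YEAR = 2069

travel_years = DISTANCE_LY / SPEED_C   # ~44 years in transit
signal_years = DISTANCE_LY             # return signal travels at light speed

print(f"arrival ~{LAUNCH_YEAR + round(travel_years)}, "
      f"first data at Earth ~{LAUNCH_YEAR + round(travel_years + signal_years)}")
# -> arrival ~2113, first data at Earth ~2117
```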
In culture
Alpha Centauri has been recognized in many cultures throughout history, particularly in the Southern Hemisphere. Polynesians have used Alpha Centauri in their star navigation and called it Kamailehope. In the Ngarrindjeri culture of Australia, Alpha Centauri and Beta Centauri represent two sharks chasing a stingray, the Southern Cross; in Incan culture, the pair form the eyes of a llama-shaped dark constellation embedded in the band of stars that the visible Milky Way forms in the sky. It was also revered in ancient Egypt, and in China it is known as part of the South Gate asterism.
The Sagan Planet Walk in Ithaca, New York, is a walkable scale model of the Solar System. An obelisk representing the scaled position of Alpha Centauri has been added at the ʻImiloa Astronomy Center in Hawaii.
|
;0559;071681 and 071683;128620 and 128621;16891215;5759 and 5760;Articles containing video clips;Astronomical objects known since antiquity;Centauri, Alpha;Centaurus;G-type main-sequence stars;Hypothetical planetary systems;K-type main-sequence stars;M-type main-sequence stars;Maunder Minimum;PD-60 05483;Rigil Kentaurus;Triple star systems
|
https://en.wikipedia.org/wiki/Arbor%20Day
|
Arbor Day (or Arbour Day in some countries) is a secular day of observance in which individuals and groups are encouraged to plant trees. Today, many countries observe such a holiday. Though usually observed in the spring, the date varies, depending on climate and suitable planting season.
Origins and history
First Arbor Day
The Spanish village of Mondoñedo held the first documented tree-planting festival in the world, organized by its mayor in 1594. The site, known as the Alameda de los Remedios, is still planted with lime and horse-chestnut trees, and a humble granite marker and a bronze plate recall the event. Additionally, the small Spanish village of Villanueva de la Sierra held the first modern Arbor Day, an initiative launched in 1805 by the local priest with the enthusiastic support of the entire population.
First American Arbor Day
The first American Arbor Day was originated by J. Sterling Morton of Nebraska City, Nebraska, at an annual meeting of the Nebraska State board of agriculture held in Lincoln. On April 10, 1872, an estimated one million trees were planted in Nebraska.
In 1883, the American Forestry Association made Birdsey Northrop of Connecticut the chairman of the committee to campaign for Arbor Day nationwide; Northrop further globalized the idea when he visited Japan in 1895 and delivered his Arbor Day and Village Improvement message. He also brought his enthusiasm for Arbor Day to Australia, Canada, and other countries in Europe.
McCreight and Theodore Roosevelt
Beginning in 1906, Pennsylvania conservationist Major Israel McCreight of DuBois, Pennsylvania, argued that President Theodore Roosevelt's conservation speeches were limited to businessmen in the lumber industry and recommended a campaign of youth education and a national policy on conservation education. McCreight urged Roosevelt to make a public statement to school children about trees and the destruction of American forests. Conservationist Gifford Pinchot, Chief of the United States Forest Service, embraced McCreight's recommendations and asked the President to speak to the public school children of the United States about conservation. On April 15, 1907, Roosevelt issued an "Arbor Day Proclamation to the School Children of the United States" about the importance of trees and that forestry deserves to be taught in U.S. schools. Pinchot wrote McCreight, "we shall all be indebted to you for having made the suggestion."
Around the world
Australia
Arbor Day has been observed in Australia since the first event took place in Adelaide, South Australia, on 20 June 1889. National Schools Tree Day is held on the last Friday of July for schools, and National Tree Day on the last Sunday in July throughout Australia. Many states have Arbour Day, although Victoria has an Arbour Week, which was suggested by Premier Rupert (Dick) Hamer in the 1980s.
Belgium
International Day of Treeplanting is celebrated in Flanders on or around 21 March as a theme-day/educational-day/observance, not as a public holiday. Tree planting is sometimes combined with awareness campaigns of the fight against cancer: Kom Op Tegen Kanker.
Brazil
The Arbor Day (Dia da Árvore) is celebrated on September 21. It is not a national holiday. However, schools nationwide celebrate this day with environment-related activities, namely tree planting.
British Virgin Islands
Arbour Day is celebrated on November 22. It is sponsored by the National Parks Trust of the Virgin Islands. Activities include an annual national Arbour Day Poetry Competition and tree planting ceremonies throughout the territory.
Cambodia
Cambodia celebrates Arbor Day on July 9 with a tree planting ceremony attended by the king.
Canada
The day was founded by Sir George William Ross, later the premier of Ontario, when he was minister of education in Ontario (1883–1899). According to the Ontario Teachers' Manuals "History of Education" (1915), Ross established both Arbour Day and Empire Day—"the former to give the school children an interest in making and keeping the school grounds attractive, and the latter to inspire the children with a spirit of patriotism" (p. 222). This predates the claimed founding of the day by Don Clark of Schomberg, Ontario for his wife Margret Clark in 1906. In Canada, National Forest Week is the last full week of September, and National Tree Day (Maple Leaf Day) falls on the Wednesday of that week. Ontario celebrates Arbour Week from the last Friday in April to the first Sunday in May. Prince Edward Island celebrates Arbour Day on the third Friday in May during Arbour Week. Arbour Day is the longest running civic greening project in Calgary and is celebrated on the first Thursday in May. On this day, each grade 1 student in Calgary's schools receives a tree seedling to be taken home to be planted on private property.
Central African Republic
National Tree Planting Day is on July 22.
Chile
"Dia del Arbol" was celebrated on June 28, 2022, as defined by Chile's Environment Ministry
Greater China
Republic of China (Taiwan)
Arbor Day (植樹節) was founded by the forester Ling Daoyang and has been a traditional holiday in the Republic of China since 1916: the Beiyang government's Ministry of Agriculture and Commerce first commemorated Arbor Day in 1915 at his suggestion, and in 1916 the government announced that all provinces of the Republic of China would celebrate Arbor Day on the same day as the Qingming Festival, April 5 (the first day of the fifth solar term of the traditional Chinese lunisolar calendar), despite the differences in climate across China. From 1929, by decree of the Nationalist government, Arbor Day was moved to March 12 to commemorate the death of Sun Yat-sen, who had been a major advocate of afforestation in his life. Following the retreat of the government of the Republic of China to Taiwan in 1949, the celebration of Arbor Day on March 12 was retained.
People's Republic of China
In the People's Republic of China, the fourth session of the Fifth National People's Congress adopted the Resolution on the Unfolding of a Nationwide Voluntary Tree-planting Campaign in 1979. This resolution established Arbor Day (植树节), also on March 12, and stipulated that every able-bodied citizen between the ages of 11 and 60 should plant three to five trees per year or do the equivalent amount of work in seedling, cultivation, tree tending, or other services. Supporting documentation instructs all units to report population statistics to the local afforestation committees for workload allocation. Many couples choose to marry the day before the annual celebration, planting a tree to mark the beginning of their life together alongside the new life of the tree.
Republic of Congo
National Tree Planting Day is on November 6.
Costa Rica
"Día del Árbol" is on June 15.
Colombia
"Día de los Árboles" (Day of Trees) is on April 29.
Cuba
"Dia del Árbol" (Day of the Tree) was first observed on October 10, 1904, and today is officially observed on June 21 of each year.
Czech Republic
Arbor Day in the Czech Republic is celebrated on October 20.
Egypt
Arbor Day is on January 15.
Germany
Arbor Day ("Tag des Baumes") is on April 25. Its first celebration was in 1952.
India
Van Mahotsav is an annual pan-Indian tree-planting festival, occupying a week in the month of July, during which millions of trees are planted. It was initiated in 1950 by K. M. Munshi, the then Union Minister for Agriculture and Food, to create enthusiasm among the populace for the conservation of forests and the planting of trees.
The name Van Mahotsava (the festival of trees) originated in July 1947 after a successful tree-planting drive was undertaken in Delhi, in which national leaders like Jawaharlal Nehru, Dr Rajendra Prasad and Abul Kalam Azad participated. Paryawaran Sachetak Samiti, a leading environmental organization, conducts mass events and activities during this celebration each year. The week was celebrated simultaneously in a number of states in the country.
Iran
In Iran, it is known as "National Tree Planting Day". By the Solar Hijri calendar, it is on the fifteenth day of the month Esfand, which usually corresponds with March 5. This day is the first day of the "Natural Recyclable Resources Week" (March 5 to 12).
This is the time when saplings of all kinds, suited to the different climates of the various parts of Iran, are shared among the people, who are also taught how to plant trees.
Israel
The Jewish holiday Tu Bishvat, the new year for trees, is on the 15th day of the month of Shvat, which usually falls in January or February. Originally based on the date used to calculate the age of fruit trees for tithing as mandated in Leviticus 19:23–25, the holiday is now most often observed by planting trees or raising money to plant trees, and by eating dried fruits, specifically raisins, figs, dates and nuts. Tu Bishvat is a semi-official holiday in Israel; schools are open, but Hebrew-speaking schools often go on tree-planting excursions.
Japan
Japan celebrates a similarly themed Greenery Day, held on May 4.
Kenya
Historically, Kenya celebrated National Tree Planting Day on April 21. Often people plant palm trees and coconut trees along the Indian Ocean that borders the east coast of Kenya. They plant trees to remember Prof. Wangari Maathai, who won a Nobel Peace Prize for planting of trees and caring for them all over Kenya.
As part of a campaign to plant 15 billion trees by 2032, the Kenyan government launched National Tree Growing Day, with very aggressive targets for the number of trees to be planted. The first such national public holiday was November 13, 2023. The second was May 10, 2024, with a goal of planting one billion trees in a single day.
Korea
North Korea marks "Tree Planting Day" on March 2, when people across the country plant trees. This day is considered to combine traditional Asian cultural values with the country's dominant Communist ideology.
In South Korea, April 5, Singmogil or Sikmogil (식목일), the Arbor Day, was a public holiday until 2005. Even though Singmogil is no longer an official holiday, the day is still celebrated, with the South Korean public continuing to take part in tree-planting activities.
Lesotho
National Tree Planting Day is usually on March 21 depending on the lunar cycle.
Luxembourg
National Tree Planting Day is on the second Saturday in November.
Malawi
National Tree Planting Day is on the 2nd Monday of December.
Mexico
The Día del Árbol was established in Mexico in 1959 with President Adolfo López Mateos issuing a decree that it should be observed on the 2nd Thursday of July.
Mongolia
National Tree Planting Day is on the 2nd Saturday of May and October. The first National Tree Planting Day was celebrated May 8, 2010.
Namibia
Namibia's first Arbor Day was celebrated on October 8, 2004. It takes place annually on the second Friday of October.
Netherlands
Following a conference of the Food and Agriculture Organization, the FAO's publication World Festival of Trees, and a resolution of the United Nations in 1954 ("The Conference, recognising the need of arousing mass consciousness of the aesthetic, physical and economic value of trees, recommends a World Festival of Trees to be celebrated annually in each member country on a date suited to local conditions"), the observance was adopted by the Netherlands. In 1957, the National Committee Day of Planting Trees/Foundation of National Festival of Trees (Nationale Boomplantdag/Nationale Boomfeestdag) was created.
On the third Wednesday in March each year (near the spring equinox), three quarters of Dutch schoolchildren aged 10–11, along with Dutch celebrities, plant trees. Stichting Nationale Boomfeestdag organizes all the activities in the Netherlands for this day. Some municipalities, however, plant the trees around 21 September because of the planting season.
In 2007, the 50th anniversary was celebrated with special golden jubilee activities.
New Zealand
New Zealand's first Arbor Day planting was on 3 July 1890 at Greytown, in the Wairarapa. The first official celebration was scheduled to take place in Wellington in August 1892, with the planting of pōhutukawa and Norfolk pines along Thorndon Esplanade.
Prominent New Zealand botanist Dr Leonard Cockayne worked extensively on native plants throughout New Zealand and wrote many notable botanical texts. As early as the 1920s he held a vision for school students of New Zealand to be involved in planting native trees and plants in their school grounds. This vision bore fruit and schools in New Zealand have long planted native trees on Arbor Day.
Since 1977, New Zealand has celebrated Arbor Day on 5 June, which is also World Environment Day. Prior to then, Arbor Day was celebrated on 4 August, which is rather late in the year for tree planting in New Zealand, hence the date change.
Many of the Department of Conservation's Arbor Day activities focus on ecological restoration projects using native plants to restore habitats that have been damaged or destroyed by humans or invasive pests and weeds. There are great restoration projects underway around New Zealand and many organisations including community groups, landowners, conservation organisations, iwi, volunteers, schools, local businesses, nurseries and councils are involved in them. These projects are part of a vision to protect and restore the indigenous biodiversity.
Niger
Since 1975, Niger has celebrated Arbor Day as part of its Independence Day: 3 August. On this day, aiding the fight against desertification, each Nigerien plants a tree.
North Macedonia
Prompted by the poor condition of the country's forests, and in particular the catastrophic wildfires that occurred in the summer of 2007, a citizens' initiative for afforestation was started in North Macedonia. The campaign, named 'Tree Day-Plant Your Future', was first organized on 12 March 2008, when an official non-working day was declared and more than 150,000 Macedonians planted 2 million trees in one day (symbolically, one for each citizen). Six million more were planted in November of the same year, and another 12.5 million trees in 2009. This has been established as a tradition and takes place every year.
Pakistan
National tree plantation day of Pakistan (قومی شجر کاری دن) is celebrated on 18 August.
Philippines
Since 1947, Arbor Day in the Philippines has been institutionalized to be observed throughout the nation through the planting of trees and ornamental plants and other related activities. Its practice was instituted through Proclamation No. 30. It was subsequently revised by Proclamation No. 41, issued in the same year. In 1955, the commemoration was extended from a day to a week and moved to the last full week of July. Over two decades later, its commemoration was moved to the second week of June. In 2003, the commemorations were reduced from a week to a day and moved to June 25 per Proclamation No. 396. The same proclamation directed "the active participation of all government agencies, including government-owned and controlled corporations, private sector, schools, civil society groups and the citizenry in tree planting activity". It was subsequently revised by Proclamation 643 in the succeeding year.
In 2012, Republic Act 10176 was passed, which revived tree planting events "as [a] yearly event for local government units" and mandated the planting of at least one tree per year for able-bodied Filipino citizens aged 12 years old and above. Since 2012, many local arbor day celebrations have been commemorated, as in the cases of Natividad and Tayug in Pangasinan and Santa Rita in Pampanga.
Poland
In Poland, Arbor Day has been celebrated since 2002. Each October 10, many Polish people plant trees as well as participate in events organized by ecological foundations. Moreover, Polish Forest Inspectorates and schools give special lectures and lead ecological awareness campaigns.
Portugal
Arbor Day is celebrated on March 21. It is not a national holiday but instead schools nationwide celebrate this day with environment-related activities, namely tree planting.
Russia
The All-Russian Day of Forest Planting was celebrated for the first time on 14 May 2011. It is now held in April or May, depending on the weather in different regions.
Samoa
Arbor Day in Samoa is celebrated on the first Friday in November.
Saudi Arabia
Arbor Day in Saudi Arabia is celebrated on April 29.
Singapore
In 1971, a 'Tree Planting Day' was established; in 1990, it was replaced by 'Clean and Green Week'.
South Africa
Arbor Day was celebrated from 1945 until 2000 in South Africa. After that, the national government extended it to National Arbor Week, which lasts annually from 1–7 September. Two trees, one common and one rare, are highlighted to increase public awareness of indigenous trees, while various "greening" activities are undertaken by schools, businesses and other organizations. For example, the social enterprise Greenpop, which focuses on sustainable urban greening, forest restoration and environmental awareness in Sub-Saharan Africa, leverages Arbor Day each year to call for tree-planting action. During Arbor Month 2019, responding to recent studies that underscore the importance of tree restoration, Greenpop launched a new goal of planting 500,000 trees by 2025.
Spain
In 1896 Mariano Belmás Estrada promoted the first "Festival of Trees" in Madrid.
In Spain there was an International Forest Day on 21 March, but a decree in 1915 also brought in an Arbor Day throughout Spain. Each municipality or collective decides the date for its Arbor Day, usually between February and May. In Villanueva de la Sierra (Extremadura), where the first modern Arbor Day was held in 1805, it is celebrated, as on that occasion, on Carnival Tuesday, and it is a great day in the local festive calendar.
As an example of commitment to nature, the small town of Pescueza, with only 180 inhabitants, organizes a large planting of holm oaks every spring, called the "Festivalino", promoted by the city council, several foundations, and citizen participation.
Sri Lanka
National Tree Planting Day is on November 15.
Tanzania
National Tree Planting Day is on April 1.
Turkey
National Tree Planting Day is on November 11.
Uganda
National Tree Planting Day is on March 24.
United Kingdom
First mounted in 1975, National Tree Week is a celebration of the start of the winter tree planting season, usually at the end of November. Around a million trees are planted each year by schools, community organizations and local authorities.
On 6 February 2020, Myerscough College in Lancashire, England, supported by the Arbor Day Foundation, celebrated the UK's first Arbor Day.
United States
Arbor Day was founded in 1872 by J. Sterling Morton in Nebraska City, Nebraska. By the 1920s, each state in the United States had passed public laws that stipulated a certain day to be Arbor Day or Arbor and Bird Day observance.
National Arbor Day is celebrated every year on the last Friday in April; it is a civic holiday in Nebraska. Other states have selected their own dates for Arbor Day.
The customary observance is to plant a tree. On the first Arbor Day, April 10, 1872, an estimated one million trees were planted.
Venezuela
Venezuela recognizes Día del Árbol (Day of the Tree) on the last Sunday of May.
See also
Arbor Day Foundation (US)
Earth Day
Greenery Day (Japan)
International Day of Forests
National Public Lands Day (US)
Timeline of environmental events
Tu BiShvat (Jewish holiday)
World Water Day
References
External links
International Arbor Days
Arbor Day lesson plans for the classroom
National Arbor Day Foundation
State Arbor Days and state trees
History of Arbor Day
|
1872 establishments in Nebraska;Environmental awareness days;Forestry events;Forestry-related lists;Holidays and observances by scheduling (nth weekday of the month);Recurring events established in 1872;Reforestation;Trees in culture;Types of secular holidays;Urban forestry
|
https://en.wikipedia.org/wiki/Andrew%20Wiles
|
Sir Andrew John Wiles (born 11 April 1953) is an English mathematician and a Royal Society Research Professor at the University of Oxford, specialising in number theory. He is best known for proving Fermat's Last Theorem, for which he was awarded the 2016 Abel Prize and the 2017 Copley Medal and for which he was appointed a Knight Commander of the Order of the British Empire in 2000. In 2018, Wiles was appointed the first Regius Professor of Mathematics at Oxford. Wiles is also a 1997 MacArthur Fellow.
Wiles was born in Cambridge to theologian Maurice Frank Wiles and Patricia Wiles. While spending much of his childhood in Nigeria, Wiles developed an interest in mathematics and in Fermat's Last Theorem in particular. After moving to Oxford and graduating from there in 1974, he worked on unifying Galois representations, elliptic curves and modular forms, starting with Barry Mazur's generalizations of Iwasawa theory. In the early 1980s, Wiles spent a few years at the University of Cambridge before moving to Princeton University, where he worked on expanding out and applying Hilbert modular forms. In 1986, upon reading Ken Ribet's seminal work on Fermat's Last Theorem, Wiles set out to prove the modularity theorem for semistable elliptic curves, which implied Fermat's Last Theorem. By 1993, he had been able to convince a knowledgeable colleague that he had a proof of Fermat's Last Theorem, though a flaw was subsequently discovered. After an insight on 19 September 1994, Wiles and his student Richard Taylor were able to circumvent the flaw, and published the results in 1995, to widespread acclaim.
In proving Fermat's Last Theorem, Wiles developed new tools for mathematicians to begin unifying disparate ideas and theorems. His former student Taylor along with three other mathematicians were able to prove the full modularity theorem by 2000, using Wiles' work. Upon receiving the Abel Prize in 2016, Wiles reflected on his legacy, expressing his belief that he did not just prove Fermat's Last Theorem, but pushed the whole of mathematics as a field towards the Langlands program of unifying number theory.
Education and early life
Wiles was born on 11 April 1953 in Cambridge, England, the son of Maurice Frank Wiles (1923–2005) and Patricia Wiles (née Mowll). From 1952 to 1955, his father worked as the chaplain at Ridley Hall, Cambridge, and later became the Regius Professor of Divinity at the University of Oxford.
Wiles began his formal schooling in Nigeria, while living there as a very young boy with his parents. However, according to letters written by his parents, for at least the first several months after he was supposed to be attending classes, he refused to go. From that fact, Wiles himself concluded that in his earliest years, he was not enthusiastic about spending time in academic institutions. In an interview with Nadia Hasnaoui in 2021, he said he trusted the letters, yet he could not remember a time when he did not enjoy solving mathematical problems.
Wiles attended King's College School, Cambridge, and The Leys School, Cambridge. Wiles told WGBH-TV in 1999 that he came across Fermat's Last Theorem on his way home from school when he was 10 years old. He stopped at his local library where he found a book The Last Problem, by Eric Temple Bell, about the theorem. Fascinated by the existence of a theorem that was so easy to state that he, a ten-year-old, could understand it, but that no one had proven, he decided to be the first person to prove it. However, he soon realised that his knowledge was too limited, so he abandoned his childhood dream until it was brought back to his attention at the age of 33 by Ken Ribet's 1986 proof of the epsilon conjecture, which Gerhard Frey had previously linked to Fermat's equation.
Early career
In 1974, Wiles earned his bachelor's degree in mathematics at Merton College, Oxford. Wiles's graduate research was guided by John Coates, beginning in the summer of 1975. Together they worked on the arithmetic of elliptic curves with complex multiplication by the methods of Iwasawa theory. He further worked with Barry Mazur on the main conjecture of Iwasawa theory over the rational numbers, and soon afterward, he generalised this result to totally real fields.
In 1980, Wiles earned a PhD while at Clare College, Cambridge. After a stay at the Institute for Advanced Study in Princeton, New Jersey, in 1981, Wiles became a Professor of Mathematics at Princeton University.
In 1985–86, Wiles was a Guggenheim Fellow at the Institut des Hautes Études Scientifiques near Paris and at the École Normale Supérieure.
In 1989, Wiles was elected to the Royal Society. At that point, according to his election certificate, he had been working "on the construction of ℓ-adic representations attached to Hilbert modular forms, and has applied these to prove the 'main conjecture' for cyclotomic extensions of totally real fields".
Proof of Fermat's Last Theorem
From 1988 to 1990, Wiles was a Royal Society Research Professor at the University of Oxford, and then he returned to Princeton.
From 1994 to 2009, Wiles was a Eugene Higgins Professor at Princeton.
Starting in mid-1986, based on successive progress of the previous few years of Gerhard Frey, Jean-Pierre Serre and Ken Ribet, it became clear that Fermat's Last Theorem (the statement that no three positive integers , , and satisfy the equation for any integer value of greater than ) could be proven as a corollary of a limited form of the modularity theorem (unproven at the time and then known as the "Taniyama–Shimura–Weil conjecture"). The modularity theorem involved elliptic curves, which was also Wiles's own specialist area, and stated that all such curves have a modular form associated with them. These curves can be thought of as mathematical objects resembling solutions for a torus' surface, and if Fermat's Last Theorem were false and solutions existed, "a peculiar curve would result". A proof of the theorem therefore would involve showing that such a curve would not exist.
The conjecture was seen by contemporary mathematicians as important, but extraordinarily difficult or perhaps impossible to prove. For example, Wiles's ex-supervisor John Coates stated that it seemed "impossible to actually prove", and Ken Ribet considered himself "one of the vast majority of people who believed [it] was completely inaccessible", adding that "Andrew Wiles was probably one of the few people on earth who had the audacity to dream that you can actually go and prove [it]."
Despite this, Wiles, with his from-childhood fascination with Fermat's Last Theorem, decided to undertake the challenge of proving the conjecture, at least to the extent needed for Frey's curve. He dedicated all of his research time to this problem for over six years in near-total secrecy, covering up his efforts by releasing prior work in small segments as separate papers and confiding only in his wife.
Wiles's research involved creating a proof by contradiction of Fermat's Last Theorem. Ribet's 1986 work had shown that any counterexample to the theorem would give rise to a semistable elliptic curve that could not have an associated modular form. Starting by assuming that the theorem was incorrect, Wiles proved that the Taniyama–Shimura–Weil conjecture applies to the special case of semistable elliptic curves, the case to which Fermat's equation is tied. A counterexample to Fermat's Last Theorem would therefore yield a curve that must be modular by Wiles's result yet cannot be modular by Ribet's theorem; no such counterexample can exist, thus proving Fermat's Last Theorem.
In June 1993, he presented his proof to the public for the first time at a conference in Cambridge, in a series of lectures reported on by Gina Kolata of The New York Times.
In August 1993, it was discovered that the proof contained a flaw related to properties of the Selmer group and the use of a tool called an Euler system. Wiles tried and failed for over a year to repair his proof. According to Wiles, the crucial idea for circumventing, rather than closing, this gap came to him on 19 September 1994, when he was on the verge of giving up. The circumvention used Galois representations to replace elliptic curves, reduced the problem to a class number formula and solved it, using Victor Kolyvagin's ideas as a basis for fixing Matthias Flach's approach with Iwasawa theory. Together with his former student Richard Taylor, Wiles published a second paper which contained the circumvention and thus completed the proof. Both papers were published in May 1995 in a dedicated issue of the Annals of Mathematics.
Later career
In 2011, Wiles rejoined the University of Oxford as Royal Society Research Professor.
In May 2018, Wiles was appointed Regius Professor of Mathematics at Oxford, the first in the university's history.
Legacy
Wiles' work has been used in many fields of mathematics. Notably, in 1999, three of his former students, Richard Taylor, Brian Conrad, and Fred Diamond, working with Christophe Breuil, built upon Wiles' proof to prove the full modularity theorem. Wiles's doctoral students have also included Manjul Bhargava (2014 winner of the Fields Medal), Ehud de Shalit, Ritabrata Munshi (winner of the SSB Prize and ICTP Ramanujan Prize), Karl Rubin (son of Vera Rubin), Christopher Skinner, and Vinayak Vatsal (2007 winner of the Coxeter–James Prize).
In 2016, upon receiving the Abel Prize, Wiles said about his proof of Fermat's Last Theorem, "The methods that solved it opened up a new way of attacking one of the big webs of conjectures of contemporary mathematics called the Langlands Program, which as a grand vision tries to unify different branches of mathematics. It’s given us a new way to look at that".
Awards and honours
Wiles's proof of Fermat's Last Theorem has stood up to the scrutiny of the world's other mathematical experts. Wiles was interviewed for an episode of the BBC documentary series Horizon about Fermat's Last Theorem. This was broadcast as an episode of the PBS science television series Nova with the title "The Proof". His work and life are also described in great detail in Simon Singh's popular book Fermat's Last Theorem.
In 1988, Wiles was awarded the Junior Whitehead Prize of the London Mathematical Society. In 1989, he was elected a Fellow of the Royal Society (FRS).
In 1994, Wiles was elected member of the American Academy of Arts and Sciences. Upon completing his proof of Fermat's Last Theorem in 1995, he was awarded the Schock Prize, Fermat Prize, and Wolf Prize in Mathematics that year. Wiles was elected a Foreign Associate of the National Academy of Sciences and won an NAS Award in Mathematics from the National Academy of Sciences, the Royal Medal, and the Ostrowski Prize in 1996. He won the American Mathematical Society's Cole Prize, a MacArthur Fellowship, and the Wolfskehl Prize in 1997, and was elected member of the American Philosophical Society that year.
In 1998, Wiles was awarded a silver plaque from the International Mathematical Union recognising his achievements, in place of the Fields Medal, which is restricted to those under the age of 40 (Wiles was 41 when he completed the proof in 1994). That same year he was awarded the King Faisal Prize, and in 1999 the Clay Research Award, the year in which the asteroid 9999 Wiles was named after him.
In 2000, he was appointed Knight Commander of the Order of the British Empire. In 2004, Wiles won the Premio Pitagora, and in 2005 the Shaw Prize.
The building at the University of Oxford housing the Mathematical Institute was named after Wiles in 2013. In 2016, he won the Abel Prize, and in 2017 the Copley Medal. In 2019, he won the De Morgan Medal.
References
External links
Profile from Oxford
Profile from Princeton
|
1953 births;20th-century English mathematicians;21st-century English mathematicians;Abel Prize laureates;Alumni of Clare College, Cambridge;Alumni of King's College, Cambridge;Alumni of Merton College, Oxford;British number theorists;Clay Research Award recipients;Fellows of Merton College, Oxford;Fellows of the Royal Society;Fermat's Last Theorem;Foreign associates of the National Academy of Sciences;Institute for Advanced Study visiting scholars;Knights Commander of the Order of the British Empire;Living people;MacArthur Fellows;Members of the American Philosophical Society;Members of the French Academy of Sciences;People educated at The Leys School;People from Cambridge;Princeton University faculty;Recipients of the Copley Medal;Regius Professors of Mathematics (University of Oxford);Rolf Schock Prize laureates;Royal Medal winners;Trustees of the Institute for Advanced Study;Whitehead Prize winners;Wolf Prize in Mathematics laureates
|
https://en.wikipedia.org/wiki/Avionics
|
Avionics (a portmanteau of aviation and electronics) are the electronic systems used on aircraft. Avionic systems include communications, navigation, the display and management of multiple systems, and the hundreds of systems that are fitted to aircraft to perform individual functions. These can be as simple as a searchlight for a police helicopter or as complicated as the tactical system for an airborne early warning platform.
History
The term "avionics" was coined in 1949 by Philip J. Klass, senior editor at Aviation Week & Space Technology magazine as a portmanteau of "aviation electronics".
Radio communication was first used in aircraft just prior to World War I. The first airborne radios were in zeppelins, but the military sparked development of light radio sets that could be carried by heavier-than-air craft, so that aerial reconnaissance biplanes could report their observations immediately in case they were shot down. The first experimental radio transmission from an airplane was conducted by the U.S. Navy in August 1910. The first aircraft radios transmitted by radiotelegraphy; they required a two-seat aircraft with a second crewman who operated a telegraph key to spell out messages in Morse code. During World War I, the development of the triode vacuum tube (see TM (triode)) made AM voice two-way radio sets possible in 1917; these were simple enough that the pilot of a single-seat aircraft could use one while flying.
Radar, the central technology used today in aircraft navigation and air traffic control, was developed by several nations, mainly in secret, as an air defense system in the 1930s during the runup to World War II. Many modern avionics have their origins in World War II wartime developments. For example, autopilot systems that are commonplace today began as specialized systems to help bomber planes fly steadily enough to hit precision targets from high altitudes. Britain's 1940 decision to share its radar technology, particularly the magnetron vacuum tube, with its U.S. ally in the famous Tizard Mission significantly shortened the war. Modern avionics is a substantial portion of military aircraft spending. Aircraft like the F-15E and the now-retired F-14 have roughly 20 percent of their budget spent on avionics. Most modern helicopters now have budget splits of 60/40 in favour of avionics.
The civilian market has also seen a growth in the cost of avionics. Flight control systems (fly-by-wire) and new navigation needs brought on by tighter airspaces have pushed up development costs. The major change has been the recent boom in consumer flying. As more people begin to use planes as their primary method of transportation, more elaborate methods of controlling aircraft safely in these highly restrictive airspaces have been invented.
Modern avionics
Avionics plays a heavy role in modernization initiatives like the Federal Aviation Administration's (FAA) Next Generation Air Transportation System project in the United States and the Single European Sky ATM Research (SESAR) initiative in Europe. The Joint Planning and Development Office put forth a roadmap for avionics in six areas:
Published Routes and Procedures – Improved navigation and routing
Negotiated Trajectories – Adding data communications to create preferred routes dynamically
Delegated Separation – Enhanced situational awareness in the air and on the ground
Low Visibility/Ceiling Approach/Departure – Allowing operations under weather constraints with less ground infrastructure
Surface Operations – To increase safety in approach and departure
ATM Efficiencies – Improving the air traffic management (ATM) process
Market
The Aircraft Electronics Association reported $1.73 billion in avionics sales for the first three quarters of 2017 in business and general aviation, a 4.1% yearly improvement: 73.5% came from North America, and forward-fit represented 42.3% while 57.7% were retrofits, as the U.S. deadline of January 1, 2020, for mandatory ADS-B Out approached.
Aircraft avionics
Avionics equipment is typically housed in the cockpit or, in larger aircraft, in an avionics bay under the cockpit or in a movable nosecone, and includes control, monitoring, communication, navigation, weather, and anti-collision systems. The majority of aircraft power their avionics using 14- or 28-volt DC electrical systems; however, larger, more sophisticated aircraft (such as airliners or military combat aircraft) have AC systems operating at 115 volts, 400 Hz. There are several major vendors of flight avionics, including The Boeing Company, Panasonic Avionics Corporation, Honeywell (which now owns Bendix/King), Universal Avionics Systems Corporation, Rockwell Collins (now Collins Aerospace), Thales Group, GE Aviation Systems, Garmin, Raytheon, Parker Hannifin, UTC Aerospace Systems (now Collins Aerospace), Selex ES (now Leonardo), Shadin Avionics, and Avidyne Corporation.
International standards for avionics equipment are prepared by the Airlines Electronic Engineering Committee (AEEC) and published by ARINC.
Avionics Installation
Avionics installation is a critical aspect of modern aviation, ensuring that aircraft are equipped with the necessary electronic systems for safe and efficient operation. These systems encompass a wide range of functions, including communication, navigation, monitoring, flight control, and weather detection. Avionics installations are performed on all types of aircraft, from small general aviation planes to large commercial jets and military aircraft.
Installation Process
The installation of avionics requires a combination of technical expertise, precision, and adherence to stringent regulatory standards. The process typically involves:
Planning and Design: Before installation, the avionics shop works closely with the aircraft owner to determine the required systems based on the aircraft type, intended use, and regulatory requirements. Custom instrument panels are often designed to accommodate the new systems.
Wiring and Integration: Avionics systems are integrated into the aircraft's electrical and control systems, with wiring often requiring laser marking for durability and identification. Shops use detailed schematics to ensure correct installation.
Testing and Calibration: After installation, each system must be thoroughly tested and calibrated to ensure proper function. This includes ground testing, flight testing, and system alignment with regulatory standards such as those set by the FAA.
Certification: Once the systems are installed and tested, the avionics shop completes the necessary certifications. In the U.S., this often involves compliance with FAA Part 91.411 and 91.413 for IFR (Instrument Flight Rules) operations, as well as RVSM (Reduced Vertical Separation Minimum) certification.
Regulatory Standards
Avionics installation is governed by strict regulatory frameworks to ensure the safety and reliability of aircraft systems. In the United States, the Federal Aviation Administration (FAA) sets the standards for avionics installations. These include guidelines for:
System Performance: Avionics systems must meet performance benchmarks as defined by the FAA, ensuring they function correctly in all phases of flight.
Certification: Shops performing installations must be FAA-certified, and their technicians often hold certifications such as the General Radiotelephone Operator License (GROL).
Inspections: Aircraft equipped with newly installed avionics systems must undergo rigorous inspections before being cleared for flight, including both ground and flight tests.
Advancements in Avionics Technology
The field of avionics has seen rapid technological advancements in recent years, leading to more integrated and automated systems. Key trends include:
Glass Cockpits: Traditional analog gauges are being replaced by fully integrated glass cockpit displays, providing pilots with a centralized view of all flight parameters.
NextGen Technologies: ADS-B and satellite-based navigation are part of the FAA's NextGen initiative, aimed at modernizing air traffic control and improving the efficiency of the national airspace.
Autonomous Systems: Advanced automation systems are paving the way for more autonomous aircraft systems, enhancing safety, efficiency, and reducing pilot workload.
Communications
Communications connect the flight deck to the ground and the flight deck to the passengers. On‑board communications are provided by public-address systems and aircraft intercoms.
The VHF aviation communication system works on the airband of 118.000 MHz to 136.975 MHz. Each channel is spaced from the adjacent ones by 8.33 kHz in Europe, 25 kHz elsewhere. VHF is also used for line of sight communication such as aircraft-to-aircraft and aircraft-to-ATC. Amplitude modulation (AM) is used, and the conversation is performed in simplex mode. Aircraft communication can also take place using HF (especially for trans-oceanic flights) or satellite communication.
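As a rough illustration of the two spacing schemes, the sketch below enumerates channel centre frequencies on the 25 kHz raster and the 8.33 kHz (strictly 25/3 kHz) raster; the band edges are taken from the text, and the exact channel counts and the ICAO channel-naming scheme are simplified away:

```python
def channels(start_mhz: float, end_mhz: float, spacing_khz: float) -> list[float]:
    """Enumerate channel centre frequencies (MHz) on a fixed raster."""
    step = spacing_khz / 1000.0
    n = int(round((end_mhz - start_mhz) / step)) + 1
    return [round(start_mhz + i * step, 5) for i in range(n)]

legacy = channels(118.000, 136.975, 25.0)      # 25 kHz raster: 760 channels
narrow = channels(118.000, 136.975, 25.0 / 3)  # 8.33 kHz raster: 3 channels per 25 kHz slot
print(len(legacy), legacy[:3])   # 760 [118.0, 118.025, 118.05]
print(len(narrow), narrow[:4])   # 2278 [118.0, 118.00833, 118.01667, 118.025]
```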
Navigation
Air navigation is the determination of position and direction on or above the surface of the Earth. Avionics can use satellite navigation systems (such as GPS and WAAS), inertial navigation systems (INS), ground-based radio navigation systems (such as VOR or LORAN), or any combination thereof. Older ground-based systems such as VOR or LORAN required a pilot or navigator to plot the intersection of signals on a paper map to determine an aircraft's location; modern systems such as GPS calculate the position automatically and display it to the flight crew on moving-map displays.
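As an illustration of the position-plotting task described above, two VOR radials can be intersected numerically; a simplified flat-earth sketch (the station coordinates and radials are made-up values, and real implementations work on the ellipsoid):

```python
import math

def fix_from_two_vors(s1, b1_deg, s2, b2_deg):
    """Intersect two bearings (degrees clockwise from north) drawn from
    stations s1 and s2, given as (x_east, y_north) on a flat plane.
    Returns the (x, y) position fix, or None for parallel radials."""
    d1 = (math.sin(math.radians(b1_deg)), math.cos(math.radians(b1_deg)))
    d2 = (math.sin(math.radians(b2_deg)), math.cos(math.radians(b2_deg)))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        return None  # radials are parallel: no unique fix
    t = ((s2[0] - s1[0]) * d2[1] - (s2[1] - s1[1]) * d2[0]) / denom
    return (s1[0] + t * d1[0], s1[1] + t * d1[1])

# Two hypothetical stations 50 units apart; aircraft on the 045 and 315 radials
print(fix_from_two_vors((0, 0), 45.0, (50, 0), 315.0))  # -> (25.0, 25.0)
```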
Monitoring
The first hints of glass cockpits emerged in the 1970s when flight-worthy cathode-ray tube (CRT) screens began to replace electromechanical displays, gauges and instruments. A "glass" cockpit refers to the use of computer monitors instead of gauges and other analog displays. Aircraft were getting progressively more displays, dials and information dashboards that eventually competed for space and pilot attention. In the 1970s, the average aircraft had more than 100 cockpit instruments and controls.
Glass cockpits started to come into being with the Gulfstream G‑IV private jet in 1985. One of the key challenges in glass cockpits is to balance how much control is automated and how much the pilot should do manually. Generally they try to automate flight operations while keeping the pilot constantly informed.
Aircraft flight-control system
Aircraft have means of automatically controlling flight. Autopilot was first invented by Lawrence Sperry during World War I to fly bomber planes steady enough to hit accurate targets from 25,000 feet. When it was first adopted by the U.S. military, a Honeywell engineer sat in the back seat with bolt cutters to disconnect the autopilot in case of emergency. Nowadays most commercial planes are equipped with aircraft flight control systems in order to reduce pilot error and workload at landing or takeoff.
The first simple commercial autopilots were used to control heading and altitude and had limited authority over things like thrust and flight control surfaces. In helicopters, auto-stabilization was used in a similar way. The first systems were electromechanical. The advent of fly-by-wire and electro-actuated flight surfaces (rather than the traditional hydraulic ones) has increased safety; as with displays and instruments, electro-mechanical critical devices had a finite life. For safety-critical systems, the software is very strictly tested.
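As a toy illustration of the altitude-hold function described above — a hypothetical proportional-derivative loop with made-up gains, nothing like a certified flight control law:

```python
def altitude_hold_step(alt_ft: float, vs_fpm: float, target_ft: float,
                       kp: float = 0.02, kd: float = 0.05) -> float:
    """One update of a toy altitude-hold loop: compute a pitch command
    (degrees) from altitude error, damped by vertical speed. Gains and
    limits are invented for illustration."""
    error_ft = target_ft - alt_ft                      # positive when below target
    pitch_cmd = kp * error_ft - kd * (vs_fpm / 60.0)   # vertical speed in ft/s for damping
    return max(-10.0, min(10.0, pitch_cmd))            # clamp to limited authority

print(altitude_hold_step(alt_ft=9500, vs_fpm=500, target_ft=10000))  # ~9.58 deg
```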
Fuel systems
Fuel Quantity Indication System (FQIS) monitors the amount of fuel aboard. Using various sensors, such as capacitance tubes, temperature sensors, densitometers and level sensors, the FQIS computer calculates the mass of fuel remaining on board.
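A minimal sketch of the mass computation just described, with hypothetical tank names and sensor values — mass is derived from sensed volume and sensed density, since fuel density varies with temperature:

```python
def fuel_mass_kg(tanks: list[dict]) -> float:
    """Total fuel mass: each tank reports a sensed volume (litres, e.g.
    from capacitance probes and level sensors) and a sensed density
    (kg/L, e.g. from a densitometer)."""
    return sum(t["volume_l"] * t["density_kg_per_l"] for t in tanks)

tanks = [
    {"name": "left wing",  "volume_l": 6200.0, "density_kg_per_l": 0.802},
    {"name": "right wing", "volume_l": 6150.0, "density_kg_per_l": 0.802},
    {"name": "centre",     "volume_l": 9800.0, "density_kg_per_l": 0.801},
]
print(f"{fuel_mass_kg(tanks):.0f} kg remaining")  # roughly 17,750 kg
```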
Fuel Control and Monitoring System (FCMS) reports fuel remaining on board in a similar manner but, by controlling pumps and valves, also manages fuel transfers around the various tanks. Its functions include:
Refuelling control: uploading a commanded total mass of fuel and distributing it automatically.
Transfers during flight to the tanks that feed the engines, e.g. from fuselage to wing tanks.
Centre-of-gravity control: transferring fuel from the tail (trim) tanks forward to the wings as fuel is expended.
Maintaining fuel in the wing tips (to alleviate wing bending due to lift in flight) and transferring it to the main tanks after landing.
Controlling fuel jettison during an emergency to reduce the aircraft weight.
Collision-avoidance systems
To supplement air traffic control, most large transport aircraft and many smaller ones use a traffic alert and collision avoidance system (TCAS), which can detect the location of nearby aircraft, and provide instructions for avoiding a midair collision. Smaller aircraft may use simpler traffic alerting systems such as TPAS, which are passive (they do not actively interrogate the transponders of other aircraft) and do not provide advisories for conflict resolution.
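The quantity behind such alerting is time-to-go rather than raw distance; a common textbook simplification is the ratio of range to closure rate ("tau"). The sketch below uses invented thresholds — actual TCAS logic is far more elaborate and varies with altitude:

```python
def tau_seconds(range_nm: float, closure_kt: float) -> float:
    """Time to closest approach if the closure rate stays constant."""
    if closure_kt <= 0:
        return float("inf")  # aircraft are not converging
    return range_nm / (closure_kt / 3600.0)  # knots -> NM per second

def advisory(range_nm: float, closure_kt: float,
             ta_s: float = 40.0, ra_s: float = 25.0) -> str:
    """Invented thresholds: traffic advisory (TA) under ta_s seconds
    to go, resolution advisory (RA) under ra_s seconds."""
    tau = tau_seconds(range_nm, closure_kt)
    if tau < ra_s:
        return "RA"
    if tau < ta_s:
        return "TA"
    return "clear"

print(advisory(range_nm=4.0, closure_kt=480.0))  # tau = 30 s -> "TA"
```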
To help avoid controlled flight into terrain (CFIT), aircraft use systems such as ground-proximity warning systems (GPWS), which use radar altimeters as a key element. A major weakness of GPWS is its lack of "look-ahead" information: it can only report height above the terrain directly below the aircraft. To overcome this weakness, modern aircraft use a terrain awareness and warning system (TAWS).
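The "look-ahead" idea can be sketched as projecting the flight path over a terrain database and checking predicted clearance. The following is a deliberately simplified illustration with a hypothetical terrain sample list and invented thresholds, not a certified TAWS algorithm:

```python
def look_ahead_warning(alt_ft: float, vs_fpm: float, terrain_ahead_ft: list,
                       warn_clearance_ft: float = 500.0, step_s: float = 15.0) -> str:
    """Project altitude forward at fixed time steps and compare it with
    terrain elevations sampled along the predicted track (in a real
    system the along-track sample spacing would come from groundspeed)."""
    for i, terrain_ft in enumerate(terrain_ahead_ft, start=1):
        predicted_alt_ft = alt_ft + vs_fpm * (i * step_s) / 60.0
        if predicted_alt_ft - terrain_ft < warn_clearance_ft:
            return f"TERRAIN in ~{int(i * step_s)} s"
    return "clear"

# Level flight at 3,000 ft toward rising terrain:
print(look_ahead_warning(3000, 0, [1200, 1800, 2600, 3100]))  # "TERRAIN in ~45 s"
```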
Flight recorders
Commercial aircraft cockpit data recorders, commonly known as "black boxes", store flight information and audio from the cockpit. They are often recovered from an aircraft after a crash to determine control settings and other parameters during the incident.
Weather systems
Weather systems such as weather radar (typically ARINC 708 on commercial aircraft) and lightning detectors are important for aircraft flying at night or in instrument meteorological conditions, where it is not possible for pilots to see the weather ahead. Heavy precipitation (as sensed by radar) and frequent lightning (as sensed by a lightning detector) are both indications of strong convective activity and severe turbulence, and weather systems allow pilots to deviate around these areas.
Lightning detectors like the Stormscope or Strikefinder have become inexpensive enough that they are practical for light aircraft. In addition to radar and lightning detection, observations and extended radar pictures (such as NEXRAD) are now available through satellite data connections, allowing pilots to see weather conditions far beyond the range of their own in-flight systems. Modern displays allow weather information to be integrated with moving maps, terrain, and traffic onto a single screen, greatly simplifying navigation.
Modern weather systems also include wind shear and turbulence detection and terrain and traffic warning systems. In-plane weather avionics are especially popular in Africa, India, and other regions where air travel is a growing market but ground-based weather support is less developed.
Aircraft management systems
There has been a progression towards centralized control of the multiple complex systems fitted to aircraft, including engine monitoring and management. Health and usage monitoring systems (HUMS) are integrated with aircraft management computers to give maintainers early warnings of parts that will need replacement.
The integrated modular avionics concept proposes an integrated architecture with application software portable across an assembly of common hardware modules. It has been used in fourth generation jet fighters and the latest generation of airliners.
Mission or tactical avionics
Military aircraft have been designed either to deliver a weapon or to be the eyes and ears of other weapon systems. The vast array of sensors available to the military is used for whatever tactical means required. As with aircraft management, the bigger sensor platforms (like the E‑3D, JSTARS, ASTOR, Nimrod MRA4, Merlin HM Mk 1) have mission-management computers.
Police and EMS aircraft also carry sophisticated tactical sensors.
Military communications
While aircraft communications provide the backbone for safe flight, tactical systems are designed to withstand the rigors of the battlefield. UHF, VHF Tactical (30–88 MHz) and SatCom systems, combined with ECCM methods and cryptography, secure the communications. Data links such as Link 11, 16, 22 and BOWMAN, JTRS and even TETRA provide the means of transmitting data (such as images, targeting information, etc.).
Radar
Airborne radar was one of the first tactical sensors. The benefit of altitude providing range has meant a significant focus on airborne radar technologies. Radars include airborne early warning (AEW), anti-submarine warfare (ASW), and even weather radar (ARINC 708) and ground tracking/proximity radar.
The military uses radar in fast jets to help pilots fly at low levels. While the civil market has had weather radar for a while, there are strict rules about using it to navigate the aircraft.
Sonar
Dipping sonar fitted to a range of military helicopters allows the helicopter to protect shipping assets from submarines or surface threats. Maritime support aircraft can drop active and passive sonar devices (sonobuoys) and these are also used to determine the location of enemy submarines.
Electro-optics
Electro-optic systems include devices such as the head-up display (HUD), forward looking infrared (FLIR), infrared search and track and other passive infrared devices (Passive infrared sensor). These are all used to provide imagery and information to the flight crew. This imagery is used for everything from search and rescue to navigational aids and target acquisition.
ESM/DAS
Electronic support measures and defensive aids systems are used extensively to gather information about threats or possible threats. They can be used to launch devices (in some cases automatically) to counter direct threats against the aircraft. They are also used to determine the state of a threat and identify it.
Aircraft networks
The avionics systems in military, commercial and advanced models of civilian aircraft are interconnected using an avionics databus. Common avionics databus protocols, with their primary application, include:
Aircraft Data Network (ADN): Ethernet derivative for Commercial Aircraft
Avionics Full-Duplex Switched Ethernet (AFDX): Specific implementation of ARINC 664 (ADN) for Commercial Aircraft
ARINC 429: Generic Medium-Speed Data Sharing for Private and Commercial Aircraft (see the decoding sketch after this list)
ARINC 664: See ADN above
ARINC 629: Commercial Aircraft (Boeing 777)
ARINC 708: Weather Radar for Commercial Aircraft
ARINC 717: Flight Data Recorder for Commercial Aircraft
ARINC 825: CAN bus for commercial aircraft (for example Boeing 787 and Airbus A350)
Commercial Standard Digital Bus
IEEE 1394b: Military Aircraft
MIL-STD-1553: Military Aircraft
MIL-STD-1760: Military Aircraft
TTP – Time-Triggered Protocol: Boeing 787, Airbus A380, Fly-By-Wire Actuation Platforms from Parker Aerospace
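Of these buses, ARINC 429 is the simplest to illustrate: each message is a self-contained 32-bit word with fixed fields. Below is a sketch of splitting one word into those fields, using the conventional bit numbering (1 = least significant); the on-wire bit ordering of the label and the per-label data encodings are beyond this sketch:

```python
def decode_arinc429(word: int) -> dict:
    """Split a 32-bit ARINC 429 word into its conventional fields:
      bits 1-8   label (data type identifier, usually quoted in octal)
      bits 9-10  SDI (source/destination identifier)
      bits 11-29 data field
      bits 30-31 SSM (sign/status matrix)
      bit  32    parity (odd parity over the whole word)"""
    return {
        "label_octal": oct(word & 0xFF),
        "sdi": (word >> 8) & 0x3,
        "data": (word >> 10) & 0x7FFFF,
        "ssm": (word >> 29) & 0x3,
        "parity_bit": (word >> 31) & 0x1,
        "parity_ok": bin(word).count("1") % 2 == 1,  # odd parity check
    }

print(decode_arinc429(0x600000C8))  # hypothetical word, label 310 (octal)
```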
See also
Astrionics, similar, for spacecraft
Acronyms and abbreviations in avionics
Avionics software
Emergency locator beacon
Emergency position-indicating radiobeacon station
Integrated modular avionics
Notes
Further reading
Avionics: Development and Implementation by Cary R. Spitzer (Hardcover – December 15, 2006)
Principles of Avionics, 4th Edition by Albert Helfrick, Len Buckwalter, and Avionics Communications Inc. (Paperback – July 1, 2007)
Avionics Training: Systems, Installation, and Troubleshooting by Len Buckwalter (Paperback – June 30, 2005)
Avionics Made Simple, by Mouhamed Abdulla, Ph.D.; Jaroslav V. Svoboda, Ph.D.; and Luis Rodrigues, Ph.D. (Coursepack – Dec. 2005).
External links
Avionics in Commercial Aircraft
Aircraft Electronics Association (AEA)
Pilot's Guide to Avionics
The Avionic Systems Standardisation Committee
Space Shuttle Avionics
Aviation Today Avionics magazine
RAES Avionics homepage
|
;Aircraft instruments;Electronic engineering;Spacecraft components
|
https://en.wikipedia.org/wiki/A%20Fire%20Upon%20the%20Deep
|
A Fire Upon the Deep is a 1992 science fiction novel by American writer Vernor Vinge. It is a space opera involving superhuman intelligences, aliens, variable physics, space battles, love, betrayal, genocide, and a communication medium resembling Usenet. A Fire Upon the Deep won the Hugo Award in 1993, sharing it with Doomsday Book by Connie Willis.
Besides the normal print book editions, the novel was also included on a CD-ROM sold by ClariNet Communications along with the other nominees for the 1993 Hugo awards. The CD-ROM edition included numerous annotations by Vinge on his thoughts and intentions about different parts of the book, and was later released as a standalone e-book. It has a loose prequel, A Deepness in the Sky (1999), and a direct sequel, The Children of the Sky (2011).
Setting
The novel is set in various locations in the Milky Way. The galaxy is divided into four concentric volumes called the "Zones of Thought"; it is not clear to the novel's characters whether this is a natural phenomenon or an artificially produced one, but it seems to correspond roughly with galactic-scale stellar density, and a Beyond region is mentioned in the Sculptor Galaxy as well. The Zones reflect fundamental differences in basic physical laws, and one of the main consequences is their effect on intelligence, both biological and artificial. Artificial intelligence and automation are most directly affected, in that advanced hardware and software from the Beyond or the Transcend will work less and less well as a ship "descends" towards the Unthinking Depths. But even biological intelligence is affected to a lesser degree. The four zones are spoken of in terms of "low" to "high" as follows:
The Unthinking Depths are the innermost zone, surrounding the Galactic Center. In it, only minimal forms of intelligence, biological or otherwise, are possible. This means that any ship straying into the Depths will be stranded, effectively permanently. Even if the crew did not die immediately—and some forms of life native to "higher" Zones would likely do so—they would be rendered incapable of even human intelligence, leaving them unable to operate their ship in any meaningful way.
Surrounding the Depths is the Slow Zone. "Old Earth" is in this Zone, and humanity is said to have originated there, although Earth plays no significant role in the story. Biological intelligence is possible in "the Slowness", but not true, sentient, artificial intelligence. Within the Slow Zone, automation is not intelligent enough to calculate the jumps required for faster-than-light (FTL) travel; a ship that strays in may escape only by performing an immediate reverse jump to where it came from, so navigation systems watch for the Slowness and store the information required during each jump. All ships in the Slow Zone are otherwise restricted to sub-light speeds. Faster-than-light communication is impossible into or out of the Slow Zone. As the boundaries of the Zones are subject to change, accidental entry into the Slow Zone is a major hazard at the "Bottom" of the Beyond, the next zone out. Starships which operate near the Beyond/Slow Zone border often carry an auxiliary Bussard ramjet drive, so that if they accidentally stray into the Slow Zone (disabling any FTL drive), they at least have a backup sub-light drive with which to try to reach the Beyond. Such ships also tend to include "coldsleep" equipment, as any such return is likely to take many lifetimes for most species.
The next layer outward is the Beyond, within which artificial intelligence and FTL travel and FTL communication are possible. A few human civilizations exist in the Beyond, all descended from a single ethnic Norwegian group which reached the Beyond. The original settlement of this group is known as Nyjora; other human settlements in the Beyond include Straumli Realm and Sjandra Kei. In the Beyond, FTL travel is accomplished by making many small "jumps" across space, with the efficiency of the drive increasing the farther a ship travels from the galactic core. This reflects increases in both drive efficiency and the ship's automation's increased capacity, enabling the computation of longer and longer jumps. The Beyond is not a homogeneous zone—many references are made to, e.g., the "High Beyond" or the "Bottom of the Beyond", depending on distance from the galactic core. These terms refer to differences in the Zone itself, not just relative distance from the Core, but there are no obvious Zone boundaries within the Beyond the way there are between the Slow Zone and the Beyond, or between the Beyond and the Transcend. Whereas a ship that crosses from the Beyond to the Slow Zone or vice versa will experience a dramatic change in its capabilities, a ship in the Beyond which moves farther out will experience a gradual increase in efficiency (assuming it has the technology to make use of it) until another major shift at the boundary with the Transcend. The Beyond is populated by a very large number of interstellar and intergalactic civilizations which are linked by an FTL communication network, "the Net", sometimes cynically called the "Net of a Million Lies". The Net does connect with the Transcend, on the off-chance that one of the "Powers" that live there deigns to communicate, but has no connections with the Slow Zone, as FTL communication is impossible into or out of that Zone. In the novel, the Net is depicted as working much like the Usenet network in the early 1990s, with transcripts of messages containing header and footer information as one would find in such forums.
The outermost layer, containing the galactic halo, is the Transcend, within which incomprehensible, superintelligent beings dwell. When a "Beyonder" civilization reaches the point of technological singularity, it can "Transcend", becoming a "Power". Such Powers always seem to relocate to the Transcend, seemingly necessarily, where they become engaged in activities which are entirely mysterious to those in the Beyond.
One of the characters in the book, the human Ravna, uses an analogy to explain the relation between the zones.
Plot
An expedition from Straumli Realm, a young human civilization in the high Beyond, investigates a newly discovered five-billion-year-old data archive in the low Transcend that offers the possibility of unimaginable riches. The expedition's facility, High Lab, is gradually and secretly compromised by an initially dormant superintelligence within the archive later known as the Blight. However, shortly before the Blight's final "flowering", two self-aware entities, created similarly to the Blight, plot to aid the humans before the Blight can gain its full powers.
Finally recognizing their danger, the High Lab researchers attempt to flee in two ships, one carrying the adults and the second carrying the children in "coldsleep boxes". The Blight discovers that the first ship lists a data storage device in its cargo manifest; assuming it contains information that could harm it, the Blight destroys the ship. The second ship escapes.
The ship lands on a distant planet with a medieval-level civilization of dog-like creatures, dubbed "Tines", who live in packs as group minds. Upon landing, however, the two surviving adults, husband and wife, are ambushed and killed by Tine fanatics known as Flenserists, in whose realm they have landed. The Flenserists capture a young boy named Jefri Olsndot and his wounded sister, Johanna. Johanna is rescued by a Tine named Peregrine who witnessed the ambush and taken to a neighboring kingdom ruled by a brilliant Tine named Woodcarver. Steel, the Flenserists' leader, tells Jefri that Johanna and their parents were killed by Woodcarver and exploits him in order to develop advanced technology (such as cannon and radio communication), while Johanna and the knowledge stored in her "dataset" device help Woodcarver rapidly develop as well. A highly placed Flenserist spy keeps Steel informed of Woodcarver's progress.
A distress signal from the sleeper ship eventually reaches "Relay", a major information/service provider for the galactic communications network. A benign transcendent being named "Old One" contacts Relay, seeking information about the Blight and the humans who released it, and reconstitutes a human man named Pham Nuwen from the wreckage of a spaceship to act as its agent, using his doubt of his own memory's veracity to keep him under its control. Ravna Bergsndot, the only human Relay employee, traces the sleeper ship's signal to the Tines' world and persuades her employer to investigate what it took from High Lab, contracting the merchant vessel Out of Band II, owned by two sentient plant "Skroderiders", Blueshell and Greenstalk, to transport her and Pham there.
Before the mission is launched, the Blight launches a surprise attack on Relay and kills Old One. As Old One dies, it downloads what anti-Blight information it can into Pham. Pham, Ravna and the Skroderiders barely escape Relay's destruction in the Out of Band II.
The Blight expands, taking over races and "rewriting" their people to become its agents, murdering several other Powers, and seizing archives as it searches for what was taken from High Lab – but it searches only in the Beyond. It finally realizes where the danger truly lies and sends a hastily assembled fleet in pursuit.
The humans arrive at the Tines' world first and ally with Woodcarver to defeat the Flenserists and rescue Jefri. Pham then initiates Countermeasure, which was aboard the humans' ship. Countermeasure extends the Slow Zone outward thousands of light years, enveloping and killing the Blight at the cost of wrecking thousands of civilizations and causing trillions of deaths. The humans are stranded on the Tines' world, now in the depths of the Slow Zone. Activating Countermeasure is fatal to Pham, but before he dies, the remnant of Old One within his mind reveals to him that, although his body is a reconstruction, his memories are real. (Vinge expands on Pham's backstory in A Deepness in the Sky.)
Intelligent species
Aprahanti
A race of humanoids with colorful butterfly-like wings who attempt to use the chaos wrought by the Blight to reestablish their waning hegemony. Despite their attractive, delicate appearance, the Aprahanti are an extremely fearsome and vicious species.
Blight
An ancient, malevolent super-intelligent entity which strives to constantly expand and can manipulate electronics and sentient beings.
Dirokimes
An older race which originally inhabited Sjandra Kei before the arrival of humanity. They work with the humans.
Humans
All humans in the novel (except Pham) are descended from Nyjoran stock. Their ancestors were "Tuvo-Norsk" asteroid miners from Old Earth's solar system, which is noted as being on the other side of the galaxy in the Slow Zone. (Nyjora sounds similar to the New Norwegian (Nynorsk) words for "new Earth".) One of the major human habitations is Sjandra Kei, three systems comprising roughly 28 billion individuals. Their main language is Samnorsk, the Norwegian term for a hypothetical unification of the Bokmål and Nynorsk forms of the language. (Vinge indicates in the book's dedication that several key ideas in it came to him while at a conference in Tromsø, Norway.)
Skroders/Riders/Skroderiders
A race of plant beings with fronds that serve as arms, the Riders have little native capacity for short-term memory. They are one of the longest-existing species; five billion years ago, someone gave them six-wheeled mechanical constructs ("skrodes") to move around and to provide short-term memory that made it easier for them to retain information well enough to become long-term memory in the "rider". It is later revealed that their "benefactor" is the Blight, which is able to easily corrupt and remotely operate the Riders via their skrodes.
Tines
A race of group minds, each Tine is a "pack" of doglike members, which communicate within the pack using very short-range ultrasonic waves from drumlike organs called "tympana". A pack of four to eight members possesses roughly human-level intelligence; a pack with fewer or more is less smart. Each "soul" can survive and evolve by adding members to replace those who die, potentially for hundreds of years, as Woodcarver does.
Related works
Vinge first used the concepts of "Zones of Thought" in the 1988 novella The Blabber, which occurs after Fire. Vinge's novel A Deepness in the Sky (1999) is a prequel to A Fire Upon the Deep set 20,000 years earlier and featuring Pham Nuwen. Vinge's The Children of the Sky, a near-term sequel to A Fire Upon the Deep set ten years later, was released in October 2011.
Vinge's former wife, Joan D. Vinge, has also written stories in the Zones of Thought universe, based on his notes. These include "The Outcasts of Heaven Belt", "Legacy", and (as of 2008) a planned novel featuring Pham Nuwen.
Title
Vinge's original title for the novel was "Among the Tines"; its final title was suggested by his editors.
Awards and nominations
A Fire Upon the Deep shared the 1993 Hugo Award for Best Novel with Doomsday Book. The book was nominated for the Nebula Award for Best Novel of 1992, the 1993 John W. Campbell Memorial Award for Best Science Fiction Novel, and the 1993 Locus Award for Best Science Fiction Novel.
Critical reactions
Jo Walton wrote: "Any one of the ideas in A Fire Upon the Deep would have kept an ordinary writer going for years. For me it's the book that does everything right, the example of what science fiction does when it works. ... A Fire Upon the Deep remains a favourite and a delight to re-read, absorbing even when I know exactly what's coming."
References
External links
A Fire Upon the Deep at Worlds Without End
The book with Vinge's commentaries
|
1992 American novels;1992 science fiction novels;Apocalyptic fiction;Fiction about artificial intelligence;Fiction about consciousness transfer;Fiction about malware;Fiction about nanotechnology;Hugo Award for Best Novel–winning works;Novels about technological singularity;Novels by Vernor Vinge;Tor Books books;Transhumanist books;Usenet
|
https://en.wikipedia.org/wiki/Asterix
|
Asterix ( or , "Asterix the Gaul"; also known as Asterix and Obelix in some adaptations or The Adventures of Asterix) is a French comic album series about a Gaulish village which, thanks to a magic potion that enhances strength, resists the forces of Julius Caesar's Roman Republic Army in a nonhistorical telling of the time after the Gallic Wars. Many adventures take the titular hero Asterix and his friend Obelix to Rome and beyond.
The series first appeared in the Franco-Belgian comic magazine Pilote on 29 October 1959. It was written by René Goscinny and illustrated by Albert Uderzo until Goscinny's death in 1977. Uderzo then took over the writing until 2009, when he sold the rights to publishing company Hachette; he died in 2020. In 2013, a new team consisting of Jean-Yves Ferri (script) and Didier Conrad (artwork) took over. As of 2023, 40 volumes have been released; the most recent was penned by new writer Fabcaro and released on 26 October 2023.
By that year, the volumes in total had sold 393 million copies, making them the best-selling European comic book series, and the second best-selling comic book series in history after One Piece.
Description
Asterix comics usually start with the following introduction: The year is 50 BC. Gaul is entirely occupied by the Romans. Well, not entirely... One small village of indomitable Gauls still holds out against the invaders. And life is not easy for the Roman legionaries who garrison the fortified camps of Totorum, Aquarium, Laudanum and Compendium...
The series follows the adventures of a village of Gauls as they resist Roman occupation in 50 BC. They do so using a magic potion, brewed by their druid Getafix (Panoramix in the French version), which temporarily gives the recipient superhuman strength. The protagonists, the title character Asterix and his friend Obelix, have various adventures. The "-ix" ending of both names (as well as all the other pseudo-Gaulish "-ix" names in the series) alludes to the "-rix" suffix (meaning "king", like "-rex" in Latin) present in the names of many real Gaulish chieftains such as Vercingetorix, Orgetorix, and Dumnorix.
In some of the stories, they travel to foreign countries, whilst other tales are set in and around their village. For much of the history of the series (volumes 4 through 29), settings in Gaul and abroad alternate, with even-numbered volumes set abroad and odd-numbered volumes set in Gaul, mostly in the village.
The Asterix series is one of the most popular Franco-Belgian comics in the world, having been translated into 111 languages and dialects.
The success of the series has led to the adaptation of its books into 15 films: ten animated, and five live action (two of which, Asterix & Obelix: Mission Cleopatra and Asterix and Obelix vs. Caesar, were major box office successes in France). There have also been a number of games based on the characters, and a theme park near Paris, Parc Astérix. The very first French satellite, Astérix, launched in 1965, was named after the character, whose name is close to the Greek ἀστήρ and Latin astrum, meaning "star". As of 20 April 2022, 385 million copies of Asterix books had been sold worldwide and translated into 111 languages, making it the world's most widely translated comic book series, with co-creators René Goscinny and Albert Uderzo being France's best-selling authors abroad.
In April 2022, Albert and René's general director, Céleste Surugue, hosted a 45-minute talk entitled "The Next Incarnation of a Heritage Franchise: Asterix", in which he discussed the success of the Asterix franchise, noting: "The idea was to find a subject with a strong connection with French culture and, while looking at the country's history, they ended up choosing its first defeat, namely the Gaul's Roman colonisation". He also went on to say how, since 1989, Parc Asterix has attracted an average of 2.3 million visitors per year. Other notable mentions were how the franchise includes 10 animated movies, which recorded over 53 million viewers worldwide. The inception of Studios Idéfix in 1974 and the opening of Studio 58 in 2016 were among the necessary steps to make Asterix a "100% Gaulish production", considered the best solution to keep the creative process under control from start to finish and to employ French manpower. He also noted how a new album is now published every two years, with print figures of 5 million and an estimated readership of 20 million.
Publication history
Prior to creating the Asterix series, Goscinny and Uderzo had had success with their series Oumpah-pah, which was published in Tintin magazine.
Astérix was originally serialised in Pilote magazine, debuting in the first issue on 29 October 1959. In 1961, the first book was put together, titled Asterix the Gaul. From then on, books were released generally on a yearly basis. Their success was exponential; the first book sold 6,000 copies in its year of publication; a year later, the second sold 20,000. In 1963, the third sold 40,000; the fourth, released in 1964, sold 150,000. A year later, the fifth sold 300,000; 1966's Asterix and the Big Fight sold 400,000 upon initial publication. The ninth Asterix volume, when first released in 1967, sold 1.2 million copies in two days.
Uderzo's first preliminary sketches portrayed Asterix as a huge and strong traditional Gaulish warrior. But Goscinny had a different picture in his mind, visualizing Asterix as a shrewd, compact warrior who would possess intelligence and wit more than raw strength. However, Uderzo felt that the downsized hero needed a strong but dim companion, to which Goscinny agreed. Hence, Obelix was born. Despite the growing popularity of Asterix with the readers, the financial backing for the publication Pilote ceased. Pilote was taken over by Georges Dargaud.
When Goscinny died in 1977, Uderzo continued the series by popular demand of the readers, who implored him to continue. He continued to issue new volumes of the series, but on a less frequent basis. Many critics and fans of the series prefer the earlier collaborations with Goscinny. Uderzo created his own publishing company, Éditions Albert René, which has published every album drawn and written by Uderzo alone since then. However, Dargaud, the initial publisher of the series, kept the publishing rights on the first 24 albums made by both Uderzo and Goscinny. In 1990, the Uderzo and Goscinny families decided to sue Dargaud to take over the rights. In 1998, after a long trial, Dargaud lost the rights to publish and sell the albums. Uderzo decided to sell these rights to Hachette instead of Albert-René, but the publishing rights on new albums were still owned by Albert Uderzo (40%), Sylvie Uderzo (20%) and Anne Goscinny (40%).
In December 2008, Uderzo sold his stake to Hachette, which took over the company. In a letter published in the French newspaper Le Monde in 2009, Uderzo's daughter, Sylvie, attacked her father's decision to sell the family publishing firm and the rights to produce new Astérix adventures after his death. She said:
... the co-creator of Astérix, France's comic strip hero, has betrayed the Gaulish warrior to the modern-day Romans – the men of industry and finance.
However, René Goscinny's daughter, Anne, also gave her agreement to the continuation of the series and sold her rights at the same time. She is reported to have said that "Asterix has already had two lives: one during my father's lifetime and one after it. Why not a third?". A few months later, Uderzo appointed three illustrators, who had been his assistants for many years, to continue the series. In 2011, Uderzo announced that a new Asterix album was due out in 2013, with Jean-Yves Ferri writing the story and Frédéric Mébarki drawing it. A year later, in 2012, the publisher Albert-René announced that Frédéric Mébarki had withdrawn from drawing the new album, due to the pressure he felt in following in the steps of Uderzo. Comic artist Didier Conrad was officially announced to take over drawing duties from Mébarki, with the due date of the new album in 2013 unchanged.
In January 2015, after the murders of seven cartoonists at the satirical Paris weekly Charlie Hebdo, Astérix creator Albert Uderzo came out of retirement to draw two Astérix pictures honouring the memories of the victims.
List of titles
Numbers 1–24, 32 and 34 are by Goscinny and Uderzo. Numbers 25–31 and 33 are by Uderzo alone. Numbers 35–39 are by Jean-Yves Ferri and Didier Conrad. Years stated are for their initial album release.
Asterix the Gaul (1961)
Asterix and the Golden Sickle (1962)
Asterix and the Goths (1963)
Asterix the Gladiator (1964)
Asterix and the Banquet (1965)
Asterix and Cleopatra (1965)
Asterix and the Big Fight (1966)
Asterix in Britain (1966)
Asterix and the Normans (1967)
Asterix the Legionary (1967)
Asterix and the Chieftain's Shield (1967)
Asterix at the Olympic Games (1968)
Asterix and the Cauldron (1969)
Asterix in Spain (1969)
Asterix and the Roman Agent (1970)
Asterix in Switzerland (1970)
The Mansions of the Gods (1971)
Asterix and the Laurel Wreath (1972)
Asterix and the Soothsayer (1972)
Asterix in Corsica (1973)
Asterix and Caesar's Gift (1974)
Asterix and the Great Crossing (1975)
Obelix and Co. (1976)
Asterix in Belgium (1979)
Asterix and the Great Divide (1980)
Asterix and the Black Gold (1981)
Asterix and Son (1983)
Asterix and the Magic Carpet (1987)
Asterix and the Secret Weapon (1991)
Asterix and Obelix All at Sea (1996)
Asterix and the Actress (2001)
Asterix and the Class Act (2003)
Asterix and the Falling Sky (2005)
Asterix and Obelix's Birthday: The Golden Book (2009)
Asterix and the Picts (2013)
Asterix and the Missing Scroll (2015)
Asterix and the Chariot Race (2017)
Asterix and the Chieftain's Daughter (2019)
Asterix and the Griffin (2021)
Asterix and the White Iris (2023)
(2025)
Non-canonical volumes:
Asterix Conquers Rome (1976) – comic; originally released as the 23rd volume, before Obelix and Co.
How Obelix Fell into the Magic Potion When he was a Little Boy (1989) – special issue album
Uderzo Croqué par ses Amis (Uderzo sketched by his friends) (1996) – tribute album by various artists
The Twelve Tasks of Asterix (2016) – special issue album, illustrated text
Asterix Conquers Rome is a comics adaptation of the animated film The Twelve Tasks of Asterix. It was released in 1976 as the 23rd volume to be published, but it has been rarely reprinted and is not considered canonical to the series. Its only English translation appeared in the Asterix Annual 1980; it has never been published as a standalone English volume. A picture-book version of the same story was published in English translation as The Twelve Tasks of Asterix by Hodder & Stoughton in 1978.
In 1996, a tribute album in honour of Albert Uderzo was released titled Uderzo Croqué par ses Amis, a volume containing 21 short stories with Uderzo in Ancient Gaul. This volume was published by Soleil Productions and has not been translated into English.
In 2007, Éditions Albert René released a tribute volume titled Astérix et ses Amis, a 60-page volume of one-to-four-page short stories. It was a tribute to Albert Uderzo on his 80th birthday by 34 European cartoonists. The volume was translated into nine languages, but has not been translated into English.
In 2016, the French publisher Hachette, along with Anne Goscinny and Albert Uderzo decided to make the special issue album The XII Tasks of Asterix for the 40th anniversary of the film The Twelve Tasks of Asterix. There was no English edition.
Synopsis and characters
The main setting for the series is an unnamed coastal village, rumoured to be inspired by Erquy in Armorica (present-day Brittany), a province of Gaul (modern France), in the year 50 BC. Julius Caesar has conquered nearly all of Gaul for the Roman Republic during the Gallic Wars. The little Armorican village, however, has held out because the villagers can gain temporary superhuman strength by drinking a magic potion brewed by the local village druid, Getafix. The village chief is Vitalstatistix.
The main protagonist and hero of the village is Asterix, who, because of his shrewdness, is usually entrusted with the most important affairs of the village. He is aided in his adventures by his rather corpulent and slower thinking friend, Obelix, who, because he fell into the druid's cauldron of the potion as a baby, has permanent superhuman strength (because of this, Getafix steadfastly refuses to allow Obelix to drink the potion, as doing so would have a dangerous and unpredictable result, as shown in Asterix and Obelix All at Sea). Obelix is usually accompanied by Dogmatix, his little dog. (Except for Asterix and Obelix, the names of the characters change with the language. For example, Obelix's dog's name is "Idéfix" in the original French edition.)
Asterix and Obelix (and sometimes other members of the village) go on various adventures both within the village and in far away lands. Places visited in the series include parts of Gaul (Lutetia, Corsica etc.), neighbouring nations (Belgium, Spain, Britain, Germany etc.), and far away lands (North America, Middle East, India etc.).
The series employs science-fiction and fantasy elements in the more recent books; for instance, the use of extraterrestrials in Asterix and the Falling Sky and the city of Atlantis in Asterix and Obelix All at Sea.
With rare exceptions, the albums end with a big banquet gathering the village's inhabitants – the sole exclusion being the bard Cacofonix, who is restrained and gagged to prevent him from singing (although in Asterix and the Normans it was the blacksmith Fulliautomatix who was tied up). The banquets are mostly held under the stars in the village, where roast boar is devoured and all (but one) make merry. There are a few exceptions, however, such as in Asterix and Cleopatra.
Humour
The humour encountered in the Asterix comics often centers around puns, caricatures, and tongue-in-cheek stereotypes of contemporary European nations and French regions. Much of the multi-layered humour in the initial Asterix books was French-specific, which delayed the translation of the books into other languages for fear of losing the jokes and the spirit of the story. Some translations have actually added local humour: In the Italian translation, the Roman legionaries are made to speak in 20th-century Roman dialect, and Obelix's famous Ils sont fous, ces Romains ! ("These Romans are crazy") is translated as Sono pazzi questi romani, a long-established humorous expansion of the Roman abbreviation SPQR. In another example: Hiccups are written onomatopoeically in French as hips, but in English as "hic", allowing Roman legionaries in more than one of the English translations to decline their hiccups absurdly in Latin (hic, haec, hoc). The newer albums share a more universal humour, both written and visual.
Character names
All the fictional characters in Asterix have names which are puns on their roles or personalities, and which follow certain patterns specific to nationality. Certain rules are followed (most of the time) such as Gauls (and their neighbours) having an "-ix" suffix for the men and ending in "-a" for the women; for example, Chief Vitalstatistix (so called due to his portly stature) and his wife Impedimenta (often at odds with the chief). The male Roman names end in "-us", echoing Latin nominative male singular form, as in Gluteus Maximus, a muscle-bound athlete whose name is literally the butt of the joke. Gothic names (present-day Germany) end in "-ic", after Gothic chiefs such as Alaric and Theoderic; for example Rhetoric the interpreter. Greek names end in "-os" or "-es"; for example, Thermos the restaurateur. British names usually end in "-ax" or "-os" and are often puns on the taxation associated with the later United Kingdom; examples include Mykingdomforanos, a British tribal chieftain, Valuaddedtax the druid, and Selectivemploymentax the mercenary. Names of Normans end with "-af", for example Nescaf or Cenotaf. Egyptian characters often end in -is, such as the architects Edifis and Artifis, and the scribe Exlibris. Indic names, apart from the only Indic female characters Orinjade and Lemuhnade, exhibit considerable variation; examples include Watziznehm, Watzit, Owzat, and Howdoo. Other nationalities are treated to pidgin translations from their language, like Huevos y Bacon, a Spanish chieftain (whose name, meaning eggs and bacon, is often guidebook Spanish for tourists), or literary and other popular media references, like Dubbelosix (a sly reference to James Bond's codename "007").
Most of these jokes, and hence the names of the characters, are specific to the translation; for example, the druid named Getafix in English translation – "get a fix", referring to the character's role in dispensing the magic potion – is Panoramix in the original French and Miraculix in German. Even so, occasionally the wordplay has been preserved: Obelix's dog, known in the original French as Idéfix (from idée fixe, a "fixed idea" or obsession), is called Dogmatix in English, which not only renders the original meaning strikingly closely ("dogmatic") but in fact adds another layer of wordplay with the syllable "Dog-" at the beginning of the name.
The name Asterix, French Astérix, comes from astérisque, meaning "asterisk", which is the typographical symbol * indicating a footnote, from the Greek word ἀστήρ (aster), meaning a "star". His name is usually left unchanged in translations, aside from accents and the use of local alphabets. For example, in Esperanto, Polish, Slovene, Latvian, and Turkish it is Asteriks (in Turkish he was first named Bücür, meaning "shorty", but the name was then standardised). Two exceptions include Icelandic, in which he is known as Ástríkur ("Rich of love"), and Sinhala, where he is known as Soora Pappa, which can be interpreted as "Hero". The name Obelix (Obélix) may refer to "obelisk", a stone column from ancient Egypt (and hence his large size and strength and his task of carrying around menhirs), but also to another typographical symbol, the obelisk or obelus (†).
For explanations of some of the other names, see List of Asterix characters.
Ethnic stereotypes
Many of the Asterix adventures take place in other countries away from their homeland in Gaul. In every album that takes place abroad, the characters meet (usually modern-day) stereotypes for each country, as seen by the French.
Italics (Italians) are the inhabitants of Italy. In the adventures of Asterix, the term "Romans" is used by non-Italics to refer to all inhabitants of Italy, who at that time had extended their dominion over a large part of the Mediterranean basin. But as can be seen in Asterix and the Chariot Race, in the Italian Peninsula this term is used only to refer to the people from the capital, with many Italics preferring to identify themselves as Umbrians, Etruscans, Venetians, etc. Various topics relating to the country are explored, such as Italian cuisine (pasta, pizza, wine), art, famous people (Luciano Pavarotti, Silvio Berlusconi, Leonardo da Vinci's Mona Lisa), and even the controversial issue of political corruption. Romans in general appear more similar to the historical Romans than to modern-age Italians.
Goths (Germans) are disciplined and militaristic, but divided into many factions that fight amongst each other (which is a reference to Germany before Otto von Bismarck, and to the rivalry between East Germany and West Germany in the Aftermath of World War II), and they wear the Pickelhaube helmet common during the German Empire. In later appearances, the Goths tend to be more good-natured.
Helvetians (Swiss) are neutral, eat fondue, and are obsessed with cleaning, accurate time-keeping, and banks.
The Britons (English) are phlegmatic, and speak with early 20th-century aristocratic slang (similar to Bertie Wooster). They stop for tea every day (making it with hot water and a drop of milk until Asterix brings them actual tea leaves), drink lukewarm beer (Bitter), eat foods with mint sauce that are considered tasteless by the non-Briton characters (Rosbif), and live in streets containing rows of identical houses. In Asterix and Obelix: God Save Britannia the Britons all wore woollen pullovers and Tam o' shanters.
Hibernians (Irish) inhabit Hibernia, the Latin name of Ireland and they fight against the Romans alongside the Britons to defend the British Isles.
Iberians (Spanish) are filled with pride and have rather choleric tempers. They produce olive oil, provide very slow aid for chariot problems on the Roman roads and (thanks to Asterix) adopt bullfighting as a tradition.
When the Gauls visited North America in Asterix and the Great Crossing, Obelix punches one of the attacking Native Americans with a knockout blow. The warrior first hallucinates American-style emblematic eagles; the second time, he sees stars in the formation of the Stars and Stripes; the third time, he sees stars shaped like the United States Air Force roundel. Asterix's inspired idea for getting the attention of a nearby Viking ship (which could take them back to Gaul) is to hold up a torch; this refers to the Statue of Liberty (which was a gift from France).
Corsicans are proud, patriotic, and easily aroused but lazy, making decisions by using pre-filled ballot boxes. They harbour vendettas against each other, and always take their siesta.
Greeks are chauvinists and consider Romans, Gauls, and all others to be barbarians. They eat stuffed grape leaves (dolma), drink resinated wine (retsina), and are hospitable to tourists. Most seem to be related by blood, and often suggest some cousin appropriate for a job. Greek characters are often depicted in side profile, making them resemble figures from classical Greek vase paintings.
Normans (Vikings) drink endlessly, always use cream in their cuisine, and do not know what fear is (which they are trying to discover); in their home territory (Scandinavia), the night lasts for six months. Their depiction in the albums is a mix of stereotypes of Scandinavian Vikings and the Norman French. Their names end in "-af".
Cimbres (Danes) are very similar to the Normans with the greatest difference being that the Gauls are unable to communicate with them. Their names end in "-sen", a common ending of surnames in Denmark and Norway akin to "-son".
Belgians speak with a funny accent, snub the Gauls, and always eat sliced roots deep-fried in bear fat. They also tell Belgian jokes.
Lusitanians (Portuguese) are short in stature and polite (Uderzo said all the Portuguese who he had met were like that). Their most recent appearance in the albums depicts them with an easy-going and procrastinating nature.
The Indians have elephant trainers, as well as gurus who can fast for weeks and levitate on magic carpets. They worship thirty-three million deities and consider cows as sacred. They also bathe in the Ganges river.
Egyptians are short with prominent noses, endlessly engaged in building pyramids and palaces. Their favorite food is lentil soup and they sail feluccas along the banks of the Nile River.
Persians (Iranians) produce carpets and staunchly refuse to mend foreign ones. They eat caviar, as well as roasted camel and the women wear burqas.
Hittites, Sumerians, Akkadians, Assyrians, and Babylonians are perpetually at war with each other and attack strangers because they confuse them with their enemies, but they later apologize when they realize that the strangers are not their enemies. This is likely a criticism of the constant conflicts among the Middle Eastern peoples.
The Jews are all depicted as Yemenite Jews, with dark skin, black eyes, and beards, a tribute to Marc Chagall, the famous painter whose painting of King David hangs at the Knesset (Israeli Parliament).
Numidians, contrary to the Berber inhabitants of ancient Numidia (located in North Africa), are obviously Africans from sub-Saharan Africa. The names end in "-tha", similar to the historical king Jugurtha of Numidia.
The Picts (Scots) wear a typical dress with a kilt (skirt), have the habit of drinking "malt water" (whisky) and throwing logs (caber tossing) as a popular sport and their names all start with "Mac-".
Sarmatians (Ukrainians) inhabit the North Black Sea area, which represents present-day Ukraine. Their names end in "-ov", like many Ukrainian surnames.
When the Gauls see foreigners speaking their foreign languages, these have different representations in the cartoon speech bubbles:
Iberian: Same as Spanish, with inverted exclamation marks ("¡") and question marks ("¿")
Goth language: Gothic script (incomprehensible to the Gauls, except Getafix, who speaks Gothic)
Cimbres: "Ø" and "Å" instead of "O" and "A" (incomprehensible to the Gauls)
Amerindian: Pictograms and sign language (generally incomprehensible to the Gauls)
Egyptians and Kushites: Hieroglyphs with explanatory footnotes (incomprehensible to the Gauls)
Greek: Straight letters, carved as if in stone
Sarmatian: In their speech balloons, some letters (E, F, N, R ...) are written in a mirror-reversed form, which evokes the modern Cyrillic alphabet.
Translations
The various volumes have been translated into more than 120 languages and dialects. Besides the original French language, most albums are available in Arabic, Basque, Bulgarian, Catalan, Chinese, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, Galician, German, Greek, Hebrew, Hindi, Icelandic, Irish, Italian, Japanese, Korean, Latin, Latvian, Norwegian, Polish, Portuguese, Romanian, Russian, Serbian, Slovene, Spanish, Swedish, Turkish, and Ukrainian.
Some books have also been translated into languages including Esperanto, Scottish Gaelic, Irish, Scots, Indonesian, Hindi, Persian, Bengali, Afrikaans, Arabic, Frisian, Romansch, Thai, Vietnamese, Welsh, Sinhala, Ancient Greek, and Luxembourgish.
In Europe, several volumes were translated into a variety of regional languages and dialects, such as Alsatian, Breton, Chtimi (Picard), and Corsican in France; Bavarian, Swabian, and Low German in Germany; and Savo, Karelia, Rauma, and Helsinki slang dialects in Finland. In Portugal a special edition of the first volume, Asterix the Gaul, was translated into local language Mirandese. In Greece, a number of volumes have appeared in the Cretan Greek, Cypriot Greek, and Pontic Greek dialects. In the Italian version, while the Gauls speak standard Italian, the legionaries speak in the Romanesque dialect. In the former Yugoslavia, the "Forum" publishing house translated Corsican text in Asterix in Corsica into the Montenegrin dialect of Serbo-Croatian (today called Montenegrin).
In the Netherlands, several volumes were translated into West Frisian, a Germanic language spoken in the province of Friesland; into Limburgish, a regional language spoken not only in Dutch Limburg but also in Belgian Limburg and North Rhine-Westphalia, Germany; and into Tweants, a dialect in the region of Twente in the eastern province of Overijssel. Hungarian-language books were published in the former Yugoslavia for the Hungarian minority living in Serbia. Although not translated into a fully autonomous dialect, the books differ slightly from the language of the books issued in Hungary. In Sri Lanka, the cartoon series was adapted into Sinhala as Sura Pappa.
Most volumes have been translated into Latin and Ancient Greek, with accompanying teachers' guides, as a way of teaching these ancient languages.
English translation
Before Asterix became famous in the English-speaking world, translations of some strips were published in British comics including Valiant, Ranger, and Look & Learn, under names Little Fred and Big Ed and Beric the Bold, set in Roman-occupied Britain. These were included in an exhibition on Goscinny's life and career, and Asterix, in London's Jewish Museum in 2018.
In 1970, William Morrow and Company published English translations in hardback of three Asterix albums for the American market. These were Asterix the Gaul, Asterix and Cleopatra and Asterix the Legionary. Lawrence Hughes in a letter to The New York Times stated, "Sales were modest, with the third title selling half the number of the first. I was publisher at the time, and Bill Cosby tried to buy film and television rights. When that fell through, we gave up the series."
The first 33 Asterix albums were translated into English by Anthea Bell and Derek Hockridge (including the three volumes reprinted by William Morrow), who were widely praised for maintaining the spirit and humour of the original French versions. Hockridge died in 2013, so Bell translated books 34 to 36 by herself, before retiring in 2016 for health reasons. She died in 2018. Adriana Hunter became translator.
US publisher Papercutz in December 2019 announced it would begin publishing "all-new more American translations" of the Asterix books, starting on 19 May 2020. The launch was postponed to 15 July 2020 as a result of the COVID-19 pandemic. The new translator is Joe Johnson, a professor of French and Spanish at Clayton State University.
Adaptations
The series has been adapted into various media. There are 18 films, 15 board games, 40 video games, and 1 theme park.
Films
Deux Romains en Gaule, 1967 black and white television film, mixed media, live-action with Asterix and Obelix animated. Released on DVD in 2002.
Asterix the Gaul, 1967, animated, based on the album Asterix the Gaul.
Asterix and the Golden Sickle, 1967, animated, based upon the album Asterix and the Golden Sickle, incomplete and never released.
Asterix and Cleopatra, 1968, animated, based on the album Asterix and Cleopatra.
The Dogmatix movie, 1973, animated, a unique story based on Dogmatix and his animal friends. The film was never released; Albert Uderzo created a comic version of it in 2003 (consisting of eight comics, as the film was a combination of eight different stories).
The Twelve Tasks of Asterix, 1976, animated, a unique story not based on an existing comic.
Asterix Versus Caesar, 1985, animated, based on both Asterix the Legionary and Asterix the Gladiator.
Asterix in Britain, 1986, animated, based upon the album Asterix in Britain.
Asterix and the Big Fight, 1989, animated, based on both Asterix and the Big Fight and Asterix and the Soothsayer.
Asterix Conquers America, 1994, animated, loosely based upon the album Asterix and the Great Crossing.
Asterix and Obelix vs. Caesar, 1999, live-action, based primarily upon Asterix the Gaul, Asterix and the Soothsayer, Asterix and the Goths, Asterix the Legionary, and Asterix the Gladiator.
Asterix & Obelix: Mission Cleopatra, 2002, live-action, based upon the album Asterix and Cleopatra.
Asterix and the Vikings, 2006, animated, loosely based upon the album Asterix and the Normans along with some side references to Asterix and the Great Crossing.
Asterix at the Olympic Games, 2008, live-action, loosely based upon the album Asterix at the Olympic Games.
Asterix and Obelix: God Save Britannia, 2012, live-action, loosely based upon the album Asterix in Britain and Asterix and the Normans.
Asterix: The Mansions of the Gods, 2014, animated, based upon the album The Mansions of the Gods and is the first animated Asterix movie in stereoscopic 3D.
Asterix: The Secret of the Magic Potion, 2018, animated, original story.
Asterix & Obelix: The Middle Kingdom, 2023, live-action, original story, consisting of Asterix and Obelix traveling to China to rescue the empress from Julius Caesar and his ally, Prince Deng Tsin Quin.
Television series
Dogmatix and the Indomitables, an animated series of eleven-minute episodes, was produced by Studio 58 and Futurikon, and premiered on the Okoo streaming service on 2 July 2021 before beginning its linear broadcast on France 4 on 28 August 2021. The animation is produced by o2o Studio.
The show is distributed globally by LS Distribution.
Asterix and Obelix: The Big Fight, a CG-animated miniseries based on the 1966 album, and directed by Alain Chabat, debuted on Netflix in 2025.
Games
Many gamebooks, board games and video games are based upon the Asterix series. In particular, many video games were released by various computer game publishers.
Theme park
Parc Astérix, a theme park 22 miles north of Paris, based upon the series, was opened in 1989. It is one of the most visited sites in France, with around 2.3 million visitors per year.
In popular culture
The first French satellite, which was launched in 1965, was named Astérix-1 in honour of Asterix. Asteroids 29401 Asterix and 29402 Obelix were also named in honour of the characters. Coincidentally, the word Asterix/Asterisk originates from the Greek for Little Star.
During Paris's 1986 campaign to host the 1992 Summer Olympics, Asterix appeared in many posters over the Eiffel Tower. The bid was lost to Barcelona; Paris eventually hosted the 2024 Summer Olympics, held 32 years later, after Tokyo in 2021.
The French company Belin introduced a series of Asterix crisps shaped in the forms of Roman shields, gourds, wild boar, and bones.
In the UK in 1995, Asterix coins were given away free in every jar of Ferrero Nutella.
In 1991, Asterix and Obelix appeared on the cover of Time for a special edition about France, art directed by Mirko Ilić. In a 2009 issue of the same magazine, Asterix is described as being seen by some as a symbol for France's independence and defiance of globalisation. Despite this, Asterix has made several promotional appearances for fast food chain McDonald's, including one advertisement which featured members of the village enjoying the traditional story-ending feast at a McDonald's restaurant.
Version 4.0 of the operating system OpenBSD features a parody of an Asterix story.
Action Comics Issue #579, published by DC Comics in 1986, written by Lofficier and illustrated by Keith Giffen, featured a homage to Asterix in which Superman and Jimmy Olsen are drawn back in time to a small village of indomitable Gauls.
In 2005, the Mirror World Asterix exhibition was held in Brussels. The Belgian post office also released a set of stamps to coincide with the exhibition. A book was released to coincide with the exhibition, containing sections in French, Dutch and English.
On 29 October 2009, the Google homepage of a great number of countries displayed a logo (called Google Doodle) commemorating the 50th anniversary of Asterix.
Although they have since changed, the #2 and #3 heralds in the Society for Creative Anachronism's Kingdom of Ansteorra were the Asterisk and Obelisk Heralds.
Asterix and Obelix were the official mascots of the 2017 IIHF World Championships, jointly hosted by France and Germany.
In 2019, France issued a commemorative €2 coin to celebrate the 60th anniversary of Asterix.
The Royal Canadian Navy has a supply vessel named MV Asterix. A second Resolve-class ship, to have been named MV Obelix, was cancelled.
Asterix, Obelix and Vitalstatistix appear in Larry Gonick's The Cartoon History of the Universe volume 2, especially in the depiction of the Gallic invasion of Italy (390 – 387 BCE). In the final panel of that sequence, as they trudge off into the sunset, Obelix says "Come on, Asterix! Let's get our own comic book."
See also
List of Asterix characters
Bande dessinée
English translations of Asterix
List of Asterix games
List of Asterix volumes
Kajko i Kokosz
Potion
Roman Gaul, which after Julius Caesar's conquest of 58–51 BC consisted of five provinces
Commentarii de Bello Gallico
References
Sources
Astérix publications in Pilote BDoubliées
Astérix albums Bedetheque
Further reading
– This is Chapter #16, in Part III: Translations, Transformations, Migrations
Tosina Fernández, Luis J. "Creatividad paremiológica en las traducciones al castellano de Astérix". Proverbium vol. 38, 2021, pp. 361–376. Proverbium PDF
Tosina Fernández, Luis J. "Paremiological Creativity and Visual Representation of Proverbs: An Analysis of the Use of Proverbs in the Adventures of Asterix the Gaul". Proceedings of the Fourteenth Interdisciplinary Colloquium on Proverbs, 2 to 8 November 2020, at Tavira, Portugal, edited by Rui J.B. Soares and Outi Lauhakangas, Tavira: Tipografia Tavirense, 2021, pp. 256–277.
External links
Official site
Asterix the Gaul at Don Markstein's Toonopedia, from the original on 6 April 2012.
Asterix around the World – The many languages
Alea Jacta Est (Asterix for grown-ups) Each Asterix book is examined in detail
Les allusions culturelles dans Astérix – Cultural allusions
The Asterix Annotations – album-by-album explanations of all the historical references and obscure in-jokes
|
;1959 comics debuts;1959 establishments in France;Alternate history comics;Armorica;Bandes dessinées;Comic franchises;Comics adapted into animated films;Comics adapted into animated series;Comics adapted into video games;Comics by Albert Uderzo;Comics set in Brittany;Comics set in France;Comics set in ancient Rome;Comics set in the 1st century BC;Dargaud titles;Fantasy comics;Fiction about rebellions;French comic strips;French comics adapted into films;Gallia Lugdunensis;Historical comics;Humor comics;Lagardère SCA franchises;Pilote titles;Pirate comics;Potions;Satirical comics;Slapstick comedy;Works about rebellions;Works about rebels;Works set in Roman Gaul
|
https://en.wikipedia.org/wiki/AppleTalk
|
AppleTalk is a discontinued proprietary suite of networking protocols developed by Apple Computer for their Macintosh computers. AppleTalk includes a number of features that allow local area networks to be connected with no prior setup or the need for a centralized router or server of any sort. Connected AppleTalk-equipped systems automatically assign addresses, update the distributed namespace, and configure any required inter-networking routing.
AppleTalk was released in 1985 and was the primary protocol used by Apple devices through the 1980s and 1990s. Versions were also released for the IBM PC and compatibles and the Apple IIGS. AppleTalk support was also available in most networked printers (especially laser printers), some file servers, and a number of routers.
The rise of TCP/IP during the 1990s led to a reimplementation of most of these types of support on that protocol, and AppleTalk became unsupported as of the release of Mac OS X v10.6 in 2009. Many of AppleTalk's more advanced autoconfiguration features have since been introduced in Bonjour, while Universal Plug and Play serves similar needs.
History
AppleNet
After the release of the Apple Lisa computer in January 1983, Apple invested considerable effort in the development of a local area networking (LAN) system for the machines. Known as AppleNet, it was based on the seminal Xerox XNS protocol stack but running on a custom 1 Mbit/s coaxial cable system rather than Xerox's 2.94 Mbit/s Ethernet. AppleNet was announced early in 1983 with a full introduction at the target price of $500 for plug-in AppleNet cards for the Lisa and the Apple II.
At that time, early LAN systems were just coming to market, including Ethernet, Token Ring, Econet, and ARCNET. This was a topic of major commercial effort at the time, dominating shows like the National Computer Conference (NCC) in Anaheim in May 1983. All of the systems were jockeying for position in the market, but even at this time, Ethernet's widespread acceptance suggested it was to become a de facto standard. It was at this show that Steve Jobs asked Gursharan Sidhu a seemingly innocuous question: "Why has networking not caught on?"
Four months later, in October, AppleNet was cancelled. At the time, they announced that "Apple realized that it's not in the business to create a networking system. We built and used AppleNet in-house, but we realized that if we had shipped it, we would have seen new standards coming up." In January, Jobs announced that they would instead be supporting IBM's Token Ring, which he expected to come out in a "few months".
AppleBus
Through this period, Apple was deep in development of the Macintosh computer. During development, engineers had made the decision to use the Zilog 8530 serial controller chip (SCC) instead of the lower-cost and more common UART to provide serial port connections. The SCC cost about $5 more than a UART, but offered much higher speeds of up to 250 kilobits per second (or higher with additional hardware) and internally supported a number of basic networking-like protocols like IBM's Bisync.
The SCC was chosen because it would allow multiple devices to be attached to the port. Peripherals equipped with similar SCCs could communicate using the built-in protocols, interleaving their data with other peripherals on the same bus. This would eliminate the need for more ports on the back of the machine, and allowed for the elimination of expansion slots for supporting more complex devices. The initial concept was known as AppleBus, envisioning a system controlled by the host Macintosh polling "dumb" devices in a fashion similar to the modern Universal Serial Bus.
AppleBus networking
The Macintosh team had already begun work on what would become the LaserWriter and had considered a number of other options to answer the question of how to share these expensive machines and other resources. A series of memos from Bob Belleville clarified these concepts, outlining the Mac, LaserWriter, and a file server system which would become the Macintosh Office. By late 1983 it was clear that IBM's Token Ring would not be ready in time for the launch of the Mac, and might miss the launch of these other products as well. In the end, Token Ring would not ship until October 1985.
Jobs' earlier question to Sidhu had already sparked a number of ideas. When AppleNet was cancelled in October, Sidhu led an effort to develop a new networking system based on the AppleBus hardware. This new system would not have to conform to any existing preconceptions, and was designed to be worthy of the Mac – a system that was user-installable and required no configuration or fixed network addresses – in short, a true plug-and-play network. Considerable effort was needed, but by the time the Mac was released, the basic concepts had been outlined, and some of the low-level protocols were on their way to completion. Sidhu mentioned the work to Belleville only two hours after the Mac was announced.
The "new" AppleBus was announced in early 1984, allowing direct connection from the Mac or Lisa through a small box that is plugged into the serial port and connected via cables to the next computer upstream and downstream. Adaptors for Apple II and Apple III were also announced. Apple also announced that an AppleBus network could be attached to, and would appear to be a single node within, a Token Ring system. Details of how this would work were sketchy.
AppleTalk Personal Network
Just prior to its release in early 1985, AppleBus was renamed AppleTalk. Initially marketed as AppleTalk Personal Network, it comprised a family of network protocols and a physical layer.
The physical layer had a number of limitations, including a speed of only 230.4 kbit/s, a maximum distance of roughly 300 metres (1,000 feet) from end to end, and only 32 nodes per LAN. But as the basic hardware was built into the Mac, adding nodes only cost about $50 for the adaptor box. In comparison, Ethernet or Token Ring cards cost hundreds or thousands of dollars. Additionally, the entire networking stack required only about 6 kB of RAM, allowing it to run on any Mac.
The relatively slow speed of AppleTalk allowed further reductions in cost. Instead of using RS-422's balanced transmit and receive circuits, the AppleTalk cabling used a single common electrical ground, which limited speeds to about 500 kbit/s, but allowed one conductor to be removed. This meant that common three-conductor cables could be used for wiring. Additionally, the adaptors were designed to be "self-terminating", meaning that nodes at the end of the network could simply leave their last connector unconnected. There was no need for the wires to be connected back together into a loop, nor the need for hubs or other devices.
The system was designed for future expansion; the addressing system allowed for expansion to 255 nodes in a LAN (although only 32 could be used at that time), and by using "bridges" (which came to be known as "routers", although technically not the same) one could interconnect LANs into larger collections. "Zones" allowed devices to be addressed within a bridge-connected internet. Additionally, AppleTalk was designed from the start to allow use with any potential underlying physical link, and within a few years, the physical layer would be renamed LocalTalk, so as to differentiate it from the AppleTalk protocols.
The main advantage of AppleTalk was that it was completely maintenance-free. To join a device to a network, a user simply plugged the adaptor into the machine, then connected a cable from it to any free port on any other adaptor. The AppleTalk network stack negotiated a network address, assigned the computer a human-readable name, and compiled a list of the names and types of other machines on the network so the user could browse the devices through the Chooser. AppleTalk was so easy to use that ad hoc networks tended to appear whenever multiple Macs were in the same room. Apple would later use this in an advertisement showing a network being created between two seats in an airplane.
PhoneNet and other adaptors
A thriving third-party market for AppleTalk devices developed over the next few years. One particularly notable example was an alternate adaptor designed by BMUG and commercialised by Farallon as PhoneNET in 1987. This was essentially a replacement for Apple's connector that had conventional phone jacks instead of Apple's round connectors. PhoneNet allowed AppleTalk networks to be connected together using normal telephone wires, and with very little extra work, could run analog phones and AppleTalk on a single four-conductor phone cable.
Other companies took advantage of the SCC's ability to read external clocks in order to support higher transmission speeds, up to 1 Mbit/s. In these systems, the external adaptor also included its own clock, and used that to signal the SCC's clock input pins. The best-known such system was Centram's FlashTalk, which ran at 768 kbit/s, and was intended to be used with their TOPS networking system. A similar solution was the 850 kbit/s DaynaTalk, which used a separate box that plugged in between the computer and a normal LocalTalk/PhoneNet box. Dayna also offered a PC expansion card that ran up to 1.7 Mbit/s when talking to other Dayna PC cards. Several other systems also existed with even higher performance, but these often required special cabling that was incompatible with LocalTalk/PhoneNet, and also required patches to the networking stack that often caused problems.
AppleTalk over Ethernet
As Apple expanded into more commercial and education markets, they needed to integrate AppleTalk into existing network installations. Many of these organisations had already invested in a very expensive Ethernet infrastructure and there was no direct way to connect a Macintosh to Ethernet. AppleTalk included a protocol structure for interconnecting AppleTalk subnets and so as a solution, EtherTalk was initially created to use the Ethernet as a backbone between LocalTalk subnets. To accomplish this, organizations would need to purchase a LocalTalk-to-Ethernet bridge and Apple left it to third parties to produce these products. A number of companies responded, including Hayes and a few newly formed companies like Kinetics.
LocalTalk, EtherTalk, TokenTalk, and AppleShare
By 1987, Ethernet was clearly winning the standards battle over Token Ring, and in the middle of that year, Apple introduced EtherTalk 1.0, an implementation of the AppleTalk protocol over the Ethernet physical layer. Introduced for the newly released Macintosh II computer, one of Apple's first two Macintoshes with expansion slots (the Macintosh SE had one slot of a different type), the operating system included a new Network control panel that allowed the user to select which physical connection to use for networking (from "Built-in" or "EtherTalk"). At introduction, Ethernet interface cards were available from 3Com and Kinetics that plugged into a Nubus slot in the machine. The new networking stack also expanded the system to allow a full 255 nodes per LAN. With EtherTalk's release, AppleTalk Personal Network was renamed LocalTalk, the name it would be known under for the bulk of its life. Token Ring would later be supported with a similar TokenTalk product, which used the same Network control panel and underlying software. Over time, many third-party companies would introduce compatible Ethernet and Token Ring cards that used these same drivers.
The appearance of a Macintosh with a direct Ethernet connection also magnified the Ethernet and LocalTalk compatibility problem: networks with new and old Macs needed some way to communicate with each other. This could be as simple as a network of Ethernet Mac IIs trying to talk to a LaserWriter that only connected to LocalTalk. Apple initially relied on the aforementioned LocalTalk-to-Ethernet bridge products, but contrary to Apple's belief that these would be low-volume products, by the end of 1987, 130,000 such networks were in use. AppleTalk was at that time the most used networking system in the world, with over three times the installations of any other vendor.
1987 also marked the introduction of the AppleShare product, a dedicated file server that ran on any Mac with 512 kB of RAM or more. A common AppleShare machine was the Mac Plus with an external SCSI hard drive. AppleShare was the #3 network operating system in the late 1980s, behind Novell NetWare and Microsoft's MS-Net. AppleShare was effectively the replacement for the failed Macintosh Office efforts, which had been based on a dedicated file server device.
AppleTalk Phase II and other developments
A significant re-design was released in 1989 as AppleTalk Phase II. In many ways, Phase II can be considered an effort to make the earlier version (never called Phase I) more generic. LANs could now support more than 255 nodes, and zones were no longer associated with physical networks but were entirely virtual constructs used simply to organize nodes. For instance, one could now make a "Printers" zone that would list all the printers in an organization, or one might want to place that same device in the "2nd Floor" zone to indicate its physical location. Phase II also included changes to the underlying inter-networking protocols to make them less "chatty", which had previously been a serious problem on networks that bridged over wide-area networks.
By this point, Apple had a wide variety of communications products under development, and many of these were announced along with AppleTalk Phase II. These included updates to EtherTalk and TokenTalk, AppleTalk software and LocalTalk hardware for the IBM PC, EtherTalk for Apple's A/UX operating system allowing it to use LaserWriters and other network resources, and the Mac X.25 and MacX products.
Ethernet had become almost universal by 1990, and it was time to build Ethernet into Macs direct from the factory. However, the physical wiring used by these networks was not yet completely standardized. Apple solved this problem using a single port on the back of the computer into which the user could plug an adaptor for any given cabling system. This FriendlyNet system was based on the industry-standard Attachment Unit Interface or AUI, but deliberately chose a non-standard connector that was smaller and easier to use, which they called "Apple AUI", or AAUI. FriendlyNet was first introduced on the Quadra 700 and Quadra 900 computers, and used across much of the Mac line for some time. As with LocalTalk, a number of third-party FriendlyNet adaptors quickly appeared.
As 10BASE-T became the de facto cabling system for Ethernet, second-generation Power Macintosh machines added a 10BASE-T port in addition to AAUI. The PowerBook 3400c and lower-end Power Macs also added 10BASE-T. The Power Macintosh 7300/8600/9600 were the final Macs to include AAUI, and 10BASE-T became universal starting with the Power Macintosh G3 and PowerBook G3.
The capital-I Internet
From the beginning of AppleTalk, users wanted to connect the Macintosh to TCP/IP network environments. In 1984, Bill Croft at Stanford University pioneered the development of IP packets encapsulated in DDP as part of the SEAGATE (Stanford Ethernet–AppleTalk Gateway) project. SEAGATE was commercialized by Kinetics in their LocalTalk-to-Ethernet bridge as an additional routing option. A few years later, MacIP was separated from the SEAGATE code and became the de facto method for IP packets to be routed over LocalTalk networks. By 1986, Columbia University released the first version of the Columbia AppleTalk Package (CAP) that allowed higher integration of Unix, TCP/IP, and AppleTalk environments. In 1988, Apple released MacTCP, a system that allowed the Mac to support TCP/IP on machines with suitable Ethernet hardware. However, this left many universities with the problem of supporting IP on their many LocalTalk-equipped Macs. It was soon common to include MacIP support in LocalTalk-to-Ethernet bridges. MacTCP would not become a standard part of the Classic Mac OS until 1994, by which time it also supported SNMP and PPP.
For some time in the early 1990s, the Mac was a primary client on the rapidly expanding Internet. Among the better-known programs in wide use were Fetch, Eudora, eXodus, NewsWatcher, and the NCSA packages, especially NCSA Mosaic and its offspring, Netscape Navigator. Additionally, a number of server products appeared that allowed the Mac to host Internet content. Through this period, Macs had about 2 to 3 times as many clients connected to the Internet as any other platform, despite the relatively small overall microcomputer market share.
As the world quickly moved to IP for both LAN and WAN uses, Apple was faced with maintaining two increasingly outdated code bases on an ever-wider group of machines as well as the introduction of the PowerPC-based machines. This led to the Open Transport efforts, which re-implemented both MacTCP and AppleTalk on an entirely new code base adapted from the Unix standard STREAMS. Early versions had problems and did not become stable for some time. By that point, Apple was deep in their ultimately doomed Copland efforts.
Legacy and abandonment
With the purchase of NeXT and subsequent development of Mac OS X, AppleTalk was strictly a legacy system. Support was added to Mac OS X in order to provide support for a large number of existing AppleTalk devices, notably laser printers and file shares, but alternate connection solutions common in this era, notably USB for printers, limited their demand. As Apple abandoned many of these product categories, and all new systems were based on IP, AppleTalk became less and less common. AppleTalk support was finally removed from the macOS line in Mac OS X v10.6 in 2009.
However, the loss of AppleTalk did not reduce the desire for networking solutions that combined its ease of use with IP routing. Apple has led the development of many such efforts, from the introduction of the AirPort router to the development of the zero-configuration networking system and their implementation of it, Rendezvous, later renamed Bonjour.
As of 2020, with the release of macOS 11 Big Sur, AppleTalk support has been removed entirely.
Design
The AppleTalk design rigorously followed the OSI model of protocol layering. Unlike most of the early LAN systems, AppleTalk was not built using the archetypal Xerox XNS system. The intended target was not Ethernet, and it did not have 48-bit addresses to route. Nevertheless, many portions of the AppleTalk system have direct analogs in XNS.
One key differentiation for AppleTalk was that it contained two protocols aimed at making the system completely self-configuring. The AppleTalk address resolution protocol (AARP) allowed AppleTalk hosts to automatically generate their own network addresses, and the Name Binding Protocol (NBP) was a dynamic system for mapping network addresses to user-readable names; systems similar to AARP existed elsewhere, in Banyan VINES for instance. Beginning about 2002, Rendezvous (the combination of DNS-based service discovery, Multicast DNS, and link-local addressing) provided capabilities and usability using IP that were similar to those of AppleTalk.
Both AARP and NBP had defined ways to allow "controller" devices to override the default mechanisms. The concept was to allow routers to provide the information or "hardwire" the system to known addresses and names. On larger networks where AARP could cause problems as new nodes searched for free addresses, the addition of a router could reduce "chattiness." Together AARP and NBP made AppleTalk an easy-to-use networking system. New machines were added to the network by plugging them in and optionally giving them a name. The NBP lists were examined and displayed by a program known as the Chooser which would display a list of machines on the local network, divided into classes such as file-servers and printers.
Addressing
An AppleTalk address was a four-byte quantity. This consisted of a two-byte network number, a one-byte node number, and a one-byte socket number. Of these, only the network number required any configuration, being obtained from a router. Each node dynamically chose its own node number, according to a protocol (originally the LocalTalk Link Access Protocol LLAP and later, for Ethernet/EtherTalk, the AppleTalk Address Resolution Protocol, AARP) which handled contention between different nodes accidentally choosing the same number. For socket numbers, a few well-known numbers were reserved for special purposes specific to the AppleTalk protocol itself. Apart from these, all application-level protocols were expected to use dynamically assigned socket numbers at both the client and server end.
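As an illustration of that layout, the following minimal Python sketch packs and unpacks such a four-byte address. The field order follows the description above; big-endian byte order is assumed (as was conventional for network protocols), and the function names are invented for the example:

```python
import struct

def pack_address(network: int, node: int, socket: int) -> bytes:
    """Pack a 2-byte network number, 1-byte node number, and
    1-byte socket number into a 4-byte address (big-endian)."""
    return struct.pack(">HBB", network, node, socket)

def unpack_address(raw: bytes) -> tuple:
    """Recover (network, node, socket) from the 4-byte form."""
    return struct.unpack(">HBB", raw)

addr = pack_address(network=1027, node=64, socket=253)
assert unpack_address(addr) == (1027, 64, 253)
assert len(addr) == 4
```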
Because of this dynamism, users could not be expected to access services by specifying their address. Instead, all services had names which, being chosen by humans, could be expected to be meaningful to users, and also could be sufficiently long to minimize the chance of conflicts.
As NBP names translated to an address, which included a socket number as well as a node number, a name in AppleTalk mapped directly to a service being provided by a machine, which was entirely separate from the name of the machine itself. Thus, services could be moved to a different machine and, so long as they kept the same service name, there was no need for users to do anything different in order to continue accessing the service. And the same machine could host any number of instances of services of the same type, without any network connection conflicts.
Contrast this with A records in the DNS, in which a name translates to a machine's address, not including the port number that might be providing a service. Thus, if people are accustomed to using a particular machine name to access a particular service, their access will break when the service is moved to a different machine. This can be mitigated somewhat by insistence on using CNAME records indicating service rather than actual machine names to refer to the service, but there is no way of guaranteeing that users will follow such a convention. Some newer protocols, such as Kerberos and Active Directory use DNS SRV records to identify services by name, which is much closer to the AppleTalk model.
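For comparison, a DNS SRV lookup resolves a service name to a host and port, much as an NBP name resolved to a complete AppleTalk address. Here is a sketch using the third-party dnspython package; the domain and service are placeholders, not a real deployment:

```python
import dns.resolver  # third-party package: dnspython (2.x)

# "Which hosts provide LDAP over TCP in example.com?"  Each answer
# carries a target host *and* a port, so the service can move between
# machines without clients having to change how they name it.
for record in dns.resolver.resolve("_ldap._tcp.example.com", "SRV"):
    print(record.priority, record.weight, record.target, record.port)
```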
Protocols
AppleTalk Address Resolution Protocol
The AppleTalk Address Resolution Protocol (AARP) resolves AppleTalk addresses to link layer addresses. It is functionally equivalent to ARP and obtains address resolution by a method very similar to ARP.
AARP is a fairly simple system. When powered on, an AppleTalk machine broadcasts an AARP probe packet asking for a network address, intending to hear back from controllers such as routers. If no address is provided, one is picked at random from the "base subnet", 0. It then broadcasts another packet saying "I am selecting this address", and then waits to see if anyone else on the network complains. If another machine has that address, the newly connecting machine will pick another address, and keep trying until it finds a free one. On a network with many machines it may take several tries before a free address is found, so for performance purposes the successful address is recorded in NVRAM and used as the default address in the future. This means that in most real-world setups where machines are added a few at a time, only one or two tries are needed before the address effectively becomes constant.
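The procedure just described is a probe-and-defend loop. A simplified sketch follows, with the broadcast-and-listen step stubbed out behind a probe callback and the timed retries omitted; all names here are invented for the example:

```python
import random

def acquire_address(probe, preferred=None):
    """Pick a free node address, trying the NVRAM-cached one first.

    `probe(addr)` stands in for broadcasting a tentative-address packet
    and returning True if some other node defends that address.
    """
    candidates = [preferred] if preferred is not None else []
    while True:
        addr = candidates.pop() if candidates else random.randint(1, 254)
        if not probe(addr):      # nobody complained: the address is ours
            return addr          # a real node would now cache it in NVRAM

# Example: a network where node numbers 17 and 42 are already taken.
taken = {17, 42}
addr = acquire_address(probe=lambda a: a in taken, preferred=42)
assert addr not in taken
```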
AppleTalk Data Stream Protocol
The AppleTalk Data Stream Protocol (ADSP) was a comparatively late addition to the AppleTalk protocol suite, done when it became clear that a TCP-style reliable connection-oriented transport was needed. Significant differences from TCP were that:
A connection attempt could be rejected.
There were no "half-open" connections; once one end initiated a tear-down of the connection, the whole connection would be closed (i.e., ADSP is full-duplex, not dual simplex).
ADSP included an attention message system, which allowed short messages to be sent outside the normal stream data flow. These were delivered reliably, but out of order with respect to the stream: any attention message would be delivered as soon as possible rather than waiting for the current stream byte sequence point.
Apple Filing Protocol
The Apple Filing Protocol (AFP), formerly AppleTalk Filing Protocol, is the protocol for communicating with AppleShare file servers. Built on top of AppleTalk Session Protocol (for legacy AFP over DDP) or the Data Stream Interface (for AFP over TCP), it provides services for authenticating users (extensible to different authentication methods including two-way random-number exchange) and for performing operations specific to the Macintosh HFS filesystem. AFP is still in use in macOS, even though most other AppleTalk protocols have been deprecated.
AppleTalk Session Protocol
The AppleTalk Session Protocol (ASP) was an intermediate protocol, built on top of AppleTalk Transaction Protocol (ATP), which in turn was the foundation of AFP. It provided basic services for requesting responses to arbitrary commands and performing out-of-band status queries. It also allowed the server to send asynchronous attention messages to the client.
AppleTalk Transaction Protocol
The AppleTalk Transaction Protocol (ATP) was the original reliable transport-level protocol for AppleTalk, built on top of DDP. At the time it was being developed, a full, reliable connection-oriented protocol like TCP was considered to be too expensive to implement for most of the intended uses of AppleTalk. Thus, ATP was a simple request/response exchange, with no need to set up or tear down connections.
An ATP request packet could be answered by up to eight response packets. The requestor then sent an acknowledgement packet containing a bit mask indicating which of the response packets it received, so the responder could retransmit the remainder.
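That acknowledgement mechanism is essentially a selective-repeat scheme over an eight-bit map, one bit per response packet. A conceptual sketch, not Apple's wire format:

```python
def ack_bitmap(received):
    """Build an 8-bit mask with bit i set if response packet i arrived."""
    mask = 0
    for seq in received:
        mask |= 1 << seq
    return mask

def to_retransmit(mask, total):
    """Which of the `total` response packets does the requestor lack?"""
    return [i for i in range(total) if not mask & (1 << i)]

# The requestor received packets 0, 1, and 3 of a 4-packet response:
mask = ack_bitmap([0, 1, 3])
assert to_retransmit(mask, total=4) == [2]   # only packet 2 is resent
```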
ATP could operate in either "at-least-once" mode or "exactly-once" mode. Exactly-once mode was essential for operations which were not idempotent; in this mode, the responder kept a copy of the response buffers in memory until successful receipt of a release packet from the requestor, or until a timeout elapsed. This way, it could respond to duplicate requests with the same transaction ID by resending the same response data, without performing the actual operation again.
Datagram Delivery Protocol
The Datagram Delivery Protocol (DDP) was the lowest-level data-link-independent transport protocol. It provided a datagram service with no guarantees of delivery. All application-level protocols, including the infrastructure protocols NBP, RTMP and ZIP, were built on top of DDP. AppleTalk's DDP corresponds closely to the Network layer of the Open Systems Interconnection (OSI) communication model.
Name Binding Protocol
The Name Binding Protocol (NBP) was a dynamic, distributed system for managing AppleTalk names. When a service started up on a machine, it registered a name for itself as chosen by a human administrator. At this point, NBP provided a system for checking that no other machine had already registered the same name. Later, when a client wanted to access that service, it used NBP to query machines to find that service. NBP provided browsability ("what are the names of all the services available?") as well as the ability to find a service with a particular name. Names were human-readable, containing spaces and upper- and lower-case letters, and including support for searching.
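The register-then-browse behaviour can be modelled with a small name table. In the sketch below the distributed lookup and zones are omitted (NBP names actually took the form object:type@zone), and the class and method names are invented:

```python
class NameRegistry:
    """Toy stand-in for a distributed NBP name table."""

    def __init__(self):
        # (name, type), lowercased -> (registered name, (net, node, socket))
        self._table = {}

    def register(self, name, service_type, address):
        key = (name.lower(), service_type.lower())  # matching is case-insensitive
        if key in self._table:
            raise ValueError(f"{name!r} already registered as {service_type!r}")
        self._table[key] = (name, address)

    def browse(self, service_type):
        """List every service of a given type, as the Chooser did."""
        return [entry for (_, t), entry in self._table.items()
                if t == service_type.lower()]

registry = NameRegistry()
registry.register("Marketing Printer", "LaserWriter", (1027, 64, 253))
registry.register("2nd Floor Printer", "LaserWriter", (1027, 65, 253))
print(registry.browse("LaserWriter"))  # both printers, each with a full address
```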
AppleTalk Echo Protocol
The AppleTalk Echo Protocol (AEP) was a transport layer protocol designed to test the reachability of network nodes. An AEP packet is identified as such by the Type field of its DDP header. When the packet arrives at the destination node, the DDP implementation there recognizes it as an AEP packet, copies it, alters a field to turn the copy into an AEP reply packet, and returns it to the source node.
Printer Access Protocol
The Printer Access Protocol (PAP) was the standard way of communicating with PostScript printers. It was built on top of ATP. When a PAP connection was opened, each end sent the other an ATP request which basically meant "send me more data". The client's response to the server was to send a block of PostScript code, while the server could respond with any diagnostic messages that might be generated as a result, after which another "send-more-data" request was sent. This use of ATP provided automatic flow control; each end could only send data to the other end if there was an outstanding ATP request to respond to.
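In modern terms this is credit-based flow control: each outstanding "send me more data" request is one credit, and the sender may transmit exactly one block per credit. A toy model, with all ATP details abstracted away and names invented:

```python
class PapLikeChannel:
    """Toy model of PAP's request-driven flow control."""

    def __init__(self):
        self.credits = 0
        self.delivered = []

    def request_more(self):
        """Receiver side: post a 'send me more data' request."""
        self.credits += 1

    def send(self, block):
        """Sender side: answer one outstanding request with one block."""
        if self.credits == 0:
            raise RuntimeError("no outstanding request; must wait")
        self.credits -= 1
        self.delivered.append(block)

channel = PapLikeChannel()
channel.request_more()
channel.send(b"%!PS-Adobe-3.0 ...")   # fine: one credit was available
try:
    channel.send(b"more PostScript")  # blocked until the next request
except RuntimeError as err:
    print(err)
```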
PAP also provided for out-of-band status queries, handled by separate ATP transactions. Even while it was busy servicing a print job from one client, a PAP server could continue to respond to status requests from any number of other clients. This allowed other Macintoshes on the LAN that were waiting to print to display status messages indicating that the printer was busy, and what the job was that it was busy with.
Routing Table Maintenance Protocol
The Routing Table Maintenance Protocol (RTMP) was the protocol by which routers kept each other informed about the topology of the network. This was the only part of AppleTalk that required periodic unsolicited broadcasts: every 10 seconds, each router had to send out a list of all the network numbers it knew about and how far away it thought they were.
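In modern terminology RTMP was a distance-vector routing protocol. The sketch below shows the periodic announcement it describes, with the transport stubbed out and the loop bounded so the example terminates; a real router would broadcast indefinitely:

```python
import time

def rtmp_announce_loop(routing_table, broadcast, interval=10.0, rounds=3):
    """Every `interval` seconds, broadcast (network number, hop count)
    pairs for every network this router knows about."""
    for _ in range(rounds):
        broadcast(sorted(routing_table.items()))
        time.sleep(interval)

routing_table = {1027: 0, 2048: 1, 3071: 2}   # network number -> hops away
rtmp_announce_loop(routing_table, broadcast=print, interval=0.01)
```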
Zone Information Protocol
The Zone Information Protocol (ZIP) was the protocol by which AppleTalk network numbers were associated with zone names. A zone was a subdivision of the network that made sense to humans (for example, "Accounting Department"); but while a network number had to be assigned to a topologically contiguous section of the network, a zone could include several different discontiguous portions of the network.
Physical implementation
The initial default hardware implementation for AppleTalk was a high-speed serial protocol known as LocalTalk that used the Macintosh's built-in RS-422 ports at 230.4 kbit/s. LocalTalk used a splitter box in the RS-422 port to provide an upstream and downstream cable from a single port. The topology was a bus: cables were daisy-chained from each connected machine to the next, up to the maximum of 32 permitted on any LocalTalk segment. The system was slow by today's standards, but at the time the additional cost and complexity of networking on PC machines was such that it was common that Macs were the only networked personal computers in an office. Other larger computers, such as UNIX or VAX workstations, would commonly be networked via Ethernet.
Other physical implementations were also available. A very popular replacement for LocalTalk was PhoneNET, a third-party solution from Farallon Computing, Inc. (renamed Netopia, acquired by Motorola in 2007) that also used the RS-422 port and was indistinguishable from LocalTalk as far as Apple's LocalTalk port drivers were concerned, but ran over very inexpensive standard phone cabling with four-wire, six-position modular connectors, the same cables used to connect landline telephones. Since it used the second pair of wires, network devices could even be connected through existing telephone jacks if a second line was not present. Foreshadowing today's network hubs and switches, Farallon provided solutions for PhoneNet to be used in star as well as bus configurations, with both passive star connections (with the phone wires simply bridged to each other at a central point), and active star with "PhoneNet Star Controller" hub hardware. In a star configuration, any wiring issue only affected one device, and problems were easy to pinpoint. PhoneNet's low cost, flexibility, and easy troubleshooting resulted in it being the dominant choice for Mac networks into the early 1990s.
AppleTalk protocols also came to run over Ethernet (first coaxial and then twisted pair) and Token Ring physical layers, labeled by Apple as EtherTalk and TokenTalk, respectively. EtherTalk gradually became the dominant implementation method for AppleTalk as Ethernet became generally popular in the PC industry throughout the 1990s. Besides AppleTalk and TCP/IP, any Ethernet network could also simultaneously carry other protocols such as DECnet and IPX.
Networking model
Versions
Cross-platform solutions
When AppleTalk was first introduced, the dominant office computing platform was the PC compatible running MS-DOS. Apple introduced the AppleTalk PC Card in early 1987, allowing PCs to join AppleTalk networks and print to LaserWriter printers. A year later AppleShare PC was released, allowing PCs to access AppleShare file servers.
The "TOPS Teleconnector" MS-DOS networking system over AppleTalk system enabled MS-DOS PCs to communicate over AppleTalk network hardware; it comprised an AppleTalk interface card for the PC and a suite of networking software allowing such functions as file, drive and printer sharing. As well as allowing the construction of a PC-only AppleTalk network, it allowed communication between PCs and Macs with TOPS software installed. (Macs without TOPS installed could use the same network but only to communicate with other Apple machines.) The Mac TOPS software did not match the quality of Apple's own either in ease of use or in robustness and freedom from crashes, but the DOS software was relatively simple to use in DOS terms, and was robust.
The BSD and Linux operating systems support AppleTalk through an open source project called Netatalk, which implements the complete protocol suite and allows them to both act as native file or print servers for Macintosh computers, and print to LocalTalk printers over the network.
The Windows Server operating systems supported AppleTalk starting with Windows NT and ending after Windows Server 2003. Miramar included AppleTalk in its PC MacLAN product, which was discontinued by CA in 2007. GroupLogic continues to bundle its AppleTalk protocol with its ExtremeZ-IP server software for Macintosh-Windows integration, which supports Windows Server 2008 and Windows Vista as well as prior versions. HELIOS Software GmbH offers a proprietary implementation of the AppleTalk protocol stack as part of their HELIOS UB2 server, which is essentially a file and print server suite that runs on a whole range of different platforms.
In addition, Columbia University released the Columbia AppleTalk Package (CAP) which implemented the protocol suite for various Unix flavours including Ultrix, SunOS, BSD and IRIX. This package is no longer actively maintained.
See also
Netatalk is a free, open-source implementation of the AppleTalk suite of protocols.
Network File System
Remote File Sharing
Samba
Server Message Block
Notes
References
Citations
Bibliography
External links
Pushing AppleTalk Across the Internet
|
Apple Inc. software;Network operating systems;Network protocols
|
https://en.wikipedia.org/wiki/Apple%20III
|
The Apple III (styled as apple ///) is a business-oriented personal computer produced by Apple Computer and released in 1980. Running the Apple SOS operating system, it was intended as the successor to the Apple II; however, it was largely considered a failure in the market. It was designed to provide features business users wanted: a true typewriter-style keyboard with upper and lowercase letters (the Apple II only supported uppercase at the time) and an 80-column display.
It had the internal code name of "Sara", named after Wendell Sander's daughter. The system was announced on May 19, 1980, and released in late November that year. Serious stability issues required a design overhaul and a recall of the first 14,000 machines produced. The Apple III was formally reintroduced on November 9, 1981.
Damage to the computer's reputation had already been done, however, and it failed to do well commercially. Development stopped, and the Apple III was discontinued on April 24, 1984. Its last successor, the III Plus, was dropped from the Apple product line in September 1985.
An estimated 65,000 to 75,000 Apple III computers were sold. The Apple III Plus brought this up to approximately 120,000. Apple co-founder Steve Wozniak stated that the primary reason for the Apple III's failure was that the system was designed by Apple's marketing department, unlike Apple's previous engineering-driven projects. The Apple III's failure led Apple to reevaluate its plan to phase out the Apple II, prompting the eventual continuation of development of the older machine. As a result, later Apple II models incorporated some hardware and software technologies of the Apple III.
Overview
Design
Steve Wozniak and Steve Jobs expected hobbyists to purchase the Apple II; however, because of VisiCalc and Disk II, small businesses purchased 90% of the computers. The Apple III was designed to be a business computer and successor. Though the Apple II helped inspire several important business products, such as VisiCalc, Multiplan, and Apple Writer, its hardware architecture, operating system, and developer environment were limited. Apple management intended to clearly establish market segmentation by designing the Apple III to appeal to the 90% business market, leaving the Apple II to home and education users. Management believed that "once the Apple III was out, the Apple II would stop selling in six months", Wozniak said.
The Apple III is powered by a 2 megahertz Synertek 6502A or 6502B 8-bit CPU (operating effectively between 1.4 and 1.8 MHz due to video or memory refresh cycles) and, like some of the later machines in the Apple II family, uses bank switching techniques to address memory beyond the 6502's traditional 64 KB limit, up to 256 KB in the III's case. Third-party vendors produced memory upgrade kits that allow the Apple III to reach up to 512 KB of random-access memory (RAM). Other Apple III built-in features include an 80-column, 24-line display with upper and lowercase characters, a numeric keypad, dual-speed (pressure-sensitive) cursor control keys, 6-bit (DAC) audio, and a built-in 140-kilobyte 5.25-inch floppy disk drive. Graphics modes include 560x192 in black and white, and 280x192 with 16 colors or shades of gray. Unlike the Apple II, the Disk III controller is part of the logic board.
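Bank switching of this general kind maps the CPU's fixed 16-bit address space onto a larger physical memory by routing one window of addresses through a selectable bank register. The following sketch is schematic only; the window location and sizes are illustrative and do not reflect the Apple III's actual memory map:

```python
WINDOW_START = 0x2000   # illustrative: a 32 KB switchable window at $2000
WINDOW_SIZE = 0x8000
BANKED_BASE = 0x10000   # banked storage sits above the fixed 64 KB image

def physical_address(cpu_addr: int, bank: int) -> int:
    """Map a 16-bit CPU address to physical memory: addresses inside
    the window go through the selected bank; all others are fixed."""
    if WINDOW_START <= cpu_addr < WINDOW_START + WINDOW_SIZE:
        return BANKED_BASE + bank * WINDOW_SIZE + (cpu_addr - WINDOW_START)
    return cpu_addr

# The same CPU address reaches a different physical byte in each bank:
assert physical_address(0x3000, bank=0) != physical_address(0x3000, bank=1)
```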
The Apple III is the first Apple product to allow the user to choose both a screen font and a keyboard layout: either QWERTY or Dvorak. These choices cannot be changed while a program is running, unlike on the Apple IIc, which has a keyboard switch directly above the keyboard, allowing the user to switch on the fly.
Software
The Apple III introduced an advanced operating system called Apple SOS, pronounced "apple sauce". Its ability to address resources by name allows the Apple III to be more scalable than the Apple II's addressing by physical location such as PR#6 and CATALOG, D1. Apple SOS allows the full capacity of a storage device to be used as a single volume, such as the Apple ProFile hard disk drive, and it supports a hierarchical file system. Some of the features and code base of Apple SOS were later adopted into the Apple II's ProDOS and GS/OS operating systems, as well as Lisa 7/7 and Mac OS.
With a starting price of $4,340 (equivalent to $17,356 as of 2024) and a maximum price of $7,800 (equivalent to $31,194 as of 2024), the Apple III was more expensive than many of the CP/M-based business computers that were available at the time. Few software applications other than VisiCalc were initially available for the computer; according to a presentation at KansasFest 2012, fewer than 50 Apple III-specific software packages were ever published, most shipping when the III Plus was released. This figure, however, appears to be a substantial undercount: the manual 'RESOURCE GUIDE: Of Apple /// and Apple /// Plus Software and Hardware', published by Apple Computer, Inc. in May 1984, lists more than 500 software packages from a wide variety of publishers, and software publishers and specialised hardware manufacturers such as On-Three, Inc. continued producing products for the Apple III well into the late 1990s.
Because Apple did not view the Apple III as suitable for hobbyists, it did not provide much of the technical software information that accompanied the Apple II. Originally intended as a direct replacement for the Apple II, the III was designed to be backward compatible with Apple II software. However, since Apple did not want to encourage continued development of the II platform, Apple II compatibility exists only in a special Apple II Mode, which is limited to emulating a basic Apple II Plus configuration with 48 KB of RAM.
Special chips were intentionally added to prevent access from Apple II Mode to the III's advanced features such as its larger amount of memory.
Peripherals
The Apple III has four expansion slots, a number that inCider in 1986 called "miserly". The magazine also said that Apple II cards were compatible but risked violating government RFI regulations and required Apple III-specific device drivers; BYTE stated that "Apple provides virtually no information on how to write them". As with software, Apple provided little hardware technical information with the computer, but Apple III-specific products became available, such as one that made the computer compatible with the Apple IIe. Several new Apple-produced peripherals were developed for the Apple III. The original Apple III has a built-in real-time clock, which is recognized by Apple SOS. The clock was later removed from the "revised" model, and was instead made available as an add-on.
Along with the built-in floppy drive, the Apple III can also handle up to three additional external Disk III floppy disk drives. The Disk III is only officially compatible with the Apple III. The Apple III Plus requires an adaptor from Apple to use the Disk III with its DB-25 disk port.
With the introduction of the revised Apple III a year after launch, Apple began offering the ProFile external hard disk system. Priced at $3,499 for 5 MB of storage, it also required a peripheral slot for its controller card.
Backward compatibility
The Apple III has the built-in hardware capability to run Apple II software. In order to do so, an emulation boot disk is required that functionally turns the machine into a standard 48-kilobyte Apple II Plus, until it is powered off. The keyboard, internal floppy drive (and one external Disk III), display (color is provided through the 'B/W video' port) and speaker all act as Apple II peripherals. The paddle and serial ports can also function in Apple II mode, however with some limitations and compatibility issues.
Apple engineers added specialized circuitry with the sole purpose of blocking access to the III's advanced features when running in Apple II emulation mode. This was done primarily to discourage further development of and interest in the Apple II line, and to push the Apple III as its successor. For example, no more than 48 KB of RAM can be accessed in this mode, even on machines with 128 KB or 256 KB installed. Many Apple II programs require a minimum of 64 KB of RAM, making them impossible to run on the Apple III. Similarly, access to lowercase support, 80-column text, and the machine's more advanced graphics and sound is blocked by this hardware circuitry, making it impossible for even skilled software programmers to bypass Apple's lockout. A third-party company, Titan Technologies, sold an expansion board called the III Plus II that allowed Apple II mode to access more memory and a standard game port, and, with a later-released companion card, even to emulate the Apple IIe.
Certain Apple II slot cards can be installed in the Apple III and used in native III-mode with custom written SOS device drivers, including Grappler Plus and Liron 3.5 Controller.
Revisions
After overheating issues were attributed to serious design flaws, a redesigned logic board was introduced in mid-December 1981, with a lower power supply requirement, wider circuit traces, and better-designed chip sockets. The $3,495 revised model also includes 256 KB of RAM as the standard configuration. The 14,000 units of the original Apple III already sold were returned and replaced with the entirely new revised model.
Apple III Plus
Apple discontinued the III in October 1983 because it violated FCC regulations, and the FCC required the company to change the redesigned computer's name. It introduced the Apple III Plus in December 1983 at a price of US$2,995. This newer version includes a built-in clock, video interlacing, standardized rear port connectors, 55-watt power supply, 256 KB of RAM as standard, and a redesigned, Apple IIe-like keyboard.
Owners of the Apple III could purchase individual III Plus upgrades, like the clock and interlacing feature, and obtain the newer logic board as a service replacement. A keyboard upgrade kit, dubbed "Apple III Plus upgrade kit" was also made available – which included the keyboard, cover, keyboard encoder ROM, and logo replacements. This upgrade had to be installed by an authorized service technician.
Design flaws
According to Wozniak, the Apple III "had 100 percent hardware failures". Former Apple executive Taylor Pohlman offered a similar assessment.
Jobs insisted on the idea of having no fan or air vents, in order to make the computer run quietly. He would later push this same ideology onto almost all Apple models he had control of, from the Apple Lisa and Macintosh 128K to the iMac. To allow the computer to dissipate heat, the base of the Apple III was made of heavy cast aluminum, which supposedly acts as a heat sink. One advantage to the aluminum case was a reduction in RFI (Radio Frequency Interference), a problem which had plagued the Apple II series throughout its history. Unlike the Apple II, the power supply was mounted – without its own shell – in a compartment separate from the logic board. The decision to use an aluminum shell ultimately led to engineering issues which resulted in the Apple III's reliability problems. The lead time for manufacturing the shells was high, and this had to be done before the motherboard was finalized. Later, it was realized that there was not enough room on the motherboard for all of the components unless narrow traces were used.
Many Apple IIIs were thought to have failed due to their inability to properly dissipate heat. inCider stated in 1986 that "Heat has always been a formidable enemy of the Apple ///", and some users reported that their Apple IIIs became so hot that the chips started dislodging from the board, causing the screen to display garbled data or their disk to come out of the slot "melted". BYTE wrote, "the integrated circuits tended to wander out of their sockets". It has been rumored that Apple advised customers to lift the front of the Apple III six inches above the desk and then drop it, to reseat the chips as a temporary solution. Other analyses blame a faulty automatic chip insertion process, not heat.
Case designer Jerry Manock denied the design flaw charges, insisting that tests proved that the unit adequately dissipated the internal heat. The primary cause, he claimed, was a major logic board design problem. The logic board used "fineline" technology that was not fully mature at the time, with narrow, closely spaced traces. When chips were "stuffed" into the board and wave-soldered, solder bridges would form between traces that were not supposed to be connected. This caused numerous short circuits, which required hours of costly diagnosis and hand rework to fix. Apple designed a new circuit board with more layers and normal-width traces. The new logic board was laid out by one designer on a huge drafting board, rather than using the costly CAD-CAM system used for the previous board, and the new design worked.
Earlier Apple III units came with a built-in real time clock. The hardware, however, would fail after prolonged use. Assuming that National Semiconductor would test all parts before shipping them, Apple did not perform this level of testing. Apple was soldering chips directly to boards and could not easily replace a bad chip if one was found. Eventually, Apple solved this problem by removing the real-time clock from the Apple III's specification rather than shipping the Apple III with the clock pre-installed, and then sold the peripheral as a level 1 technician add-on.
BASIC
Microsoft and Apple each developed their own versions of BASIC for the Apple III. Apple III Microsoft BASIC was designed to run on the CP/M platform available for the Apple III. Apple Business BASIC shipped with the Apple III. Donn Denman ported Applesoft BASIC to SOS and reworked it to take advantage of the extended memory of the Apple III.
Both languages introduced a number of new or improved features over Applesoft BASIC. Both replaced Applesoft's single-precision floating-point variables, which used 5-byte storage, with somewhat-reduced-precision 4-byte variables, while also adding a larger numerical format. Apple III Microsoft BASIC provides double-precision floating-point variables, taking 8 bytes of storage, while Apple Business BASIC offers an extra-long integer type, also taking 8 bytes for storage. Both languages also retain 2-byte integers, and maximum 255-character strings.
Other new features common to both languages include:
Incorporation of disk-file commands within the language.
Operators for MOD and for integer-division.
An optional ELSE clause in IF...THEN statements.
HEX$() function for hexadecimal-format output.
INSTR function for finding a substring within a string.
PRINT USING statement to control format of output. Apple Business BASIC had an option, in addition to directly specifying the format with a string expression, of giving the line number where an IMAGE statement gave the formatting expression, similar to a FORMAT statement in FORTRAN.
Some features work differently in each language:
Microsoft BASIC additional features
INPUT$() function to replace Applesoft's GET command.
LINE INPUT statement to input an entire line of text, regardless of punctuation, into a single string variable.
LPRINT and LLIST statements to automatically direct output to paper.
LSET and RSET statements to left- or right-justify a string expression within a given string variable's character length.
OCT$() function for octal-format output, and "&"- or "&O"-formatted expressions, for manipulating octal notation.
SPACE$() function for generating blank spaces outside of a PRINT statement, and STRING$() function to do likewise with any character.
WHILE ... WEND statements, for loop structures built on general Boolean conditions without an index variable.
Bitwise Boolean (16-bit) operations (AND, OR, NOT), with additional operators XOR, EQV, IMP.
Line number specification in the RESTORE command.
RESUME options of NEXT (to skip to the statement after the one that caused the error) or a specified line number (which replaces the idea of exiting error-handling by GOTO-line, thus avoiding Applesoft II's stack error problem).
Multiple parameters in user-defined (DEF FN) functions.
A return to the old Applesoft One concept of having multiple USR() functions at different addresses, by establishing ten different functions, numbered USR0 to USR9, with separate DEF USR statements to define the address of each. The argument passed to a USR function can be of any specific type, including string. The returned value can also be of any type, by default the same type as the argument passed.
There is no support for graphics provided within the language, nor for reading analog controls or buttons; nor is there a means of defining the active window of the text screen.
Business BASIC additional features
Apple Business BASIC eliminates all references to absolute memory addresses. Thus, the POKE command and PEEK() function were not included in the language, and new features replaced the CALL statement and USR() function. The functionality of certain features in Applesoft that had been achieved with various PEEK and POKE locations is now provided by:
BUTTON() function to read game-controller buttons
WINDOW statement to define the active window of the text screen by its coordinates
KBD, HPOS, and VPOS system variables
External binary subroutines and functions are loaded into memory by a single INVOKE disk-command that loads separately-assembled code modules. A PERFORM statement is then used to call an INVOKEd procedure by name, with an argument-list. INVOKEd functions would be referenced in expressions by EXFN. (floating-point) or EXFN%. (integer), with the function name appended, plus the argument-list for the function.
Graphics are supported with an INVOKEd module, with features including displaying text within graphics in various fonts, within four different graphics modes available on the Apple III.
Reception
"The Apple III is unlikely to approach the success of the Apple II", InfoWorld said in January 1981. Citing the III's high price, manufacturing delays, limited disk storage, and small software library, the magazine asked "why buy a $5000 computer with an emulator when most of the programs you need run directly on a $2500 computer".
Apple devoted the majority of its R&D to the Apple III, neglecting the II so much that for a while dealers had difficulty obtaining the latter; even so, the III's technical problems made marketing the computer difficult. Ed Smith, who after designing the APF Imagination Machine worked as a distributor's representative, described the III as "a complete disaster". He recalled that he "was responsible for going to every dealership, setting up the Apple III in their showroom, and then explaining to them the functions of the Apple III, which in many cases didn't really work".
Sales
BYTE reported in 1982 that Apple had sold only 10,000 of the original Apple III, compared to 350,000 Apple IIs sold by the end of 1981. Pohlman reported that Apple was only selling 500 units a month by late 1981, mostly as replacements. The company was eventually able to raise monthly sales to 5,000, but the IBM PC's successful launch had encouraged software companies to develop for it instead, prompting Apple to shift focus to the Lisa and Macintosh. The PC almost ended sales of the Apple III, the most closely comparable Apple computer model. By early 1984, sales were primarily to existing III owners, Apple itself (its 4,500 employees were equipped with some 3,000–4,500 units), and some small businesses. Apple finally discontinued the Apple III series on April 24, 1984, four months after introducing the III Plus, having sold no more than 75,000 units and replaced 14,000 defective units.
Jobs said the company lost "incalculable amounts" of money on the Apple III. Wozniak estimated that Apple had spent $100 million on the III instead of improving the II and better competing against IBM. Pohlman claimed that there was a "stigma" at Apple associated with having contributed to the computer. Most employees who worked on the III reportedly left Apple.
Legacy
The file system and some design ideas from Apple SOS, the Apple III's operating system, were part of Apple ProDOS and Apple GS/OS, the major operating systems for the Apple II following the demise of the Apple III, as well as the Apple Lisa, which was the de facto business-oriented successor to the Apple III. The hierarchical file system influenced the evolution of the Macintosh: while the original Macintosh File System (MFS) was a flat file system designed for a floppy disk without subdirectories, subsequent file systems were hierarchical. By comparison, the IBM PC's first file system (again designed for floppy disks) was also flat and later versions (designed for hard disks) were hierarchical.
In popular culture
At the start of the Walt Disney Pictures film Tron, lead character Kevin Flynn (played by Jeff Bridges) is seen hacking into the ENCOM mainframe using an Apple III.
References
Sources
External links
The Ill-Fated Apple III
Many manuals and diagrams
Sara – Apple /// emulator
The Ill-Fated Apple III Low End Mac
Apple III Chaos: Apple's First Failure Low End Mac
|
8-bit computers;Apple II family;Computer-related introductions in 1980;Discontinued Apple Inc. products;Products and services discontinued in 1984
|
https://en.wikipedia.org/wiki/Atari%20ST
|
Atari ST is a line of personal computers from Atari Corporation and the successor to the company's 8-bit computers. The initial model, the Atari 520ST, had a limited release in April–June 1985 and was widely available in July. It was the first personal computer with a bitmapped color graphical user interface, using a version of Digital Research's GEM environment from February 1985. The Atari 1040ST, released in 1986 with 1 MB of memory, was the first home computer with a cost of RAM under US$1 per kilobyte.
After Jack Tramiel purchased the assets of the Atari, Inc. consumer division in 1984 to create Atari Corporation, the 520ST was designed in five months by a small team led by Shiraz Shivji. Alongside the Macintosh, Amiga, Apple IIGS and Acorn Archimedes, the ST is part of a mid-1980s generation of computers with 16 or 16/32-bit processors, 256 KB or more of RAM, and mouse-controlled graphical user interfaces. "ST" officially stands for "Sixteen/Thirty-two", referring to the Motorola 68000's 16-bit external bus and 32-bit internals.
The ST was sold with either Atari's color monitor or the less expensive monochrome monitor. Color graphics modes are available only on the former, while the highest-resolution mode requires the monochrome monitor. Some models can display the color modes on a TV. In Germany and some other markets, the ST gained a foothold for CAD and desktop publishing. With built-in MIDI ports, it was popular for music sequencing and as a controller of musical instruments among amateur and professional musicians. The Atari ST's primary competitor was the Amiga from Commodore.
The 520ST and 1040ST were followed by the Mega series, the STE, and the portable STacy. In the early 1990s, Atari released three final evolutions of the ST with significant technical differences from the original models: TT030 (1990), Mega STE (1991), and Falcon (1992). Atari discontinued the entire ST computer line in 1993, shifting the company's focus to the Jaguar video game console.
Development
The Atari ST was born from the rivalry between home computer makers Atari, Inc. and Commodore International. Jay Miner, one of the designers of the custom chips in the Atari 2600 and Atari 8-bit computers, tried to convince Atari management to create a new chipset for a video game console and computer. When his idea was rejected, he left Atari to form a small think tank called Hi-Toro in 1982 and began designing the new "Lorraine" chipset.
Hi-Toro, by then renamed Amiga, ran out of capital to complete Lorraine's development, and Atari, now owned by Warner Communications, paid Amiga to continue its work. In return, Atari received exclusive use of the Lorraine design for one year as a video game console. After that time, Atari had the right to add a keyboard and market the complete computer, designated the 1850XLD.
Tramel Technology
After leaving Commodore International in January 1984, Jack Tramiel formed Tramel (without an "i") Technology, Ltd. with his sons and other ex-Commodore employees and, in April, began planning a new computer. Interested in Atari's overseas manufacturing and worldwide distribution network, Tramiel negotiated with Warner in May and June 1984. He secured funding and bought Atari's consumer division, which included the console and home computer departments, in July. The arcade video game division remained part of Warner. As executives and engineers left Commodore to join Tramel Technology, Commodore filed lawsuits against four former engineers for infringement of trade secrets. The Tramiels did not purchase the employee contracts with the assets of Atari, Inc. and re-hired approximately 100 of the 900 former employees. Tramel Technology soon changed its name to Atari Corporation.
Commodore and Amiga
Amid rumors that Tramiel was negotiating to buy Atari, Amiga Corp. entered discussions with Commodore. This led to Commodore wanting to purchase Amiga Corporation outright, which Commodore believed would cancel any outstanding contracts, including Atari's. Instead of Amiga Corp. delivering Lorraine to Atari, Commodore delivered a check of $500,000 on Amiga's behalf, in effect returning the funds Atari invested in Amiga for the chipset. Tramiel countered by suing Amiga Corp. on August 13, 1984, seeking damages and an injunction to bar Amiga (and effectively Commodore) from producing anything with its technology.
The lawsuit left the Amiga team in limbo during mid-1984. Commodore eventually moved forward, with plans to improve the chipset and develop an operating system. Commodore announced the Amiga 1000 with the Lorraine chipset in July 1985, but it was not available in quantity until 1986. The delay gave Atari time to deliver the Atari 520ST in June 1985. In March 1987, the two companies settled the dispute out of court in a closed decision.
ST hardware
The lead architect of the new computer project at Tramel Technology and Atari Corporation was ex-Commodore employee Shiraz Shivji, who previously worked on the Commodore 64's development. Different CPUs were investigated, including the 32-bit National Semiconductor NS32000, but engineers were disappointed with its performance, and they moved to the Motorola 68000. The Atari ST design was completed in five months in 1984, concluding with it being shown at the January 1985 Consumer Electronics Show.
A custom sound processor called AMY had been in development at Atari, Inc. and was considered for the new ST computer design. The chip needed more time to complete, so AMY was dropped in favor of a commodity Yamaha YM2149F variant of the General Instrument AY-3-8910.
Operating system
Soon after the Atari buyout, Microsoft suggested to Tramiel that it could port Windows to the platform, but delivery was two years away. A proposal to write a new operating system was rejected, as Atari management was unsure whether the company had the required expertise.
Digital Research was working on a new GUI-based system called Crystal, soon to become GEM, but was fully committed to the Intel platform. A team from Atari was sent to Digital Research headquarters to work on a port to the 68000. Atari's Leonard Tramiel oversaw "Project Jason" (also known as The Operating System) for the ST series, named for designer and developer Jason Loveman.
GEM is based on CP/M-68K, a direct port of CP/M to the 68000. By 1985, CP/M was becoming increasingly outdated; it did not support subdirectories, for example. Digital Research was also in the process of building GEMDOS, a disk operating system for GEM, and debated whether a port of it could be completed in time for product delivery in June. The decision was eventually taken to port it, resulting in a GEMDOS file system which became part of Atari TOS (for "The Operating System", colloquially known as the "Tramiel Operating System"). This gave the ST a fast, hierarchical file system, essential for hard drives, and provided programmers with function calls similar to MS-DOS. The Atari ST character set is based on codepage 437.
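Codepage 437 remains available in modern environments, so the mapping can be inspected directly. A short Python example using the standard cp437 codec (note that the ST character set diverges from codepage 437 at some positions, so this is an approximation):

```python
# Decode a few bytes using the IBM PC's codepage 437,
# on which the Atari ST character set is based.
data = bytes([0x41, 0x82, 0xE0, 0xB0])
print(data.decode("cp437"))  # 'Aéα░'
```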
Release
After six months of intensive effort following Tramiel's takeover, Atari announced the 520ST at the Winter Consumer Electronics Show in Las Vegas in January 1985. InfoWorld assessed the prototypes shown at computer shows as follows: "Pilot production models of the Atari machine are much slicker than the hand-built models shown at earlier computer fairs; it doesn't look like a typical Commodore 64-style, corner-cutting, low-cost Jack Tramiel product of the past." Atari unexpectedly displayed the ST at Atlanta COMDEX in May. Similarities to the original Macintosh and Tramiel's role in its development resulted in it being nicknamed Jackintosh. Atari's rapid development of the ST amazed many, but others were skeptical, citing its "cheap" appearance, Atari's uncertain financial health, and poor relations between Tramiel-led Commodore and software developers.
In early 1985, the 520ST shipped to the press, developers, and user groups, and in early July 1985 for general retail sales. It saved the company. Atari ST print advertisements stated, "America, We Built It For You", and quoted Atari president Sam Tramiel: "We promised. We delivered. With pride, determination, and good old ATARI know how". By November, Atari stated that more than 50,000 520STs had been sold, "with U.S. sales alone well into five figures". The machine had gone from concept to store shelves in a little under one year.
Atari had intended to release the 130ST with 128 KB of RAM and the 260ST with 256 KB. However, the ST initially shipped without TOS in ROM and required booting TOS from floppy, taking 206 KB RAM away from applications. The 260ST was launched in Europe on a limited basis. Early models have six ROM sockets for easy upgrades to TOS. New ROMs were released a few months later and were included in new machines and as an upgrade for older machines.
Atari originally intended to include GEM's Graphical Device Operating System (GDOS), which allows programs to send GEM VDI (Virtual Device Interface) commands to drivers loaded by GDOS. This allows developers to send VDI instructions to other devices simply by pointing to that device's driver. However, GDOS was not ready when the ST started shipping; it was later included in software packages and with later ST machines. Later versions of GDOS support vector fonts.
A limited set of GEM fonts were included in the ROMs, including the ST's standard 8x8 pixel graphical character set. It contains four characters which can be placed together in a square, forming the face of J. R. "Bob" Dobbs (the figurehead of the Church of the SubGenius).
The ST was less expensive than most contemporaries, including the Macintosh Plus, and faster than many. Largely as a result of its price–performance ratio, the ST became fairly popular, especially in Europe, where foreign-exchange rates amplified prices. The company's English advertising slogan of the era was "Power Without the Price". An Atari ST with terminal-emulation software was much cheaper than a Digital VT220 terminal, commonly needed by offices with central computers.
By late 1985, the 520STM added an RF modulator for TV display.
Industry reaction
Computer Gaming World stated that Tramiel's poor pre-Atari reputation would likely make computer stores reluctant to deal with the company, hurting its distribution of the ST. One retailer said, "If you can believe Lucy when she holds the football for Charlie Brown, you can believe Jack Tramiel"; another said that because of its experience with Tramiel, "our interest in Atari is zero, zilch". Neither Atari nor Commodore could persuade large chains like ComputerLand or BusinessLand to sell its products. Observers criticized Atari's erratic discussion of its stated plans for the new computer, as it shifted between using mass merchandisers, specialty computer stores, and both. When asked at COMDEX, Atari executives could not name any computer stores that would carry the ST. After a meeting with Atari, one analyst said, "We've seen marketing strategies changed before our eyes".
Tramiel's poor reputation influenced potential software developers. One said, "Dealing with Commodore is like dealing with Attila the Hun. I don't know if Tramiel will be following his old habits ... I don't see a lot of people rushing to get software on the machine." Large business-software companies like Lotus, Ashton-Tate, and Microsoft did not promise software for either the ST or Amiga, and the majority of software companies were hesitant to support another platform beyond the IBM PC, Apple, and Commodore 64. Philippe Kahn of Borland said, "These days, if I were a consumer, I'd stick with companies [such as Apple and IBM] I know will be around".
At Las Vegas COMDEX in November 1985, the industry was surprised by more than 30 companies exhibiting ST software while the Amiga had almost none. After Atlanta COMDEX, The New York Times reported that "more than 100 software titles will be available for the [ST], most written by small software houses that desperately need work", and contrasted the "small, little-known companies" at Las Vegas with the larger ones like Electronic Arts and Activision, which planned Amiga applications.
Trip Hawkins of Electronic Arts said, "I don't think Atari understands the software business. I'm still skeptical about its resources and its credibility." Although Michael Berlyn of Infocom promised that his company would quickly publish all of its games for the new computer, he doubted many others would soon do so. Spinnaker and Lifetree were more positive, both promising to release ST software. Spinnaker said that "Atari has a vastly improved attitude toward software developers. They are eager to give us technical support and machines". Lifetree said, "We are giving Atari high priority". Some, such as Software Publishing Corporation, were unsure of whether to develop for the ST or the Amiga. John C. Dvorak wrote that the public saw both Commodore and Atari as selling "cheap disposable" game machines, in part because of their computers' sophisticated graphics.
Design
The original 520ST case design was created by Ira Velinsky, Atari's chief Industrial Designer. It is wedge-shaped, with bold angular lines and a series of grilles cut into the rear for airflow. The keyboard has soft tactile feedback and rhomboid-shaped function keys across the top. It is an all-in-one unit, similar to earlier home computers like the Commodore 64, but with a larger keyboard with cursor keys and a numeric keypad. The original has an external floppy drive (SF354) and AC adapter. Starting with the 1040ST, the floppy drive and power supply are integrated into the base unit.
Ports
The ports on the 520ST remained largely unchanged over its history.
Standard
RS-232c serial port (DB25 male, operating as basic 9-conductor DTE)
Centronics printer port (DB25 female, officially compliant only with the most basic unidirectional standard with a single, "Busy" input line; unofficially offering some bidirectional capabilities)
Atari joystick ports (DE-9 male) for the mouse and game controllers
2 MIDI ports (5-pin DIN, "IN" and "OUT")
Because of its bidirectional design, the Centronics printer port can be used for joystick input, and several games supported adaptors plugged into the printer socket that provided two additional 9-pin joystick ports.
ST-specific
Monitor port (custom 13-pin DIN, 12 of the pins in a rectangular pattern, carrying signals for both RGB and monochrome monitors, monophonic audio and, in later models, composite video)
ACSI (similar to SCSI) DMA port (custom-sized 19-pin D-sub, for hard disks and laser printers, capable of up to 2 MB/s with efficient programming)
Floppy port (14-pin DIN, listed as operating at 250 kbit/s)
ST cartridge port (double-sided 40-contact edge connector socket, for 128 KB ROM cartridges)
Monitor
The ST supports a monochrome or colour monitor. The colour hardware supports two resolutions: 320 × 200 pixels with 16 of 512 colours, and 640 × 200 with 4 of 512 colours. The monochrome monitor was less expensive and has a single resolution of 640 × 400 at 71.25 Hz. The attached monitor determines the available resolutions, so each application either supports both types of monitor or only one. Most ST games require the colour monitor, while productivity software tends to favour the monochrome one. The Philips CM8833-II was a popular colour monitor for the Atari ST.
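The 512-colour palette follows from three bits per red, green, and blue channel (8 × 8 × 8 = 512). A minimal Python sketch converting an ST palette word to 24-bit RGB, assuming the commonly documented 0x0RGB register layout (the layout is an assumption, not stated in this article):

```python
def st_colour_to_rgb24(value):
    """Convert an ST palette word (assumed 0x0RGB, 3 bits per channel) to 24-bit RGB."""
    r = (value >> 8) & 0x7
    g = (value >> 4) & 0x7
    b = value & 0x7
    # Scale each 3-bit channel (0-7) up to 8 bits (0-255).
    return tuple(round(c * 255 / 7) for c in (r, g, b))

print(st_colour_to_rgb24(0x777))  # (255, 255, 255) -- white
print(st_colour_to_rgb24(0x700))  # (255, 0, 0)     -- brightest red
```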
Floppy drive
Atari initially used single-sided 3.5 inch floppy disk drives that could store up to 360 KB. Later drives were double-sided and stored 720 KB. Some commercial software, particularly games, shipped by default on single-sided disks, even supplying two 360 KB floppies instead of a single double-sided one, to avoid alienating early adopters.
Some software uses formats which allow the full disk to be read by double-sided drives but still lets single-sided drives access side A of the disk. Many magazine coverdisks (such as the first 30 issues of ST Format) were designed this way, as were a few games. The music in Carrier Command and the intro sequence in Populous are not accessible to single-sided drives, for example.
STs with double-sided drives can read disks formatted by MS-DOS, but IBM PC compatibles cannot read Atari disks because of differences in the layout of data on track 0.
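Because ST floppies otherwise follow a FAT12-style layout, the key geometry fields sit in the boot sector at the same offsets as on MS-DOS disks. A minimal Python sketch reading them from a raw disk image (the filename is hypothetical, and a FAT12-style boot sector is assumed):

```python
import struct

# Read the boot sector of a raw ST floppy image ("disk.st" is a hypothetical file).
with open("disk.st", "rb") as f:
    boot = f.read(512)

# FAT12-style BIOS parameter block fields, little-endian.
bytes_per_sector, = struct.unpack_from("<H", boot, 11)
sectors_per_cluster = boot[13]
total_sectors, = struct.unpack_from("<H", boot, 19)
sides, = struct.unpack_from("<H", boot, 26)

print(bytes_per_sector, sectors_per_cluster, total_sectors, sides)
```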
Later systems
1040ST
Atari upgraded the basic design in 1986 with the 1040STF: essentially a 520ST with twice the RAM, and with the power supply and a double-sided floppy drive of twice the capacity built in rather than external. This adds to the size of the machine but reduces cable clutter. The joystick and mouse ports, formerly on the right side of the machine, are in a recess underneath the keyboard. An "FM" variant includes an RF modulator, allowing a television to be used instead of a monitor.
The trailing "F" and "FM" were often dropped in common usage. In BYTE magazine's March 1986 cover photo of the system, the name plate reads 1040STFM, but the headline and article call it simply the "1040ST".
The 1040ST is one of the earliest personal computers shipped with a base RAM configuration of 1 MB. With a US list price under $1,000, BYTE hailed it as the first computer to break the $1,000-per-megabyte price barrier. Compute! noted that the 1040ST was the first computer with one megabyte of RAM to sell for less than $2,500.
A limited number of 1040STFs shipped with a single-sided floppy drive of 360 KB capacity versus the 720 KB of the double-sided version. Many 520STF units were also sold in Europe; early models, dated December 1986, likewise have a single-sided 360 KB floppy drive.
Mega
Initial sales were strong, especially in Europe, where Atari sold 75% of its computers. West Germany became Atari's strongest market, with small business owners using them for desktop publishing and CAD.
To address this growing market segment, Atari introduced the ST1 at COMDEX in 1986. Renamed the Mega, it includes a high-quality detached keyboard, a stronger case to support the weight of a monitor, and an internal bus expansion connector. An optional 20 MB hard drive can be placed below or above the main case. Initially equipped with 2 or 4 MB of RAM (a 1 MB version, the Mega 1, followed), the Mega machines can be combined with Atari's laser printer for a low-cost desktop publishing package.
A custom blitter coprocessor improved some graphics performance, but was not included in all models. Developers wanting to use it had to detect its presence in their programs. Properly written applications using the GEM API automatically make use of the blitter.
STE
In late 1989, Atari Corporation released the 520STE and 1040STE (also written STe), enhanced versions of the ST with improvements to the multimedia hardware and operating system. They feature an increased colour palette of 4,096 colours, up from the ST's 512 (though the maximum displayable palette without programming tricks is still limited to 16 in the lowest 320 × 200 resolution, and even fewer in higher resolutions), genlock support, and a blitter coprocessor (stylized as "BLiTTER") which can quickly move large blocks of data, particularly graphics data, around in RAM. The STE is the first Atari with PCM audio: using a new chip, it can play back signed 8-bit samples at 6,258 Hz, 12,517 Hz, 25,033 Hz, or 50,066 Hz via direct memory access (DMA). The samples are arranged either as a mono stream or as interleaved left/right (LRLR...) bytes. RAM is much more simply upgradable via SIMMs.
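The LRLR arrangement is plain byte interleaving of two signed 8-bit streams. A minimal Python sketch of how such a stereo buffer would be laid out (illustrative only, not Atari system code):

```python
# Two channels of signed 8-bit samples (-128..127).
left = [0, 40, 80, 120]
right = [0, -40, -80, -120]

# STE DMA stereo layout: alternating left/right bytes (LRLR...).
frame = bytearray()
for l, r in zip(left, right):
    frame.append(l & 0xFF)  # store each signed sample as its two's-complement byte
    frame.append(r & 0xFF)

print(frame.hex())  # '000028d850b07888'
```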
Two enhanced joystick ports were added (two normal joysticks can be plugged into each port with an adapter), with the new connectors placed in more easily accessed locations on the side of the case. The enhanced joystick ports were re-used in the Atari Jaguar console and are compatible.
The STE models initially had software and hardware conflicts resulting in some applications and video games written for the ST line being unstable or even completely unusable, primarily caused by programming direct hardware calls which bypassed the operating system. Furthermore, even having a joystick plugged in would sometimes cause strange behavior with a few applications (such as the WYSIWYG word-processor application 1st Word Plus). Sleepwalker was the only STE-only game from a major publisher, but there were STe enhancements in games such as Another World, Zool and The Chaos Engine, as well as exclusives from smaller companies.
The last STE machine, the Mega STE, is an STE in a grey Atari TT case, with a switchable 16 MHz dual-bus design (16-bit external, 32-bit internal), an optional Motorola 68881 FPU, a built-in 1.44 MB "HD" 3.5-inch floppy disk drive, a VME expansion slot, a network port (very similar to that used by Apple's LocalTalk), and an optional built-in 3.5-inch hard drive. It also shipped with TOS 2.00 (better support for hard drives, an enhanced desktop interface, a memory test, 1.44 MB floppy support, and bug fixes). It was marketed as more affordable than a TT but more powerful than an ordinary ST.
Atari TT
In 1990, Atari released the high-end workstation-oriented Atari TT030, based on a 32 MHz Motorola 68030 processor. The "TT" name ("Thirty-two/Thirty-two") continued the nomenclature because the 68030 chip has 32-bit buses both internally and externally. Originally planned with a 68020 CPU, the TT has improved graphics and more powerful support chips. The case has a new design with an integrated hard-drive enclosure.
Falcon
The final model of ST computer is the Falcon030. Like the TT, it is 68030-based, at 16 MHz, but with improved video modes and an on-board Motorola 56001 audio digital signal processor. Like the Atari STE, it supports sampling frequencies above 44.1 kHz; the sampling master clock is 98340 Hz (which can be divided by a number between 2 and 16 to get the actual sampling frequencies). It can play the STE sample frequencies (up to 50066 Hz) in 8 or 16 bit, mono or stereo, all by using the same DMA interface as the STE, with a few additions. It can both play back and record samples, with 8 mono channels and 4 stereo channels, allowing musicians to use it for recording to hard drive. Although the 68030 microprocessor can use 32-bit memory, the Falcon uses a 16-bit bus, which reduces performance and cost. In another cost-reduction measure, Atari shipped the Falcon in an inexpensive case much like that of the STF and STE. Aftermarket upgrade kits allow it to be put in a desktop or rack-mount case, with the keyboard separate.
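The divisor scheme yields a fixed set of rates, which can be computed from the figures given above (integer-rounded here; the hardware divides the master clock exactly):

```python
master_clock = 98340  # Hz, the Falcon's sampling master clock

# Available sampling rates: the master clock divided by 2 through 16.
rates = [master_clock // d for d in range(2, 17)]
print(rates)
# [49170, 32780, 24585, 19668, 16390, 14048, 12292, 10926,
#  9834, 8940, 8195, 7564, 7024, 6556, 6146]
```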
Released in 1992, the Falcon was discontinued by Atari the following year. In Europe, C-Lab licensed the Falcon design from Atari and released the C-Lab Falcon Mk I, identical to Atari's Falcon except for slight modifications to the audio circuitry. The Mk II added an internal 500 MB SCSI hard disk; and the Mk X further added a desktop case. C-Lab Falcons were also imported to the US by some Atari dealers.
Software
As with the Atari 8-bit computers, software publishers attributed their reluctance to produce Atari ST products in part, as Compute! reported in 1988, to the belief in a "higher-than-normal amount of software piracy". That year, WordPerfect threatened to discontinue the Atari ST version of its word processor because the company discovered that pirate bulletin board systems (BBSs) were distributing it, causing ST-Log to warn that "we had better put a stop to piracy now ... it can have harmful effects on the longevity and health of your computer". A positive review of Typhoon Thompson in Antic concluded with a similar appeal to readers to buy, rather than copy, ST games.
In 1989, magazines published a letter by Gilman Louie, head of Spectrum HoloByte. He stated that he had been warned by competitors that releasing a game like Falcon on the ST would fail because BBSs would widely disseminate it. Within 30 days of releasing the non-copy-protected ST version, the game was available on BBSs, complete with maps and code wheels. Because the ST market was smaller than that for the IBM PC, it was more vulnerable to piracy, which, Louie said, seemed to be better organized and more widely accepted for the ST. He reported that the Amiga version sold twice as many copies in six weeks as the ST version had in nine, and that the Mac and PC versions had four times the sales. Computer Gaming World stated "This is certainly the clearest exposition ... we have seen to date" of why software companies produced less software for the ST than for other computers.
Several third-party OSes were developed for, or ported to, the Atari ST. Unix clones include Idris, Minix, and the MiNT OS which was developed specifically for the Atari ST.
Audio
A wide range of professional-quality MIDI software was released. The popular Windows and Macintosh applications Cubase and Logic Pro originated on the Atari ST (the latter as Creator, Notator, Notator-SL, and Notator Logic). Another popular and powerful ST music sequencer application, KCS, contains a "Multi-Program Environment" that allows ST users to run other applications, such as the synthesizer patch-editing software XoR (now known as Unisyn on the Macintosh), from within the sequencer application.
Music tracker software, such as the TCB Tracker, became popular on the ST, aiding the production of quality music from the Yamaha sound chip, output now called chiptunes.
Because the ST had comparatively large amounts of memory for the time, sound-sampling packages became feasible. Replay Professional features a sound sampler that reads in parallel, via the ST cartridge port, from an ADC. For output of digital sound, it sets the on-board sound chip's frequency output to an inaudible 128 kHz and then modulates its amplitude.
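The trick is to run the tone generator far above audibility and treat the chip's 16-step volume register as a crude digital-to-analogue converter. A conceptual Python sketch quantizing 8-bit samples to 16 volume levels (the principle only; the YM2149's volume steps are logarithmic, which real players compensated for with lookup tables):

```python
# Quantize unsigned 8-bit samples (0-255) down to the sound chip's 16 volume levels.
samples = [0, 64, 128, 192, 255]
volume_levels = [s >> 4 for s in samples]  # keep the top 4 bits of each sample
print(volume_levels)  # [0, 4, 8, 12, 15]
```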
MasterTracks Pro originated on the Macintosh, followed by ST and then IBM PC versions. It continued on Windows and macOS, along with the original company's notation application Encore.
Applications
Professional desktop publishing software includes Timeworks Publisher, PageStream and Calamus. Word processors include WordPerfect, Microsoft Write, AtariWorks, Signum, Script and First Word (bundled with the machine). Spreadsheets include 3D-Calc, and databases include Zoomracks. Graphics applications include NEOchrome, DEGAS & DEGAS Elite, Deluxe Paint, STAD, and Cyber Paint (which author Jim Kent would later evolve into Autodesk Animator) with advanced features such as 3D design and animation. The Spectrum 512 paint program uses rapid palette switching to expand the on-screen color palette to 512 (up to 46 colors per scan line).
3D computer graphics applications (like Cyber Studio CAD-3D, which author Tom Hudson later developed into Autodesk 3D Studio) brought 3D modelling, sculpting, scripting, and computer animation to the desktop. Video capture and editing applications used dongles connected to the cartridge port; capture was initially low-frame-rate, mainly silent, and monochrome, but progressed to sound and basic color in still frames. At the end, Spectrum 512 and CAD-3D teamed up to produce realistic 512-color textured 3D renderings, but processing was slow, and Atari's failure to deliver a machine with a math coprocessor had Hudson and Yost looking towards the PC as the future before a finished product could be delivered to the consumer.
Garry Kasparov became the first chess player to register a copy of ChessBase, a popular commercial database program for storing and searching records of chess games. The first version was built for Atari ST with his collaboration in January 1987. In his autobiography Child of Change, he regards this facility as "the most important development in chess research since printing".
Graphical touchscreen point of sale software for restaurants was originally developed for Atari ST by Gene Mosher under the ViewTouch copyright and trademark. Instead of using GEM, he developed a GUI and widget framework for the application using the NEOchrome paint program.
Software development
The 520ST was bundled with both Digital Research Logo and Atari ST BASIC. Third-party BASIC systems with better performance were eventually released: HiSoft BASIC, GFA BASIC, FaST BASIC, DBASIC, LDW BASIC, Omikron BASIC, BASIC 1000D and STOS. In the later years of the Atari ST, Omikron Basic was bundled with it in Germany.
Atari's initial development kit consisted of a computer and manuals; its cost discouraged development. The later Atari Developer's Kit consisted of software and manuals, including a resource kit, a C compiler (first Alcyon C, then Mark Williams C), a debugger, and a 68000 assembler, plus a non-disclosure agreement. A third-party alternative was the Megamax C development package.
Other development tools include 68000 assemblers (MadMac from Atari, HiSoft Systems's Devpac, TurboAss, GFA-Assembler), Pascal (OSS Personal Pascal, Maxon Pascal, PurePascal), Modula-2, C compilers (Lattice C, Pure C, Megamax C, GNU C, Aztec C, AHCC), LISP, and Prolog.
Games
The ST had success in gaming due to the low cost, fast performance, and colorful graphics compared to contemporary PCs or 8-bit systems. ST game developers include Steve Bak, Peter Molyneux, Doug Bell, Jeff Minter, Éric Chahi, Jez San, and David Braben.
When the Atari ST was released in 1985, it seemed to be aimed at the professional market. However, the inclusion of two joystick ports and a low-resolution mode of 320x200 pixels, with 16 colours from a 512-colour palette, hinted at its potential for gaming. Initially, it was uncertain whether these new 16-bit machines could really deliver a next-generation gaming experience, as the games at launch did not show a significant visual improvement over the 8-bit systems of the time.
After a while, the first ST games that attracted players began to appear:
Time Bandits - brought labyrinth action to the ST, though it was not technically superior to 8-bit games.
Major Motion - a Spy Hunter clone that could be played with the mouse.
Arena - a decathlon game that had to be played with the keyboard, but had graphics with a level of detail beyond the capabilities of any 8-bit system.
Megaroids - an Asteroids clone running in the medium 640x200 4-colour resolution, which made it outstanding at the time.
Joust - an arcade port showing the new capabilities of bitmap graphics compared to the character-set graphics of 8-bit systems.
Moon Patrol - offered a high-resolution 640x400 black-and-white version.
Sundog - an RPG with simple graphics, but a story that made it a classic.
As developers became more familiar with the ST's capabilities, they were able to exploit its full potential. This resulted in games with visuals that far surpassed anything seen on 8-bit systems. Notable examples include:
Goldrunner - Its sampled sound, bitmap graphics and smooth scrolling were impressive.
Starglider - Featuring a multi-second title sample, a feat for the time, its fast, colourful 3D wireframe graphics showcased the power of the 16-bit processor.
Gauntlet - an arcade port supporting four players via a parallel-port joystick adapter.
ST Karate - a fighting game.
Oids - 2D physics-based action game inspired by Thrust.
It was not long before ST games were gracing the covers of leading computer game magazines. It became standard practice to develop games on the ST and then port them to other platforms. Several of these titles went on to have a significant impact on the history of computer gaming:
The realtime pseudo-3D role-playing video game Dungeon Master was developed and released first on the ST, and is considered to be the best-selling software ever produced for the platform.
Simulation games like Falcon and Flight Simulator II use the ST's graphics hardware, as do many arcade ports.
The 1987 first-person shooter, MIDI Maze, uses the MIDI ports to connect up to 16 machines for networked deathmatch play.
The 3D roller-coaster racer Stunt Car Racer had fast 3D graphics, surpassing those of other systems, largely due to the ST's powerful CPU.
The arcade conversion Super Sprint remained exclusive to the ST for several years, cementing its status as one of the system's signature titles.
Beyond the mainstream releases, there was also a flourishing scene of games designed specifically for the Atari ST's monochrome mode. With its 640x400 resolution, coupled with the crisp display of Atari's SM124 monitor, this mode provided a canvas for some truly distinctive games, offering unique aesthetics and gameplay:
Oxyd - Based on the classic memory card game, Oxyd delivered a compelling puzzle experience.
Ballerburg - A game that captivated a generation and may have paved the way for titles like Worms.
Bolo - a breakout game.
The Atari ST enjoyed a period of dominance throughout the second half of the 1980s, but its influence began to diminish as the next decade dawned. Competitors with custom chips gained the upper hand for a time, until the PC took over. During this period, games were predominantly developed on these rival systems and subsequently ported to the ST, and such conversions often suffered compromises compared with the originals, which had been optimised for their native hardware. A prime example is Wolfchild, a superb game in its original form; the ST version was noticeably inferior due to a rushed port.
While the enhanced capabilities of the Atari 1040 STE were welcomed by the Atari ST community, the number of games that utilised them was limited. This was largely due to the relatively small user base of STe owners, making exclusive STe development commercially unviable. However, some titles did manage to garner positive attention beyond the Atari community:
Obsession - A pinball simulation that boasted numerous tables, leveraging the STe's expanded colour palette and improved hardware scrolling.
Substation - A first-person shooter set within an icy environment.
Brutal Football - A sports game that showed off the STe's Blitter chip.
Sleepwalker - an STE-only game by Ocean Software.
The Atari Falcon, intended as the successor to the ST/STe, found a dedicated following within the Atari scene, resulting in a vibrant homebrew community. Sadly, the Falcon's overall market penetration was insufficient to make a widespread impact. Notable titles include:
Crown of Creation - A 3D game.
Ishar I, II, III - A series of well-regarded dungeon crawlers.
Racer 2 - A highly polished driving game.
Although often overlooked by mainstream publications, the Atari ST gaming scene remains active. Dedicated Atari enthusiasts continue to develop and release new games. Notable examples include:
Stario Land - A meticulously crafted platformer, reminiscent of Mario, which demonstrated the capabilities of smooth scrolling on the ST, subtly highlighting the shortcomings of earlier attempts like The Great Giana Sisters.
Double Bobble 2000 - A faithful recreation of Bubble Bobble, specifically for the Atari Falcon.
Grav - A challenging shoot-em-up.
Hector vs The Mutant Vampire Tomatoes From Hell - a quirky action-platformer.
Beyond the ongoing development of new games, the Atari ST community maintains a presence through various initiatives. Notably, the Atari ST Offline Tournament (STOT), established in 2007, provides a monthly platform for high-score competitions, keeping classic games in active rotation. Furthermore, gatherings and dedicated MIDI Maze events demonstrate the enduring popularity of networked play on the ST.
Social media platforms, particularly YouTube, feature numerous channels dedicated to showcasing Atari ST games. Online resources like AtariMania (archiving), Atari-Forum (community), Atari Legend (the central Atari ST portal), and AtariCrypt (a diverse hub) serve as essential pillars of the community, ensuring the Atari ST remains an active platform.
Emulators
Spectre GCR emulates the Macintosh. MS-DOS emulators were released in the late 1980s. PC-Ditto has a software-only version, and a hardware version that plugs into the cartridge slot or is installed internally. After running the software, an MS-DOS boot disk is required to load the system. Both run MS-DOS programs in CGA mode, though much more slowly than on an IBM PC. Other options are the PC-Speed (NEC V30), AT-Speed (Intel 80286), and ATonce-386SX (Intel 80386SX) hardware emulator boards.
Music industry
The ST's low cost, built-in MIDI ports, and fast, low-latency response times made it a favorite with musicians.
Prominent Russian film-score and song composer Aleksandr Zatsepin began using personal computers for his work with an Atari 1040ST, and continued with Cubase and the Vienna Symphonic Library.
German electronic music pioneers Tangerine Dream relied heavily on the Atari ST in the studio and for live performances during the late 1980s and 1990s.
The album notes for Mike Oldfield's Earth Moving state that it was recorded using an Atari ST and C-Lab MIDI software.
The Fatboy Slim album You've Come a Long Way, Baby was created using an Atari ST.
In the Paris performance of Jean Michel Jarre's album Waiting for Cousteau, Paris La Défense – Une Ville En Concert, musicians had Atari ST machines with C-Lab Unitor software attached to their keyboards, as seen in the live TV broadcast and video recordings.
White Town's "Your Woman", which reached No. 1 in the UK singles charts, was created using an Atari ST.
The Utah Saints used a 520ST and 1040ST running Cubase during the recording of both of their albums, Utah Saints and Two, with their 1040ST still occasionally used for re-recording or remixing early tracks up to 2015.
Atari Teenage Riot programmed most of their music on an Atari ST, including the entire album Is This Hyperreal? (June 2011).
Cabaret Voltaire founder Richard H. Kirk said in 2016 that he continues to write music on an Atari 1040ST with C-Lab.
Darude used Cubase on an Atari 1040ST when he created his 2000 hit "Sandstorm".
Depeche Mode used a combination of an Atari ST and Cubase in the studio during the production of Songs of Faith and Devotion in 1992. The machine is visible in the documentary included with the 2006 remaster of the album.
Record producer Jimmy Hotz used an Atari ST to produce Fleetwood Mac's Tango in the Night album, and records for B. B. King and Dave Mason.
English DJ and house producer Joey Negro.
English songwriters and record producers Stock, Aitken, and Waterman.
English synth-pop duo Pet Shop Boys replaced their Fairlight CMI with an Atari ST, with their programmer Pete Gleadall saying, "[Atari ST] was just much easier to work with".
Canadian industrial band Skinny Puppy used the Atari ST with Steinberg Pro 24 software to produce several of their albums, including Rabies and The Process. A 1040ST can be seen in footage of the band jamming in their studio during The Process's writing sessions.
Dario G used the Atari ST to produce the dance track "Sunchyme" which reached No. 2 in the UK charts.
Technical specifications
All STs are made up of both custom and commercial chips.
Custom chips:
ST Shifter "Video shift register chip": Enables bitmap graphics using 32 KB of contiguous memory for all resolutions. The screen address has to be a multiple of 256 (the frame-buffer arithmetic is checked in the sketch after this list).
ST GLU "Generalized Logic Unit": Control logic for the system used to connect the ST's chips. Not part of the data path, but needed to bridge chips with each other.
ST MMU "Memory Management Unit": Provides signals needed for CPU/blitter/DMA and Shifter to access dynamic RAM. Even memory accesses are given to CPU/blitter/DMA while odd cycles are reserved for DRAM refresh or used by Shifter for displaying contents of the frame buffer.
ST DMA "Direct Memory Access": Used for floppy and hard drive data transfers. Can directly access main memory in the ST.
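As noted for the Shifter above, every ST display mode fills the same 32,000-byte (32 KB) frame buffer; the arithmetic, checked in Python:

```python
# Each ST display mode fills the same 32,000-byte frame buffer.
modes = {
    "low (320x200, 16 colours)":   (320, 200, 4),  # 4 bits per pixel
    "medium (640x200, 4 colours)": (640, 200, 2),  # 2 bits per pixel
    "high (640x400, monochrome)":  (640, 400, 1),  # 1 bit per pixel
}
for name, (width, height, bits_per_pixel) in modes.items():
    print(name, width * height * bits_per_pixel // 8, "bytes")  # 32000 in every mode
```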
Support chips:
MC6850P ACIA "Asynchronous Communications Interface Adapter": Enables the ST to communicate directly with MIDI devices and the keyboard (two chips are used: one for MIDI, one for the keyboard).
MC68901 MFP "Multi Function Peripheral": Used for interrupt generation/control, serial and misc. control input signals. Atari TT030 has two MFP chips.
WD-1772-PH "Western Digital Floppy Disk Controller": Floppy controller chip.
YM2149F PSG "Programmable Sound Generator": Provides three-voice sound synthesis, also used for floppy signalling, serial control output and printer parallel port.
HD6301V1 "Hitachi keyboard processor": Used for keyboard scanning and mouse/joystick ports.
ST/STF/STM/STFM
As originally released in the 520ST:
CPU: Motorola 68000 16-/32-bit CPU @ 8 MHz. 16-bit data/32-bit internal/24-bit address.
RAM: 512 KB or 1 MB
Display modes (60 Hz NTSC, 50 Hz PAL, 71.2 Hz monochrome):
Low resolution: 320 × 200 (16 color), palette of 512 colors
Medium resolution: 640 × 200 (4 color), palette of 512 colors
High resolution: 640 × 400, monochrome
Sound: Yamaha YM2149 3-voice square wave plus 1-voice white noise mono Programmable Sound Generator
Drive: Single-sided 3.5-inch floppy disk drive, 360 KB capacity when formatted in the standard 9-sector, 80-track layout.
Ports: TV out (on ST-M and ST-FM models, NTSC or PAL standard RF-modulated), MIDI in/out (with 'out-thru'), RS-232 serial, Centronics parallel (printer), monitor (RGB or Composite Video color and mono, 13-pin DIN), extra disk drive port (14-pin DIN), DMA port (ACSI port, Atari Computer System Interface) for hard disks and Atari Laser Printer (sharing RAM with computer system), joystick and mouse ports (9-pin MSX standard)
Operating System: TOS v1.00 with Graphics Environment Manager (GEM)
Very early machines have the OS on a floppy disk before a final version was burned into ROM. This version of TOS was bootstrapped from a small core boot ROM.
In 1986, most production models became STFs, with an integrated single-sided (520STF) or double-sided (1040STF) double-density floppy disk drive, but no other changes. Also in 1986, the 520STM added an RF modulator, allowing the low- and medium-resolution colour modes to be used when connected to a TV. Later F and FM models of the 520 had a built-in double-sided disk drive instead of a single-sided one.
STE
As originally released in the 520STE/1040STE:
All of the features of the 520STFM/1040STFM
Extended palette of 4,096 available colors to choose from
Blitter chip (stylized as BLiTTER) to copy/fill/clear large data blocks with a max write rate of 4 Mbytes/s
Hardware support for horizontal and vertical fine scrolling and split screen (using the Shifter video chip)
DMA sound chip with two-channel stereo 8-bit PCM sound at 6.25/12.5/25/50 kHz and stereo RCA audio-out jacks (using enhancements to the Shifter video chip to support audio shifting)
National LMC 1992 audio controller chip, allowing adjustable left/right/master volume and bass and treble EQ via a Microwire interface
Memory: 30-pin SIMM memory slots (SIPP packages in the earliest versions) allowing upgrades up to 4 MB. Allowable memory sizes are limited to 0.5, 1.0, 2.0, 2.5, and 4.0 MB due to configuration constraints (2.5 MB is not officially supported and has compatibility problems). Later third-party upgrade kits allow a maximum of 14 MB with the Magnum-ST, bypassing the stock MMU with a replacement unit and additional chips on a separate board fitting over it.
Ability to synchronize the video timings with an external device so that a video genlock device can be used without any modifications to the computer's hardware
Analogue joypad ports (2), with support for devices such as paddles and light pens in addition to joysticks/joypads. The Atari Jaguar joypads and Power Pad joypads (gray version of Jaguar joypads marketed for the STE and Falcon) can be used without an adapter. Two standard Atari-style digital joysticks could be plugged into each analogue port with an adapter.
TOS 1.06 (also known as TOS 1.6) or TOS 1.62 (which fixed some major backwards-compatibility bugs in TOS 1.6) in two socketed 128 KB ROM chips.
Socketed PLCC 68000 CPU
Models
The members of the ST family are listed below, in roughly chronological order:
520ST original model with 512 KB RAM, external power supply, no floppy disk drive. The early models had only a bootstrap ROM and TOS had to be loaded from disk.
520ST+ same as the original model 520ST, but with 1 MB of RAM.
260ST originally intended to be a 256 KB variant, but actually sold in small quantities in Europe with 512 KB. Used after the release of the 520ST+ to differentiate the cheaper 512 KB models from the 1 MB models. Because the early 520STs were sold with TOS on disk, which used up 192 KB of RAM, the machine only had around 256 KB left.
520STM a 520ST with a built-in modulator for TV output and 512 KB RAM.
520STFM a 520STM with a redesigned motherboard in a larger case with a built-in floppy disk drive (in some cases a single-sided drive only), and 512 KB RAM.
520STF a 520STFM without RF modulator
1040STF a 520STFM with 1 MB of RAM and a built-in double-sided floppy disk drive, but without RF modulator
1040STFM a 520STFM with 1 MB of RAM and a built-in double-sided floppy disk drive with RF modulator
Mega ST (MEGA 1, MEGA 2, MEGA 4) redesigned motherboard with 1, 2 or 4 MB of RAM, respectively, in a much improved "pizza box" case with a detached keyboard. All MEGA mainboards have a PLCC socket for the BLiTTER chip, though some early models did not include the chip itself. They also include a real-time clock and an internal expansion connector. Some early MEGA 2s had a MEGA 4 mainboard with half of the memory-chip locations unpopulated and could be upgraded by adding the additional DRAM chips and some resistors for the control lines. The MEGA 1 mainboard had a redesigned memory-chip area and cannot be upgraded in this way, as it only has places for 1 MB of DRAM chips.
520STE and 1040STE a 520STFM/1040STFM with enhanced sound, a BLiTTER chip, and a 4096-color palette, in the older 1040-style all-in-one case
Mega STE same hardware as the 1040STE except for a faster 16 MHz processor with a 16K cache, an onboard SCSI controller, an additional faster RS-232 port, and a VME expansion port, in an ST-grey version of the TT case
STacy a portable (but definitely not laptop) version of the ST with the complete ST keyboard, an LCD screen simulating 640x400 hi-res, and a mini-trackball, intended mostly for travelers and musicians because of the backlit screen and built-in MIDI ports. It was originally designed to operate on 12 standard C-cell flashlight batteries for portability; when Atari realized how quickly the machine would use up a set of batteries (especially as rechargeable batteries of the time supplied insufficient power compared to the intended alkalines), it simply glued the lid of the battery compartment shut.
ST BOOK a later portable ST, more portable than the STacy but sacrificing several features to achieve this, notably the backlight and the internal floppy disk drive. Files were meant to be stored on a small amount (one megabyte) of internal flash memory while on the road and transferred to a desktop ST, via serial or parallel links, memory flashcards, or an external (and externally powered) floppy disk, once back indoors. The screen was highly reflective for the time but still hard to use indoors or in low light; it is fixed to the 640 × 400 1-bit mono mode, and no external video port was provided. Despite its limitations, it gained some popularity, particularly amongst musicians.
Unreleased
The 130ST was intended to be a 128 KB variant. It was announced at the 1985 CES alongside the 520ST but never produced. The 4160STE was a 1040STE with 4 MB of RAM. A small quantity of development units were produced, but the system was never officially released. Atari did produce a quantity of 4160STE metallic case badges which found their way to dealers, so it is not uncommon to find one attached to a system which was originally a 520STE or 1040STE. No such labels were produced for the base of the systems.
Related systems
Atari Transputer Workstation is a standalone machine developed in conjunction with Perihelion Hardware, containing modified ST hardware and up to 17 transputers capable of massively parallel operations for tasks such as ray tracing.
Clones
Following Atari's departure from the computer market, both Medusa Computer Systems and Milan Computer manufactured Atari Falcon/TT-compatible machines with 68040 and 68060 processors. The FireBee is an Atari ST/TT clone based on the Coldfire processor. The GE-Soft Eagle is a 32 MHz TT clone.
Peripherals
SF354: Single-sided double-density 3.5-inch floppy drive (360 KB) with external power supply
SF314: Double-sided double-density 3.5-inch floppy drive (720 KB) with external power supply
PS3000: Combined 12-inch color monitor and 360 KB 3.5-inch floppy drive (SF354), with speaker. Manufactured by JVC in limited quantity (≈1000); only a few working models remain.
SM124: Monochrome monitor, 12-inch screen (9.5-inch displayed image), speaker, 640 × 400 pixels, 70 Hz refresh
SM125: Monochrome monitor, 12-inch screen, up/down/sideways swivel stand, speaker, 640x400 pixels, 70 Hz refresh
SM147: Monochrome monitor, 14-inch screen, no speaker, replacement for SM124
SC1224: Color monitor, 12-inch screen, 640 × 200 pixels plus speaker
SC1425: Color monitor, 14-inch screen, one speaker to the left of the screen, and a headphone jack
SC1435: Color monitor, 14-inch screen, stereo speakers, replacement for SC1224 (rebadged Magnavox 1CM135)
SM195: Monochrome monitor, 19-inch screen for TT030. 1280 × 960 pixels. 70 Hz refresh
SH204: External hard drive, 20 MB MFM drive, "shoe box" case made of metal
SH205: External hard drive, Mega ST matching case, 20 MB MFM 3.5-inch (Tandon TM262) or 5.25-inch (Seagate ST225) drive with ST506 interface (later became the Megafile 20)
Megafile 20, 30, 60: External hard drive, Mega ST matching case, ACSI bus; Megafile 30 and 60 had a 5.25-inch RLL (often a Seagate ST238R 30 MB or Seagate ST277R 60 MB drive) with ST506 interface
Megafile 44: Removable cartridge drive, ACSI bus, Mega ST matching case
SLM804: Laser printer, connected through ACSI DMA port, used ST's memory and processor to build pages for printing
SLM605: Laser printer, connected through ACSI DMA port, smaller than SLM804.
SatanDisk
SatanDisk is an SD and MMC card adapter for Atari 16-bit computers, such as the Atari ST, introduced in 2007. Its objective is to replace the mechanical hard drives available from Atari (SH204, SH205, and Megafile) and compatible products. The interface allows an SD or MMC card to be attached to the ACSI (hard disk) port of Atari computers, and has been tested to be compatible with TOS versions 1.02 to 2.06. The maximum supported size is 4 GB. The device appears to the system as a regular ACSI-attached hard disk, but has so far only been used successfully with the proprietary, commercial HDDriver driver package.
In 2009, the developer Jookie (Miroslav Nohaj) introduced a successor, UltraSatan, which supports two SD/MMC cards in parallel. The adapter features hot-plugging of the cards and includes a battery-backed RTC chip. In addition to the commercial HDDriver, it is supported by the free ICD PRO driver.
|
68000-based home computers;All-in-one computers;Atari ST;Computer-related introductions in 1985;Home computers;Products introduced in 1985
|
https://en.wikipedia.org/wiki/Abiotic%20stress
|
Abiotic stress is the negative impact of non-living factors on the living organisms in a specific environment. The non-living variable must influence the environment beyond its normal range of variation to adversely affect the population performance or individual physiology of the organism in a significant way.
Whereas a biotic stress would include living disturbances such as fungi or harmful insects, abiotic stress factors, or stressors, are naturally occurring, often intangible and inanimate factors such as intense sunlight, temperature or wind that may cause harm to the plants and animals in the area affected. Abiotic stress is essentially unavoidable. Abiotic stress affects animals, but plants are especially dependent, if not solely dependent, on environmental factors, so it is particularly constraining. Abiotic stress is the most harmful factor concerning the growth and productivity of crops worldwide. Research has also shown that abiotic stressors are at their most harmful when they occur together, in combinations of abiotic stress factors.
Examples
Abiotic stress comes in many forms. The most common of the stressors are the easiest for people to identify, but there are many other, less recognizable abiotic stress factors which affect environments constantly.
The most basic stressors include:
High winds
Extreme temperatures
Drought
Flood
Other natural disasters, such as tornadoes and wildfires.
Cold
Heat
Nutrient deficiency
Lesser-known stressors generally occur on a smaller scale. They include: poor edaphic conditions like rock content and pH levels, high radiation, compaction, contamination, and other, highly specific conditions like rapid rehydration during seed germination.
Effects
Abiotic stress, as a natural part of every ecosystem, will affect organisms in a variety of ways. Although these effects may be either beneficial or detrimental, the location of the area is crucial in determining the extent of the impact that abiotic stress will have. The higher the latitude of the area affected, the greater the impact of abiotic stress will be on that area. So, a taiga or boreal forest is at the mercy of whatever abiotic stress factors may come along, while tropical zones are much less susceptible to such stressors.
Benefits
While abiotic stress may have negative impacts on individual organisms, there are cases where abiotic stress plays an important role in maintaining a healthy ecosystem. Important ecosystem mechanisms and improved overall stress tolerance may rely on occasional low levels of abiotic stress.
One example of a situation where abiotic stress plays a constructive role in an ecosystem is in natural wildfires. Smaller fires are useful in reducing the overall fuel load of an area of forest or prairie. By clearing out dead brush and other organic matter, the risk of catastrophic and widespread fire decreases, and the residual ash of smaller fires helps add nutrients back into the soil. The observed benefits of these smaller and more controlled fires on land usability and species populations have led to the use of prescribed burning by humans for centuries. Varying perspectives on the benefits and risks of fire to ecosystems have influenced official policy through history. The U.S. Forest Service, initially focused on fire control, changed its policy to one of fire management in 1974, recognizing these fires as a natural part of an ecosystem. There is also evidence that a diverse fire history between patches of land within an area has been shown to benefit transitional landscapes between savanna and forest. Even though it is healthy for an ecosystem, a wildfire can still be considered an abiotic stressor, because it puts stress on individual organisms within the area. On the larger scale, though, natural wildfires are positive manifestations of abiotic stress.
It must also be taken into account, when looking for benefits of abiotic stress, that one phenomenon may not affect an entire ecosystem in the same way. While a flood will kill most low-growing plants in a given area, rice, if present, will thrive in the wet conditions. Another example involves phytoplankton and zooplankton: the same types of conditions are usually considered stressful for both, and they behave very similarly when exposed to ultraviolet light and most toxins, but at elevated temperatures phytoplankton react negatively while thermophilic zooplankton react positively. The two may live in the same environment, but an increase in the area's temperature would prove stressful for only one of them.
Lastly, abiotic stress has enabled species to grow, develop, and evolve, through the process of natural selection. Heritable traits that improve an organism's resiliency under stressed conditions increase the likelihood that the organism will survive and reproduce, enabling it to pass these traits to the next generation. Both plants and animals have evolved mechanisms allowing them to survive extremes.
Detriments
One of the chief detriments of abiotic stress concerns farming. One study has claimed that abiotic stress causes more crop loss than any other factor, and that the yields of most major crops are reduced by more than 50% from their potential.
Because abiotic stress is widely considered a detrimental effect, the research on this branch of the issue is extensive. For more information on the harmful effects of abiotic stress, see the sections below on plants and animals.
In plants
A plant's first line of defense against abiotic stress is in its roots. If the soil holding the plant is healthy and biologically diverse, the plant will have a higher chance of surviving stressful conditions.
The plant responses to stress are dependent on the tissue or organ affected by the stress. For example, transcriptional responses to stress are tissue or cell specific in roots and are quite different depending on the stress involved.
One of the primary responses to abiotic stress such as high salinity is the disruption of the Na+/K+ ratio in the cytoplasm of the plant cell. High concentrations of Na+, for example, can decrease the capacity for the plant to take up water and also alter enzyme and transporter functions. Evolved adaptations to efficiently restore cellular ion homeostasis have led to a wide variety of stress tolerant plants.
Facilitation, the positive interaction between different species of plants, forms an intricate web of association in a natural environment; it is how plants work together. In areas of high stress, the level of facilitation is especially high as well, possibly because plants need a stronger network to survive in a harsher environment: interactions between species, such as cross-pollination or mutualistic actions, become more common as plants cope with the severity of their habitat.
Plants also adapt very differently from one another, even from a plant living in the same area. When a group of different plant species was prompted by a variety of different stress signals, such as drought or cold, each plant responded uniquely. Hardly any of the responses were similar, even though the plants had become accustomed to exactly the same home environment.
Serpentine soils (media with low concentrations of nutrients and high concentrations of heavy metals) can be a source of abiotic stress. Initially, the absorption of toxic metal ions is limited by exclusion at the cell membrane. Ions that are absorbed into tissues are sequestered in cell vacuoles, a mechanism facilitated by proteins on the vacuole membrane. Plants adapted to serpentine soils include metallophytes, or hyperaccumulators, which are known for their ability to absorb heavy metals and move them by root-to-shoot translocation, storing the metals in the shoots rather than the roots. They are also distinguished by their ability to tolerate toxic concentrations of heavy metals.
Chemical priming has been proposed to increase tolerance to abiotic stresses in crop plants. In this method, which is analogous to vaccination, stress-inducing chemical agents are introduced to the plant in brief doses so that the plant begins preparing defense mechanisms. Thus, when the abiotic stress occurs, the plant has already prepared defense mechanisms that can be activated faster and increase tolerance. Prior exposure to tolerable doses of biotic stresses, such as phloem-feeding insect infestation, has also been shown to increase tolerance to abiotic stresses in plants.
Impact on food production
Abiotic stress mostly affects plants used in agriculture. Some examples of adverse conditions (which may be caused by climate change) are high or low temperatures, drought, salinity, and toxins.
Rice (Oryza sativa) is a classic example. Rice is a staple food throughout the world, especially in China and India. Rice plants can undergo different types of abiotic stresses, like drought and high salinity. These stress conditions adversely affect rice production. Genetic diversity has been studied among several rice varieties with different genotypes, using molecular markers.
Chickpea production is affected by drought. Chickpeas are one of the most important foods in the world.
Wheat is another major crop affected by drought: lack of water affects plant development and can wither the leaves.
Maize crops can be affected by high temperature and drought, leading to the loss of maize crops due to poor plant development.
Soybean is a major source of protein, and its production is also affected by drought.
Salt stress in plants
Soil salinization, the accumulation of water-soluble salts to levels that negatively impact plant production, is a global phenomenon affecting approximately 831 million hectares of land. More specifically, the phenomenon threatens 19.5% of the world's irrigated agricultural land and 2.1% of the world's non-irrigated (dry-land) agricultural lands. High soil salinity content can be harmful to plants because water-soluble salts can alter osmotic potential gradients and consequently inhibit many cellular functions. For example, high soil salinity content can inhibit the process of photosynthesis by limiting a plant's water uptake; high levels of water-soluble salts in the soil can decrease the osmotic potential of the soil and consequently decrease the difference in water potential between the soil and the plant's roots, thereby limiting electron flow from H2O to P680 in Photosystem II's reaction center.
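To first order, the osmotic component of this effect can be estimated with the standard van 't Hoff relation for osmotic potential (a textbook approximation, supplied here for illustration rather than taken from the studies cited):

$$\Psi_s = -iCRT$$

where i is the van 't Hoff (ionization) factor of the salt, C its molar concentration, R the gas constant, and T the absolute temperature. Dissolving more salt makes Ψs more negative, shrinking the soil-to-root water-potential difference that drives uptake.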
Over generations, many plants have evolved mechanisms to counter salinity effects. One important regulator of the salinity response in plants is the hormone ethylene, which is known for regulating plant growth and development and for mediating responses to stress conditions. Many central membrane proteins in plants, such as ETO2, ERS1 and EIN2, are used for ethylene signaling in many plant growth processes. Mutations in these proteins can lead to heightened salt sensitivity and can limit plant growth. The effects of salinity have been studied on Arabidopsis plants with mutated ERS1, ERS2, ETR1, ETR2 and EIN4 proteins. These proteins are used for ethylene signaling against certain stress conditions, such as salt, and the ethylene precursor ACC is used to suppress sensitivity to salt stress.
Phosphate starvation in plants
Phosphorus (P) is an essential macronutrient required for plant growth and development, but it is present only in limited quantities in most of the world's soil. Plants use P mainly in the form of soluble inorganic phosphate (PO43−) and are subject to abiotic stress when there is not enough soluble PO43− in the soil. Phosphorus forms insoluble complexes with Ca and Mg in alkaline soils, and with Al and Fe in acidic soils, that make it unavailable to plant roots. When bioavailable P in the soil is limited, plants show extensive symptoms of abiotic stress, such as short primary roots and more lateral roots and root hairs (making more surface available for phosphate absorption), and exudation of organic acids and phosphatase to release phosphate from complex P-containing molecules and make it available to growing organs. It has been shown that PHR1, a MYB-related transcription factor, is a master regulator of the P-starvation response in plants. PHR1 has also been shown to regulate extensive remodeling of lipids and metabolites during phosphorus-limitation stress.
Drought stress
Drought stress, defined as a naturally occurring water deficit, is a main cause of crop losses in agriculture, because water is essential for many fundamental processes in plant growth. It has become especially important in recent years to find ways to combat drought stress, since a decrease in precipitation and a consequent increase in drought are extremely likely in the future as global warming proceeds. Plants have evolved many mechanisms and adaptations to deal with drought stress. One of the leading ways plants combat drought stress is by closing their stomata; a key hormone regulating stomatal opening and closing is abscisic acid (ABA). Newly synthesized ABA binds to its receptors, and this binding affects the opening of ion channels, thereby decreasing turgor pressure in the guard cells and causing the stomata to close. A 2018 study by Gonzalez-Villagra et al. showed that ABA levels increased in drought-stressed plants: when plants were placed in a stressful situation, they produced more ABA to conserve the water in their leaves. Another important factor in dealing with drought stress, and in regulating the uptake and export of water, is aquaporins (AQPs), integral membrane proteins that form channels whose main job is the transport of water and other essential solutes. AQPs are both transcriptionally and post-transcriptionally regulated by many different factors, such as ABA, GA3, pH and Ca2+, and the specific levels of AQPs in certain parts of the plant, such as roots or leaves, help draw as much water into the plant as possible. By understanding the mechanisms of both AQPs and the hormone ABA, scientists will be better able to produce drought-resistant plants in the future.
A study by Tombesi et al. found that plants which had previously been exposed to drought were able to minimize water loss and decrease water use. They found that plants exposed to drought conditions actually changed the way they regulated their stomata and what the authors called the "hydraulic safety margin", so as to decrease the plant's vulnerability. By changing the regulation of their stomata, and subsequently their transpiration, plants were able to function better when less water was available.
In animals
For animals, the most stressful of all the abiotic stressors is heat, because many species are unable to regulate their internal body temperature; even in species that can, the regulation is not always completely accurate. Temperature determines metabolic rates, heart rates, and other very important factors within the bodies of animals, so an extreme temperature change can easily distress the animal's body. Animals can respond to extreme heat, for example, through natural heat acclimation or by burrowing into the ground to find a cooler space.
High genetic diversity is also beneficial in animals, providing resilience against harsh abiotic stressors; it acts as a kind of reserve when a species is subjected to the pressures of natural selection. Galling insects are among the most specialized and diverse herbivores on the planet, and their extensive protections against abiotic stress factors have helped them attain that position.
In endangered species
Biodiversity is determined by many things, and one of them is abiotic stress. If an environment is highly stressful, biodiversity tends to be low. If abiotic stress does not have a strong presence in an area, the biodiversity will be much higher.
This idea leads to an understanding of how abiotic stress and endangered species are related. Across a variety of environments it has been observed that as the level of abiotic stress increases, the number of species decreases, meaning that species are more likely to become threatened, endangered, or even extinct, when and where abiotic stress is especially harsh.
Effects of anthropogenic climate change on abiotic stress
Data suggests that anthropogenic activity has increased the global temperature, and likely increased the odds of extreme climate events such as drought, fire conditions and flooding. Threats to organisms and ecosystem biodiversity due to increased abiotic stress are one major impact of this change. The effects of climate change on biomes vary due to the location, patterns of precipitation, and the organisms which inhabit them. On the species level, the increased abiotic stress due to climate change can lead to adaptations which increase a species' reproductive success under these conditions. However, such highly specialized adaptations may leave species vulnerable to other stresses.
See also
Ecophysiology
References
|
Agriculture;Biodiversity;Botany;Habitat;Stress (biological and psychological)
|
https://en.wikipedia.org/wiki/Chemistry%20of%20ascorbic%20acid
|
Ascorbic acid is an organic compound with formula C6H8O6, originally called hexuronic acid. It is a white solid, but impure samples can appear yellowish. It dissolves freely in water to give mildly acidic solutions. It is a mild reducing agent.
Ascorbic acid exists as two enantiomers (mirror-image isomers), commonly denoted "L" (for "levo") and "D" (for "dextro"). The L isomer is the one most often encountered: it occurs naturally in many foods, and is one form ("vitamer") of vitamin C, an essential nutrient for humans and many animals. Deficiency of vitamin C causes scurvy, formerly a major disease of sailors on long sea voyages. It is used as a food additive and a dietary supplement for its antioxidant properties. The "D" form (erythorbic acid) can be made by chemical synthesis, but has no significant biological role.
History
The antiscorbutic properties of certain foods were demonstrated in the 18th century by James Lind. In 1907, Axel Holst and Theodor Frølich discovered that the antiscorbutic factor was a water-soluble chemical substance, distinct from the one that prevented beriberi. Between 1928 and 1932, Albert Szent-Györgyi isolated a candidate for this substance, which he called "hexuronic acid", first from plants and later from animal adrenal glands. In 1932 Charles Glen King confirmed that it was indeed the antiscorbutic factor.
In 1933, sugar chemist Walter Norman Haworth, working with samples of "hexuronic acid" that Szent-Györgyi had isolated from paprika and sent to him the previous year, deduced the correct structure and optical-isomeric nature of the compound, and in 1934 reported its first synthesis. In reference to the compound's antiscorbutic properties, Haworth and Szent-Györgyi proposed the name "a-scorbic acid", and later specifically L-ascorbic acid, for the compound. Because of their work, in 1937 the Nobel Prizes in Chemistry and in Physiology or Medicine were awarded to Haworth and Szent-Györgyi, respectively.
Chemical properties
Acidity
Ascorbic acid is a furan-based lactone of 2-ketogluconic acid. It contains an enediol group adjacent to the carbonyl. This −C(OH)=C(OH)−C(=O)− structural pattern is characteristic of reductones and increases the acidity of one of the enol hydroxyl groups. The deprotonated conjugate base is the ascorbate anion, which is stabilized by the electron delocalization that results from resonance between two forms.
For this reason, ascorbic acid is much more acidic than would be expected if the compound contained only isolated hydroxyl groups.
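For scale, the first dissociation step and typical handbook values (supplied here for illustration, not stated in the text above) are:

$$\mathrm{AscH_2 \rightleftharpoons AscH^- + H^+}, \qquad \mathrm{p}K_{a1} \approx 4.1$$

compared with pKa ≈ 16 for an isolated alcohol hydroxyl group.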
Salts
The ascorbate anion forms salts, such as sodium ascorbate, calcium ascorbate, and potassium ascorbate.
Esters
Ascorbic acid can also react with organic acids, acting as an alcohol, to form esters such as ascorbyl palmitate and ascorbyl stearate.
Nucleophilic attack
Nucleophilic attack of ascorbic acid on a proton results in a 1,3-diketone.
Oxidation
The ascorbate ion is the predominant species at typical biological pH values. It is a mild reducing agent and antioxidant, typically reacting with oxidants of the reactive oxygen species, such as the hydroxyl radical.
Reactive oxygen species are damaging to animals and plants at the molecular level due to their possible interaction with nucleic acids, proteins, and lipids. Sometimes these radicals initiate chain reactions. Ascorbate can terminate these chain radical reactions by electron transfer. The oxidized forms of ascorbate are relatively unreactive and do not cause cellular damage.
Ascorbic acid and its sodium, potassium, and calcium salts are commonly used as antioxidant food additives. These compounds are water-soluble and thus cannot protect fats from oxidation; for this purpose, the fat-soluble esters of ascorbic acid with long-chain fatty acids (ascorbyl palmitate or ascorbyl stearate) can be used as antioxidant food additives. A sodium-dependent active transport process enables absorption of ascorbic acid from the intestine.
Ascorbate readily donates a hydrogen atom to free radicals, forming the radical anion semidehydroascorbate (also known as monodehydroascorbate), a resonance-stabilized semitrione.
Loss of an electron from semidehydroascorbate to produce the 1,2,3-tricarbonyl pseudodehydroascorbate is thermodynamically disfavored, which helps prevent propagation of free-radical chain reactions such as autoxidation.
However, being a good electron donor, excess ascorbate in the presence of free metal ions can not only promote but also initiate free radical reactions, thus making it a potentially dangerous pro-oxidative compound in certain metabolic contexts.
Semidehydroascorbate oxidation instead occurs in conjunction with hydration, yielding the bicyclic hemiketal dehydroascorbate. In particular, semidehydroascorbate undergoes disproportionation to ascorbate and dehydroascorbate, as shown below.
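Schematically, writing AscH− for ascorbate, Asc•− for semidehydroascorbate and DHA for dehydroascorbate (a simplified rendering of the reaction):

$$2\,\mathrm{Asc^{\bullet-}} + \mathrm{H^+} \longrightarrow \mathrm{AscH^-} + \mathrm{DHA}$$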
Aqueous solutions of dehydroascorbate are unstable, undergoing hydrolysis with a half-life of 5–15 minutes. Decomposition products include diketogulonic acid, xylonic acid, threonic acid and oxalic acid.
Other reactions
It creates volatile compounds when mixed with glucose and amino acids at 90 °C.
It is a cofactor in tyrosine oxidation, though because a crude extract of animal liver is used, it is unclear which reaction, catalyzed by which enzyme, is being assisted. For known roles in enzymatic reactions, see the article on vitamin C.
Because it reduces iron(III) and chelates iron ions, it enhances the oral absorption of non-heme iron. This property also applies to its enantiomer.
Conversion to oxalate
In 1958, it was discovered that ascorbic acid can be converted to oxalate, a key component of calcium oxalate kidney stones. The process begins with the formation of dehydroascorbic acid (DHA) from the ascorbyl radical. While DHA can be recycled back to ascorbic acid, a portion irreversibly degrades to 2,3-diketogulonic acid (DKG), which then breaks down to both oxalate and the sugars L-erythrulose and threosone. Research conducted in the 1960s suggested ascorbic acid could substantially contribute to urinary oxalate content (possibly over 40%), but these estimates have been questioned due to methodological limitations. Subsequent large cohort studies have yielded conflicting results regarding the link between vitamin C intake and kidney stone formation. The overall clinical significance of ascorbic acid consumption to kidney stone risk, however, remains inconclusive, although several studies have suggested a potential association, especially with high-dose supplementation in men.
Uses
Food additive
The main use of L-ascorbic acid and its salts is as food additives, mostly to combat oxidation and prevent discoloration of the product during storage. It is approved for this purpose in the EU (with E number E300), the US, Australia, and New Zealand.
The "" enantiomer (erythorbic acid) shares all of the non-biological chemical properties with the more common enantiomer. As a result, it is an equally effective food antioxidant, and is also approved in processed foods.
Dietary supplement
Another major use of L-ascorbic acid is as a dietary supplement. It is on the World Health Organization's List of Essential Medicines. Deficiency over a prolonged period causes scurvy, which is characterized by fatigue, widespread weakness in connective tissues and capillary fragility; it affects multiple organ systems because of the vitamin's role in the biochemical reactions of connective-tissue synthesis.
Niche, non-food uses
Ascorbic acid is easily oxidized and so is used as a reductant in photographic developer solutions (among others) and as a preservative.
In fluorescence microscopy and related fluorescence-based techniques, ascorbic acid can be used as an antioxidant to increase fluorescent signal and chemically retard dye photobleaching.
It is also commonly used to remove dissolved metal stains, such as iron, from fiberglass swimming pool surfaces.
In plastic manufacturing, ascorbic acid can be used to assemble molecular chains more quickly and with less waste than traditional synthesis methods.
Heroin users are known to use ascorbic acid as a means to convert heroin base to a water-soluble salt so that it can be injected.
Because it reduces iodine, it is used to negate the effects of iodine tablets in water purification: it reacts with the residual iodine in the sterilized water, removing the iodine's taste, color, and smell. For this reason it is often sold as a second set of tablets in most sporting goods stores, as Potable Aqua-Neutralizing Tablets, along with the potassium iodide tablets.
Intravenous high-dose ascorbate is being used as a chemotherapeutic and biological response modifying agent. It is undergoing clinical trials.
It is sometimes used as a urinary acidifier to enhance the antiseptic effect of methenamine.
Synthesis
Natural biosynthesis of vitamin C occurs through various processes in many plants and animals.
Industrial preparation
Seventy percent of the world's supply of ascorbic acid is produced in China. Ascorbic acid is prepared in industry from glucose in a method based on the historical Reichstein process. In the first of a five-step process, glucose is catalytically hydrogenated to sorbitol, which is then oxidized by the microorganism Acetobacter suboxydans to sorbose. Only one of the six hydroxy groups is oxidized by this enzymatic reaction. From this point, two routes are available. Treatment of the product with acetone in the presence of an acid catalyst converts four of the remaining hydroxyl groups to acetals. The unprotected hydroxyl group is oxidized to the carboxylic acid by reaction with the catalytic oxidant TEMPO (regenerated by sodium hypochlorite bleaching solution). Historically, industrial preparation via the Reichstein process used potassium permanganate as the bleaching solution. Acid-catalyzed hydrolysis of this product performs the dual function of removing the two acetal groups and ring-closing lactonization. This step yields ascorbic acid. Each of the five steps has a yield larger than 90%.
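Taking the per-step figure at face value, the overall yield of the five-step route is bounded below by the product of the step yields (a simple check, not a figure from the text):

$$0.90^5 \approx 0.59$$

that is, at least roughly 59% of the input glucose is converted to ascorbic acid under the stated assumption.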
A biotechnological process, first developed in China in the 1960s and further developed in the 1990s, dispenses with acetone protecting groups. A second, genetically modified microbe species, such as a mutant Erwinia, oxidises sorbose into 2-ketogluconic acid (2-KGA), which can then undergo ring-closing lactonization via dehydration. This is the predominant process used by the ascorbic acid industry in China. Researchers are exploring means for one-step fermentation.
Determination
The traditional way to analyze the ascorbic acid content is by titration with an oxidizing agent, and several procedures have been developed.
The popular iodometry approach uses iodine in the presence of a starch indicator. Iodine is reduced by ascorbic acid, and when all the ascorbic acid has reacted, the iodine is in excess, forming a blue-black complex with the starch indicator. This indicates the end-point of the titration.
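The underlying stoichiometry is a two-electron oxidation of ascorbic acid (C6H8O6) to dehydroascorbic acid (C6H6O6), consuming one mole of iodine per mole of ascorbic acid:

$$\mathrm{C_6H_8O_6 + I_2 \longrightarrow C_6H_6O_6 + 2\,HI}$$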
As an alternative, ascorbic acid can be treated with iodine in excess, followed by back titration with sodium thiosulfate using starch as an indicator.
This iodometric method has been revised to exploit the reaction of ascorbic acid with iodate and iodide in acid solution. Electrolyzing the potassium iodide solution produces iodine, which reacts with ascorbic acid. The end of the process is determined by potentiometric titration, in a manner similar to Karl Fischer titration. The amount of ascorbic acid can then be calculated by Faraday's law, as shown below.
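In this coulometric form, Faraday's law yields the amount of ascorbic acid directly from the charge passed, since the oxidation transfers two electrons per molecule:

$$n_{\text{ascorbic acid}} = \frac{It}{2F}$$

where I is the electrolysis current, t the time to the end-point, and F ≈ 96,485 C·mol−1 is the Faraday constant.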
Another alternative uses N-bromosuccinimide (NBS) as the oxidizing agent in the presence of potassium iodide and starch. The NBS first oxidizes the ascorbic acid; when the latter is exhausted, the NBS liberates the iodine from the potassium iodide, which then forms the blue-black complex with starch.
References
|
3-Hydroxypropenals;Antioxidants;Biomolecules;Coenzymes;Corrosion inhibitors;Dietary antioxidants;Furanones;Organic acids;Vitamers;Vitamin C
|
https://en.wikipedia.org/wiki/Arthur%20Eddington
|
Sir Arthur Stanley Eddington (28 December 1882 – 22 November 1944) was an English astronomer, physicist, and mathematician. He was also a philosopher of science and a populariser of science. The Eddington limit, the natural limit to the luminosity of stars, or the radiation generated by accretion onto a compact object, is named in his honour.
Around 1920, he foreshadowed the discovery and mechanism of nuclear fusion processes in stars, in his paper "The Internal Constitution of the Stars". At that time, the source of stellar energy was a complete mystery; Eddington was the first to correctly speculate that the source was fusion of hydrogen into helium.
Eddington wrote a number of articles that announced and explained Einstein's theory of general relativity to the English-speaking world. World War I had severed many lines of scientific communication, and new developments in German science were not well known in England. He also conducted an expedition to observe the solar eclipse of 29 May 1919 on the Island of Príncipe that provided one of the earliest confirmations of general relativity, and he became known for his popular expositions and interpretations of the theory.
Early years
Eddington was born 28 December 1882 in Kendal, Westmorland (now Cumbria), England, the son of Quaker parents, Arthur Henry Eddington, headmaster of the Quaker School, and Sarah Ann Shout.
His father taught at a Quaker training college in Lancashire before moving to Kendal to become headmaster of Stramongate School. He died in the typhoid epidemic which swept England in 1884. His mother was left to bring up her two children with relatively little income. The family moved to Weston-super-Mare where at first Stanley (as his mother and sister always called Eddington) was educated at home before spending three years at a preparatory school. The family lived in a house called Varzin, at 42 Walliscote Road, Weston-super-Mare. A commemorative plaque on the building explains Eddington's contributions to science.
In 1893 Eddington entered Brynmelyn School. He proved to be a most capable scholar, particularly in mathematics and English literature. His performance earned him a scholarship to Owens College, Manchester (what was later to become the University of Manchester), in 1898, which he was able to attend, having turned 16 that year. He spent the first year in a general course, but he turned to physics for the next three years. Eddington was greatly influenced by his physics and mathematics teachers, Arthur Schuster and Horace Lamb. At Manchester, Eddington lived at Dalton Hall, where he came under the lasting influence of the Quaker mathematician J. W. Graham. His progress was rapid, winning him several scholarships, and he graduated with a BSc in physics with First Class Honours in 1902.
Based on his performance at Owens College, he was awarded a scholarship to Trinity College, Cambridge, in 1902. His tutor at Cambridge was Robert Alfred Herman and in 1904 Eddington became the first ever second-year student to be placed as Senior Wrangler. After receiving his M.A. in 1905, he began research on thermionic emission in the Cavendish Laboratory. This did not go well, and meanwhile he spent time teaching mathematics to first year engineering students. This hiatus was brief. Through a recommendation by E. T. Whittaker, his senior colleague at Trinity College, he secured a position at the Royal Observatory, Greenwich, where he was to embark on his career in astronomy, a career whose seeds had been sown even as a young child when he would often "try to count the stars".
Astronomy
In January 1906, Eddington was nominated to the post of chief assistant to the Astronomer Royal at the Royal Greenwich Observatory. He left Cambridge for Greenwich the following month. He was put to work on a detailed analysis of the parallax of 433 Eros on photographic plates, a programme that had started in 1900. He developed a new statistical method based on the apparent drift of two background stars, winning him the Smith's Prize in 1907. The prize earned him a fellowship of Trinity College, Cambridge. In December 1912, George Darwin, son of Charles Darwin, died suddenly, and Eddington was promoted to his chair as the Plumian Professor of Astronomy and Experimental Philosophy in early 1913. Later that year, Robert Ball, holder of the theoretical Lowndean chair, also died, and Eddington was named the director of the entire Cambridge Observatory the next year. In May 1914, he was elected a fellow of the Royal Society: he was awarded the Royal Medal in 1928 and delivered the Bakerian Lecture in 1926.
Eddington also investigated the interior of stars through theory, and developed the first true understanding of stellar processes. He began this in 1916 with investigations of possible physical explanations for Cepheid variable stars. He began by extending Karl Schwarzschild's earlier work on radiation pressure in Emden polytropic models. These models treated a star as a sphere of gas held up against gravity by internal thermal pressure, and one of Eddington's chief additions was to show that radiation pressure was necessary to prevent collapse of the sphere. He developed his model despite knowingly lacking firm foundations for understanding opacity and energy generation in the stellar interior. However, his results allowed for calculation of temperature, density and pressure at all points inside a star (thermodynamic anisotropy), and Eddington argued that his theory was so useful for further astrophysical investigation that it should be retained despite not being based on completely accepted physics. James Jeans contributed the important suggestion that stellar matter would certainly be ionized, but that was the end of any collaboration between the pair, who became famous for their lively debates.
Eddington defended his method by pointing to the utility of his results, particularly his important mass–luminosity relation. This had the unexpected result of showing that virtually all stars, including giants and dwarfs, behaved as ideal gases. In the process of developing his stellar models, he sought to overturn current thinking about the sources of stellar energy. Jeans and others defended the Kelvin–Helmholtz mechanism, which was based on classical mechanics, while Eddington speculated broadly about the qualitative and quantitative consequences of possible proton–electron annihilation and nuclear fusion processes.
Around 1920, he anticipated the discovery and mechanism of nuclear fusion processes in stars, in his paper "The Internal Constitution of the Stars". At that time, the source of stellar energy was a complete mystery; Eddington correctly speculated that the source was fusion of hydrogen into helium, liberating enormous energy according to Einstein's equation E = mc². This was a particularly remarkable development since at that time fusion and thermonuclear energy, and even the fact that stars are largely composed of hydrogen (see metallicity), had not yet been discovered. Eddington's paper, based on knowledge at the time, reasoned that:
The leading theory of stellar energy, the contraction hypothesis (cf. the Kelvin–Helmholtz mechanism), should cause stars' rotation to visibly speed up due to conservation of angular momentum. But observations of Cepheid variable stars showed this was not happening.
The only other known plausible source of energy was conversion of matter to energy; Einstein had shown some years earlier that a small amount of matter was equivalent to a large amount of energy.
Francis Aston had also recently shown that the mass of a helium atom was about 0.8% less than the mass of the four hydrogen atoms which would, combined, form a helium atom, suggesting that if such a combination could happen, it would release considerable energy as a byproduct (a worked figure follows this list).
If a star contained just 5% of fusible hydrogen, it would suffice to explain how stars got their energy. (We now know that most "ordinary" stars contain far more than 5% hydrogen.)
Further elements might also be fused, and other scientists had speculated that stars were the "crucible" in which light elements combined to create heavy elements, but without more-accurate measurements of their atomic masses nothing more could be said at the time.
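A worked version of Aston's point, using modern atomic masses rather than his 1920 figures:

$$\Delta m = 4m_{\mathrm{H}} - m_{\mathrm{He}} \approx 4(1.0078\ \mathrm{u}) - 4.0026\ \mathrm{u} \approx 0.029\ \mathrm{u} \;(\approx 0.7\%)$$

$$E = \Delta m\,c^2 \approx 0.029 \times 931.5\ \mathrm{MeV} \approx 27\ \mathrm{MeV}\ \text{per helium nucleus formed.}$$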
All of these speculations were proven correct in the following decades.
With these assumptions, he demonstrated that the interior temperature of stars must be millions of degrees. In 1924, he discovered the mass–luminosity relation for stars (see Lecchini under Further reading). Despite some disagreement, Eddington's models were eventually accepted as a powerful tool for further investigation, particularly in issues of stellar evolution. The confirmation of his estimated stellar diameters by Michelson in 1920 proved crucial in convincing astronomers unused to Eddington's intuitive, exploratory style. Eddington's theory appeared in mature form in 1926 as The Internal Constitution of the Stars, which became an important text for training an entire generation of astrophysicists.
Eddington's work in astrophysics in the late 1920s and the 1930s continued his work in stellar structure, and precipitated further clashes with Jeans and Edward Arthur Milne. An important topic was the extension of his models to take advantage of developments in quantum physics, including the use of degeneracy physics in describing dwarf stars.
Dispute with Chandrasekhar on the mass limit of stars
The topic of extension of his models precipitated his dispute with Subrahmanyan Chandrasekhar, who was then a student at Cambridge. Chandrasekhar's work presaged the discovery of black holes, which at the time seemed so absurdly non-physical that Eddington refused to believe that Chandrasekhar's purely mathematical derivation had consequences for the real world. Eddington was wrong, and his motivation is controversial. Chandrasekhar's narrative of this incident, in which his work is harshly rejected, portrays Eddington as rather cruel and dogmatic. Yet Chandrasekhar benefited from his friendship with Eddington: it was Eddington and Milne who put up his name for fellowship of the Royal Society, which brought him a place at the Cambridge high table among the luminaries and a very comfortable endowment for research. Eddington's criticism seems to have been based partly on a suspicion that a purely mathematical derivation from relativity theory was not enough to explain the seemingly daunting physical paradoxes that were inherent to degenerate stars, but to have "raised irrelevant objections" in addition, as Thanu Padmanabhan puts it.
Relativity
During World War I, Eddington was secretary of the Royal Astronomical Society, which meant he was the first to receive a series of letters and papers from Willem de Sitter regarding Einstein's theory of general relativity. Eddington was fortunate in being not only one of the few astronomers with the mathematical skills to understand general relativity, but owing to his internationalist and pacifist views inspired by his Quaker religious beliefs, one of the few at the time who was still interested in pursuing a theory developed by a German physicist. He quickly became the chief supporter and expositor of relativity in Britain. He and Astronomer Royal Frank Watson Dyson organized two expeditions to observe a solar eclipse in 1919 to make the first empirical test of Einstein's theory: the measurement of the deflection of light by the Sun's gravitational field. In fact, Dyson's argument for the indispensability of Eddington's expertise in this test was what prevented Eddington from eventually having to enter military service.
When conscription was introduced in Britain on 2 March 1916, Eddington intended to apply for an exemption as a conscientious objector. Cambridge University authorities instead requested and were granted an exemption on the ground of Eddington's work being of national interest. In 1918, this was appealed against by the Ministry of National Service. Before the appeal tribunal in June, Eddington claimed conscientious objector status, which was not recognized and would have ended his exemption in August 1918. A further two hearings took place in June and July, respectively. Eddington's personal statement at the June hearing about his objection to war based on religious grounds is on record. The Astronomer Royal, Sir Frank Dyson, supported Eddington at the July hearing with a written statement, emphasising Eddington's essential role in the solar eclipse expedition to Príncipe in May 1919. Eddington made clear his willingness to serve in the Friends' Ambulance Unit, under the jurisdiction of the British Red Cross, or as a harvest labourer. However, the tribunal's decision to grant a further twelve months' exemption from military service was on condition of Eddington continuing his astronomy work, in particular in preparation for the Príncipe expedition. The war ended before the end of his exemption.
After the war, Eddington travelled to the island of Príncipe off the west coast of Africa to watch the solar eclipse of 29 May 1919. During the eclipse, he took pictures of the stars (several stars in the Hyades cluster, including Kappa Tauri of the constellation Taurus) whose line of sight from the Earth happened to be near the Sun's location in the sky at that time of year. This effect is noticeable only during a total solar eclipse when the sky is dark enough to see stars which are normally obscured by the Sun's brightness. According to the theory of general relativity, stars with light rays that passed near the Sun would appear to have been slightly shifted because their light had been curved by its gravitational field. Eddington showed that Newtonian gravitation could be interpreted to predict half the shift predicted by Einstein.
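For reference, the standard figures involved (well-established values, not taken from the expedition report itself): general relativity predicts a deflection at the solar limb of

$$\delta = \frac{4GM_\odot}{c^2 R_\odot} \approx 1.75''$$

while the Newtonian-style calculation gives half of this, about 0.87 arcseconds.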
Eddington's observations published the next year allegedly confirmed Einstein's theory, and were hailed at the time as evidence of general relativity over the Newtonian model. The news was reported in newspapers all over the world as a major story. Afterward, Eddington embarked on a campaign to popularize relativity and the expedition as landmarks both in scientific development and international scientific relations.
It has been claimed that Eddington's observations were of poor quality, and he had unjustly discounted simultaneous observations at Sobral, Brazil, which appeared closer to the Newtonian model, but a 1979 re-analysis with modern measuring equipment and contemporary software validated Eddington's results and conclusions. The quality of the 1919 results was indeed poor compared to later observations, but was sufficient to persuade contemporary astronomers. The rejection of the results from the expedition to Brazil was due to a defect in the telescopes used which, again, was completely accepted and well understood by contemporary astronomers.
Throughout this period, Eddington lectured on relativity and was particularly well known for his ability to explain the concepts in lay as well as scientific terms. He collected many of these lectures into The Mathematical Theory of Relativity (1923), which Albert Einstein suggested was "the finest presentation of the subject in any language". He was an early advocate of Einstein's general relativity, and an anecdote well illustrates his humour and personal intellectual investment: Ludwik Silberstein, a physicist who thought of himself as an expert on relativity, approached Eddington at the Royal Society meeting of 6 November 1919, at which Eddington had presented the Brazil and Príncipe eclipse calculations in support of Einstein's relativity, and suggested that Eddington must be one of only three men who actually understood the theory (Silberstein, of course, counted himself and Einstein as the other two). When Eddington refrained from replying, Silberstein insisted that he not be "so shy", whereupon Eddington replied, "Oh, no! I was wondering who the third one might be!"
Cosmology
Eddington was also heavily involved with the development of the first generation of general relativistic cosmological models. He had been investigating the instability of the Einstein universe when he learned of both Lemaître's 1927 paper postulating an expanding or contracting universe and Hubble's work on the recession of the spiral nebulae. He felt the cosmological constant must have played the crucial role in the universe's evolution from an Einsteinian steady state to its current expanding state, and most of his cosmological investigations focused on the constant's significance and characteristics. In The Mathematical Theory of Relativity, Eddington interpreted the cosmological constant to mean that the universe is "self-gauging".
Fundamental theory and the Eddington number
During the 1920s until his death, Eddington increasingly concentrated on what he called "fundamental theory" which was intended to be a unification of quantum theory, relativity, cosmology, and gravitation. At first he progressed along "traditional" lines, but turned increasingly to an almost numerological analysis of the dimensionless ratios of fundamental constants.
His basic approach was to combine several fundamental constants in order to produce a dimensionless number. In many cases these would result in numbers close to 10^40, its square, or its square root. He was convinced that the mass of the proton and the charge of the electron were a "natural and complete specification for constructing a Universe" and that their values were not accidental. One of the discoverers of quantum mechanics, Paul Dirac, also pursued this line of investigation, which has become known as the Dirac large numbers hypothesis.
A somewhat damaging statement in his defence of these concepts involved the fine-structure constant, α. At the time it was measured to be very close to 1/136, and he argued that the value should in fact be exactly 1/136 for epistemological reasons. Later measurements placed the value much closer to 1/137, at which point he switched his line of reasoning to argue that one more should be added to the degrees of freedom, so that the value should in fact be exactly 1/137, the Eddington number. Wags at the time started calling him "Arthur Adding-one". This change of stance detracted from Eddington's credibility in the physics community. The current CODATA value is approximately 1/137.036.
Eddington believed he had identified an algebraic basis for fundamental physics, which he termed "E-numbers" (representing a certain group, a Clifford algebra). These in effect incorporated spacetime into a higher-dimensional structure. While his theory has long been neglected by the general physics community, similar algebraic notions underlie many modern attempts at a grand unified theory. Moreover, Eddington's emphasis on the values of the fundamental constants, and specifically upon dimensionless numbers derived from them, is nowadays a central concern of physics. In particular, he predicted the number of hydrogen atoms in the Universe to be 136 × 2^256 ≈ 1.57 × 10^79, or equivalently half the total number of particles (protons plus electrons). He did not complete this line of research before his death in 1944; his book Fundamental Theory was published posthumously in 1948.
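The arithmetic behind the quoted figure is easy to check (an illustrative snippet; the constant 136 × 2^256 is Eddington's proposed value):

```python
# Eddington's proposed count of hydrogen atoms in the Universe: 136 * 2**256.
n_edd = 136 * 2**256
print(f"{float(n_edd):.3e}")  # ~1.575e+79
```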
Eddington number for cycling
Eddington is credited with devising a measure of a cyclist's long-distance riding achievements. The Eddington number in the context of cycling is defined as the maximum number E such that the cyclist has cycled at least E miles on at least E days.
For example, an Eddington number of 70 would imply that the cyclist has cycled at least 70 miles in a day on at least 70 occasions. Achieving a high Eddington number is difficult, since moving from, say, 70 to 75 will probably require more than five new long-distance rides, because any rides shorter than 75 miles no longer count toward the reckoning. Eddington's own lifetime E-number was 84.
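A minimal sketch of computing the measure from a ride log (illustrative code, not a standard implementation; daily_miles holds one distance per day):

```python
def eddington_number(daily_miles):
    """Largest E such that at least E days include a ride of at least E miles."""
    e = 0
    # Walk the distances from longest to shortest; after day_count rides,
    # every distance seen so far is >= miles, so E = day_count is achieved
    # exactly when miles >= day_count.
    for day_count, miles in enumerate(sorted(daily_miles, reverse=True), start=1):
        if miles >= day_count:
            e = day_count
        else:
            break
    return e

print(eddington_number([2, 3, 2, 5, 1]))  # -> 2
```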
The Eddington number for cycling is analogous to the h-index that quantifies both the actual scientific productivity and the apparent scientific impact of a scientist.
Philosophy
Idealism
Eddington wrote in his book The Nature of the Physical World that "The stuff of the world is mind-stuff."
The idealist conclusion was not integral to his epistemology but was based on two main arguments.
The first derives directly from current physical theory. Briefly, mechanical theories of the ether and of the behaviour of fundamental particles have been discarded in both relativity and quantum physics. From this, Eddington inferred that a materialistic metaphysics was outmoded and that, in consequence, since the disjunction of materialism and idealism is assumed to be exhaustive, an idealistic metaphysics is required. The second, and more interesting, argument was based on Eddington's epistemology, and may be regarded as consisting of two parts. First, all we know of the objective world is its structure, and the structure of the objective world is precisely mirrored in our own consciousness. We therefore have no reason to doubt that the objective world too is "mind-stuff". Dualistic metaphysics, then, cannot be evidentially supported.
But, second, not only can we not know that the objective world is nonmentalistic, we also cannot intelligibly suppose that it could be material. To conceive of a dualism entails attributing material properties to the objective world. However, this presupposes that we could observe that the objective world has material properties. But this is absurd, for whatever is observed must ultimately be the content of our own consciousness, and consequently, nonmaterial.
Eddington believed that physics cannot explain consciousness: "light waves are propagated from the table to the eye; chemical changes occur in the retina; propagation of some kind occurs in the optic nerves; atomic changes follow in the brain. Just where the final leap into consciousness occurs is not clear. We do not know the last stage of the message in the physical world before it became a sensation in consciousness".
Ian Barbour, in his book Issues in Science and Religion (1966), p. 133, cites Eddington's The Nature of the Physical World (1928) for a text that argues the Heisenberg uncertainty principle provides a scientific basis for "the defense of the idea of human freedom" and his Science and the Unseen World (1929) for support of philosophical idealism, "the thesis that reality is basically mental".
Charles De Koninck points out that Eddington believed in an objective reality existing apart from our minds, but used the phrase "mind-stuff" to highlight the inherent intelligibility of the world: our minds and the physical world are made of the same "stuff", and our minds are the inescapable connection to the world.
Science
Against Albert Einstein and others who advocated determinism, indeterminism—championed by Eddington—says that a physical object has an ontologically undetermined component that is not due to the epistemological limitations of physicists' understanding. The uncertainty principle in quantum mechanics, then, would not necessarily be due to hidden variables but to an indeterminism in nature itself. Eddington proclaimed "It is a consequence of the advent of the quantum theory that physics is no longer pledged to a scheme of deterministic law".
Eddington agreed with the tenet of logical positivism that "the meaning of a scientific statement is to be ascertained by reference to the steps which would be taken to verify it".
Popular and philosophical writings
Eddington wrote a parody of The Rubaiyat of Omar Khayyam, recounting his 1919 solar eclipse experiment.
In addition to his textbook The Mathematical Theory of Relativity, during the 1920s and 30s Eddington gave numerous lectures, interviews, and radio broadcasts on relativity and, later, quantum mechanics. Many of these were gathered into books, including The Nature of the Physical World and New Pathways in Science. His use of literary allusions and humour helped make these difficult subjects more accessible. One familiar image drawn by Eddington was that of his "two tables", which represent a paradox about what really exists. One table is the familiar and commonplace one, with properties of extension, colour, and permanence; it is "substantial" in the sense that it is constituted of "substance". The other is his "scientific" table: nothing but myriad minute particles in empty space, the table which "modern physics has by delicate test and remorseless logic assured me ... is the only one which is really there ... wherever 'there' may be". He began the 1927 lectures in which he discussed this paradox with an allusion to these two tables: the second table is mostly emptiness, with numerous electric charges moving around at great speed, and is not "substantial" in any way. Eddington portrayed the two tables as a recent innovation: physicists "used to borrow the raw material of [their] world from the familiar world", but for the new concepts, such as the electron, quantum or potential, there is no "familiar counterpart to these things" in "the world of commonplace experience".
Eddington's books and lectures were immensely popular with the public, not only because of his clear exposition, but also for his willingness to discuss the philosophical and religious implications of the new physics. He argued for a deeply rooted philosophical harmony between scientific investigation and religious mysticism, and also that the positivist nature of relativity and quantum physics provided new room for personal religious experience and free will. Unlike many other spiritual scientists, he rejected the idea that science could provide proof of religious propositions.
His popular writings made him a household name in Great Britain between the world wars.
Death
Eddington died of cancer in the Evelyn Nursing Home, Cambridge, on 22 November 1944. He was unmarried. His body was cremated at Cambridge Crematorium (Cambridgeshire) on 27 November 1944; the cremated remains were buried in the grave of his mother in the Ascension Parish Burial Ground in Cambridge.
Cambridge University's North West Cambridge development has been named Eddington in his honour.
Eddington was played by David Tennant in the television film Einstein and Eddington, with Einstein played by Andy Serkis. The film was notable for its groundbreaking portrayal of Eddington as a somewhat repressed gay man. It was first broadcast in 2008.
The actor Paul Eddington was a relative, mentioning in his autobiography (in light of his own weakness in mathematics) "what I then felt to be the misfortune" of being related to "one of the foremost physicists in the world". Paul's father Albert and Sir Arthur were second cousins, both great-grandsons of William Eddington (1755–1806).
Honours
Awards and honors
Smith's Prize (1907)
International Honorary Member of the American Academy of Arts and Sciences (1922)
Bruce Medal of the Astronomical Society of the Pacific (1924)
Henry Draper Medal of the National Academy of Sciences (1924)
Gold Medal of the Royal Astronomical Society (1924)
International Member of the United States National Academy of Sciences (1925)
Foreign membership of the Royal Netherlands Academy of Arts and Sciences (1926)
Prix Jules Janssen of the Société astronomique de France (French Astronomical Society) (1928)
Royal Medal of the Royal Society (1928)
Knighthood (1930)
International Member of the American Philosophical Society (1931)
Order of Merit (1938)
Honorary member of the Norwegian Astronomical Society (1939)
Honorary Freeman of Kendal (1930)
Named after him
Lunar crater Eddington
Asteroid 2761 Eddington
Royal Astronomical Society's Eddington Medal
Eddington mission, now cancelled
Eddington Tower, halls of residence at the University of Essex
Eddington Astronomical Society, an amateur society based in his hometown of Kendal
Eddington, a house (group of students, used for in-school sports matches) of Kirkbie Kendal School
Eddington, new suburb of North West Cambridge, opened in 2017
Eddington Community Interest Company (CIC), 2003: a community centre focusing on climate information and projects, including a waste-food community café and larder, in partnership with SLACC (South Lakes Action on Climate Change), housed in the converted former United Reformed Church in Kendal
Service
Gave the Swarthmore Lecture in 1929
Chairman of the National Peace Council 1941–1943
President of the International Astronomical Union; of the Physical Society, 1930–32; of the Royal Astronomical Society, 1921–23
Romanes Lecturer, 1922
Gifford Lecturer, 1927
In popular culture
Eddington is a central figure in the short story "The Mathematician's Nightmare: The Vision of Professor Squarepunt" by Bertrand Russell, a work featured in The Mathematical Magpie by Clifton Fadiman.
He was portrayed by David Tennant in the television film Einstein and Eddington, a co-production of the BBC and HBO, broadcast in the United Kingdom on Saturday, 22 November 2008, on BBC2.
His thoughts on humour and religious experience were quoted in the adventure game The Witness, a production of Thekla, Inc., released on 26 January 2016.
Time placed him on the cover on 16 April 1934.
The song “In Transit”, from the 2023 album Signs of Life by Neil Gaiman and Fourplay String Quartet, was written in his memory.
Publications
1914. Stellar Movements and the Structure of the Universe. London: Macmillan.
1918. Report on the relativity theory of gravitation. London, Fleetway Press, Ltd.
1920. Space, Time and Gravitation: An Outline of the General Relativity Theory. Cambridge University Press.
1922. The theory of relativity and its influence on scientific thought
1923 (2nd ed. 1952). The Mathematical Theory of Relativity. Cambridge University Press.
1925. The Domain of Physical Science. 2005 reprint.
1926. Stars and Atoms. Oxford: British Association.
1926. The Internal Constitution of Stars. Cambridge University Press.
1928. The Nature of the Physical World. Macmillan. 1935 replica edition; University of Michigan 1981 edition. (1926–27 Gifford lectures)
1929. Science and the Unseen World. US: Macmillan; UK: Allen & Unwin. 1980 reprint: Arden Library. 2004 US reprint: Whitefish, Montana: Kessinger Publications. 2007 UK reprint: London, Allen & Unwin (Swarthmore Lecture), with a new foreword by George Ellis.
1930. Why I Believe in God: Science and Religion, as a Scientist Sees It.
1933. The Expanding Universe: Astronomy's 'Great Debate', 1900–1931. Cambridge University Press.
1935. New Pathways in Science. Cambridge University Press.
1936. Relativity Theory of Protons and Electrons. Cambridge Univ. Press.
1939. The Philosophy of Physical Science. Cambridge University Press. (1938 Tarner lectures at Cambridge)
1946. Fundamental Theory. Cambridge University Press.
See also
Astronomy
Chandrasekhar limit
Eddington luminosity (also called the Eddington limit)
Gravitational lens
Outline of astronomy
Stellar nucleosynthesis
Timeline of stellar astronomy
List of astronomers
Science
Arrow of time
Classical unified field theories
Degenerate matter
Dimensionless physical constant
Dirac large numbers hypothesis (also called the Eddington–Dirac number)
Eddington number
Introduction to quantum mechanics
Luminiferous aether
Parameterized post-Newtonian formalism
Special relativity
Theory of everything (also called "final theory" or "ultimate theory")
Timeline of gravitational physics and relativity
List of experiments
People
List of science and religion scholars
Other
Infinite monkey theorem
Numerology
Ontic structural realism
References
Further reading
Durham, Ian T. "Eddington & Uncertainty". Physics in Perspective (September–December). arXiv, History of Physics.
Lecchini, Stefano. How Dwarfs Became Giants: The Discovery of the Mass–Luminosity Relation. Bern Studies in the History and Philosophy of Science, 224 pp. (2007)
Stanley, Matthew. "An Expedition to Heal the Wounds of War: The 1919 Eclipse Expedition and Eddington as Quaker Adventurer." Isis 94 (2003): 57–89.
Stanley, Matthew. "So Simple a Thing as a Star: Jeans, Eddington, and the Growth of Astrophysical Phenomenology" in British Journal for the History of Science, 2007, 40: 53–82.
External links
Trinity College Chapel
Arthur Stanley Eddington (1882–1944). University of St Andrews, Scotland.
Quotations by Arthur Eddington
Arthur Stanley Eddington The Bruce Medalists.
Russell, Henry Norris, "Review of The Internal Constitution of the Stars by A.S. Eddington". Ap.J. 67, 83 (1928).
The Sobral and Príncipe eclipse measurements repeated in a space project, in the proceedings of an astronomical forum.
Biography and bibliography of Bruce medalists: Arthur Stanley Eddington
Eddington books: The Nature of the Physical World, The Philosophy of Physical Science, Relativity Theory of Protons and Electrons, and Fundamental Theory
Obituaries
Obituary 1 by Henry Norris Russell, Astrophysical Journal 101 (1943–46) 133
Obituary 2 by A. Vibert Douglas, Journal of the Royal Astronomical Society of Canada, 39 (1943–46) 1
Obituary 3 by Harold Spencer Jones and E. T. Whittaker, Monthly Notices of the Royal Astronomical Society 105 (1943–46) 68
Obituary 4 by Herbert Dingle, The Observatory 66 (1943–46) 1
The Times, Thursday, 23 November 1944; pg. 7; Issue 49998; col D: Obituary (unsigned) – Image of cutting available at
|
1882 births;1944 deaths;20th-century British astronomers;20th-century British physicists;Alumni of Trinity College, Cambridge;Alumni of the Victoria University of Manchester;British Christian pacifists;British Quakers;British anti–World War I activists;British astrophysicists;British conscientious objectors;British cosmologists;British relativity theorists;Corresponding Members of the Russian Academy of Sciences (1917–1925);Corresponding Members of the USSR Academy of Sciences;Fellows of Trinity College, Cambridge;Fellows of the Royal Astronomical Society;Fellows of the Royal Society;Foreign associates of the National Academy of Sciences;International members of the American Philosophical Society;Knights Bachelor;Members of the Order of Merit;Members of the Royal Netherlands Academy of Arts and Sciences;People from Kendal;Plumian Professors of Astronomy and Experimental Philosophy;Presidents of the International Astronomical Union;Presidents of the Physical Society;Presidents of the Royal Astronomical Society;Recipients of the Bruce Medal;Recipients of the Gold Medal of the Royal Astronomical Society;Royal Medal winners;Senior Wranglers
|
https://en.wikipedia.org/wiki/Apple%20II%20%28original%29
|
The Apple II (stylized as ) is a personal computer released by Apple Inc. in June 1977. It was one of the first successful mass-produced microcomputer products and is widely regarded as one of the most important personal computers of all time due to its role in popularizing home computing and influencing later software development.
The Apple II was designed primarily by Steve Wozniak. The system is based around the 8-bit MOS Technology 6502 microprocessor. Jerry Manock designed the foam-molded plastic case and Rod Holt developed the switching power supply; Steve Jobs was not involved in the design of the computer. It was introduced by Jobs and Wozniak at the 1977 West Coast Computer Faire, and marks Apple's first launch of a computer aimed at a consumer market—branded toward American households rather than businessmen or computer hobbyists.
Byte magazine referred to the Apple II, Commodore PET 2001, and TRS-80 as the "1977 Trinity". As the Apple II had the defining feature of being able to display color graphics, the Apple logo was redesigned to have a spectrum of colors.
The Apple II was the first in a series of computers collectively referred to by the Apple II name. It was followed by the Apple II+, Apple IIe, Apple IIc, Apple IIc Plus, and the 16-bit Apple IIGS—all of which remained compatible. Production of the last available model, the Apple IIe, ceased in November 1993.
History
By 1976, Steve Jobs had convinced product designer Jerry Manock (who had formerly worked at Hewlett Packard designing calculators) to create the "shell" for the Apple II—a smooth case inspired by kitchen appliances that concealed the internal mechanics. The earliest Apple II computers were assembled in Silicon Valley and later in Texas; printed circuit boards were manufactured in Ireland and Singapore. The first computers went on sale on June 10, 1977, with an MOS Technology 6502 microprocessor running at 1.023 MHz (2/7 of the NTSC color subcarrier frequency), two game paddles (bundled until 1980, when they were found to violate FCC regulations), 4 KiB of RAM, an audio cassette interface for loading programs and storing data, and the Integer BASIC programming language built into ROMs. The video controller displayed 24 lines by 40 columns of monochrome, uppercase-only text on the screen (the original character set matches ASCII characters 20h to 5Fh), with NTSC composite video output suitable for display on a video monitor or on a regular TV set (by way of a separate RF modulator).
The original retail price of the computer with 4 KiB of RAM was US$1,298, and with the maximum 48 KiB of RAM it was US$2,638. To reflect the computer's color graphics capability, the Apple logo on the casing has rainbow stripes, which remained a part of Apple's corporate logo until early 1998. Perhaps most significantly, the Apple II was a catalyst for personal computers across many industries; it opened the doors to software marketed at consumers.
Certain aspects of the system's design were influenced by Atari, Inc.'s arcade video game Breakout (1976), which was designed by Wozniak, who said: "A lot of features of the Apple II went in because I had designed Breakout for Atari. I had designed it in hardware. I wanted to write it in software now". This included his design of color graphics circuitry, the addition of game paddle support and sound, and graphics commands in Integer BASIC, with which he wrote Brick Out, a software clone of his own hardware game. Wozniak said in 1984: "Basically, all the game features were put in just so I could show off the game I was familiar with—Breakout—at the Homebrew Computer Club. It was the most satisfying day of my life [when] I demonstrated Breakout—totally written in BASIC. It seemed like a huge step to me. After designing hardware arcade games, I knew that being able to program them in BASIC was going to change the world."
Overview
In the May 1977 issue of Byte, Steve Wozniak published a detailed description of his design; the article began, "To me, a personal computer should be small, reliable, convenient to use, and inexpensive."
The Apple II used peculiar engineering shortcuts to save hardware and reduce costs, such as:
Taking advantage of the way the 6502 processor accesses memory: it occurs only on alternate phases of the clock cycle, so having the video-generation circuitry access memory on the otherwise unused phase avoids memory contention issues and interruptions of the video stream;
This arrangement simultaneously eliminated the need for a separate refresh circuit for DRAM chips, as video transfer accessed each row of dynamic memory within the timeout period. In addition, it did not require separate RAM chips for video RAM, while the PET and TRS-80 had SRAM chips for video;
Apart from the 6502 CPU and a few support chips, the vast majority of the semiconductors used were 74LS low-power Schottky chips;
Rather than use a complex analog-to-digital circuit to read the outputs of the game controller, Wozniak used a simple timer circuit, built around a quad 555 timer IC called a 558, whose period is proportional to the resistance of the game controller, and he used a software loop to measure the timers (a simulation of this idea is sketched after this list);
A single 14.31818 MHz master oscillator (fM) was divided by various ratios to produce all other required frequencies, including microprocessor clock signals (fM/14), video transfer counters, and color-burst samples (fM/4). A solderable jumper on the main board allowed switching between European 50 Hz and USA 60 Hz video.
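As a rough illustration of the paddle-timing idea mentioned above, the following Python sketch simulates a one-shot timer whose period grows with the paddle's resistance while software counts polling-loop iterations. All constants here are illustrative approximations, not Apple's actual firmware values:

```python
# Illustrative simulation of the timer-based paddle read: a one-shot timer's
# period grows with the paddle's resistance, and software counts polling-loop
# iterations until the timer would expire. Constants are approximations only.

def read_paddle(resistance_ohms: float, capacitance_f: float = 0.022e-6,
                loop_time_s: float = 11e-6) -> int:
    """Return a 0-255 count roughly proportional to the paddle resistance."""
    timer_period_s = 1.1 * resistance_ohms * capacitance_f  # 555-style one-shot
    count, elapsed = 0, 0.0
    while elapsed < timer_period_s and count < 255:  # clamp to an 8-bit result
        count += 1                  # one pass of the software polling loop
        elapsed += loop_time_s      # each poll of the timer output takes this long
    return count

print(read_paddle(75_000))   # paddle near mid-travel -> mid-range count (~165)
print(read_paddle(150_000))  # paddle at maximum resistance -> clamps at 255
```

The design choice this mimics is the one described above: the "ADC" is nothing but a counter racing a resistor-capacitor timer, trading CPU time for hardware cost.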
The text and graphics screens have a complex arrangement. For instance, the scanlines were not stored in sequential areas of memory. This complexity was reportedly due to Wozniak's realization that the method would allow for the refresh of dynamic RAM as a side effect (as described above). This method added no hardware cost: having software calculate or look up the address of the required scanline avoided the need for significant extra hardware. Similarly, in high-resolution graphics mode, color is determined by pixel position and thus can be implemented in software, saving Wozniak the chips needed to convert bit patterns to colors. This also made it possible to draw text with subpixel rendering, since orange and blue pixels appear half a pixel-width farther to the right on the screen than green and purple pixels.
The Apple II at first used data cassette storage, like most other microcomputers of the time. In 1978, the company introduced an external 5¼-inch floppy disk drive, called Disk II (stylized as Disk ][), attached through a controller card that plugs into one of the computer's expansion slots (usually slot 6). The Disk II interface, created by Wozniak, is regarded as an engineering masterpiece for its economy of electronic components.
The approach taken in the Disk II controller is typical of Wozniak's designs. With a few small-scale logic chips and a cheap PROM (programmable read-only memory), he created a functional floppy disk interface at a fraction of the component cost of standard circuit configurations.
Case design
The first production Apple II computers had hand-molded cases; these had visible bubbles and other lumps in them from the imperfect plastic molding process, which was soon switched to machine molding. In addition, the initial case design had no vent openings, causing high heat buildup from the printed circuit board (PCB) and resulting in the plastic softening and sagging. Apple added vent holes to the case within three months of production; customers with the original case could have them replaced at no charge.
PCB revisions
The Apple II's printed circuit board (PCB) underwent several revisions, as Steve Wozniak made modifications to it. The earliest version was known as Revision 0, and the first 6,000 units shipped used it. Later revisions added a "color killer" circuit to prevent color fringing when the computer was in text mode, as well as modifications to improve the reliability of cassette I/O. Revision 0 Apple IIs powered up in an undefined mode and had garbage on-screen, requiring the user to press Reset. This was eliminated in later board revisions. Revision 0 Apple IIs could display only four colors in hi-res mode, but Wozniak was able to increase this to six hi-res colors on later board revisions. (Technically it was eight, but only six were visible.)
Apple II PCBs have three RAM banks for a total of 24 RAM chips. Original Apple IIs had jumper switches to adjust RAM size, and RAM configurations could be 4, 8, 12, 16, 20, 24, 32, 36, or 48 KiB. The three smallest memory configurations used 4kx1 DRAMs, with larger ones using 16kx1 DRAMs, or a mix of 4-kilobyte and 16-kilobyte banks (the chips in any one bank have to be the same size). The early Apple II+ models retained this feature, but after a drop in DRAM prices, Apple redesigned the circuit boards without the jumpers, so that only 16kx1 chips were supported. A few months later, they started shipping all machines with a full 48 KiB complement of DRAM.
Unlike most machines, all integrated circuits on the Apple II PCB were socketed; this cost more to manufacture and created the possibility of loose chips causing a system malfunction, but it was considered preferable to make servicing and replacement of bad chips easier.
The Apple II PCB lacks any means of generating an interrupt request, although expansion cards may generate one. Program code had to stop everything to perform any I/O task; like many of the computer's other idiosyncrasies, this was due to cost reasons and Steve Wozniak assuming interrupts were not needed for gaming or using the computer as a teaching tool.
Display and graphics
Color on the Apple II series uses a quirk of the NTSC television signal standard, which made color display relatively easy and inexpensive to implement. The original NTSC television signal specification was black and white. Color was added later by adding a 3.58-megahertz subcarrier signal that was partially ignored by black-and-white TV sets. Color is encoded based on the phase of this signal in relation to a reference color burst signal. The result is that the position, size, and intensity of a series of pulses define color information. These pulses can translate into pixels on the computer screen, with the possibility of exploiting composite artifact colors.
The Apple II display provides two pixels per subcarrier cycle. When the color burst reference signal is turned on and the computer is attached to a color display, it can display green by showing one alternating pattern of pixels, magenta with an opposite pattern of alternating pixels, and white by placing two pixels next to each other. Blue and orange are available by tweaking the pixel offset by half a pixel-width in relation to the color-burst signal. The high-resolution display offers more colors by compressing more (and narrower) pixels into each subcarrier cycle.
The coarse, low-resolution graphics display mode works differently, as it can output a pattern of dots per pixel to offer more color options. These patterns are stored in the character generator ROM, and replace the text character bit patterns when the computer is switched to low-res graphics mode. The text mode and low-res graphics mode use the same memory region and the same circuitry is used for both.
A single HGR page occupied 8 KiB of RAM; in practice this meant that the user had to have at least 12 KiB of total RAM to use HGR mode and 20 KiB to use two pages. Early Apple II games from the 1977–79 period often ran only in text or low-resolution mode in order to support users with small memory configurations; support for HGR did not become nearly universal among games until 1980.
Sound
Rather than a dedicated sound-synthesis chip, the Apple II contains a toggle circuit that can only emit a click through a built-in speaker or a line-out jack. More complex sounds, such as music or audio samples, are generated by software manually toggling the speaker at an appropriate frequency. This technique requires careful and precise timing, rendering it difficult to display moving graphics while sound is playing. Third party expansion cards were later released that addressed this problem.
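The technique can be illustrated with a short Python sketch (not Apple code): a tone is produced purely by flipping a 1-bit speaker state at timed intervals, so all of the "synthesis" lies in the timing of the toggles. The sample rate and output filename are arbitrary choices for illustration:

```python
import struct
import wave

RATE = 22_050  # output sample rate; illustrative, unrelated to Apple II timing

def one_bit_tone(freq_hz: float, seconds: float) -> list[int]:
    """Build a square wave the way the Apple II speaker did: by toggling."""
    samples, state = [], 1
    half_period = int(RATE / (2 * freq_hz))   # samples between speaker toggles
    for i in range(int(RATE * seconds)):
        if i % half_period == 0:
            state = -state                    # the "click": the cone flips over
        samples.append(state)
    return samples

with wave.open("tone.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)                         # 16-bit output samples
    w.setframerate(RATE)
    frames = b"".join(struct.pack("<h", s * 12_000)
                      for s in one_bit_tone(440, 1.0))
    w.writeframes(frames)
```

On the real machine the toggles had to be timed by counted CPU cycles rather than by a sample clock, which is why sound and animation were hard to combine.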
A similar technique is used for cassette storage: cassette output works the same as the speaker, and input uses a simple zero-crossing detector as a 1-bit audio digitizer. Routines in machine ROM encode and decode data in frequency-shift keying for the cassette.
Programming languages
Initially, the Apple II was shipped with Integer BASIC encoded in the motherboard ROM chips. Written by Wozniak, the interpreter enabled users to write software applications without needing to purchase additional development utilities. Designed with game programmers and hobbyists in mind, the language only supported the encoding of numbers in 16-bit integer format. Since it only supported integers between -32768 and +32767 (signed 16-bit integer), it was less suitable for business software, and Apple soon received complaints from customers. Because Steve Wozniak was busy developing the Disk II hardware, he did not have time to modify Integer BASIC for floating point support. Apple instead licensed Microsoft's 6502 BASIC to create Applesoft BASIC.
Disk users normally purchased a so-called Language Card, which had Applesoft in ROM and was located below the Integer BASIC ROM in system memory. The user could switch between either BASIC by typing INT or FP at the BASIC prompt. Apple also offered a different version of Applesoft for cassette users, which occupied low memory, and was started by using the command in Integer BASIC.
As shipped, Apple II incorporated a machine code monitor with commands for displaying and altering the computer's RAM, either one byte at a time, or in blocks of 256 bytes at once. This enabled programmers to write and debug machine code programs without further development software. The computer powers on into the monitor ROM, displaying a prompt. From there, enters BASIC, or a machine language program can be loaded from cassette. Disk software can be booted with followed by , referring to Slot 6 which normally contained the Disk II controller.
A 6502 assembler was soon offered on disk, and later the UCSD compiler and operating system for the Pascal language were made available. The Pascal system requires a 16 KiB RAM card to be installed in the language card position (expansion slot 0) in addition to the full 48 KiB of motherboard memory.
Manual
The first 1,000 or so Apple IIs shipped in 1977 with a 68-page mimeographed "Apple II Mini Manual", hand-bound with brass paper fasteners. This was the basis for the Apple II Reference Manual, which became known as the Red Book for its red cover, published in January 1978. All existing customers who sent in their warranty cards were sent free copies of the Red Book. The Apple II Reference Manual contained the complete schematic of the entire computer's circuitry, and a complete source listing of the "Monitor" ROM firmware that served as the machine's BIOS.
An Apple II manual signed by Steve Jobs in 1980 with the inscription "Julian, your generation is the first to grow up with computers. Go change the world." sold at auction for $787,484 in 2021.
Operating system
The original Apple II came with an 8 KiB ROM containing a BASIC variant called Integer BASIC as well as a resident monitor called the Apple System Monitor. Initially, only cassette tape was available for storage, which was considered too slow and unreliable for business use. In late 1977, Apple began to develop the Disk II floppy disk drive and required an operating system to utilize it. The existing standard at the time was CP/M, but due to incompatibility with the 6502 processor and a perceived clunkiness, Apple contracted Shepardson Microsystems for $13,000 to write Apple DOS. At Shepardson, Paul Laughton developed the software in just 35 days, a remarkably short deadline, even for the time. The Disk II and Apple DOS were released in late 1978. The final and most popular version of this software was Apple DOS 3.3.
Apple DOS was superseded by ProDOS, which supported a hierarchical filesystem and larger storage devices. With an optional third-party Z80-based expansion card, the Apple II could boot into the CP/M operating system and run WordStar, dBase II, and other CP/M software.
Apple released Applesoft BASIC in 1977, a more advanced variant of the language which users could run instead of Integer BASIC for more capabilities, such as the ability to use floating point numbers.
Some commercial Apple II software came on self-booting disks and did not use standard DOS disk formats. This discouraged the copying or modifying of the software on the disks, and improved loading speed.
Third-party devices and applications
When the Apple II initially shipped in June 1977, no expansion cards were available for the slots. This meant that the user did not have any way of connecting a modem or a printer. One popular hack involved connecting a teletype machine to the cassette output.
Wozniak's open-architecture design and Apple II's multiple expansion slots permitted a wide variety of third-party devices, including peripheral cards, such as serial controllers, display controllers, memory boards, hard disks, networking components, and real-time clocks. There were plug-in expansion cards—such as the Z-80 SoftCard—that permitted Apple II to use the Z80 processor and run programs for the CP/M operating system, including the dBase II database and the WordStar word processor. The Z80 card also allowed the connection to a modem, and thereby to any networks that a user might have access to. In the early days, such networks were scarce. But they expanded significantly with the development of bulletin board systems in later years. There was also a third-party 6809 card that allowed OS-9 Level One to be run. Third-party sound cards greatly improved audio capabilities, allowing simple music synthesis and text-to-speech functions. Apple II accelerator cards doubled or quadrupled the computer's speed.
Early Apple IIs were often sold with a Sup'R'Mod, which allowed the composite video signal to be viewed on a television.
The Soviet radio-electronics industry designed the Apple II-compatible computer Agat. Roughly 12,000 Agat 7 and 9 models were produced, and they were widely used in Soviet schools. Agat 9 computers could run in "Apple II" compatibility mode and in native mode. "Apple II" mode allowed running a wider variety of (presumably pirated) Apple II software, but at the expense of less RAM; because of that, Soviet developers preferred the native mode over "Apple II" compatibility mode.
In 1978, Bob Bishop of Apple Computer, Inc. programmed nine Apple II computers to run the gameboard on the TV game show Tic-Tac-Dough. Each Apple was responsible for displaying various contents for each box of the gameboard (category, X, O, bonus game numbers and amounts, TIC, TAC or Dragon, as well as custom messages and an active screensaver), and was in turn controlled by an Altair 8800 system. It was the first game show to use computerized graphics.
Reception
Jesse Adams Stein wrote, "As the first company to release a 'consumer appliance' micro-computer, Apple Computer offers us a clear view of this shift from a machine to an appliance." But the company also had "to negotiate the attitudes of its potential buyers, bearing in mind social anxieties about the uptake of new technologies in multiple contexts. The office, the home and the 'office-in-the-home' were implicated in these changing spheres of gender stereotypes and technological development." After seeing a crude, wire-wrapped prototype demonstrated by Wozniak and Steve Jobs in November 1976, Byte predicted in April 1977, that the Apple II "may be the first product to fully qualify as the 'appliance computer' ... a completed system which is purchased off the retail shelf, taken home, plugged in and used". The computer's color graphics capability especially impressed the magazine. The magazine published a favorable review of the computer in March 1978, concluding: "For the user that wants color graphics, the Apple II is the only practical choice available in the 'appliance' computer class."
Personal Computer World in August 1978 also cited the color capability as a strength, stating that "the prime reason that anyone buys an Apple II must surely be for the colour graphics". While mentioning the "oddity" of the artifact colors that produced output "that is not always what one wishes to do", it noted that "no-one has colour graphics like this at this sort of price". The magazine praised the sophisticated monitor software, user expandability, and comprehensive documentation. The author concluded that "the Apple II is a very promising machine" which "would be even more of a temptation were its price slightly lower ... for the moment, colour is an Apple II".
Although it sold well from launch, the initial market was hobbyists and computer enthusiasts. Sales expanded exponentially into the business and professional market when the spreadsheet program VisiCalc was launched in mid-1979. VisiCalc is credited as the defining killer app in the microcomputer industry.
By the end of 1977 Apple had sales of for the fiscal year, which included sales of the Apple I. This put Apple clearly behind the others of the "holy trinity" of the TRS-80 and Commodore PET, even though the TRS-80 was launched last of the three. However, during the first five years of operations, revenues doubled about every four months. Between September 1977 and September 1980, annual sales grew from to . During this period the sole products of the company were the Apple II and its peripherals, accessories, and software.
In 2006, PC World wrote that the Apple II was the greatest PC of all time.
References
External links
Additional documentation in Bitsavers PDF Document archive
Apple II on Old-computers.com
Online Apple II Resource
Apple2History.org
How the Apple ][ Works! – on YouTube by the 8-Bit Guy
|
1977 establishments in the United States;1979 disestablishments in the United States;6502-based home computers;8-bit computers;Apple II computers;Computer-related introductions in 1977;Products and services discontinued in 1979
|
https://en.wikipedia.org/wiki/Audio%20signal%20processing
|
Audio signal processing is a subfield of signal processing that is concerned with the electronic manipulation of audio signals. Audio signals are electronic representations of sound waves—longitudinal waves which travel through air, consisting of compressions and rarefactions. The energy contained in audio signals or sound power level is typically measured in decibels. As audio signals may be represented in either digital or analog format, processing may occur in either domain. Analog processors operate directly on the electrical signal, while digital processors operate mathematically on its digital representation.
History
The motivation for audio signal processing began at the beginning of the 20th century with inventions like the telephone, phonograph, and radio that allowed for the transmission and storage of audio signals. Audio processing was necessary for early radio broadcasting, as there were many problems with studio-to-transmitter links. The theory of signal processing and its application to audio was largely developed at Bell Labs in the mid 20th century. Claude Shannon and Harry Nyquist's early work on communication theory, sampling theory and pulse-code modulation (PCM) laid the foundations for the field. In 1957, Max Mathews became the first person to synthesize audio from a computer, giving birth to computer music.
Major developments in digital audio coding and audio data compression include differential pulse-code modulation (DPCM) by C. Chapin Cutler at Bell Labs in 1950, linear predictive coding (LPC) by Fumitada Itakura (Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) in 1966, adaptive DPCM (ADPCM) by P. Cummiskey, Nikil S. Jayant and James L. Flanagan at Bell Labs in 1973, discrete cosine transform (DCT) coding by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974, and modified discrete cosine transform (MDCT) coding by J. P. Princen, A. W. Johnson and A. B. Bradley at the University of Surrey in 1987. LPC is the basis for perceptual coding and is widely used in speech coding, while MDCT coding is widely used in modern audio coding formats such as MP3 and Advanced Audio Coding (AAC).
Types
Analog
An analog audio signal is a continuous signal represented by an electrical voltage or current that is analogous to the sound waves in the air. Analog signal processing then involves physically altering the continuous signal by changing the voltage or current or charge via electrical circuits.
Historically, before the advent of widespread digital technology, analog was the only method by which to manipulate a signal. Since that time, as computers and software have become more capable and affordable, digital signal processing has become the method of choice. However, in music applications, analog technology is often still desirable as it often produces nonlinear responses that are difficult to replicate with digital filters.
Digital
A digital representation expresses the audio waveform as a sequence of symbols, usually binary numbers. This permits signal processing using digital circuits such as digital signal processors, microprocessors and general-purpose computers. Most modern audio systems use a digital approach as the techniques of digital signal processing are much more powerful and efficient than analog domain signal processing.
Applications
Processing methods and application areas include storage, data compression, music information retrieval, speech processing, localization, acoustic detection, transmission, noise cancellation, acoustic fingerprinting, sound recognition, synthesis, and enhancement (e.g. equalization, filtering, level compression, echo and reverb removal or addition, etc.).
Audio broadcasting
Audio signal processing is used when broadcasting audio signals in order to enhance their fidelity or optimize for bandwidth or latency. In this domain, the most important audio processing takes place just before the transmitter. The audio processor here must prevent or minimize overmodulation, compensate for non-linear transmitters (a potential issue with medium wave and shortwave broadcasting), and adjust overall loudness to the desired level.
Active noise control
Active noise control is a technique designed to reduce unwanted sound. By creating a signal that is identical to the unwanted noise but with the opposite polarity, the two signals cancel out due to destructive interference.
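As a rough illustration of the principle, the following Python sketch (using numpy) shows the cancellation in an idealized case with a perfectly known noise signal and zero latency, conditions that real controllers can only approximate:

```python
import numpy as np

rate = 48_000                                  # samples per second
t = np.arange(rate) / rate                     # one second of timestamps

noise = 0.8 * np.sin(2 * np.pi * 120 * t)      # unwanted 120 Hz hum
anti_noise = -noise                            # equal signal, opposite polarity

residual = noise + anti_noise                  # destructive interference
print(np.max(np.abs(residual)))                # 0.0 under these ideal conditions
```

In practice the anti-noise must be estimated adaptively and emitted with near-zero delay; any amplitude or phase error leaves a residual instead of silence.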
Audio synthesis
Audio synthesis is the electronic generation of audio signals. A musical instrument that accomplishes this is called a synthesizer. Synthesizers can either imitate sounds or generate new ones. Audio synthesis is also used to generate human speech using speech synthesis.
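One of the simplest synthesis techniques, additive synthesis, can be sketched in a few lines of Python with numpy; the partial frequencies, amplitudes, and envelope constant below are arbitrary choices for illustration:

```python
import numpy as np

rate = 44_100                                   # samples per second
t = np.arange(rate) / rate                      # one second of timestamps

# Additive synthesis: sum a fundamental and two harmonics, then shape the
# result with a decaying envelope so it sounds plucked rather than organ-like.
partials = [(440.0, 1.0), (880.0, 0.5), (1320.0, 0.25)]  # (freq Hz, amplitude)
tone = sum(amp * np.sin(2 * np.pi * freq * t) for freq, amp in partials)
tone *= np.exp(-3.0 * t)                        # exponential amplitude envelope
tone /= np.max(np.abs(tone))                    # normalize into [-1.0, 1.0]
```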
Audio effects
Audio effects alter the sound of a musical instrument or other audio source. Common effects include distortion, often used with electric guitar in electric blues and rock music; dynamic effects such as volume pedals and compressors, which affect loudness; filters such as wah-wah pedals and graphic equalizers, which modify frequency ranges; modulation effects, such as chorus, flangers and phasers; pitch effects such as pitch shifters; and time effects, such as reverb and delay, which create echoing sounds and emulate the sound of different spaces.
Musicians, audio engineers and record producers use effects units during live performances or in the studio, typically with electric guitar, bass guitar, electronic keyboard or electric piano. While effects are most frequently used with electric or electronic instruments, they can be used with any audio source, such as acoustic instruments, drums, and vocals.
Computer audition
Computer audition (CA) or machine listening is the general field of study of algorithms and systems for audio interpretation by machines. Since the notion of what it means for a machine to "hear" is very broad and somewhat vague, computer audition attempts to bring together several disciplines that originally dealt with specific problems or had a concrete application in mind. The engineer Paris Smaragdis, interviewed in Technology Review, describes these systems as "software that uses sound to locate people moving through rooms, monitor machinery for impending breakdowns, or activate traffic cameras to record accidents."
Inspired by models of human audition, CA deals with questions of representation, transduction, grouping, use of musical knowledge and general sound semantics for the purpose of performing intelligent operations on audio and music signals by the computer. Technically this requires a combination of methods from the fields of signal processing, auditory modelling, music perception and cognition, pattern recognition, and machine learning, as well as more traditional methods of artificial intelligence for musical knowledge representation.
|
Audio electronics;Signal processing
|
https://en.wikipedia.org/wiki/Amdahl%27s%20law
|
In computer architecture, Amdahl's law (or Amdahl's argument) is a formula that shows how much faster a task can be completed when more resources are added to the system.
The law can be stated as:
"the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used".
It is named after computer scientist Gene Amdahl, and was presented at the American Federation of Information Processing Societies (AFIPS) Spring Joint Computer Conference in 1967.
Amdahl's law is often used in parallel computing to predict the theoretical speedup when using multiple processors.
Definition
In the context of Amdahl's law, speedup can be defined as:

$$S = \frac{\text{execution time of the whole task before the improvement}}{\text{execution time of the whole task after the improvement}}$$

or

$$S = \frac{\text{performance of the whole task after the improvement}}{\text{performance of the whole task before the improvement}}$$

Amdahl's law can be formulated in the following way:

$$S_{\text{overall}} = \frac{1}{(1 - p) + \dfrac{p}{s}}$$

where

$S_{\text{overall}}$ represents the total speedup of a program

$p$ represents the proportion of time spent on the portion of the code where improvements are made

$s$ represents the extent of the improvement

$S_{\text{overall}}$ is frequently much lower than one might expect. For instance, if a programmer enhances a part of the code that represents 10% of the total execution time (i.e. $p$ of 0.10) and achieves a $s$ of 10,000, then $S_{\text{overall}}$ becomes 1.11, which means only an 11% improvement in total speedup of the program. So, despite a massive improvement in one section, the overall benefit is quite small. In another example, if the programmer optimizes a section that accounts for 99% of the execution time (i.e. $p$ of 0.99) with a speedup factor of 100 (i.e. $s$ of 100), $S_{\text{overall}}$ only reaches 50. This indicates that half of the potential performance gain ($S_{\text{overall}}$ would reach 100 if 100% of the execution time were covered) is lost due to the remaining 1% of execution time that was not improved.
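Both numbers can be checked with a direct transcription of the formula into Python (a minimal sketch, not tied to any particular library):

```python
def amdahl_speedup(p: float, s: float) -> float:
    """Overall speedup when a fraction p of the work is made s times faster."""
    return 1.0 / ((1.0 - p) + p / s)

print(amdahl_speedup(0.10, 10_000))  # ~1.11: huge local gain, ~11% overall
print(amdahl_speedup(0.99, 100))     # ~50.25: the untouched 1% halves the gain
```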
Implications
The following are implications of Amdahl's law:
Diminishing Returns: Adding more processors gives diminishing returns. Beyond a certain point, adding more processors doesn't significantly increase speedup.
Limited Speedup: Even with many processors, there's a limit to how much faster a task can be completed due to parts of the task that cannot be parallelized.
Limitations
The following are limitations of Amdahl's law:
Assumption of Fixed Workload: Amdahl's Law assumes that the workload remains constant. It doesn't account for dynamic or increasing workloads, which can impact the effectiveness of parallel processing.
Overhead Ignored: Amdahl's Law neglects overheads associated with concurrency, including coordination, synchronization, inter-process communication, and concurrency control. Notably, merging data from multiple threads or processes incurs significant overhead due to conflict resolution, data consistency, versioning, and synchronization.
Neglecting extrinsic factors: Amdahl's Law addresses computational parallelism, neglecting extrinsic factors such as data persistence, I/O operations, and memory access overheads, and assumes idealized conditions.
Scalability Issues: While it highlights the limits of parallel speedup, it doesn't address practical scalability issues, such as the cost and complexity of adding more processors.
Non-Parallelizable Work: Amdahl's Law emphasizes the non-parallelizable portion of the task as a bottleneck but doesn't provide solutions for reducing or optimizing this portion.
Assumes Homogeneous Processors: It assumes that all processors are identical and contribute equally to speedup, which may not be the case in heterogeneous computing environments.
Amdahl's law applies only to the cases where the problem size is fixed. In practice, as more computing resources become available, they tend to get used on larger problems (larger datasets), and the time spent in the parallelizable part often grows much faster than the inherently serial work. In this case, Gustafson's law gives a less pessimistic and more realistic assessment of the parallel performance.
The Universal Scalability Law (USL), developed by Neil J. Gunther, extends Amdahl's law and accounts for the additional overhead due to inter-process communication. USL quantifies scalability based on parameters such as contention and coherency.
Derivation
A task executed by a system whose resources are improved compared to an initial similar system can be split up into two parts:
a part that does not benefit from the improvement of the resources of the system;
a part that benefits from the improvement of the resources of the system.
An example is a computer program that processes files. A part of that program may scan the directory of the disk and create a list of files internally in memory. After that, another part of the program passes each file to a separate thread for processing. The part that scans the directory and creates the file list cannot be sped up on a parallel computer, but the part that processes the files can.
The execution time of the whole task before the improvement of the resources of the system is denoted as $T$. It includes the execution time of the part that would not benefit from the improvement of the resources and the execution time of the one that would benefit from it. The fraction of the execution time of the task that would benefit from the improvement of the resources is denoted by $p$. The one concerning the part that would not benefit from it is therefore $1 - p$. Then:

$$T = (1 - p)T + pT$$

It is the execution of the part that benefits from the improvement of the resources that is accelerated by the factor $s$ after the improvement of the resources. Consequently, the execution time of the part that does not benefit from it remains the same, while the part that benefits from it becomes:

$$\frac{p}{s}T$$

The theoretical execution time $T(s)$ of the whole task after the improvement of the resources is then:

$$T(s) = (1 - p)T + \frac{p}{s}T$$

Amdahl's law gives the theoretical speedup in latency of the execution of the whole task at fixed workload $W$, which yields

$$S_{\text{latency}}(s) = \frac{T}{T(s)} = \frac{1}{(1 - p) + \dfrac{p}{s}}$$
Parallel programs
If 30% of the execution time may be the subject of a speedup, $p$ will be 0.3; if the improvement makes the affected part twice as fast, $s$ will be 2. Amdahl's law states that the overall speedup of applying the improvement will be:

$$S_{\text{latency}} = \frac{1}{(1 - p) + \dfrac{p}{s}} = \frac{1}{(1 - 0.3) + \dfrac{0.3}{2}} \approx 1.18$$
For example, assume that we are given a serial task which is split into four consecutive parts, whose percentages of execution time are $p_1 = 0.11$, $p_2 = 0.18$, $p_3 = 0.23$, and $p_4 = 0.48$ respectively. Then we are told that the 1st part is not sped up, so $s_1 = 1$, while the 2nd part is sped up 5 times, so $s_2 = 5$, the 3rd part is sped up 20 times, so $s_3 = 20$, and the 4th part is sped up 1.6 times, so $s_4 = 1.6$. By using Amdahl's law, the overall speedup is

$$S_{\text{latency}} = \frac{1}{\dfrac{p_1}{s_1} + \dfrac{p_2}{s_2} + \dfrac{p_3}{s_3} + \dfrac{p_4}{s_4}} = \frac{1}{\dfrac{0.11}{1} + \dfrac{0.18}{5} + \dfrac{0.23}{20} + \dfrac{0.48}{1.6}} \approx 2.19$$
Notice how the 5 times and 20 times speedup on the 2nd and 3rd parts respectively don't have much effect on the overall speedup when the 4th part (48% of the execution time) is accelerated by only 1.6 times.
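The same computation generalizes to any number of parts; a short Python sketch (the helper name amdahl_multi is illustrative, not standard), using the fractions and speedups of the four-part example above:

```python
def amdahl_multi(parts: list[tuple[float, float]]) -> float:
    """Overall speedup for parts given as (fraction_of_time, speedup) pairs."""
    assert abs(sum(p for p, _ in parts) - 1.0) < 1e-9  # fractions must sum to 1
    return 1.0 / sum(p / s for p, s in parts)

# Fractions 11%, 18%, 23%, 48% with speedups 1, 5, 20, and 1.6 respectively.
print(amdahl_multi([(0.11, 1), (0.18, 5), (0.23, 20), (0.48, 1.6)]))  # ~2.19
```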
Serial programs
[Figure: Assume that a task has two independent parts, A and B. Part B takes roughly 25% of the time of the whole computation. By working very hard, one may be able to make this part 5 times faster, but this reduces the time of the whole computation only slightly. In contrast, one may need to perform less work to make part A twice as fast. This will make the computation much faster than by optimizing part B, even though part B's speedup is greater in terms of the ratio (5 times versus 2 times).]
For example, with a serial program in two parts A and B for which $T_A = 3$ s and $T_B = 1$ s,

if part B is made to run 5 times faster, that is $s = 5$ and $p = T_B/(T_A + T_B) = 0.25$, then

$$S_{\text{latency}} = \frac{1}{(1 - 0.25) + \dfrac{0.25}{5}} = 1.25$$

if part A is made to run 2 times faster, that is $s = 2$ and $p = T_A/(T_A + T_B) = 0.75$, then

$$S_{\text{latency}} = \frac{1}{(1 - 0.75) + \dfrac{0.75}{2}} = 1.60$$
Therefore, making part A run 2 times faster is better than making part B run 5 times faster. The percentage improvement in speed can be calculated as

$$\text{percentage improvement} = 100 \left(1 - \frac{1}{S_{\text{latency}}}\right)$$
Improving part A by a factor of 2 will increase overall program speed by a factor of 1.60, which makes it 37.5% faster than the original computation.
However, improving part B by a factor of 5, which presumably requires more effort, will achieve an overall speedup factor of 1.25 only, which makes it 20% faster.
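In terms of the speedup formula, with part B accounting for 25% of the time and part A for 75%, both results fall out directly (Python sketch):

```python
def amdahl_speedup(p: float, s: float) -> float:
    return 1.0 / ((1.0 - p) + p / s)

print(amdahl_speedup(0.25, 5))  # part B (25% of time) made 5x faster -> 1.25
print(amdahl_speedup(0.75, 2))  # part A (75% of time) made 2x faster -> 1.60
```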
Optimizing the sequential part of parallel programs
If the non-parallelizable part is optimized by a factor of $O$, then

$$T(O, s) = (1 - p)\frac{T}{O} + \frac{p}{s}T$$

It follows from Amdahl's law that the speedup due to parallelism is given by

$$S_{\text{latency}}(O, s) = \frac{T(O, 1)}{T(O, s)} = \frac{\dfrac{1 - p}{O} + p}{\dfrac{1 - p}{O} + \dfrac{p}{s}}$$

When $s = 1$, we have $S_{\text{latency}}(O, 1) = 1$, meaning that the speedup is measured with respect to the execution time after the non-parallelizable part is optimized.

When $s \to \infty$,

$$S_{\text{latency}}(O, \infty) = \frac{\dfrac{1 - p}{O} + p}{\dfrac{1 - p}{O}} = 1 + \frac{p}{1 - p}O$$

If $1 - p = 0.4$, $O = 2$, and $s = 5$, then:

$$S_{\text{latency}}(2, 5) = \frac{\dfrac{0.4}{2} + 0.6}{\dfrac{0.4}{2} + \dfrac{0.6}{5}} = 2.5$$
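The worked example can be reproduced with a direct transcription of the formula above (Python sketch; the function name is illustrative):

```python
def speedup_with_serial_opt(p: float, O: float, s: float) -> float:
    """Speedup relative to the run in which only the serial part is optimized."""
    return ((1 - p) / O + p) / ((1 - p) / O + p / s)

print(speedup_with_serial_opt(0.6, 2, 5))  # 2.5, matching the worked example
```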
Transforming sequential parts of parallel programs into parallelizable
Next, we consider the case wherein the non-parallelizable part is reduced by a factor of $O'$, and the parallelizable part is correspondingly increased. Then

$$T'(O', s) = \frac{1 - p}{O'}T + \left(1 - \frac{1 - p}{O'}\right)\frac{T}{s}$$

It follows from Amdahl's law that the speedup due to parallelism is given by

$$S'_{\text{latency}}(O', s) = \frac{T'(O', 1)}{T'(O', s)} = \frac{1}{\dfrac{1 - p}{O'} + \left(1 - \dfrac{1 - p}{O'}\right)\dfrac{1}{s}}$$
Relation to the law of diminishing returns
Amdahl's law is often conflated with the law of diminishing returns, whereas only a special case of applying Amdahl's law demonstrates law of diminishing returns. If one picks optimally (in terms of the achieved speedup) what is to be improved, then one will see monotonically decreasing improvements as one improves. If, however, one picks non-optimally, after improving a sub-optimal component and moving on to improve a more optimal component, one can see an increase in the return. Note that it is often rational to improve a system in an order that is "non-optimal" in this sense, given that some improvements are more difficult or require larger development time than others.
Amdahl's law does represent the law of diminishing returns if one is considering what sort of return one gets by adding more processors to a machine, if one is running a fixed-size computation that will use all available processors to their capacity. Each new processor added to the system will add less usable power than the previous one. Each time one doubles the number of processors the speedup ratio will diminish, as the total throughput heads toward the limit of $1/(1 - p)$.
This analysis neglects other potential bottlenecks such as memory bandwidth and I/O bandwidth. If these resources do not scale with the number of processors, then merely adding processors provides even lower returns.
An implication of Amdahl's law is that to speed up real applications which have both serial and parallel portions, heterogeneous computing techniques are required. There are novel speedup and energy consumption models based on a more general representation of heterogeneity, referred to as the normal form heterogeneity, that support a wide range of heterogeneous many-core architectures. These modelling methods aim to predict system power efficiency and performance ranges, and facilitate research and development at the hardware and system software levels.
|
Analysis of parallel algorithms;Computer architecture statements
|
https://en.wikipedia.org/wiki/Ayahuasca
|
Ayahuasca is a South American psychoactive decoction prepared from Banisteriopsis caapi vine and a dimethyltryptamine (DMT)-containing plant, used by Indigenous cultures in the Amazon and Orinoco basins as part of traditional medicine and shamanism. The word ayahuasca, originating from Quechuan languages spoken in the Andes, refers both to the Banisteriopsis caapi vine and the psychoactive brew made from it, with its name meaning “spirit rope” or “liana of the soul.”
The specific ritual use of ayahuasca was widespread among Indigenous groups by the 19th century, though its precise origin is uncertain. Ayahuasca is traditionally prepared by macerating and boiling Banisteriopsis caapi with other plants like Psychotria viridis, through a ritualistic, multi-day process. Ayahuasca has been used in diverse South American cultures for spiritual, social, and medicinal purposes, often guided by shamans in ceremonial contexts involving specific dietary and ritual practices, with the Shipibo-Konibo people playing a significant historical and cultural role in its use. It spread widely by the mid-20th century through syncretic religions in Brazil. In the late 20th century, ayahuasca use expanded beyond South America to Europe, North America, and elsewhere, leading to legal cases, non-religious adaptations, and the development of ayahuasca analogs using local or synthetic ingredients.
While DMT is internationally classified as a controlled substance, the plants containing it—including those used to make ayahuasca—are not regulated under international law, leading to varied national policies that range from permitting religious use to imposing bans or decriminalization. The United States patent office controversially granted, challenged, revoked, reinstated, and ultimately allowed to expire a patent on the ayahuasca vine, sparking disputes over intellectual property rights and the cultural and religious significance of traditional Indigenous knowledge.
Ayahuasca produces intense psychological and spiritual experiences with potential therapeutic effects. A systematic review found that ayahuasca and its alkaloids show promising anxiolytic and antidepressant effects in both animal and human studies, suggesting the potential for new treatments with fewer side effects. A 2024 systematic review found that traditional ayahuasca use is generally safe, though higher doses of ayahuasca or higher doses of isolated harmala alkaloids like harmaline may pose risks.
Etymology
Ayahuasca is the hispanicized spelling (i.e., spelled according to Spanish orthography) of a word that originates from the Quechuan languages, which are spoken in the Andean states of Ecuador, Bolivia, Peru, and Colombia. Speakers of Quechuan languages who use modern Quechuan orthography spell it ayawaska. The word refers both to the liana Banisteriopsis caapi, and to the brew prepared from it. In the Quechua languages, aya means "spirit, soul", or "corpse, dead body", and waska means "rope" or "woody vine", "liana". The word ayahuasca has been variously translated as "liana of the soul", "liana of the dead", and "spirit liana". In the cosmovision of its users, ayahuasca is the vine that allows the spirit to wander detached from the body, entering the spiritual world, which is otherwise forbidden to the living.
Common names
Although ayahuasca is the most widely used term in Peru, Bolivia, Ecuador and Brazil, the brew is known by many names throughout northern South America:
hoasca or oasca in Brazil
(or , from the Cofán language or iagê in Portuguese). Relatively widespread use in Andean and Amazonian regions throughout the border areas of Colombia, Peru, Ecuador and Brazil. The Cofán people also use the word .
(or / in Tupi–Guarani language or in proto-Arawak language), used to address both the brew and the B. caapi itself. Meaning "weed" or "thin leaf", it was the word utilized by Spruce for naming the liana.
(or /), used by the Colorado people
(or ), from the Chicham languages
, (or ) and , from the Yaminawa language
, from the Shipibo language
, , , and , from the Kashinawa language
, and , used by the Tucano people
(or ) and , from the Arawakan languages
, from Bora-Muinane languages
, used by Ese'Ejja people
, from Guahibo language
(or /), used by Tsáchila people
, from Kamëntšá language
("") or , in Portuguese language, used by União do Vegetal church members
Daime or Santo Daime, meaning "give me" in Portuguese; the term was coined by Santo Daime's founder Mestre Irineu in the 1940s, from the prayer dai-me alegria, dai-me resistência ("give me happiness, give me strength"). Daime members also use the words Luz ("light") or Santa Luz ("holy light")
Some names derive from the cultural and symbolic significance of ayahuasca, such as planta professora ("plant teacher"), professor dos professores ("teacher of the teachers"), sagrada medicina ("holy medicine") or la purga ("the purge").
Other names in the Western world
In recent decades, two important new terms have emerged. Both are commonly used in the Western world in neoshamanic, recreational or pharmaceutical contexts to refer to ayahuasca-like substances created without the traditional botanical species, which are expensive and/or hard to find in these countries. These concepts are surrounded by some controversies involving ethnobotany, patents, commodification and biopiracy:
Anahuasca (ayahuasca analogues). A term usually used to refer to the ayahuasca produced with other plant species as sources of DMT (e.g., Mimosa hostilis) or β-carbolines (e.g., Peganum harmala).
Pharmahuasca (pharmaceutical ayahuasca). This indicates the pills produced from freebase DMT, synthetic harmaline, MAOI medications (such as moclobemide) and other isolated or purified compounds or extracts.
History
Origins
Archaeological evidence of the use of psychoactive plants in the northeastern Amazon dates back to 1500–2000 BCE. Anthropomorphic figurines, snuffing trays and pottery vessels, often adorned with mythological figures and sacred animals, offer a glimpse of pre-Columbian culture regarding use of the sacred plants, their preparation and ritual consumption. Although several botanical specimens (like tobacco, coca and Anadenanthera spp.) were identified among these objects, there is no unequivocal evidence from this period referring directly to ayahuasca. Banisteriopsis caapi use is suggested by a pouch containing carved snuffing trays, bone spatulas and other paraphernalia with traces of harmine and DMT, discovered in a cave in southwestern Bolivia in 2008, and by chemical traces of harmine in the hair of two mummies found in northern Chile. Both cases are linked to the Tiwanaku people, circa 900 CE. There are several reports of ritual and therapeutic oral and nasal use of Anadenanthera spp. (rich in bufotenin) during labor and infancy, and researchers suggest that the addition of Banisteriopsis spp. to catalyze its psychoactivity emerged later, due to contact between different groups of the Amazon and Altiplano.
Despite claims by numerous anthropologists and ethnologists, such as Plutarco Naranjo, regarding the millennial usage of ayahuasca, compelling evidence substantiating its pre-Columbian consumption has yet to be firmly established. As articulated by Dennis McKenna: "No one can say for certain where the practice may have originated, and about all that can be stated with certainty is that it was already spread among numerous indigenous tribes throughout the Amazon basin by the time ayahuasca came to the attention of Western ethnographers in the mid-nineteenth century". The first Western references to the ayahuasca beverage date back to the seventeenth century, during the European colonization of the Americas. The earliest report is a letter from Vincente de Valverde to the Holy Office of the Inquisition. José Chantre y Herrera, still in the seventeenth century, provided the first detailed description of a "devilish potion" cooked from bitter herbs and lianas (called ayaguasca) and of its rituals: "[...] In other nations, they set aside an entire night for divination. For this purpose, they select the most capable house in the vicinity because many people are expected to attend the event. The diviner hangs his bed in the middle and places an infernal potion, known as ayahuasca, by his side, which is particularly effective at altering one's senses. They prepare a brew from bitter vines or herbs, which, when boiled sufficiently, must become quite potent. Since it's so strong at altering one's judgment in small quantities, the precaution is not excessive, and it fits into two small pots. The witch doctor drinks a very small amount each time and knows well how many times he can sample the brew without losing his senses to properly conduct the ritual and lead the choir". Another report, produced in 1737 by the missionary Pablo Maroni, describes the use of a psychoactive liana called ayahuasca for divination on the Napo River, Ecuador: "For divination, they use a beverage, some of white datura flowers, which they also call Campana due to its shape, and others from a vine commonly known as Ayahuasca, both highly effective at numbing the senses and even at taking one's life if taken in excess. They also occasionally use these substances for the treatment of common illnesses, especially headaches. So, the person who wants to divine drinks the chosen substance with certain rituals, and while deprived of their senses from the mouth downwards, to prevent the strength of the plant from harming them, they remain in this state for many hours and sometimes even two or three days until the effects run their course, and the intoxication subsides. After this, they reflect on what their imagination revealed, which occasionally remains with them for delirium. This is what they consider accomplished and propagate as an oracle." Later reports were produced by Juan Magnin in 1740, describing ayahuasca (called ayahuessa) use as a medicinal plant by the Jivaroan peoples, and by Franz Xaver Veigl in 1768, who reported on several "dangerous plants", including a bitter liana used for precognition and sorcery. All these reports were written in the context of Jesuit missions in South America, especially the Mainas missions, in Latin, and sent only to Rome, so their audience was not very large and they were promptly lost in the archives. For this reason, ayahuasca received little interest for the entire subsequent century.
Early academic research
In academic discourse, the initial mention of ayahuasca dates back to Manuel Villavicencio's 1858 book, "Geografía de la República del Ecuador". This work vividly delineates the employment and rituals involving ayahuasca by the Jivaro people. Concurrently, Richard Spruce embarked on an Amazonian expedition in 1852 to collect and classify previously unidentified botanical specimens. During this journey, Spruce encountered and documented Banisteriopsis caapi (at the time named Banisteria caapi) and observed an ayahuasca ceremony among the Tucano community situated along the Vaupés River. Subsequently, Spruce uncovered the usage and cultivation of B. caapi among various indigenous groups dispersed across the Amazon and Orinoco basins, like the Guahibo and Sápara. These multifarious encounters, together with Spruce's personal accounts of subjective ayahuasca experiences, were collated in his work "Notes of a Botanist on the Amazon and Andes". By the end of the century, other explorers and anthropologists contributed more extensive documentation concerning ayahuasca, notably Theodor Koch-Grünberg's documents about Tucano and Arecuna rituals and ceremonies, Stradelli's first-hand reports of ayahuasca rituals and mythology along the Jurupari and Vaupés, and Alfred Simson's first description of the admixture of several ingredients in the making of ayahuasca in the Putumayo region, published in 1886.
In 1905, Rafael Zerda Bayón named the active extract of ayahuasca telepathine, a name later used by the Colombian chemist Guillermo Fischer Cárdenas when he isolated the substance in 1932. Contemporaneously, Lewin and Gunn were independently studying the properties of banisterine, extracted from B. caapi, and its effects in animal models. Further clinical trials were conducted, exploring the effects of banisterine on Parkinson's disease. It was later found that telepathine and banisterine are the same substance, identical to a chemical already isolated from Peganum harmala and given the name harmine.
Shamanism, mestizos and vegetalistas
Researchers like Peter Gow and Brabec de Mori argue that ayahuasca use in fact developed alongside the Jesuit missions after the 17th century. Examining the ícaros (ayahuasca-related healing chants), they found that the chants are always sung in Quechua (a lingua franca along the Jesuit and Franciscan missions in the region) regardless of the linguistic background of the group, with language structures shared between different ícaros that are markedly different from other indigenous songs. Moreover, the cosmology of ayahuasca often mirrors Catholicism, with particular similarities in the belief that ayahuasca is the body of the ayahuascamama, imbibed as part of the ritual much as bread and wine are taken as the body and blood of Jesus Christ during the Christian Eucharist. Brabec de Mori called this "Christian camouflage" and suggested that, rather than being a way of disguising the ayahuasca ritual, it indicates that the practice evolved entirely within these contexts.
Indeed, the colonial processes in the Western Amazon are intrinsically related to the development of ayahuasca use over the last three centuries, as they deeply reshaped traditional ways of life in the region. Many indigenous groups moved into the missions, seeking protection from the death and slavery brought by the Bandeiras, inter-tribal violence, starvation and disease (smallpox). This movement produced intense cultural exchange and resulted in the formation of mestizos (in Spanish) or caboclos (in Portuguese), a social category formed by people of mixed European and native ancestry, who were an important part of the economy and culture of the region. According to Peter Gow, ayahuasca shamanism (the use of ayahuasca by a trained shaman to diagnose and cure illnesses) was developed by these mestizos in the process of colonial transformation. The Amazon rubber cycles (1879–1912 and 1942–1945) sped up these transformations, owing to the slavery, genocide and brutality inflicted on indigenous populations and to large migratory movements, especially from the Brazilian Northeast Region, which supplied the workforce for the rubber plantations. Mestizo practices became deeply intertwined with the culture of rubber workers, called caucheros (in Spanish) or seringueiros (in Portuguese). Ayahuasca use with therapeutic goals is the main result of this trans-cultural diffusion, with some practitioners pointing to the caucheros as mainly responsible for using ayahuasca to cure all sorts of ailments of the body, mind and soul; some regions even use the term Yerba de Cauchero ("rubber-worker herb"). As a result, the ayahuasca shamans in urban areas and mestizo settlements, especially in the regions of Iquitos and Pucallpa (in Peru), became the vegetalistas, folk healers who are said to gain all their knowledge from the plants and the spirits bound to them.
The vegetalista movement was thus a heterogeneous mixture of Western Amazonian elements (mestizo shamanic practices and cauchero culture) and Andean elements (shaped by other migratory movements, such as those originating from Cuzco through the Urubamba Valley and from western Ecuador), influenced by Christian aspects derived from the Jesuit missions, as reflected in the mythology, rituals and moral codes surrounding vegetalista ayahuasca use.
Ayahuasca religions
Although mestizo, vegetalista and indigenous ayahuasca use was part of a longer tradition, these several configurations of mestizo vegetalismo were not isolated phenomena. At the end of the nineteenth century, several messianic/millennialist cults sprang up in semi-urban areas across the entire Amazon region, merging different elements of indigenous and mestizo folk culture with Catholicism, Spiritism and Protestantism. In this context, the use of ayahuasca took the form of urban, organized non-indigenous religions on the outskirts of the main cities of northwestern Brazil (along the basins of the Madeira, Juruá and Purus rivers), within the cauchero/seringueiro cultural complex. These religions re-signified and adapted both vegetalista and mestizo shamanism to new urban formations, unifying essential elements to build a cosmology for the emerging faiths and merging them with elements of folk Catholicism, African-Brazilian religions and Kardecist spiritism. The new cults arose around charismatic leaders, often messianic and prophetic, sometimes called ayahuasqueiros, who had come from rural areas in migration movements to semi-urban communities along the borders of Brazil, Bolivia and Peru (a region that would later form the state of Acre). This configuration of belief systems is referred to by Goulart as the tradição religiosa ayahuasqueira urbana amazônica ("urban-Amazonian ayahuasqueiro religious tradition") and by Labate as the campo ayahuasqueiro brasileiro ("Brazilian ayahuasqueiro field"). It emerged as three main structured religions: the Santo Daime and Barquinha, in Rio Branco, and the União do Vegetal (UDV), in Porto Velho. Notwithstanding shared characteristics beyond ayahuasca use, these three denominations differ considerably in their practices and conceptions, in their processes of building social legitimacy, and in their relationships with the Brazilian government, media, science and other sectors of society. Since the latter half of the twentieth century, the ayahuasca religions have expanded to other parts of Brazil and to several countries around the world, notably in the West.
Modern use
Beat writer William S. Burroughs read a paper by Richard Evans Schultes on the subject and, while traveling through South America in the early 1950s, sought out ayahuasca in the hopes that it could relieve or cure opiate addiction (see The Yage Letters). Ayahuasca became more widely known when the McKenna brothers published their experience in the Amazon in True Hallucinations. Dennis McKenna later studied the pharmacology, botany, and chemistry of ayahuasca and oo-koo-he, which became the subject of his master's thesis.
Richard Evans Schultes allowed Claudio Naranjo to make a special journey by canoe up the Amazon River to study ayahuasca with the South American Indigenous peoples. He brought back samples of the beverage and published the first scientific description of the effects of its active alkaloids.
In recent years, the brew has been popularized by Wade Davis (One River), English novelist Martin Goodman in I Was Carlos Castaneda, Chilean novelist Isabel Allende, writer Kira Salak, author Jeremy Narby (The Cosmic Serpent), author Jay Griffiths (Wild: An Elemental Journey), American novelist Steven Peck, radio personality Robin Quivers, writer Paul Theroux (Figures in a Landscape: People and Places), and NFL quarterback Aaron Rodgers.
Preparation
Sections of Banisteriopsis caapi vine are macerated and boiled alone or with leaves from any of a number of other plants, including Psychotria viridis (chacruna), Diplopterys cabrerana (also known as chaliponga and chacropanga), and Mimosa tenuiflora, among other ingredients, which can vary greatly from one shaman to the next. The resulting brew may contain the powerful psychedelic drug dimethyltryptamine (DMT) and monoamine oxidase inhibiting harmala alkaloids, which are necessary to make the DMT orally active by preventing it from being broken down by monoamine oxidase in the digestive tract. The traditional making of ayahuasca follows a ritual process that requires the user to pick the lower Chacruna leaf at sunrise, then say a prayer. The vine must be "cleaned meticulously with wooden spoons" and pounded "with wooden mallets until it's fibre".
Brews can also be made with plants that do not contain DMT, Psychotria viridis being replaced by plants such as Justicia pectoralis, Brugmansia, or sacred tobacco, also known as mapacho (Nicotiana rustica), or sometimes left out with no replacement. This brew varies radically from one batch to the next, both in potency and psychoactive effect, based mainly on the skill of the shaman or brewer, as well as other admixtures sometimes added and the intent of the ceremony. Natural variations in plant alkaloid content and profiles also affect the final concentration of alkaloids in the brew, and the physical act of cooking may also serve to modify the alkaloid profile of harmala alkaloids.
The actual preparation of the brew takes several hours, often taking place over the course of more than one day. After adding the plant material, each separately at this stage, to a large pot of water, it is boiled until the water is reduced by half in volume. The individual brews are then added together and brewed until reduced significantly. This combined brew is what is taken by participants in ayahuasca ceremonies.
Traditional use
The uses of ayahuasca in traditional societies in South America vary greatly. Some cultures do use it for shamanic purposes, but in other cases, it is consumed socially among friends, in order to learn more about the natural environment, and even in order to visit friends and family who are far away.
Nonetheless, people who work with ayahuasca in non-traditional contexts often align themselves with the philosophies and cosmologies associated with ayahuasca shamanism, as practiced among Indigenous peoples like the Urarina of the Peruvian Amazon. Dietary taboos are often associated with the use of ayahuasca, although these seem to be specific to the culture around Iquitos, Peru, a major center of ayahuasca tourism. Ayahuasca retreats or healing centers can also be found in the Sacred Valley of Peru, in areas such as Cusco and Urubamba, where similar dietary preparations can be observed. These retreats often employ members of the Shipibo-Konibo tribe, an indigenous community native to the Peruvian Amazon.
In the rainforest, these taboos tend towards the purification of one's self—abstaining from spicy and heavily seasoned foods, excess fat, salt, caffeine, acidic foods (such as citrus) and sex before, after, or during a ceremony. A diet low in foods containing tyramine has been recommended, as the speculative interaction of tyramine and MAOIs could lead to a hypertensive crisis; however, evidence indicates that harmala alkaloids act only on MAO-A, in a reversible way similar to moclobemide (an antidepressant that does not require dietary restrictions). Dietary restrictions are not used by the highly urban Brazilian ayahuasca church União do Vegetal, suggesting the risk is much lower than perceived and probably non-existent.
The ritual use of ayahuasca by the Achuar people is featured in the Bruce Parry 2008 documentary series Amazon, in which Parry forces himself to participate in the rite.
Ceremony and the role of shamans
Shamans, curanderos and experienced users of ayahuasca advise against consuming ayahuasca when not in the presence of one or several well-trained shamans.
In some areas, there are purported brujos (Spanish for "witches") who masquerade as real shamans and who entice tourists to drink ayahuasca in their presence. Shamans believe one of the purposes for this is to steal one's energy and/or power, of which they believe every person has a limited stockpile.
The shamans lead the ceremonial consumption of the ayahuasca beverage, in a rite that typically takes place over the entire night. During the ceremony, the effect of the drink lasts for hours. Prior to the ceremony, participants are instructed to abstain from spicy foods, red meat and sex. The ceremony is usually accompanied by purging, including vomiting and diarrhea, which is believed to release built-up emotions and negative energy.
Shipibo-Konibo and their relation to Ayahuasca
It is believed that the Shipibo-Konibo are among the earliest practitioners of Ayahuasca ceremonies, with their connection to the brew and ceremonies surrounding it dating back centuries, perhaps a millennium.
Some members of the Shipibo community have taken to the media to express their views on ayahuasca entering the mainstream, with some calling it "the commercialization of ayahuasca". Some have also expressed worry about its increased popularity, saying that "the contemporary 'ayahuasca ceremony' may be understood as a substitute for former cosmogonical rituals that are nowadays not performed anymore."
Icaros
The Shipibo have their own language, called Shipibo, a Panoan language spoken by approximately 26,000 people in Peru and Brazil. During the ayahuasca ritual, the shaman commonly sings chants in this language, known as icaros, to establish a "balance of energy" and to help protect and guide the user during their experience.
Traditional brew
Traditional ayahuasca brews are usually made with Banisteriopsis caapi as an MAOI, while dimethyltryptamine sources and other admixtures vary from region to region. There are several varieties of caapi, often known as different "colors", with varying effects, potencies, and uses.
DMT admixtures:
Psychotria viridis (Chacruna) – leaves
Psychotria carthagenensis (Amyruca) – leaves
Diplopterys cabrerana (Chaliponga, Chagropanga, Banisteriopsis rusbyana) – leaves
Mimosa tenuiflora (M. hostilis) - root bark
Other common admixtures:
Justicia pectoralis
Brugmansia sp. (Toé)
Opuntia sp.
Epiphyllum sp.
Cyperus sp.
Nicotiana rustica (Mapacho, variety of tobacco)
Ilex guayusa, a relative of yerba mate
Lygodium venustum, (Tchai del monte)
Phrygilanthus eugenioides and Clusia sp (both called Miya)
Lomariopsis japurensis (Shoka)
Common admixtures with their associated ceremonial values and spirits:
Ayahuma bark: Cannon Ball tree. Provides protection and is used in healing susto (soul loss from spiritual fright or trauma).
Capirona bark: Provides cleansing, balance and protection. It is noted for its smooth bark, white flowers, and hard wood.
Chullachaki caspi bark (Byrsonima christianeae): Provides cleansing to the physical body. Used to transcend physical body ailments.
Lopuna blanca bark: Provides protection.
Punga amarilla bark: Yellow Punga. Provides protection. Used to pull or draw out negative spirits or energies.
Remo caspi bark: Oar Tree. Used to move dense or dark energies.
Wyra (huaira) caspi bark (Cedrelinga catanaeformis): Air Tree. Used to create purging, transcend gastro/intestinal ailments, calm the mind, and bring tranquility.
Shiwawaku bark: Brings purple medicine to the ceremony.
Uchu sanango: Head of the sanango plants.
Huacapurana: Giant tree of the Amazon with very hard bark.
Bobinsana (Calliandra angustifolia): Mermaid Spirit. Provides major heart chakra opening, healing of emotions and relationships.
Non-traditional use
In the late 20th century, the practice of ayahuasca drinking began spreading to Europe, North America and elsewhere. The first ayahuasca churches, affiliated with the Brazilian Santo Daime, were established in the Netherlands. A legal case was filed against two of the church's leaders, Hans Bogers (one of the original founders of the Dutch Santo Daime community) and Geraldine Fijneman (the head of the Amsterdam Santo Daime community). Bogers and Fijneman were charged with distributing a controlled substance (DMT); however, the prosecution was unable to prove that the use of ayahuasca by members of the Santo Daime constituted a sufficient threat to public health and order to warrant denying their rights to religious freedom under ECHR Article 9. The 2001 verdict of the Amsterdam district court is an important precedent. Since then, groups not affiliated with the Santo Daime have used ayahuasca, and a number of different "styles" have been developed, including non-religious approaches.
Ayahuasca analogs
In modern Europe and North America, ayahuasca analogs are often prepared using non-traditional plants which contain the same alkaloids. For example, seeds of the Syrian rue plant can be used as a substitute for the ayahuasca vine, and the DMT-rich Mimosa hostilis is used in place of chacruna. Australia has several indigenous plants which are popular among modern ayahuasqueros there, such as various DMT-rich species of Acacia.
The name "ayahuasca" specifically refers to a botanical decoction that contains Banisteriopsis caapi.
Brews similar to ayahuasca may be prepared using several plants not traditionally used in South America:
DMT admixtures:
Acacia maidenii (Maiden's wattle) – bark (not all plants are "active strains": some have very little DMT while others have larger amounts)
Acacia phlebophylla, and other Acacias, most commonly employed in Australia – bark
Anadenanthera peregrina, A. colubrina, A. excelsa, A. macrocarpa
Desmanthus illinoensis (Illinois bundleflower) – root bark is mixed with a native source of beta-Carbolines (e.g., passion flower in North America) to produce a hallucinogenic drink called prairiehuasca.
MAOI admixtures:
Harmal (Peganum harmala, Syrian rue) – seeds
Passion flower
synthetic MAOIs, especially RIMAs (due to the dangers presented by irreversible MAOIs)
Effects
Adverse effects
In the short term, ingesting ayahuasca can cause nausea, vomiting and diarrhea. These three effects, known as purging, are traditionally recognized as part of the spiritual experience of ayahuasca. Physiologically, vomiting results from increased serotonin circulating in the gut, which directly stimulates the vagus nerve. Other short-term side effects include increased blood pressure and tachycardia. Additionally, increased secretion of hormones such as prolactin, cortisol, and growth hormone has been correlated with ayahuasca consumption. Rarer side effects include dyspnea, seizures and serotonin syndrome. Ayahuasca is suspected of triggering psychosis in people with a predisposition to the condition, and there is a lack of safety information on ayahuasca's possible effects during pregnancy and breastfeeding.
Psychological effects
People who have consumed ayahuasca report having mystical experiences and spiritual revelations regarding their purpose on earth, the true nature of the universe, and deep insight into how to be the best person they possibly can. Many people also report therapeutic effects, especially around depression and personal traumas.
This is viewed by many as a spiritual awakening and what is often described as a near-death experience or rebirth. It is often reported that individuals feel they gain access to higher spiritual dimensions and make contact with various spiritual or extra-dimensional beings who can act as guides or healers.
The experiences that people have while under the influence of ayahuasca are also culturally influenced. Westerners typically describe experiences with psychological terms like "ego death" and understand the hallucinations as repressed memories or metaphors of mental states. However, at least in Iquitos, Peru (a center of ayahuasca ceremonies), those from the area describe the experiences more in terms of the actions in the body and understand the visions as reflections of their environment, sometimes including the person who they believe caused their illness, as well as interactions with spirits.
Most psychological effects can be attributed to the influx of serotonin caused by the psychoactive combination of DMT with beta-carbolines. Serotonin stimulates a group of G-protein coupled receptors known as 5-HT receptors. Specifically, stimulation of the 5-HT2A receptor type is correlated with hallucinogenic effects.
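As a rough quantitative sketch (standard receptor pharmacology, not a result specific to ayahuasca), the fraction of receptors occupied by a ligand at equilibrium follows simple mass-action binding:

\[
p = \frac{[L]}{[L] + K_d}
\]

where \([L]\) is the free ligand concentration (here, DMT) and \(K_d\) its dissociation constant at the 5-HT2A receptor; to a first approximation, hallucinogenic intensity tracks receptor occupancy and activation.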
Potential therapeutic effects
There are potential antidepressant and anxiolytic effects of ayahuasca.
Ayahuasca has also been studied for the treatment of addictions and shown to be effective, with lower Addiction Severity Index scores seen in users of ayahuasca compared to controls. Ayahuasca users have also been seen to consume less alcohol.
Pharmacology
Harmala alkaloids
Harmala alkaloids are MAO-inhibiting beta-carbolines. The three most studied harmala alkaloids in the B. caapi vine are harmine, harmaline and tetrahydroharmine. Harmine and harmaline are selective and reversible inhibitors of monoamine oxidase A (MAO-A), while tetrahydroharmine is a weak serotonin reuptake inhibitor (SRI).
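As a general illustration of what reversible inhibition means kinetically (standard Michaelis–Menten kinetics, not data specific to these alkaloids), a reversible competitive inhibitor raises the enzyme's apparent Michaelis constant without changing its maximal rate:

\[
v = \frac{V_{\max}[S]}{K_m\left(1 + \frac{[I]}{K_i}\right) + [S]}
\]

where \([S]\) is the substrate (monoamine) concentration, \([I]\) the inhibitor concentration and \(K_i\) the inhibition constant. Because the binding is reversible, monoamine metabolism resumes as the inhibitor is cleared, consistent with the comparatively short window of dietary risk discussed above.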
Individual polymorphisms of the cytochrome P450-2D6 enzyme affect the ability of individuals to metabolize harmine.
Interactions
Legal status
Internationally, DMT is a Schedule I drug under the Convention on Psychotropic Substances. The Commentary on the Convention on Psychotropic Substances notes, however, that the plants containing it are not subject to international control:
A fax from the Secretary of the International Narcotics Control Board (INCB) to the Netherlands Ministry of Public Health sent in 2001 goes on to state that "Consequently, preparations (e.g. decoctions) made of these plants, including ayahuasca, are not under international control and, therefore, not subject to any of the articles of the 1971 Convention."
Despite the INCB's 2001 affirmation that ayahuasca is not subject to drug control by international convention, in its 2010 Annual Report the Board recommended that governments consider controlling (i.e. criminalizing) ayahuasca at the national level. This recommendation by the INCB has been criticized as an attempt by the Board to overstep its legitimate mandate and as establishing a reason for governments to violate the human rights (i.e., religious freedom) of ceremonial ayahuasca drinkers.
Under American federal law, DMT is a Schedule I drug that is illegal to possess or consume; however, certain religious groups have been legally permitted to consume ayahuasca. A court case allowing the União do Vegetal to import and use the tea for religious purposes in the United States, Gonzales v. O Centro Espírita Beneficente União do Vegetal, was heard by the U.S. Supreme Court on November 1, 2005; the decision, released February 21, 2006, allows the UDV to use the tea in its ceremonies pursuant to the Religious Freedom Restoration Act. In a similar case, an Ashland, Oregon-based Santo Daime church sued for its right to import and consume ayahuasca tea. In March 2009, U.S. District Court Judge Panner ruled in favor of the Santo Daime, acknowledging its protection from prosecution under the Religious Freedom Restoration Act.
In 2017 the Santo Daime Church Céu do Montréal in Canada received religious exemption to use ayahuasca as a sacrament in their rituals.
Religious use in Brazil was legalized after two official inquiries into the tea in the mid-1980s, which concluded that ayahuasca is not a recreational drug and has valid spiritual uses.
In France, Santo Daime won a court case in early 2005 allowing them to use the tea; however, the ruling was based not on a religious exception but simply on the fact that they did not perform chemical extractions to end up with pure DMT and harmala, and that the plants used were not scheduled. Four months after the court victory, the common ingredients of ayahuasca as well as harmala were declared stupéfiants, or narcotic schedule I substances, making the tea and its ingredients illegal to use or possess.
In June 2019, Oakland, California, decriminalized natural entheogens. The City Council passed the resolution in a unanimous vote, ending the investigation and imposition of criminal penalties for use and possession of entheogens derived from plants or fungi. The resolution states: "Practices with Entheogenic Plants have long existed and have been considered to be sacred to human cultures and human interrelationships with nature for thousands of years, and continue to be enhanced and improved to this day by religious and spiritual leaders, practicing professionals, mentors, and healers throughout the world, many of whom have been forced underground."
In January 2020, Santa Cruz, California, and in September 2020, Ann Arbor, Michigan, decriminalized natural entheogens.
Intellectual property issues
Ayahuasca has stirred debate regarding intellectual property protection of traditional knowledge. In 1986 the US Patent and Trademark Office (PTO) allowed the granting of a patent on the ayahuasca vine B. caapi. It allowed this patent based on the assumption that ayahuasca's properties had not been previously described in writing. Several public interest groups, including the Coordinating Body of Indigenous Organizations of the Amazon Basin (COICA) and the Coalition for Amazonian Peoples and Their Environment (Amazon Coalition), objected. In 1999 they brought a legal challenge to this patent, which had granted a private US citizen "ownership" of the knowledge of a plant that is well-known and sacred to many Indigenous peoples of the Amazon and used by them in religious and healing ceremonies.
Later that year the PTO issued a decision rejecting the patent, on the basis that the petitioners' arguments that the plant was not "distinctive or novel" were valid; however, the decision did not acknowledge the argument that the plant's religious or cultural values prohibited a patent. In 2001, after an appeal by the patent holder, the US Patent Office reinstated the patent, albeit to only a specific plant and its asexually reproduced offspring. The law at the time did not allow a third party such as COICA to participate in that part of the reexamination process. The patent, held by US entrepreneur Loren Miller, expired in 2003.
External links
Ayahuasca - PsychonautWiki
Ayahuasca - Erowid
What is Ayahuasca? - Tripsitter
The Ayahuasca Experience: A Pilgrimage to the Spirit - Double Blind Magazine
|
;5-HT2A agonists;Biopiracy;Entheogens;Herbal and fungal hallucinogens;Indigenous culture of the Amazon;Mixed drinks;Monoamine oxidase inhibitors;Polysubstance drinks;Serotonin receptor agonists
|
https://en.wikipedia.org/wiki/Anointing%20of%20the%20sick
|
Anointing of the sick, known also by other names such as unction, is a form of religious anointing or "unction" (an older term with the same meaning) for the benefit of a sick person. It is practiced by many Christian churches and denominations.
Anointing of the sick was a customary practice in many civilizations, including among the ancient Greeks and early Jewish communities. The use of oil for healing purposes is referred to in the writings of Hippocrates.
Anointing of the sick should be distinguished from other religious anointings that occur in relation to other sacraments, in particular baptism, confirmation and ordination, and also in the coronation of a monarch.
Names
Since 1972, the Roman Catholic Church has used the name "Anointing of the Sick" both in the English translations issued by the Holy See of its official documents in Latin and in the English official documents of Episcopal conferences. It does not, of course, forbid the use of other names, for example the more archaic term "Unction of the Sick" or the term "Extreme Unction". Cardinal Walter Kasper used the latter term in his intervention at the 2005 Assembly of the Synod of Bishops. However, the Church declared that "'Extreme unction' ... may also and more fittingly be called 'anointing of the sick'", and has itself adopted the latter term, while not outlawing the former. This is to emphasize that the sacrament is available, and recommended, to all those suffering from any serious illness, and to dispel the common misconception that it is exclusively for those at or very near the point of death.
Extreme Unction was the usual name for the sacrament in the West from the late twelfth century until 1972, and was thus used at the Council of Trent and in the 1913 Catholic Encyclopedia. Peter Lombard (died 1160) is the first writer known to have used the term, which did not become the usual name in the West till towards the end of the twelfth century, and never became current in the East. The word "extreme" (final) indicated either that it was the last of the sacramental unctions (after the anointings at Baptism, Confirmation and, if received, Holy Orders) or because at that time it was normally administered only when a patient was in extremis.
Other names used in the West include the unction or blessing of consecrated oil, the unction of God, and the office of the unction. Among some Protestant bodies, who do not consider it a sacrament, but instead as a practice suggested rather than commanded by Scripture, it is called anointing with oil.
In the Greek Church, the sacrament is called Euchelaion (Greek Εὐχέλαιον, from εὐχή, "prayer", and ἔλαιον, "oil"). Other names are also used, such as ἅγιον ἔλαιον (holy oil), ἡγιασμένον ἔλαιον (consecrated oil), and χρῖσις or χρῖσμα (anointing).
The Community of Christ uses the term administration to the sick.
The term "last rites" refers to administration to a dying person not only of this sacrament but also of Penance and Holy Communion, the last of which, when administered in such circumstances, is known as "Viaticum", a word whose original meaning in Latin was "provision for the journey". The normal order of administration is: first Penance (if the dying person is physically unable to confess, absolution, conditional on the existence of contrition, is given); next, Anointing; finally, Viaticum (if the person can receive it).
Biblical texts
The chief biblical text concerning the rite is the Epistle of James (James 5:14–15): "Is any among you sick? Let him call for the elders of the church, and let them pray over him, anointing him with oil in the name of the Lord; and the prayer of faith will save the sick man, and the Lord will raise him up; and if he has committed sins, he will be forgiven" (RSV).
Other biblical passages are also quoted in this context.
Sacramental beliefs
The Catholic, Eastern Orthodox and Coptic and Old Catholic Churches consider this anointing to be a sacrament. Other Christians too, in particular, Lutherans, Anglicans and some Protestant and other Christian communities use a rite of anointing the sick, without necessarily classifying it as a sacrament.
In the Churches mentioned here by name, the oil used (called "oil of the sick" in both West and East) is blessed specifically for this purpose.
Roman Catholic Church
An extensive account of the teaching of the Catholic Church on Anointing of the Sick is given in Catechism of the Catholic Church.
Anointing of the Sick is one of the seven Sacraments recognized by the Catholic Church, and is associated with not only bodily healing but also forgiveness of sins. Only ordained priests can administer it, and "any priest may carry the holy oil with him, so that in a case of necessity he can administer the sacrament of anointing of the sick."
Sacramental graces
The Catholic Church sees the effects of the sacrament as follows. As the sacrament of Marriage gives grace for the married state, the sacrament of Anointing of the Sick gives grace for the state into which people enter through sickness. Through the sacrament a gift of the Holy Spirit is given, that renews confidence and faith in God and strengthens against temptations to discouragement, despair and anguish at the thought of death and the struggle of death; it prevents from losing Christian hope in God's justice, truth and salvation.
The special grace of the sacrament of the Anointing of the Sick has as its effects:
the uniting of the sick person to the passion of Christ, for his own good and that of the whole Church;
the strengthening, peace, and courage to endure, in a Christian manner, the sufferings of illness or old age;
the forgiveness of sins, if the sick person was not able to obtain it through the sacrament of penance;
the restoration of health, if it is conducive to the salvation of his soul;
the preparation for passing over to eternal life."
Sacramental oil
The duly blessed oil used in the sacrament is, as laid down in the Apostolic Constitution, Sacram unctionem infirmorum, pressed from olives or from other plants. It is blessed by the bishop of the diocese at the Chrism Mass he celebrates on Holy Thursday or on a day close to it. If oil blessed by the bishop is not available, the priest administering the sacrament may bless the oil, but only within the framework of the celebration.
Ordinary Form of the Roman Rite (1972)
The Roman Rite Anointing of the Sick, as revised in 1972, puts greater stress than in the immediately preceding centuries on the sacrament's aspect of healing, primarily spiritual but also physical, and points to the place sickness holds in the normal life of Christians and its part in the redemptive work of the Church. Canon law permits its administration to a Catholic who has reached the age of reason and is beginning to be put in danger by illness or old age, unless the person in question obstinately persists in a manifestly grave sin. "If there is any doubt as to whether the sick person has reached the use of reason, or is dangerously ill, or is dead, this sacrament is to be administered". There is an obligation to administer it to the sick who, when they were in possession of their faculties, at least implicitly asked for it. A new illness or a renewal or worsening of the first illness enables a person to receive the sacrament a further time.
The ritual book on pastoral care of the sick provides three rites: anointing outside Mass, anointing within Mass, and anointing in a hospital or institution. The rite of anointing outside Mass begins with a greeting by the priest, followed by sprinkling of all present with holy water, if deemed desirable, and a short instruction. There follows a penitential act, as at the beginning of Mass. If the sick person wishes to receive the sacrament of penance, it is preferable that the priest make himself available for this during a previous visit; but if the sick person must confess during the celebration of the sacrament of anointing, this confession replaces the penitential rite. A passage of Scripture is read, and the priest may give a brief explanation of the reading; a short litany is said, and the priest lays his hands on the head of the sick person and then says a prayer of thanksgiving over the already blessed oil or, if necessary, blesses the oil himself.
The actual anointing of the sick person is done on the forehead, with the prayer: "Through this holy anointing may the Lord in his love and mercy help you with the grace of the Holy Spirit", and on the hands, with the prayer "May the Lord who frees you from sin save you and raise you up". To each prayer the sick person, if able, responds: "Amen."
It is permitted, in accordance with local culture and traditions and the condition of the sick person, to anoint other parts of the body in addition, such as the area of pain or injury, but without repeating the sacramental form. In case of emergency, a single anointing, if possible but not absolutely necessary if not possible on the forehead, is sufficient.
Extraordinary Form of the Roman Rite
From the early Middle Ages until after the Second Vatican Council, the sacrament was administered, within the Latin Church, only when death was approaching and, in practice, bodily recovery was not ordinarily looked for, giving rise, as mentioned above, to the name "Extreme Unction" (i.e. final anointing). The extraordinary form of the Roman Rite includes anointing of seven parts of the body while saying in Latin:
The last phrase was chosen to correspond to the part of the body that was touched. The 1913 Catholic Encyclopedia explains that "the unction of the loins is generally, if not universally, omitted in English-speaking countries, and it is of course everywhere forbidden in case of women".
Anointing in the extraordinary form is still permitted under the conditions mentioned in article 9 of the 2007 motu proprio Summorum Pontificum.
In the case of necessity when only a single anointing on the forehead is possible, it suffices for valid administration of the sacrament to use the shortened form:
When it becomes opportune, all the anointings are to be supplied together with their respective forms for the integrity of the sacrament. If the sacrament is conferred conditionally, for example, if a person is unconscious, ("if you are capable") is added to the beginning of the form, not ("if you are disposed"). In doubt if the soul has left the body through death, the priest adds, ("if you are alive").
Other Western historical forms
Liturgical rites of the Catholic Church, both Western and Eastern, other than the Roman, have a variety of other forms for celebrating the sacrament. For example, according to Giovanni Diclich, who cites De Rubeis, &c. cap. 28 p. 381, the Aquileian Rite, also called the rito patriarchino, had twelve anointings, namely of the head, forehead, eyes, ears, nose, lips, throat, chest, heart, shoulders, hands, and feet. The form used to anoint is in the first person plural indicative, except for the anointing of the head, which could be either in the first person singular or plural.
For example, the form is given as:
The other anointings all mention an anointing with oil and are all made "through Christ our Lord", and "in the name of the Father, and of the Son, and of the Holy Spirit", except the anointing of the heart which, as in the second option for anointing of the head, is "in the name of the Holy and Undivided Trinity". The Latin forms are as follows:
Eastern Orthodox Church
The teaching of the Eastern Orthodox Church on the Holy Mystery (sacrament) of Unction is similar to that of the Roman Catholic Church. However, the reception of the Mystery is not limited to those who are enduring physical illness. The Mystery is given for healing (both physical and spiritual) and for the forgiveness of sin. For this reason, it is normally required that one go to confession before receiving Unction. Because it is a Sacred Mystery of the Church, only Orthodox Christians may receive it.
The solemn form of Eastern Christian anointing requires the ministry of seven priests. A table is prepared, upon which is set a vessel containing wheat. Into the wheat has been placed an empty shrine-lamp, seven candles, and seven anointing brushes. Candles are distributed for all to hold during the service. The rite begins with reading Psalm 50 (the great penitential psalm), followed by the chanting of a special canon. After this, the senior priest (or bishop) pours pure olive oil and a small amount of wine into the shrine lamp, and says the "Prayer of the Oil", which calls upon God to "...sanctify this Oil, that it may be effectual for those who shall be anointed therewith, unto healing, and unto relief from every passion, every malady of the flesh and of the spirit, and every ill..." Then follow seven series of epistles, gospels, long prayers, Ektenias (litanies) and anointings. Each series is served by one of the seven priests in turn. The afflicted one is anointed with the sign of the cross on seven places: the forehead, the nostrils, the cheeks, the lips, the breast, the palms of both hands, and the back of the hands. After the last anointing, the Gospel Book is opened and placed with the writing down upon the head of the one who was anointed, and the senior priest reads the "Prayer of the Gospel". At the end, the anointed kisses the Gospel, the Cross and the right hands of the priests, receiving their blessing.
Anointing is considered to be a public rather than a private sacrament, and so as many of the faithful who are able are encouraged to attend. It should be celebrated in the church when possible, but if this is impossible, it may be served in the home or hospital room of the afflicted.
Unction in the Greek Orthodox Church and Churches of Hellenic custom (Antiochian Eastern Orthodox, Melkite, etc.) is usually given with a minimum of ceremony.
Anointing may also be given during Forgiveness Vespers and Great Week, on Great and Holy Wednesday, to all who are prepared. Those who receive Unction on Holy Wednesday should go to Holy Communion on Great Thursday. The significance of receiving Unction on Holy Wednesday is underscored by the hymns in the Triodion for that day, which speak of the sinful woman who anointed the feet of Christ. Just as her sins were forgiven because of her penitence, so the faithful are exhorted to repent of their sins. In the same narrative, Jesus says, "in that she hath poured this ointment on my body, she did it for my burial" (Id., v. 12), linking the unction with Christ's death and resurrection.
In some dioceses of the Russian Orthodox Church it is customary for the bishop to visit each parish or region of the diocese some time during Great Lent and give Anointing for the faithful, together with the local clergy.
Oriental Orthodox Church
The Oriental Orthodox Church regards anointing of the sick as one of the seven sacraments.
Armenian Orthodox Church
From the 4th to the 15th centuries, the Armenian Church administered the sacrament of the unction of the sick, as recorded in the Church canons and commentary works. Beginning in the 15th century, however, the Armenian Church, without rejecting the sacrament, abstained from conducting it in order to resist the influence of the Catholic Church; over time it was left out of liturgical life, the laying on of hands and the administration of the Sacraments of Penance and Holy Communion being deemed sufficient.
Instead of the sacrament being used for anointing of the sick, in the Armenian Church unction is administered at the time of Baptism, particularly at Chrismation. In addition, the Armenian Church has the tradition of anointing the sick with blessed oil or water into which Holy Chrism has been poured during the Blessing of Water service in memory of the Lord's Baptism at Theophany. But this Chrism and the anointment of the body of a deceased clergyman with Holy Chrism have nothing to do with extreme unction or the sacrament of anointing the sick, although some Armenians may conflate the two. This tradition is still in force, and there is no objection if the sick are anointed, believing that the Holy Myron will always transfer the gifts of the Holy Spirit as long as they are alive and conscious of their Christian faith. Archbishop Malachia explains: That which is called extreme unction is not in use; the various attempts that have been made to introduce it into the Church have hardly been successful. The wish expressed, to substitute for the unction the prayers used for the dying, cannot sufficiently satisfy the essential conditions which are required for sacraments. It is seen, therefore, that the doctrine of the seven sacraments cannot be accepted by the Armenians. Excepting extreme unction, all the others are administered in the Armenian Church.
Hussite Church
The Hussite Church regards anointing of the sick as one of the seven sacraments.
Anabaptist Churches
Anabaptists observe the ordinance of anointing of the sick in obedience to James 5:14–15, with it being counted among the seven ordinances by Conservative Mennonite Anabaptists. In a compendium of Anabaptist doctrine, theologian Daniel Kauffman stated:
The 2021 Church Polity of the Dunkard Brethren Church, a Conservative Anabaptist denomination in the Schwarzenau Brethren tradition, teaches:
Lutheran churches
Anointing of the sick has been retained in Lutheran churches since the Reformation. Although it is not considered a sacrament like baptism, confession and the Eucharist, it is known as a ritual in the same respect as confirmation, holy orders, and matrimony.
Liturgy
After the penitent has received absolution following confession, the presiding minister recites James 5:14-16. He goes on to recite the following:
[Name], you have confessed your sins and received Holy Absolution. In remembrance of the grace of God given by the Holy Spirit in the waters of Holy Baptism, I will anoint you with oil. Confident in our Lord and in love for you, we also pray for you that you will not lose faith. Knowing that in Godly patience the Church endures with you and supports you during this affliction. We firmly believe that this illness is for the glory of God and that the Lord will both hear our prayer and work according to His good and gracious will.
He anoints the person on the forehead and says this blessing:
Almighty God, the Father of our Lord Jesus Christ, who has given you the new birth of water and the Spirit and has forgiven you all your sins, strengthen you with His grace to life everlasting. Amen.
Anglican churches
The 1552 and later editions of the Book of Common Prayer omitted the form of anointing given in the original (1549) version in its Order for the Visitation of the Sick, but most twentieth-century Anglican prayer books do have anointing of the sick. The Book of Common Prayer (1662) and the proposed revision of 1928 include the "visitation of the sick" and "communion of the sick" (which consist of various prayers, exhortations and psalms).
Some Anglicans accept that anointing of the sick has a sacramental character and is therefore a channel of God's grace, seeing it as an "outward and visible sign of an inward and spiritual grace" which is the definition of a sacrament. The Catechism of the Episcopal Church of the United States of America includes Unction of the Sick as among the "other sacramental rites" and it states that unction can be done with oil or simply with laying on of hands. The rite of anointing is included in the Episcopal Church's "Ministration to the Sick".
Article 25 of the Thirty-Nine Articles, which are one of the historical formularies of the Church of England (and as such, the Anglican Communion), speaking of the sacraments, says: "Those five commonly called Sacraments, that is to say, Confirmation, Penance, Orders, Matrimony, and extreme Unction, are not to be counted for Sacraments of the Gospel, being such as have grown partly of the corrupt following of the Apostles, partly are states of life allowed in the Scriptures; but yet have not like nature of Sacraments with Baptism, and the Lord's Supper, for that they have not any visible sign or ceremony ordained of God."
In 1915 members of the Anglican Communion founded the Guild of St Raphael, an organisation dedicated to promoting, supporting and practising Christ's ministry of healing.
Other Protestant communities
Protestants provide anointing in a wide variety of formats. Protestant communities generally vary widely on the sacramental character of anointing. Most Mainline Protestants recognize only two sacraments, the eucharist and baptism, deeming anointing only a humanly-instituted rite. Non-traditional Protestant communities generally use the term ordinance rather than sacrament.
Mainline beliefs
Liturgical or Mainline Protestant communities (e.g. Presbyterian, Congregationalist/United Church of Christ, Methodist, etc.) all have official yet often optional liturgical rites for the anointing of the sick partly on the model of Western pre-Reformation rites. Anointing need not be associated with grave illness or imminent danger of death.
Charismatic and Pentecostal beliefs
In Charismatic and Pentecostal communities, anointing of the sick is a frequent practice and has been an important ritual in these communities since the respective movements were founded in the 19th and 20th centuries. These communities use extemporaneous forms of administration at the discretion of the minister, who need not be a pastor. There is minimal ceremony attached to its administration. Usually, several people physically touch (laying on of hands) the recipient during the anointing. It may be part of a worship service with the full assembly of the congregation present, but may also be done in more private settings, such as homes or hospital rooms. Some Pentecostals believe that physical healing is within the anointing and so there is often great expectation or at least great hope that a miraculous cure or improvement will occur when someone is being prayed over for healing.
Evangelical and fundamentalist beliefs
In Evangelical and Fundamentalist communities, anointing of the sick is performed with varying degrees of frequency, although laying on of hands may be more common than anointing. The rite would be similar to that of Pentecostals in its simplicity, but would usually not have the same emotionalism attached to it. Unlike some Pentecostals, Evangelicals and Fundamentalists generally do not believe that physical healing is within the anointing. Therefore, God may or may not grant physical healing to the sick. The healing conferred by anointing is thus a spiritual event that may not result in physical recovery.
The Church of the Brethren practices Anointing with Oil as an ordinance along with Baptism, Communion, Laying on of Hands, and the Love Feast.
Evangelical Protestants who use anointing differ about whether the person doing the anointing must be an ordained member of the clergy, whether the oil must necessarily be olive oil and have been previously specially consecrated, and about other details. Several Evangelical groups reject the practice so as not to be identified with charismatic and Pentecostal groups, which practice it widely.
Latter Day Saint movement
Church of Jesus Christ of Latter-day Saints
Latter-day Saints, who consider themselves restorationists, also practice ritual anointing of the sick, as well as other forms of anointing. Members of the Church of Jesus Christ of Latter-day Saints (LDS Church) consider anointing to be an ordinance.
Members of the LDS Church who hold the Melchizedek priesthood may use consecrated olive oil in performing the ordinance of blessing of the "sick or afflicted", though oil is not required if it is unavailable. The priesthood holder anoints the recipient's head with a drop of oil, then lays hands upon that head and declares the act of anointing. Another priesthood holder then joins in, if available, and pronounces a "sealing" of the anointing and other words of blessing, as he feels inspired. Melchizedek priesthood holders are also authorized to consecrate any pure olive oil and often carry a personal supply in case they need to perform an anointing. Oil is not used in other blessings, such as for people seeking comfort or counsel.
In addition to the James 5:14-15 reference, the Doctrine and Covenants contains numerous references to the anointing and healing of the sick by those with authority to do so.
Community of Christ
Administration to the sick is one of the eight sacraments of the Community of Christ, in which it has also been used for people seeking spiritual, emotional or mental healing.
See also
Anointing of the Sick in the Catholic Church
Faith healing
References
|
Christian terminology;New Testament words and phrases;Sacramentals;Supernatural healing
|
https://en.wikipedia.org/wiki/Antibody
|
An antibody (Ab) or immunoglobulin (Ig) is a large, Y-shaped protein belonging to the immunoglobulin superfamily which is used by the immune system to identify and neutralize antigens such as bacteria and viruses, including those that cause disease. Antibodies can recognize antigens of virtually any size and of diverse chemical composition. Each antibody recognizes one or more specific antigens. Antigen literally means "antibody generator", as it is the presence of an antigen that drives the formation of an antigen-specific antibody. Each tip of the "Y" of an antibody contains a paratope that specifically binds to one particular epitope on an antigen, allowing the two molecules to bind together with precision. Using this mechanism, antibodies can effectively "tag" a microbe or an infected cell for attack by other parts of the immune system, or can neutralize it directly (for example, by blocking a part of a virus that is essential for its invasion).
More narrowly, an antibody (Ab) can refer to the free (secreted) form of these proteins, as opposed to the membrane-bound form found in a B cell receptor. The term immunoglobulin can then refer to both forms. Since they are, broadly speaking, the same protein, the terms are often treated as synonymous.
To allow the immune system to recognize millions of different antigens, the antigen-binding sites at both tips of the antibody come in an equally wide variety. The rest of the antibody structure is much less variable; in humans, antibodies occur in five classes, sometimes called isotypes: IgA, IgD, IgE, IgG, and IgM. Human IgG and IgA antibodies are also divided into discrete subclasses (IgG1, IgG2, IgG3, IgG4; IgA1 and IgA2). The class refers to the functions triggered by the antibody (also known as effector functions), in addition to some other structural features. Antibodies from different classes also differ in where they are released in the body and at what stage of an immune response. Between species, while classes and subclasses of antibodies may be shared (at least in name), their functions and distribution throughout the body may be different. For example, mouse IgG1 is closer to human IgG2 than human IgG1 in terms of its function.
The term humoral immunity is often treated as synonymous with the antibody response, describing the function of the immune system that exists in the body's humors (fluids) in the form of soluble proteins, as distinct from cell-mediated immunity, which generally describes the responses of T cells (especially cytotoxic T cells). In general, antibodies are considered part of the adaptive immune system, though this classification can become complicated. For example, natural IgM refers to IgM antibodies made independently of an immune response; they are produced by B-1 lineage cells, which have properties more similar to innate immune cells than to adaptive ones, and demonstrate polyreactivity: they recognize multiple distinct (unrelated) antigens. These can work with the complement system in the earliest phases of an immune response to help facilitate clearance of the offending antigen and delivery of the resulting immune complexes to the lymph nodes or spleen for initiation of an immune response. Hence in this capacity, the function of antibodies is more akin to that of innate immunity than adaptive. Nonetheless, in general, antibodies are regarded as part of the adaptive immune system because they demonstrate exceptional specificity (with some exceptions), are produced through genetic rearrangements (rather than being encoded directly in the germline), and are a manifestation of immunological memory.
In the course of an immune response, B cells can progressively differentiate into antibody-secreting cells or into memory B cells. Antibody-secreting cells comprise plasmablasts and plasma cells, which differ mainly in the degree to which they secrete antibody, their lifespan, metabolic adaptations, and surface markers. Plasmablasts are rapidly proliferating, short-lived cells produced in the early phases of the immune response (classically described as arising extrafollicularly rather than from a germinal center) which have the potential to differentiate further into plasma cells. Occasionally plasmablasts are mis-described as short-lived plasma cells; formally this is incorrect. Plasma cells, in contrast, do not divide (they are terminally differentiated), and rely on survival niches comprising specific cell types and cytokines to persist. Plasma cells will secrete huge quantities of antibody regardless of whether or not their cognate antigen is present, ensuring that antibody levels to the antigen in question do not fall to 0, provided the plasma cell stays alive. The rate of antibody secretion, however, can be regulated, for example, by the presence of adjuvant molecules that stimulate the immune response such as TLR ligands. Long-lived plasma cells can live for potentially the entire lifetime of the organism. Classically, the survival niches that house long-lived plasma cells reside in the bone marrow, though it cannot be assumed that any given plasma cell in the bone marrow will be long-lived. However, other work indicates that survival niches can readily be established within the mucosal tissues- though the classes of antibodies involved show a different hierarchy from those in the bone marrow. B cells can also differentiate into memory B cells which can persist for decades similarly to long-lived plasma cells. These cells can be rapidly recalled in a secondary immune response, undergoing class switching, affinity maturation, and differentiating into antibody-secreting cells.
Antibodies are central to the immune protection elicited by most vaccines and infections (although other components of the immune system certainly participate and for some diseases are considerably more important than antibodies in generating an immune response, e.g. in the case of herpes zoster). Durable protection from infection by a given microbe (infection meaning that the microbe enters the body and begins to replicate, not necessarily that it causes disease) depends on sustained production of large quantities of antibodies, meaning that effective vaccines ideally elicit persistent high levels of antibody, which relies on long-lived plasma cells. At the same time, many microbes of medical importance have the ability to mutate to escape antibodies elicited by prior infections, and long-lived plasma cells cannot undergo affinity maturation or class switching. This is compensated for through memory B cells: novel variants of a microbe that still retain structural features of previously encountered antigens can elicit memory B cell responses that adapt to those changes. It has been suggested that long-lived plasma cells secrete B cell receptors with higher affinity than those on the surfaces of memory B cells, but findings are not entirely consistent on this point.
Structure
Antibodies are large (~150 kDa) proteins about 10 nm in size,
arranged in three globular regions that roughly form a Y shape.
In humans and most other mammals, an antibody unit consists of four polypeptide chains: two identical heavy chains and two identical light chains connected by disulfide bonds.
Each chain is a series of domains: somewhat similar sequences of about 110 amino acids each.
These domains are usually represented in simplified schematics as rectangles.
Light chains consist of one variable domain VL and one constant domain CL, while heavy chains contain one variable domain VH and three to four constant domains CH1, CH2, ...
Structurally an antibody is also partitioned into two antigen-binding fragments (Fab), each containing one VL, VH, CL, and CH1 domain, as well as the crystallizable fragment (Fc), forming the trunk of the Y shape.
In between them is a hinge region of the heavy chains, whose flexibility allows antibodies to bind to pairs of epitopes at various distances, to form complexes (dimers, trimers, etc.), and to bind effector molecules more easily.
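The overall mass follows from this composition: each ~110-amino-acid domain is roughly 12–13 kDa, giving heavy chains of ~50 kDa and light chains of ~25 kDa. A minimal Python sketch of this bookkeeping (domain names as above; the masses are rough textbook approximations, not measured values):

```python
# Rough composition of an IgG-like antibody monomer; masses are approximate.
HEAVY = ["VH", "CH1", "CH2", "CH3"]   # one variable + three constant domains
LIGHT = ["VL", "CL"]                  # one variable + one constant domain
KDA_PER_DOMAIN = 12.5                 # each ~110-residue Ig domain, roughly

def chain_kda(domains):
    return len(domains) * KDA_PER_DOMAIN

total = 2 * chain_kda(HEAVY) + 2 * chain_kda(LIGHT)
print(total)  # 150.0 -> matches the ~150 kDa figure quoted above

# Fab pairs VL+CL with VH+CH1; the Fc trunk is the paired CH2+CH3 domains.
FAB, FC = ["VL", "CL", "VH", "CH1"], ["CH2", "CH3"]
```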
In an electrophoresis test of blood proteins, antibodies mostly migrate to the last, gamma globulin fraction.
Conversely, most gamma-globulins are antibodies, which is why the two terms were historically used as synonyms, as were the symbols Ig and γ.
This variant terminology fell out of use due to the correspondence being inexact and due to confusion with γ (gamma) heavy chains which characterize the IgG class of antibodies.
Antigen-binding site
The variable domains can also be referred to as the Fv region; this is the subregion of the Fab fragment that binds to an antigen.
More specifically, each variable domain contains three hypervariable regions – the amino acids seen there vary the most from antibody to antibody.
When the protein folds, these regions give rise to three loops of β-strands, localized near one another on the surface of the antibody.
These loops are referred to as the complementarity-determining regions (CDRs), since their shape complements that of an antigen.
Three CDRs from each of the heavy and light chains together form an antigen-binding site whose shape can be anything from a pocket to which a smaller antigen binds, to a larger surface, to a protrusion that sticks out into a groove in an antigen.
Typically though, only a few residues contribute to most of the binding energy.
The existence of two identical antigen-binding sites allows antibody molecules to bind strongly to multivalent antigen (repeating sites such as polysaccharides in bacterial cell walls, or other sites at some distance apart), as well as to form antibody complexes and larger antigen-antibody complexes.
The structures of CDRs have been clustered and classified by Chothia et al.
and more recently by North et al.
and Nikoloudis et al. However, describing an antibody's binding site using only a single static structure limits the understanding and characterization of the antibody's function and properties. To improve antibody structure prediction and to take the strongly correlated CDR loop and interface movements into account, antibody paratopes should be described as interconverting states in solution with varying probabilities.
In the framework of the immune network theory, CDRs are also called idiotypes. According to immune network theory, the adaptive immune system is regulated by interactions between idiotypes.
Fc region
The Fc region (the trunk of the Y shape) is composed of constant domains from the heavy chains. Its role is to modulate immune cell activity: it is where effector molecules bind, triggering various effects after the antibody's Fab region binds to an antigen.
Effector cells (such as macrophages or natural killer cells) bind via their Fc receptors (FcR) to the Fc region of an antibody, while the complement system is activated by binding of the C1q protein complex. IgG and IgM can bind to C1q, but IgA cannot; therefore, IgA does not activate the classical complement pathway.
Another role of the Fc region is to selectively distribute different antibody classes across the body. In particular, the neonatal Fc receptor (FcRn) binds to the Fc region of IgG antibodies to transport them across the placenta, from the mother to the fetus. In addition, binding to FcRn endows IgG with an exceptionally long half-life of 3–4 weeks relative to other plasma proteins. IgG3 in most cases (depending on allotype) carries mutations at the FcRn binding site which lower its affinity for FcRn; these are thought to have evolved to limit the highly inflammatory effects of this subclass.
Antibodies are glycoproteins, that is, they have carbohydrates (glycans) added to conserved amino acid residues.
These conserved glycosylation sites occur in the Fc region and influence interactions with effector molecules.
Protein structure
The N-terminus of each chain is situated at the tip.
Each immunoglobulin domain has a similar structure, characteristic of all the members of the immunoglobulin superfamily:
it is composed of between 7 (for constant domains) and 9 (for variable domains) β-strands, forming two beta sheets in a Greek key motif.
The sheets create a "sandwich" shape, the immunoglobulin fold, held together by a disulfide bond.
Antibody complexes
Secreted antibodies can occur as a single Y-shaped unit, a monomer.
However, some antibody classes also form dimers with two Ig units (as with IgA), tetramers with four Ig units (like teleost fish IgM), or pentamers with five Ig units (like shark IgW or mammalian IgM, which occasionally forms hexamers as well, with six units). IgG can also form hexamers, though no J chain is required. IgA tetramers and pentamers have also been reported.
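Since each Y-shaped unit carries two antigen-binding sites, the theoretical valency of these oligomers is simply twice the number of units. A small illustrative calculation (steric effects, which can reduce the usable valency, are ignored here):

```python
# Theoretical antigen-binding valency: two sites per Ig unit.
forms = {"IgG monomer": 1, "IgA dimer": 2, "IgM pentamer": 5, "IgM hexamer": 6}
for name, units in forms.items():
    print(f"{name}: {2 * units} binding sites")
# IgM pentamer -> 10 sites in principle, though steric constraints can
# limit how many engage a large antigen simultaneously.
```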
Antibodies also form complexes by binding to antigen: this is called an antigen-antibody complex or immune complex.
Small antigens can cross-link two antibodies, also leading to the formation of antibody dimers, trimers, tetramers, etc.
Multivalent antigens (e.g., cells with multiple epitopes) can form larger complexes with antibodies.
An extreme example is the clumping, or agglutination, of red blood cells with antibodies in blood typing to determine blood groups: the large clumps become insoluble, leading to visually apparent precipitation.
B cell receptors
The membrane-bound form of an antibody may be called a surface immunoglobulin (sIg) or a membrane immunoglobulin (mIg). It is part of the B cell receptor (BCR), which allows a B cell to detect when a specific antigen is present in the body and triggers B cell activation. The BCR is composed of surface-bound IgD or IgM antibodies and associated Ig-α and Ig-β heterodimers, which are capable of signal transduction. A typical human B cell will have 50,000 to 100,000 antibodies bound to its surface. Upon antigen binding, they cluster in large patches, which can exceed 1 micrometer in diameter, on lipid rafts that isolate the BCRs from most other cell signaling receptors.
These patches may improve the efficiency of the cellular immune response. In humans, the cell surface is bare around the B cell receptors for several hundred nanometers, which further isolates the BCRs from competing influences.
Classes
Antibodies can come in different varieties known as isotypes or classes. In humans there are five antibody classes, known as IgA, IgD, IgE, IgG, and IgM, which are further subdivided into subclasses such as IgA1 and IgA2.
The prefix "Ig" stands for immunoglobulin, while the suffix denotes the type of heavy chain the antibody contains: the heavy chain types α (alpha), γ (gamma), δ (delta), ε (epsilon), μ (mu) give rise to IgA, IgG, IgD, IgE, IgM, respectively.
The distinctive features of each class are determined by the part of the heavy chain within the hinge and Fc region.
The classes differ in their biological properties, functional locations and ability to deal with different antigens, as depicted in the table.
For example, IgE antibodies are responsible for an allergic response consisting of histamine release from mast cells, often a sole contributor to asthma (though other pathways exist, as do conditions with symptoms very similar to, but not technically, asthma). The variable region of these antibodies binds to the allergenic antigen, for example house dust mite particles, while the Fc region (in the ε heavy chains) binds to an Fcε receptor on a mast cell, triggering its degranulation: the release of molecules stored in its granules.
The antibody isotype of a B cell changes during cell development and activation. Immature B cells, which have never been exposed to an antigen, express only the IgM isotype in a cell-surface-bound form. Once mature, the B lymphocyte in its ready-to-respond form is known as a "naive B lymphocyte" and expresses both surface IgM and IgD. The co-expression of both of these immunoglobulin isotypes renders the B cell ready to respond to antigen. B cell activation follows engagement of the cell-bound antibody molecule with an antigen, causing the cell to divide and differentiate into an antibody-producing cell called a plasma cell. This requires cytokines from T helper cells, unless antigen cross-links B cell receptors. In this activated form, the B cell starts to produce antibody in a secreted rather than a membrane-bound form. Activated B cells that encounter certain signaling molecules undergo immunoglobulin class switching, also known as isotype switching, which changes the antibodies produced from IgM or IgD to the other antibody isotypes: IgE, IgA, or IgG.
Light chain types
In mammals there are two types of immunoglobulin light chain, which are called lambda (λ) and kappa (κ). However, there is no known functional difference between them, and both can occur with any of the five major types of heavy chains. Each antibody contains two identical light chains: both κ or both λ. Proportions of κ and λ types vary by species and can be used to detect abnormal proliferation of B cell clones. Other types of light chains, such as the iota (ι) chain, are found in other vertebrates like sharks (Chondrichthyes) and bony fishes (Teleostei).
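The clonality check mentioned above can be caricatured in a few lines. This is a hedged sketch only: the function name and cutoffs are illustrative, and real laboratories use validated, assay-specific reference ranges:

```python
# Illustrative kappa:lambda skew check; cutoffs are placeholders, not
# clinical reference values. Humans normally run at roughly 2:1 (kappa:lambda).
def kappa_lambda_skewed(n_kappa, n_lambda, low=0.5, high=3.0):
    ratio = n_kappa / n_lambda
    return ratio, not (low <= ratio <= high)

print(kappa_lambda_skewed(200, 100))  # (2.0, False): unremarkable
print(kappa_lambda_skewed(950, 50))   # (19.0, True): possible clonal expansion
```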
In non-mammalian animals
In most placental mammals, the structure of antibodies is generally the same.
Jawed fish appear to be the most primitive animals that are able to make antibodies similar to those of mammals, although many features of their adaptive immunity appeared somewhat earlier.
Cartilaginous fish (such as sharks) produce heavy-chain-only antibodies (i.e., lacking light chains), which moreover feature longer-chain pentamers (with five constant units per molecule). Camelids (such as camels, llamas, and alpacas) are also notable for producing heavy-chain-only antibodies.
Antibody–antigen interactions
The antibody's paratope interacts with the antigen's epitope. An antigen usually contains different epitopes along its surface arranged discontinuously, and dominant epitopes on a given antigen are called determinants.
Antibody and antigen interact by spatial complementarity (lock and key). The molecular forces involved in the Fab–epitope interaction are weak and non-specific – for example electrostatic forces, hydrogen bonds, hydrophobic interactions, and van der Waals forces. This means binding between antibody and antigen is reversible, and the antibody's affinity towards an antigen is relative rather than absolute. Relatively weak binding also means it is possible for an antibody to cross-react with different antigens, with different relative affinities.
Function
The main categories of antibody action include the following:
Neutralisation, in which neutralizing antibodies block parts of the surface of a bacterial cell or virion to render its attack ineffective
Agglutination, in which antibodies "glue together" foreign cells into clumps that are attractive targets for phagocytosis
Precipitation, in which antibodies "glue together" serum-soluble antigens, forcing them to precipitate out of solution in clumps that are attractive targets for phagocytosis
Complement activation (fixation), in which antibodies that are latched onto a foreign cell encourage complement to attack it with a membrane attack complex, which leads to the following:
Lysis of the foreign cell
Encouragement of inflammation by chemotactically attracting inflammatory cells
More indirectly, an antibody can signal immune cells to present antibody fragments to T cells, or downregulate other immune cells to avoid autoimmunity.
Activated B cells differentiate into either
antibody-producing cells called plasma cells that secrete soluble antibody or
memory cells that survive in the body for years afterward in order to allow the immune system to remember an antigen and respond faster upon future exposures.
At the prenatal and neonatal stages of life, the presence of antibodies is provided by passive immunization from the mother. Early endogenous antibody production varies for different kinds of antibodies, and usually appears within the first years of life. Since antibodies exist freely in the bloodstream, they are said to be part of the humoral immune system. Circulating antibodies are produced by clonal B cells that specifically respond to only one antigen (an example is a virus capsid protein fragment). Antibodies contribute to immunity in three ways: they prevent pathogens from entering or damaging cells by binding to them; they stimulate removal of pathogens by macrophages and other cells by coating the pathogen; and they trigger destruction of pathogens by stimulating other immune responses such as the complement pathway. Antibodies also trigger vasoactive amine degranulation to contribute to immunity against certain types of antigens (helminths, allergens).
Activation of complement
Antibodies that bind to surface antigens (for example, on bacteria) will attract the first component of the complement cascade with their Fc region and initiate activation of the "classical" complement system. This results in the killing of bacteria in two ways. First, the binding of the antibody and complement molecules marks the microbe for ingestion by phagocytes in a process called opsonization; these phagocytes are attracted by certain complement molecules generated in the complement cascade. Second, some complement system components form a membrane attack complex to assist antibodies to kill the bacterium directly (bacteriolysis).
Activation of effector cells
To combat pathogens that replicate outside cells, antibodies bind to pathogens to link them together, causing them to agglutinate. Since an antibody has at least two paratopes, it can bind more than one antigen by binding identical epitopes carried on the surfaces of these antigens. By coating the pathogen, antibodies stimulate effector functions against the pathogen in cells that recognize their Fc region.
Those cells that recognize coated pathogens have Fc receptors, which, as the name suggests, interact with the Fc region of IgA, IgG, and IgE antibodies. The engagement of a particular antibody with the Fc receptor on a particular cell triggers an effector function of that cell: phagocytes will phagocytose, mast cells and neutrophils will degranulate, and natural killer cells will release cytokines and cytotoxic molecules, ultimately resulting in destruction of the invading microbe. The activation of natural killer cells by antibodies initiates a cytotoxic mechanism known as antibody-dependent cell-mediated cytotoxicity (ADCC) – this process may explain the efficacy of monoclonal antibodies used in biological therapies against cancer. The Fc receptors are isotype-specific, which gives greater flexibility to the immune system, invoking only the appropriate immune mechanisms for distinct pathogens.
Natural antibodies
Humans and higher primates also produce "natural antibodies" that are present in serum before viral infection. Natural antibodies have been defined as antibodies that are produced without any previous infection, vaccination, other foreign antigen exposure, or passive immunization. These antibodies can activate the classical complement pathway, leading to lysis of enveloped virus particles long before the adaptive immune response is activated. Antibodies are produced exclusively by B cells: initially they are formed as membrane-bound receptors, but upon activation by antigens and helper T cells, B cells differentiate to produce soluble antibodies. Many natural antibodies are directed against the disaccharide galactose α(1,3)-galactose (α-Gal), which is found as a terminal sugar on glycosylated cell surface proteins, and are generated in response to production of this sugar by bacteria contained in the human gut. These antibodies undergo quality checks in the endoplasmic reticulum (ER), which contains proteins that assist in proper folding and assembly. Rejection of xenotransplanted organs is thought to be, in part, the result of natural antibodies circulating in the serum of the recipient binding to α-Gal antigens expressed on the donor tissue.
Immunoglobulin diversity
Virtually all microbes can trigger an antibody response. Successful recognition and eradication of many different types of microbes requires diversity among antibodies; their amino acid composition varies, allowing them to interact with many different antigens. It has been estimated that humans generate about 10 billion different antibodies, each capable of binding a distinct epitope of an antigen. Although a huge repertoire of different antibodies is generated in a single individual, the number of genes available to make these proteins is limited by the size of the human genome. Several complex genetic mechanisms have evolved that allow vertebrate B cells to generate a diverse pool of antibodies from a relatively small number of antibody genes.
Domain variability
The chromosomal region that encodes an antibody is large and contains several distinct gene loci for each domain of the antibody—the chromosome region containing heavy chain genes (IGH@) is found on chromosome 14, and the loci containing lambda and kappa light chain genes (IGL@ and IGK@) are found on chromosomes 22 and 2 in humans. One of these domains is called the variable domain, which is present in each heavy and light chain of every antibody, but can differ in different antibodies generated from distinct B cells. Differences between the variable domains are located on three loops known as hypervariable regions (HV-1, HV-2 and HV-3) or complementarity-determining regions (CDR1, CDR2 and CDR3). CDRs are supported within the variable domains by conserved framework regions. The heavy chain locus contains about 65 different variable domain genes that all differ in their CDRs. Combining these genes with an array of genes for other domains of the antibody generates a large repertoire of antibodies with a high degree of variability. This combination is called V(D)J recombination and is discussed below.
V(D)J recombination
Somatic recombination of immunoglobulins, also known as V(D)J recombination, involves the generation of a unique immunoglobulin variable region. The variable region of each immunoglobulin heavy or light chain is encoded in several pieces—known as gene segments (subgenes). These segments are called variable (V), diversity (D) and joining (J) segments. V, D and J segments are found in Ig heavy chains, but only V and J segments are found in Ig light chains. Multiple copies of the V, D and J gene segments exist, and are tandemly arranged in the genomes of mammals. In the bone marrow, each developing B cell will assemble an immunoglobulin variable region by randomly selecting and combining one V, one D and one J gene segment (or one V and one J segment in the light chain). As there are multiple copies of each type of gene segment, and different combinations of gene segments can be used to generate each immunoglobulin variable region, this process generates a huge number of antibodies, each with different paratopes, and thus different antigen specificities. The rearrangement of several subgenes (e.g., the V2 family) for lambda light chain immunoglobulin is coupled with the activation of microRNA miR-650, which further influences the biology of B cells.
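A back-of-the-envelope count shows how few segments yield a large repertoire. The segment numbers below are rough literature estimates (the text above cites ~65 heavy-chain V genes), and junctional diversity multiplies the result much further, toward the ~10 billion figure quoted earlier:

```python
# Approximate combinatorial diversity from human V(D)J recombination.
heavy = 65 * 27 * 6        # ~V x D x J segments, heavy chain
kappa = 40 * 5             # ~V x J segments, kappa light chain
lam = 30 * 4               # ~V x J segments, lambda light chain
pairings = heavy * (kappa + lam)
print(f"{pairings:,}")     # ~3.4 million heavy/light pairings
# Imprecise joining and nucleotide insertion at the junctions raise this
# by several more orders of magnitude.
```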
RAG proteins play an important role in V(D)J recombination, cutting DNA at particular regions. Without these proteins, V(D)J recombination would not occur.
After a B cell produces a functional immunoglobulin gene during V(D)J recombination, it cannot express any other variable region (a process known as allelic exclusion); thus each B cell can produce antibodies containing only one kind of variable chain.
Somatic hypermutation and affinity maturation
Following activation with antigen, B cells begin to proliferate rapidly. In these rapidly dividing cells, the genes encoding the variable domains of the heavy and light chains undergo a high rate of point mutation, by a process called somatic hypermutation (SHM). SHM results in approximately one nucleotide change per variable gene, per cell division. As a consequence, any daughter B cells will acquire slight amino acid differences in the variable domains of their antibody chains.
This serves to increase the diversity of the antibody pool and impacts the antibody's antigen-binding affinity. Some point mutations will result in the production of antibodies that have a weaker interaction (low affinity) with their antigen than the original antibody, and some mutations will generate antibodies with a stronger interaction (high affinity). B cells that express high-affinity antibodies on their surface will receive a strong survival signal during interactions with other cells, whereas those with low-affinity antibodies will not, and will die by apoptosis. Thus, B cells expressing antibodies with a higher affinity for the antigen will outcompete those with weaker affinities for function and survival, allowing the average affinity of antibodies to increase over time. The process of generating antibodies with increased binding affinities is called affinity maturation. Affinity maturation occurs in mature B cells after V(D)J recombination, and is dependent on help from helper T cells.
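The mutate-then-select logic described here can be caricatured as a toy loop; this is a deliberately simplified illustration of the selection principle, not a biological simulation:

```python
import random

# Toy affinity maturation: mutate slightly each round, keep the best binders.
population = [1.0] * 100  # relative affinities of the starting B cells

for generation in range(10):
    # somatic hypermutation: each cell divides, daughters drift slightly
    daughters = [a * random.uniform(0.9, 1.1) for a in population for _ in (0, 1)]
    # selection: only the strongest binders receive survival signals
    population = sorted(daughters, reverse=True)[:100]

print(sum(population) / len(population))  # average affinity has risen above 1.0
```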
Class switching
Isotype or class switching is a biological process occurring after activation of the B cell, which allows the cell to produce different classes of antibody (IgA, IgE, or IgG). The different classes of antibody, and thus effector functions, are defined by the constant (C) regions of the immunoglobulin heavy chain. Initially, naive B cells express only cell-surface IgM and IgD with identical antigen binding regions. Each isotype is adapted for a distinct function; therefore, after activation, an antibody with an IgG, IgA, or IgE effector function might be required to effectively eliminate an antigen. Class switching allows different daughter cells from the same activated B cell to produce antibodies of different isotypes. Only the constant region of the antibody heavy chain changes during class switching; the variable regions, and therefore antigen specificity, remain unchanged. Thus the progeny of a single B cell can produce antibodies, all specific for the same antigen, but with the ability to produce the effector function appropriate for each antigenic challenge. Class switching is triggered by cytokines; the isotype generated depends on which cytokines are present in the B cell environment.
Class switching occurs in the heavy chain gene locus by a mechanism called class switch recombination (CSR). This mechanism relies on conserved nucleotide motifs, called switch (S) regions, found in DNA upstream of each constant region gene (except in the δ-chain). The DNA strand is broken by the activity of a series of enzymes at two selected S-regions. The variable domain exon is rejoined through a process called non-homologous end joining (NHEJ) to the desired constant region (γ, α or ε). This process results in an immunoglobulin gene that encodes an antibody of a different isotype.
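Because CSR deletes the DNA between the chosen switch regions, switching is one-way: a cell that has switched can only switch again to constant genes further downstream. A sketch with an approximate ordering of the human heavy-chain constant genes (the order shown is illustrative; the δ gene lacks a switch region and is not a real CSR target):

```python
# Simplified, deletional model of class switch recombination (CSR).
LOCUS = ["mu", "delta", "gamma3", "gamma1", "alpha1",
         "gamma2", "gamma4", "epsilon", "alpha2"]

def class_switch(locus, target):
    # NHEJ rejoins the variable exon to the target constant gene;
    # everything upstream of the target is excised and lost.
    return locus[locus.index(target):]

remaining = class_switch(LOCUS, "gamma1")
print(remaining)  # ['gamma1', 'alpha1', 'gamma2', 'gamma4', 'epsilon', 'alpha2']
# A later switch can only pick from what remains downstream.
```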
Specificity designations
An antibody can be called monospecific if it has specificity for a single antigen or epitope, or bispecific if it has affinity for two different antigens or two different epitopes on the same antigen. A group of antibodies can be called polyvalent (or unspecific) if they have affinity for various antigens or microorganisms. Intravenous immunoglobulin, if not otherwise noted, consists of a variety of different IgG (polyclonal IgG). In contrast, monoclonal antibodies are identical antibodies produced by a single B cell.
Asymmetrical antibodies
Heterodimeric antibodies, which are asymmetrical, allow for greater flexibility and new formats for attaching a variety of drugs to the antibody arms. One of the general formats for a heterodimeric antibody is the "knobs-into-holes" format. This format is specific to the heavy chain part of the constant region in antibodies. The "knob" part is engineered by replacing a small amino acid with a larger one; it fits into the "hole", which is engineered by replacing a large amino acid with a smaller one. What connects the "knobs" to the "holes" are the disulfide bonds between the chains. The "knobs-into-holes" shape facilitates antibody-dependent cell-mediated cytotoxicity. Single-chain variable fragments (scFv) consist of the variable domains of the heavy and light chains connected via a short linker peptide. The linker is rich in glycine, which gives it flexibility, and in serine/threonine, which gives it solubility. Two different scFv fragments can be connected, via a hinge region, to the constant domain of the heavy chain or the constant domain of the light chain. This gives the antibody bispecificity, allowing for the binding specificities of two different antigens. The "knobs-into-holes" format enhances heterodimer formation but does not suppress homodimer formation.
To further improve the function of heterodimeric antibodies, many scientists are looking towards artificial constructs. Artificial antibodies are largely diverse protein motifs that use the functional strategy of the antibody molecule, but are not limited by the loop and framework structural constraints of the natural antibody. Being able to control the combinational design of the sequence and three-dimensional space could transcend the natural design and allow for the attachment of different combinations of drugs to the arms.
Heterodimeric antibodies have a greater range of shapes they can take, and the drugs attached to the arms do not have to be the same on each arm, allowing for different combinations of drugs to be used in cancer treatment. Pharmaceutical manufacturers are able to produce highly functional bispecific, and even multispecific, antibodies. The degree to which they can function is impressive given that such a change of shape from the natural form should lead to decreased functionality.
Interchromosomal DNA Transposition
Antibody diversification typically occurs through somatic hypermutation, class switching, and affinity maturation targeting the BCR gene loci, but on occasion more unconventional forms of diversification have been documented. For example, in the case of malaria caused by Plasmodium falciparum, some antibodies from infected individuals demonstrated an insertion from chromosome 19 containing a 98-amino-acid stretch from leukocyte-associated immunoglobulin-like receptor 1 (LAIR1) in the elbow joint of the antibody. This represents a form of interchromosomal transposition. LAIR1 normally binds collagen, but can recognize members of the repetitive interspersed families of polypeptides (RIFINs) that are highly expressed on the surface of P. falciparum-infected red blood cells. In fact, these antibodies underwent affinity maturation that enhanced affinity for RIFIN but abolished affinity for collagen. These "LAIR1-containing" antibodies have been found in 5–10% of donors from Tanzania and Mali, though not in European donors; European donors did, however, show insertions of 100–1000 nucleotide stretches in the elbow joints. This particular phenomenon may be specific to malaria, as infection is known to induce genomic instability.
History
The first use of the term "antibody" occurred in a text by Paul Ehrlich. The term Antikörper (the German word for antibody) appears in the conclusion of his article "Experimental Studies on Immunity", published in October 1891, which states that, "if two substances give rise to two different Antikörper, then they themselves must be different". However, the term was not accepted immediately, and several other terms for antibody were proposed, including Immunkörper, Amboceptor, Zwischenkörper, substance sensibilisatrice, copula, Desmon, philocytase, fixateur, and Immunisin. The word antibody is formally analogous to the word antitoxin, and the concept is similar to that of Immunkörper (immune body in English). As such, the original construction of the word contains a logical flaw: an antitoxin is something directed against a toxin, whereas an antibody is a body directed against something.
The study of antibodies began in 1890 when Emil von Behring and Kitasato Shibasaburō described antibody activity against diphtheria and tetanus toxins. Von Behring and Kitasato put forward the theory of humoral immunity, proposing that a mediator in serum could react with a foreign antigen. Their idea prompted Paul Ehrlich to propose the side-chain theory for antibody and antigen interaction in 1897, when he hypothesized that receptors (described as "side-chains") on the surface of cells could bind specifically to toxins – in a "lock-and-key" interaction – and that this binding reaction is the trigger for the production of antibodies. Other researchers believed that antibodies existed freely in the blood and, in 1904, Almroth Wright suggested that soluble antibodies coated bacteria to label them for phagocytosis and killing; a process that he named opsonization.
In the 1920s, Michael Heidelberger and Oswald Avery observed that antigens could be precipitated by antibodies and went on to show that antibodies are made of protein. The biochemical properties of antigen-antibody-binding interactions were examined in more detail in the late 1930s by John Marrack. The next major advance was in the 1940s, when Linus Pauling confirmed the lock-and-key theory proposed by Ehrlich by showing that the interactions between antibodies and antigens depend more on their shape than their chemical composition. In 1948, Astrid Fagraeus discovered that B cells, in the form of plasma cells, were responsible for generating antibodies.
Further work concentrated on characterizing the structures of the antibody proteins. A major advance in these structural studies was the discovery in the early 1960s by Gerald Edelman and Joseph Gally of the antibody light chain, and their realization that this protein is the same as the Bence-Jones protein described in 1845 by Henry Bence Jones. Edelman went on to discover that antibodies are composed of disulfide bond-linked heavy and light chains. Around the same time, antibody-binding (Fab) and antibody tail (Fc) regions of IgG were characterized by Rodney Porter. Together, these scientists deduced the structure and complete amino acid sequence of IgG, a feat for which they were jointly awarded the 1972 Nobel Prize in Physiology or Medicine. The Fv fragment was prepared and characterized by David Givol. While most of these early studies focused on IgM and IgG, other immunoglobulin isotypes were identified in the 1960s: Thomas Tomasi discovered secretory antibody (IgA); David S. Rowe and John L. Fahey discovered IgD; and Kimishige Ishizaka and Teruko Ishizaka discovered IgE and showed it was a class of antibodies involved in allergic reactions. In a landmark series of experiments beginning in 1976, Susumu Tonegawa showed that genetic material can rearrange itself to form the vast array of available antibodies.
Medical applications
Disease diagnosis
Detection of particular antibodies is a very common form of medical diagnostics, and applications such as serology depend on these methods. For example, in biochemical assays for disease diagnosis, a titer of antibodies directed against Epstein-Barr virus or Lyme disease is estimated from the blood. If those antibodies are not present, either the person is not infected or the infection occurred a very long time ago, and the B cells generating these specific antibodies have naturally decayed.
In clinical immunology, levels of individual classes of immunoglobulins are measured by nephelometry (or turbidimetry) to characterize the antibody profile of the patient. Elevations in different classes of immunoglobulins are sometimes useful in determining the cause of liver damage in patients for whom the diagnosis is unclear. For example, IgM levels are often elevated in patients with primary biliary cirrhosis, whereas IgA deposition along hepatic sinusoids can suggest alcoholic liver disease.
Autoimmune disorders can often be traced to antibodies that bind the body's own epitopes; many can be detected through blood tests. Antibodies directed against red blood cell surface antigens in immune mediated hemolytic anemia are detected with the Coombs test. The Coombs test is also used for antibody screening in blood transfusion preparation and also for antibody screening in antenatal women.
Practically, several immunodiagnostic methods based on detection of antigen-antibody complexes are used to diagnose infectious diseases, for example ELISA, immunofluorescence, Western blot, immunodiffusion, immunoelectrophoresis, and magnetic immunoassay.
Over-the-counter home pregnancy tests rely on human chorionic gonadotropin (hCG)-directed antibodies.
New dioxaborolane chemistry enables radioactive fluoride (18F) labeling of antibodies, which allows for positron emission tomography (PET) imaging of cancer.
Disease therapy
Targeted monoclonal antibody therapy is employed to treat diseases such as rheumatoid arthritis, multiple sclerosis, psoriasis, and many forms of cancer including non-Hodgkin's lymphoma, colorectal cancer, head and neck cancer and breast cancer.
Some immune deficiencies, such as X-linked agammaglobulinemia and hypogammaglobulinemia, result in partial or complete lack of antibodies. These diseases are often treated by inducing a short-term form of immunity called passive immunity. Passive immunity is achieved through the transfer of ready-made antibodies in the form of human or animal serum, pooled immunoglobulin or monoclonal antibodies, into the affected individual.
Prenatal therapy
Rh factor, also known as Rh D antigen, is an antigen found on red blood cells; individuals that are Rh-positive (Rh+) have this antigen on their red blood cells and individuals that are Rh-negative (Rh–) do not. During normal childbirth, delivery trauma or complications during pregnancy, blood from a fetus can enter the mother's system. In the case of an Rh-incompatible mother and child, consequential blood mixing may sensitize an Rh- mother to the Rh antigen on the blood cells of the Rh+ child, putting the remainder of the pregnancy, and any subsequent pregnancies, at risk for hemolytic disease of the newborn.
Rho(D) immune globulin antibodies are specific for human RhD antigen. Anti-RhD antibodies are administered as part of a prenatal treatment regimen to prevent sensitization that may occur when an Rh-negative mother has an Rh-positive fetus. Treatment of a mother with anti-RhD antibodies prior to and immediately after trauma and delivery destroys fetal Rh antigen in the mother's system before it can stimulate maternal B cells to "remember" the Rh antigen by generating memory B cells. Therefore, her humoral immune system will not make anti-Rh antibodies and will not attack the Rh antigens of the current or subsequent babies. Rho(D) immune globulin treatment prevents sensitization that can lead to Rh disease, but does not prevent or treat the underlying disease itself.
Research applications
Specific antibodies are produced by injecting an antigen into a mammal, such as a mouse, rat, or rabbit, or into a goat, sheep, or horse when large quantities of antibody are needed. Blood isolated from these animals contains polyclonal antibodies—multiple antibodies that bind to the same antigen—in the serum, which can now be called antiserum. Antigens are also injected into chickens for generation of polyclonal antibodies in egg yolk. To obtain antibody that is specific for a single epitope of an antigen, antibody-secreting lymphocytes are isolated from the animal and immortalized by fusing them with a cancer cell line. The fused cells are called hybridomas, and will continually grow and secrete antibody in culture. Single hybridoma cells are isolated by dilution cloning to generate cell clones that all produce the same antibody; these antibodies are called monoclonal antibodies. Polyclonal and monoclonal antibodies are often purified using Protein A/G or antigen-affinity chromatography.
In research, purified antibodies are used in many applications. Antibodies for research applications can be found directly from antibody suppliers, or through use of a specialist search engine. Research antibodies are most commonly used to identify and locate intracellular and extracellular proteins. Antibodies are used in flow cytometry to differentiate cell types by the proteins they express; different types of cells express different combinations of cluster of differentiation molecules on their surface, and produce different intracellular and secretable proteins. They are also used in immunoprecipitation to separate proteins and anything bound to them (co-immunoprecipitation) from other molecules in a cell lysate, in Western blot analyses to identify proteins separated by electrophoresis, and in immunohistochemistry or immunofluorescence to examine protein expression in tissue sections or to locate proteins within cells with the assistance of a microscope. Proteins can also be detected and quantified with antibodies, using ELISA and ELISpot techniques.
Antibodies used in research are some of the most powerful yet most problematic reagents, with a tremendous number of factors that must be controlled in any experiment, including cross-reactivity (the antibody recognizing multiple epitopes) and affinity, which can vary widely depending on experimental conditions such as pH, solvent, and the state of the tissue. Multiple attempts have been made to improve both the way that researchers validate antibodies and the ways in which they report on antibodies. Researchers using antibodies in their work need to record them correctly in order to allow their research to be reproducible (and therefore tested and qualified by other researchers). Less than half of research antibodies referenced in academic papers can be easily identified. Papers published in F1000 in 2014 and 2015 provide researchers with a guide for reporting research antibody use. The RRID paper is co-published in four journals that implemented the RRID standard for research resource citation, which draws data from antibodyregistry.org as the source of antibody identifiers (see also the group at Force11).
Antibody regions can be used to further biomedical research by acting as a guide for drugs to reach their target. Several applications involve using bacterial plasmids to tag proteins with the Fc region of the antibody, such as the pFUSE-Fc plasmid.
Regulations
Production and testing
There are several ways to obtain antibodies, including in vivo techniques like animal immunization and various in vitro approaches, such as the phage display method. Traditionally, most antibodies are produced by hybridoma cell lines through immortalization of antibody-producing cells by chemically induced fusion with myeloma cells. In some cases, additional fusions with other lines have created "triomas" and "quadromas". The manufacturing process should be appropriately described and validated. Validation studies should at least include:
The demonstration that the process is able to produce antibody of good quality consistently (the process should be validated)
The efficiency of the antibody purification (all impurities and viruses must be eliminated)
The characterization of purified antibody (physicochemical characterization, immunological properties, biological activities, contaminants, ...)
Virus clearance studies
Before clinical trials
Product safety testing: Sterility (bacteria and fungi), in vitro and in vivo testing for adventitious viruses, murine retrovirus testing, etc. Product safety data are needed before the initiation of feasibility trials in serious or immediately life-threatening conditions; they serve to evaluate the dangerous potential of the product.
Feasibility testing: These are pilot studies whose objectives include, among others, early characterization of safety and initial proof of concept in a small specific patient population (in vitro or in vivo testing).
Preclinical studies
Testing cross-reactivity of the antibody: to highlight unwanted interactions (toxicity) of antibodies with previously characterized tissues. This study can be performed in vitro (reactivity of the antibody or immunoconjugate should be determined with quick-frozen adult tissues) or in vivo (with appropriate animal models).
Preclinical pharmacology and toxicity testing: preclinical safety testing of antibody is designed to identify possible toxicity in humans, to estimate the likelihood and severity of potential adverse events in humans, and to identify a safe starting dose and dose escalation, when possible.
Animal toxicity studies: Acute toxicity testing, repeat-dose toxicity testing, long-term toxicity testing
Pharmacokinetics and pharmacodynamics testing: used to determine clinical dosages and antibody activities, and to evaluate potential clinical effects
Structure prediction and computational antibody design
The importance of antibodies in health care and the biotechnology industry demands knowledge of their structures at high resolution. This information is used for protein engineering, modifying the antigen-binding affinity, and identifying an epitope of a given antibody. X-ray crystallography is one commonly used method for determining antibody structures. However, crystallizing an antibody is often laborious and time-consuming. Computational approaches provide a cheaper and faster alternative to crystallography, but their results are more equivocal, since they do not produce empirical structures. Online web servers such as Web Antibody Modeling (WAM) and Prediction of Immunoglobulin Structure (PIGS) enable computational modeling of antibody variable regions. RosettaAntibody is an antibody Fv region structure prediction server which incorporates sophisticated techniques to minimize CDR loops and optimize the relative orientation of the light and heavy chains, as well as homology models that predict successful docking of antibodies with their unique antigen.
The ability to describe the antibody through binding affinity to the antigen is supplemented by information on antibody structure and amino acid sequences for the purpose of patent claims. Several methods have been presented for computational design of antibodies based on the structural bioinformatics studies of antibody CDRs.
There are a variety of methods used to sequence an antibody, including Edman degradation and cDNA sequencing; however, one of the most common modern methods for peptide/protein identification is liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS). High-volume antibody sequencing methods require computational approaches for data analysis, including de novo sequencing directly from tandem mass spectra and database search methods that use existing protein sequence databases. Many versions of shotgun protein sequencing are able to increase coverage by utilizing CID/HCD/ETD fragmentation methods and other techniques, and they have achieved substantial progress in attempts to fully sequence proteins, especially antibodies. Other methods have assumed the existence of similar proteins, a known genome sequence, or combined top-down and bottom-up approaches. Current technologies have the ability to assemble protein sequences with high accuracy by integrating de novo sequencing of peptides, intensity, and positional confidence scores from database and homology searches.
Antibody mimetic
Antibody mimetics are organic compounds that, like antibodies, can specifically bind antigens. They consist of artificial peptides or proteins, or aptamer-based nucleic acid molecules, with a molar mass of about 3 to 20 kDa. Antibody fragments, such as Fab fragments and nanobodies, are not considered antibody mimetics. Common advantages over antibodies are better solubility, tissue penetration, stability towards heat and enzymes, and comparatively low production costs. Antibody mimetics have been developed and commercialized as research, diagnostic and therapeutic agents.
Binding antibody unit
BAU (binding antibody unit, often expressed as BAU/mL) is a measurement unit defined by the WHO for the comparison of assays detecting the same class of immunoglobulins with the same specificity.
|
;Glycoproteins;Immunology;Reagents for biochemistry
|
https://en.wikipedia.org/wiki/Accelerated%20Graphics%20Port
|
Accelerated Graphics Port (AGP) is a parallel expansion card standard, designed for attaching a video card to a computer system to assist in the acceleration of 3D computer graphics. It was originally designed as a successor to PCI-type connections for video cards. Since 2004, AGP was progressively phased out in favor of PCI Express (PCIe), which is serial, as opposed to parallel; by mid-2008, PCI Express cards dominated the market and only a few AGP models were available, with GPU manufacturers and add-in board partners eventually dropping support for the interface in favor of PCI Express.
Advantages over PCI
AGP is a superset of the PCI standard, designed to overcome PCI's limitations in serving the requirements of the era's high-performance graphics cards.
The primary advantage of AGP is that it does not share the PCI bus, providing a dedicated, point-to-point pathway between the expansion slot and the motherboard chipset. The direct connection also allows higher clock speeds.
The second major change is the use of split transactions, wherein the address and data phases are separated. The card may send many address phases so that the host can queue and process the requests in order, avoiding long delays caused by the bus sitting idle during read operations.
Third, PCI bus handshaking is simplified. Unlike PCI bus transactions, whose length is negotiated on a cycle-by-cycle basis using the FRAME# and STOP# signals, AGP transfers are always a multiple of 8 bytes long, with the total length included in the request. Further, rather than using the IRDY# and TRDY# signals for each word, data is transferred in blocks of 4 clock cycles (32 words at AGP 8× speed), and pauses are allowed only between blocks.
Finally, AGP allows (mandatory only in AGP 3.0) sideband addressing, meaning that the address and data buses are separated, so the address phase does not use the main address/data (AD) lines at all. This is done by adding an extra 8-bit "SideBand Address" bus, over which the graphics controller can issue new AGP requests while other AGP data is flowing over the main 32 address/data (AD) lines. This results in improved overall AGP data throughput.
This great improvement in memory read performance makes it practical for an AGP card to read textures directly from system RAM, while a PCI graphics card must copy it from system RAM to the card's video memory. System memory is made available using the graphics address remapping table (GART), which apportions main memory as needed for texture storage. The maximum amount of system memory available to AGP is defined as the AGP aperture.
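The GART behaves much like a small page table: consecutive pages of the AGP aperture are remapped onto scattered pages of physical system memory. A minimal dictionary-based sketch (page size and addresses purely illustrative):

```python
# Toy graphics address remapping table (GART); values are illustrative only.
PAGE = 4096
gart = {0: 0x84000, 1: 0x13000, 2: 0xFA000}  # aperture page -> physical base

def translate(aperture_offset):
    page, offset = divmod(aperture_offset, PAGE)
    return gart[page] + offset

# A texture read one page into the aperture lands in a scattered RAM page:
print(hex(translate(PAGE + 0x234)))  # 0x13234
```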
History
The AGP slot first appeared on x86-compatible system boards based on Socket 7 Intel P5 Pentium and Slot 1 P6 Pentium II processors. Intel introduced AGP support with the i440LX Slot 1 chipset on August 26, 1997, and a flood of products followed from all the major system board vendors.
The first Socket 7 chipsets to support AGP were the VIA Apollo VP3, SiS 5591/5592, and the ALI Aladdin V. Intel never released an AGP-equipped Socket 7 chipset. FIC demonstrated the first Socket 7 AGP system board in November 1997 as the FIC PA-2012 based on the VIA Apollo VP3 chipset, followed very quickly by the EPoX P55-VP3, also based on the VIA VP3 chipset, which was first to market.
Early video chipsets featuring AGP support included the Rendition Vérité V2200, 3dfx Voodoo Banshee, Nvidia RIVA 128, 3Dlabs PERMEDIA 2, Intel i740, ATI Rage series, Matrox Millennium II, and S3 ViRGE GX/2. Some early AGP boards used graphics processors built around PCI and were simply bridged to AGP. This resulted in the cards benefiting little from the new bus, with the only improvement used being the 66 MHz bus clock, with its resulting doubled bandwidth over PCI, and bus exclusivity. Intel's i740 was explicitly designed to exploit the new AGP feature set; in fact it was designed to texture only from AGP memory, making PCI versions of the board difficult to implement (local board RAM had to emulate AGP memory), though this was eventually accomplished much later in the form of AGP-to-PCI bridges.
Microsoft first introduced AGP support into Windows via the USB Supplement patch for OSR2 of Windows 95 in 1997, also known as OSR2.1. The first Windows NT-based operating system to receive AGP support was Windows NT 4.0 with Service Pack 3, also in 1997. Linux support for AGP-enhanced fast data transfers was first added in 1999 with the implementation of the AGPgart kernel module.
Later use
With the increasing adoption of PCIe, graphics card manufacturers continued to produce AGP cards even as the standard became obsolete. As GPUs began to be designed to connect via PCIe, an additional PCIe-to-AGP bridge chip was required to create an AGP-compatible graphics card. The inclusion of a bridge, and the need for a separate AGP card design, incurred additional board costs.
The GeForce 6600 and ATI Radeon X800 XL, released during 2004–2005, were the first bridged cards. In 2009, the highest-end AGP cards from Nvidia were in the GeForce 7 series. In 2011, DirectX 10-capable AGP cards from AMD board vendors (Club 3D, HIS, Sapphire, Jaton, Visiontek, Diamond, etc.) included the Radeon HD 2400, 3450, 3650, 3850, 4350, 4650, and 4670. The HD 5000 AGP series mentioned in the AMD Catalyst software was never made available. There were many problems with the AMD Catalyst 11.2–11.6 AGP hotfix drivers under Windows 7 with the HD 4000 series AGP video cards; use of the 10.12 or 11.1 AGP hotfix drivers is a possible workaround. Several of the vendors listed above make available past versions of the AGP drivers.
By 2010, no new motherboard chipsets supported AGP and few new motherboards had AGP slots; however, some continued to be produced with older AGP-supporting chipsets.
In 2016, Windows 10 version 1607 dropped support for AGP. Possible future removal of support for AGP from open-source Linux kernel drivers was considered in 2020.
Versions
Intel released "AGP specification 1.0" in 1997. It specified 3.3 V signals and 1× and 2× speeds. Specification 2.0 documented 1.5 V signaling, which could be used at 1×, 2× and the additional 4× speed and 3.0 added 0.8 V signaling, which could be operated at 4× and 8× speeds. (1× and 2× speeds are physically possible, but were not specified.)
Available versions are listed in the adjacent table.
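The headline transfer rates follow directly from the 32-bit (4-byte) bus, the ~66 MHz base clock, and the transfer multiplier; a quick check of the arithmetic:

```python
# Peak AGP bandwidth = 4 bytes x 66.67 MHz base clock x multiplier.
BUS_BYTES, BASE_MHZ = 4, 66.67

for mult in (1, 2, 4, 8):
    print(f"AGP {mult}x: ~{BUS_BYTES * BASE_MHZ * mult:.0f} MB/s")
# AGP 1x: ~267 MB/s, 2x: ~533 MB/s, 4x: ~1067 MB/s, 8x: ~2133 MB/s
```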
AGP version 3.5 is only publicly mentioned by Microsoft under the name Universal Accelerated Graphics Port (UAGP), which specifies mandatory support of extra registers that were marked optional under AGP 3.0. Upgraded registers include PCISTS, CAPPTR, NCAPID, AGPSTAT, AGPCMD, NISTAT, and NICMD. New required registers include APBASELO, APBASEHI, AGPCTRL, APSIZE, NEPG, GARTLO, and GARTHI.
There are various physical interfaces (connectors); see the Compatibility section.
Official extensions
AGP Pro
An official extension for cards that required more electrical power, with a longer slot with additional pins for that purpose. AGP Pro cards were usually workstation-class cards used to accelerate professional computer-aided design applications employed in the fields of architecture, machining, engineering, simulations, and similar fields.
64-bit AGP
A 64-bit channel was once proposed as an optional standard for AGP 3.0 in draft documents, but it was dropped in the final version of the standard.
The draft allowed 64-bit transfer for AGP 8× reads, writes, and fast writes, and 32-bit transfer for PCI operations.
Unofficial variations
A number of non-standard variations of the AGP interface have been produced by manufacturers.
Internal AGP interface
Ultra-AGP, Ultra-AGPII: An internal AGP interface standard used by SiS for north bridge controllers with integrated graphics. The original version supports the same bandwidth as AGP 8×, while Ultra-AGPII has a maximum bandwidth of 3.2 GB/s.
PCI-based AGP ports
AGP Express: Not a true AGP interface, but a way of allowing an AGP card to be connected over the legacy PCI bus on a PCI Express motherboard. It is a technology used on motherboards made by ECS, intended to allow an existing AGP card to be used in a new motherboard instead of requiring a PCIe card to be obtained (since the introduction of PCIe graphics cards, few motherboards provide AGP slots). An "AGP Express" slot is basically a PCI slot (with twice the electrical power) with an AGP connector. It offers backward compatibility with AGP cards, but provides incomplete support (some AGP cards do not work with AGP Express) and reduced performance; the card is forced to use the shared PCI bus at its lower bandwidth, rather than having exclusive use of the faster AGP.
AGI: The ASRock Graphics Interface (AGI) is a proprietary variant of the Accelerated Graphics Port (AGP) standard. Its purpose is to provide AGP support for ASRock motherboards that use chipsets lacking native AGP support. However, it is not fully compatible with AGP, and several video card chipsets are known not to be supported.
AGX: The EPoX Advanced Graphics eXtended (AGX) is another proprietary AGP variant with the same advantages and disadvantages as AGI. User manuals recommend not using AGP 8× ATI cards with AGX slots.
XGP: The Biostar Xtreme Graphics Port is another AGP variant, also with the same advantages and disadvantages as AGI and AGX.
PCIe based AGP ports
AGR: The Advanced Graphics Riser is a variation of the AGP port used in some PCIe motherboards made by MSI to offer limited backward compatibility with AGP. It is, effectively, a modified PCIe slot allowing for performance comparable to an AGP 4×/8× slot, but it does not support all AGP cards; the manufacturer published a list of some cards and chipsets that work with the modified slot.
Compatibility
AGP cards are backward and forward compatible within limits. 1.5 V-only keyed cards will not go into 3.3 V slots and vice versa, though "Universal" cards exist which will fit into either type of slot. There are also unkeyed "Universal" slots that will accept either type of card. When an AGP Universal card is plugged into an AGP Universal slot, only the 1.5 V portion of the card is used. Some cards, like Nvidia's GeForce 6 series (except the 6200) or ATI's Radeon X800 series, only have keys for 1.5 V to prevent them from being installed in older mainboards without 1.5 V support. Some of the last modern cards with 3.3 V support were:
the Nvidia GeForce FX series (FX 5200, FX 5500, FX 5700, some FX 5800, FX 5900 and some FX 5950),
certain Nvidia GeForce 6 series and 7 series cards (some 6600, 6800, 7300, 7600, 7800, 7900 and 7950 cards, quite uncommon compared to their 1.5 V-only AGP versions; the GeForce 6200 is the only exception, as it was the most common card with 3.3 V support),
the ATI Radeon 9000 series (Radeon 9500/9700/9800 (R300/R350), but not 9600/9800 (R360/RV360)).
Some cards incorrectly have dual notches, and some motherboards incorrectly have fully open slots, allowing a card to be plugged into a slot that does not support the correct signaling voltage, which may damage the card or motherboard. Some incorrectly designed older 3.3 V cards have the 1.5 V key.
AGP Pro cards will not fit into standard slots, but standard AGP cards will work in a Pro slot. Motherboards equipped with a Universal AGP Pro slot will accept a 1.5 V or 3.3 V card in either the AGP Pro or standard AGP configuration, a Universal AGP card, or a Universal AGP Pro card.
There are some proprietary systems incompatible with standard AGP; for example, Apple Power Macintosh computers with the Apple Display Connector (ADC) have an extra connector which delivers power to the attached display. Some cards designed to work with a specific CPU architecture (e.g., PC, Apple) may not work with others due to firmware issues.
Mark Allen of Playtools.com has published comments on practical AGP compatibility for AGP 3.0 and AGP 2.0.
Power consumption
Actual power supplied by an AGP slot depends upon the card used. The maximum current drawn from the various rails is given in the specifications for the various versions. For example, if maximum current is drawn from all supplies and all voltages are at their specified upper limits, an AGP 3.0 slot can supply up to 48.25 watts; this figure can be used to specify a power supply conservatively, but in practice a card is unlikely ever to draw more than 40 W from the slot, with many using less. AGP Pro provides additional power up to 110 W. Many AGP cards had additional power connectors to supply them with more power than the slot could provide.
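As a rough illustration of how such a worst-case figure is computed, the following C sketch multiplies each rail's upper-limit voltage by its maximum current and sums the results. The rail voltages, tolerances, and currents below are placeholder values for illustration, not limits taken from any AGP specification.

```c
#include <stdio.h>

/* Worst-case slot power: sum over rails of (upper-limit voltage *
   maximum current). The rail figures below are illustrative
   placeholders, NOT the limits from the AGP 3.0 specification;
   consult the specification for the real values. */
struct rail {
    const char *name;
    double v_nominal;   /* volts */
    double tolerance;   /* fraction, e.g. 0.05 for +/-5% */
    double i_max;       /* amperes */
};

int main(void)
{
    struct rail rails[] = {
        { "3.3 V", 3.3, 0.05, 6.0 },   /* hypothetical values */
        { "5 V",   5.0, 0.05, 2.0 },   /* hypothetical values */
        { "12 V", 12.0, 0.05, 1.0 },   /* hypothetical values */
    };
    double total = 0.0;
    for (size_t i = 0; i < sizeof rails / sizeof rails[0]; i++) {
        double w = rails[i].v_nominal * (1.0 + rails[i].tolerance)
                 * rails[i].i_max;
        printf("%-6s up to %6.2f W\n", rails[i].name, w);
        total += w;
    }
    printf("worst case: %.2f W\n", total);
    return 0;
}
```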
Protocol
An AGP bus is a superset of a 66 MHz conventional PCI bus and, immediately after reset, follows the same protocol. The card must act as a PCI target, and optionally may act as a PCI master. (AGP 2.0 added a "fast writes" extension which allows PCI writes from the motherboard to the card to transfer data at higher speed.)
After the card is initialized using PCI transactions, AGP transactions are permitted. For these, the card is always the AGP master and the motherboard is always the AGP target. The card queues multiple requests which correspond to the PCI address phase, and the motherboard schedules the corresponding data phases later. An important part of initialization is telling the card the maximum number of outstanding AGP requests which may be queued at a given time.
AGP requests are similar to PCI memory read and write requests, but use a different encoding on command lines C/BE[3:0] and are always 8-byte aligned; their starting address and length are always multiples of 8 bytes (64 bits). The three low-order bits of the address are used instead to communicate the length of the request.
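A minimal sketch of this encoding in C for ordinary read and write requests, assuming a 32-bit address; the function name is ours, not from the specification.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Pack an AGP request onto the 32 AD lines: the address must be
   8-byte aligned, and the three low-order address bits carry the
   length code, where length = 8 * (code + 1) bytes. A minimal
   sketch; signal-level details are omitted. */
static uint32_t agp_request_ad(uint32_t addr, unsigned len_bytes)
{
    assert((addr & 7) == 0);              /* 8-byte aligned */
    assert(len_bytes % 8 == 0 && len_bytes >= 8 && len_bytes <= 64);
    uint32_t code = len_bytes / 8 - 1;    /* goes on AD[2:0] */
    return addr | code;
}

int main(void)
{
    /* A 32-byte request at address 0x1000 encodes as 0x1003. */
    printf("AD = 0x%08x\n", agp_request_ad(0x1000, 32));
    return 0;
}
```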
Whenever the PCI GNT# signal is asserted, granting the bus to the card, three additional status bits ST[2:0] indicate the type of transfer to be performed next. If the bits are 0xx, a previously queued AGP transaction's data is to be transferred; if the three bits are 111, the card may begin a PCI transaction or (if sideband addressing is not in use) queue a request in-band using PIPE#.
AGP command codes
Like PCI, each AGP transaction begins with an address phase, communicating an address and 4-bit command code. The possible commands are different from PCI, however:
000p Read
Read 8×(AD[2:0]+1) = 8, 16, 24, ..., 64 bytes. The least significant bit p is 0 for low-priority, 1 for high.
001x (reserved)
010p Write
Write 8×(AD[2:0]+1) = 8–64 bytes.
011x (reserved)
100p Long read
Read 32×(AD[2:0]+1) = 32, 64, 96, ..., 256 bytes. This is the same as a read request, but the length is multiplied by four.
1010 Flush
Force previously written data to memory, for synchronization. This acts as a low-priority read, taking a queue slot and returning 8 bytes of random data to indicate completion. The address and length supplied with this command are ignored.
1011 (reserved)
1100 Fence
This acts as a memory fence, requiring that all earlier AGP requests complete before any following requests. Ordinarily, for increased performance, AGP uses a very weak consistency model, and allows a later write to pass an earlier read. (E.g. after sending "write 1, write 2, read, write 3, write 4" requests, all to the same address, the read may return any value from 2 to 4. Only returning 1 is forbidden, as writes must complete before following reads.) This operation does not require any queue slots.
1101 Dual address cycle
When making a request to an address above 2³², this is used to indicate that a second address cycle will follow with additional address bits. This operates like a regular PCI dual address cycle; it is accompanied by the low-order 32 bits of the address (and the length), and the following cycle includes the high 32 address bits and the desired command. The two cycles make one request, and take only one slot in the request queue. This request code is not used with side-band addressing.
111x (reserved)
AGP 3.0 dropped high-priority requests and the long read commands, as they were little used. It also mandated side-band addressing, thus dropping the dual address cycle, leaving only four request types: low-priority read (0000), low-priority write (0100), flush (1010) and fence (1100).
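The command table can be summarized in a small decoder. The following C sketch is illustrative only: it maps the 4-bit code from the AGP 2.0 table above to a name, and computes the transfer length from AD[2:0], including the four-fold multiplier for long reads.

```c
#include <stdint.h>
#include <stdio.h>

/* Decode the 4-bit AGP command code driven on C/BE[3:0] during the
   address phase, per the AGP 2.0 command table above. */
static const char *agp_command_name(uint8_t cmd)
{
    switch (cmd & 0xF) {
    case 0x0: return "read (low priority)";
    case 0x1: return "read (high priority)";
    case 0x4: return "write (low priority)";
    case 0x5: return "write (high priority)";
    case 0x8: return "long read (low priority)";
    case 0x9: return "long read (high priority)";
    case 0xA: return "flush";
    case 0xC: return "fence";
    case 0xD: return "dual address cycle";
    default:  return "reserved";
    }
}

/* Transfer length in bytes for read/write commands: 8*(AD[2:0]+1),
   multiplied by four again for long reads (100p). */
static unsigned agp_length(uint8_t cmd, uint8_t ad_low3)
{
    unsigned len = 8u * (ad_low3 + 1u);
    if ((cmd & 0xE) == 0x8)   /* 100p: long read */
        len *= 4;
    return len;
}

int main(void)
{
    printf("%s, %u bytes\n", agp_command_name(0x8), agp_length(0x8, 7));
    /* prints: long read (low priority), 256 bytes */
    return 0;
}
```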
In-band AGP requests using PIPE#
To queue a request in-band, the card must request the bus using the standard PCI REQ# signal, and receive GNT# plus bus status ST[2:0] equal to 111. Then, instead of asserting FRAME# to begin a PCI transaction, the card asserts the PIPE# signal while driving the AGP command, address, and length on the C/BE[3:0], AD[31:3] and AD[2:0] lines, respectively. (If the address is 64 bits, a dual address cycle similar to PCI is used.) For every cycle that PIPE# is asserted, the card sends another request without waiting for acknowledgement from the motherboard, up to the configured maximum queue depth. The last cycle is marked by deasserting REQ#, and PIPE# is deasserted on the following idle cycle.
Side-band AGP requests using SBA[7:0]
If side-band addressing is supported and configured, the PIPE# signal is not used. (And the signal is re-used for another purpose in the AGP 3.0 protocol, which requires side-band addressing.) Instead, requests are broken into 16-bit pieces which are sent as two bytes across the SBA bus. There is no need for the card to ask permission from the motherboard; a new request may be sent at any time as long as the number of outstanding requests is within the configured maximum queue depth. The possible values are:
0aaa aaaa aaaa alll
Queue a request with the given low-order address bits A[14:3] and length 8×(L[2:0]+1). The command and high-order bits are as previously specified. Any number of requests may be queued by sending only this pattern, as long as the command and higher address bits remain the same.
10cc ccra aaaa aaaa
Use command C[3:0] and address bits A[23:15] for future requests. (Bit R is reserved.) This does not queue a request, but sets values that will be used in all future queued requests.
110r aaaa aaaa aaaa
Use address bits A[35:24] for future requests.
1110 aaaa aaaa aaaa
Use address bits A[47:36] for future requests.
1111 0xxx, 1111 10xx, 1111 110x
Reserved, do not use.
1111 1110
Synchronization pattern used when starting the SBA bus after an idle period.
1111 1111
No operation; no request. At AGP 1× speed, this may be sent as a single byte and a following 16-bit side-band request started one cycle later. At AGP 2× and higher speeds, all side-band requests, including this NOP, are 16 bits long.
Sideband address bytes are sent at the same rate as data transfers, up to 8× the 66 MHz basic bus clock. Sideband addressing has the advantage that it mostly eliminates the need for turnaround cycles on the AD bus between transfers, in the usual case when read operations greatly outnumber writes.
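To make the bit patterns concrete, the following C sketch packs a 48-bit address, command code, and length into the four side-band word types listed above. It is a simplification: it always emits the three "sticky" Type 2 to Type 4 words, whereas real hardware re-sends them only when the command or upper address bits change.

```c
#include <stdint.h>
#include <stdio.h>

/* Build the 16-bit side-band (SBA) words for one AGP request, using
   the bit patterns listed above. Reserved (r) bits are left zero. */
static void sba_encode(uint64_t addr, uint8_t cmd, unsigned len_bytes,
                       uint16_t out[4])
{
    uint16_t l = (uint16_t)(len_bytes / 8 - 1);           /* L[2:0] */
    out[0] = 0xE000 | (uint16_t)((addr >> 36) & 0xFFF);   /* 1110 + A[47:36] */
    out[1] = 0xC000 | (uint16_t)((addr >> 24) & 0xFFF);   /* 110r + A[35:24] */
    out[2] = 0x8000 | (uint16_t)((cmd & 0xF) << 10)       /* 10cc ccr + */
                    | (uint16_t)((addr >> 15) & 0x1FF);   /* A[23:15] */
    out[3] = (uint16_t)(((addr >> 3) & 0xFFF) << 3)       /* 0 + A[14:3] + */
                    | (uint16_t)(l & 7);                  /* L[2:0] */
}

int main(void)
{
    uint16_t w[4];
    sba_encode(0x12345678, 0x4 /* low-priority write */, 16, w);
    for (int i = 0; i < 4; i++)
        printf("SBA word %d: 0x%04x\n", i, w[i]);
    return 0;
}
```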
AGP responses
While asserting GNT#, the motherboard may instead indicate via the ST bits that a data phase for a queued request will be performed next. There are four queues: two priorities (low- and high-priority) for each of reads and writes, and each is processed in order. Naturally, the motherboard will attempt to complete high-priority requests first, but there is no limit on the number of low-priority responses that may be delivered while a high-priority request is processed.
For each cycle when GNT# is asserted and the status bits have the value 00p, a read response of the indicated priority is scheduled to be returned. At the next available opportunity (typically the next clock cycle), the motherboard will assert TRDY# (target ready) and begin transferring the response to the oldest request in the indicated read queue. (Other PCI bus signals like FRAME#, DEVSEL# and IRDY# remain deasserted.) Up to four clock cycles' worth of data (16 bytes at AGP 1× or 128 bytes at AGP 8×) are transferred without waiting for acknowledgement from the card. If the response is longer than that, both the card and motherboard must indicate their ability to continue on the third cycle by asserting IRDY# (initiator ready) and TRDY#, respectively. If either one does not, wait states will be inserted until two cycles after they both do. (The value of IRDY# and TRDY# at other times is irrelevant and they are usually deasserted.)
The C/BE# byte enable lines may be ignored during read responses, but are held asserted (all bytes valid) by the motherboard.
The card may also assert the RBF# (read buffer full) signal to indicate that it is temporarily unable to receive more low-priority read responses. The motherboard will refrain from scheduling any more low-priority read responses. The card must still be able to receive the end of the current response, and the first four-cycle block of the following one if scheduled, plus any high-priority responses it has requested.
For each cycle when GNT# is asserted and the status bits have the value 01p, write data is scheduled to be sent across the bus. At the next available opportunity (typically the next clock cycle), the card will assert IRDY# (initiator ready) and begin transferring the data portion of the oldest request in the indicated write queue. If the data is longer than four clock cycles, the motherboard will indicate its ability to continue by asserting TRDY# on the third cycle. Unlike reads, there is no provision for the card to delay the write; if it did not have the data ready to send, it should not have queued the request.
The C/BE# lines are used with write data, and may be used by the card to select which bytes should be written to memory.
The multiplier in AGP 2×, 4× and 8× indicates the number of data transfers across the bus during each 66 MHz clock cycle. Such transfers use source-synchronous clocking with a "strobe" signal (AD_STB[0], AD_STB[1], and SB_STB) generated by the data source. AGP 4× adds complementary strobe signals.
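The resulting peak transfer rates follow directly from the 32-bit bus width and the multiplier, as the short calculation below shows; these are theoretical peaks, and protocol overhead lowers real throughput.

```c
#include <stdio.h>

/* Peak transfer rate: a 32-bit (4-byte) AD bus moving "multiplier"
   transfers per 66.67 MHz clock. Theoretical peaks only. */
int main(void)
{
    const double clock_hz = 66.67e6;  /* base AGP clock */
    const int bus_bytes = 4;          /* 32-bit AD bus */
    int multipliers[] = { 1, 2, 4, 8 };
    for (int i = 0; i < 4; i++) {
        double mbps = clock_hz * bus_bytes * multipliers[i] / 1e6;
        printf("AGP %dx: %.0f MB/s\n", multipliers[i], mbps);
    }
    return 0;  /* prints approx. 267, 533, 1067, 2133 MB/s */
}
```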
Because AGP transactions may be as short as two transfers, at AGP 4× and 8× speeds it is possible for a request to complete in the middle of a clock cycle. In such a case, the cycle is padded with dummy data transfers (with the C/BE# byte enable lines held deasserted).
Connector pinout
The AGP connector contains almost all PCI signals, plus several additions. The connector has 66 contacts on each side, although 4 are removed for each keying notch. Pin 1 is closest to the I/O bracket, and the B and A sides are as in the table, looking down at the motherboard connector.
Contacts are spaced at 1 mm intervals; however, they are arranged in two staggered vertical rows so that there is 2 mm of space between pins in each row. Odd-numbered A-side contacts and even-numbered B-side contacts are in the lower row (1.0 to 3.5 mm from the card edge). The others are in the upper row (3.7 to 6.0 mm from the card edge).
PCI signals omitted are:
The −12 V supply
The third and fourth interrupt requests (INTC#, INTD#)
The JTAG pins (TRST#, TCK, TMS, TDI, TDO)
The SMBus pins (SMBCLK, SMBDAT)
The IDSEL pin; an AGP card connects AD[16] to IDSEL internally
The 64-bit extension (REQ64#, ACK64#) and 66 MHz (M66EN) pins
The LOCK# pin for locked transaction support
Signals added are:
Data strobes AD_STB[1:0] (and AD_STB[1:0]# in AGP 2.0)
The sideband address bus SBA[7:0] and SB_STB (and SB_STB# in AGP 2.0)
The ST[2:0] status signals
USB+ and USB− (and OVERCNT# in AGP 2.0)
The PIPE# signal (removed in AGP 3.0 for 0.8 V signaling)
The RBF# signal
The TYPEDET#, Vregcg and Vreggc pins (AGP 2.0 for 1.5 V signaling)
The DBI_HI and DBI_LO signals (AGP 3.0 for 0.8 V signaling only)
The GC_DET# and MB_DET# pins (AGP 3.0 for 0.8 V signaling)
The WBF# signal (AGP 3.0 fast write extension)
See also
List of device bandwidths
Serial Digital Video Out for ADD DVI adapter cards
AGP Inline Memory Module
Notes
References
External links
Archived AGP Implementors Forum
AGP specifications: 1.0, 2.0, 3.0, Pro 1.0, Pro 1.1a
AGP Compatibility For Sticklers
AGP pinout
AGP expansion slots
AGP compatibility (with pictures)
Universal Accelerated Graphics Port (UAGP)
How Stuff Works - AGP
A discussion from 2003 of what AGP aperture is, how it works, and how much memory should be allocated to it.
|
IBM PC compatibles;Intel graphics;Macintosh internals;Motherboard expansion slot;Peripheral Component Interconnect
|
https://en.wikipedia.org/wiki/AMD
|
Advanced Micro Devices, Inc. (AMD) is an American multinational corporation and technology company headquartered in Santa Clara, California, with significant operations in Austin, Texas. AMD is a fabless semiconductor company that designs and develops central processing units (CPUs), graphics processing units (GPUs), field-programmable gate arrays (FPGAs), systems-on-chip (SoCs), and high-performance computing solutions. AMD serves a wide range of business and consumer markets, including gaming, data centers, artificial intelligence (AI), and embedded systems.
AMD's main products include microprocessors, motherboard chipsets, embedded processors, and graphics processors for servers, workstations, personal computers, and embedded system applications. The company has also expanded into new markets, such as the data center, gaming, and high-performance computing markets. AMD's processors are used in a wide range of computing devices, including personal computers, servers, laptops, and gaming consoles. While it initially manufactured its own processors, the company outsourced its manufacturing after spinning off GlobalFoundries in 2009. Through its 2022 acquisition of Xilinx, AMD offers field-programmable gate array (FPGA) products.
AMD was founded in 1969 by Jerry Sanders and a group of other technology professionals. The company's early products were primarily memory chips and other components for computers. In 1975, AMD entered the microprocessor market, competing with Intel, its main rival in the industry. In the early 2000s, it experienced significant growth and success, thanks in part to its strong position in the PC market and the success of its Athlon and Opteron processors. However, the company faced challenges in the late 2000s and early 2010s, as it struggled to keep up with Intel in the race to produce faster and more powerful processors.
In the late 2010s, AMD regained market share by pursuing a penetration pricing strategy and building on the success of its Ryzen processors, which were considerably more competitive with Intel microprocessors in terms of performance whilst offering attractive pricing. In 2022, AMD surpassed Intel by market capitalization for the first time.
History
Foundational years
Advanced Micro Devices was formally incorporated by Jerry Sanders, along with seven of his colleagues from Fairchild Semiconductor, on May 1, 1969. Sanders, an electrical engineer who was the director of marketing at Fairchild, had, like many Fairchild executives, grown frustrated with the increasing lack of support, opportunity, and flexibility within the company. He later decided to leave to start his own semiconductor company, following the footsteps of Robert Noyce (developer of the first silicon integrated circuit at Fairchild in 1959) and Gordon Moore, who together founded the semiconductor company Intel in July 1968.
In September 1969, AMD moved from its temporary location in Santa Clara to Sunnyvale, California. To immediately secure a customer base, AMD initially became a second source supplier of microchips designed by Fairchild and National Semiconductor. AMD first focused on producing logic chips. The company guaranteed quality control to United States Military Standard, an advantage in the early computer industry since unreliability in microchips was a distinct problem that customers – including computer manufacturers, the telecommunications industry, and instrument manufacturers – wanted to avoid.
In November 1969, the company manufactured its first product: the Am9300, a 4-bit MSI shift register, which began selling in 1970. Also in 1970, AMD produced its first proprietary product, the Am2501 logic counter, which was highly successful. Its bestselling product in 1971 was the Am2505, the fastest multiplier available.
In 1971, AMD entered the RAM chip market, beginning with the Am3101, a 64-bit bipolar RAM. That year AMD also greatly increased the sales volume of its linear integrated circuits, and by year-end the company's total annual sales reached US$4.6 million.
AMD went public in September 1972. The company was a second source for Intel MOS/LSI circuits by 1973, with products such as Am14/1506 and Am14/1507, dual 100-bit dynamic shift registers. By 1975, AMD was producing 212 products – of which 49 were proprietary, including the Am9102 (a static N-channel 1024-bit RAM) and three low-power Schottky MSI circuits: Am25LS07, Am25LS08, and Am25LS09.
Intel had created the first microprocessor, its 4-bit 4004, in 1971. By 1975, AMD entered the microprocessor market with the Am9080, a reverse-engineered clone of the Intel 8080, and the Am2900 bit-slice microprocessor family. When Intel began installing microcode in its microprocessors in 1976, it entered into a cross-licensing agreement with AMD, which was granted a copyright license to the microcode in its microprocessors and peripherals, effective October 1976.
In 1977, AMD entered into a joint venture with Siemens, a German engineering conglomerate wishing to enhance its technology expertise and enter the American market. Siemens purchased 20% of AMD's stock, giving the company an infusion of cash to increase its product lines. The two companies also jointly established Advanced Micro Computers (AMC), located in Silicon Valley and in Germany, allowing AMD to enter the microcomputer development and manufacturing field, in particular based on AMD's second-source Zilog Z8000 microprocessors. When the two companies' vision for Advanced Micro Computers diverged, AMD bought out Siemens' stake in the American division in 1979. AMD closed Advanced Micro Computers in late 1981 after switching focus to manufacturing second-source Intel x86 microprocessors.
Total sales in fiscal year 1978 topped $100 million, and in 1979, AMD debuted on the New York Stock Exchange. In 1979, production also began on AMD's new semiconductor fabrication plant in Austin, Texas; the company already had overseas assembly facilities in Penang and Manila, and began construction on a fabrication plant in San Antonio in 1981. In 1980, AMD began supplying semiconductor products for telecommunications, an industry undergoing rapid expansion and innovation.
Intel partnership
Intel had introduced the first x86 microprocessors in 1978. In 1981, IBM created its PC, and wanted Intel's x86 processors, but only under the condition that Intel would also provide a second-source manufacturer for its patented x86 microprocessors. Intel and AMD entered into a 10-year technology exchange agreement, first signed in October 1981 and formally executed in February 1982. The terms of the agreement were that each company could acquire the right to become a second-source manufacturer of semiconductor products developed by the other; that is, each party could "earn" the right to manufacture and sell a product developed by the other, if agreed to, by exchanging the manufacturing rights to a product of equivalent technical complexity. The technical information and licenses needed to make and sell a part would be exchanged for a royalty to the developing company. The 1982 agreement also extended the 1976 AMD–Intel cross-licensing agreement through 1995. The agreement included the right to invoke arbitration of disagreements, and after five years the right of either party to end the agreement with one year's notice. The main result of the 1982 agreement was that AMD became a second-source manufacturer of Intel's x86 microprocessors and related chips, and Intel provided AMD with database tapes for its 8086, 80186, and 80286 chips. However, in the event of a bankruptcy or takeover of AMD, the cross-licensing agreement would be effectively canceled.
Beginning in 1982, AMD began volume-producing second-source Intel-licensed 8086, 8088, 80186, and 80188 processors, and by 1984, its own Am286 clone of Intel's 80286 processor, for the rapidly growing market of IBM PCs and IBM clones. It also continued its successful concentration on proprietary bipolar chips.
The company continued to spend greatly on research and development, and created the world's first 512K EPROM in 1984. That year, AMD was listed in the book The 100 Best Companies to Work for in America, and later made the Fortune 500 list for the first time in 1985.
By mid-1985, the microchip market experienced a severe downturn, mainly due to long-term aggressive trade practices (dumping) from Japan, but also due to a crowded and non-innovative chip market in the United States. AMD rode out the mid-1980s crisis by aggressively innovating and modernizing, devising the Liberty Chip program of designing and manufacturing one new chip or chipset per week for 52 weeks in fiscal year 1986, and by heavily lobbying the U.S. government until sanctions and restrictions were put in place to prevent predatory Japanese pricing. During this time, AMD withdrew from the DRAM market, and made some headway into the CMOS market, which it had lagged in entering, having focused instead on bipolar chips.
AMD had some success in the mid-1980s with the AMD7910 and AMD7911 "World Chip" FSK modem, one of the first multi-standard devices that covered both Bell and CCITT tones at up to 1200 baud half duplex or 300/300 full duplex. Beginning in 1986, AMD embraced the perceived shift toward RISC with their own AMD Am29000 (29k) processor; the 29k survived as an embedded processor. The company also increased its EPROM memory market share in the late 1980s. Throughout the 1980s, AMD was a second-source supplier of Intel x86 processors. In 1991, it introduced its 386-compatible Am386, an AMD-designed chip. Creating its own chips, AMD began to compete directly with Intel.
AMD had a large, successful flash memory business, even during the dotcom bust. In 2003, to divest some manufacturing and aid its overall cash flow, which was under duress from aggressive microprocessor competition from Intel, AMD spun off its flash memory business and manufacturing into Spansion, a joint venture with Fujitsu, which had been co-manufacturing flash memory with AMD since 1993. In December 2005, AMD divested itself of Spansion to focus on the microprocessor market, and Spansion went public in an IPO.
2006–present
On July 24, 2006, AMD announced its acquisition of the Canadian 3D graphics card company ATI Technologies. AMD paid $4.3 billion in cash and 58 million shares of its capital stock, for a total of approximately $5.4 billion. The transaction was completed on October 25, 2006. On August 30, 2010, AMD announced that it would retire the ATI brand name for its graphics chipsets in favor of the AMD brand name.
In October 2008, AMD announced plans to spin off manufacturing operations in the form of GlobalFoundries Inc., a multibillion-dollar joint venture with Advanced Technology Investment Co., an investment company formed by the government of Abu Dhabi. The partnership and spin-off gave AMD an infusion of cash and allowed it to focus solely on chip design. To assure the Abu Dhabi investors of the new venture's success, AMD's CEO Hector Ruiz stepped down in July 2008, while remaining executive chairman, in preparation for becoming chairman of GlobalFoundries in March 2009. President and COO Dirk Meyer became AMD's CEO. Recessionary losses necessitated AMD cutting 1,100 jobs in 2009.
In August 2011, AMD announced that former Lenovo executive Rory Read would be joining the company as CEO, replacing Meyer. In November 2011, AMD announced plans to lay off more than 10% (1,400) of its employees from across all divisions worldwide. In October 2012, it announced plans to lay off an additional 15% of its workforce to reduce costs in the face of declining sales revenue. The inclusion of AMD chips into the PlayStation 4 and Xbox One were later seen as saving AMD from bankruptcy.
AMD acquired the low-power server manufacturer SeaMicro in early 2012, with an eye to bringing out an Arm64 server chip.
On October 8, 2014, AMD announced that Rory Read had stepped down after three years as president and chief executive officer. He was succeeded by Lisa Su, a key lieutenant who had been chief operating officer since June.
On October 16, 2014, AMD announced a new restructuring plan along with its Q3 results. Effective July 1, 2014, AMD reorganized into two business groups: Computing and Graphics, which primarily includes desktop and notebook processors and chipsets, discrete GPUs, and professional graphics; and Enterprise, Embedded, and Semi-Custom, which primarily includes server and embedded processors, dense servers, semi-custom SoC products (including solutions for gaming consoles), engineering services, and royalties. As part of this restructuring, AMD announced that 7% of its global workforce would be laid off by the end of 2014.
After the GlobalFoundries spin-off and subsequent layoffs, AMD was left with significant vacant space at 1 AMD Place, its aging Sunnyvale headquarters office complex. In August 2016, AMD's 47 years in Sunnyvale came to a close when it signed a lease with the Irvine Company for a new 220,000 sq. ft. headquarters building in Santa Clara. AMD's new location at Santa Clara Square faces the headquarters of archrival Intel across the Bayshore Freeway and San Tomas Aquino Creek. Around the same time, AMD also agreed to sell 1 AMD Place to the Irvine Company. In April 2019, the Irvine Company secured approval from the Sunnyvale City Council of its plans to demolish 1 AMD Place and redevelop the entire 32-acre site into townhomes and apartments.
In October 2020, AMD announced that it was acquiring Xilinx, one of the market leaders in field programmable gate arrays and complex programmable logic devices (FPGAs and CPLDs) in an all-stock transaction. The acquisition was completed in February 2022, with an estimated acquisition price of $50 billion.
In October 2023, AMD acquired an open-source AI software provider, Nod.ai, to bolster its AI software ecosystem.
In January 2024, AMD announced it was discontinuing the production of all complex programmable logic devices (CPLDs) acquired through Xilinx.
In March 2024, a rally in semiconductor stocks pushed AMD's valuation above $300B for the first time.
In July 2024, AMD announced that it would acquire the Finnish-based artificial intelligence startup company Silo AI in a $665 million all-cash deal in an attempt to better compete with AI chip market leader Nvidia.
In August 2024, AMD signed a deal to acquire ZT Systems for $4.9 billion. ZT Systems builds custom computing infrastructure used for AI applications.
List of CEOs
Products
CPUs and APUs
IBM PC and the x86 architecture
In February 1982, AMD signed a contract with Intel, becoming a licensed second-source manufacturer of 8086 and 8088 processors. IBM wanted to use the Intel 8088 in its IBM PC, but its policy at the time was to require at least two sources for its chips. AMD later produced the Am286 under the same arrangement. In 1984, Intel internally decided to no longer cooperate with AMD in supplying product information to shore up its advantage in the marketplace, and delayed and eventually refused to convey the technical details of the Intel 80386. In 1987, AMD invoked arbitration over the issue, and Intel reacted by canceling the 1982 technological-exchange agreement altogether. After three years of testimony, AMD eventually won in arbitration in 1992, but Intel disputed this decision. Another long legal dispute followed, ending in 1994 when the Supreme Court of California sided with the arbitrator and AMD.
In 1990, Intel countersued AMD, renegotiating AMD's right to use derivatives of Intel's microcode for its cloned processors. In the face of uncertainty during the legal dispute, AMD was forced to develop clean-room designed versions of Intel code for its x386 and x486 processors, the former long after Intel had released its own x386 in 1985. In March 1991, AMD released the Am386, its clone of the Intel 386 processor. By October of the same year, it had sold one million units.
In 1993, AMD introduced the first of the Am486 family of processors, which proved popular with a large number of original equipment manufacturers, including Compaq, which signed an exclusive agreement using the Am486. The Am5x86, another Am486-based processor, was released in November 1995, and continued AMD's success as a fast, cost-effective processor.
Finally, in an agreement effective 1996, AMD received the rights to the microcode in Intel's x386 and x486 processor families, but not the rights to the microcode in the following generations of processors.
K5, K6, Athlon, Duron, and Sempron
AMD's first in-house x86 processor was the K5, launched in 1996. The "K" in its name was a reference to Kryptonite, the only substance known to harm comic book character Superman; this was a jab at Intel's dominance of the market, an anthropomorphization of Intel as Superman. The number "5" referred to the fifth generation of x86 processors; rival Intel had previously introduced its line of fifth-generation x86 processors as Pentium because the U.S. Patent and Trademark Office had ruled that mere numbers could not be trademarked.
In 1996, AMD purchased NexGen, specifically for the rights to their Nx series of x86-compatible processors. AMD gave the NexGen design team their own building, left them alone, and gave them time and money to rework the Nx686. The result was the K6 processor, introduced in 1997. Although it was based on Socket 7, variants such as K6-III/450 were faster than Intel's Pentium II (sixth-generation processor).
The K7 was AMD's seventh-generation x86 processor, making its debut under the brand name Athlon on June 23, 1999. Unlike previous AMD processors, it could not be used on the same motherboards as Intel's, due to licensing issues surrounding Intel's Slot 1 connector; it instead used a Slot A connector, based on the DEC Alpha processor bus. The Duron was a lower-cost and limited version of the Athlon (64 KB instead of 256 KB L2 cache) in a 462-pin socketed PGA (Socket A) or soldered directly onto the motherboard. The Sempron was released as a lower-cost Athlon XP, replacing the Duron in the Socket A PGA era. It has since been migrated upward to all new sockets, up to AM3.
On October 9, 2001, the Athlon XP was released. On February 10, 2003, the Athlon XP with 512 KB L2 Cache was released.
Athlon 64, Opteron, and Phenom
The K8 was a major revision of the K7 architecture, with the most notable features being the addition of a 64-bit extension to the x86 instruction set (called x86-64, AMD64, or x64), the incorporation of an on-chip memory controller, and the implementation of an extremely high-performance point-to-point interconnect called HyperTransport, as part of the Direct Connect Architecture. The technology was initially launched as the Opteron server-oriented processor on April 22, 2003. Shortly thereafter, it was incorporated into a product for desktop PCs, branded Athlon 64.
On April 21, 2005, AMD released the first dual-core Opteron, an x86-based server CPU. A month later, it released the Athlon 64 X2, the first desktop-based dual-core processor family. In May 2007, AMD abandoned the string "64" in its dual-core desktop product branding, becoming Athlon X2, downplaying the significance of 64-bit computing in its processors. Further updates involved improvements to the microarchitecture, and a shift of the target market from mainstream desktop systems to value dual-core desktop systems. In 2008, AMD started to release dual-core Sempron processors exclusively in China, branded as the Sempron 2000 series, with lower HyperTransport speed and smaller L2 cache. AMD completed its dual-core product portfolio for each market segment.
In September 2007, AMD released the first server Opteron K10 processors, followed in November by the Phenom processor for desktops. K10 processors came in dual-core, triple-core, and quad-core versions, with all cores on a single die. AMD released a new platform codenamed "Spider", which used the new Phenom processor, an R770 GPU, and a 790 GX/FX chipset from the AMD 700 chipset series. However, AMD built the Spider's processors on a 65 nm process, which was uncompetitive with Intel's smaller, more power-efficient 45 nm process.
In January 2009, AMD released a new processor line dubbed Phenom II, a refresh of the original Phenom built using the 45 nm process. AMD's new platform, codenamed "Dragon", used the new Phenom II processor, an ATI R770 GPU from the R700 GPU family, and a 790 GX/FX chipset from the AMD 700 chipset series. The Phenom II came in dual-core, triple-core, and quad-core variants, all using the same die, with cores disabled for the triple-core and dual-core versions. The Phenom II resolved issues that the original Phenom had, including a low clock speed, a small L3 cache, and a Cool'n'Quiet bug that decreased performance. The Phenom II cost less, but was not performance-competitive with Intel's mid-to-high-range Core 2 Quads. The Phenom II also enhanced its predecessor's memory controller, allowing it to use DDR3 in the new native Socket AM3 while maintaining backward compatibility with AM2+ (the socket used for the Phenom) and the DDR2 memory used with that platform.
In April 2010, AMD released a new Phenom II hexa-core (6-core) processor codenamed "Thuban". This was an entirely new die based on the hexa-core "Istanbul" Opteron processor. It included AMD's "turbo core" technology, which allows the processor to automatically switch from six cores to three faster cores when more single-threaded speed is needed.
The Magny-Cours and Lisbon server parts were released in 2010. Magny-Cours came in 8- and 12-core versions and Lisbon in 4- and 6-core versions. Magny-Cours focused on absolute performance, while Lisbon focused on performance per watt. Magny-Cours is a multi-chip module (MCM) combining two hexa-core "Istanbul" Opteron dies. It used the new Socket G34 for dual- and quad-socket systems and was marketed as the Opteron 61xx series. Lisbon used Socket C32, certified for single- or dual-socket use, and was marketed as the Opteron 41xx series. Both were built on a 45 nm SOI process.
Fusion becomes the AMD APU
Following AMD's 2006 acquisition of Canadian graphics company ATI Technologies, an initiative codenamed Fusion was announced to integrate a CPU and GPU together on some of AMD's microprocessors, including a built-in PCI Express link to accommodate separate PCI Express peripherals, eliminating the northbridge chip from the motherboard. The initiative intended to move some of the processing originally done on the CPU (e.g., floating-point unit operations) to the GPU, which is better optimized for some calculations. Fusion was later renamed the AMD APU (Accelerated Processing Unit).
Llano, AMD's first APU built for laptops and the second APU released, targeted the mainstream market. It incorporated a CPU, a GPU, and northbridge functions on the same die, and used "Socket FM1" with DDR3 memory. The CPU portion was based on the Phenom II "Deneb" processor. AMD suffered an unexpected decrease in revenue due to production problems with Llano. AMD APUs became common in laptops running Windows 7 and Windows 8. These include AMD's budget APUs, the E1 and E2, and the Vision A-series (the A standing for "accelerated"), which competed with Intel's mainstream Core i-series. The A-series ranges from the lower-performance A4 to the A6, A8, and A10. All incorporate Radeon graphics, with the A4 using the base Radeon HD chip and the rest using a Radeon R4 graphics core, except for the highest-end A10 (A10-7300), which uses an R6 graphics core.
New microarchitectures
High-power, high-performance Bulldozer cores
Bulldozer was AMD's microarchitecture codename for server and desktop AMD FX processors, first released on October 12, 2011. This family 15h microarchitecture was the successor to the family 10h (K10) microarchitecture design. Bulldozer was a clean-sheet design, not a development of earlier processors. The core was specifically aimed at computing products with TDPs of 10 to 125 W. AMD claimed dramatic performance-per-watt efficiency improvements in high-performance computing (HPC) applications with Bulldozer cores. While hopes were high that Bulldozer would make AMD performance-competitive with Intel once more, most benchmarks were disappointing. In some cases the new Bulldozer products were slower than the K10 models they were built to replace.
The Piledriver microarchitecture was the 2012 successor to Bulldozer, increasing clock speeds and performance relative to its predecessor. Piledriver would be released in AMD FX, APU, and Opteron product lines. Piledriver was subsequently followed by the Steamroller microarchitecture in 2013. Used exclusively in AMD's APUs, Steamroller focused on greater parallelism.
In 2015, the Excavator microarchitecture replaced Piledriver. Expected to be the last microarchitecture of the Bulldozer series, Excavator focused on improved power efficiency.
Low-power Cat cores
The Bobcat microarchitecture was revealed during a speech by AMD executive vice-president Henri Richard at Computex 2007 and was put into production during the first quarter of 2011. Because of the difficulty of competing in the x86 market with a single core design optimized for the 10–100 W range, AMD developed a simpler core with a target range of 1–10 W. In addition, it was believed the core could migrate into the hand-held space if power consumption could be reduced to less than 1 W.
Jaguar is the microarchitecture codename for Bobcat's successor, released in 2013 and used in various APUs from AMD aimed at the low-power/low-cost market. Jaguar and its derivatives would go on to be used in the custom APUs of the PlayStation 4, Xbox One, PlayStation 4 Pro, Xbox One S, and Xbox One X. Jaguar was later followed by the Puma microarchitecture in 2014.
ARM architecture-based designs
In 2012, AMD announced it was working on ARM products, both as a semi-custom product and server product. The initial server product was announced as the Opteron A1100 in 2014, an 8-core Cortex-A57-based ARMv8-A SoC, and was expected to be followed by an APU incorporating a Graphics Core Next GPU. However, the Opteron A1100 was not released until 2016, with the delay attributed to adding software support. The A1100 was also criticized for not having support from major vendors upon its release.
In 2014, AMD also announced the K12 custom core for release in 2016. While being ARMv8-A instruction set architecture compliant, the K12 was expected to be entirely custom-designed, targeting the server, embedded, and semi-custom markets. While ARM architecture development continued, products based on K12 were subsequently delayed with no release planned. Development of AMD's x86-based Zen microarchitecture was preferred.
Zen-based CPUs and APUs
Zen is the microarchitecture behind AMD's x86-64-based Ryzen series of CPUs and APUs, introduced in 2017 and built from the ground up by a team led by Jim Keller, beginning with his arrival in 2012 and taping out before his departure in September 2015.
One of AMD's primary goals with Zen was an IPC increase of at least 40%; in February 2017, AMD announced that it had actually achieved a 52% increase. Processors made on the Zen architecture are built on the 14 nm FinFET node and have a renewed focus on single-core performance and HSA compatibility. Previous processors from AMD were built on either the 32 nm process ("Bulldozer" and "Piledriver" CPUs) or the 28 nm process ("Steamroller" and "Excavator" APUs). Because of this, Zen is much more energy efficient.
The Zen architecture is the first from AMD to encompass CPUs and APUs built for a single socket (Socket AM4). Also new for this architecture is simultaneous multithreading (SMT), a feature Intel has offered for years on some of its processors through its proprietary Hyper-Threading implementation of SMT. This is a departure from the "Clustered MultiThreading" design introduced with the Bulldozer architecture. Zen also supports DDR4 memory.
AMD released the Zen-based high-end Ryzen 7 "Summit Ridge" series CPUs on March 2, 2017, mid-range Ryzen 5 series CPUs on April 11, 2017, and entry-level Ryzen 3 series CPUs on July 27, 2017. AMD later released the Epyc line of Zen-derived server processors for 1P and 2P systems. In October 2017, AMD released Zen-based APUs as Ryzen Mobile, incorporating Vega graphics cores. In January 2018, AMD announced its new lineup plans with Ryzen 2. AMD launched CPUs with the 12 nm Zen+ microarchitecture in April 2018, following up with the 7 nm Zen 2 microarchitecture in June 2019, including an update to the Epyc line with new processors using the Zen 2 microarchitecture in August 2019, with Zen 3 slated for release in Q3 2020.
As of 2019, AMD's Ryzen processors were reported to outsell Intel's consumer desktop processors. At CES 2020, AMD announced its Ryzen Mobile 4000 series as the first 7 nm x86 mobile processors, the first 7 nm 8-core (16-thread) high-performance mobile processors, and the first 8-core (16-thread) processors for ultrathin laptops. This generation is still based on the Zen 2 architecture. In October 2020, AMD announced new processors based on the Zen 3 architecture. On PassMark's single-thread performance test, the Ryzen 5 5600X bested all other CPUs except the Ryzen 9 5950X.
In April 2020, AMD launched three new SKUs targeting commercial HPC workloads and hyperconverged infrastructure applications. The launch was based on Epyc's 7 nm second-generation "Rome" platform and was supported by Dell EMC, Hewlett Packard Enterprise, Lenovo, Supermicro, and Nutanix. IBM Cloud was its first public cloud partner. In August 2022, AMD announced its initial lineup of CPUs based on the new Zen 4 architecture.
The Steam Deck, PlayStation 5, Xbox Series X and Series S all use chips based on the Zen 2 microarchitecture, with proprietary tweaks and different configurations in each system's implementation than AMD sells in its own commercially available APUs.
In March 2025, AMD announced Instella, an open-source large language model.
Graphics products and GPUs
ATI prior to AMD acquisition
Radeon within AMD
In 2007, the ATI division of AMD released the TeraScale microarchitecture, implementing a unified shader model. This design replaced the fixed-function hardware of earlier graphics cards with multipurpose, programmable shaders. Initially released as part of the GPU for the Xbox 360, this technology would go on to be used in Radeon-branded HD 2000 parts. Three generations of TeraScale would be designed and used in parts from 2007 to 2015.
Combined GPU and CPU divisions
In a 2009 restructuring, AMD merged the CPU and GPU divisions to support the company's APUs, which fused both graphics and general purpose processing. In 2011, AMD released the successor to TeraScale, Graphics Core Next (GCN). This new microarchitecture emphasized GPGPU compute capability in addition to graphics processing, with a particular aim of supporting heterogeneous computing on AMD's APUs. GCN's reduced instruction set ISA allowed for significantly increased compute capability over TeraScale's very long instruction word ISA. Since GCN's introduction with the HD 7970, five generations of the GCN architecture have been produced from 2011 through at least 2018.
Radeon Technologies Group
In September 2015, AMD separated the graphics technology division of the company into an independent internal unit called the Radeon Technologies Group (RTG) headed by Raja Koduri. This gave the graphics division of AMD autonomy in product design and marketing. The RTG then went on to create and release the Polaris and Vega microarchitectures released in 2016 and 2017, respectively. In particular the Vega, or fifth-generation GCN, microarchitecture includes a number of major revisions to improve performance and compute capabilities.
In November 2017, Raja Koduri left RTG, and CEO and President Lisa Su took his position. In January 2018, it was reported that two industry veterans had joined RTG: Mike Rayfield as senior vice president and general manager of RTG, and David Wang as senior vice president of engineering for RTG. In January 2020, AMD announced that its second-generation RDNA graphics architecture was in development, with the aim of competing with Nvidia's RTX graphics products for performance leadership. In October 2020, AMD announced its new RX 6000 series GPUs, its first high-end products based on RDNA 2 and capable of handling ray tracing natively, aiming to challenge Nvidia's RTX 3000 GPUs.
Semi-custom and game console products
In 2012, AMD's then CEO Rory Read began a program to offer semi-custom designs. Rather than AMD simply designing and offering a single product, potential customers could work with AMD to design a custom chip based on AMD's intellectual property. Customers pay a non-recurring engineering fee for design and development, and a purchase price for the resulting semi-custom products. In particular, AMD noted their unique position of offering both x86 and graphics intellectual property. These semi-custom designs would have design wins as the APUs in the PlayStation 4 and Xbox One and the subsequent PlayStation 4 Pro, Xbox One S, Xbox One X, Xbox Series X/S, and PlayStation 5. Financially, these semi-custom products would represent a majority of the company's revenue in 2016. In November 2017, AMD and Intel announced that Intel would market a product combining in a single package an Intel Core CPU, a semi-custom AMD Radeon GPU, and HBM2 memory.
Other hardware
AMD motherboard chipsets
Before the launch of Athlon 64 processors in 2003, AMD designed chipsets for its processors spanning the K6 and K7 processor generations, including the AMD-640, AMD-751, and AMD-761 chipsets. The situation changed in 2003 with the release of Athlon 64 processors: AMD chose to stop designing its own chipsets for its desktop processors and opened the desktop platform to allow other firms to design chipsets. This was the "Open Platform Management Architecture", with ATI, VIA, and SiS developing their own chipsets for Athlon 64 processors and later Athlon 64 X2 and Athlon 64 FX processors, including the Quad FX platform chipset from Nvidia.
The initiative went further with the release of Opteron server processors, as AMD stopped designing server chipsets in 2004 after releasing the AMD-8111 chipset and again opened the server platform for other firms to develop chipsets for Opteron processors. Nvidia and Broadcom became the sole designers of server chipsets for Opteron processors.
As the company completed the acquisition of ATI Technologies in 2006, it gained ATI's chipset design team, which had previously designed the Radeon Xpress 200 and Radeon Xpress 3200 chipsets. AMD then renamed the chipsets for AMD processors under AMD branding (for instance, the CrossFire Xpress 3200 chipset was renamed the AMD 580X CrossFire chipset). In February 2007, AMD announced the first AMD-branded chipset since 2004 with the release of the AMD 690G chipset (previously under the development codename RS690), targeted at mainstream IGP computing. It was the industry's first chipset to implement an HDMI 1.2 port on motherboards, shipping more than a million units. While ATI had aimed to release an Intel IGP chipset, the plan was scrapped, and the inventories of the Radeon Xpress 1250 (codenamed RS600, sold under the ATI brand) were sold to two OEMs, Abit and ASRock. Although AMD stated the firm would still produce Intel chipsets, Intel had not granted ATI a license for its front-side bus.
On November 15, 2007, AMD announced a new chipset series portfolio, the AMD 7-Series chipsets, covering the range from the enthusiast multi-graphics segment to the value IGP segment, to replace the AMD 480/570/580 chipsets and AMD 690 series chipsets, marking AMD's first enthusiast multi-graphics chipset. Discrete graphics chipsets were launched on November 15, 2007, as part of the codenamed Spider desktop platform, and IGP chipsets were launched later, in spring 2008, as part of the codenamed Cartwheel platform.
AMD returned to the server chipset market with the AMD 800S series server chipsets. The series includes support for up to six SATA 6.0 Gbit/s ports, the C6 power state featured in Fusion processors, and AHCI 1.2 with SATA FIS-based switching support. The chipset family supports Phenom processors and the Quad FX enthusiast platform (890FX), as well as IGP (890GX).
With the advent of AMD's APUs in 2011, traditional northbridge features such as the connection to graphics and the PCI Express controller were incorporated into the APU die. Accordingly, APUs were connected to a single chip chipset, renamed the Fusion Controller Hub (FCH), which primarily provided southbridge functionality.
AMD released new chipsets in 2017 to support the release of their new Ryzen products. As the Zen microarchitecture already includes much of the northbridge connectivity, the AM4-based chipsets primarily varied in the number of additional PCI Express lanes, USB connections, and SATA connections available. These AM4 chipsets were designed in conjunction with ASMedia.
Embedded products
Embedded CPUs
In the early 1990s, AMD began marketing a series of embedded systems-on-a-chip (SoCs) called AMD Élan, starting with the SC300 and SC310. Both combine a 32-bit, low-voltage 25 MHz or 33 MHz Am386SX CPU with a memory controller, PC/AT peripheral controllers, a real-time clock, PLL clock generators, and an ISA bus interface. The SC300 additionally integrates two PC Card slots and a CGA-compatible LCD controller. They were followed in 1996 by the SC4xx types, which supported the VESA Local Bus and used the Am486 with clock speeds up to 100 MHz; an SC450 at 33 MHz, for example, was used in the Nokia 9000 Communicator. The SC520, announced in 1999, used an Am586 at 100 MHz or 133 MHz and supported SDRAM and PCI; it was the latest member of the series.
In February 2002, AMD acquired Alchemy Semiconductor for its Alchemy line of MIPS processors for the hand-held and portable media player markets. On June 13, 2006, AMD officially announced that the line was to be transferred to Raza Microelectronics, Inc., a designer of MIPS processors for embedded applications.
In August 2003, AMD also purchased the Geode business (originally the Cyrix MediaGX) from National Semiconductor to augment its existing line of embedded x86 processor products. During the second quarter of 2004, it launched new low-power Geode NX processors based on the K7 Thoroughbred architecture, offered in fanless versions as well as a fan-cooled version with a TDP of 25 W. This technology is used in a variety of embedded systems (casino slot machines and customer kiosks, for instance), several UMPC designs in Asian markets, and the OLPC XO-1 computer, an inexpensive laptop intended to be distributed to children in developing countries around the world. The Geode LX processor was announced in 2005 and was expected to remain available through 2015.
AMD also introduced 64-bit processors into its embedded product line, starting with the AMD Opteron processor. Leveraging the high throughput enabled by HyperTransport and the Direct Connect Architecture, these server-class processors were targeted at high-end telecom and storage applications. In 2007, AMD added the AMD Athlon, AMD Turion, and Mobile AMD Sempron processors to its embedded product line. Leveraging the same 64-bit instruction set and Direct Connect Architecture as the AMD Opteron, but at lower power levels, these processors were well suited to a variety of traditional embedded applications. Throughout 2007 and into 2008, AMD continued to add both single-core Mobile AMD Sempron and AMD Athlon processors and dual-core AMD Athlon X2 and AMD Turion processors to its embedded product line, and it came to offer embedded 64-bit solutions ranging from 8 W TDP Mobile AMD Sempron and AMD Athlon processors for fanless designs up to multi-processor systems using multi-core AMD Opteron processors, all supporting longer-than-standard availability.
The ATI acquisition in 2006 included the Imageon and Xilleon product lines. In late 2008, the entire handheld division was sold off to Qualcomm, who have since produced the Adreno series. Also in 2008, the Xilleon division was sold to Broadcom.
In April 2007, AMD announced the release of the M690T integrated graphics chipset for embedded designs. This enabled AMD to offer complete processor and chipset solutions targeted at embedded applications requiring high-performance 3D and video, such as emerging digital signage, kiosk, and point-of-sale applications. The M690T was followed by the M690E, specifically for embedded applications; it removed the TV output, which had required Macrovision licensing for OEMs, and added native support for dual TMDS outputs, enabling two independent DVI interfaces.
In January 2011, AMD announced the AMD Embedded G-Series Accelerated Processing Unit. This was the first APU for embedded applications. These were followed by updates in 2013 and 2016.
In May 2012, AMD announced the AMD Embedded R-Series Accelerated Processing Unit. This family of products incorporates the Bulldozer CPU architecture and discrete-class Radeon HD 7000G series graphics. It was followed by a system-on-a-chip (SoC) version in 2015, which offered a faster CPU and faster graphics, with support for DDR4 SDRAM memory.
Embedded graphics
AMD builds graphics processors for use in embedded systems. They can be found in anything from casinos to healthcare, with a large portion of products being used in industrial machines. These products include a complete graphics processing device in a compact multi-chip module, including RAM and the GPU. ATI began offering embedded GPUs with the E2400 in 2008. Since that time, AMD has released regular updates to its embedded GPU lineup in 2009, 2011, 2015, and 2016, reflecting improvements in its GPU technology.
Current product lines
CPU and APU products
AMD's portfolio of CPUs and APUs
Athlon – brand of entry-level CPUs (Excavator) and APUs (Ryzen)
A-series – Excavator-class consumer desktop and laptop APUs
G-series – Excavator- and Jaguar-class low-power embedded APUs
Ryzen – brand of consumer CPUs and APUs
Ryzen Threadripper – brand of prosumer/professional CPUs
R-series – Excavator class high-performance embedded APUs
Epyc – brand of server CPUs
Opteron – brand of microserver APUs
Graphics products
AMD's portfolio of dedicated graphics processors
Radeon – brand for consumer line of graphics cards; the brand name originated with ATI.
Mobility Radeon offers power-optimized versions of Radeon graphics chips for use in laptops.
Radeon Pro – Workstation graphics card brand. Successor to the FirePro brand.
Radeon Instinct – brand of server and workstation targeted machine learning and GPGPU products
Radeon-branded products
RAM
In 2011, AMD began selling Radeon-branded DDR3 SDRAM to support the higher bandwidth needs of AMD's APUs. While sold under the AMD brand, the RAM was manufactured by Patriot Memory and VisionTek. This was followed by higher-speed, gaming-oriented DDR3 memory in 2013. Radeon-branded DDR4 SDRAM was released in 2015, despite no AMD CPUs or APUs supporting DDR4 at the time. AMD noted in 2017 that these products are "mostly distributed in Eastern Europe" and that it continues to be active in the business.
Solid-state drives
AMD announced in 2014 it would sell Radeon branded solid-state drives manufactured by OCZ with capacities up to 480 GB and using the SATA interface.
Technologies
CPU hardware
Technologies found in AMD CPU/APU and other products include:
HyperTransport – a high-bandwidth, low-latency system bus used in AMD's CPU and APU products
Infinity Fabric – a derivative of HyperTransport used as the communication bus in AMD's Zen microarchitecture
Graphics hardware
Technologies found in AMD GPU products include:
AMD Eyefinity – facilitates multi-monitor setups of up to six monitors per graphics card
AMD FreeSync – display synchronization based on the VESA Adaptive Sync standard
AMD TrueAudio – acceleration of audio calculations
AMD XConnect – allows the use of external GPU enclosures through Thunderbolt 3
AMD CrossFire – multi-GPU technology allowing the simultaneous use of multiple GPUs
Unified Video Decoder (UVD) – acceleration of video decompression (decoding)
Video Coding Engine (VCE) – acceleration of video compression (encoding)
Software
AMD has made considerable efforts over the past decade towards opening its software tools above the firmware level.
In the mentions that follow, software not expressly stated as being free can be assumed to be proprietary.
Distribution
AMD Radeon Software is the default channel for official software distribution from AMD. It includes both free and proprietary software components, and supports both Microsoft Windows and Linux.
Software by type
CPU
AOCC is AMD's optimizing proprietary C/C++ compiler based on LLVM and available for Linux.
AMDuProf is AMD's CPU performance and power profiling tool suite, available for Linux and Windows.
AMD has also taken an active part in developing coreboot, an open-source project aimed at replacing the proprietary BIOS firmware. This cooperation ceased in 2013, but AMD has indicated recently that it is considering releasing source code so that Ryzen can be compatible with coreboot in the future.
GPU
AMD's most notable public software is on the GPU side.
AMD has opened both its graphic and compute stacks:
GPUOpen is AMD's graphics stack, which includes, for example, FidelityFX Super Resolution.
ROCm (Radeon Open Compute platform) is AMD's compute stack for machine learning and high-performance computing, based on the LLVM compiler technologies. Under the ROCm project, AMDgpu is AMD's open-source device driver supporting GCN and later architectures, available for Linux. This driver component is used by both the graphics and compute stacks.
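As a concrete illustration of the compute stack in use, the following is a minimal sketch assuming a ROCm build of PyTorch (an assumption for illustration, not a detail from the text above); such builds expose the HIP backend through the familiar torch.cuda API.

```python
# Minimal check that a ROCm build of PyTorch can see an AMD GPU.
# Assumes a ROCm (not CUDA) build; on ROCm builds the HIP backend
# is reached through the torch.cuda namespace.
import torch

print(torch.version.hip)          # HIP/ROCm version string on ROCm builds, None on CUDA builds
print(torch.cuda.is_available())  # True if an AMD GPU is visible through ROCm

if torch.cuda.is_available():
    x = torch.randn(1024, 1024, device="cuda")  # "cuda" maps to the HIP device on ROCm
    y = x @ x                                    # matrix multiply runs on the GPU
    print(y.device)
```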
Other
AMD conducts open research on heterogeneous computing.
Other AMD software includes the AMD Core Math Library and open-source software such as the AMD Performance Library.
AMD contributes to open-source projects, including working with Sun Microsystems to enhance OpenSolaris and Sun xVM on the AMD platform. AMD also maintains its own Open64 compiler distribution and contributes its changes back to the community.
In 2008, AMD released the low-level programming specifications for its GPUs, and works with the X.Org Foundation to develop drivers for AMD graphics cards.
Extensions for software parallelism (xSP), aimed at speeding up programs by enabling multi-threaded and multi-core processing, was announced at Technology Analyst Day 2007. One of the initiatives, discussed since August 2007, is Light Weight Profiling (LWP), which provides a hardware monitor that runtimes can use to observe an executing process and to help redesign software for multi-core and multi-threaded optimization. Another is SSE5, an extension of the Streaming SIMD Extensions (SSE) instruction set.
SIMFIRE (codename) – an interoperability testing tool for the Desktop and mobile Architecture for System Hardware (DASH) open architecture.
Production and fabrication
Previously, AMD produced its chips at company-owned semiconductor foundries. The company pursued a strategy of collaboration with other semiconductor manufacturers, such as IBM and Motorola, to co-develop production technologies. AMD's founder Jerry Sanders termed this the "Virtual Gorilla" strategy, intended to compete with Intel's significantly greater investments in fabrication.
In 2008, AMD spun off its chip foundries into an independent company named GlobalFoundries. This breakup of the company was attributed to the increasing costs of each process node. The Emirate of Abu Dhabi acquired the newly created company through its subsidiary Advanced Technology Investment Company (ATIC), purchasing the final stake from AMD in 2009.
With the spin-off of its foundries, AMD became a fabless semiconductor manufacturer, designing products to be produced at for-hire foundries. Part of the GlobalFoundries spin-off included an agreement with AMD to produce some number of products at GlobalFoundries. Both before and after the spin-off, AMD has pursued production with other foundries, including TSMC and Samsung. It has been argued that this reduces risk for AMD by decreasing its dependence on any one foundry, which has caused issues in the past.
In 2018, AMD started shifting the production of their CPUs and GPUs to TSMC, following GlobalFoundries' announcement that they were halting development of their 7 nm process. AMD revised their wafer purchase requirement with GlobalFoundries in 2019, allowing AMD to freely choose foundries for 7 nm nodes and below, while maintaining purchase agreements for 12 nm and above through 2021.
Corporate affairs
Partnerships
AMD uses strategic industry partnerships to further its business interests and to rival Intel's dominance and resources:
A partnership between AMD and Alpha Processor Inc. developed HyperTransport, a point-to-point interconnect standard which was turned over to an industry standards body for finalization. It is now used in modern motherboards that are compatible with AMD processors.
AMD also formed a strategic partnership with IBM, under which AMD gained silicon on insulator (SOI) manufacturing technology, and detailed advice on 90 nm implementation. AMD announced that the partnership would extend to 2011 for 32 nm and 22 nm fabrication-related technologies.
To facilitate processor distribution and sales, AMD is loosely partnered with end-user companies, such as HP, Dell, Asus, Acer, and Microsoft.
In 1993, AMD established a 50–50 partnership with Fujitsu called FASL, which was merged into a new company, FASL LLC, in 2003. The joint venture went public under the name Spansion and ticker symbol SPSN in December 2005, with AMD shares dropping 37%. AMD no longer directly participates in the Flash memory devices market, as it entered into a non-competition agreement on December 21, 2005, with Fujitsu and Spansion, pursuant to which it agreed not to directly or indirectly engage in a business that manufactures or supplies standalone semiconductor devices (including single-chip, multiple-chip or system devices) containing only Flash memory.
On May 18, 2006, Dell announced that it would roll out new servers based on AMD's Opteron chips by year's end, thus ending an exclusive relationship with Intel. In September 2006, Dell began offering AMD Athlon X2 chips in their desktop lineup.
In June 2011, HP announced new business and consumer notebooks equipped with the latest versions of AMD's accelerated processing units (APUs). AMD APUs would also power HP business notebook lines that had previously been Intel-based.
In the spring of 2013, AMD announced that it would be powering all three major next-generation consoles. The Xbox One and Sony PlayStation 4 are both powered by a custom-built AMD APU, and the Nintendo Wii U is powered by an AMD GPU. According to AMD, having its processors in all three of these consoles would greatly assist developers with cross-platform development across competing consoles and PCs, and would increase support for its products across the board.
AMD has entered into an agreement with Hindustan Semiconductor Manufacturing Corporation (HSMC) for the production of AMD products in India.
AMD is a founding member of the HSA Foundation, which aims to ease the use of Heterogeneous System Architecture (HSA), an architecture intended to use both central processing units and graphics processors to complete computational tasks.
AMD announced in 2016 that it was creating a joint venture to produce x86 server chips for the Chinese market.
On May 7, 2019, it was reported that the U.S. Department of Energy, Oak Ridge National Laboratory, and Cray Inc., are working in collaboration with AMD to develop the Frontier exascale supercomputer. Featuring the AMD Epyc CPUs and Radeon GPUs, the supercomputer is set to produce more than 1.5 exaflops (peak double-precision) in computing performance. It is expected to debut sometime in 2021.
On March 5, 2020, it was announced that the U.S. Department of Energy, Lawrence Livermore National Laboratory, and HPE are working in collaboration with AMD to develop the El Capitan exascale supercomputer. Featuring the AMD Epyc CPUs and Radeon GPUs, the supercomputer is set to produce more than 2 exaflops (peak double-precision) in computing performance. It is expected to debut in 2023.
In the summer of 2020, it was reported that AMD would be powering the next-generation console offerings from Microsoft and Sony.
On November 8, 2021, AMD announced a partnership with Meta, which would use AMD Epyc chips in the data centers underpinning its metaverse plans.
In January 2022, AMD partnered with Samsung to develop a mobile processor for use in future products. The processor, named the Exynos 2200, integrates a GPU based on AMD's RDNA 2 architecture.
Litigation with Intel
AMD has a long history of litigation with former (and current) partner and x86 creator Intel.
In 1986, Intel broke an agreement it had with AMD to allow them to produce Intel's microchips for IBM; AMD filed for arbitration in 1987 and the arbitrator decided in AMD's favor in 1992. Intel disputed this, and the case ended up in the Supreme Court of California. In 1994, that court upheld the arbitrator's decision and awarded damages for breach of contract.
In 1990, Intel brought a copyright infringement action alleging illegal use of its 287 microcode. The case ended in 1994 with a jury finding for AMD and its right to use Intel's microcode in its microprocessors through the 486 generation.
In 1997, Intel filed suit against AMD and Cyrix Corp. for misuse of the term MMX. AMD and Intel settled, with AMD acknowledging MMX as a trademark owned by Intel, and with Intel granting AMD rights to market the AMD K6 MMX processor.
In 2005, following an investigation, the Japan Fair Trade Commission found Intel guilty of a number of violations. On June 27, 2005, AMD won an antitrust suit against Intel in Japan, and on the same day, AMD filed a broad antitrust complaint against Intel in the U.S. Federal District Court in Delaware. The complaint alleges systematic use of secret rebates, special discounts, threats, and other means by Intel to lock AMD processors out of the global market. Since the start of this action, the court has issued subpoenas to major computer manufacturers including Acer, Dell, Lenovo, HP and Toshiba.
In November 2009, Intel agreed to pay AMD $1.25 billion and renew a five-year patent cross-licensing agreement as part of a deal to settle all outstanding legal disputes between them.
Guinness World Record achievement
On August 31, 2011, in Austin, Texas, AMD achieved a Guinness World Record for the "Highest frequency of a computer processor": 8.429 GHz. The company ran an 8-core FX-8150 processor with only one active module (two cores), cooled with liquid helium. The previous record, 8.308 GHz, was set with an Intel Celeron D 352 (one core).
On November 1, 2011, geek.com reported that Andre Yang, an overclocker from Taiwan, used an FX-8150 to set another record: 8.461 GHz.
On November 19, 2012, Andre Yang used an FX-8350 to set another record: 8.794 GHz.
Corporate responsibility
In its 2022 report, AMD stated that it aimed to embed environmental sustainability across its business, promote safe and responsible workplaces in its global supply chain, and advance stronger communities.
In 2022, AMD achieved a 19 percent reduction in its Scope 1 and 2 GHG emissions compared to 2020, based on AMD calculations that were third-party verified (limited level assurance).
Other initiatives
AMD co-founded The Green Grid, together with IBM, Sun and Microsoft, to seek lower power consumption in data centers and other computing infrastructure.
Sponsorships
AMD's sponsorship of Formula 1 racing began in 2002; since 2020, it has sponsored the Mercedes-AMG Petronas team. AMD has also sponsored the BMW Sauber and Scuderia Ferrari Formula 1 teams, alongside co-sponsors such as Intel, Vodafone, AT&T, Pernod Ricard and Diageo. On 18 April 2018, AMD began a multi-year sponsorship with Scuderia Ferrari. In February 2020, just prior to the start of the 2020 race season, the Mercedes Formula 1 team announced it was adding AMD to its sponsorship portfolio.
AMD began a sponsorship deal with Victory Five (V5) for the League of Legends Pro League (LPL) in 2022. AMD was a sponsor of the Chinese Dota Pro Circuit together with Perfect World.
In February 2024, AMD was a Diamond sponsor for the World Artificial Intelligence Cannes Festival (WAICF).
AMD was a Platinum sponsor for the HPE Discover 2024, an event hosted by Hewlett Packard Enterprise to showcase technology for government and business customers. The event was held from 17 to 20 June 2024 in Las Vegas.
|
;1969 establishments in California;1970s initial public offerings;American companies established in 1969;Companies based in Santa Clara, California;Companies formerly listed on the New York Stock Exchange;Companies in the Nasdaq-100;Companies listed on the Nasdaq;Computer companies established in 1969;Computer companies of the United States;Computer hardware companies;Electronics companies established in 1969;Fabless semiconductor companies;Graphics hardware companies;HSA Foundation founding members;Manufacturing companies based in the San Francisco Bay Area;Motherboard companies;Semiconductor companies of the United States;Superfund sites in California;Technology companies based in the San Francisco Bay Area;Technology companies established in 1969
|
https://en.wikipedia.org/wiki/Aon%20%28company%29
|
Aon plc is a British-American professional services firm that offers a range of risk-mitigation products. Aon has over 66,000 employees across 120 countries.
Founded in Chicago by Patrick Ryan, Aon was created in 1982 when the Ryan Insurance Group merged with the Combined Insurance Company of America under W. Clement Stone. In 1987, the holding company was renamed Aon from aon, a Gaelic word meaning "one". The company is globally headquartered in London with its North America operations based in Chicago at the Aon Center. Aon is listed on the New York Stock Exchange under AON with a market cap of $65 billion in April 2023.
History
W. Clement Stone's mother bought a small Detroit insurance agency, and in 1918 brought her son into the business. Mr. Stone sold low-cost, low-benefit accident insurance, underwriting and issuing policies on-site. The next year he founded his own agency, the Combined Registry Co.
As the Great Depression began, Stone reduced his workforce and improved training. Forced by his son's respiratory illness to winter in the South, Stone moved to Arkansas and Texas. In 1939 he bought American Casualty Insurance Co. of Dallas, Texas. It was consolidated with other purchases as the Combined Insurance Co. of America in 1947. The company continued through the 1950s and 1960s, continuing to sell health and accident policies. In the 1970s, Combined expanded overseas despite being hit hard by the recession.
In 1982, after 10 years of stagnation under Clement Stone Jr., the elder Stone, then 79, resumed control until the completion of a merger with Ryan Insurance Co. allowed him to transfer control to Patrick Ryan. Ryan, the son of a Ford dealer in Wisconsin and a graduate of Northwestern University, had started his company as an auto credit insurer in 1964. In 1976, the company bought the insurance brokerage units of the Esmark conglomerate. Ryan focused on insurance brokering and added more upscale insurance products. He also trimmed staff and took other cost-cutting measures, and in 1987 he changed Combined's name to Aon. In 1992, he bought Dutch insurance broker Hudig-Langeveldt. In 1995, the company sold its remaining direct life insurance holdings to General Electric to focus on consulting.
Aon built a global presence through purchases. In 1997, it bought The Minet Group, as well as the insurance brokerage A&A Services, Inc., founded by Alexander Howden in the late 19th century. It was then that insurance broker David Howden reclaimed the family brand name for the Howden Group. These transactions made Aon (temporarily) the largest insurance broker worldwide. The firm made no US buys in 1998, but doubled its employee base through purchases including Spain's largest retail insurance broker, Gil y Carvajal, and through the formation of Aon Korea.
Responding to industry demands, Aon announced its new fee disclosure policy in 1999, and the company reorganised to focus on buying personal line insurance firms and to integrate its acquisitions. That year it bought Nikols Sedgwick Group, an Italian insurance firm, and formed RiskAttack (with Zurich US), a risk analysis and financial management concern aimed at technology companies. The cost of integrating its numerous purchases, however, hammered profits in 1999.
Despite its troubles, in 2000 Aon bought Reliance Group's accident and health insurance business, as well as Actuarial Sciences Associates, a compensation and employee benefits consulting company. Later in that year, however, the company decided to cut 6% of its workforce as part of a restructuring effort. In 2003, the company saw revenues increase primarily because of rate hikes in the insurance industry. Also that year, Endurance Specialty, a Bermuda-based underwriting operation that Aon helped to establish in November 2001 along with other investors, went public. The next year Aon sold most of its holdings in Endurance.
In late 2007, Aon announced the divestiture of its underwriting business. With this move, the firm sold off its two major underwriting subsidiaries: Combined Insurance Company of America (acquired by ACE Limited for $2.4 billion) and Sterling Life Insurance Company (purchased by Munich Re Group for $352 million). The low-margin and capital-intensive nature of the underwriting industry was the primary reason for the firm's decision to divest.
This growth strategy manifested in November 2008, when Aon announced it had acquired reinsurance intermediary and capital advisor Benfield Group Limited for $1.75 billion. The acquisition amplified the firm's broking capabilities, positioning Aon as one of the largest players in the reinsurance brokerage industry.
In 2010, Aon made its most significant acquisition to date with the purchase of Hewitt Associates for $4.9 billion. Aside from drastically boosting Aon's human resources consulting capacity and entering the firm into the business process outsourcing industry, the move added 23,000 colleagues and more than $3 billion in revenue.
In January 2012, Aon announced that its headquarters would be moved to London, although North American operations and jobs remained in Chicago.
On 10 February 2017, Aon announced that it was selling its employee benefits outsourcing business to private equity firm The Blackstone Group for US$4.8 billion (£3.8 billion).
In February 2020, Aon named Eric Andersen as president of Aon after co-president Michael O'Connor departed the company to pursue new opportunities. Andersen reports to Greg Case, the firm's CEO.
In June 2020, Aon announced it was planning to repay the temporary 20% pay cut that had been applied to around 70% of employees, announced in an April 2020 statement regarding the COVID-19 pandemic. On 30 June 2020, Aon announced it would repay staff in full, plus 5% of the withheld amount.
In June 2020, Willis Towers Watson called its shareholders to two meetings, on August 26, 2020, to discuss its acquisition by Aon. It was revealed that the US Department of Justice had requested more information on the deal under antitrust rules.
September 11 attacks
Aon's New York offices were on the 92nd and 98th–105th floors of the South Tower of the World Trade Center at the time of the September 11 attacks. When the North Tower was struck by American Airlines Flight 11 at 8:46 a.m., an evacuation of Aon's offices was quickly initiated by executive Eric Eisenberg, and 924 of the estimated 1,100 Aon employees present at the time managed to get below the 77th floor before United Airlines Flight 175 crashed between Floors 77 and 85 at 9:03 a.m.
Many, however, did not manage to get below the impact zone in the 17 minutes they had between the two impacts. As a result, 176 employees of Aon were killed in the crash, in the eventual collapse of the tower, or from smoke inhalation. At 9:59 a.m., the tower collapsed, killing any survivors still within, including Eisenberg and Kevin Cosgrove.
Spitzer investigation
In 2004–2005, Aon, along with other brokers including Marsh & McLennan and Willis, fell under regulatory investigation under New York Attorney General Eliot Spitzer and other state attorneys general. At issue was the practice of insurance companies' payments to brokers (known as contingent commissions). The payments were thought to bring a conflict of interest, swaying broker decisions on behalf of carriers, rather than customers. In the spring of 2005, without acknowledging any wrongdoing, Aon agreed to a $190 million settlement, payable over 30 months.
UK regulatory breach
In January 2009, Aon was fined £5.69 million in the UK by the Financial Services Authority, who stated that the fine related to the company's inadequate bribery and corruption controls, claiming that between 14 January 2005 and 30 September 2007 Aon had failed to properly assess the risks involved in its dealings with overseas firms and individuals. The Authority did not find that any money had actually made its way to illegal organisations. Aon qualified for a 30% discount on the fine as a result of its cooperation with the investigation. Aon said its conduct was not deliberate, adding it had since "significantly strengthened and enhanced its controls around the usage of third parties".
US Foreign Corrupt Practices Act violations
In December 2011, Aon Corporation paid a $16.26 million penalty to the U.S. Securities and Exchange Commission and the U.S. Department of Justice for violations of the US Foreign Corrupt Practices Act.
According to the Securities and Exchange Commission, Aon's subsidiaries made improper payments of over $3.6 million to government officials and third-party facilitators in Costa Rica, Egypt, Vietnam, Indonesia, the United Arab Emirates, Myanmar and Bangladesh, between 1983 and 2007, to obtain and retain insurance contracts.
Major acquisitions
On 5 January 2007, Aon announced that its Aon Affinity group had acquired the WedSafe Wedding Insurance program.
On 22 August 2008, Aon announced that it had acquired London-based Benfield Group. The acquiring price was US$1.75 billion or £935 million, with US$170 million of debt.
On 5 March 2010, Hewitt Associates announced that it acquired Senior Educators Ltd. The acquisition offers companies a new way to address retiree medical insurance commitments.
On 12 July 2010, Aon announced that it had agreed to buy Lincolnshire, Illinois-based Hewitt Associates for $4.9 billion in cash and stock.
On 7 April 2011, Aon announced that it had acquired Johannesburg, South Africa-based Glenrand MIB. Financial terms were not disclosed.
On 19 July 2011, Aon announced that it bought Westfield Financial Corp., the owner of insurance-industry consulting firm Ward Financial Group, from Ohio Farmers Insurance Co. Financial terms were not disclosed.
On 22 October 2012, Aon announced that it agreed to buy OmniPoint, Inc, a Workday consulting firm. Financial terms were not disclosed.
On 16 June 2014, Aon announced that it agreed to buy National Flood Services, Inc., a large processor of flood insurance, from Stoneriver Group, L.P.
On 31 October 2016, Aon's Aon Risk Solutions completed acquisition of Stroz Friedberg LLC, a specialised risk management firm focusing on cybersecurity.
On 14 November 2016, Aon acquired CoCubes an online Indian Assessment firm, facilitating hiring of entry-level engineering graduates.
On 10 February 2017, Aon plc agreed to sell its human resources outsourcing platform for US$4.8 billion (£3.8 billion) to Blackstone Group L.P. (BX.N), creating a new company called Alight Solutions.
In September 2017, Aon announced its intent to purchase real estate investment management firm The Townsend Group from Colony NorthStar for $475 million, expanding Aon's property investment management portfolio.
On 9 March 2020, Aon announced its merger with Willis Towers Watson for nearly $30 billion in an all-stock deal that would have created the world's largest insurance broker. As of 21 May 2020, the Willis board was under investigation over the merger agreement with Aon. The deal was called off in July 2021.
In December 2023, Aon agreed to acquire NFP, a middle-market provider of risk, benefits, wealth and retirement plan advisory services company, for $13.4 billion.
In March 2024, Aon plc acquired the technology assets and intellectual property of Humn.ai, an AI-powered platform, to enhance its commercial fleet proposition.
Operations
Manchester United
On 3 June 2009, it was reported that Aon had signed a four-year shirt sponsorship deal with English football giant Manchester United. On 1 June 2010, Aon replaced American insurance company AIG as the principal sponsor of the club. The Aon logo was prominently displayed on the front of the club's shirts until the 2014/2015 season when Chevrolet replaced them. The deal was said to be worth £80 million over four years, replacing United's deal with AIG as the most lucrative shirt deal in history at the time.
In April 2013, Aon signed a new eight-year deal with Manchester United to rename their training ground as the Aon Training Complex and sponsor the club's training kits, reportedly worth £180 million to the club.
Awards
Aon was awarded Investment Consultancy of the Year and Fiduciary Manager of the Year at the FT's 2014 Pension and Investment Provider Awards
Aon received a perfect score on the Human Rights Campaign's 2013 Corporate Equality Index
Aon was named to Working Mother's list of the 100 Best Companies for 2012
Aon Risk Solutions was the most recommended broker in 2012 for service and expertise by middle market buyers in Business Insurance's Buyers Choice Awards
Aon Risk Solutions was named Broker of the Year and Training Programme of the Year in 2012 by Insurance Times
Aon Benfield was named 2012 European Reinsurance Broker of the Year, Best European Property Reinsurance Broker and Best European Casualty Reinsurance Broker at the European Intelligent Insurer Awards
Aon Benfield was named Best Global Reinsurance Broking Company for Analytics at Reactions Global Awards 2012
Aon Hewitt was named Top Retirement Consultant of 2012 by PLANSPONSOR Magazine
Aon Hewitt was named Actuarial and Investment Consultant of the Year for 2012 at the Professional Pensions Awards
|
1982 establishments in Michigan;Actuarial firms;British brands;Companies based in London;Companies listed on the New York Stock Exchange;Consulting firms established in 1982;Consulting firms of the United States;Dual-listed companies;Financial services companies based in the City of London;Financial services companies established in 1982;Financial services companies of the United States;Human resource management consulting firms;Insurance companies of the United Kingdom;International management consulting firms;Management consulting firms of the United Kingdom;Risk management companies;Tax inversions
|
https://en.wikipedia.org/wiki/Analytical%20chemistry
|
Analytical chemistry studies and uses instruments and methods to separate, identify, and quantify matter. In practice, separation, identification or quantification may constitute the entire analysis or be combined with another method. Separation isolates analytes. Qualitative analysis identifies analytes, while quantitative analysis determines the numerical amount or concentration.
Analytical chemistry consists of classical, wet chemical methods and modern analytical techniques. Classical qualitative methods use separations such as precipitation, extraction, and distillation. Identification may be based on differences in color, odor, melting point, boiling point, solubility, radioactivity or reactivity. Classical quantitative analysis uses mass or volume changes to quantify amount. Instrumental methods may be used to separate samples using chromatography, electrophoresis or field flow fractionation. Then qualitative and quantitative analysis can be performed, often with the same instrument and may use light interaction, heat interaction, electric fields or magnetic fields. Often the same instrument can separate, identify and quantify an analyte.
Analytical chemistry is also focused on improvements in experimental design, chemometrics, and the creation of new measurement tools. Analytical chemistry has broad applications to medicine, science, and engineering.
History
Analytical chemistry has been important since the early days of chemistry, providing methods for determining which elements and chemicals are present in the object in question. During this period, significant contributions to analytical chemistry included the development of systematic elemental analysis by Justus von Liebig and systematized organic analysis based on the specific reactions of functional groups.
The first instrumental analysis was flame emissive spectrometry developed by Robert Bunsen and Gustav Kirchhoff who discovered rubidium (Rb) and caesium (Cs) in 1860.
Most of the major developments in analytical chemistry took place after 1900. During this period, instrumental analysis became progressively dominant in the field. In particular, many of the basic spectroscopic and spectrometric techniques were discovered in the early 20th century and refined in the late 20th century.
The separation sciences follow a similar time line of development and also became increasingly transformed into high performance instruments. In the 1970s many of these techniques began to be used together as hybrid techniques to achieve a complete characterization of samples.
Starting in the 1970s, analytical chemistry became progressively more inclusive of biological questions (bioanalytical chemistry), whereas it had previously been largely focused on inorganic or small organic molecules. Lasers have been increasingly used as probes and even to initiate and influence a wide variety of reactions. The late 20th century also saw an expansion of the application of analytical chemistry from somewhat academic chemical questions to forensic, environmental, industrial and medical questions, such as in histology.
Modern analytical chemistry is dominated by instrumental analysis. Many analytical chemists focus on a single type of instrument. Academics tend to either focus on new applications and discoveries or on new methods of analysis. The discovery of a chemical present in blood that increases the risk of cancer would be a discovery that an analytical chemist might be involved in. An effort to develop a new method might involve the use of a tunable laser to increase the specificity and sensitivity of a spectrometric method. Many methods, once developed, are kept purposely static so that data can be compared over long periods of time. This is particularly true in industrial quality assurance (QA), forensic and environmental applications. Analytical chemistry plays an increasingly important role in the pharmaceutical industry where, aside from QA, it is used in the discovery of new drug candidates and in clinical applications where understanding the interactions between the drug and the patient is critical.
Classical methods
Although modern analytical chemistry is dominated by sophisticated instrumentation, the roots of analytical chemistry and some of the principles used in modern instruments are from traditional techniques, many of which are still used today. These techniques also tend to form the backbone of most undergraduate analytical chemistry educational labs.
Qualitative analysis
Qualitative analysis determines the presence or absence of a particular compound, but not the mass or concentration. By definition, qualitative analyses do not measure quantity.
Chemical tests
There are numerous qualitative chemical tests, for example, the acid test for gold and the Kastle-Meyer test for the presence of blood.
Flame test
Inorganic qualitative analysis generally refers to a systematic scheme to confirm the presence of certain aqueous ions or elements by performing a series of reactions that eliminate a range of possibilities and then confirm suspected ions with a confirming test. Sometimes small carbon-containing ions are included in such schemes. With modern instrumentation, these tests are rarely used but can be useful for educational purposes and in fieldwork or other situations where access to state-of-the-art instruments is not available or expedient.
Quantitative analysis
Quantitative analysis is the measurement of the quantities of particular chemical constituents present in a substance. Quantities can be measured by mass (gravimetric analysis) or volume (volumetric analysis).
Gravimetric analysis
Gravimetric analysis involves determining the amount of material present by weighing the sample before and/or after some transformation. A common example used in undergraduate education is the determination of the amount of water in a hydrate by heating the sample to remove the water, such that the difference in weight is due to the loss of water.
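A minimal sketch of the arithmetic, using illustrative masses rather than figures from the text:

```python
# Gravimetric determination of water in a hydrate: weigh before and
# after heating to constant mass; the lost mass is the water driven off.
# Masses below are illustrative.
mass_before_g = 2.500  # hydrated sample
mass_after_g = 1.594   # anhydrous residue after heating

mass_water_g = mass_before_g - mass_after_g
percent_water = 100 * mass_water_g / mass_before_g
print(f"water content: {percent_water:.1f}%")  # ~36.2%
```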
Volumetric analysis
Titration involves the gradual addition of a measurable reactant to an exact volume of a solution being analyzed until some equivalence point is reached. Titration is a family of techniques used to determine the concentration of an analyte. Titrating accurately to either the half-equivalence point or the endpoint of a titration allows the chemist to determine the number of moles used, which can then be used to determine a concentration or composition of the titrant. Most familiar to those who have taken chemistry during secondary education is the acid-base titration involving a color-changing indicator, such as phenolphthalein. There are many other types of titrations, for example, potentiometric titrations or precipitation titrations. Chemists might also create titration curves, systematically measuring the pH after each drop of titrant, in order to understand different properties of the titrand.
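A minimal sketch of the equivalence-point arithmetic for a 1:1 acid-base titration, with illustrative values:

```python
# At the equivalence point of a 1:1 titration, moles of titrant
# delivered equal moles of analyte. Values are illustrative.
titrant_molarity = 0.1000   # mol/L NaOH
titrant_volume_mL = 25.00   # volume delivered at the endpoint
analyte_volume_mL = 20.00   # volume of the acid sample titrated

moles_titrant = titrant_molarity * titrant_volume_mL / 1000
analyte_molarity = moles_titrant / (analyte_volume_mL / 1000)
print(f"analyte concentration: {analyte_molarity:.4f} mol/L")  # 0.1250 mol/L
```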
Instrumental methods
Spectroscopy
Spectroscopy measures the interaction of the molecules with electromagnetic radiation. Spectroscopy consists of many different applications such as atomic absorption spectroscopy, atomic emission spectroscopy, ultraviolet-visible spectroscopy, X-ray spectroscopy, fluorescence spectroscopy, infrared spectroscopy, Raman spectroscopy, dual polarization interferometry, nuclear magnetic resonance spectroscopy, photoemission spectroscopy, Mössbauer spectroscopy and so on.
Mass spectrometry
Mass spectrometry measures mass-to-charge ratio of molecules using electric and magnetic fields. In a mass spectrometer, a small amount of sample is ionized and converted to gaseous ions, where they are separated and analyzed according to their mass-to-charge ratios. There are several ionization methods: electron ionization, chemical ionization, electrospray ionization, fast atom bombardment, matrix-assisted laser desorption/ionization, and others. Also, mass spectrometry is categorized by approaches of mass analyzers: magnetic-sector, quadrupole mass analyzer, quadrupole ion trap, time-of-flight, Fourier transform ion cyclotron resonance, and so on.
Electrochemical analysis
Electroanalytical methods measure the potential (volts) and/or current (amps) in an electrochemical cell containing the analyte. These methods can be categorized according to which aspects of the cell are controlled and which are measured. The four main categories are potentiometry (the difference in electrode potentials is measured), coulometry (the transferred charge is measured over time), amperometry (the cell's current is measured over time), and voltammetry (the cell's current is measured while actively altering the cell's potential).
In summary: potentiometry measures the cell's potential, coulometry measures the charge transferred by the cell, amperometry measures the cell's current, and voltammetry measures the change in the cell's current as the cell's potential is varied.
Thermal analysis
Calorimetry and thermogravimetric analysis measure the interaction of a material and heat.
Separation
Separation processes are used to decrease the complexity of material mixtures. Chromatography, electrophoresis and field flow fractionation are representative of this field.
Chromatographic assays
Chromatography can be used to determine the presence of substances in a sample, as different components in a mixture have different tendencies to adsorb onto the stationary phase or dissolve in the mobile phase. Thus, different components of the mixture move at different speeds. Components of a mixture can therefore be identified by their respective Rƒ values: the ratio between the migration distance of the substance and the migration distance of the solvent front during chromatography.
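A minimal sketch of the Rƒ calculation, with illustrative distances:

```python
# Retention factor (Rf): ratio of the analyte's migration distance to
# the solvent front's migration distance. Distances are illustrative.
spot_distance_cm = 3.1   # how far the analyte spot travelled
front_distance_cm = 5.0  # how far the solvent front travelled

rf = spot_distance_cm / front_distance_cm
print(f"Rf = {rf:.2f}")  # 0.62; dimensionless, between 0 and 1
```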
In combination with instrumental methods, chromatography can be used in the quantitative determination of substances. Chromatography separates the analyte from the rest of the sample so that it may be measured without interference from other compounds. Different types of chromatography differ in the media they use to separate the analyte from the sample. In thin-layer chromatography, the analyte mixture moves up and separates along a coated sheet under a volatile mobile phase. In gas chromatography, a carrier gas separates the volatile analytes. A common method of chromatography using a liquid mobile phase is high-performance liquid chromatography.
Hybrid techniques
Combinations of the above techniques produce a "hybrid" or "hyphenated" technique. Several examples are in popular use today and new hybrid techniques are under development. For example, gas chromatography-mass spectrometry, gas chromatography-infrared spectroscopy, liquid chromatography-mass spectrometry, liquid chromatography-NMR spectroscopy, liquid chromatography-infrared spectroscopy, and capillary electrophoresis-mass spectrometry.
Hyphenated separation techniques refer to a combination of two (or more) techniques to detect and separate chemicals from solutions. Most often the other technique is some form of chromatography. Hyphenated techniques are widely used in chemistry and biochemistry. A slash is sometimes used instead of hyphen, especially if the name of one of the methods contains a hyphen itself.
Microscopy
The visualization of single molecules, single cells, biological tissues, and nanomaterials is an important and attractive approach in analytical science. Hybridization with other traditional analytical tools is also revolutionizing the field. Microscopy can be categorized into three different fields: optical microscopy, electron microscopy, and scanning probe microscopy. The field has recently been progressing rapidly, owing to the rapid development of the computer and camera industries.
Lab-on-a-chip
Lab-on-a-chip devices integrate (multiple) laboratory functions on a single chip of only millimeters to a few square centimeters in size and are capable of handling extremely small fluid volumes, down to less than a picoliter.
Errors
Error can be defined as the numerical difference between the observed value and the true value. The experimental error can be divided into two types: systematic error and random error. Systematic error results from a flaw in equipment or the design of an experiment, while random error results from uncontrolled or uncontrollable variables in the experiment.
In chemical analysis, the true value and the observed value are related by the equation

$E_a = x_O - x_T$

where
$E_a$ is the absolute error,
$x_T$ is the true value, and
$x_O$ is the observed value.
The error of a measurement is an inverse measure of its accuracy: the smaller the error, the greater the accuracy of the measurement.
Errors can also be expressed relatively, using the relative error $E_r$:

$E_r = \frac{E_a}{x_T} = \frac{x_O - x_T}{x_T}$

The percent error can also be calculated:

$\%E = E_r \times 100\%$

If we want to use these values in a function, we may also want to calculate the error of the function. Let $f$ be a function of the variables $x_1, \ldots, x_n$. The propagation of uncertainty must be calculated in order to know the error in $f$:

$\sigma_f^2 = \sum_{i=1}^{n} \left( \frac{\partial f}{\partial x_i} \right)^2 \sigma_{x_i}^2$
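A minimal sketch applying the propagation formula above to the function f(x, y) = x·y, with illustrative values and uncertainties:

```python
# First-order propagation of uncertainty for f(x, y) = x * y:
# sigma_f^2 = (df/dx)^2 sigma_x^2 + (df/dy)^2 sigma_y^2.
# Values are illustrative.
import math

x, sigma_x = 10.0, 0.1
y, sigma_y = 5.0, 0.2

f = x * y
# Partial derivatives: df/dx = y and df/dy = x.
sigma_f = math.sqrt((y * sigma_x) ** 2 + (x * sigma_y) ** 2)
print(f"f = {f:.1f} +/- {sigma_f:.1f}")  # 50.0 +/- 2.1
```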
Standards
Standard curve
A general method for analysis of concentration involves the creation of a calibration curve. This allows for the determination of the amount of a chemical in a material by comparing the results of an unknown sample to those of a series of known standards. If the concentration of an element or compound in a sample is too high for the detection range of the technique, it can simply be diluted in a pure solvent. If the amount in the sample is below an instrument's range of measurement, the method of standard addition can be used. In this method, a known quantity of the element or compound under study is added, and the difference between the concentration added and the concentration observed is the amount actually in the sample.
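A minimal sketch of a linear calibration curve, using a least-squares fit over illustrative standards:

```python
# Fit instrument response against known standard concentrations,
# then invert the fit to read off an unknown. Numbers are illustrative.
import numpy as np

conc_std = np.array([0.0, 1.0, 2.0, 4.0, 8.0])         # standards, mg/L
signal_std = np.array([0.02, 0.21, 0.39, 0.80, 1.62])  # instrument response

slope, intercept = np.polyfit(conc_std, signal_std, 1)  # linear fit

signal_unknown = 0.55
conc_unknown = (signal_unknown - intercept) / slope
print(f"unknown concentration: {conc_unknown:.2f} mg/L")
```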
Internal standards
Sometimes an internal standard is added at a known concentration directly to an analytical sample to aid in quantitation. The amount of analyte present is then determined relative to the internal standard as a calibrant. An ideal internal standard is an isotopically enriched analyte which gives rise to the method of isotope dilution.
Standard addition
The method of standard addition is used in instrumental analysis to determine the concentration of a substance (analyte) in an unknown sample by comparison to a set of samples of known concentration, similar to using a calibration curve. Standard addition can be applied to most analytical techniques and is used instead of a calibration curve to solve the matrix effect problem.
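A minimal sketch of the standard-addition calculation, assuming equal final volumes for each spiked aliquot and illustrative numbers:

```python
# Spike aliquots of the sample with increasing known amounts of analyte,
# fit signal vs. added concentration, and extrapolate to the x-intercept;
# its magnitude is the concentration originally present. This simple
# form assumes each spiked aliquot is diluted to the same final volume.
import numpy as np

added = np.array([0.0, 1.0, 2.0, 3.0])       # mg/L added
signal = np.array([0.30, 0.50, 0.71, 0.89])  # measured responses

slope, intercept = np.polyfit(added, signal, 1)
original_conc = intercept / slope  # |x-intercept|
print(f"sample concentration: {original_conc:.2f} mg/L")  # ~1.5 mg/L
```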
Signals and noise
One of the most important components of analytical chemistry is maximizing the desired signal while minimizing the associated noise. The analytical figure of merit is known as the signal-to-noise ratio (S/N or SNR).
Noise can arise from environmental factors as well as from fundamental physical processes.
Thermal noise
Thermal noise results from the thermal motion of charge carriers (usually electrons) in an electrical circuit. Thermal noise is white noise, meaning that its power spectral density is constant throughout the frequency spectrum.
The root mean square value of the thermal noise voltage in a resistor is given by

$v_{RMS} = \sqrt{4 k_B T R \, \Delta f}$

where $k_B$ is the Boltzmann constant, $T$ is the temperature, $R$ is the resistance, and $\Delta f$ is the bandwidth of the measurement.
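A minimal sketch evaluating the formula above for a 1 MΩ resistor at room temperature over a 10 kHz bandwidth:

```python
# Johnson-Nyquist (thermal) noise voltage: v_rms = sqrt(4 kB T R df).
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 298.0           # temperature, K
R = 1e6             # resistance, ohms
delta_f = 1e4       # measurement bandwidth, Hz

v_rms = math.sqrt(4 * k_B * T * R * delta_f)
print(f"thermal noise: {v_rms * 1e6:.1f} uV RMS")  # ~12.8 uV
```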
Shot noise
Shot noise is a type of electronic noise that occurs when the finite number of particles (such as electrons in an electronic circuit or photons in an optical device) is small enough to give rise to statistical fluctuations in a signal.
Shot noise is a Poisson process, and the charge carriers that make up the current follow a Poisson distribution. The root mean square current fluctuation is given by

$i_{RMS} = \sqrt{2 e I \, \Delta f}$

where $e$ is the elementary charge, $I$ is the average current, and $\Delta f$ is the measurement bandwidth. Shot noise is white noise.
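A minimal sketch evaluating the formula above for a 1 µA average current over a 10 kHz bandwidth:

```python
# Shot noise current: i_rms = sqrt(2 e I df).
import math

e = 1.602176634e-19  # elementary charge, C
I = 1e-6             # average current, A
delta_f = 1e4        # measurement bandwidth, Hz

i_rms = math.sqrt(2 * e * I * delta_f)
print(f"shot noise: {i_rms * 1e12:.1f} pA RMS")  # ~56.6 pA
```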
Flicker noise
Flicker noise is electronic noise with a 1/ƒ frequency spectrum: as f increases, the noise decreases. Flicker noise arises from a variety of sources, such as impurities in a conductive channel and generation and recombination noise in a transistor due to base current. This noise can be avoided by modulating the signal at a higher frequency, for example through the use of a lock-in amplifier.
Environmental noise
Environmental noise arises from the surroundings of the analytical instrument. Sources of electromagnetic noise are power lines, radio and television stations, wireless devices, compact fluorescent lamps and electric motors. Many of these noise sources are narrow bandwidth and, therefore, can be avoided. Temperature and vibration isolation may be required for some instruments.
Noise reduction
Noise reduction can be accomplished either in hardware or in software. Examples of hardware noise reduction are the use of shielded cable, analog filtering, and signal modulation. Examples of software noise reduction are digital filtering, ensemble averaging, boxcar averaging, and correlation methods.
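A minimal sketch of ensemble averaging on synthetic data; averaging n repeated scans improves S/N by roughly the square root of n, because the coherent signal adds linearly while random noise adds in quadrature:

```python
# Ensemble averaging demonstration with synthetic noisy scans.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * 5 * t)  # the "true" underlying signal

n_scans = 100
scans = signal + rng.normal(0.0, 1.0, size=(n_scans, t.size))  # noisy repeats

averaged = scans.mean(axis=0)
residual_noise = (averaged - signal).std()
print(f"S/N, single scan: {signal.std() / 1.0:.2f}")
print(f"S/N, {n_scans}-scan average: {signal.std() / residual_noise:.2f}")  # ~10x better
```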
Applications
Analytical chemistry has applications including in forensic science, bioanalysis, clinical analysis, environmental analysis, and materials analysis. Analytical chemistry research is largely driven by performance (sensitivity, detection limit, selectivity, robustness, dynamic range, linear range, accuracy, precision, and speed) and cost (purchase, operation, training, time, and space). Among the main branches of contemporary analytical atomic spectrometry, the most widespread and universal are optical and mass spectrometry. In the direct elemental analysis of solid samples, the new leaders are laser-induced breakdown and laser ablation mass spectrometry, and the related techniques with transfer of the laser ablation products into inductively coupled plasma. Advances in the design of diode lasers and optical parametric oscillators promote developments in fluorescence and ionization spectrometry and also in absorption techniques, where the use of optical cavities for increased effective absorption pathlength is expected to expand. The use of plasma- and laser-based methods is increasing. Interest in absolute (standardless) analysis has revived, particularly in emission spectrometry.
Great effort is being put into shrinking the analysis techniques to chip size (micro total analysis systems (μTAS), or lab-on-a-chip). Although there are few examples of such systems competitive with traditional analysis techniques, potential advantages include size/portability, speed, and cost. Microscale chemistry also reduces the amounts of chemicals used.
Many developments improve the analysis of biological systems. Examples of rapidly expanding fields in this area are genomics, DNA sequencing and related research in genetic fingerprinting and DNA microarrays; proteomics, the analysis of protein concentrations and modifications, especially in response to various stressors, at various developmental stages, or in various parts of the body; metabolomics, which deals with metabolites; transcriptomics, including mRNA and associated fields; lipidomics - lipids and their associated fields; peptidomics - peptides and their associated fields; and metallomics, dealing with metal concentrations and especially with their binding to proteins and other molecules.
Analytical chemistry has played a critical role in the understanding of basic science to a variety of practical applications, such as biomedical applications, environmental monitoring, quality control of industrial manufacturing, forensic science, and so on.
The recent developments in computer automation and information technologies have extended analytical chemistry into a number of new biological fields. For example, automated DNA sequencing machines were the basis for completing human genome projects leading to the birth of genomics. Protein identification and peptide sequencing by mass spectrometry opened a new field of proteomics. In addition to automating specific processes, there is effort to automate larger sections of lab testing, such as in companies like Emerald Cloud Lab and Transcriptic.
Analytical chemistry has been an indispensable area in the development of nanotechnology. Surface characterization instruments, electron microscopes and scanning probe microscopes enable scientists to visualize atomic structures with chemical characterizations.
|
;Materials science
|
https://en.wikipedia.org/wiki/Apoptosis
|
Apoptosis (from the Ancient Greek apóptōsis, "falling off") is a form of programmed cell death that occurs in multicellular organisms and in some eukaryotic, single-celled microorganisms such as yeast. Biochemical events lead to characteristic cell changes (morphology) and death. These changes include blebbing, cell shrinkage, nuclear fragmentation, chromatin condensation, DNA fragmentation, and mRNA decay. The average adult human loses 50 to 70 billion cells each day due to apoptosis. For the average human child between 8 and 14 years old, each day the approximate loss is 20 to 30 billion cells.
In contrast to necrosis, which is a form of traumatic cell death that results from acute cellular injury, apoptosis is a highly regulated and controlled process that confers advantages during an organism's life cycle. For example, the separation of fingers and toes in a developing human embryo occurs because cells between the digits undergo a form of apoptosis that is genetically determined. Unlike necrosis, apoptosis produces cell fragments called apoptotic bodies that phagocytes are able to engulf and remove before the contents of the cell can spill out onto surrounding cells and cause damage to them.
Because apoptosis cannot stop once it has begun, it is a highly regulated process. Apoptosis can be initiated through one of two pathways. In the intrinsic pathway the cell kills itself because it senses cell stress, while in the extrinsic pathway the cell kills itself because of signals from other cells. Weak external signals may also activate the intrinsic pathway of apoptosis. Both pathways induce cell death by activating caspases, which are proteases, or enzymes that degrade proteins. The two pathways both activate initiator caspases, which then activate executioner caspases, which then kill the cell by degrading proteins indiscriminately.
In addition to its importance as a biological phenomenon, defective apoptotic processes have been implicated in a wide variety of diseases. Excessive apoptosis causes atrophy, whereas an insufficient amount results in uncontrolled cell proliferation, such as cancer. Some factors like Fas receptors and caspases promote apoptosis, while some members of the Bcl-2 family of proteins inhibit apoptosis.
Discovery and etymology
German scientist Carl Vogt was first to describe the principle of apoptosis in 1842. In 1885, anatomist Walther Flemming delivered a more precise description of the process of programmed cell death. However, it was not until 1965 that the topic was resurrected. While studying tissues using electron microscopy, John Kerr at the University of Queensland was able to distinguish apoptosis from traumatic cell death. Following the publication of a paper describing the phenomenon, Kerr was invited to join Alastair Currie, as well as Andrew Wyllie, who was Currie's graduate student, at the University of Aberdeen. In 1972, the trio published a seminal article in the British Journal of Cancer. Kerr had initially used the term programmed cell necrosis, but in the article, the process of natural cell death was called apoptosis. Kerr, Wyllie and Currie credited James Cormack, a professor of Greek language at University of Aberdeen, with suggesting the term apoptosis. Kerr received the Paul Ehrlich and Ludwig Darmstaedter Prize on March 14, 2000, for his description of apoptosis. He shared the prize with Boston biologist H. Robert Horvitz.
For many years, neither "apoptosis" nor "programmed cell death" was a highly cited term. Two discoveries brought cell death from obscurity to a major field of research: identification of the first component of the cell death control and effector mechanisms, and linkage of abnormalities in cell death to human disease, in particular cancer. This occurred in 1988 when it was shown that BCL2, the gene responsible for follicular lymphoma, encoded a protein that inhibited cell death.
The 2002 Nobel Prize in Medicine was awarded to Sydney Brenner, H. Robert Horvitz and John Sulston for their work identifying genes that control apoptosis. The genes were identified by studies in the nematode C. elegans and homologues of these genes function in humans to regulate apoptosis.
In Greek, apoptosis translates to the "falling off" of leaves from a tree. Cormack, professor of Greek language, reintroduced the term for medical use as it had a medical meaning for the Greeks over two thousand years before. Hippocrates used the term to mean "the falling off of the bones". Galen extended its meaning to "the dropping of the scabs". Cormack was no doubt aware of this usage when he suggested the name. Debate continues over the correct pronunciation, with opinion divided between a pronunciation with the second p silent ( ) and the second p pronounced (). In English, the p of the Greek -pt- consonant cluster is typically silent at the beginning of a word (e.g. pterodactyl, Ptolemy), but articulated when used in combining forms preceded by a vowel, as in helicopter or the orders of insects: diptera, lepidoptera, etc.
In the original Kerr, Wyllie & Currie paper, there is a footnote regarding the pronunciation:
We are most grateful to Professor James Cormack of the Department of Greek, University of Aberdeen, for suggesting this term. The word "apoptosis" is used in Greek to describe the "dropping off" or "falling off" of petals from flowers, or leaves from trees. To show the derivation clearly, we propose that the stress should be on the penultimate syllable, the second half of the word being pronounced like "ptosis" (with the "p" silent), which comes from the same root "to fall", and is already used to describe the drooping of the upper eyelid.
Activation mechanisms
The initiation of apoptosis is tightly regulated by activation mechanisms, because once apoptosis has begun, it inevitably leads to the death of the cell. The two best-understood activation mechanisms are the intrinsic pathway (also called the mitochondrial pathway) and the extrinsic pathway. The intrinsic pathway is activated by intracellular signals generated when cells are stressed and depends on the release of proteins from the intermembrane space of mitochondria. The extrinsic pathway is activated by extracellular ligands binding to cell-surface death receptors, which leads to the formation of the death-inducing signaling complex (DISC).
A cell initiates intracellular apoptotic signaling in response to a stress, which may bring about cell death. The binding of nuclear receptors by glucocorticoids, heat, radiation, nutrient deprivation, viral infection, hypoxia, increased intracellular concentration of free fatty acids and increased intracellular calcium concentration, for example, by damage to the membrane, can all trigger the release of intracellular apoptotic signals by a damaged cell. A number of cellular components, such as poly ADP ribose polymerase, may also help regulate apoptosis. Single-cell fluctuations have been observed in experimental studies of stress-induced apoptosis.
Before the actual process of cell death is precipitated by enzymes, apoptotic signals must cause regulatory proteins to initiate the apoptosis pathway. This step allows those signals to cause cell death, or the process to be stopped, should the cell no longer need to die. Several proteins are involved, but two main methods of regulation have been identified: the targeting of mitochondria functionality, or directly transducing the signal via adaptor proteins to the apoptotic mechanisms. An extrinsic pathway for initiation identified in several toxin studies is an increase in calcium concentration within a cell caused by drug activity, which can also cause apoptosis via the calcium-binding protease calpain.
Intrinsic pathway
The intrinsic pathway is also known as the mitochondrial pathway. Mitochondria are essential to multicellular life. Without them, a cell ceases to respire aerobically and quickly dies. This fact forms the basis for some apoptotic pathways. Apoptotic proteins that target mitochondria affect them in different ways. They may cause mitochondrial swelling through the formation of membrane pores, or they may increase the permeability of the mitochondrial membrane and cause apoptotic effectors to leak out. There is also a growing body of evidence indicating that nitric oxide is able to induce apoptosis by helping to dissipate the membrane potential of mitochondria and therefore make it more permeable. Nitric oxide has been implicated in initiating and inhibiting apoptosis through its possible action as a signal molecule of subsequent pathways that activate apoptosis.
During apoptosis, cytochrome c is released from mitochondria through the actions of the proteins Bax and Bak. The mechanism of this release is enigmatic, but appears to stem from a multitude of Bax/Bak homo- and heterodimers inserted into the outer membrane. Once cytochrome c is released, it binds with apoptotic protease activating factor-1 (Apaf-1) and ATP, which then bind to pro-caspase-9 to create a protein complex known as an apoptosome. The apoptosome cleaves the pro-caspase to its active form of caspase-9, which in turn cleaves and activates pro-caspase-3 into the effector caspase-3.
Mitochondria also release proteins known as SMACs (second mitochondria-derived activator of caspases) into the cell's cytosol following the increase in permeability of the mitochondrial membranes. SMAC binds to inhibitor of apoptosis proteins (IAPs), thereby deactivating them and preventing the IAPs from arresting the process, which allows apoptosis to proceed. IAPs also normally suppress the activity of a group of cysteine proteases called caspases, which carry out the degradation of the cell, so the actual degradation enzymes can be seen to be indirectly regulated by mitochondrial permeability.
Extrinsic pathway
Two theories of the direct initiation of apoptotic mechanisms in mammals have been suggested: the TNF-induced (tumor necrosis factor) model and the Fas-Fas ligand-mediated model, both involving receptors of the TNF receptor (TNFR) family coupled to extrinsic signals.
TNF pathway
TNF-alpha is a cytokine produced mainly by activated macrophages, and is the major extrinsic mediator of apoptosis. Most cells in the human body have two receptors for TNF-alpha: TNFR1 and TNFR2. The binding of TNF-alpha to TNFR1 has been shown to initiate the pathway that leads to caspase activation via the intermediate membrane proteins TNF receptor-associated death domain (TRADD) and Fas-associated death domain protein (FADD). cIAP1/2 can inhibit TNF-α signaling by binding to TRAF2. FLIP inhibits the activation of caspase-8. Binding of this receptor can also indirectly lead to the activation of transcription factors involved in cell survival and inflammatory responses. However, signalling through TNFR1 might also induce apoptosis in a caspase-independent manner. The link between TNF-alpha and apoptosis shows why an abnormal production of TNF-alpha plays a fundamental role in several human diseases, especially in autoimmune diseases. The TNF-alpha receptor superfamily also includes death receptors (DRs), such as DR4 and DR5. These receptors bind to the protein TRAIL and mediate apoptosis. Apoptosis is known to be one of the primary mechanisms of targeted cancer therapy. Luminescent iridium complex-peptide hybrids (IPHs) have recently been designed, which mimic TRAIL and bind to death receptors on cancer cells, thereby inducing their apoptosis.
Fas pathway
The Fas receptor (first apoptosis signal, also known as Apo-1 or CD95) is a transmembrane protein of the TNF family which binds the Fas ligand (FasL). The interaction between Fas and FasL results in the formation of the death-inducing signaling complex (DISC), which contains FADD, caspase-8, and caspase-10. In some types of cells (type I), processed caspase-8 directly activates other members of the caspase family and triggers the execution of apoptosis of the cell. In other types of cells (type II), the Fas-DISC starts a feedback loop that spirals into increasing release of proapoptotic factors from mitochondria and the amplified activation of caspase-8.
Common components
Following TNF-R1 and Fas activation in mammalian cells, a balance between proapoptotic (BAX, BID, BAK, or BAD) and anti-apoptotic (Bcl-Xl and Bcl-2) members of the Bcl-2 family is established. This balance is the proportion of proapoptotic homodimers that form in the outer membrane of the mitochondrion. The proapoptotic homodimers are required to make the mitochondrial membrane permeable for the release of caspase activators such as cytochrome c and SMAC. How proapoptotic proteins are controlled under normal, nonapoptotic conditions is incompletely understood, but in general, Bax or Bak are activated by the activation of BH3-only proteins, part of the Bcl-2 family.
Caspases
Caspases play the central role in the transduction of ER apoptotic signals. Caspases are highly conserved, cysteine-dependent, aspartate-specific proteases. There are two types of caspases: initiator caspases (caspases 2, 8, 9, 10, 11, and 12) and effector caspases (caspases 3, 6, and 7). The activation of initiator caspases requires binding to a specific oligomeric activator protein. Effector caspases are then activated by these active initiator caspases through proteolytic cleavage. The active effector caspases then proteolytically degrade a host of intracellular proteins to carry out the cell death program.
Caspase-independent apoptotic pathway
There also exists a caspase-independent apoptotic pathway that is mediated by AIF (apoptosis-inducing factor).
Apoptosis model in amphibians
The frog Xenopus laevis serves as an ideal model system for studying the mechanisms of apoptosis. Iodine and thyroxine stimulate the spectacular apoptosis of the cells of the larval gills, tail, and fins during amphibian metamorphosis, and stimulate the remodeling of the nervous system that transforms the aquatic, vegetarian tadpole into the terrestrial, carnivorous frog.
Negative regulators of apoptosis
Negative regulation of apoptosis inhibits cell death signaling pathways, helping tumors to evade cell death and develop drug resistance. The ratio between anti-apoptotic (Bcl-2) and pro-apoptotic (Bax) proteins determines whether a cell lives or dies. Many protein families act as negative regulators, categorized as either antiapoptotic factors, such as IAPs and Bcl-2 proteins, or prosurvival factors, such as cFLIP, BNIP3, FADD, Akt, and NF-κB.
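The life-or-death decision set by this ratio is often pictured as a rheostat. A deliberately minimal toy sketch of that idea follows; the threshold and the concentrations are arbitrary illustrative numbers, not measured values, and real commitment to apoptosis involves many more interacting proteins.

```python
# Toy model of the Bax:Bcl-2 "rheostat": fate depends on the ratio of
# pro- to anti-apoptotic proteins rather than on either absolute level.
# The threshold of 1.0 is an arbitrary illustrative choice.

def cell_fate(bax: float, bcl2: float, threshold: float = 1.0) -> str:
    """Return 'apoptosis' when pro-apoptotic Bax outweighs anti-apoptotic Bcl-2."""
    return "apoptosis" if bax / bcl2 > threshold else "survival"

print(cell_fate(bax=2.0, bcl2=1.0))  # apoptosis: Bax dominates
print(cell_fate(bax=0.5, bcl2=1.0))  # survival: Bcl-2 dominates
```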
Proteolytic caspase cascade: Killing the cell
Many pathways and signals lead to apoptosis, but these converge on a single mechanism that actually causes the death of the cell. After a cell receives a stimulus, it undergoes organized degradation of cellular organelles by activated proteolytic caspases. In addition to the destruction of cellular organelles, mRNA is rapidly and globally degraded by a mechanism that is not yet fully characterized; mRNA decay is triggered very early in apoptosis.
A cell undergoing apoptosis shows a series of characteristic morphological changes. Early alterations include:
Cell shrinkage and rounding occur because of the retraction of lamellipodia and the breakdown of the proteinaceous cytoskeleton by caspases.
The cytoplasm appears dense, and the organelles appear tightly packed.
Chromatin undergoes condensation into compact patches against the nuclear envelope (also known as the perinuclear envelope) in a process known as pyknosis, a hallmark of apoptosis.
The nuclear envelope becomes discontinuous and the DNA inside it is fragmented in a process referred to as karyorrhexis. The nucleus breaks into several discrete chromatin bodies or nucleosomal units due to the degradation of DNA.
Apoptosis progresses quickly and its products are quickly removed, making it difficult to detect or visualize in classical histology sections. During karyorrhexis, endonuclease activation leaves short DNA fragments at regularly spaced sizes. These give a characteristic "laddered" appearance on an agarose gel after electrophoresis. Tests for DNA laddering differentiate apoptosis from ischemic or toxic cell death.
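The regular spacing arises because apoptotic endonucleases cut in the accessible linker DNA between nucleosomes, so fragment lengths cluster at multiples of the nucleosomal repeat (roughly 180 bp; the exact repeat varies by cell type). A short sketch of the expected band sizes, with the repeat length taken as an approximate round figure:

```python
# Expected apoptotic "ladder" band sizes: internucleosomal cleavage
# yields fragments at multiples of the nucleosomal repeat (~180 bp is
# an approximation). These discrete multiples are what produce ladder
# bands, in contrast to the continuous smear of random necrotic
# degradation.

NUCLEOSOMAL_REPEAT_BP = 180

ladder_bands = [n * NUCLEOSOMAL_REPEAT_BP for n in range(1, 6)]
print(ladder_bands)  # [180, 360, 540, 720, 900]
```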
Apoptotic cell disassembly
Before the apoptotic cell is disposed of, there is a process of disassembly. There are three recognized steps in apoptotic cell disassembly:
Membrane blebbing: The cell membrane shows irregular buds known as blebs. Initially these are smaller surface blebs. Later these can grow into larger so-called dynamic membrane blebs. An important regulator of apoptotic cell membrane blebbing is ROCK1 (rho associated coiled-coil-containing protein kinase 1).
Formation of membrane protrusions: Some cell types, under specific conditions, may develop different types of long, thin extensions of the cell membrane called membrane protrusions. Three types have been described: microtubule spikes, apoptopodia (feet of death), and beaded apoptopodia (the latter having a beads-on-a-string appearance). Pannexin 1 is an important component of membrane channels involved in the formation of apoptopodia and beaded apoptopodia.
Fragmentation: The cell breaks apart into multiple vesicles called apoptotic bodies, which undergo phagocytosis. The plasma membrane protrusions may help bring apoptotic bodies closer to phagocytes.
Removal of dead cells
The removal of dead cells by neighboring phagocytic cells has been termed efferocytosis.
Dying cells that undergo the final stages of apoptosis display phagocytotic molecules, such as phosphatidylserine, on their cell surface. Phosphatidylserine is normally found on the inner leaflet surface of the plasma membrane, but is redistributed during apoptosis to the extracellular surface by a protein known as scramblase. These molecules mark the cell for phagocytosis by cells possessing the appropriate receptors, such as macrophages. The removal of dying cells by phagocytes occurs in an orderly manner without eliciting an inflammatory response. During apoptosis cellular RNA and DNA are separated from each other and sorted to different apoptotic bodies; separation of RNA is initiated as nucleolar segregation.
Pathway knock-outs
Many knock-outs have been made in the apoptosis pathways to test the function of each of the proteins. Several caspases, in addition to APAF1 and FADD, have been mutated to determine the resulting phenotype. In order to create a tumor necrosis factor (TNF) knockout, an exon containing the nucleotides 3704–5364 was removed from the gene. This exon encodes a portion of the mature TNF domain, as well as the leader sequence, which is a highly conserved region necessary for proper intracellular processing. TNF-/- mice develop normally and have no gross structural or morphological abnormalities. However, upon immunization with SRBC (sheep red blood cells), these mice demonstrated a deficiency in the maturation of an antibody response; they were able to generate normal levels of IgM, but could not develop specific IgG levels. Apaf-1 is the protein that turns on caspase-9 by cleavage to begin the caspase cascade that leads to apoptosis. Since a -/- mutation in the APAF-1 gene is embryonic lethal, a gene-trap strategy was used to generate an APAF-1 -/- mouse. This strategy disrupts gene function by creating an intragenic gene fusion. When an APAF-1 gene trap is introduced into cells, many morphological changes occur, such as spina bifida, the persistence of interdigital webs, and open brain. In addition, after embryonic day 12.5, the brain of the embryos showed several structural changes. APAF-1 -/- cells are protected from apoptotic stimuli such as irradiation. A BAX-1 knock-out mouse exhibits normal forebrain formation and decreased programmed cell death in some neuronal populations and in the spinal cord, leading to an increase in motor neurons.
The caspase proteins are integral parts of the apoptosis pathway, so it follows that the knock-outs made have varied and damaging results. A caspase 9 knock-out leads to a severe brain malformation. A caspase 8 knock-out leads to cardiac failure and thus embryonic lethality. However, with the use of cre-lox technology, a caspase 8 knock-out has been created that exhibits an increase in peripheral T cells, an impaired T cell response, and a defect in neural tube closure. These mice were found to be resistant to apoptosis mediated by CD95, TNFR, etc., but not resistant to apoptosis caused by UV irradiation, chemotherapeutic drugs, and other stimuli. Finally, a caspase 3 knock-out was characterized by ectopic cell masses in the brain and abnormal apoptotic features such as membrane blebbing or nuclear fragmentation. A remarkable feature of these KO mice is that they have a very restricted phenotype: Casp3, 9, and APAF-1 KO mice show deformations of neural tissue, and FADD and Casp 8 KOs show defective heart development; however, in both types of KO, other organs developed normally and some cell types remained sensitive to apoptotic stimuli, suggesting that unknown proapoptotic pathways exist.
Methods for distinguishing apoptotic from necrotic cells
Label-free live cell imaging, time-lapse microscopy, flow fluorocytometry, and transmission electron microscopy can be used to compare apoptotic and necrotic cells. There are also various biochemical techniques for analysis of cell surface markers (phosphatidylserine exposure versus cell permeability by flow cytometry), cellular markers such as DNA fragmentation (flow cytometry), caspase activation, Bid cleavage, and cytochrome c release (Western blotting). Supernatant screening for caspases, HMGB1, and cytokeratin 18 release can distinguish primary from secondary necrotic cells. However, no distinct surface or biochemical markers of necrotic cell death have been identified yet, and only negative markers are available. These include the absence of apoptotic markers (caspase activation, cytochrome c release, and oligonucleosomal DNA fragmentation) and differential kinetics of cell death markers (phosphatidylserine exposure and cell membrane permeabilization). A selection of techniques that can be used to distinguish apoptotic from necroptotic cells can be found in these references.
Implication in disease
Defective pathways
The many different types of apoptotic pathways contain a multitude of different biochemical components, many of them not yet understood. As a pathway is more or less sequential in nature, removing or modifying one component leads to an effect in another. In a living organism, this can have disastrous effects, often in the form of disease or disorder. A discussion of every disease caused by modification of the various apoptotic pathways would be impractical, but the concept underlying each one is the same: the normal functioning of the pathway has been disrupted in such a way as to impair the ability of the cell to undergo normal apoptosis. This results in a cell that lives past its "use-by date" and is able to replicate and pass on any faulty machinery to its progeny, increasing the likelihood of the cell becoming cancerous or diseased.
A recently described example of this concept in action can be seen in the development of a lung cancer called NCI-H460. The X-linked inhibitor of apoptosis protein (XIAP) is overexpressed in cells of the H460 cell line. XIAP binds to the processed form of caspase-9 and suppresses the activity of the apoptotic activator cytochrome c; overexpression therefore leads to a decrease in the number of proapoptotic agonists. As a consequence, the balance of anti-apoptotic and proapoptotic effectors is upset in favour of the former, and the damaged cells continue to replicate despite being directed to die. Defects in the regulation of apoptosis in cancer cells often occur at the level of control of transcription factors. As a particular example, defects in molecules that control the transcription factor NF-κB in cancer change the mode of transcriptional regulation and the response to apoptotic signals, curtailing the cell's dependence on the tissue to which it belongs. This degree of independence from external survival signals can enable cancer metastasis.
Dysregulation of p53
The tumor-suppressor protein p53 accumulates when DNA is damaged due to a chain of biochemical factors. Part of this pathway includes alpha-interferon and beta-interferon, which induce transcription of the p53 gene, resulting in the increase of p53 protein level and enhancement of cancer cell-apoptosis. p53 prevents the cell from replicating by stopping the cell cycle at G1, or interphase, to give the cell time to repair; however, it will induce apoptosis if damage is extensive and repair efforts fail. Any disruption to the regulation of the p53 or interferon genes will result in impaired apoptosis and the possible formation of tumors.
Inhibition
Inhibition of apoptosis can result in a number of cancers, inflammatory diseases, and viral infections. It was originally believed that the associated accumulation of cells was due to an increase in cellular proliferation, but it is now known that it is also due to a decrease in cell death. The most common of these diseases is cancer, the disease of excessive cellular proliferation, which is often characterized by an overexpression of IAP family members. As a result, the malignant cells experience an abnormal response to apoptosis induction: cycle-regulating genes (such as p53, ras or c-myc) are mutated or inactivated in diseased cells, and further genes (such as bcl-2) also modify their expression in tumors. Some apoptotic factors are vital during mitochondrial respiration, e.g. cytochrome c. Pathological inactivation of apoptosis in cancer cells is correlated with frequent respiratory metabolic shifts toward glycolysis (an observation known as the "Warburg hypothesis").
HeLa cell
Apoptosis in HeLa cells is inhibited by proteins produced by the cell; these inhibitory proteins target retinoblastoma tumor-suppressing proteins. The tumor-suppressing proteins regulate the cell cycle, but are rendered inactive when bound to an inhibitory protein. HPV E6 and E7 are inhibitory proteins expressed by the human papillomavirus, HPV being responsible for the formation of the cervical tumor from which HeLa cells are derived. HPV E6 causes p53, which regulates the cell cycle, to become inactive. HPV E7 binds to retinoblastoma tumor-suppressing proteins and limits their ability to control cell division. These two inhibitory proteins are partially responsible for HeLa cells' immortality by preventing apoptosis from occurring.
Treatments
The main method of treatment for signaling-related diseases involves either increasing or decreasing the susceptibility of diseased cells to apoptosis, depending on whether the disease is caused by the inhibition of or by excess apoptosis. For instance, treatments aim to restore apoptosis to treat diseases with deficient cell death, and to raise the apoptotic threshold to treat diseases involving excessive cell death. To stimulate apoptosis, one can increase the number of death receptor ligands (such as TNF or TRAIL), antagonize the anti-apoptotic Bcl-2 pathway, or introduce Smac mimetics to inhibit the inhibitors (IAPs). The addition of agents such as Herceptin, Iressa, or Gleevec works to stop cells from cycling and causes apoptosis activation by blocking growth and survival signaling further upstream. Finally, disrupting p53-MDM2 complexes displaces p53 and activates the p53 pathway, leading to cell cycle arrest and apoptosis. Many different methods can be used either to stimulate or to inhibit apoptosis in various places along the death signaling pathway.
Apoptosis is a multi-step, multi-pathway cell-death programme that is inherent in every cell of the body. In cancer, the apoptosis cell-division ratio is altered. Cancer treatment by chemotherapy and irradiation kills target cells primarily by inducing apoptosis.
Hyperactive apoptosis
On the other hand, loss of control of cell death (resulting in excess apoptosis) can lead to neurodegenerative diseases, hematologic diseases, and tissue damage. Neurons that rely on mitochondrial respiration undergo apoptosis in neurodegenerative diseases such as Alzheimer's and Parkinson's (an observation known as the "Inverse Warburg hypothesis"). Moreover, there is an inverse epidemiological comorbidity between neurodegenerative diseases and cancer. The progression of HIV is directly linked to excess, unregulated apoptosis. In a healthy individual, the number of CD4+ lymphocytes is in balance with the cells generated by the bone marrow; however, in HIV-positive patients, this balance is lost due to an inability of the bone marrow to regenerate CD4+ cells. In the case of HIV, CD4+ lymphocytes die at an accelerated rate through uncontrolled apoptosis when stimulated.
At the molecular level, hyperactive apoptosis can be caused by defects in signaling pathways that regulate the Bcl-2 family proteins. Increased expression of apoptotic proteins such as BIM, or their decreased proteolysis, leads to cell death and can cause a number of pathologies, depending on the cells where excessive activity of BIM occurs. Cancer cells can escape apoptosis through mechanisms that suppress BIM expression or by increased proteolysis of BIM.
Treatments
Treatments aiming to inhibit apoptosis work by blocking specific caspases. Finally, the Akt protein kinase promotes cell survival through two pathways. Akt phosphorylates and inhibits Bad (a Bcl-2 family member), causing Bad to interact with the 14-3-3 scaffold and dissociate from Bcl-2, thus permitting cell survival. Akt also activates IKKα, which leads to NF-κB activation and cell survival. Active NF-κB induces the expression of anti-apoptotic genes such as Bcl-2, resulting in inhibition of apoptosis. NF-κB has been found to play both an antiapoptotic role and a proapoptotic role depending on the stimuli utilized and the cell type.
HIV progression
The progression of the human immunodeficiency virus infection into AIDS is due primarily to the depletion of CD4+ T-helper lymphocytes in a manner that is too rapid for the body's bone marrow to replenish the cells, leading to a compromised immune system. One of the mechanisms by which T-helper cells are depleted is apoptosis, which results from a series of biochemical pathways:
HIV enzymes deactivate anti-apoptotic Bcl-2. This does not directly cause cell death but primes the cell for apoptosis should the appropriate signal be received. In parallel, these enzymes activate proapoptotic procaspase-8, which does directly activate the mitochondrial events of apoptosis.
HIV may increase the level of cellular proteins that prompt Fas-mediated apoptosis.
HIV proteins decrease the amount of CD4 glycoprotein marker present on the cell membrane.
Released viral particles and proteins present in extracellular fluid are able to induce apoptosis in nearby "bystander" T helper cells.
HIV decreases the production of molecules involved in marking the cell for apoptosis, giving the virus time to replicate and continue releasing apoptotic agents and virions into the surrounding tissue.
The infected CD4+ cell may also receive the death signal from a cytotoxic T cell.
Cells may also die as a direct consequence of viral infection. HIV-1 expression induces tubular cell G2/M arrest and apoptosis. The progression from HIV to AIDS is not immediate or even necessarily rapid; a patient is classified as having AIDS once their CD4+ cell count falls below 200.
Researchers from Kumamoto University in Japan have developed a new method to eradicate HIV in viral reservoir cells, named "Lock-in and apoptosis." Using the synthesized compound Heptanoylphosphatidyl L-Inositol Pentakisphosphate (or L-Hippo) to bind strongly to the HIV protein PR55Gag, they were able to suppress viral budding. By suppressing viral budding, the researchers were able to trap the HIV virus in the cell and allow the cell to undergo apoptosis (natural cell death). Associate Professor Mikako Fujita has stated that the approach is not yet available to HIV patients because the research team has to conduct further research on combining the drug therapy that currently exists with this "Lock-in and apoptosis" approach to lead to complete recovery from HIV.
Viral infection
Viral induction of apoptosis occurs when one or several cells of a living organism are infected with a virus, leading to cell death. Cell death in organisms is necessary for the normal development of cells and for cell cycle maturation. It is also important in maintaining the regular functions and activities of cells.
Viruses can trigger apoptosis of infected cells via a range of mechanisms including:
Receptor binding
Activation of protein kinase R (PKR)
Interaction with p53
Expression of viral proteins coupled to MHC proteins on the surface of the infected cell, allowing recognition by cells of the immune system (such as natural killer and cytotoxic T cells) that then induce the infected cell to undergo apoptosis.
Canine distemper virus (CDV) is known to cause apoptosis in central nervous system and lymphoid tissue of infected dogs in vivo and in vitro.
Apoptosis caused by CDV is typically induced via the extrinsic pathway, which activates caspases that disrupt cellular function and eventually lead to the cell's death. In normal cells, CDV activates caspase-8 first, which works as the initiator protein, followed by the executioner protein caspase-3. However, apoptosis induced by CDV in HeLa cells does not involve the initiator protein caspase-8; HeLa cell apoptosis caused by CDV follows a different mechanism than that in Vero cell lines. This change in the caspase cascade suggests that CDV induces apoptosis in HeLa cells via the intrinsic pathway, excluding the need for the initiator caspase-8. The executioner protein is instead activated by internal stimuli caused by the viral infection, not by a caspase cascade.
The Oropouche virus (OROV) is found in the family Bunyaviridae. The study of apoptosis brought on by Bunyaviridae was initiated in 1996, when it was observed that the La Crosse virus induced apoptosis in the kidney cells of baby hamsters and in the brains of baby mice.
OROV is transmitted between humans by the biting midge (Culicoides paraensis). It is referred to as a zoonotic arbovirus and causes Oropouche fever, a febrile illness characterized by the sudden onset of fever.
The Oropouche virus also causes disruption in cultured cells – cells that are cultivated in distinct and specific conditions. An example of this can be seen in HeLa cells, whereby the cells begin to degenerate shortly after they are infected.
With the use of gel electrophoresis, it can be observed that OROV causes DNA fragmentation in HeLa cells, which can be assessed by counting, measuring, and analyzing the cells of the sub-G1 cell population. When HeLa cells are infected with OROV, cytochrome c is released from the mitochondrial membrane into the cytosol. This type of interaction shows that apoptosis is activated via an intrinsic pathway.
In order for apoptosis to occur in OROV-infected cells, viral uncoating, viral internalization, and the replication of cells are necessary. Apoptosis in some viruses is activated by extracellular stimuli. However, studies have demonstrated that OROV infection causes apoptosis to be activated through intracellular stimuli and involves the mitochondria.
Many viruses encode proteins that can inhibit apoptosis. Several viruses encode viral homologs of Bcl-2. These homologs can inhibit proapoptotic proteins such as BAX and BAK, which are essential for the activation of apoptosis. Examples of viral Bcl-2 proteins include the Epstein-Barr virus BHRF1 protein and the adenovirus E1B 19K protein. Some viruses express caspase inhibitors that inhibit caspase activity; an example is the CrmA protein of cowpox viruses. A number of viruses can also block the effects of TNF and Fas. For example, the M-T2 protein of myxoma viruses can bind TNF, preventing it from binding the TNF receptor and inducing a response. Furthermore, many viruses express p53 inhibitors that can bind p53 and inhibit its transcriptional transactivation activity. As a consequence, p53 cannot induce apoptosis, since it cannot induce the expression of proapoptotic proteins. The adenovirus E1B-55K protein and the hepatitis B virus HBx protein are examples of viral proteins that can perform such a function.
Viruses can remain intact during apoptosis, particularly in the latter stages of infection. They can be exported in the apoptotic bodies that pinch off from the surface of the dying cell, and the fact that they are engulfed by phagocytes prevents the initiation of a host response. This favours the spread of the virus. Prions can also cause apoptosis in neurons.
Plants
Programmed cell death in plants has a number of molecular similarities to that of animal apoptosis, but it also has differences, notable ones being the presence of a cell wall and the lack of an immune system that removes the pieces of the dead cell. Instead of an immune response, the dying cell synthesizes substances to break itself down and places them in a vacuole that ruptures as the cell dies. Additionally, plants do not contain phagocytic cells, which are essential in the process of breaking down and removing apoptotic bodies. Whether this whole process resembles animal apoptosis closely enough to warrant using the name apoptosis (as opposed to the more general programmed cell death) is unclear.
Caspase-independent apoptosis
The characterization of the caspases allowed the development of caspase inhibitors, which can be used to determine whether a cellular process involves active caspases. Using these inhibitors it was discovered that cells can die while displaying a morphology similar to apoptosis without caspase activation. Later studies linked this phenomenon to the release of AIF (apoptosis-inducing factor) from the mitochondria and its translocation into the nucleus mediated by its NLS (nuclear localization signal). Inside the mitochondria, AIF is anchored to the inner membrane. In order to be released, the protein is cleaved by a calcium-dependent calpain protease.
|
;Cell signaling;Cellular senescence;Immunology;Medical aspects of death;Programmed cell death
|
https://en.wikipedia.org/wiki/Asynchronous%20Transfer%20Mode
|
Asynchronous Transfer Mode (ATM) is a telecommunications standard defined by the American National Standards Institute and the International Telecommunication Union Telecommunication Standardization Sector (ITU-T, formerly CCITT) for digital transmission of multiple types of traffic. ATM was developed to meet the needs of the Broadband Integrated Services Digital Network as defined in the late 1980s, and was designed to integrate telecommunication networks. It can handle both traditional high-throughput data traffic and real-time, low-latency content such as telephony (voice) and video. ATM is a cell switching technology, providing functionality that combines features of circuit switching and packet switching networks by using asynchronous time-division multiplexing. ATM was seen in the 1990s as a competitor to Ethernet and networks carrying IP traffic as, unlike Ethernet, it was faster and designed with quality of service in mind, but it fell out of favor once Ethernet reached speeds of 1 gigabit per second.
In the Open Systems Interconnection (OSI) reference model data link layer (layer 2), the basic transfer units are called frames. In ATM these frames are of a fixed length (53 octets) called cells. This differs from approaches such as Internet Protocol (IP) (OSI layer 3) or Ethernet (also layer 2) that use variable-sized packets or frames. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the data exchange begins. These virtual circuits may be either permanent (dedicated connections that are usually preconfigured by the service provider), or switched (set up on a per-call basis using signaling and disconnected when the call is terminated).
The ATM network reference model approximately maps to the three lowest layers of the OSI model: physical layer, data link layer, and network layer. ATM is a core protocol used in the synchronous optical networking and synchronous digital hierarchy (SONET/SDH) backbone of the public switched telephone network and in the Integrated Services Digital Network (ISDN) but has largely been superseded in favor of next-generation networks based on IP technology. Wireless and mobile ATM never established a significant foothold.
Protocol architecture
To minimize queuing delay and packet delay variation (PDV), all ATM cells are the same small size. Reduction of PDV is particularly important when carrying voice traffic, because the conversion of digitized voice into an analog audio signal is an inherently real-time process. The decoder needs an evenly spaced stream of data items.
At the time of the design of ATM, 155 Mbit/s synchronous digital hierarchy (SDH) with a 135 Mbit/s payload was considered a fast optical network link, and many plesiochronous digital hierarchy links in the digital network were considerably slower, ranging from 1.544 to 45 Mbit/s in the US, and 2 to 34 Mbit/s in Europe.
At 155 Mbit/s, a typical full-length 1,500-byte Ethernet frame would take 77.42 μs to transmit. On a lower-speed 1.544 Mbit/s T1 line, the same packet would take up to 7.8 milliseconds. A queuing delay induced by several such data packets might exceed the figure of 7.8 ms several times over. This was considered unacceptable for speech traffic.
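These serialization delays follow directly from frame size divided by line rate; a quick check in Python, using the rates from the figures above:

```python
# Serialization delay = frame size / line rate.
FRAME_BITS = 1500 * 8  # a full-length Ethernet frame: 12,000 bits

for name, rate_bps in [("155 Mbit/s link", 155e6), ("1.544 Mbit/s T1", 1.544e6)]:
    delay_s = FRAME_BITS / rate_bps
    print(f"{name}: {delay_s * 1e6:8.2f} us")

# 155 Mbit/s link:    77.42 us
# 1.544 Mbit/s T1:  7772.02 us  (~7.8 ms)
```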
The design of ATM aimed for a low-jitter network interface. Cells were introduced to provide short queuing delays while continuing to support datagram traffic. ATM broke up all data packets and voice streams into 48-byte pieces, adding a 5-byte routing header to each one so that they could be reassembled later. Being 1/30th the size reduced cell contention jitter by the same factor of 30.
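A minimal sketch of this segmentation step, assuming a fixed 5-byte header and zero-padding of the final cell; real ATM adaptation layers (AAL5, for instance) add their own padding and trailer, which this deliberately omits:

```python
CELL_PAYLOAD = 48  # bytes of payload per cell
HEADER_LEN = 5     # bytes of header per cell

def segment(packet: bytes, header: bytes) -> list[bytes]:
    """Chop a packet into 53-byte cells: 5-byte header + 48-byte payload."""
    assert len(header) == HEADER_LEN
    cells = []
    for off in range(0, len(packet), CELL_PAYLOAD):
        # Zero-pad the final chunk up to a full cell payload.
        chunk = packet[off:off + CELL_PAYLOAD].ljust(CELL_PAYLOAD, b"\x00")
        cells.append(header + chunk)
    return cells

cells = segment(b"\xaa" * 1500, header=bytes(HEADER_LEN))
print(len(cells), len(cells[0]))  # 32 cells, 53 bytes each
```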
The choice of 48 bytes was political rather than technical. When the CCITT (now ITU-T) was standardizing ATM, parties from the United States wanted a 64-byte payload because this was felt to be a good compromise between larger payloads optimized for data transmission and shorter payloads optimized for real-time applications like voice. Parties from Europe wanted 32-byte payloads because the small size (4 ms of voice data) would avoid the need for echo cancellation on domestic voice calls. The United States, due to its larger size, already had echo cancellers widely deployed. Most of the European parties eventually came around to the arguments made by the Americans, but France and a few others held out for a shorter cell length.
48 bytes was chosen as a compromise, despite having all the disadvantages of both proposals and the additional inconvenience of not being a power of two in size. 5-byte headers were chosen because it was thought that 10% of the payload was the maximum price to pay for routing information.
Cell structure
An ATM cell consists of a 5-byte header and a 48-byte payload. ATM defines two different cell formats: user–network interface (UNI) and network–network interface (NNI). Most ATM links use UNI cell format.
GFC
The generic flow control (GFC) field is a 4-bit field that was originally added to support the connection of ATM networks to shared access networks such as a distributed queue dual bus (DQDB) ring. The GFC field was designed to give the User-Network Interface (UNI) 4 bits in which to negotiate multiplexing and flow control among the cells of various ATM connections. However, the use and exact values of the GFC field have not been standardized, and the field is always set to 0000.
VPI
Virtual path identifier (8 bits UNI, or 12 bits NNI)
VCI
Virtual channel identifier (16 bits)
PT
Payload type (3 bits)
Bit 3 (msbit): Network management cell. If 0, user data cell and the following apply:
Bit 2: Explicit forward congestion indication (EFCI); 1 = network congestion experienced
Bit 1 (lsbit): ATM user-to-user (AAU) bit. Used by AAL5 to indicate packet boundaries.
CLP
Cell loss priority (1-bit)
HEC
Header error control (8-bit CRC, polynomial = x^8 + x^2 + x + 1)
ATM uses the PT field to designate various special kinds of cells for operations, administration and management (OAM) purposes, and to delineate packet boundaries in some ATM adaptation layers (AAL). If the most significant bit (MSB) of the PT field is 0, this is a user data cell, and the other two bits are used to indicate network congestion and as a general-purpose header bit available for ATM adaptation layers. If the MSB is 1, this is a management cell, and the other two bits indicate the type: network management segment, network management end-to-end, resource management, and reserved for future use.
Several ATM link protocols use the HEC field to drive a CRC-based framing algorithm, which allows locating the ATM cells with no overhead beyond what is otherwise needed for header protection. The 8-bit CRC is used to correct single-bit header errors and detect multi-bit header errors. When multi-bit header errors are detected, the current and subsequent cells are dropped until a cell with no header errors is found.
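A sketch of the HEC calculation over the first four header octets, using the CRC-8 generator given above; the final XOR with 0x55 is the coset specified by ITU-T I.432 and is included here on that assumption:

```python
def hec(header4: bytes) -> int:
    """CRC-8 (x^8 + x^2 + x + 1, truncated to 0x07) over 4 header octets."""
    crc = 0
    for byte in header4:
        crc ^= byte
        for _ in range(8):  # bitwise CRC, most significant bit first
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55  # coset XOR per ITU-T I.432 (assumption noted above)

print(hex(hec(bytes(4))))  # 0x55 for an all-zero header
```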
A UNI cell reserves the GFC field for a local flow control and sub-multiplexing system between users. This was intended to allow several terminals to share a single network connection in the same way that two ISDN phones can share a single basic rate ISDN connection. All four GFC bits must be zero by default.
The NNI cell format replicates the UNI format almost exactly, except that the 4-bit GFC field is re-allocated to the VPI field, extending the VPI to 12 bits. Thus, a single NNI ATM interconnection is capable of addressing almost 2^12 VPs of up to almost 2^16 VCs each.
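The bit layout of the 4-byte UNI header described above (GFC:4, VPI:8, VCI:16, PT:3, CLP:1) can be packed and unpacked directly. A sketch, with the HEC octet left to a separate calculation:

```python
def pack_uni_header(gfc: int, vpi: int, vci: int, pt: int, clp: int) -> bytes:
    """GFC:4 | VPI:8 | VCI:16 | PT:3 | CLP:1, most significant bit first."""
    word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pt << 1) | clp
    return word.to_bytes(4, "big")

def unpack_uni_header(header: bytes) -> tuple[int, int, int, int, int]:
    """Recover (gfc, vpi, vci, pt, clp) from the first four header octets."""
    w = int.from_bytes(header, "big")
    return ((w >> 28) & 0xF, (w >> 20) & 0xFF,
            (w >> 4) & 0xFFFF, (w >> 1) & 0x7, w & 0x1)

hdr = pack_uni_header(gfc=0, vpi=1, vci=32, pt=0, clp=0)
print(unpack_uni_header(hdr))  # (0, 1, 32, 0, 0)
```

For an NNI header, the same packing applies with the GFC field removed and a 12-bit VPI occupying bits 31–20.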
Service types
ATM supports different types of services via AALs. Standardized AALs include AAL1, AAL2, and AAL5, and the rarely used AAL3 and AAL4. AAL1 is used for constant bit rate (CBR) services and circuit emulation. Synchronization is also maintained at AAL1. AAL2 through AAL4 are used for variable bitrate (VBR) services, and AAL5 for data. Which AAL is in use for a given cell is not encoded in the cell. Instead, it is negotiated by or configured at the endpoints on a per-virtual-connection basis.
Following the initial design of ATM, networks have become much faster. A 1,500-byte (12,000-bit) full-size Ethernet frame takes only 1.2 μs to transmit on a 10 Gbit/s network, reducing the motivation for small cells to reduce jitter due to contention. The increased link speeds by themselves do not eliminate jitter due to queuing.
ATM provides a useful ability to carry multiple logical circuits on a single physical or virtual medium, although other techniques exist, such as Multi-link PPP, Ethernet VLANs, VxLAN, MPLS, and multi-protocol support over SONET.
Virtual circuits
An ATM network must establish a connection before two parties can send cells to each other. This is called a virtual circuit (VC). It can be a permanent virtual circuit (PVC), which is created administratively on the end points, or a switched virtual circuit (SVC), which is created as needed by the communicating parties. SVC creation is managed by signaling, in which the requesting party indicates the address of the receiving party, the type of service requested, and whatever traffic parameters may be applicable to the selected service. Call admission is then performed by the network to confirm that the requested resources are available and that a route exists for the connection.
Motivation
ATM operates as a channel-based transport layer, using VCs. This is encompassed in the concept of the virtual paths (VP) and virtual channels. Every ATM cell has an 8- or 12-bit virtual path identifier (VPI) and 16-bit virtual channel identifier (VCI) pair defined in its header. The VCI, together with the VPI, is used to identify the next destination of a cell as it passes through a series of ATM switches on its way to its destination. The length of the VPI varies according to whether the cell is sent on a user-network interface (at the edge of the network), or if it is sent on a network-network interface (inside the network).
As these cells traverse an ATM network, switching takes place by changing the VPI/VCI values (label swapping). Although the VPI/VCI values are not necessarily consistent from one end of the connection to the other, the concept of a circuit is consistent (unlike IP, where any given packet could get to its destination by a different route than the others). ATM switches use the VPI/VCI fields to identify the virtual channel link (VCL) of the next network that a cell needs to transit on its way to its final destination. The function of the VCI is similar to that of the data link connection identifier (DLCI) in Frame Relay and the logical channel number and logical channel group number in X.25.
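Label swapping amounts to a table lookup keyed on the incoming interface and VPI/VCI, yielding an outgoing interface and fresh VPI/VCI values. A sketch follows; the table entries are invented purely for illustration, and a real table would be populated by signaling or administrative configuration:

```python
# Per-switch connection table: (in_port, vpi, vci) -> (out_port, vpi, vci).
connection_table = {
    (0, 1, 32): (2, 5, 100),
    (1, 7, 41): (3, 1, 32),
}

def switch_cell(in_port: int, vpi: int, vci: int) -> tuple[int, int, int]:
    """Forward one cell: look up the VCL and rewrite the VPI/VCI labels."""
    return connection_table[(in_port, vpi, vci)]

print(switch_cell(0, 1, 32))  # (2, 5, 100): relabeled and sent out port 2
```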
Another advantage of the use of virtual circuits comes with the ability to use them as a multiplexing layer, allowing different services (such as voice, Frame Relay, and IP) to share a single ATM connection. The VPI is useful for reducing the switching table for virtual circuits which share common paths.
Types
ATM can build virtual circuits and virtual paths either statically or dynamically. Static circuits (permanent virtual circuits or PVCs) or paths (permanent virtual paths or PVPs) require that the circuit is composed of a series of segments, one for each pair of interfaces through which it passes.
PVPs and PVCs, though conceptually simple, require significant effort in large networks. They also do not support the re-routing of service in the event of a failure. Dynamically built PVPs (soft PVPs or SPVPs) and PVCs (soft PVCs or SPVCs), in contrast, are built by specifying the characteristics of the circuit (the service contract) and the two endpoints.
ATM networks create and remove switched virtual circuits (SVCs) on demand when requested by an end station. One application for SVCs is to carry individual telephone calls when a network of telephone switches are interconnected using ATM. SVCs were also used in attempts to replace local area networks with ATM.
Routing
Most ATM networks supporting SPVPs, SPVCs, and SVCs use the Private Network-to-Network Interface (PNNI) protocol to share topology information between switches and select a route through a network. PNNI is a link-state routing protocol like OSPF and IS-IS. PNNI also includes a very powerful route summarization mechanism to allow construction of very large networks, as well as a call admission control (CAC) algorithm which determines the availability of sufficient bandwidth on a proposed route through a network in order to satisfy the service requirements of a VC or VP.
Traffic engineering
Another key ATM concept involves the traffic contract. When an ATM circuit is set up each switch on the circuit is informed of the traffic class of the connection. ATM traffic contracts form part of the mechanism by which quality of service (QoS) is ensured. There are four basic types (and several variants) which each have a set of parameters describing the connection.
CBR Constant bit rate: a Peak Cell Rate (PCR) is specified, which is constant.
VBR Variable bit rate: an average or Sustainable Cell Rate (SCR) is specified, which can peak at a certain level, a PCR, for a maximum interval before being problematic.
ABR Available bit rate: a minimum guaranteed rate is specified.
UBR Unspecified bit rate: traffic is allocated to all remaining transmission capacity.
VBR has real-time and non-real-time variants, and serves for bursty traffic. Non-real-time is sometimes abbreviated to vbr-nrt. Most traffic classes also introduce the concept of cell-delay variation tolerance (CDVT), which defines the clumping of cells in time.
Traffic policing
To maintain network performance, networks may apply traffic policing to virtual circuits to limit them to their traffic contracts at the entry points to the network, i.e. the user–network interfaces (UNIs) and network-to-network interfaces (NNIs) using usage/network parameter control (UPC and NPC). The reference model given by the ITU-T and ATM Forum for UPC and NPC is the generic cell rate algorithm (GCRA), which is a version of the leaky bucket algorithm. CBR traffic will normally be policed to a PCR and CDVT alone, whereas VBR traffic will normally be policed using a dual leaky bucket controller to a PCR and CDVT and an SCR and maximum burst size (MBS). The MBS will normally be the packet (SAR-SDU) size for the VBR VC in cells.
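The GCRA in its virtual-scheduling form is compact enough to state directly. A sketch, assuming time in seconds, with T the cell emission interval (1/PCR) and tau the CDVT:

```python
class GCRA:
    """Generic cell rate algorithm, virtual-scheduling formulation."""

    def __init__(self, emission_interval: float, tolerance: float):
        self.T = emission_interval  # e.g. 1/PCR, seconds per cell
        self.tau = tolerance        # CDVT
        self.tat = 0.0              # theoretical arrival time

    def conforms(self, t: float) -> bool:
        """Test a cell arriving at time t; update state only if conforming."""
        if t < self.tat - self.tau:
            return False  # non-conforming: drop, or tag with CLP=1
        self.tat = max(t, self.tat) + self.T
        return True

policer = GCRA(emission_interval=1e-3, tolerance=0.5e-3)
print([policer.conforms(t) for t in (0.0, 0.0004, 0.002)])  # [True, False, True]
```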
If the traffic on a virtual circuit exceeds its traffic contract, as determined by the GCRA, the network can either drop the cells or set the Cell Loss Priority (CLP) bit, allowing the cells to be dropped at a congestion point. Basic policing works on a cell-by-cell basis, but this is sub-optimal for encapsulated packet traffic as discarding a single cell will invalidate a packet's worth of cells. As a result, schemes such as partial packet discard (PPD) and early packet discard (EPD) have been developed to discard a whole packet's cells. This reduces the number of useless cells in the network, saving bandwidth for full packets. EPD and PPD work with AAL5 connections as they use the end of packet marker: the ATM user-to-ATM user (AUU) indication bit in the payload-type field of the header, which is set in the last cell of a SAR-SDU.
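A sketch of the EPD decision for an AAL5 stream, using the end-of-packet (AUU) bit to find packet boundaries; the queue threshold and the static queue length are invented for illustration:

```python
def epd_filter(cells: list[tuple[bytes, int]], queue_len: int, threshold: int) -> list[bytes]:
    """Early packet discard: drop whole packets when the queue is congested.

    Each cell is a (payload, auu) pair; auu == 1 marks the last cell of
    an AAL5 packet. The accept/drop decision is made once per packet,
    at its first cell.
    """
    accepted: list[bytes] = []
    dropping = False
    at_packet_start = True
    for payload, auu in cells:
        if at_packet_start:
            dropping = queue_len > threshold  # decide once per packet
        if not dropping:
            accepted.append(payload)
        at_packet_start = bool(auu)  # the cell after an AUU cell starts a packet
    return accepted

packet = [(b"\x00" * 48, 0), (b"\x00" * 48, 1)]  # a two-cell AAL5 packet
print(len(epd_filter(packet * 2, queue_len=9, threshold=5)))  # 0: both packets dropped
```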
Traffic shaping
Traffic shaping usually takes place in the network interface controller (NIC) in user equipment, and attempts to ensure that the cell flow on a VC will meet its traffic contract, i.e. cells will not be dropped or reduced in priority at the UNI. Since the reference model given for traffic policing in the network is the GCRA, this algorithm is normally used for shaping as well, and single and dual leaky bucket implementations may be used as appropriate.
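Shaping can reuse the GCRA from the policing sketch above: rather than rejecting a non-conforming cell, the shaper holds it until the earliest conforming instant, TAT - tau. A minimal sketch building on that GCRA class:

```python
def shaped_departure(shaper: GCRA, ready_time: float) -> float:
    """Earliest conforming send time for a cell ready at ready_time."""
    depart = max(ready_time, shaper.tat - shaper.tau)
    shaper.tat = max(depart, shaper.tat) + shaper.T
    return depart

pcr_shaper = GCRA(emission_interval=1e-3, tolerance=0.0)
print([shaped_departure(pcr_shaper, 0.0) for _ in range(3)])
# [0.0, 0.001, 0.002]: back-to-back cells are spaced out to the PCR
```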
Reference model
The ATM network reference model approximately maps to the three lowest layers of the OSI reference model. It specifies the following layers:
At the physical network level, ATM specifies a layer that is equivalent to the OSI physical layer.
The ATM layer 2 roughly corresponds to the OSI data link layer.
The OSI network layer is implemented as the ATM adaptation layer (AAL).
Deployment
ATM became popular with telephone companies and many computer makers in the 1990s. However, even by the end of the decade, the better price–performance ratio of Internet Protocol-based products was competing with ATM technology for integrating real-time and bursty network traffic. Additionally, among cable companies using ATM there often would be discrete and competing management teams for telephony, video on demand, and broadcast and digital video reception, which adversely impacted efficiency. Companies such as FORE Systems focused on ATM products, while other large vendors such as Cisco Systems provided ATM as an option. After the burst of the dot-com bubble, some still predicted that "ATM is going to dominate". However, in 2005 the ATM Forum, which had been the trade organization promoting the technology, merged with groups promoting other technologies, and eventually became the Broadband Forum.
Wireless or mobile ATM
Wireless ATM, or mobile ATM, consists of an ATM core network with a wireless access network. ATM cells are transmitted from base stations to mobile terminals. Mobility functions are performed at an ATM switch in the core network, known as a crossover switch, which is similar to the mobile switching center of GSM networks.
The advantage of wireless ATM is its high bandwidth and high-speed handoffs done at layer 2. In the early 1990s, Bell Labs and NEC research labs worked actively in this field. Andy Hopper from the University of Cambridge Computer Laboratory also worked in this area. There was a wireless ATM forum formed to standardize the technology behind wireless ATM networks. The forum was supported by several telecommunication companies, including NEC, Fujitsu and AT&T. Mobile ATM aimed to provide high-speed multimedia communications technology, capable of delivering broadband mobile communications beyond that of GSM and WLANs.
|
;ITU-T recommendations;Link protocols;Networking standards
|