https://en.wikipedia.org/wiki/Anarchism
Anarchism is a political philosophy and movement that is skeptical of all justifications for authority and seeks to abolish the institutions it claims maintain unnecessary coercion and hierarchy, typically including nation-states and capitalism. Anarchism advocates for the replacement of the state with stateless societies and voluntary free associations. As a historically left-wing movement, this reading of anarchism is placed on the farthest left of the political spectrum, usually described as the libertarian wing of the socialist movement (libertarian socialism).
Humans have lived in societies without formal hierarchies long before the establishment of states, realms, or empires. With the rise of organised hierarchical bodies, scepticism toward authority also rose. Although traces of anarchist ideas are found all throughout history, modern anarchism emerged from the Enlightenment. During the latter half of the 19th and the first decades of the 20th century, the anarchist movement flourished in most parts of the world and had a significant role in workers' struggles for emancipation. Various anarchist schools of thought formed during this period. Anarchists have taken part in several revolutions, most notably in the Paris Commune, the Russian Civil War and the Spanish Civil War, whose end marked the end of the classical era of anarchism. In the last decades of the 20th and into the 21st century, the anarchist movement has been resurgent once more, growing in popularity and influence within anti-capitalist, anti-war and anti-globalisation movements.
Anarchists employ diverse approaches, which may be generally divided into revolutionary and evolutionary strategies; there is significant overlap between the two. Evolutionary methods try to simulate what an anarchist society might be like, while revolutionary tactics, which have historically taken a violent turn, aim to overthrow authority and the state. Many facets of human civilization have been influenced by anarchist theory, critique, and praxis.
Etymology, terminology, and definition
The etymological origin of anarchism is from the Ancient Greek anarkhia, meaning "without a ruler", composed of the prefix an- ("without") and the word arkhos ("leader" or "ruler"). The suffix -ism denotes the ideological current that favours anarchy. Anarchism appears in English from 1642 as anarchisme and anarchy from 1539; early English usages emphasised a sense of disorder. Various factions within the French Revolution labelled their opponents as anarchists, although few such accused shared many views with later anarchists. Many revolutionaries of the 19th century such as William Godwin (1756–1836) and Wilhelm Weitling (1808–1871) would contribute to the anarchist doctrines of the next generation but did not use anarchist or anarchism in describing themselves or their beliefs.
The first political philosopher to call himself an anarchist was Pierre-Joseph Proudhon (1809–1865), marking the formal birth of anarchism in the mid-19th century. Since the 1890s, and beginning in France, libertarianism has often been used as a synonym for anarchism, and its use as a synonym is still common outside the United States. Some usages of libertarianism refer to individualistic free-market philosophy only, and free-market anarchism in particular is termed libertarian anarchism.
While the term libertarian has been largely synonymous with anarchism, its meaning has more recently been diluted by wider adoption from ideologically disparate groups, including both the New Left and libertarian Marxists, who do not associate themselves with authoritarian socialists or a vanguard party, and extreme cultural liberals, who are primarily concerned with civil liberties. Additionally, some anarchists use libertarian socialist to avoid anarchism's negative connotations and emphasise its connections with socialism. Anarchism is broadly used to describe the anti-authoritarian wing of the socialist movement. Anarchism is contrasted to socialist forms which are state-oriented or from above. Scholars of anarchism generally highlight anarchism's socialist credentials and criticise attempts at creating dichotomies between the two. Some scholars describe anarchism as having many influences from liberalism, and being both liberal and socialist but more so. Many scholars reject anarcho-capitalism as a misunderstanding of anarchist principles.
While opposition to the state is central to anarchist thought, defining anarchism is not an easy task for scholars, as there is a lot of discussion among scholars and anarchists on the matter, and various currents perceive anarchism slightly differently. Major definitional elements include the will for a non-coercive society, the rejection of the state apparatus, the belief that human nature allows humans to exist in or progress toward such a non-coercive society, and a suggestion on how to act to pursue the ideal of anarchy.
History
Pre-modern era
Before the creation of towns and cities, established authority did not exist. It was after the institution of authority that anarchistic ideas were espoused as a reaction. The most notable precursors to anarchism in the ancient world were in China and Greece. In China, philosophical anarchism (the discussion on the legitimacy of the state) was delineated by Taoist philosophers Zhuang Zhou and Laozi. Alongside Stoicism, Taoism has been said to have had "significant anticipations" of anarchism.
Anarchic attitudes were also articulated by tragedians and philosophers in Greece. Aeschylus and Sophocles used the myth of Antigone to illustrate the conflict between laws imposed by the state and personal autonomy. Socrates questioned Athenian authorities constantly and insisted on the right of individual freedom of conscience. Cynics dismissed human law (nomos) and associated authorities while trying to live according to nature (physis). Stoics were supportive of a society based on unofficial and friendly relations among its citizens without the presence of a state.
In medieval Europe, there was no anarchistic activity except some ascetic religious movements. These, and other Muslim movements, later gave birth to religious anarchism. In the Sasanian Empire, Mazdak called for an egalitarian society and the abolition of monarchy, only to be soon executed by Emperor Kavad I.
In Basra, religious sects preached against the state. In Europe, various sects developed anti-state and libertarian tendencies. Renewed interest in antiquity during the Renaissance and in private judgment during the Reformation restored elements of anti-authoritarian secularism, particularly in France. Enlightenment challenges to intellectual authority (secular and religious) and the revolutions of the 1790s and 1848 all spurred the ideological development of what became the era of classical anarchism.
Modern era
During the French Revolution, partisan groups such as the Enragés and the sans-culottes saw a turning point in the fermentation of anti-state and federalist sentiments. The first anarchist currents developed throughout the 18th century as William Godwin espoused philosophical anarchism in England, morally delegitimising the state, Max Stirner's thinking paved the way to individualism, and Pierre-Joseph Proudhon's theory of mutualism found fertile soil in France. By the late 1870s, various anarchist schools of thought had become well-defined, and a wave of then unprecedented globalisation occurred from 1880 to 1914. This era of classical anarchism lasted until the end of the Spanish Civil War and is considered the golden age of anarchism.
Drawing from mutualism, Mikhail Bakunin founded collectivist anarchism and entered the International Workingmen's Association, a working-class organisation later known as the First International, which formed in 1864 to unite diverse revolutionary currents. The International became a significant political force, with Karl Marx being a leading figure and a member of its General Council. Bakunin's faction (the Jura Federation) and Proudhon's followers (the mutualists) opposed state socialism, advocating political abstentionism and small property holdings. After bitter disputes, the Bakuninists were expelled from the International by the Marxists at the 1872 Hague Congress. Anarchists were treated similarly in the Second International, being ultimately expelled in 1896. Bakunin famously predicted that if revolutionaries gained power by Marx's terms, they would end up the new tyrants of workers. In response to their expulsion from the First International, anarchists formed the St. Imier International. Under the influence of Peter Kropotkin, a Russian philosopher and scientist, anarcho-communism overlapped with collectivism. Anarcho-communists, who drew inspiration from the 1871 Paris Commune, advocated for free federation and for the distribution of goods according to one's needs.
By the turn of the 20th century, anarchism had spread all over the world. It was a notable feature of the international syndicalist movement. In China, small groups of students imported the humanistic pro-science version of anarcho-communism. Tokyo was a hotspot for rebellious youth from East Asian countries, who moved to the Japanese capital to study. In Latin America, Argentina was a stronghold for anarcho-syndicalism, where it became the most prominent left-wing ideology. During this time, a minority of anarchists adopted tactics of revolutionary political violence, known as propaganda of the deed. The dismemberment of the French socialist movement into many groups and the execution and exile of many Communards to penal colonies following the suppression of the Paris Commune favoured individualist political expression and acts. Even though many anarchists distanced themselves from these terrorist acts, infamy came upon the movement and attempts were made to prevent anarchists immigrating to the US, including the Immigration Act of 1903, also called the Anarchist Exclusion Act. Illegalism was another strategy which some anarchists adopted during this period.
Despite concerns, anarchists enthusiastically participated in the Russian Revolution in opposition to the White movement, especially in the Makhnovshchina; however, they met harsh suppression after the Bolshevik government had stabilised, including during the Kronstadt rebellion. Several anarchists from Petrograd and Moscow fled to Ukraine, before the Bolsheviks crushed the anarchist movement there too. With the anarchists being repressed in Russia, two new antithetical currents emerged, namely platformism and synthesis anarchism. The former sought to create a coherent group that would push for revolution while the latter were against anything that would resemble a political party. Seeing the victories of the Bolsheviks in the October Revolution and the resulting Russian Civil War, many workers and activists turned to communist parties which grew at the expense of anarchism and other socialist movements. In France and the United States, members of major syndicalist movements such as the General Confederation of Labour and the Industrial Workers of the World left their organisations and joined the Communist International.
In the Spanish Civil War of 1936–39, anarchists and syndicalists (CNT and FAI) once again allied themselves with various currents of leftists. A long tradition of Spanish anarchism led to anarchists playing a pivotal role in the war, and particularly in the Spanish Revolution of 1936. In response to the army rebellion, an anarchist-inspired movement of peasants and workers, supported by armed militias, took control of Barcelona and of large areas of rural Spain, where they collectivised the land. The Soviet Union provided some limited assistance at the beginning of the war, but the result was a bitter fight between communists and other leftists in a series of events known as the May Days, as Joseph Stalin asserted Soviet control of the Republican government, ending in another defeat of anarchists at the hands of the communists.
Post-WWII
By the end of World War II, the anarchist movement had been severely weakened. The 1960s witnessed a revival of anarchism, likely caused by a perceived failure of Marxism–Leninism and tensions built by the Cold War. During this time, anarchism found a presence in other movements critical towards both capitalism and the state such as the anti-nuclear, environmental, and peace movements, the counterculture of the 1960s, and the New Left. It also saw a transition from its previous revolutionary nature to provocative anti-capitalist reformism. Anarchism became associated with punk subculture as exemplified by bands such as Crass and the Sex Pistols. The established feminist tendencies of anarcha-feminism returned with vigour during the second wave of feminism. Black anarchism began to take form at this time and influenced anarchism's move from a Eurocentric demographic. This coincided with its failure to gain traction in Northern Europe and its unprecedented height in Latin America.
Around the turn of the 21st century, anarchism grew in popularity and influence within anti-capitalist, anti-war and anti-globalisation movements. Anarchists became known for their involvement in protests against the World Trade Organization (WTO), the Group of Eight and the World Economic Forum. During the protests, ad hoc leaderless anonymous cadres known as black blocs engaged in rioting, property destruction and violent confrontations with the police. Other organisational tactics pioneered at this time include affinity groups, security culture and the use of decentralised technologies such as the Internet. A significant event of this period was the confrontations at the 1999 Seattle WTO conference. Anarchist ideas have been influential in the development of the Zapatistas in Mexico and the Democratic Federation of Northern Syria, more commonly known as Rojava, a de facto autonomous region in northern Syria.
While having revolutionary aspirations, many forms of anarchism are not confrontational nowadays. Instead, they try to build an alternative way of social organization, based on mutual interdependence and voluntary cooperation. Scholar Carissa Honeywell takes the example of the Food Not Bombs group of collectives to highlight some features of how anarchist groups work: direct action, working together, and solidarity with those left behind. While doing so, these groups draw attention to the rising rates of world hunger and suggest policies to tackle it, ranging from de-funding the arms industry to addressing Monsanto's seed-saving policies and patents, helping farmers, and countering the commodification of food and housing. Honeywell also emphasizes that contemporary anarchists are interested in the flourishing not only of humans, but of non-humans and the environment as well. Honeywell argues that the escalation of problems such as continuous wars and world poverty shows that the current framework not only cannot solve those pressing problems for humanity but is a causal factor as well, resulting in the rejection of representative democracy and the state as a whole.
Thought
Anarchist schools of thought have been generally grouped into two main historical traditions, social anarchism and individualist anarchism, owing to their different origins, values and evolution. The individualist current emphasises negative liberty in opposing restraints upon the free individual, while the social current emphasises positive liberty in aiming to achieve the free potential of society through equality and social ownership. In a chronological sense, anarchism can be segmented by the classical currents of the late 19th century and the post-classical currents (anarcha-feminism, green anarchism, and post-anarchism) developed thereafter.
Beyond the specific factions of anarchist movements which constitute political anarchism lies philosophical anarchism, which holds that the state lacks moral legitimacy, without necessarily accepting the imperative of revolution to eliminate it. A component especially of individualist anarchism, philosophical anarchism may tolerate the existence of a minimal state but claims that citizens have no moral obligation to obey government when it conflicts with individual autonomy. Anarchism pays significant attention to moral arguments since ethics have a central role in anarchist philosophy. Anarchism's emphasis on anti-capitalism, egalitarianism, and the extension of community and individuality sets it apart from anarcho-capitalism and other types of economic libertarianism.
Anarchism is usually placed on the far-left of the political spectrum. Much of its economics and legal philosophy reflect anti-authoritarian, anti-statist, libertarian, and radical interpretations of left-wing and socialist politics such as collectivism, communism, individualism, mutualism, and syndicalism, among other libertarian socialist economic theories. As anarchism does not offer a fixed body of doctrine from a single particular worldview, many anarchist types and traditions exist and varieties of anarchy diverge widely. One reaction against sectarianism within the anarchist milieu was anarchism without adjectives, a call for toleration and unity among anarchists first adopted by Fernando Tarrida del Mármol in 1889 in response to the bitter debates of anarchist theory at the time. Some anarchists have espoused political nihilism. Despite separation, the various anarchist schools of thought are not seen as distinct entities but rather as tendencies that intermingle and are connected through a set of uniform principles such as individual and local autonomy, mutual aid, network organisation, communal democracy, justified authority and decentralisation.
Classical
The inceptive currents of classical anarchism were mutualism and individualism. They were followed by the major currents of social anarchism (collectivist, communist and syndicalist). They differ on organisational and economic aspects of their ideal society.
Mutualism is an 18th-century economic theory that was developed into anarchist theory by Pierre-Joseph Proudhon. Its aims include "abolishing the state", reciprocity, free association, voluntary contract, federation and monetary reform of both credit and currency that would be regulated by a bank of the people. Mutualism has been retrospectively characterised as ideologically situated between individualist and collectivist forms of anarchism. In What Is Property? (1840), Proudhon first characterised his goal as a "third form of society, the synthesis of communism and property." Collectivist anarchism is a revolutionary socialist form of anarchism commonly associated with Mikhail Bakunin. Collectivist anarchists advocate collective ownership of the means of production which is theorised to be achieved through violent revolution and that workers be paid according to time worked, rather than goods being distributed according to need as in communism. Collectivist anarchism arose alongside Marxism but rejected the dictatorship of the proletariat despite the stated Marxist goal of a collectivist stateless society.
Anarcho-communism is a theory of anarchism that advocates a communist society with common ownership of the means of production, held by a federal network of voluntary associations, with production and consumption based on the guiding principle "From each according to his ability, to each according to his need." Anarcho-communism developed from radical socialist currents after the French Revolution but was first formulated as such in the Italian section of the First International. It was later expanded upon in the theoretical work of Peter Kropotkin, whose specific style would go on to become the dominant view among anarchists by the late 19th century. Anarcho-syndicalism is a branch of anarchism that views labour syndicates as a potential force for revolutionary social change, replacing capitalism and the state with a new society democratically self-managed by workers. The basic principles of anarcho-syndicalism are direct action, workers' solidarity and workers' self-management.
Individualist anarchism is a set of several traditions of thought within the anarchist movement that emphasise the individual and their will over any kinds of external determinants. Early influences on individualist forms of anarchism include William Godwin, Max Stirner, and Henry David Thoreau. Through many countries, individualist anarchism attracted a small yet diverse following of Bohemian artists and intellectuals as well as young anarchist outlaws in what became known as illegalism and individual reclamation.
Post-classical and contemporary
Anarchist principles undergird contemporary radical social movements of the left. Interest in the anarchist movement developed alongside momentum in the anti-globalisation movement, whose leading activist networks were anarchist in orientation. As the movement shaped 21st century radicalism, wider embrace of anarchist principles signaled a revival of interest. Anarchism has continued to generate many philosophies and movements, at times eclectic, drawing upon various sources and combining disparate concepts to create new philosophical approaches. The anti-capitalist tradition of classical anarchism has remained prominent within contemporary currents.
Contemporary news coverage which emphasizes black bloc demonstrations has reinforced anarchism's historical association with chaos and violence. Its publicity has also led more scholars in fields such as anthropology and history to engage with the anarchist movement, although contemporary anarchism favours actions over academic theory. Various anarchist groups, tendencies, and schools of thought exist today, making it difficult to describe the contemporary anarchist movement. While theorists and activists have established "relatively stable constellations of anarchist principles", there is no consensus on which principles are core and commentators describe multiple anarchisms, rather than a singular anarchism, in which common principles are shared between schools of anarchism while each group prioritizes those principles differently. Gender equality can be a common principle, although it ranks as a higher priority to anarcha-feminists than anarcho-communists.
Anarchists are generally committed against coercive authority in all forms, namely "all centralized and hierarchical forms of government (e.g., monarchy, representative democracy, state socialism, etc.), economic class systems (e.g., capitalism, Bolshevism, feudalism, slavery, etc.), autocratic religions (e.g., fundamentalist Islam, Roman Catholicism, etc.), patriarchy, heterosexism, white supremacy, and imperialism." Anarchist schools disagree on the methods by which these forms should be opposed. The principle of equal liberty is closer to anarchist political ethics in that it transcends both the liberal and socialist traditions. This entails that liberty and equality cannot be implemented within the state, resulting in the questioning of all forms of domination and hierarchy.
Tactics
Anarchists' tactics take various forms but in general serve two major goals, namely, to first oppose the Establishment and secondly to promote anarchist ethics and reflect an anarchist vision of society, illustrating the unity of means and ends. A broad categorisation can be made between aims to destroy oppressive states and institutions by revolutionary means on one hand and aims to change society through evolutionary means on the other. Evolutionary tactics reject violence and take a gradual approach to anarchist aims, although there is significant overlap between the two.
Anarchist tactics have shifted during the course of the last century. Anarchists during the early 20th century focused more on strikes and militancy while contemporary anarchists use a broader array of approaches.
Classical era
During the classical era, anarchists had a militant tendency. Not only did they confront state armed forces, as in Spain and Ukraine, but some of them also employed terrorism as propaganda of the deed. Assassination attempts were carried out against heads of state, some of which were successful. Anarchists also took part in revolutions. Many anarchists, especially the Galleanists, believed that these attempts would be the impetus for a revolution against capitalism and the state. Many of these attacks were done by individual assailants and the majority took place in the late 1870s, the early 1880s and the 1890s, with some still occurring in the early 1900s. Their decrease in prevalence was the result of expanded judicial powers and the targeting and cataloguing of anarchists by state institutions.
Anarchist perspectives towards violence have always been controversial. Anarcho-pacifists advocate nonviolent means to achieve their stateless, nonviolent ends. Other anarchist groups advocate direct action, a tactic which can include acts of sabotage or terrorism. This attitude was quite prominent a century ago, when anarchists, seeing the state as a tyrant, believed that they had every right to oppose its oppression by any means possible. Emma Goldman and Errico Malatesta, who were proponents of limited use of violence, stated that violence is merely a necessary evil, a reaction to state violence.
Anarchists took an active role in strike actions, although they tended to be antipathetic to formal syndicalism, seeing it as reformist; they instead saw strikes as part of the movement which sought to overthrow the state and capitalism. Anarchists also reinforced their propaganda within the arts, and some of them practised naturism and nudism. Those anarchists also built communities which were based on friendship and were involved in the news media.
Revolutionary
In the current era, Italian anarchist Alfredo Bonanno, a proponent of insurrectionary anarchism, has revived the debate on violence by rejecting the nonviolent tactics adopted since the late 19th century by Kropotkin and other prominent anarchists afterwards. Both Bonanno and the French group The Invisible Committee advocate for small, informal affinity groups, where each member is responsible for their own actions but works together to bring down oppression, utilising sabotage and other violent means against the state, capitalism, and other enemies. Members of The Invisible Committee were arrested in 2008 on various charges, terrorism included.
Overall, contemporary anarchists are much less violent and militant than their ideological ancestors. They mostly engage in confronting the police during demonstrations and riots, especially in countries such as Canada, Greece, and Mexico. Militant black bloc protest groups are known for clashing with the police; however, anarchists not only clash with state operators, they also engage in the struggle against fascists and racists, taking anti-fascist action and mobilizing to prevent hate rallies from happening.
Evolutionary
Anarchists commonly employ direct action. This can take the form of disrupting and protesting against unjust hierarchy, or the form of self-managing their lives through the creation of counter-institutions such as communes and non-hierarchical collectives. Decision-making is often handled in an anti-authoritarian way, with everyone having equal say in each decision, an approach known as horizontalism. Contemporary-era anarchists have been engaging with various grassroots movements that are more or less based on horizontalism, although not explicitly anarchist, respecting personal autonomy and participating in mass activism such as strikes and demonstrations. In contrast with the big-A anarchism of the classical era, the newly coined term small-a anarchism signals their tendency not to base their thoughts and actions on classical-era anarchism or to refer to classical anarchists such as Peter Kropotkin and Pierre-Joseph Proudhon to justify their opinions. Those anarchists would rather base their thought and praxis on their own experience, which they later theorise.
The decision-making process of small anarchist affinity groups plays a significant tactical role. Anarchists have employed various methods in order to build a rough consensus among members of their group without the need of a leader or a leading group. One way is for an individual from the group to play the role of facilitator to help achieve a consensus without taking part in the discussion themselves or promoting a specific point. Minorities usually accept rough consensus, except when they feel the proposal contradicts anarchist ethics, goals and values. Anarchists usually form small groups (5–20 individuals) to enhance autonomy and friendships among their members. These kinds of groups more often than not interconnect with each other, forming larger networks. Anarchists still support and participate in strikes, especially wildcat strikes as these are leaderless strikes not organised centrally by a syndicate.
As in the past, newspapers and journals are used, and anarchists have gone online on the World Wide Web to spread their message. Because of the distributional and other difficulties of print media, anarchists have found it easier to create websites, hosting electronic libraries and other portals. Anarchists were also involved in developing various software that is available for free. The way these hacktivists work to develop and distribute software resembles anarchist ideals, especially when it comes to preserving users' privacy from state surveillance.
Anarchists organize themselves to squat and reclaim public spaces. During important events such as protests, and when spaces are being occupied, they are often called Temporary Autonomous Zones (TAZ), spaces where art, poetry, and surrealism are blended to display the anarchist ideal. As seen by anarchists, squatting is a way to regain urban space from the capitalist market, serving pragmatical needs and also being an exemplary direct action. Acquiring space enables anarchists to experiment with their ideas and build social bonds. These tactics, along with various forms of protest at highly symbolic events, make up a carnivalesque atmosphere that is part of contemporary anarchist vibrancy, although not all anarchists share the same attitudes towards them.
Key issues
As anarchism is a philosophy that embodies many diverse attitudes, tendencies, and schools of thought, disagreement over questions of values, ideology, and tactics is common. Its diversity has led to widely different uses of identical terms among different anarchist traditions, which has created a number of definitional concerns in anarchist theory. The compatibility of capitalism, nationalism, and religion with anarchism is widely disputed, and anarchism enjoys complex relationships with ideologies such as communism, collectivism, Marxism, and trade unionism. Anarchists may be motivated by humanism, divine authority, enlightened self-interest, veganism, or any number of alternative ethical doctrines. Phenomena such as civilisation, technology (e.g. within anarcho-primitivism), and the democratic process may be sharply criticised within some anarchist tendencies and simultaneously lauded in others.
The state
Objection to the state and its institutions is a sine qua non of anarchism. Anarchists consider the state as a tool of domination and believe it to be illegitimate regardless of its political tendencies. Instead of people being able to control the aspects of their life, major decisions are taken by a small elite. Authority ultimately rests solely on power, regardless of whether that power is open or covert, as it still has the ability to coerce people. Another anarchist argument against states is that the people constituting a government, even the most altruistic among officials, will unavoidably seek to gain more power, leading to corruption. Anarchists consider the idea that the state is the collective will of the people to be an unachievable fiction due to the fact that the ruling class is distinct from the rest of society.
Specific anarchist attitudes towards the state vary. Robert Paul Wolff believed that the tension between authority and autonomy would mean the state could never be legitimate. Bakunin saw the state as meaning "coercion, domination by means of coercion, camouflaged if possible but unceremonious and overt if need be." A. John Simmons and Leslie Green, who leaned toward philosophical anarchism, believed that the state could be legitimate if it is governed by consensus, although they saw this as highly unlikely. Beliefs on how to abolish the state also differ.
Gender, sexuality, and free love
As gender and sexuality carry with them dynamics of hierarchy, many anarchists address, analyse, and oppose the suppression of one's autonomy imposed by gender roles.
Sexuality was not often discussed by classical anarchists, but the few who did felt that an anarchist society would lead to sexuality developing naturally. Sexual violence was a concern for anarchists such as Benjamin Tucker, who opposed age of consent laws, believing they would benefit predatory men. A historical current that arose and flourished between 1890 and 1920 within anarchism was free love. In contemporary anarchism, this current survives as a tendency to support polyamory, relationship anarchy, and queer anarchism. Free love advocates were against marriage, which they saw as a way of men imposing authority over women, largely because marriage law greatly favoured the power of men. The notion of free love was much broader and included a critique of the established order that limited women's sexual freedom and pleasure. Those free love movements contributed to the establishment of communal houses, where large groups of travelers, anarchists and other activists slept in beds together. Free love had roots both in Europe and the United States; however, some anarchists struggled with the jealousy that arose from free love. Anarchist feminists were advocates of free love, against marriage, and pro-choice (utilising a contemporary term), and had a similar agenda. Anarchist and non-anarchist feminists differed on suffrage but were supportive of one another.
During the second half of the 20th century, anarchism intermingled with the second wave of feminism, radicalising some currents of the feminist movement and being influenced as well. By the latest decades of the 20th century, anarchists and feminists were advocating for the rights and autonomy of women, gays, queers and other marginalised groups, with some feminist thinkers suggesting a fusion of the two currents. With the third wave of feminism, sexual identity and compulsory heterosexuality became a subject of study for anarchists, yielding a post-structuralist critique of sexual normality. Some anarchists distanced themselves from this line of thinking, suggesting that it leaned towards an individualism that was dropping the cause of social liberation.
Education
The interest of anarchists in education stretches back to the first emergence of classical anarchism. Anarchists consider proper education, one which sets the foundations of the future autonomy of the individual and the society, to be an act of mutual aid. Anarchist writers such as William Godwin (Political Justice) and Max Stirner ("The False Principle of Our Education") attacked both state education and private education as another means by which the ruling class replicate their privileges.
In 1901, Catalan anarchist and free thinker Francisco Ferrer established the Escuela Moderna in Barcelona as an opposition to the established education system, which was dictated largely by the Catholic Church. Ferrer's approach was secular, rejecting both state and church involvement in the educational process whilst giving pupils large amounts of autonomy in planning their work and attendance. Ferrer aimed to educate the working class and explicitly sought to foster class consciousness among students. The school closed after constant harassment by the state and Ferrer was later arrested. Nonetheless, his ideas formed the inspiration for a series of modern schools around the world. Christian anarchist Leo Tolstoy, who published the essay Education and Culture, also established a similar school with its founding principle being that "for education to be effective it had to be free." In a similar vein, A. S. Neill founded what became the Summerhill School in 1921, likewise declaring it free from coercion.
Anarchist education is based largely on the idea that a child's right to develop freely and without manipulation ought to be respected and that rationality would lead children to morally good conclusions; however, there has been little consensus among anarchist figures as to what constitutes manipulation. Ferrer believed that moral indoctrination was necessary and explicitly taught pupils that equality, liberty and social justice were not possible under capitalism, along with other critiques of government and nationalism.
Late 20th century and contemporary anarchist writers (Paul Goodman, Herbert Read, and Colin Ward) intensified and expanded the anarchist critique of state education, largely focusing on the need for a system that focuses on children's creativity rather than on their ability to attain a career or participate in consumerism as part of a consumer society. Contemporary anarchists such as Ward claim that state education serves to perpetuate socioeconomic inequality.
While few anarchist education institutions have survived to the modern day, major tenets of anarchist schools, among them respect for child autonomy and relying on reasoning rather than indoctrination as a teaching method, have spread among mainstream educational institutions. Judith Suissa names three schools as explicitly anarchist schools, namely the Free Skool Santa Cruz in the United States, which is part of a wider American-Canadian network of schools, the Self-Managed Learning College in Brighton, England, and the Paideia School in Spain.
The arts
The connection between anarchism and art was quite profound during the classical era of anarchism, especially among artistic currents that were developing during that era such as the futurists and surrealists. In literature, anarchism was mostly associated with the New Apocalyptics and the neo-romanticism movement. In music, anarchism has been associated with music scenes such as punk. Anarchists such as Leo Tolstoy and Herbert Read stated that the border between the artist and the non-artist, what separates art from a daily act, is a construct produced by the alienation caused by capitalism, and that it prevents humans from living a joyful life.
Other anarchists advocated for or used art as a means to achieve anarchist ends. In his book Breaking the Spell: A History of Anarchist Filmmakers, Videotape Guerrillas, and Digital Ninjas, Chris Robé claims that "anarchist-inflected practices have increasingly structured movement-based video activism." Throughout the 20th century, many prominent anarchists (Peter Kropotkin, Emma Goldman, Gustav Landauer and Camillo Berneri) and publications such as Anarchy wrote about matters pertaining to the arts.
Three overlapping properties made art useful to anarchists. It could depict a critique of existing society and hierarchies, serve as a prefigurative tool to reflect the anarchist ideal society, and even turn into a means of direct action, such as in protests. As it appeals to both emotion and reason, art could appeal to the whole human and have a powerful effect. The 19th-century neo-impressionist movement had an ecological aesthetic and offered an example of an anarchist perception of the road towards socialism. In Les Châtaigniers à Osny by anarchist painter Camille Pissarro, the blending of aesthetic and social harmony prefigures an ideal anarchistic agrarian community.
Criticism
The most common critique of anarchism is the assertion that humans cannot self-govern and so a state is necessary for human survival. Philosopher Bertrand Russell supported this critique, stating that "[p]eace and war, tariffs, regulations of sanitary conditions and the sale of noxious drugs, the preservation of a just system of distribution: these, among others, are functions which could hardly be performed in a community in which there was no central government." Another common criticism of anarchism is that it fits a world of isolation in which only sufficiently small entities can be self-governing; a response would be that major anarchist thinkers advocated anarchist federalism.
Another criticism of anarchism is the belief that it is inherently unstable: that an anarchist society would inevitably evolve back into a state. Thomas Hobbes and other early social contract theorists argued that the state emerges in response to natural anarchy in order to protect the people's interests and keep order. Philosopher Robert Nozick argued that a "night-watchman state", or minarchy, would emerge from anarchy through the process of an invisible hand, in which people would exercise their liberty and buy protection from protection agencies, evolving into a minimal state. Anarchists reject these criticisms by arguing that humans in a state of nature would not just be in a state of war. Anarcho-primitivists in particular argue that humans were better off in a state of nature in small tribes living close to the land, while anarchists in general argue that the negatives of state organization, such as hierarchies, monopolies and inequality, outweigh the benefits.
Philosophy lecturer Andrew G. Fiala composed a list of common arguments against anarchism, which includes critiques such as that anarchism is innately related to violence and destruction, not only in the pragmatic world, such as at protests, but in the world of ethics as well. Secondly, anarchism is evaluated as unfeasible or utopian since the state cannot be defeated practically; this line of argument most often calls for political action within the system to reform it. The third argument is that anarchism is self-contradictory as a ruling theory that has no ruling theory. Anarchism also calls for collective action whilst endorsing the autonomy of the individual, hence collective action becomes impossible. Lastly, Fiala mentions the critique that philosophical anarchism is ineffective (all talk and thought) while capitalism and the bourgeois class remain strong.
Philosophical anarchism has met the criticism of members of academia following the release of pro-anarchist books such as A. John Simmons' Moral Principles and Political Obligations. Law professor William A. Edmundson authored an essay to argue against three major philosophical anarchist principles which he finds fallacious. Edmundson says that while the individual does not owe the state a duty of obedience, this does not imply that anarchism is the inevitable conclusion and the state is still morally legitimate. In The Problem of Political Authority, Michael Huemer defends philosophical anarchism, claiming that "political authority is a moral illusion."
One of the earliest criticisms is that anarchism defies and fails to understand the biological inclination to authority. Joseph Raz states that the acceptance of authority implies the belief that following its instructions will afford more success, and he holds that this applies whether the authority's instructions turn out to be successful or mistaken. Anarchists reject this criticism, arguing that challenging or disobeying authority does not entail giving up its advantages, such as acknowledging authorities like doctors or lawyers as reliable, nor does it involve a complete surrender of independent judgment. Anarchist perceptions of human nature, rejection of the state, and commitment to social revolution have been criticised by academics as naive, overly simplistic, and unrealistic, respectively. Classical anarchism has been criticised for relying too heavily on the belief that the abolition of the state will lead to human cooperation prospering.
Friedrich Engels, considered to be one of the principal founders of Marxism, criticised anarchism's anti-authoritarianism as inherently counter-revolutionary because in his view a revolution is by itself authoritarian. Academic John Molyneux writes in his book Anarchism: A Marxist Criticism that "anarchism cannot win", believing that it lacks the ability to properly implement its ideas. The Marxist criticism of anarchism is that it has a utopian character, because it requires all individuals to hold anarchist views and values: in the Marxist view, anarchism's essence is the idea that a social order would follow directly from this human ideal and out of the free will of every individual. Marxists state that this contradiction is responsible for anarchists' inability to act, whereas in the anarchist vision the conflict between liberty and equality is resolved through coexistence and intertwining.
See also
Anarchism by country
Governance without government
List of anarchist political ideologies
List of books about anarchism
External links
Anarchy Archives – an online research center on the history and theory of anarchism.
https://en.wikipedia.org/wiki/Albedo
Albedo is the fraction of sunlight that is diffusely reflected by a body. It is measured on a scale from 0 (corresponding to a black body that absorbs all incident radiation) to 1 (corresponding to a body that reflects all incident radiation).
Surface albedo is defined as the ratio of radiosity Je to the irradiance Ee (flux per unit area) received by a surface. The proportion reflected is not only determined by properties of the surface itself, but also by the spectral and angular distribution of solar radiation reaching the Earth's surface. These factors vary with atmospheric composition, geographic location, and time (see position of the Sun). While directional-hemispherical reflectance is calculated for a single angle of incidence (i.e., for a given position of the Sun), albedo is the directional integration of reflectance over all solar angles in a given period. The temporal resolution may range from seconds (as obtained from flux measurements) to daily, monthly, or annual averages.
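As a minimal sketch of this ratio definition, the snippet below computes surface albedo from a pair of shortwave flux measurements, as an albedometer (two opposed pyranometers) would provide; the function name and the sample readings are illustrative assumptions, not values from the text.

```python
def surface_albedo(upwelling_wm2: float, downwelling_wm2: float) -> float:
    """Surface albedo as the ratio of radiosity J_e (reflected, upwelling
    shortwave flux) to irradiance E_e (incident, downwelling shortwave
    flux), both in W/m^2; the result is dimensionless, between 0 and 1."""
    if downwelling_wm2 <= 0:
        raise ValueError("irradiance must be positive (daytime measurement)")
    return upwelling_wm2 / downwelling_wm2

# Hypothetical midday readings over a vegetated surface:
print(surface_albedo(upwelling_wm2=180.0, downwelling_wm2=750.0))  # 0.24
```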
Unless given for a specific wavelength (spectral albedo), albedo refers to the entire spectrum of solar radiation. Due to measurement constraints, it is often given for the spectrum in which most solar energy reaches the surface (between 0.3 and 3 μm). This spectrum includes visible light (0.4–0.7 μm), which explains why surfaces with a low albedo appear dark (e.g., trees absorb most radiation), whereas surfaces with a high albedo appear bright (e.g., snow reflects most radiation).
Ice–albedo feedback is a positive feedback climate process where a change in the area of ice caps, glaciers, and sea ice alters the albedo and surface temperature of a planet. Ice is very reflective, so it reflects far more solar energy back to space than other land-cover types or open water. Ice–albedo feedback plays an important role in global climate change.
Albedo is an important concept in climatology, astronomy, and environmental management. The average albedo of the Earth from the upper atmosphere, its planetary albedo, is 30–35% because of cloud cover, but it varies widely locally across the surface because of different geological and environmental features.
Terrestrial albedo
Any albedo in visible light falls within a range of about 0.9 for fresh snow to about 0.04 for charcoal, one of the darkest substances. Deeply shadowed cavities can achieve an effective albedo approaching the zero of a black body. When seen from a distance, the ocean surface has a low albedo, as do most forests, whereas desert areas have some of the highest albedos among landforms. Most land areas are in an albedo range of 0.1 to 0.4. The average albedo of Earth is about 0.3. This is far higher than for the ocean primarily because of the contribution of clouds.
Earth's surface albedo is regularly estimated via Earth observation satellite sensors such as NASA's MODIS instruments on board the Terra and Aqua satellites, and the CERES instrument on the Suomi NPP and JPSS. As the amount of reflected radiation is only measured for a single direction by satellite, not all directions, a mathematical model is used to translate a sample set of satellite reflectance measurements into estimates of directional-hemispherical reflectance and bi-hemispherical reflectance. These calculations are based on the bidirectional reflectance distribution function (BRDF), which describes how the reflectance of a given surface depends on the view angle of the observer and the solar angle. The BRDF can facilitate translations of observations of reflectance into albedo.
Earth's average surface temperature due to its albedo and the greenhouse effect is currently about 15 °C. If Earth were frozen entirely (and hence more reflective), the average temperature of the planet would drop below −40 °C. If only the continental land masses became covered by glaciers, the mean temperature of the planet would drop to about 0 °C. In contrast, if the entire Earth were covered by water – a so-called ocean planet – the average temperature on the planet would rise to almost 27 °C.
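To make the leverage of albedo in this energy balance concrete, here is a small sketch using the standard zero-dimensional radiative-equilibrium relation T = [S(1 − α)/(4σ)]^(1/4); the solar-constant value and the sample albedos are assumptions for illustration, and the greenhouse effect is deliberately omitted, which is why the outputs sit below the surface temperatures quoted above.

```python
# Zero-dimensional radiative equilibrium: absorbed solar flux S*(1 - albedo)/4
# balances emitted thermal flux sigma*T^4, so T = (S*(1 - albedo)/(4*sigma))**0.25.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0        # solar constant at Earth's orbit, W m^-2 (assumed value)

def equilibrium_temperature(albedo: float) -> float:
    return (S * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

for albedo in (0.1, 0.3, 0.6):  # ocean-like, present-day Earth, ice-covered
    print(f"albedo {albedo:.1f}: {equilibrium_temperature(albedo):.0f} K")
# albedo 0.3 yields ~255 K; the gap to the observed ~288 K surface
# temperature is closed by the greenhouse effect.
```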
In 2021, scientists reported that Earth dimmed by about 0.5% over two decades (1998–2017), as measured by earthshine using modern photometric techniques. This dimming may have been co-caused by climate change, and it may in turn contribute substantially to global warming, since a dimmer Earth reflects less sunlight back to space. However, the link to climate change has not been explored to date and it is unclear whether or not this represents an ongoing trend.
White-sky, black-sky, and blue-sky albedo
For land surfaces, it has been shown that the albedo at a particular solar zenith angle $\theta_i$ can be approximated by the proportionate sum of two terms:
the directional-hemispherical reflectance at that solar zenith angle, $\bar{\alpha}(\theta_i)$, sometimes referred to as black-sky albedo, and
the bi-hemispherical reflectance, $\bar{\bar{\alpha}}$, sometimes referred to as white-sky albedo,
with $1 - D$ being the proportion of direct radiation from a given solar angle and $D$ being the proportion of diffuse illumination, so that the actual albedo $\alpha$ (also called blue-sky albedo) can then be given as:

$$\alpha = (1 - D)\,\bar{\alpha}(\theta_i) + D\,\bar{\bar{\alpha}}$$
This formula is important because it allows the albedo to be calculated for any given illumination conditions from a knowledge of the intrinsic properties of the surface.
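A minimal sketch of this blending in code, assuming the black-sky and white-sky albedos and the diffuse fraction D are already known (e.g. from a BRDF model and an atmospheric model); the sample numbers are illustrative only.

```python
def blue_sky_albedo(black_sky: float, white_sky: float, diffuse_fraction: float) -> float:
    """Actual (blue-sky) albedo as the proportionate sum of the
    directional-hemispherical (black-sky) and bi-hemispherical (white-sky)
    reflectances, weighted by the diffuse fraction D of the illumination."""
    d = diffuse_fraction
    return (1.0 - d) * black_sky + d * white_sky

# Example: black-sky albedo 0.20 at the current solar zenith angle,
# white-sky albedo 0.25, under 30% diffuse illumination:
print(blue_sky_albedo(0.20, 0.25, 0.30))  # 0.215
```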
Human activities
Human activities (e.g., deforestation, farming, and urbanization) change the albedo of various areas around the globe. As per Campra et al., human impacts on "the physical properties of the land surface can perturb the climate by altering the Earth’s radiative energy balance" even on a small scale or when undetected by satellites.
The tens of thousands of hectares of greenhouses in Almería, Spain form a large expanse of whitened plastic roofs. A 2008 study found that this anthropogenic change lowered the local surface temperature of the high-albedo area, although changes were localized. A follow-up study found that "CO2-eq. emissions associated to changes in surface albedo are a consequence of land transformation" and can reduce surface temperature increases associated with climate change.
It has been found that urbanization generally decreases albedo (commonly being 0.01–0.02 lower than adjacent croplands), which contributes to global warming. Deliberately increasing albedo in urban areas can mitigate urban heat island. Ouyang et al. estimated that, on a global scale, "an albedo increase of 0.1 in worldwide urban areas would result in a cooling effect that is equivalent to absorbing ~44 Gt of CO2 emissions."
Intentionally enhancing the albedo of the Earth's surface, along with its daytime thermal emittance, has been proposed as a solar radiation management strategy, known as passive daytime radiative cooling (PDRC), to mitigate energy crises and global warming. Efforts toward widespread implementation of PDRCs may focus on maximizing the albedo of surfaces from very low to high values, so long as a thermal emittance of at least 90% can be achieved.
Examples of terrestrial albedo effects
Illumination
Albedo is not directly dependent on illumination because changing the amount of incoming light proportionally changes the amount of reflected light, except in circumstances where a change in illumination induces a change in the Earth's surface at that location (e.g. through melting of reflective ice). That said, albedo and illumination both vary by latitude. Albedo is highest near the poles and lowest in the subtropics, with a local maximum in the tropics.
Insolation effects
The intensity of albedo temperature effects depends on the amount of albedo and the level of local insolation (solar irradiance); high albedo areas in the Arctic and Antarctic regions are cold due to low insolation, whereas areas such as the Sahara Desert, which also have a relatively high albedo, will be hotter due to high insolation. Tropical and sub-tropical rainforest areas have low albedo, and are much hotter than their temperate forest counterparts, which have lower insolation. Because insolation plays such a big role in the heating and cooling effects of albedo, high insolation areas like the tropics will tend to show a more pronounced fluctuation in local temperature when local albedo changes.
Arctic regions notably release more heat back into space than they absorb, effectively cooling the Earth. This has been a concern since Arctic ice and snow have been melting at higher rates due to higher temperatures, creating regions in the Arctic that are notably darker (being open water or ground, which are darker in colour) and reflect less heat back into space. This feedback loop results in a reduced albedo effect.
Climate and weather
Albedo affects climate by determining how much radiation a planet absorbs. The uneven heating of Earth from albedo variations between land, ice, or ocean surfaces can drive weather.
The response of the climate system to an initial forcing is modified by feedbacks: increased by "self-reinforcing" or "positive" feedbacks and reduced by "balancing" or "negative" feedbacks. The main reinforcing feedbacks are the water-vapour feedback, the ice–albedo feedback, and the net effect of clouds.
Albedo–temperature feedback
When an area's albedo changes due to snowfall, a snow–temperature feedback results. A layer of snowfall increases local albedo, reflecting away sunlight and leading to local cooling. In principle, if no outside temperature change affects this area (e.g., a warm air mass), the raised albedo and lower temperature would maintain the current snow and invite further snowfall, deepening the snow–temperature feedback. However, because local weather is dynamic due to the change of seasons, eventually warm air masses and a more direct angle of sunlight (higher insolation) cause melting. When the melted area reveals surfaces with lower albedo, such as grass, soil, or ocean, the effect is reversed: the darkening surface lowers albedo, increasing local temperatures, which induces more melting, thus reducing the albedo further and resulting in still more heating.
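As a toy illustration of this self-reinforcing loop (a sketch, not a climate model), the snippet below iterates a zero-dimensional energy balance in which the albedo itself depends on temperature, switching between a snow-covered and a snow-free value; every parameter here is an assumption chosen for illustration.

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0         # solar constant, W m^-2 (assumed)
GREENHOUSE = 33.0  # assumed fixed greenhouse warming, K

def albedo(temp_k: float) -> float:
    # Invented step dependence: reflective when snow-covered (below freezing),
    # darker once the snow has melted.
    return 0.6 if temp_k < 273.0 else 0.3

def step(temp_k: float) -> float:
    # Relax the temperature toward the equilibrium implied by the current albedo.
    t_eq = (S * (1.0 - albedo(temp_k)) / (4.0 * SIGMA)) ** 0.25 + GREENHOUSE
    return temp_k + 0.5 * (t_eq - temp_k)

t = 290.0  # start warm and snow-free
for _ in range(20):
    t = step(t)
print(round(t, 1))  # settles near the warm, low-albedo state (~288 K)
# Starting below freezing (e.g. t = 260.0) instead locks the system into a
# cold, high-albedo state: the feedback reinforces whichever side it starts on.
```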
Snow
Snow albedo is highly variable, ranging from as high as 0.9 for freshly fallen snow, to about 0.4 for melting snow, and as low as 0.2 for dirty snow. Over Antarctica snow albedo averages a little more than 0.8. If a marginally snow-covered area warms, snow tends to melt, lowering the albedo, and hence leading to more snowmelt because more radiation is being absorbed by the snowpack (the ice–albedo positive feedback).
Just as fresh snow has a higher albedo than does dirty snow, the albedo of snow-covered sea ice is far higher than that of sea water. Sea water absorbs more solar radiation than would the same surface covered with reflective snow. When sea ice melts, either due to a rise in sea temperature or in response to increased solar radiation from above, the snow-covered surface is reduced, and more surface of sea water is exposed, so the rate of energy absorption increases. The extra absorbed energy heats the sea water, which in turn increases the rate at which sea ice melts. As with the preceding example of snowmelt, the process of melting of sea ice is thus another example of a positive feedback. Both positive feedback loops have long been recognized as important for global warming.
Cryoconite, powdery windblown dust containing soot, sometimes reduces albedo on glaciers and ice sheets.
The dynamical nature of albedo in response to positive feedback, together with the effects of small errors in the measurement of albedo, can lead to large errors in energy estimates. To reduce such errors, it is therefore important to measure the albedo of snow-covered areas through remote sensing techniques rather than applying a single value over broad regions.
Small-scale effects
Albedo works on a smaller scale, too. In sunlight, dark clothes absorb more heat and light-coloured clothes reflect it better, thus allowing some control over body temperature by exploiting the albedo effect of the colour of external clothing.
Solar photovoltaic effects
Albedo can affect the electrical energy output of solar photovoltaic devices. For example, the effect of a spectrally responsive albedo is seen in the difference between the spectrally weighted albedos of photovoltaic technologies based on hydrogenated amorphous silicon (a-Si:H) and on crystalline silicon (c-Si) and the predictions of traditional spectrally integrated albedo models. Research showed impacts of over 10% for vertically (90°) mounted systems, but such effects were substantially lower for systems with lower surface tilts. Spectral albedo strongly affects the performance of bifacial solar cells, where rear-surface performance gains of over 20% have been observed for c-Si cells installed above healthy vegetation. An analysis of the bias due to the specular reflectivity of 22 commonly occurring surface materials (both human-made and natural) provided effective albedo values for simulating the performance of seven photovoltaic materials mounted on three common photovoltaic system topologies: industrial (solar farms), commercial flat rooftops, and residential pitched-roof applications.
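The idea behind a spectrally weighted albedo can be sketched as follows: instead of averaging the surface reflectance uniformly, it is weighted by the incident spectrum and the spectral response of the cell, so the same surface can present a different effective albedo to different photovoltaic technologies. All curves in this sketch are invented placeholders; a real calculation would use measured reflectance spectra and a reference solar spectrum such as AM1.5.

    # Spectrally weighted albedo vs. a naive broadband average.
    # All curves below are invented placeholders for illustration.
    wavelengths_nm      = [400, 500, 600, 700, 800, 900, 1000, 1100]
    surface_reflectance = [0.10, 0.15, 0.25, 0.45, 0.55, 0.60, 0.60, 0.58]
    incident_spectrum   = [1.2, 1.5, 1.5, 1.4, 1.1, 0.9, 0.7, 0.5]  # relative
    cell_response       = [0.3, 0.6, 0.8, 0.9, 0.9, 0.8, 0.4, 0.1]  # relative

    # Broadband albedo: unweighted average of the reflectance curve.
    broadband = sum(surface_reflectance) / len(surface_reflectance)

    # Weight reflectance by what the cell can actually convert:
    # incident spectrum times spectral response, point by point.
    weights = [e * r for e, r in zip(incident_spectrum, cell_response)]
    weighted = (sum(a * w for a, w in zip(surface_reflectance, weights))
                / sum(weights))

    print(f"broadband albedo:           {broadband:.3f}")
    print(f"spectrally weighted albedo: {weighted:.3f}")

A cell whose response falls off where the surface is most reflective sees a lower effective albedo than the broadband number suggests, which is the kind of bias the studies above quantify.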
Trees
Forests generally have a low albedo because the majority of the ultraviolet and visible spectrum is absorbed through photosynthesis. For this reason, the greater heat absorption by trees could offset some of the carbon benefits of afforestation (or offset the negative climate impacts of deforestation). In other words: The climate change mitigation effect of carbon sequestration by forests is partially counterbalanced in that reforestation can decrease the reflection of sunlight (albedo).
In the case of evergreen forests with seasonal snow cover, albedo reduction may be great enough for deforestation to cause a net cooling effect. Trees also impact climate in extremely complicated ways through evapotranspiration. The water vapor causes cooling on the land surface, causes heating where it condenses, acts as a strong greenhouse gas, and can increase albedo when it condenses into clouds. Scientists generally treat evapotranspiration as a net cooling impact, and the net climate impact of albedo and evapotranspiration changes from deforestation depends greatly on local climate.
Mid-to-high-latitude forests have a much lower albedo during snow seasons than flat ground, thus contributing to warming. Modeling that compares the effects of albedo differences between forests and grasslands suggests that expanding the land area of forests in temperate zones offers only a temporary mitigation benefit.
In seasonally snow-covered zones, winter albedos of treeless areas are 10% to 50% higher than nearby forested areas because snow does not cover the trees as readily. Deciduous trees have an albedo value of about 0.15 to 0.18 whereas coniferous trees have a value of about 0.09 to 0.15. Variation in summer albedo across both forest types is associated with maximum rates of photosynthesis because plants with high growth capacity display a greater fraction of their foliage for direct interception of incoming radiation in the upper canopy. The result is that wavelengths of light not used in photosynthesis are more likely to be reflected back to space rather than being absorbed by other surfaces lower in the canopy.
Studies by the Hadley Centre have investigated the relative (generally warming) effect of albedo change and (cooling) effect of carbon sequestration on planting forests. They found that new forests in tropical and midlatitude areas tended to cool; new forests in high latitudes (e.g., Siberia) were neutral or perhaps warming.
Water
Water reflects light very differently from typical terrestrial materials. The reflectivity of a water surface is calculated using the Fresnel equations.
At the scale of the wavelength of light even wavy water is always smooth so the light is reflected in a locally specular manner (not diffusely). The glint of light off water is a commonplace effect of this. At small angles of incident light, waviness results in reduced reflectivity because of the steepness of the reflectivity-vs.-incident-angle curve and a locally increased average incident angle.
Although the reflectivity of water is very low at low and medium angles of incident light, it becomes very high at high angles of incident light such as those that occur on the illuminated side of Earth near the terminator (early morning, late afternoon, and near the poles). However, as mentioned above, waviness causes an appreciable reduction. Because light specularly reflected from water does not usually reach the viewer, water is usually considered to have a very low albedo in spite of its high reflectivity at high angles of incident light.
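The angular behaviour described above follows directly from the Fresnel equations for a smooth air–water interface. A minimal sketch, assuming unpolarized light and a refractive index of about 1.33 for water:

    import math

    def fresnel_reflectance(theta_deg, n1=1.0, n2=1.33):
        """Unpolarized Fresnel reflectance at a smooth n1 -> n2 interface,
        with theta_deg the angle of incidence measured from the normal."""
        ti = math.radians(theta_deg)
        tt = math.asin(n1 * math.sin(ti) / n2)  # Snell's law
        rs = ((n1 * math.cos(ti) - n2 * math.cos(tt)) /
              (n1 * math.cos(ti) + n2 * math.cos(tt))) ** 2
        rp = ((n1 * math.cos(tt) - n2 * math.cos(ti)) /
              (n1 * math.cos(tt) + n2 * math.cos(ti))) ** 2
        return 0.5 * (rs + rp)  # average the two polarizations

    for angle in (0, 30, 60, 75, 85, 89):
        print(f"{angle:2d} deg -> R = {fresnel_reflectance(angle):.3f}")

The reflectance stays small (a few per cent) out to around 60 degrees and then climbs steeply toward 1 at grazing incidence, which is why calm water looks dark from above but mirror-like near the terminator.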
White caps on waves look white (and have high albedo) because the water is foamed up, so there are many superimposed bubble surfaces which reflect, adding up their reflectivities. Fresh 'black' ice exhibits Fresnel reflection. Snow on top of such sea ice increases the albedo to 0.9.
Clouds
Cloud albedo has substantial influence over atmospheric temperatures. Different types of clouds exhibit different reflectivity, theoretically ranging in albedo from a minimum of near 0 to a maximum approaching 0.8. "On any given day, about half of Earth is covered by clouds, which reflect more sunlight than land and water. Clouds keep Earth cool by reflecting sunlight, but they can also serve as blankets to trap warmth."
Albedo and climate in some areas are affected by artificial clouds, such as those created by the contrails of heavy commercial airliner traffic. A study following the burning of the Kuwaiti oil fields during Iraqi occupation showed that temperatures under the burning oil fires were markedly colder than temperatures several miles away under clear skies.
Aerosol effects
Aerosols (very fine particles/droplets in the atmosphere) have both direct and indirect effects on Earth's radiative balance. The direct (albedo) effect is generally to cool the planet; the indirect effect (the particles act as cloud condensation nuclei and thereby change cloud properties) is less certain. As per Spracklen et al. the effects are:
Aerosol direct effect. Aerosols directly scatter and absorb radiation. The scattering of radiation causes atmospheric cooling, whereas absorption can cause atmospheric warming.
Aerosol indirect effect. Aerosols modify the properties of clouds through a subset of the aerosol population called cloud condensation nuclei. Increased nuclei concentrations lead to increased cloud droplet number concentrations, which in turn leads to increased cloud albedo, increased light scattering and radiative cooling (first indirect effect), but also leads to reduced precipitation efficiency and increased lifetime of the cloud (second indirect effect).
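The first indirect effect can be given a rough quantitative form. A commonly used first-order result (Twomey's susceptibility, derived under the assumption that cloud liquid water content stays fixed) is that cloud albedo A responds to droplet number concentration N as dA/d ln N ≈ A(1 − A)/3. A minimal sketch under that assumption:

    import math

    def twomey_albedo_change(cloud_albedo, n_old, n_new):
        """First-order Twomey susceptibility with fixed liquid water:
        dA/dln(N) ~= A(1 - A)/3, valid only for modest changes in the
        droplet number concentration N."""
        susceptibility = cloud_albedo * (1.0 - cloud_albedo) / 3.0
        return susceptibility * math.log(n_new / n_old)

    # Example: a cloud of albedo 0.5 whose droplet concentration doubles,
    # say from 50 to 100 droplets per cm^3 (illustrative numbers).
    print(f"{twomey_albedo_change(0.5, 50.0, 100.0):+.3f}")  # about +0.058

The susceptibility A(1 − A)/3 peaks at A = 0.5, so clouds of intermediate albedo are the most responsive to added condensation nuclei.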
In extremely polluted cities like Delhi, aerosol pollutants influence local weather and induce an urban cool island effect during the day.
Black carbon
Another albedo-related effect on the climate is from black carbon particles. The size of this effect is difficult to quantify: the Intergovernmental Panel on Climate Change estimates that the global mean radiative forcing for black carbon aerosols from fossil fuels is +0.2 W m⁻², with a range of +0.1 to +0.4 W m⁻². Black carbon is a bigger cause of the melting of the polar ice cap in the Arctic than carbon dioxide due to its effect on the albedo.
Astronomical albedo
In astronomy, the term albedo can be defined in several different ways, depending upon the application and the wavelength of electromagnetic radiation involved.
Optical or visual albedo
The albedos of planets, satellites and minor planets such as asteroids can be used to infer much about their properties. The study of albedos, their dependence on wavelength, lighting angle ("phase angle"), and variation in time composes a major part of the astronomical field of photometry. For small and far objects that cannot be resolved by telescopes, much of what we know comes from the study of their albedos. For example, the absolute albedo can indicate the surface ice content of outer Solar System objects, the variation of albedo with phase angle gives information about regolith properties, whereas unusually high radar albedo is indicative of high metal content in asteroids.
Enceladus, a moon of Saturn, has one of the highest known optical albedos of any body in the Solar System, with an albedo of 0.99. Another notable high-albedo body is Eris, with an albedo of 0.96. Many small objects in the outer Solar System and asteroid belt have low albedos down to about 0.05. A typical comet nucleus has an albedo of 0.04. Such a dark surface is thought to be indicative of a primitive and heavily space weathered surface containing some organic compounds.
The overall albedo of the Moon is measured to be around 0.14, but it is strongly directional and non-Lambertian, displaying also a strong opposition effect. Although such reflectance properties are different from those of any terrestrial terrains, they are typical of the regolith surfaces of airless Solar System bodies.
Two common optical albedos that are used in astronomy are the (V-band) geometric albedo (measuring brightness when illumination comes from directly behind the observer) and the Bond albedo (measuring total proportion of electromagnetic energy reflected). Their values can differ significantly, which is a common source of confusion.
In detailed studies, the directional reflectance properties of astronomical bodies are often expressed in terms of the five Hapke parameters which semi-empirically describe the variation of albedo with phase angle, including a characterization of the opposition effect of regolith surfaces. One of these five parameters is yet another type of albedo called the single-scattering albedo. It is used to define scattering of electromagnetic waves on small particles. It depends on properties of the material (refractive index), the size of the particle, and the wavelength of the incoming radiation.
An important relationship between an object's astronomical (geometric) albedo, absolute magnitude and diameter is given by:

A = (1329 × 10^(−H/5) / D)²

where A is the astronomical albedo, D is the diameter in kilometers, and H is the absolute magnitude.
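Rearranged for diameter, this relation is the standard way of estimating a minor planet's size from its absolute magnitude and an assumed albedo. A minimal sketch:

    def diameter_km(abs_magnitude, albedo):
        """Diameter from the relation above, rearranged:
        D = 1329 / sqrt(albedo) * 10**(-H/5) kilometers."""
        return 1329.0 / albedo ** 0.5 * 10.0 ** (-abs_magnitude / 5.0)

    # The same absolute magnitude spans a wide range of sizes depending
    # on the assumed albedo (the albedos here are illustrative).
    for a in (0.05, 0.15, 0.40):
        print(f"H = 15, albedo {a:.2f} -> D = {diameter_km(15.0, a):.2f} km")

This is why albedo assumptions dominate the uncertainty in sizes quoted for unresolved asteroids: a dark object must be much larger than a bright one to reflect the same amount of light.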
Radar albedo
In planetary radar astronomy, a microwave (or radar) pulse is transmitted toward a planetary target (e.g. Moon, asteroid, etc.) and the echo from the target is measured. In most instances, the transmitted pulse is circularly polarized and the received pulse is measured in the same sense of polarization as the transmitted pulse (SC) and the opposite sense (OC). The echo power is measured in terms of radar cross-section, σ_OC, σ_SC, or σ_T (total power, SC + OC), and is equal to the cross-sectional area of a metallic sphere (perfect reflector) at the same distance as the target that would return the same echo power.
Those components of the received echo that return from first-surface reflections (as from a smooth or mirror-like surface) are dominated by the OC component as there is a reversal in polarization upon reflection. If the surface is rough at the wavelength scale or there is significant penetration into the regolith, there will be a significant SC component in the echo caused by multiple scattering.
For most objects in the solar system, the OC echo dominates and the most commonly reported radar albedo parameter is the (normalized) OC radar albedo (often shortened to radar albedo):

σ̂_OC = σ_OC / (πR²)

where the denominator is the effective cross-sectional area of the target object with mean radius R. A smooth metallic sphere would have σ̂_OC = 1.
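Numerically, the normalization is just the measured OC radar cross-section divided by the target's geometric cross-section. A small sketch with made-up numbers:

    import math

    def oc_radar_albedo(sigma_oc_km2, mean_radius_km):
        """Normalized OC radar albedo: sigma_OC / (pi * R^2)."""
        return sigma_oc_km2 / (math.pi * mean_radius_km ** 2)

    # Hypothetical target: mean radius 5 km, measured OC radar
    # cross-section of 8 km^2.
    print(f"{oc_radar_albedo(8.0, 5.0):.3f}")  # ~0.102

Values well below 1 are expected for natural, non-metallic surfaces, since a perfectly reflecting smooth metal sphere defines the benchmark of 1.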
Radar albedos of Solar System objects
The radar albedo values reported for the Moon, Mercury, Mars, Venus, and Comet P/2005 JQ5 are derived from the total (OC+SC) radar albedo reported in the respective references.
Relationship to surface bulk density
In the event that most of the echo is from first-surface reflections (low σ̂_OC), the OC radar albedo is a first-order approximation of the Fresnel reflection coefficient (also called the reflectivity) and can be used to estimate the bulk density of a planetary surface to a depth of a meter or so (a few radar wavelengths; radar wavelengths are typically at the decimeter scale) using empirical relationships between reflectivity and bulk density.
History
The term albedo was introduced into optics by Johann Heinrich Lambert in his 1760 work Photometria.
See also
Cool roof
Daisyworld
Emissivity
Exitance
Global dimming
Ice–albedo feedback
Irradiance
Kirchhoff's law of thermal radiation
Opposition surge
Polar see-saw
Radar astronomy
Solar radiation management
External links
Albedo Project
Albedo – Encyclopedia of Earth
NASA MODIS BRDF/albedo product site
Ocean surface albedo look-up-table
Surface albedo derived from Meteosat observations
A discussion of Lunar albedos
reflectivity of metals (chart)
https://en.wikipedia.org/wiki/Aristotle
Aristotle (Greek: Ἀριστοτέλης, Aristotélēs; 384–322 BC) was an Ancient Greek philosopher and polymath. His writings cover a broad range of subjects spanning the natural sciences, philosophy, linguistics, economics, politics, psychology and the arts. As the founder of the Peripatetic school of philosophy in the Lyceum in Athens, he began the wider Aristotelian tradition that followed, which set the groundwork for the development of modern science.
Little is known about Aristotle's life. He was born in the city of Stagira in northern Greece during the Classical period. His father, Nicomachus, died when Aristotle was a child, and he was brought up by a guardian. At 17 or 18 he joined Plato's Academy in Athens and remained there until the age of 37 (c. 347 BC). Shortly after Plato died, Aristotle left Athens and, at the request of Philip II of Macedon, tutored his son Alexander the Great beginning in 343 BC. He established a library in the Lyceum which helped him to produce many of his hundreds of books on papyrus scrolls.
Though Aristotle wrote many elegant treatises and dialogues for publication, only around a third of his original output has survived, none of it intended for publication. Aristotle provided a complex synthesis of the various philosophies existing prior to him. His teachings and methods of inquiry have had a significant global impact, and as a result, his philosophy has exerted an influence across the world and it continues to be a subject of contemporary philosophical discussion.
Aristotle's views profoundly shaped medieval scholarship. The influence of his physical science extended from late antiquity and the Early Middle Ages into the Renaissance, and was not replaced systematically until the Enlightenment and theories such as classical mechanics were developed. Some of Aristotle's zoological observations found in his biology, such as on the hectocotyl (reproductive) arm of the octopus, were disbelieved until the 19th century. He influenced Judeo-Islamic philosophies during the Middle Ages, as well as Christian theology, especially the Neoplatonism of the Early Church and the scholastic tradition of the Catholic Church. Aristotle was revered among medieval Muslim scholars as "The First Teacher", and among medieval Christians like Thomas Aquinas as simply "The Philosopher", while the poet Dante called him "the master of those who know". His works contain the earliest known formal study of logic, and were studied by medieval scholars such as Peter Abelard and Jean Buridan. Aristotle's influence on logic continued well into the 19th century. In addition, his ethics, though always influential, gained renewed interest with the modern advent of virtue ethics.
Life
In general, the details of Aristotle's life are not well-established. The biographies written in ancient times are often speculative and historians only agree on a few salient points.
Aristotle was born in 384 BC in Stagira, Chalcidice, about 55 km (34 miles) east of modern-day Thessaloniki. His father, Nicomachus, was the personal physician to King Amyntas of Macedon. While he was young, Aristotle learned about biology and medical information, which was taught by his father. Both of Aristotle's parents died when he was about thirteen, and Proxenus of Atarneus became his guardian. Although little information about Aristotle's childhood has survived, he probably spent some time within the Macedonian palace, making his first connections with the Macedonian monarchy.
At the age of seventeen or eighteen, Aristotle moved to Athens to continue his education at Plato's Academy. He probably experienced the Eleusinian Mysteries, as he wrote of the sights one viewed there: "to experience is to learn" [παθείν μαθεĩν]. Aristotle remained in Athens for nearly twenty years before leaving in 348/47 BC. The traditional story about his departure records that he was disappointed with the Academy's direction after control passed to Plato's nephew Speusippus, although it is possible that he feared the anti-Macedonian sentiments in Athens at that time and left before Plato died. Aristotle then accompanied Xenocrates to the court of his friend Hermias of Atarneus in Asia Minor. After the death of Hermias, Aristotle travelled with his pupil Theophrastus to the island of Lesbos, where together they researched the botany and zoology of the island and its sheltered lagoon. While in Lesbos, Aristotle married Pythias, either Hermias's adoptive daughter or niece. They had a daughter, whom they also named Pythias. In 343 BC, Aristotle was invited by Philip II of Macedon to become the tutor to his son Alexander.
Aristotle was appointed as the head of the royal Academy of Macedon. During Aristotle's time in the Macedonian court, he gave lessons not only to Alexander but also to two other future kings: Ptolemy and Cassander. Aristotle encouraged Alexander toward eastern conquest, and Aristotle's own attitude towards Persia was unabashedly ethnocentric. In one famous example, he counsels Alexander to be "a leader to the Greeks and a despot to the barbarians, to look after the former as after friends and relatives, and to deal with the latter as with beasts or plants". By 335 BC, Aristotle had returned to Athens, establishing his own school there known as the Lyceum. Aristotle conducted courses at the school for the next twelve years. While in Athens, his wife Pythias died and Aristotle became involved with Herpyllis of Stagira. They had a son whom Aristotle named after his father, Nicomachus. If the Suda, an uncritical compilation from the Middle Ages, is accurate, he may also have had an erômenos, Palaephatus of Abydus.
This period in Athens, between 335 and 323 BC, is when Aristotle is believed to have composed many of his works. He wrote many dialogues, of which only fragments have survived. Those works that have survived are in treatise form and were not, for the most part, intended for widespread publication; they are generally thought to be lecture aids for his students. His most important treatises include Physics, Metaphysics, Nicomachean Ethics, Politics, On the Soul and Poetics. Aristotle studied and made significant contributions to "logic, metaphysics, mathematics, physics, biology, botany, ethics, politics, agriculture, medicine, dance, and theatre."
Near the end of his life, Alexander and Aristotle became estranged over Alexander's relationship with Persia and Persians. A widespread tradition in antiquity suspected Aristotle of playing a role in Alexander's death, but the only evidence of this is an unlikely claim made some six years after the death. Following Alexander's death, anti-Macedonian sentiment in Athens was rekindled. In 322 BC, Demophilus and Eurymedon the Hierophant reportedly denounced Aristotle for impiety, prompting him to flee to his mother's family estate in Chalcis, on Euboea, at which occasion he was said to have stated: "I will not allow the Athenians to sin twice against philosophy" – a reference to Athens's trial and execution of Socrates. He died in Chalcis, Euboea of natural causes later that same year, having named his student Antipater as his chief executor and leaving a will in which he asked to be buried next to his wife.
Theoretical philosophy
Logic
With the Prior Analytics, Aristotle is credited with the earliest study of formal logic, and his conception of it was the dominant form of Western logic until 19th-century advances in mathematical logic. Kant stated in the Critique of Pure Reason that with Aristotle logic reached its completion.
Organon
What is today called Aristotelian logic with its types of syllogism (methods of logical argument), Aristotle himself would have labelled "analytics". The term "logic" he reserved to mean dialectics. Most of Aristotle's work is probably not in its original form, because it was most likely edited by students and later lecturers. The logical works of Aristotle were compiled into a set of six books called the Organon around 40 BC by Andronicus of Rhodes or others among his followers. The books are:
Categories
On Interpretation
Prior Analytics
Posterior Analytics
Topics
On Sophistical Refutations
The order of the books (or the teachings from which they are composed) is not certain, but this list was derived from analysis of Aristotle's writings. It goes from the basics, the analysis of simple terms in the Categories, the analysis of propositions and their elementary relations in On Interpretation, to the study of more complex forms, namely, syllogisms (in the Analytics) and dialectics (in the Topics and Sophistical Refutations). The first three treatises form the core of the logical theory stricto sensu: the grammar of the language of logic and the correct rules of reasoning. The Rhetoric is not conventionally included, but it states that it relies on the Topics.
Metaphysics
The word "metaphysics" appears to have been coined by the first century AD editor who assembled various small selections of Aristotle's works to the treatise we know by the name Metaphysics. Aristotle called it "first philosophy", and distinguished it from mathematics and natural science (physics) as the contemplative (theoretikē) philosophy which is "theological" and studies the divine. He wrote in his Metaphysics (1026a16):
Substance
Aristotle examines the concepts of substance (ousia) and essence (to ti ên einai, "the what it was to be") in his Metaphysics (Book VII), and he concludes that a particular substance is a combination of both matter and form, a philosophical theory called hylomorphism. In Book VIII, he distinguishes the matter of the substance as the substratum, or the stuff of which it is composed. For example, the matter of a house is the bricks, stones, timbers, etc., or whatever constitutes the potential house, while the form of the substance is the actual house, namely 'covering for bodies and chattels' or any other differentia that let us define something as a house. The formula that gives the components is the account of the matter, and the formula that gives the differentia is the account of the form.
Immanent realism
Like his teacher Plato, Aristotle's philosophy aims at the universal. Aristotle's ontology places the universal (katholou) in particulars (kath' hekaston), things in the world, whereas for Plato the universal is a separately existing form which actual things imitate. For Aristotle, "form" is still what phenomena are based on, but is "instantiated" in a particular substance.
Plato argued that all things have a universal form, which could be either a property or a relation to other things. When one looks at an apple, for example, one sees an apple, and one can also analyse a form of an apple. In this distinction, there is a particular apple and a universal form of an apple. Moreover, one can place an apple next to a book, so that one can speak of both the book and apple as being next to each other. Plato argued that there are some universal forms that are not a part of particular things. For example, it is possible that there is no particular good in existence, but "good" is still a proper universal form. Aristotle disagreed with Plato on this point, arguing that all universals are instantiated at some period of time, and that there are no universals that are unattached to existing things. In addition, Aristotle disagreed with Plato about the location of universals. Where Plato spoke of the forms as existing separately from the things that participate in them, Aristotle maintained that universals exist within each thing on which each universal is predicated. So, according to Aristotle, the form of apple exists within each apple, rather than in the world of the forms.
Potentiality and actuality
Concerning the nature of change (kinesis) and its causes, as he outlines in his Physics and On Generation and Corruption (319b–320a), he distinguishes coming-to-be (genesis, also translated as 'generation') from:
growth and diminution, which is change in quantity;
locomotion, which is change in space; and
alteration, which is change in quality.
Coming-to-be is a change where the substrate of the thing that has undergone the change has itself changed. In that particular change he introduces the concept of potentiality (dynamis) and actuality (entelecheia) in association with the matter and the form. Potentiality is what a thing is capable of doing or being acted upon, if the conditions are right and it is not prevented by something else. For example, the seed of a plant in the soil is potentially (dynamei) a plant, and if it is not prevented by something, it will become a plant. Potential beings can either 'act' (poiein) or 'be acted upon' (paschein), and these potentialities can be either innate or learned. For example, the eyes possess the potentiality of sight (innate – being acted upon), while the capability of playing the flute can be possessed by learning (exercise – acting). Actuality is the fulfilment of the end of the potentiality. Because the end (telos) is the principle of every change, and potentiality exists for the sake of the end, actuality, accordingly, is the end. Referring then to the previous example, it can be said that an actuality is when a plant does one of the activities that plants do.
In summary, the matter used to make a house has potentiality to be a house and both the activity of building and the form of the final house are actualities, which is also a final cause or end. Then Aristotle proceeds and concludes that the actuality is prior to potentiality in formula, in time and in substantiality. With this definition of the particular substance (i.e., matter and form), Aristotle tries to solve the problem of the unity of the beings, for example, "what is it that makes a man one"? Since, according to Plato there are two Ideas: animal and biped, how then is man a unity? However, according to Aristotle, the potential being (matter) and the actual one (form) are one and the same.
Epistemology
Aristotle's immanent realism means his epistemology is based on the study of things that exist or happen in the world, and rises to knowledge of the universal, whereas for Plato epistemology begins with knowledge of universal Forms (or ideas) and descends to knowledge of particular imitations of these. Aristotle uses induction from examples alongside deduction, whereas Plato relies on deduction from a priori principles.
Natural philosophy
Aristotle's "natural philosophy" spans a wide range of natural phenomena including those now covered by physics, biology and other natural sciences. In Aristotle's terminology, "natural philosophy" is a branch of philosophy examining the phenomena of the natural world, and includes fields that would be regarded today as physics, biology and other natural sciences. Aristotle's work encompassed virtually all facets of intellectual inquiry. Aristotle makes philosophy in the broad sense coextensive with reasoning, which he also would describe as "science". However, his use of the term science carries a different meaning than that covered by the term "scientific method". For Aristotle, "all science (dianoia) is either practical, poetical or theoretical" (Metaphysics 1025b25). His practical science includes ethics and politics; his poetical science means the study of fine arts including poetry; his theoretical science covers physics, mathematics and metaphysics.
Physics
Five elements
In his On Generation and Corruption, Aristotle related each of the four elements proposed earlier by Empedocles, earth, water, air, and fire, to two of the four sensible qualities, hot, cold, wet, and dry. In the Empedoclean scheme, all matter was made of the four elements, in differing proportions. Aristotle's scheme added the heavenly aether, the divine substance of the heavenly spheres, stars and planets.
Motion
Aristotle describes two kinds of motion: "violent" or "unnatural motion", such as that of a thrown stone, in the Physics (254b10), and "natural motion", such as of a falling object, in On the Heavens (300a20). In violent motion, as soon as the agent stops causing it, the motion stops also: in other words, the natural state of an object is to be at rest, since Aristotle does not address friction. With this understanding, it can be observed that, as Aristotle stated, heavy objects (on the ground, say) require more force to make them move; and objects pushed with greater force move faster. This would imply the equation

F = mv,

incorrect in modern physics.
Natural motion depends on the element concerned: the aether naturally moves in a circle around the heavens, while the four Empedoclean elements move vertically up (like fire, as is observed) or down (like earth) towards their natural resting places.
In the Physics (215a25), Aristotle effectively states a quantitative law: the speed, v, of a falling body is proportional (say, with constant c) to its weight, W, and inversely proportional to the density, ρ, of the fluid in which it is falling:

v = cW/ρ
Aristotle implies that in a vacuum the speed of fall would become infinite, and concludes from this apparent absurdity that a vacuum is not possible. Opinions have varied on whether Aristotle intended to state quantitative laws. Henri Carteron held the "extreme view" that Aristotle's concept of force was basically qualitative, but other authors reject this.
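The divergence Aristotle relied on is easy to tabulate: under v = cW/ρ, the predicted speed grows without bound as the density of the medium goes to zero. The constant and the values below are arbitrary illustrative assumptions.

    # Aristotle's implied law of fall, v = c * W / rho, for ever
    # thinner media (all numbers arbitrary, for illustration only).
    c = 1.0    # proportionality constant
    W = 10.0   # weight of the falling body

    for rho in (1000.0, 1.2, 0.01, 1e-6):  # water, air, thin gas, near-vacuum
        v = c * W / rho
        print(f"rho = {rho:>8} -> v = {v:,.1f}")

As ρ approaches zero the speed diverges, which is the apparent absurdity from which Aristotle concluded that a vacuum cannot exist.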
Archimedes corrected Aristotle's theory that bodies move towards their natural resting places; metal boats can float if they displace enough water; floating depends in Archimedes' scheme on the mass and volume of the object, not, as Aristotle thought, its elementary composition.
Aristotle's writings on motion remained influential until the Early Modern period. John Philoponus (in Late antiquity) and Galileo (in Early modern period) are said to have shown by experiment that Aristotle's claim that a heavier object falls faster than a lighter object is incorrect. A contrary opinion is given by Carlo Rovelli, who argues that Aristotle's physics of motion is correct within its domain of validity, that of objects in the Earth's gravitational field immersed in a fluid such as air. In this system, heavy bodies in steady fall indeed travel faster than light ones (whether friction is ignored, or not), and they do fall more slowly in a denser medium.
Newton's "forced" motion corresponds to Aristotle's "violent" motion with its external agent, but Aristotle's assumption that the agent's effect stops immediately it stops acting (e.g., the ball leaves the thrower's hand) has awkward consequences: he has to suppose that surrounding fluid helps to push the ball along to make it continue to rise even though the hand is no longer acting on it, resulting in the Medieval theory of impetus.
Four causes
Aristotle suggested that the reason for anything coming about can be attributed to four different types of simultaneously active factors. His term aitia is traditionally translated as "cause", but it does not always refer to temporal sequence; it might be better translated as "explanation", but the traditional rendering will be employed here.
The material cause describes the material out of which something is composed. Thus the material cause of a table is wood. It is not about action; it does not mean that one domino knocks over another domino.
The formal cause is its form, i.e., the arrangement of that matter. It tells one what a thing is, that a thing is determined by the definition, form, pattern, essence, whole, synthesis or archetype. It embraces the account of causes in terms of fundamental principles or general laws, as the whole (i.e., macrostructure) is the cause of its parts, a relationship known as the whole-part causation. Plainly put, the formal cause is the idea in the mind of the sculptor that brings the sculpture into being. A simple example of the formal cause is the mental image or idea that allows an artist, architect, or engineer to create a drawing.
The efficient cause is "the primary source", or that from which the change under consideration proceeds. It identifies 'what makes of what is made and what causes change of what is changed' and so suggests all sorts of agents, non-living or living, acting as the sources of change or movement or rest. Representing the current understanding of causality as the relation of cause and effect, this covers the modern definitions of "cause" as either the agent or agency or particular events or states of affairs. In the case of two dominoes, when the first is knocked over it causes the second also to fall over. In the case of animals, this agency is a combination of how it develops from the egg, and how its body functions.
The final cause (telos) is its purpose, the reason why a thing exists or is done, including both purposeful and instrumental actions and activities. The final cause is the purpose or function that something is supposed to serve. This covers modern ideas of motivating causes, such as volition. In the case of living things, it implies adaptation to a particular way of life.
Optics
Aristotle describes experiments in optics using a camera obscura in Problems, book 15. The apparatus consisted of a dark chamber with a small aperture that let light in. With it, he saw that whatever shape he made the hole, the sun's image always remained circular. He also noted that increasing the distance between the aperture and the image surface magnified the image.
Chance and spontaneity
According to Aristotle, spontaneity and chance are causes of some things, distinguishable from other types of cause such as simple necessity. Chance as an incidental cause lies in the realm of accidental things, "from what is spontaneous". There is also a more specific kind of chance, which Aristotle names "luck", that only applies to people's moral choices.
Astronomy
In astronomy, Aristotle refuted Democritus's claim that the Milky Way was made up of "those stars which are shaded by the earth from the sun's rays," pointing out partly correctly that if "the size of the sun is greater than that of the earth and the distance of the stars from the earth many times greater than that of the sun, then... the sun shines on all the stars and the earth screens none of them." He also wrote descriptions of comets, including the Great Comet of 371 BC.
Geology and natural sciences
Aristotle was one of the first people to record any geological observations. He stated that geological change was too slow to be observed in one person's lifetime.
The geologist Charles Lyell noted that Aristotle described such change, including "lakes that had dried up" and "deserts that had become watered by rivers", giving as examples the growth of the Nile delta since the time of Homer, and "the upheaving of one of the Aeolian islands, previous to a volcanic eruption."
Aristotle also made many observations about the hydrologic cycle and meteorology (including in his major work Meteorologica). For example, he made some of the earliest observations about desalination: he observed early – and correctly – that when seawater is heated, freshwater evaporates and that the oceans are then replenished by the cycle of rainfall and river runoff ("I have proved by experiment that salt water evaporated forms fresh and the vapor does not when it condenses condense into sea water again.")
Biology
Empirical research
Aristotle was the first person to study biology systematically, and biology forms a large part of his writings. He spent two years observing and describing the zoology of Lesbos and the surrounding seas, including in particular the Pyrrha lagoon in the centre of Lesbos. His data in History of Animals, Generation of Animals, Movement of Animals, and Parts of Animals are assembled from his own observations, statements given by people with specialized knowledge such as beekeepers and fishermen, and less accurate accounts provided by travellers from overseas. His apparent emphasis on animals rather than plants is a historical accident: his works on botany have been lost, but two books on plants by his pupil Theophrastus have survived.
Aristotle reports on the sea-life visible from observation on Lesbos and the catches of fishermen. He describes the catfish, electric ray, and frogfish in detail, as well as cephalopods such as the octopus and paper nautilus. His description of the hectocotyl arm of cephalopods, used in sexual reproduction, was widely disbelieved until the 19th century. He gives accurate descriptions of the four-chambered fore-stomachs of ruminants, and of the ovoviviparous embryological development of the hound shark.
He notes that an animal's structure is well matched to function so birds like the heron (which live in marshes with soft mud and live by catching fish) have a long neck, long legs, and a sharp spear-like beak, whereas ducks that swim have short legs and webbed feet. Darwin, too, noted these sorts of differences between similar kinds of animal, but unlike Aristotle used the data to come to the theory of evolution. Aristotle's writings can seem to modern readers close to implying evolution, but while Aristotle was aware that new mutations or hybridizations could occur, he saw these as rare accidents. For Aristotle, accidents, like heat waves in winter, must be considered distinct from natural causes. He was thus critical of Empedocles's materialist theory of a "survival of the fittest" origin of living things and their organs, and ridiculed the idea that accidents could lead to orderly results. To put his views into modern terms, he nowhere says that different species can have a common ancestor, or that one kind can change into another, or that kinds can become extinct.
Scientific style
Aristotle did not do experiments in the modern sense. He used the ancient Greek term pepeiramenoi to mean observations, or at most investigative procedures like dissection. In Generation of Animals, he finds a fertilized hen's egg of a suitable stage and opens it to see the embryo's heart beating inside.
Instead, he practiced a different style of science: systematically gathering data, discovering patterns common to whole groups of animals, and inferring possible causal explanations from these. This style is common in modern biology when large amounts of data become available in a new field, such as genomics. It does not result in the same certainty as experimental science, but it sets out testable hypotheses and constructs a narrative explanation of what is observed. In this sense, Aristotle's biology is scientific.
From the data he collected and documented, Aristotle inferred quite a number of rules relating the life-history features of the live-bearing tetrapods (terrestrial placental mammals) that he studied. Among these correct predictions are the following. Brood size decreases with (adult) body mass, so that an elephant has fewer young (usually just one) per brood than a mouse. Lifespan increases with gestation period, and also with body mass, so that elephants live longer than mice, have a longer period of gestation, and are heavier. As a final example, fecundity decreases with lifespan, so long-lived kinds like elephants have fewer young in total than short-lived kinds like mice.
Classification of living things
Aristotle distinguished about 500 species of animals, arranging these in the History of Animals in a graded scale of perfection, a nonreligious version of the scala naturae, with man at the top. His system had eleven grades of animal, from highest potential to lowest, expressed in their form at birth: the highest gave live birth to hot and wet creatures, the lowest laid cold, dry mineral-like eggs. Animals came above plants, and these in turn were above minerals. He grouped what the modern zoologist would call vertebrates as the hotter "animals with blood", and below them the colder invertebrates as "animals without blood". Those with blood were divided into the live-bearing (mammals), and the egg-laying (birds, reptiles, fish). Those without blood were insects, crustacea (non-shelled – cephalopods, and shelled) and the hard-shelled molluscs (bivalves and gastropods). He recognised that animals did not exactly fit into a linear scale, and noted various exceptions, such as that sharks had a placenta like the tetrapods. To a modern biologist, the explanation, not available to Aristotle, is convergent evolution. Philosophers of science have generally concluded that Aristotle was not interested in taxonomy, but zoologists who studied this question in the early 21st century think otherwise. He believed that purposive final causes guided all natural processes; this teleological view justified his observed data as an expression of formal design.
Psychology
Soul
Aristotle's psychology, given in his treatise On the Soul (peri psychēs), posits three kinds of soul ("psyches"): the vegetative soul, the sensitive soul, and the rational soul. Humans have a rational soul. The human soul incorporates the powers of the other kinds: Like the vegetative soul it can grow and nourish itself; like the sensitive soul it can experience sensations and move locally. The unique part of the human, rational soul is its ability to receive forms of other things and to compare them using the nous (intellect) and logos (reason).
For Aristotle, the soul is the form of a living being. Because all beings are composites of form and matter, the form of living beings is that which endows them with what is specific to living beings, e.g. the ability to initiate movement (or in the case of plants, growth and chemical transformations, which Aristotle considers types of movement). In contrast to earlier philosophers, but in accordance with the Egyptians, he placed the rational soul in the heart, rather than the brain. Notable is Aristotle's division of sensation and thought, which generally differed from the concepts of previous philosophers, with the exception of Alcmaeon.
In On the Soul, Aristotle famously criticizes Plato's theory of the soul and develops his own in response to Plato's. The first criticism is against Plato's view of the soul in the Timaeus that the soul takes up space and is able to come into physical contact with bodies. 20th-century scholarship overwhelmingly opposed Aristotle's interpretation of Plato and maintained that he had misunderstood Plato. Today's scholars have tended to re-assess Aristotle's interpretation and have warmed up to it. Aristotle's other criticism is that Plato's view of reincarnation entails that it is possible for a soul and its body to be mis-matched; in principle, Aristotle alleges, any soul can go with any body, according to Plato's theory. Aristotle's claim that the soul is the form of a living being is meant to eliminate that possibility and thus rule out reincarnation.
Memory
According to Aristotle in On the Soul, memory is the ability to hold a perceived experience in the mind and to distinguish between the internal "appearance" and an occurrence in the past. In other words, a memory is a mental picture (phantasm) that can be recovered. Aristotle believed an impression is left on a semi-fluid bodily organ that undergoes several changes in order to make a memory. A memory occurs when stimuli such as sights or sounds are so complex that the nervous system cannot receive all the impressions at once. These changes are the same as those involved in the operations of sensation and thinking.
Aristotle uses the term 'memory' for the actual retaining of an experience in the impression that can develop from sensation, and for the intellectual anxiety that comes with the impression because it is formed at a particular time and processing specific contents. Memory is of the past, prediction is of the future, and sensation is of the present. Retrieval of impressions cannot be performed suddenly. A transitional channel is needed and located in past experiences, both for previous experience and present experience.
Because Aristotle believes people receive all kinds of sense perceptions and perceive them as impressions, people are continually weaving together new impressions of experiences. To search for these impressions, people search the memory itself. Within the memory, if one experience is offered instead of a specific memory, that person will reject this experience until they find what they are looking for. Recollection occurs when one retrieved experience naturally follows another. If the chain of "images" is needed, one memory will stimulate the next. When people recall experiences, they stimulate certain previous experiences until they reach the one that is needed. Recollection is thus the self-directed activity of retrieving the information stored in a memory impression. Only humans can remember impressions of intellectual activity, such as numbers and words. Animals that have perception of time can retrieve memories of their past observations. Remembering involves only perception of the things remembered and of the time passed.
Aristotle believed the chain of thought, which ends in recollection of certain impressions, was connected systematically in relationships such as similarity, contrast, and contiguity, described in his laws of association. Aristotle believed that past experiences are hidden within the mind. A force operates to awaken the hidden material to bring up the actual experience. According to Aristotle, association is the power innate in a mental state, which operates upon the unexpressed remains of former experiences, allowing them to rise and be recalled.
Dreams
Aristotle describes sleep in On Sleep and Wakefulness. Sleep takes place as a result of overuse of the senses or of digestion, so it is vital to the body. While a person is asleep, the critical activities, which include thinking, sensing, recalling and remembering, do not function as they do during wakefulness. Since a person cannot sense during sleep they cannot have desire, which is the result of sensation. However, the senses are able to work during sleep, albeit differently, unless they are weary.
Dreams do not involve actually sensing a stimulus. In dreams, sensation is still involved, but in an altered manner. Aristotle explains that when a person stares at a moving stimulus such as the waves in a body of water, and then looks away, the next thing they look at appears to have a wavelike motion. When a person perceives a stimulus and the stimulus is no longer the focus of their attention, it leaves an impression. When the body is awake and the senses are functioning properly, a person constantly encounters new stimuli to sense and so the impressions of previously perceived stimuli are ignored. However, during sleep the impressions made throughout the day are noticed as there are no new distracting sensory experiences. So, dreams result from these lasting impressions. Since impressions are all that are left and not the exact stimuli, dreams do not resemble the actual waking experience. During sleep, a person is in an altered state of mind. Aristotle compares a sleeping person to a person who is overtaken by strong feelings toward a stimulus. For example, a person who has a strong infatuation with someone may begin to think they see that person everywhere because they are so overtaken by their feelings. Since a person sleeping is in a suggestible state and unable to make judgements, they become easily deceived by what appears in their dreams, like the infatuated person. This leads the person to believe the dream is real, even when the dreams are absurd in nature. In De Anima iii 3, Aristotle ascribes the ability to create, to store, and to recall images in the absence of perception to the faculty of imagination, phantasia.
One component of Aristotle's theory of dreams disagrees with previously held beliefs. He claimed that dreams are not foretelling and not sent by a divine being. Aristotle reasoned naturalistically that instances in which dreams do resemble future events are simply coincidences. Aristotle claimed that a dream is first established by the fact that the person is asleep when they experience it. If a person had an image appear for a moment after waking up or if they see something in the dark it is not considered a dream because they were awake when it occurred. Secondly, any sensory experience that is perceived while a person is asleep does not qualify as part of a dream. For example, if, while a person is sleeping, a door shuts and in their dream they hear a door is shut, this sensory experience is not part of the dream. Lastly, the images of dreams must be a result of lasting impressions of waking sensory experiences.
Practical philosophy
Aristotle's practical philosophy covers areas such as ethics, politics, economics, and rhetoric.
Ethics
Aristotle considered ethics to be a practical rather than theoretical study, i.e., one aimed at becoming good and doing good rather than knowing for its own sake. He wrote several treatises on ethics, most notably including the Nicomachean Ethics.
Aristotle taught that virtue has to do with the proper function (ergon) of a thing. An eye is only a good eye in so much as it can see, because the proper function of an eye is sight. Aristotle reasoned that humans must have a function specific to humans, and that this function must be an activity of the psuchē (soul) in accordance with reason (logos). Aristotle identified such an optimum activity (the virtuous mean, between the accompanying vices of excess or deficiency) of the soul as the aim of all human deliberate action, eudaimonia, generally translated as "happiness" or sometimes "well-being". To have the potential of ever being happy in this way necessarily requires a good character (ēthikē aretē), often translated as moral or ethical virtue or excellence.
Aristotle taught that to achieve a virtuous and potentially happy character requires a first stage of having the fortune to be habituated not deliberately, but by teachers, and experience, leading to a later stage in which one consciously chooses to do the best things. When the best people come to live life this way their practical wisdom (phronesis) and their intellect (nous) can develop with each other towards the highest possible human virtue, the wisdom of an accomplished theoretical or speculative thinker, or in other words, a philosopher.
Politics
In addition to his works on ethics, which address the individual, Aristotle addressed the city in his work titled Politics. Aristotle considered the city to be a natural community. Moreover, he considered the city to be prior in importance to the family, which in turn is prior to the individual, "for the whole must of necessity be prior to the part". He famously stated that "man is by nature a political animal" and argued that humanity's defining factor among others in the animal kingdom is its rationality. Aristotle conceived of politics as being like an organism rather than like a machine, and as a collection of parts none of which can exist without the others. Aristotle's conception of the city is organic, and he is considered one of the first to conceive of the city in this manner.
The common modern understanding of a political community as a modern state is quite different from Aristotle's understanding. Although he was aware of the existence and potential of larger empires, the natural community according to Aristotle was the city (polis) which functions as a political "community" or "partnership" (koinōnia). The aim of the city is not just to avoid injustice or for economic stability, but rather to allow at least some citizens the possibility to live a good life, and to perform beautiful acts: "The political partnership must be regarded, therefore, as being for the sake of noble actions, not for the sake of living together." This is distinguished from modern approaches, beginning with social contract theory, according to which individuals leave the state of nature because of "fear of violent death" or its "inconveniences".
In Protrepticus, Aristotle speaks through a character bearing his own name. As Plato's disciple, Aristotle was rather critical of democracy and, following the outline of certain ideas from Plato's Statesman, he developed a coherent theory of integrating various forms of power into a so-called mixed state.
To illustrate this approach, Aristotle proposed a first-of-its-kind mathematical model of voting, albeit described textually, in which the democratic principle of "one voter–one vote" is combined with the oligarchic principle of "merit-weighted voting".
Aristotle's views on women influenced later Western philosophers, who quoted him as an authority until the end of the Middle Ages, but these views have been controversial in modern times. Aristotle's analysis of procreation describes an active, ensouling masculine element bringing life to an inert, passive female element. The biological differences are a result of the fact that the female body is well-suited for reproduction, which changes her body temperature, which in turn makes her, in Aristotle's view, incapable of participating in political life. On this ground, proponents of feminist metaphysics have accused Aristotle of misogyny and sexism. However, Aristotle gave equal weight to women's happiness as he did to men's, and commented in his Rhetoric that the things that lead to happiness need to be in women as well as men.
Economics
Aristotle made substantial contributions to economic thought, especially to thought in the Middle Ages. In Politics, Aristotle addresses the city, property, and trade. His response to criticisms of private property, in Lionel Robbins's view, anticipated later proponents of private property among philosophers and economists, as it related to the overall utility of social arrangements. Aristotle believed that although communal arrangements may seem beneficial to society, and that although private property is often blamed for social strife, such evils in fact come from human nature. In Politics, Aristotle offers one of the earliest accounts of the origin of money. Money came into use because people became dependent on one another, importing what they needed and exporting the surplus. For the sake of convenience, people then agreed to deal in something that is intrinsically useful and easily applicable, such as iron or silver.
Aristotle's discussions of retail and interest were a major influence on economic thought in the Middle Ages. He had a low opinion of retail, believing that, in contrast to using money to procure things one needs in managing the household, retail trade seeks to make a profit. It thus uses goods as a means to an end, rather than as an end unto itself. He believed that retail trade was in this way unnatural. Similarly, Aristotle considered making a profit through interest unnatural, as it makes a gain out of the money itself, and not from its use.
Aristotle gave a summary of the function of money that was perhaps remarkably precocious for his time. He wrote that because it is impossible to determine the value of every good through a count of the number of other goods it is worth, the necessity arises of a single universal standard of measurement. Money thus allows for the association of different goods and makes them "commensurable". He goes on to state that money is also useful for future exchange, making it a sort of security. That is, "if we do not want a thing now, we shall be able to get it when we do want it".
Rhetoric
Aristotle's Rhetoric proposes that a speaker can use three basic kinds of appeals to persuade his audience: ethos (an appeal to the speaker's character), pathos (an appeal to the audience's emotion), and logos (an appeal to logical reasoning). He also categorizes rhetoric into three genres: epideictic (ceremonial speeches dealing with praise or blame), forensic (judicial speeches over guilt or innocence), and deliberative (speeches calling on an audience to make a decision on an issue). Aristotle also outlines two kinds of rhetorical proofs: enthymeme (proof by syllogism) and paradeigma (proof by example).
Poetics
Aristotle writes in his Poetics that epic poetry, tragedy, comedy, dithyrambic poetry, painting, sculpture, music, and dance are all fundamentally acts of mimesis ("imitation"), each varying in imitation by medium, object, and manner. He applies the term mimesis both as a property of a work of art and also as the product of the artist's intention and contends that the audience's realisation of the mimesis is vital to understanding the work itself. Aristotle states that mimesis is a natural instinct of humanity that separates humans from animals and that all human artistry "follows the pattern of nature". Because of this, Aristotle believed that each of the mimetic arts possesses what Stephen Halliwell calls "highly structured procedures for the achievement of their purposes." For example, music imitates with the media of rhythm and harmony, whereas dance imitates with rhythm alone, and poetry with language. The forms also differ in their object of imitation. Comedy, for instance, is a dramatic imitation of men worse than average; whereas tragedy imitates men slightly better than average. Lastly, the forms differ in their manner of imitation – through narrative or character, through change or no change, and through drama or no drama.
While it is believed that Aristotle's Poetics originally comprised two books – one on comedy and one on tragedy – only the portion that focuses on tragedy has survived. Aristotle taught that tragedy is composed of six elements: plot-structure, character, style, thought, spectacle, and lyric poetry. The characters in a tragedy are merely a means of driving the story; and the plot, not the characters, is the chief focus of tragedy. Tragedy is the imitation of action arousing pity and fear, and is meant to effect the catharsis of those same emotions. Aristotle concludes Poetics with a discussion on which, if either, is superior: epic or tragic mimesis. He suggests that because tragedy possesses all the attributes of an epic, possibly possesses additional attributes such as spectacle and music, is more unified, and achieves the aim of its mimesis in shorter scope, it can be considered superior to epic. Aristotle was a keen systematic collector of riddles, folklore, and proverbs; he and his school had a special interest in the riddles of the Delphic Oracle and studied the fables of Aesop.
Transmission
More than 2300 years after his death, Aristotle remains one of the most influential people who ever lived. He contributed to almost every field of human knowledge then in existence, and he was the founder of many new fields. According to the philosopher Bryan Magee, "it is doubtful whether any human being has ever known as much as he did".
Among countless other achievements, Aristotle was the founder of formal logic, pioneered the study of zoology, and left every future scientist and philosopher in his debt through his contributions to the scientific method. Taneli Kukkonen observes that his achievement in founding two sciences is unmatched, and his reach in influencing "every branch of intellectual enterprise", including Western ethical and political theory, theology, rhetoric, and literary analysis, is equally long. As a result, Kukkonen argues, any analysis of reality today "will almost certainly carry Aristotelian overtones ... evidence of an exceptionally forceful mind." Jonathan Barnes wrote that "an account of Aristotle's intellectual afterlife would be little less than a history of European thought".
Aristotle has been called the father of logic, biology, political science, zoology, embryology, natural law, scientific method, rhetoric, psychology, realism, criticism, individualism, teleology, and meteorology.
What follows is an overview of the transmission and influence of his texts and ideas into the modern era.
His successor, Theophrastus
Aristotle's pupil and successor, Theophrastus, wrote the History of Plants, a pioneering work in botany. Some of his technical terms remain in use, such as carpel (from carpos, "fruit") and pericarp (from pericarpion, "seed chamber").
Theophrastus was much less concerned with formal causes than Aristotle was, instead pragmatically describing how plants functioned.
Later Greek philosophy
The immediate influence of Aristotle's work was felt as the Lyceum grew into the Peripatetic school. Aristotle's students included Aristoxenus, Dicaearchus, Demetrius of Phalerum, Eudemos of Rhodes, Harpalus, Hephaestion, Mnason of Phocis, Nicomachus, and Theophrastus. Aristotle's influence over Alexander the Great is seen in the latter's bringing with him on his expedition a host of zoologists, botanists, and researchers. He had also learned a great deal about Persian customs and traditions from his teacher. Although his respect for Aristotle diminished as his travels made it clear that much of Aristotle's geography was wrong, when the old philosopher released his works to the public, Alexander complained, "Thou hast not done well to publish thy acroamatic doctrines; for in what shall I surpass other men if those doctrines wherein I have been trained are to be all men's common property?"
Hellenistic science
After Theophrastus, the Lyceum failed to produce any original work. Though interest in Aristotle's ideas survived, they were generally taken unquestioningly. It is not until the age of Alexandria under the Ptolemies that advances in biology can be again found.
The first medical teacher at Alexandria, Herophilus of Chalcedon, corrected Aristotle, placing intelligence in the brain, and connected the nervous system to motion and sensation. Herophilus also distinguished between veins and arteries, noting that the latter pulse while the former do not. Though a few ancient atomists such as Lucretius challenged the teleological viewpoint of Aristotelian ideas about life, teleology (and after the rise of Christianity, natural theology) would remain central to biological thought essentially until the 18th and 19th centuries. Ernst Mayr states that there was "nothing of any real consequence in biology after Lucretius and Galen until the Renaissance."
Byzantine scholars
Greek Christian scribes played a crucial role in the preservation of Aristotle by copying all the extant Greek language manuscripts of the corpus. The first Greek Christians to comment extensively on Aristotle were Philoponus, Elias, and David in the sixth century, and Stephen of Alexandria in the early seventh century. John Philoponus stands out for having attempted a fundamental critique of Aristotle's views on the eternity of the world, movement, and other elements of Aristotelian thought. Philoponus questioned Aristotle's teaching of physics, noting its flaws and introducing the theory of impetus to explain his observations.
After a hiatus of several centuries, formal commentary by Eustratius and Michael of Ephesus reappeared in the late eleventh and early twelfth centuries, apparently sponsored by Anna Comnena.
Medieval Islamic world
Aristotle was one of the most revered Western thinkers in early Islamic theology. Most of the still extant works of Aristotle, as well as a number of the original Greek commentaries, were translated into Arabic and studied by Muslim philosophers, scientists and scholars. Averroes, Avicenna and Alpharabius, who wrote on Aristotle in great depth, also influenced Thomas Aquinas and other Western Christian scholastic philosophers. Alkindus greatly admired Aristotle's philosophy, and Averroes spoke of Aristotle as the "exemplar" for all future philosophers. Medieval Muslim scholars regularly described Aristotle as the "First Teacher". The title was later used by Western philosophers (as in the famous poem of Dante) who were influenced by the tradition of Islamic philosophy.
Medieval Europe
With the loss of the study of ancient Greek in the early medieval Latin West, Aristotle was practically unknown there for several centuries except through the Latin translation of the Organon made by Boethius. In the twelfth and thirteenth centuries, interest in Aristotle revived and Latin Christians had translations made, both from Arabic translations, such as those by Gerard of Cremona, and from the original Greek, such as those by James of Venice and William of Moerbeke.
After the Scholastic Thomas Aquinas wrote his Summa Theologica, working from Moerbeke's translations and calling Aristotle "The Philosopher", the demand for Aristotle's writings grew, and the Greek manuscripts returned to the West, stimulating a revival of Aristotelianism in Europe that continued into the Renaissance. These thinkers blended Aristotelian philosophy with Christianity, bringing the thought of Ancient Greece into the Middle Ages. Scholars such as Boethius, Peter Abelard, and John Buridan worked on Aristotelian logic.
According to the scholar Roger Theodore Lafferty, Dante built up the philosophy of the Comedy with the works of Aristotle as a foundation, just as the scholastics used Aristotle as the basis for their thinking. Dante knew Aristotle directly from Latin translations of his works and indirectly through quotations in the works of Albertus Magnus. Dante even acknowledges Aristotle's influence explicitly in the poem, specifically when Virgil justifies the Inferno's structure by citing the Nicomachean Ethics.
Medieval Judaism
Moses Maimonides (considered the foremost intellectual figure of medieval Judaism) adopted Aristotelianism from the Islamic scholars and based his Guide for the Perplexed on it; the work became the basis of Jewish scholastic philosophy. Maimonides also considered Aristotle the greatest philosopher who ever lived, styling him "the chief of the philosophers". In his letter to Samuel ibn Tibbon, Maimonides observes that Samuel need not study the writings of philosophers who preceded Aristotle because the works of the latter are "sufficient by themselves and [superior] to all that were written before them. His intellect, Aristotle's, is the extreme limit of human intellect, apart from him upon whom the divine emanation has flowed forth to such an extent that they reach the level of prophecy, there being no level higher".
Early Modern scientists
In the Early Modern period, scientists such as William Harvey in England and Galileo Galilei in Italy reacted against the theories of Aristotle and other classical era thinkers like Galen, establishing new theories based to some degree on observation and experiment. Harvey demonstrated the circulation of the blood, establishing that the heart functioned as a pump rather than being the seat of the soul and the controller of the body's heat, as Aristotle thought. Galileo used more doubtful arguments to displace Aristotle's physics, proposing that bodies all fall at the same speed whatever their weight.
18th and 19th-century science
The English mathematician George Boole fully accepted Aristotle's logic, but decided "to go under, over, and beyond" it with his system of algebraic logic in his 1854 book The Laws of Thought. This gives logic a mathematical foundation with equations, enables it to solve equations as well as check validity, and allows it to handle a wider class of problems by expanding propositions of any number of terms, not just two.
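Boole's reduction of validity to equation-checking can be illustrated with a brute-force evaluation over all truth assignments. This sketch is a modern illustration in the spirit of algebraic logic, not code from The Laws of Thought:

```python
from itertools import product

# An argument is valid iff no assignment makes all premises true and the
# conclusion false; Boole recast such checks as solving algebraic equations.
def is_valid(premises, conclusion, n_vars):
    for assignment in product([False, True], repeat=n_vars):
        if all(p(*assignment) for p in premises) and not conclusion(*assignment):
            return False  # countermodel found
    return True

# The syllogism "Barbara" over three terms: (a->b), (b->c) entail (a->c).
premises = [lambda a, b, c: not a or b, lambda a, b, c: not b or c]
conclusion = lambda a, b, c: not a or c
print(is_valid(premises, conclusion, 3))  # True
```

Because n_vars is a parameter, the same check handles propositions of any number of terms, echoing Boole's extension beyond two-term syllogisms.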
Charles Darwin regarded Aristotle as the most important contributor to the subject of biology. In an 1882 letter he wrote that "Linnaeus and Cuvier have been my two gods, though in very different ways, but they were mere schoolboys to old Aristotle". In later editions of On the Origin of Species, Darwin also traced evolutionary ideas as far back as Aristotle; the text he cites is a summary by Aristotle of the ideas of the earlier Greek philosopher Empedocles.
Surviving works
Corpus Aristotelicum
The works of Aristotle that have survived from antiquity through medieval manuscript transmission are collected in the Corpus Aristotelicum. These texts, as opposed to Aristotle's lost works, are technical philosophical treatises from within Aristotle's school. Reference to them is made according to the organization of Immanuel Bekker's Royal Prussian Academy edition (Aristotelis Opera edidit Academia Regia Borussica, Berlin, 1831–1870), which in turn is based on ancient classifications of these works.
Loss and preservation
Aristotle wrote his works on papyrus scrolls, the common writing medium of that era. His writings are divisible into two groups: the "exoteric", intended for the public, and the "esoteric", for use within the Lyceum school. Aristotle's "lost" works stray considerably in characterization from the surviving Aristotelian corpus. Whereas the lost works appear to have been originally written with a view to subsequent publication, the surviving works mostly resemble lecture notes not intended for publication. Cicero's description of Aristotle's literary style as "a river of gold" must have applied to the published works, not the surviving notes. A major question in the history of Aristotle's works is how the exoteric writings were all lost, and how the ones now possessed came to be found. The consensus is that Andronicus of Rhodes collected the esoteric works of Aristotle's school which existed in the form of smaller, separate works, distinguished them from those of Theophrastus and other Peripatetics, edited them, and finally compiled them into the more cohesive, larger works as they are known today.
According to Strabo and Plutarch, after Aristotle's death, his library and writings went to Theophrastus (Aristotle's successor as head of the Lyceum and the Peripatetic school). After the death of Theophrastus, the Peripatetic library went to Neleus of Scepsis.
Some time later, the Kingdom of Pergamon began conscripting books for a royal library, and the heirs of Neleus hid their collection in a cellar to prevent it from being seized for that purpose. The library was stored there for about a century and a half, in conditions that were not ideal for document preservation. On the death of Attalus III, which also ended the royal library ambitions, the existence of the Aristotelian library was disclosed; it was purchased by Apellicon and returned to Athens.
Apellicon sought to recover the texts, many of which were seriously degraded at this point due to the conditions in which they were stored. He had them copied out into new manuscripts, and used his best guesswork to fill in the gaps where the originals were unreadable.
When Sulla seized Athens in 86 BC, he took the library and transferred it to Rome. There, Andronicus of Rhodes organized the texts into the first complete edition of Aristotle's works (and works attributed to him). The Aristotelian texts we have today are based on these.
Legacy
Depictions
Paintings
Aristotle has been depicted by major artists including Lucas Cranach the Elder, Justus van Gent, Raphael, Paolo Veronese, Jusepe de Ribera, Rembrandt, and Francesco Hayez over the centuries. Among the best-known depictions is Raphael's fresco The School of Athens, in the Vatican's Apostolic Palace, where the figures of Plato and Aristotle are central to the image, at the architectural vanishing point, reflecting their importance. Rembrandt's Aristotle with a Bust of Homer, too, is a celebrated work, showing the knowing philosopher and the blind Homer from an earlier age: as the art critic Jonathan Jones writes, "this painting will remain one of the greatest and most mysterious in the world, ensnaring us in its musty, glowing, pitch-black, terrible knowledge of time."
Eponyms
The Aristotle Mountains in Antarctica are named after Aristotle. He was the first person known to conjecture, in his book Meteorology, the existence of a landmass in the southern high-latitude region, which he called Antarctica. Aristoteles is a crater on the Moon bearing the classical form of Aristotle's name.
|
https://en.wikipedia.org/wiki/Altruism
|
Altruism is the principle and practice of concern for the well-being and/or happiness of other humans or animals. While objects of altruistic concern vary, it is an important moral value in many cultures and religions. It may be considered a synonym of selflessness, the opposite of selfishness.
The word altruism was popularized (and possibly coined) by the French philosopher Auguste Comte in French, as altruisme, as an antonym of egoism. He derived it from the Italian altrui, which in turn was derived from Latin alteri, meaning "other people" or "somebody else".
Altruism, as observed in populations of organisms, is when an individual performs an action at a cost to themselves (in terms of e.g. pleasure and quality of life, time, probability of survival or reproduction) that benefits, directly or indirectly, another individual, without the expectation of reciprocity or compensation for that action.
Altruism can be distinguished from feelings of loyalty or concern for the common good. The latter are predicated upon social relationships, whilst altruism does not consider relationships. Whether "true" altruism is possible in human psychology is a subject of debate. The theory of psychological egoism suggests that no act of sharing, helping, or sacrificing can be truly altruistic, as the actor may receive an intrinsic reward in the form of personal gratification. The validity of this argument depends on whether such intrinsic rewards qualify as "benefits".
The term altruism may also refer to an ethical doctrine that claims that individuals are morally obliged to benefit others. Used in this sense, it is usually contrasted with egoism, which claims individuals are morally obligated to serve themselves first.
Effective altruism is the use of evidence and reason to determine the most effective ways to benefit others.
The notion of altruism
The concept of altruism has a long history in philosophical and ethical thought. The term was coined in the 19th century by the founding sociologist and philosopher of science Auguste Comte, and has become a major topic for psychologists (especially evolutionary psychology researchers), evolutionary biologists, and ethologists. Whilst ideas about altruism from one field can affect the other fields, the different methods and focuses of these fields often lead to different perspectives on altruism. In simple terms, altruism is caring about the welfare of other people and acting to help them.
Scientific viewpoints
Anthropology
Marcel Mauss's essay The Gift contains a passage called "Note on alms". This note describes the evolution of the notion of alms (and by extension of altruism) from the notion of sacrifice.
Evolutionary explanations
In the science of ethology (the study of animal behaviour), and more generally in the study of social evolution, altruism refers to behavior by an individual that increases the fitness of another individual while decreasing the fitness of the actor. In evolutionary psychology this term may be applied to a wide range of human behaviors such as charity, emergency aid, help to coalition partners, tipping, courtship gifts, production of public goods, and environmentalism.
Theories of apparently altruistic behavior were driven by the need to produce ideas compatible with evolutionary origins. Two related strands of research on altruism have emerged from traditional evolutionary analyses and evolutionary game theory: a mathematical model and analysis of behavioral strategies.
Some of the proposed mechanisms are:
Kin selection. That animals and humans are more altruistic towards close kin than towards distant kin and non-kin has been confirmed in numerous studies across many different cultures. Even subtle cues indicating kinship may unconsciously increase altruistic behavior. One kinship cue is facial resemblance: one study found that slightly altering photographs to resemble the faces of study participants more closely increased the trust the participants expressed regarding the depicted persons. Another cue is having the same family name, especially if rare, which has been found to increase helpful behavior. Another study found that the greater the number of perceived kin in a group, the more cooperative the behavior. Using kinship terms in political speeches increased audience agreement with the speaker in one study; this effect was powerful for firstborns, who are typically close to their families.
Vested interests. People are likely to suffer if their friends, allies and those from similar social ingroups suffer or disappear. Helping such group members may, therefore, also benefit the altruist. Making ingroup membership more noticeable increases cooperativeness. Extreme self-sacrifice towards the ingroup may be adaptive if a hostile outgroup threatens the entire ingroup.
Reciprocal altruism. See also Reciprocity (evolution).
Direct reciprocity. Research shows that it can be beneficial to help others if there is a chance that they will reciprocate the help. The effective tit-for-tat strategy is one game-theoretic example (a minimal simulation appears after this list). Many people seem to follow a similar strategy by cooperating if and only if others cooperate in return.
One consequence is that people are more cooperative with one another if they are more likely to interact again in the future. People tend to be less cooperative if they perceive that the frequency of helpers in the population is lower. They tend to help less if they see non-cooperativeness by others, and this effect tends to be stronger than the opposite effect of seeing cooperative behaviors. Simply changing the cooperative framing of a proposal may increase cooperativeness, such as calling it a "Community Game" instead of a "Wall Street Game".
A tendency towards reciprocity implies that people feel obligated to respond if someone helps them. This has been used by charities that give small gifts to potential donors hoping to induce reciprocity. Another method is to announce publicly that someone has given a large donation. The tendency to reciprocate can even generalize, so people become more helpful toward others after being helped. On the other hand, people will avoid or even retaliate against those perceived not to be cooperating. People sometimes mistakenly fail to help when they intended to, or their helping may not be noticed, which may cause unintended conflicts. As such, it may be an optimal strategy to be slightly forgiving of and have a slightly generous interpretation of non-cooperation.
People are more likely to cooperate on a task if they can communicate with one another first. This may be due to better cooperativeness assessments or promises exchange. They are more cooperative if they can gradually build trust instead of being asked to give extensive help immediately. Direct reciprocity and cooperation in a group can be increased by changing the focus and incentives from intra-group competition to larger-scale competitions, such as between groups or against the general population. Thus, giving grades and promotions based only on an individual's performance relative to a small local group, as is common, may reduce cooperative behaviors in the group.
Indirect reciprocity. Because people avoid poor reciprocators and cheaters, a person's reputation is important. A person esteemed for their reciprocity is more likely to receive assistance, even from individuals they haven't directly interacted with before.
Strong reciprocity. This form of reciprocity is expressed by people who invest more resources in cooperation and punishment than what is deemed optimal based on established theories of altruism.
Pseudo-reciprocity. An organism behaves altruistically and the recipient does not reciprocate but has an increased chance of acting in a way that is selfish but also as a byproduct benefits the altruist.
Costly signaling and the handicap principle. Altruism, by diverting resources from the altruist, can act as an "honest signal" of available resources and the skills to acquire them. This may signal to others that the altruist is a valuable potential partner. It may also signal interactive and cooperative intentions, since someone who does not expect to interact further in the future gains nothing from such costly signaling. While it's uncertain if costly signaling can predict long-term cooperative traits, people tend to trust helpers more. Costly signaling loses its value when everyone shares identical traits, resources, and cooperative intentions, but it gains significance as population variability in these aspects increases.
Hunters who share meat display a costly signal of ability. Research has found that good hunters have higher reproductive success and more adulterous relations even if they receive no more of the hunted meat than anyone else. Similarly, holding large feasts and giving large donations are ways of demonstrating one's resources. Heroic risk-taking has also been interpreted as a costly signal of ability.
Both indirect reciprocity and costly signaling depend on reputation value and tend to make similar predictions. One is that people will be more helpful when they know that their helping behavior will be communicated to people they will interact with later, publicly announced, discussed, or observed by someone else. This has been documented in many studies. The effect is sensitive to subtle cues, such as people being more helpful when there were stylized eyespots instead of a logo on a computer screen. Weak reputational cues such as eyespots may become unimportant if there are stronger cues present and may lose their effect with continued exposure unless reinforced with real reputational effects. Public displays such as public weeping for dead celebrities and participation in demonstrations may be influenced by a desire to be seen as generous. People who know that they are publicly monitored sometimes even wastefully donate the money they know is not needed by the recipient because of reputational concerns.
Women find altruistic men to be attractive partners. When women look for a long-term partner, altruism may be a trait they prefer as it may indicate that the prospective partner is also willing to share resources with her and her children. Men perform charitable acts in the early stages of a romantic relationship or simply when in the presence of an attractive woman. While both sexes state that kindness is the most preferable trait in a partner, there is some evidence that men place less value on this than women and that women may not be more altruistic in the presence of an attractive man. Men may even avoid altruistic women in short-term relationships, which may be because they expect less success.
People may compete for the social benefit of a burnished reputation, which may cause competitive altruism. On the other hand, in some experiments, a proportion of people do not seem to care about reputation and do not help more, even if this is conspicuous. This may be due to reasons such as psychopathy or that they are so attractive that they need not be seen as altruistic. The reputational benefits of altruism occur in the future compared to the immediate costs of altruism. While humans and other organisms generally place less value on future costs/benefits as compared to those in the present, some have shorter time horizons than others, and these people tend to be less cooperative.
Explicit extrinsic rewards and punishments have sometimes been found to have a counterintuitively inverse effect on behaviors when compared to intrinsic rewards. This may be because such extrinsic incentives may replace (partially or in whole) intrinsic and reputational incentives, motivating the person to focus on obtaining the extrinsic rewards, which may make the thus-incentivized behaviors less desirable. People prefer altruism in others when it appears to be due to a personality characteristic rather than overt reputational concerns; simply pointing out that there are reputational benefits of action may reduce them. This may be used as a derogatory tactic against altruists ("you're just virtue signalling"), especially by those who are non-cooperators. A counterargument is that doing good due to reputational concerns is better than doing no good.
Group selection. It has controversially been argued by some evolutionary scientists such as David Sloan Wilson that natural selection can act at the level of non-kin groups to produce adaptations that benefit a non-kin group, even if these adaptations are detrimental at the individual level. Thus, while altruistic persons may under some circumstances be outcompeted by less altruistic persons at the individual level, according to group selection theory, the opposite may occur at the group level where groups consisting of the more altruistic persons may outcompete groups consisting of the less altruistic persons. Such altruism may only extend to ingroup members while directing prejudice and antagonism against outgroup members (see also in-group favoritism). Many other evolutionary scientists have criticized group selection theory.
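As noted under direct reciprocity above, tit-for-tat is the classic game-theoretic example of conditional cooperation. Below is a minimal simulation of it in an iterated prisoner's dilemma; the payoff values (T=5, R=3, P=1, S=0) are the conventional textbook ones, an assumption for illustration rather than values from the studies cited:

```python
# Minimal sketch of tit-for-tat in an iterated prisoner's dilemma.
# Payoffs use the conventional textbook values, an illustrative assumption.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):   # cooperate first, then mirror the opponent
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): sustained mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): exploited only in the first round
```

Against itself, tit-for-tat sustains cooperation; against an unconditional defector it is exploited only once, which is what makes conditional cooperation robust in repeated interaction.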
Such explanations do not imply that humans consciously calculate how to increase their inclusive fitness when doing altruistic acts. Instead, evolution has shaped psychological mechanisms, such as emotions, that promote certain altruistic behaviors.
The benefits for the altruist may be increased, and the costs reduced, by being more altruistic towards certain groups. Research has found that people are more altruistic to kin than to non-kin, to friends than to strangers, to the attractive than to the unattractive, to non-competitors than to competitors, and to members of in-groups than to members of out-groups.
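The kin gradient reported here is commonly formalized by Hamilton's rule, a standard result the text alludes to only implicitly: an altruistic act is favored by selection when rB > C, where C is the fitness cost to the actor, B the fitness benefit to the recipient, and r their genetic relatedness. Higher relatedness thus lowers the effective price of helping, matching the kin-over-non-kin pattern above.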
The study of altruism was the initial impetus behind George R. Price's development of the Price equation, a mathematical equation used to study genetic evolution. An interesting example of altruism is found in the cellular slime moulds, such as Dictyostelium mucoroides. These protists live as individual amoebae until starved, at which point they aggregate and form a multicellular fruiting body in which some cells sacrifice themselves to promote the survival of other cells in the fruiting body.
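In its standard form, the Price equation partitions the change in a trait z across a population with fitness w as w̄ Δz̄ = Cov(wᵢ, zᵢ) + E(wᵢ Δzᵢ): the covariance term captures selection on the trait, and the expectation term captures transmission effects. Applied hierarchically, the between-group covariance can favor altruism even while within-group selection opposes it, which is why the equation is a workhorse of the group-selection debate above.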
Selective investment theory proposes that close social bonds, and associated emotional, cognitive, and neurohormonal mechanisms, evolved to facilitate long-term, high-cost altruism between those closely depending on one another for survival and reproductive success.
Such cooperative behaviors have sometimes been seen as arguments for left-wing politics, for example, by the Russian zoologist and anarchist Peter Kropotkin in his 1902 book Mutual Aid: A Factor of Evolution and by the moral philosopher Peter Singer in his book A Darwinian Left.
Neurobiology
Jorge Moll and Jordan Grafman, neuroscientists at the National Institutes of Health and LABS-D'Or Hospital Network, provided the first evidence for the neural bases of altruistic giving in normal healthy volunteers, using functional magnetic resonance imaging. In their research, they showed that both pure monetary rewards and charitable donations activated the mesolimbic reward pathway, a primitive part of the brain that usually responds to food and sex. However, when volunteers generously placed the interests of others before their own by making charitable donations, another brain circuit was selectively activated: the subgenual cortex/septal region. These structures are associated with social attachment and bonding in other species. The experiment indicated that altruism is not a higher moral faculty overpowering innate selfish desires, but a fundamental, ingrained, and enjoyable trait in the brain. One brain region, the subgenual anterior cingulate cortex/basal forebrain, contributes to learning altruistic behavior, especially in people with empathy. The same study reported a connection between giving to charity and the promotion of social bonding.
Bill Harbaugh, a University of Oregon economist, in an fMRI scanner test conducted with his psychologist colleague Ulrich Mayr, reached the same conclusions as Jorge Moll and Jordan Grafman about giving to charity, although they were able to divide the study group into two groups: "egoists" and "altruists". One of their discoveries was that, though rarely, even some of those considered "egoists" gave more than expected when that would help others, leading to the conclusion that other factors influence charity, such as a person's environment and values.
Psychology
The International Encyclopedia of the Social Sciences defines psychological altruism as "a motivational state to increase another's welfare". Psychological altruism is contrasted with psychological egoism, which refers to the motivation to increase one's welfare.
There has been some debate on whether humans are capable of psychological altruism. Some definitions specify a self-sacrificial nature to altruism and a lack of external rewards for altruistic behaviors. However, because altruism ultimately benefits the self in many cases, the selflessness of altruistic acts is difficult to prove. The social exchange theory postulates that altruism only exists when the benefits outweigh the costs to the self.
Daniel Batson, a psychologist, examined this question and argued against the social exchange theory. He identified four significant motives: to ultimately benefit the self (egoism), to ultimately benefit the other person (altruism), to benefit a group (collectivism), or to uphold a moral principle (principlism). Altruism that ultimately serves selfish gains is thus differentiated from selfless altruism, but the general conclusion has been that empathy-induced altruism can be genuinely selfless.
The empathy-altruism hypothesis states that psychological altruism exists and is evoked by the empathic desire to help someone suffering. Feelings of empathic concern are contrasted with personal distress, which compels people to reduce their unpleasant emotions and increase their positive ones by helping someone in need. Empathy is thus not selfless, since altruism works either as a way to avoid those negative, unpleasant feelings and have positive, pleasant feelings when triggered by others' need for help, or as a way to gain social reward or avoid social punishment by helping. People with empathic concern help others in distress even when exposure to the situation could be easily avoided, whereas those lacking in empathic concern avoid helping unless it is difficult or impossible to avoid exposure to another's suffering.
Helping behavior is seen in humans from about two years old when a toddler can understand subtle emotional cues.
In psychological research on altruism, studies often observe altruism as demonstrated through prosocial behaviors such as helping, comforting, sharing, cooperation, philanthropy, and community service. People are most likely to help if they recognize that a person is in need and feel personal responsibility for reducing the person's distress. The number of bystanders witnessing pain or suffering affects the likelihood of helping (the bystander effect): larger numbers of bystanders decrease individual feelings of responsibility. However, a witness with a high level of empathic concern is likely to assume personal responsibility entirely, regardless of the number of bystanders.
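A toy probability model (our illustration, not a model used in the cited studies) shows how diffusion of responsibility can lower helping even as witnesses multiply: suppose each of n bystanders independently helps with probability c/n.

```python
# Toy diffusion-of-responsibility model (an illustrative assumption, not a
# model from the bystander-effect literature): each of n witnesses helps
# independently with probability c/n, so felt responsibility dilutes with n.
def p_anyone_helps(n, c=1.0):
    per_person = min(1.0, c / n)
    return 1 - (1 - per_person) ** n

for n in (1, 2, 5, 20):
    print(n, round(p_anyone_helps(n), 3))  # 1.0, 0.75, 0.672, 0.642
```

Even with more potential helpers, the chance that anyone helps falls toward a plateau, consistent with the direction of the effect described above.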
Many studies have observed the effects of volunteerism (as a form of altruism) on happiness and health and have consistently found that those who exhibit volunteerism also have better current and future health and well-being. In a study of older adults, those who volunteered had higher life satisfaction and will to live, and less depression, anxiety, and somatization. Volunteerism and helping behavior have not only been shown to improve mental health but physical health and longevity as well, attributable to the activity and social integration it encourages. One study examined the physical health of mothers who volunteered over 30 years and found that 52% of those who did not belong to a volunteer organization experienced a major illness while only 36% of those who did volunteer experienced one. A study on adults aged 55 and older found that during the four-year study period, people who volunteered for two or more organizations had a 63% lower likelihood of dying. After controlling for prior health status, it was determined that volunteerism accounted for a 44% reduction in mortality. Merely being aware of kindness in oneself and others is also associated with greater well-being. A study that asked participants to count each act of kindness they performed for one week significantly enhanced their subjective happiness.
While research supports the idea that altruistic acts bring about happiness, it has also been found to work in the opposite direction—that happier people are also kinder. The relationship between altruistic behavior and happiness is bidirectional. Studies found that generosity increases linearly from sad to happy affective states.
Feeling over-taxed by the needs of others has negative effects on health and happiness. For example, one study on volunteerism found that feeling overwhelmed by others' demands had an even stronger negative effect on mental health than helping had a positive one (although positive effects were still significant).
Pathological altruism
Pathological altruism is altruism taken to an unhealthy extreme, such that it either harms the altruistic person or the person's well-intentioned actions cause more harm than good.
The term "pathological altruism" was popularised by the book Pathological Altruism.
Examples include depression and burnout seen in healthcare professionals, an unhealthy focus on others to the detriment of one's own needs, hoarding of animals, and ineffective philanthropic and social programs that ultimately worsen the situations they are meant to aid.
Sociology
"Sociologists have long been concerned with how to build the good society". The structure of our societies and how individuals come to exhibit charitable, philanthropic, and other pro-social, altruistic actions for the common good is a commonly researched topic within the field. The American Sociology Association (ASA) acknowledges public sociology saying, "The intrinsic scientific, policy, and public relevance of this field of investigation in helping to construct 'good societies' is unquestionable". This type of sociology seeks contributions that aid popular and theoretical understandings of what motivates altruism and how it is organized, and promotes an altruistic focus in order to benefit the world and people it studies.
How altruism is framed, organized, carried out, and what motivates it at the group level is an area of focus that sociologists investigate in order to contribute back to the groups it studies and "build the good society". The motivation of altruism is also the focus of study; for example, one study links the occurrence of moral outrage to altruistic compensation of victims. Studies show that generosity in laboratory and in online experiments is contagious – people imitate the generosity they observe in others.
Religious viewpoints
Most, if not all, of the world's religions promote altruism as a very important moral value. Buddhism, Christianity, Hinduism, Islam, Jainism, Judaism, and Sikhism all place particular emphasis on altruistic morality.
Buddhism
Altruism figures prominently in Buddhism. Love and compassion are components of all forms of Buddhism, and are focused on all beings equally: love is the wish that all beings be happy, and compassion is the wish that all beings be free from suffering. "Many illnesses can be cured by the one medicine of love and compassion. These qualities are the ultimate source of human happiness, and the need for them lies at the very core of our being" (Dalai Lama).
The notion of altruism is modified in such a world-view, since the belief is that such a practice promotes the practitioner's own happiness: "The more we care for the happiness of others, the greater our own sense of well-being becomes" (Dalai Lama).
In the context of larger ethical discussions on moral action and judgment, Buddhism is characterized by the belief that negative (unhappy) consequences of our actions derive not from punishment or correction based on moral judgment, but from the law of karma, which functions like a natural law of cause and effect. A simple illustration of such cause and effect is the case of experiencing the effects of what one causes: if one causes suffering, then as a natural consequence one would experience suffering; if one causes happiness, then as a natural consequence one would experience happiness.
Jainism
The fundamental principles of Jainism revolve around altruism, not only for humans but for all sentient beings. Jainism preaches ahimsa – to live and let live, not harming sentient beings, i.e. uncompromising reverence for all life. It also considers all living things to be equal. The first Tirthankara, Rishabhdev, introduced the concept of altruism for all living beings, from extending knowledge and experience to others to donation, giving oneself up for others, non-violence, and compassion for all living things.
The principle of nonviolence seeks to minimize karmas which limit the capabilities of the soul. Jainism views every soul as worthy of respect because it has the potential to attain godhood as understood in Jainism. Because all living beings possess a soul, great care and awareness is essential in one's actions. Jainism emphasizes the equality of all life, advocating harmlessness towards all, whether the creatures are great or small. This policy extends even to microscopic organisms. Jainism acknowledges that every person has different capabilities and capacities to practice, and therefore accepts different levels of compliance for ascetics and householders.
Christianity
Thomas Aquinas interprets "You should love your neighbour as yourself" as meaning that love for ourselves is the exemplar of love for others. Considering that "the love with which a man loves himself is the form and root of friendship", he quotes Aristotle's view that "the origin of friendly relations with others lies in our relations to ourselves". Aquinas concluded that though we are not bound to love others more than ourselves, we naturally seek the common good, the good of the whole, more than any private good, the good of a part. However, he thought we should love God more than ourselves and our neighbours, and more than our bodily life, since the ultimate purpose of loving our neighbour is to share in eternal beatitude: a more desirable thing than bodily well-being. In coining the word "altruism", as stated above, Comte was probably opposing this Thomistic doctrine, which is present in some theological schools within Catholicism. The aim and focus of the Christian life is a life that glorifies God: obeying Christ's command to treat others equally, caring for them, and understanding that eternity in heaven is what Jesus's resurrection at Calvary was about.
Many biblical authors draw a strong connection between love of others and love of God. The First Epistle of John states that for one to love God one must love his fellow man, and that hatred of one's fellow man is the same as hatred of God. Thomas Jay Oord has argued in several books that altruism is but one possible form of love. An altruistic action is not always a loving action. Oord defines altruism as acting for the other's good, and he agrees with feminists who note that sometimes love requires acting for one's own good when the other's demands undermine overall well-being.
German philosopher Max Scheler distinguishes two ways in which the strong can help the weak. One way is a sincere expression of Christian love, "motivated by a powerful feeling of security, strength, and inner salvation, of the invincible fullness of one's own life and existence". Another way is merely "one of the many modern substitutes for love,... nothing but the urge to turn away from oneself and to lose oneself in other people's business". At its worst, Scheler says, "love for the small, the poor, the weak, and the oppressed is really disguised hatred, repressed envy, an impulse to detract, etc., directed against the opposite phenomena: wealth, strength, power, largesse."
Islam
In Islam, "" () (altruism) means "preferring others to oneself". For Sufis, this means devotion to others through complete forgetfulness of one's own concerns, where concern for others is deemed as a demand made by Allah (i.e. God) on the human body, considered to be property of Allah alone. The importance of lies in sacrifice for the sake of the greater good; Islam considers those practicing as abiding by the highest degree of nobility.
This is similar to the notion of chivalry, but unlike that European concept, in . A constant concern for Allah results in a careful attitude towards people, animals, and other things in this world.
Judaism
Judaism defines altruism as the desired goal of creation. Rabbi Abraham Isaac Kook stated that love is the most important attribute in humanity. Love is defined as bestowal, or giving, which is the intention of altruism. This can be altruism towards humanity that leads to altruism towards the creator or God. Kabbalah defines God as the force of giving in existence. Rabbi Moshe Chaim Luzzatto focused on the "purpose of creation" and how the will of God was to bring creation into perfection and adhesion with this force of giving.
Modern Kabbalah developed by Rabbi Yehuda Ashlag, in his writings about the future generation, focuses on how society could achieve an altruistic social framework. Ashlag proposed that such a framework is the purpose of creation, and everything that happens is to raise humanity to the level of altruism, love for one another. Ashlag focused on society and its relation to divinity.
Sikhism
Altruism is essential to the Sikh religion. The central faith in Sikhism is that the greatest deed anyone can do is to imbibe and live the godly qualities such as love, affection, sacrifice, patience, harmony, and truthfulness. Sevā, or selfless service to the community for its own sake, is an important concept in Sikhism.
The fifth Guru, Arjun Dev, sacrificed his life to uphold "22 carats of pure truth, the greatest gift to humanity", the Guru Granth. The ninth Guru, Tegh Bahadur, sacrificed his head to protect weak and defenseless people against atrocity.
In the late seventeenth century, Guru Gobind Singh (the tenth Guru in Sikhism) was at war with the Mughal rulers to protect the people of different faiths when a fellow Sikh, Bhai Kanhaiya, tended to the troops of the enemy. He gave water to both friends and foes who were wounded on the battlefield. Some of the enemy began to fight again, and some Sikh warriors were annoyed by Bhai Kanhaiya as he was helping their enemy. Sikh soldiers brought Bhai Kanhaiya before Guru Gobind Singh and complained of his action, which they considered counterproductive to their struggle on the battlefield. "What were you doing, and why?" asked the Guru. "I was giving water to the wounded because I saw your face in all of them", replied Bhai Kanhaiya. The Guru responded, "Then you should also give them ointment to heal their wounds. You were practicing what you were coached in the house of the Guru."
Under the tutelage of the Guru, Bhai Kanhaiya subsequently founded a volunteer corps for altruism, which is still engaged today in doing good to others and in training new recruits for this service.
Hinduism
In Hinduism, selflessness, love, kindness, and forgiveness are considered the highest acts of humanity. Giving alms to beggars or to the poor is considered a divine act, and Hindus believe it will free their souls from guilt and lead them to heaven in the afterlife. Altruism is also the central act of various Hindu myths and religious poems and songs. Mass donation of clothes to the poor, blood donation camps, and mass food donation for the poor are common features of various Hindu religious ceremonies.
The Bhagavad Gita supports the doctrine of karma yoga (achieving oneness with God through action) and of "Nishkam Karma", action without expectation or desire for personal gain, which can be said to encompass altruism. Altruistic acts are generally celebrated and very well received in Hindu literature and are central to Hindu morality.
Philosophy
There is a wide range of philosophical views on humans' obligations or motivations to act altruistically. Proponents of ethical altruism maintain that individuals are morally obligated to act altruistically. The opposing view is ethical egoism, which maintains that moral agents should always act in their own self-interest. Both ethical altruism and ethical egoism contrast with utilitarianism, which maintains that each agent should act so as to maximise overall well-being, giving equal weight to their own interests and those of others.
A related concept in descriptive ethics is psychological egoism, the thesis that humans always act in their own self-interest and that true altruism is impossible. Rational egoism is the view that rationality consists in acting in one's self-interest (without specifying how this affects one's moral obligations).
Effective altruism
Effective altruism is a philosophy and social movement that uses evidence and reasoning to determine the most effective ways to benefit others. Effective altruism encourages individuals to consider all causes and actions and to act in the way that brings about the greatest positive impact, based upon their values. It is the broad, evidence-based, and cause-neutral approach that distinguishes effective altruism from traditional altruism or charity. Effective altruism is part of the larger movement towards evidence-based practices.
While a substantial proportion of effective altruists have focused on the nonprofit sector, the philosophy of effective altruism applies more broadly to prioritizing the scientific projects, companies, and policy initiatives which can be estimated to save lives, help people, or otherwise have the biggest benefit. People associated with the movement include the philosopher Peter Singer, Facebook co-founder Dustin Moskovitz, Cari Tuna, Oxford-based researchers William MacAskill and Toby Ord, and professional poker player Liv Boeree.
Genetics
OXTR, CD38, COMT, DRD4, DRD5, IGF2, and GABRB2 are candidate genes for influencing altruistic behavior.
Digital altruism
Digital altruism is the notion that some are willing to freely share information based on the principle of reciprocity and in the belief that in the end, everyone benefits from sharing information via the Internet.
There are three types of digital altruism: (1) "everyday digital altruism", involving expedience, ease, moral engagement, and conformity; (2) "creative digital altruism", involving creativity, heightened moral engagement, and cooperation; and (3) "co-creative digital altruism", involving creativity, moral engagement, and meta-cooperative efforts.
|
https://en.wikipedia.org/wiki/Alchemy
|
Alchemy (from Arabic: al-kīmiyā; from Ancient Greek: χυμεία, khumeía) is an ancient branch of natural philosophy, a philosophical and protoscientific tradition that was historically practiced in China, India, the Muslim world, and Europe. In its Western form, alchemy is first attested in a number of pseudepigraphical texts written in Greco-Roman Egypt during the first few centuries AD.
Alchemists attempted to purify, mature, and perfect certain materials. Common aims were chrysopoeia, the transmutation of "base metals" (e.g., lead) into "noble metals" (particularly gold); the creation of an elixir of immortality; and the creation of panaceas able to cure any disease. The perfection of the human body and soul was thought to result from the alchemical magnum opus ("Great Work"). The concept of creating the philosophers' stone was variously connected with all of these projects.
Islamic and European alchemists developed a basic set of laboratory techniques, theories, and terms, some of which are still in use today. They did not abandon the Ancient Greek philosophical idea that everything is composed of four elements, and they tended to guard their work in secrecy, often making use of cyphers and cryptic symbolism. In Europe, the 12th-century translations of medieval Islamic works on science and the rediscovery of Aristotelian philosophy gave birth to a flourishing tradition of Latin alchemy. This late medieval tradition of alchemy would go on to play a significant role in the development of early modern science (particularly chemistry and medicine).
Modern discussions of alchemy are generally split into an examination of its exoteric practical applications and its esoteric spiritual aspects, despite criticisms by scholars such as Eric J. Holmyard and Marie-Louise von Franz that they should be understood as complementary. The former is pursued by historians of the physical sciences, who examine the subject in terms of early chemistry, medicine, and charlatanism, and the philosophical and religious contexts in which these events occurred. The latter interests historians of esotericism, psychologists, and some philosophers and spiritualists. The subject has also made an ongoing impact on literature and the arts.
Etymology
The word alchemy comes from old French alquemie, alkimie, used in Medieval Latin as alchymia. This name was itself adopted from the Arabic word al-kīmiyā (الكيمياء). The Arabic in turn was a borrowing of the Late Greek term khēmeía (χημεία), also spelled khumeia (χυμεία) and khēmía (χημία), with al- being the Arabic definite article 'the'. Together this association can be interpreted as 'the process of transmutation by which to fuse or reunite with the divine or original form'. Several etymologies have been proposed for the Greek term. The first was proposed by Zosimos of Panopolis (3rd–4th centuries), who derived it from the name of a book, the Khemeu. Hermann Diels argued in 1914 that it rather derived from χύμα, used to describe metallic objects formed by casting.
Others trace its roots to the Egyptian name kēme (hieroglyphic 𓆎𓅓𓏏𓊖, km.t), meaning 'black earth', which refers to the fertile and auriferous soil of the Nile valley, as opposed to red desert sand. According to the Egyptologist Wallis Budge, the Arabic word al-kīmiyāʾ actually means "the Egyptian [science]", borrowing from the Coptic word for "Egypt", kēme (or its equivalent in the Mediaeval Bohairic dialect of Coptic, khēme). This Coptic word derives from Demotic kmy, itself from ancient Egyptian km.t. The ancient Egyptian word referred to both the country and the colour "black" (Egypt was the "black Land", by contrast with the "red Land", the surrounding desert).
History
Alchemy encompasses several philosophical traditions spanning some four millennia and three continents. These traditions' general penchant for cryptic and symbolic language makes it hard to trace their mutual influences and "genetic" relationships. One can distinguish at least three major strands, which appear to be mostly independent, at least in their earlier stages: Chinese alchemy, centered in China; Indian alchemy, centered on the Indian subcontinent; and Western alchemy, which occurred around the Mediterranean and whose center shifted over the millennia from Greco-Roman Egypt to the Islamic world, and finally medieval Europe. Chinese alchemy was closely connected to Taoism and Indian alchemy with the Dharmic faiths. In contrast, Western alchemy developed its philosophical system mostly independently of, but under the influence of, various Western religions. It is still an open question whether these three strands share a common origin, or to what extent they influenced each other.
Hellenistic Egypt
The start of Western alchemy may generally be traced to ancient and Hellenistic Egypt, where the city of Alexandria was a center of alchemical knowledge, and retained its pre-eminence through most of the Greek and Roman periods. Following the work of André-Jean Festugière, modern scholars see alchemical practice in the Roman Empire as originating from the Egyptian goldsmith's art, Greek philosophy and different religious traditions. Tracing the origins of the alchemical art in Egypt is complicated by the pseudepigraphic nature of texts from the Greek alchemical corpus. The treatises of Zosimos of Panopolis, the earliest historically attested author (fl. c. 300 AD), can help in situating the other authors. Zosimus based his work on that of older alchemical authors, such as Mary the Jewess, Pseudo-Democritus, and Agathodaimon, but very little is known about any of these authors. The most complete of their works, The Four Books of Pseudo-Democritus, were probably written in the first century AD.
Recent scholarship tends to emphasize the testimony of Zosimos, who traced the alchemical arts back to Egyptian metallurgical and ceremonial practices. It has also been argued that early alchemical writers borrowed the vocabulary of Greek philosophical schools but did not implement any of their doctrines in a systematic way. In his Final Abstinence (also known as the "Final Count"), Zosimos explains that the ancient practice of "tinctures" (the technical Greek name for the alchemical arts) had been taken over by certain "demons" who taught the art only to those who offered them sacrifices. Since Zosimos also called the demons "the guardians of places" and those who offered them sacrifices "priests", it is fairly clear that he was referring to the gods of Egypt and their priests. While critical of the kind of alchemy he associated with the Egyptian priests and their followers, Zosimos nonetheless saw the tradition's recent past as rooted in the rites of the Egyptian temples.
Mythology – Zosimos of Panopolis asserted that alchemy dated back to Pharaonic Egypt where it was the domain of the priestly class, though there is little to no evidence for his assertion. Alchemical writers used Classical figures from Greek, Roman, and Egyptian mythology to illuminate their works and allegorize alchemical transmutation. These included the pantheon of gods related to the Classical planets, Isis, Osiris, Jason, and many others.
The central figure in the mythology of alchemy is Hermes Trismegistus (or Thrice-Great Hermes). His name is derived from the god Thoth and his Greek counterpart Hermes. Hermes and his caduceus or serpent-staff were among alchemy's principal symbols. According to Clement of Alexandria, he wrote what were called the "forty-two books of Hermes", covering all fields of knowledge. The Hermetica of Thrice-Great Hermes is generally understood to form the basis for Western alchemical philosophy and practice, called the hermetic philosophy by its early practitioners. These writings were collected in the first centuries of the common era.
Technology – The dawn of Western alchemy is sometimes associated with that of metallurgy, extending back to 3500 BC. Many writings were lost when the Roman emperor Diocletian ordered the burning of alchemical books after suppressing a revolt in Alexandria (AD 292). Few original Egyptian documents on alchemy have survived, most notable among them the Stockholm papyrus and the Leyden papyrus X. Dating from AD 250–300, they contained recipes for dyeing and making artificial gemstones, cleaning and fabricating pearls, and the manufacture of imitation gold and silver. These writings lack the mystical, philosophical elements of alchemy, but do contain the works of Bolus of Mendes (or Pseudo-Democritus), which aligned these recipes with theoretical knowledge of astrology and the classical elements. Between the time of Bolus and Zosimos, the change took place that transformed this metallurgy into a Hermetic art.
Philosophy – Alexandria acted as a melting pot for philosophies of Pythagoreanism, Platonism, Stoicism and Gnosticism which formed the origin of alchemy's character. An important example of alchemy's roots in Greek philosophy, originated by Empedocles and developed by Aristotle, was that all things in the universe were formed from only four elements: earth, air, water, and fire. According to Aristotle, each element had a sphere to which it belonged and to which it would return if left undisturbed. The four elements of the Greeks were mostly qualitative aspects of matter, not quantitative, as our modern elements are; "...True alchemy never regarded earth, air, water, and fire as corporeal or chemical substances in the present-day sense of the word. The four elements are simply the primary, and most general, qualities by means of which the amorphous and purely quantitative substance of all bodies first reveals itself in differentiated form." Later alchemists extensively developed the mystical aspects of this concept.
Alchemy coexisted alongside emerging Christianity. Lactantius believed Hermes Trismegistus had prophesied its birth. St Augustine later affirmed this in the 4th and 5th centuries, but also condemned Trismegistus for idolatry. Examples of Pagan, Christian, and Jewish alchemists can be found during this period.
Most of the Greco-Roman alchemists preceding Zosimos are known only by pseudonyms, such as Moses, Isis, Cleopatra, Democritus, and Ostanes. Other authors, such as Komarios and Chymes, are known only through fragments of text. After AD 400, Greek alchemical writers occupied themselves solely in commenting on the works of these predecessors. By the middle of the 7th century, alchemy was almost an entirely mystical discipline. It was at that time that Khalid Ibn Yazid sparked its migration from Alexandria to the Islamic world, facilitating the translation and preservation of Greek alchemical texts in the 8th and 9th centuries.
Byzantium
Greek alchemy was preserved in medieval Byzantine manuscripts after the fall of Egypt, and yet historians have only relatively recently begun to pay attention to the study and development of Greek alchemy in the Byzantine period.
India
The Vedas, texts of the 2nd millennium BC, describe a connection between eternal life and gold. Considerable knowledge of metallurgy is exhibited in the Arthashastra, a text of around the third century AD, which gives ingredients of explosives (Agniyoga) and salts extracted from fertile soils and plant remains (Yavakshara), such as saltpetre/nitre, describes perfume making (different qualities of perfumes are mentioned), and mentions granulated (refined) sugar. Buddhist texts from the 2nd to 5th centuries mention the transmutation of base metals to gold. According to some scholars, Greek alchemy may have influenced Indian alchemy, but there is no hard evidence to back this claim.
The 11th-century Persian chemist and physician Abū Rayhān Bīrūnī, who visited Gujarat as part of the court of Mahmud of Ghazni, reported that the Indians had a science similar to alchemy that was quite peculiar to them, which they called rasāyana.
The goals of alchemy in India included the creation of a divine body (Sanskrit divya-deham) and immortality while still embodied (Sanskrit jīvan-mukti). Sanskrit alchemical texts include much material on the manipulation of mercury and sulphur, which are homologized with the semen of the god Śiva and the menstrual blood of the goddess Devī.
Some early alchemical writings seem to have their origins in the Kaula tantric schools associated with the teachings attributed to Matsyendranath. Other early writings are found in the Jaina medical treatise Kalyāṇakārakam of Ugrāditya, written in South India in the early 9th century.
Two famous early Indian alchemical authors were Nāgārjuna Siddha and Nityanātha Siddha. Nāgārjuna Siddha was a Buddhist monk. His book, Rasendramangalam, is an example of Indian alchemy and medicine. Nityanātha Siddha wrote Rasaratnākara, also a highly influential work. In Sanskrit, rasa translates to "mercury", and Nāgārjuna Siddha was said to have developed a method of converting mercury into gold.
Modern scholarship on Indian alchemy is exemplified by David Gordon White's The Alchemical Body; White has also written a modern bibliography on Indian alchemical studies.
The contents of 39 Sanskrit alchemical treatises have been analysed in detail in G. Jan Meulenbeld's History of Indian Medical Literature (HIML). The discussion of these works in HIML gives a summary of the contents of each work, their special features, and where possible the evidence concerning their dating. Chapter 13 of HIML, "Various works on rasaśāstra and ratnaśāstra" (or "Various works on alchemy and gems"), gives brief details of a further 655 treatises. In some cases Meulenbeld gives notes on the contents and authorship of these works; in other cases references are made only to the unpublished manuscripts of these titles.
A great deal remains to be discovered about Indian alchemical literature. As of 2014, the content of the Sanskrit alchemical corpus had not yet been adequately integrated into the wider general history of alchemy.
Islamic world
After the fall of the Roman Empire, the focus of alchemical development moved to the Islamic World. Much more is known about Islamic alchemy because it was better documented: indeed, most of the earlier writings that have come down through the years were preserved as Arabic translations. The word alchemy itself was derived from the Arabic word al-kīmiyā (الكيمياء). The early Islamic world was a melting pot for alchemy. Platonic and Aristotelian thought, which had already been somewhat appropriated into hermetical science, continued to be assimilated during the late 7th and early 8th centuries through Syriac translations and scholarship.
In the late ninth and early tenth centuries, the Arabic works attributed to Jābir ibn Hayyān (Latinized as "Geber" or "Geberus") introduced a new, more systematic approach to alchemy, a shift described in detail by Paul Kraus, who wrote the standard reference work on Jabir.
Islamic philosophers also made great contributions to alchemical hermeticism. The most influential author in this regard was arguably Jabir. Jabir's ultimate goal was Takwin, the artificial creation of life in the alchemical laboratory, up to, and including, human life. He analyzed each Aristotelian element in terms of four basic qualities of hotness, coldness, dryness, and moistness. According to Jabir, in each metal two of these qualities were interior and two were exterior. For example, lead was externally cold and dry, while gold was hot and moist. Thus, Jabir theorized, by rearranging the qualities of one metal, a different metal would result. By this reasoning, the search for the philosopher's stone was introduced to Western alchemy. Jabir developed an elaborate numerology whereby the root letters of a substance's name in Arabic, when treated with various transformations, held correspondences to the element's physical properties.
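Jabir's scheme of paired qualities lends itself to a compact illustration. The toy model below is a sketch, not a historical reconstruction: the exterior qualities of lead and gold follow the description above, while lead's interior qualities and the swap operation are assumptions made purely for the example.

```python
# Toy model of Jabir's four-quality theory of metals; illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Metal:
    name: str
    exterior: tuple  # outwardly manifest qualities
    interior: tuple  # hidden qualities

# Lead is described as externally cold and dry; the interior qualities
# (hot, moist) are assumed here for the sake of the illustration.
lead = Metal("lead", exterior=("cold", "dry"), interior=("hot", "moist"))

def rearrange(metal, new_name):
    """Swap interior and exterior qualities, illustrating Jabir's idea
    that rearranging a metal's qualities yields a different metal."""
    return Metal(new_name, exterior=metal.interior, interior=metal.exterior)

gold = rearrange(lead, "gold")
print(gold.exterior)  # ('hot', 'moist'), matching gold as described above
```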
The elemental system used in medieval alchemy also originated with Jabir. His original system consisted of seven elements, which included the five classical elements (aether, air, earth, fire, and water) in addition to two chemical elements representing the metals: sulphur, "the stone which burns", which characterized the principle of combustibility, and mercury, which contained the idealized principle of metallic properties. Shortly thereafter, this evolved into eight elements, with the Arabic concept of the three metallic principles: sulphur giving flammability or combustion, mercury giving volatility and stability, and salt giving solidity. The atomic theory of corpuscularianism, where all physical bodies possess an inner and outer layer of minute particles or corpuscles, also has its origins in the work of Jabir.
From the 9th to 14th centuries, alchemical theories faced criticism from a variety of practical Muslim chemists, including Alkindus, Abū al-Rayhān al-Bīrūnī, Avicenna and Ibn Khaldun. In particular, they wrote refutations against the idea of the transmutation of metals.
From the 14th century onwards, many materials and practices originally belonging to Indian alchemy (Rasayana) were assimilated in the Persian texts written by Muslim scholars.
East Asia
Researchers have found evidence that Chinese alchemists and philosophers discovered complex mathematical phenomena that were shared with Arab alchemists during the medieval period. Discovered in ancient China, the "magic square of three" was propagated to followers of Abū Mūsā Jābir ibn Ḥayyān at some point over the following several hundred years. Other commonalities shared between the two alchemical schools of thought include discrete naming for ingredients and heavy influence from the natural elements. The Silk Road provided a clear path for the exchange of goods, ideas, ingredients, religion, and many other aspects of life with which alchemy is intertwined.
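For reference, the "magic square of three" mentioned above, known in China as the Lo Shu square, arranges the digits 1 through 9 so that every row, column, and diagonal sums to the same constant, 15:

```latex
\begin{pmatrix}
4 & 9 & 2 \\
3 & 5 & 7 \\
8 & 1 & 6
\end{pmatrix},
\qquad 4+9+2 \,=\, 3+5+7 \,=\, 8+1+6 \,=\, 15 .
```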
Whereas European alchemy eventually centered on the transmutation of base metals into noble metals, Chinese alchemy had a more obvious connection to medicine. The philosopher's stone of European alchemists can be compared to the Grand Elixir of Immortality sought by Chinese alchemists. In the hermetic view, these two goals were not unconnected, and the philosopher's stone was often equated with the universal panacea; therefore, the two traditions may have had more in common than initially appears.
As early as 317 AD, Ge Hong documented the use of metals, minerals, and elixirs in early Chinese medicine. Hong identified three ancient Chinese documents, titled Scripture of Great Clarity, Scripture of the Nine Elixirs, and Scripture of the Golden Liquor, as texts containing fundamental alchemical information. He also described alchemy, along with meditation, as the sole spiritual practices that could allow one to gain immortality or to transcend. In his work Inner Chapters of the Book of the Master Who Embraces Spontaneous Nature (317 AD), Hong argued that alchemical solutions such as elixirs were preferable to traditional medicinal treatment due to the spiritual protection they could provide. In the centuries following Ge Hong's death, the emphasis placed on alchemy as a spiritual practice among Chinese Daoists was reduced. In 499 AD, Tao Hongjing refuted Hong's statement that alchemy is as important a spiritual practice as Shangqing meditation. While Hongjing did not deny the power of alchemical elixirs to grant immortality or provide divine protection, he ultimately found the Scripture of the Nine Elixirs to be ambiguous and spiritually unfulfilling, and aimed to set out more accessible practicing techniques.
In the early 700s, Neidan (also known as internal alchemy) was adopted by Daoists as a new form of alchemy. Neidan emphasized appeasing the inner gods that inhabit the human body by practicing alchemy with compounds found in the body, rather than the mixing of natural resources that was emphasized in early Dao alchemy. For example, saliva was often considered nourishment for the inner gods and did not require any conscious alchemical reaction to produce. The inner gods were not thought of as physical presences occupying each person, but rather a collection of deities that are each said to represent and protect a specific body part or region. Although those who practiced Neidan prioritized meditation over external alchemical strategies, many of the same elixirs and constituents from previous Daoist alchemical schools of thought continued to be utilized in tandem with meditation. Eternal life remained a consideration for Neidan alchemists, as it was believed that one would become immortal if an inner god were to be immortalized within them through spiritual fulfillment.
Black powder may have been an important invention of Chinese alchemists. It is said that the Chinese invented gunpowder while trying to find a potion for eternal life. Described in 9th-century texts and used in fireworks in China by the 10th century, it was used in cannons by 1290. From China, the use of gunpowder spread to Japan, the Mongols, the Muslim world, and Europe. Gunpowder was used by the Mongols against the Hungarians in 1241, and in Europe by the 14th century.
Chinese alchemy was closely connected to Taoist forms of traditional Chinese medicine, such as acupuncture and moxibustion. In the early Song dynasty, followers of this Taoist idea (chiefly the elite and upper class) would ingest mercuric sulfide, which, though tolerable in low levels, led many to suicide. Although such deaths were thought to bring freedom and access to the Taoist heavens, they ultimately encouraged people to eschew this method of alchemy in favor of external practices such as Tai Chi Chuan and the mastering of qi. Chinese alchemy was introduced to the West by Obed Simon Johnson.
Medieval Europe
The introduction of alchemy to Latin Europe may be dated to 11 February 1144, with the completion of Robert of Chester's translation of the Liber de compositione alchemiae ("Book on the Composition of Alchemy") from an Arabic work attributed to Khalid ibn Yazid. Although European craftsmen and technicians already existed, Robert notes in his preface that alchemy (here still referring to the elixir rather than to the art itself) was unknown in Latin Europe at the time of his writing. The translation of Arabic texts concerning numerous disciplines including alchemy flourished in 12th-century Toledo, Spain, through contributors like Gerard of Cremona and Adelard of Bath. Translations of the time included the Turba Philosophorum, and the works of Avicenna and Muhammad ibn Zakariya al-Razi. These brought with them many new words to the European vocabulary for which there was no previous Latin equivalent. Alcohol, carboy, elixir, and athanor are examples.
Meanwhile, theologian contemporaries of the translators made strides towards the reconciliation of faith and experimental rationalism, thereby priming Europe for the influx of alchemical thought. The 11th-century St Anselm put forth the opinion that faith and rationalism were compatible and encouraged rationalism in a Christian context. In the early 12th century, Peter Abelard followed Anselm's work, laying down the foundation for acceptance of Aristotelian thought before the first works of Aristotle had reached the West. In the early 13th century, Robert Grosseteste used Abelard's methods of analysis and added the use of observation, experimentation, and conclusions when conducting scientific investigations. Grosseteste also did much work to reconcile Platonic and Aristotelian thinking.
Through much of the 12th and 13th centuries, alchemical knowledge in Europe remained centered on translations, and new Latin contributions were not made. The efforts of the translators were succeeded by those of the encyclopaedists. In the 13th century, Albertus Magnus and Roger Bacon were the most notable of these, their work summarizing and explaining the newly imported alchemical knowledge in Aristotelian terms. Albertus Magnus, a Dominican friar, is known to have written works such as the Book of Minerals, where he observed and commented on the operations and theories of alchemical authorities like Hermes and Democritus and unnamed alchemists of his time. Albertus critically compared these to the writings of Aristotle and Avicenna, where they concerned the transmutation of metals. From the time shortly after his death through to the 15th century, more than 28 alchemical tracts were misattributed to him, a common practice giving rise to his reputation as an accomplished alchemist. Likewise, alchemical texts have been attributed to Albert's student Thomas Aquinas.
Roger Bacon, a Franciscan friar who wrote on a wide variety of topics including optics, comparative linguistics, and medicine, composed his Great Work (Opus Majus) for Pope Clement IV as part of a project towards rebuilding the medieval university curriculum to include the new learning of his time. While alchemy was not more important to him than other sciences and he did not produce allegorical works on the topic, he did consider it and astrology to be important parts of both natural philosophy and theology, and his contributions advanced alchemy's connections to soteriology and Christian theology. Bacon's writings integrated morality, salvation, alchemy, and the prolongation of life. His correspondence with Clement highlighted this, noting the importance of alchemy to the papacy. Like the Greeks before him, Bacon acknowledged the division of alchemy into practical and theoretical spheres. He noted that the theoretical lay outside the scope of Aristotle, the natural philosophers, and all Latin writers of his time. The practical confirmed the theoretical, and Bacon advocated its uses in natural science and medicine. In later European legend, he became an archmage. In particular, along with Albertus Magnus, he was credited with the forging of a brazen head capable of answering its owner's questions.
Soon after Bacon, the influential work of Pseudo-Geber (sometimes identified as Paul of Taranto) appeared. His Summa Perfectionis remained a staple summary of alchemical practice and theory through the medieval and renaissance periods. It was notable for its inclusion of practical chemical operations alongside sulphur-mercury theory, and the unusual clarity with which they were described. By the end of the 13th century, alchemy had developed into a fairly structured system of belief. Adepts believed in the macrocosm-microcosm theories of Hermes, that is to say, they believed that processes that affect minerals and other substances could have an effect on the human body (for example, if one could learn the secret of purifying gold, one could use the technique to purify the human soul). They believed in the four elements and the four qualities as described above, and they had a strong tradition of cloaking their written ideas in a labyrinth of coded jargon set with traps to mislead the uninitiated. Finally, the alchemists practiced their art: they actively experimented with chemicals and made observations and theories about how the universe operated. Their entire philosophy revolved around their belief that man's soul was divided within himself after the fall of Adam. By purifying the two parts of man's soul, man could be reunited with God.
In the 14th century, alchemy became more accessible to Europeans outside the confines of Latin-speaking churchmen and scholars. Alchemical discourse shifted from scholarly philosophical debate to an exposed social commentary on the alchemists themselves. Dante, Piers Plowman, and Chaucer all painted unflattering pictures of alchemists as thieves and liars. Pope John XXII's 1317 edict, Spondent quas non exhibent, forbade the false promises of transmutation made by pseudo-alchemists. Roman Catholic Inquisitor General Nicholas Eymerich's Directorium Inquisitorum, written in 1376, associated alchemy with the performance of demonic rituals, which Eymerich differentiated from magic performed in accordance with scripture. This did not, however, lead to any change in the Inquisition's monitoring or prosecution of alchemists. In 1403, Henry IV of England banned the practice of multiplying metals (although it was possible to buy a licence to attempt to make gold alchemically, and a number were granted by Henry VI and Edward IV). These critiques and regulations centered more around pseudo-alchemical charlatanism than the actual study of alchemy, which continued with an increasingly Christian tone. The 14th century saw the Christian imagery of death and resurrection employed in the alchemical texts of Petrus Bonus, John of Rupescissa, and in works written in the name of Raymond Lull and Arnold of Villanova.
Nicolas Flamel is so well known an alchemist that he attracted many pseudepigraphic imitators. Although the historical Flamel existed, the writings and legends assigned to him only appeared in 1612. Flamel was not a religious scholar as were many of his predecessors, and his entire interest in the subject revolved around the pursuit of the philosopher's stone. His work spends a great deal of time describing the processes and reactions, but never actually gives the formula for carrying out the transmutations. Most of 'his' work was aimed at gathering alchemical knowledge that had existed before him, especially as regarded the philosopher's stone. Through the 14th and 15th centuries, alchemists were much like Flamel: they concentrated on looking for the philosophers' stone. Bernard Trevisan and George Ripley made similar contributions. Their cryptic allusions and symbolism led to wide variations in interpretation of the art.
A common idea in European alchemy in the medieval era was a metaphysical "Homeric chain of wise men that link[ed] heaven and earth" that included ancient pagan philosophers and other important historical figures.
Renaissance and early modern Europe
During the Renaissance, Hermetic and Platonic foundations were restored to European alchemy. The dawn of medical, pharmaceutical, occult, and entrepreneurial branches of alchemy followed.
In the late 15th century, Marsilio Ficino translated the Corpus Hermeticum and the works of Plato into Latin. These texts, previously unavailable to Europeans, gave the West for the first time a full picture of the alchemical theory that Bacon had declared absent. Renaissance Humanism and Renaissance Neoplatonism guided alchemists away from physics to refocus on mankind as the alchemical vessel.
Esoteric systems developed that blended alchemy into a broader occult Hermeticism, fusing it with magic, astrology, and Christian cabala. A key figure in this development was German Heinrich Cornelius Agrippa (1486–1535), who received his Hermetic education in Italy in the schools of the humanists. In his De Occulta Philosophia, he attempted to merge Kabbalah, Hermeticism, and alchemy. He was instrumental in spreading this new blend of Hermeticism outside the borders of Italy.
Paracelsus (Philippus Aureolus Theophrastus Bombastus von Hohenheim, 1493–1541) cast alchemy into a new form, rejecting some of Agrippa's occultism and moving away from chrysopoeia. Paracelsus pioneered the use of chemicals and minerals in medicine and wrote, "Many have said of Alchemy, that it is for the making of gold and silver. For me such is not the aim, but to consider only what virtue and power may lie in medicines."
His hermetical views were that sickness and health in the body relied on the harmony of man the microcosm and Nature the macrocosm. He took an approach different from those before him, using this analogy not in the manner of soul-purification but in the manner that humans must have certain balances of minerals in their bodies, and that certain illnesses of the body had chemical remedies that could cure them. Iatrochemistry refers to the pharmaceutical applications of alchemy championed by Paracelsus.
John Dee (13 July 1527 – December 1608) followed Agrippa's occult tradition. Although better known for angel summoning, divination, and his role as astrologer, cryptographer, and consultant to Queen Elizabeth I, Dee's alchemical Monas Hieroglyphica, written in 1564, was his most popular and influential work. His writing portrayed alchemy as a sort of terrestrial astronomy in line with the Hermetic axiom "as above, so below". During the 17th century, a short-lived "supernatural" interpretation of alchemy became popular, including support by fellows of the Royal Society: Robert Boyle and Elias Ashmole. Proponents of the supernatural interpretation of alchemy believed that the philosopher's stone might be used to summon and communicate with angels.
Entrepreneurial opportunities were common for the alchemists of Renaissance Europe. Alchemists were contracted by the elite for practical purposes related to mining, medical services, and the production of chemicals, medicines, metals, and gemstones. Rudolf II, Holy Roman Emperor, in the late 16th century, famously received and sponsored various alchemists at his court in Prague, including Dee and his associate Edward Kelley. King James IV of Scotland, Julius, Duke of Brunswick-Lüneburg, Henry V, Duke of Brunswick-Lüneburg, Augustus, Elector of Saxony, Julius Echter von Mespelbrunn, and Maurice, Landgrave of Hesse-Kassel all contracted alchemists. John Dee's son Arthur Dee worked as a court physician to Michael I of Russia and Charles I of England, but also compiled the alchemical book Fasciculus Chemicus.
Although most of these appointments were legitimate, the trend of pseudo-alchemical fraud continued through the Renaissance. Betrüger would use sleight of hand, or claims of secret knowledge to make money or secure patronage. Legitimate mystical and medical alchemists such as Michael Maier and Heinrich Khunrath wrote about fraudulent transmutations, distinguishing themselves from the con artists. False alchemists were sometimes prosecuted for fraud.
The terms "chemia" and "alchemia" were used as synonyms in the early modern period, and the differences between alchemy, chemistry and small-scale assaying and metallurgy were not as neat as in the present day. There were important overlaps between practitioners, and trying to classify them into alchemists, chemists and craftsmen is anachronistic. For example, Tycho Brahe (1546–1601), an alchemist better known for his astronomical and astrological investigations, had a laboratory built at his Uraniborg observatory/research institute. Michael Sendivogius (Michał Sędziwój, 1566–1636), a Polish alchemist, philosopher, medical doctor and pioneer of chemistry wrote mystical works but is also credited with distilling oxygen in a lab sometime around 1600. Sendivogious taught his technique to Cornelius Drebbel who, in 1621, applied this in a submarine. Isaac Newton devoted considerably more of his writing to the study of alchemy (see Isaac Newton's occult studies) than he did to either optics or physics. Other early modern alchemists who were eminent in their other studies include Robert Boyle, and Jan Baptist van Helmont. Their Hermeticism complemented rather than precluded their practical achievements in medicine and science.
Later modern period
The decline of European alchemy was brought about by the rise of modern science with its emphasis on rigorous quantitative experimentation and its disdain for "ancient wisdom". Although the seeds of these events were planted as early as the 17th century, alchemy still flourished for some two hundred years, and in fact may have reached its peak in the 18th century. As late as 1781 James Price claimed to have produced a powder that could transmute mercury into silver or gold. Early modern European alchemy continued to exhibit a diversity of theories, practices, and purposes: "Scholastic and anti-Aristotelian, Paracelsian and anti-Paracelsian, Hermetic, Neoplatonic, mechanistic, vitalistic, and more—plus virtually every combination and compromise thereof."
Robert Boyle (1627–1691) pioneered the scientific method in chemical investigations. He assumed nothing in his experiments and compiled every piece of relevant data. Boyle would note the place in which the experiment was carried out, the wind characteristics, the position of the Sun and Moon, and the barometer reading, all just in case they proved to be relevant. This approach eventually led to the founding of modern chemistry in the 18th and 19th centuries, based on revolutionary discoveries and ideas of Lavoisier and John Dalton.
Beginning around 1720, a rigid distinction began to be drawn for the first time between "alchemy" and "chemistry". By the 1740s, "alchemy" had become restricted to the realm of gold making, leading to the popular belief that alchemists were charlatans, and the tradition itself nothing more than a fraud. In order to protect the developing science of modern chemistry from the negative censure to which alchemy was being subjected, academic writers during the 18th-century scientific Enlightenment attempted, for the sake of survival, to divorce and separate the "new" chemistry from the "old" practices of alchemy. This move was mostly successful, and the consequences of this continued into the 19th, 20th and 21st centuries.
During the occult revival of the early 19th century, alchemy received new attention as an occult science. The esoteric or occultist school, which arose during the 19th century, held (and continues to hold) the view that the substances and operations mentioned in alchemical literature are to be interpreted in a spiritual sense, and it downplays the role of alchemy as a practical tradition or protoscience. This interpretation further forwarded the view that alchemy is an art primarily concerned with spiritual enlightenment or illumination, as opposed to the physical manipulation of apparatus and chemicals, and claims that the obscure language of the alchemical texts was an allegorical guise for spiritual, moral or mystical processes.
In the 19th-century revival of alchemy, the two most seminal figures were Mary Anne Atwood and Ethan Allen Hitchcock, who independently published similar works regarding spiritual alchemy. Both forwarded a completely esoteric view of alchemy, as Atwood claimed: "No modern art or chemistry, notwithstanding all its surreptitious claims, has any thing in common with Alchemy." Atwood's work influenced subsequent authors of the occult revival including Eliphas Levi, Arthur Edward Waite, and Rudolf Steiner. Hitchcock, in his Remarks Upon Alchymists (1855), attempted to make a case for his spiritual interpretation with his claim that the alchemists wrote about a spiritual discipline under a materialistic guise in order to avoid accusations of blasphemy from the church and state. In 1845, Baron Carl Reichenbach published his studies on Odic force, a concept with some similarities to alchemy, but his research did not enter the mainstream of scientific discussion.
In 1946, Louis Cattiaux published the Message Retrouvé, a work that was at once philosophical, mystical and highly influenced by alchemy. In his lineage, many researchers, including Emmanuel and Charles d'Hooghvorst, are updating alchemical studies in France and Belgium.
Women
Several women appear in the earliest history of alchemy. Michael Maier names four women who were able to make the philosophers' stone: Mary the Jewess, Cleopatra the Alchemist, Medera, and Taphnutia. Zosimos' sister Theosebia (later known as Euthica the Arab) and Isis the Prophetess also played roles in early alchemical texts.
The first alchemist whose name we know was Mary the Jewess. Early sources claim that Mary (or Maria) devised a number of improvements to alchemical equipment and tools as well as novel techniques in chemistry. Her best-known advances were in heating and distillation processes. The laboratory water-bath, known eponymously (especially in France) as the bain-marie, is said to have been invented or at least improved by her. Essentially a double-boiler, it was (and is) used in chemistry for processes that require gentle heating. The tribikos (a modified distillation apparatus) and the kerotakis (a more intricate apparatus used especially for sublimations) are two other advancements in the process of distillation that are credited to her. Although we have no writing from Mary herself, she is known from the early-fourth-century writings of Zosimos of Panopolis. After the Greco-Roman period, women's names appear less frequently in alchemical literature.
Towards the end of the Middle Ages and the beginning of the Renaissance, due to the emergence of print, women were able to access the alchemical knowledge in texts of the preceding centuries. Caterina Sforza, the Countess of Forlì and Lady of Imola, is one of the few confirmed female alchemists after Mary the Jewess. As she owned an apothecary, she would practice science and conduct experiments in her botanic gardens and laboratories. Being knowledgeable in alchemy and pharmacology, she recorded all of her alchemical ventures in a manuscript named Experimenti ('Experiments'). The manuscript contained more than four hundred recipes covering alchemy as well as cosmetics and medicine. One of these recipes was for the water of talc. Talc, which makes up talcum powder, is a mineral which, when combined with water and distilled, was said to produce a solution yielding many benefits, including turning silver to gold and rejuvenation. If its powder form was mixed and drunk with white wine, it was said to counteract poison and to protect from any sickness or plague. Other recipes were for making hair dyes, lotions, and lip colors. There was also information on how to treat a variety of ailments from fevers and coughs to epilepsy and cancer. In addition, there were instructions on producing the quintessence (or aether), an elixir which was believed to be able to heal all sicknesses, defend against diseases, and perpetuate youthfulness. She also wrote about creating the illustrious philosophers' stone.
Due to the proliferation in alchemical literature of pseudepigrapha and anonymous works, it is difficult to know which of the alchemists were actually women. As the sixteenth century went on, scientific culture flourished and people began collecting "secrets". During this period "secrets" referred to experiments, and the most coveted ones were not those which were bizarre, but the ones which had been proven to yield the desired outcome. Some women known for their interest in alchemy were Catherine de' Medici, the Queen of France, and Marie de' Medici, the following Queen of France, who carried out experiments in her personal laboratory. Also, Isabella d'Este, the Marchioness of Mantua, made perfumes herself to serve as gifts. In this period, the only book of secrets ascribed to a woman was I secreti della signora Isabella Cortese ('The Secrets of Signora Isabella Cortese'). This book contained information on how to turn base metals into gold, medicine, and cosmetics. However, it is rumored that a man, Girolamo Ruscelli, was the real author and only used a female voice to attract female readers. This contributed to a bigger problem in which male authors would credit prominent noblewomen for beauty products with the purpose of appealing to a female audience. For example, in the Ricettario galante ('Gallant Recipe-Book'), the distillation of lemons and roses was attributed to Elisabetta Gonzaga, the Duchess of Urbino. In the same book, Isabella d'Aragona, the daughter of Alfonso II of Naples, is credited with recipes involving alum and mercury. Ippolita Maria Sforza is even referred to in an anonymous manuscript about a hand lotion created with rose powder and crushed bones.
Mary Anne Atwood's A Suggestive Inquiry into the Hermetic Mystery (1850) marks the return of women during the nineteenth-century occult revival.
Modern historical research
The history of alchemy has become a significant and recognized subject of academic study. As the language of the alchemists is analyzed, historians are becoming more aware of the intellectual connections between that discipline and other facets of Western cultural history, such as the evolution of science and philosophy, the sociology and psychology of the intellectual communities, kabbalism, spiritualism, Rosicrucianism, and other mystic movements. Institutions involved in this research include The Chymistry of Isaac Newton project at Indiana University, the University of Exeter Centre for the Study of Esotericism (EXESESO), the European Society for the Study of Western Esotericism (ESSWE), and the University of Amsterdam's Sub-department for the History of Hermetic Philosophy and Related Currents. A large collection of books on alchemy is kept in the Bibliotheca Philosophica Hermetica in Amsterdam.
Journals which publish regularly on the topic of alchemy include Ambix, published by the Society for the History of Alchemy and Chemistry, and Isis, published by the History of Science Society.
Core concepts
Western alchemical theory corresponds to the worldview of late antiquity in which it was born. Concepts were imported from Neoplatonism and earlier Greek cosmology. As such, the classical elements appear in alchemical writings, as do the seven classical planets and the corresponding seven metals of antiquity. Similarly, the gods of the Roman pantheon who are associated with these luminaries are discussed in alchemical literature. The concepts of prima materia and anima mundi are central to the theory of the philosopher's stone.
Magnum opus
The Great Work of Alchemy is often described as a series of four stages represented by colors.
nigredo, a blackening or melanosis
albedo, a whitening or leucosis
citrinitas, a yellowing or xanthosis
rubedo, a reddening, purpling, or iosis
Modernity
Due to the complexity and obscurity of alchemical literature, and the 18th-century disappearance of remaining alchemical practitioners into the area of chemistry, the general understanding of alchemy has been strongly influenced by several distinct and radically different interpretations. Those focusing on the exoteric, such as historians of science Lawrence M. Principe and William R. Newman, have interpreted the 'decknamen' (or code words) of alchemy as physical substances. These scholars have reconstructed physicochemical experiments that they say are described in medieval and early modern texts. At the opposite end of the spectrum, focusing on the esoteric, scholars, such as Florin George Călian and Anna Marie Roos, who question the reading of Principe and Newman, interpret these same decknamen as spiritual, religious, or psychological concepts.
New interpretations of alchemy are still perpetuated, sometimes merging with concepts from the New Age or radical environmentalist movements. Groups like the Rosicrucians and Freemasons have a continued interest in alchemy and its symbolism. Since the Victorian revival of alchemy, "occultists reinterpreted alchemy as a spiritual practice, involving the self-transformation of the practitioner and only incidentally or not at all the transformation of laboratory substances", which has contributed to a merger of magic and alchemy in popular thought.
Esoteric interpretations of historical texts
In the eyes of a variety of modern esoteric and Neo-Hermeticist practitioners, alchemy is fundamentally spiritual. In this interpretation, transmutation of lead into gold is presented as an analogy for personal transmutation, purification, and perfection.
According to this view, early alchemists such as Zosimos of Panopolis (fl. c. 300 AD) highlighted the spiritual nature of the alchemical quest, symbolic of a religious regeneration of the human soul. This approach is held to have continued in the Middle Ages, as metaphysical aspects, substances, physical states, and material processes are supposed to have been used as metaphors for spiritual entities, spiritual states, and, ultimately, transformation. In this sense, the literal meanings of "Alchemical Formulas" were like a veil, hiding their true spiritual philosophy. In the Neo-Hermeticist interpretation, both the transmutation of common metals into gold and the universal panacea are held to symbolize evolution from an imperfect, diseased, corruptible, and ephemeral state toward a perfect, healthy, incorruptible, and everlasting state, so the philosopher's stone then represented a mystic key that would make this evolution possible. Applied to the alchemist, the twin goal symbolized their evolution from ignorance to enlightenment, and the stone represented a hidden spiritual truth or power that would lead to that goal. In texts that are held to have been written according to this view, the cryptic alchemical symbols, diagrams, and textual imagery of late alchemical works are supposed to contain multiple layers of meanings, allegories, and references to other equally cryptic works; these must be laboriously decoded to discover their true meaning.
In his 1766 Alchemical Catechism, Théodore Henri de Tschudi notes that the usage of the metals was merely symbolic.
Psychology
Alchemical symbolism has been important in analytical psychology and was revived and popularized from near extinction by the Swiss psychologist Carl Gustav Jung. Jung was initially confounded by and at odds with alchemy and its images, but after being given a copy of The Secret of the Golden Flower, a Chinese alchemical text translated by his friend Richard Wilhelm, he discovered a direct correlation or parallel between the symbolic images in the alchemical drawings and the inner, symbolic images coming up in his patients' dreams, visions, or fantasies. He observed these alchemical images occurring during the psychic process of transformation, a process that Jung called "individuation". Specifically, he regarded the conjuring up of images of gold or Lapis as symbolic expressions of the origin and goal of this "process of individuation". Together with his alchemical mystica soror (mystical sister), the Jungian Swiss analyst Marie-Louise von Franz, Jung began collecting old alchemical texts, compiled a lexicon of key phrases with cross-references, and pored over them. The volumes of work he wrote shed new light onto understanding the art of transubstantiation and renewed alchemy's popularity as a symbolic process of coming into wholeness as a human being where opposites are brought into contact and inner and outer, spirit and matter are reunited in the hieros gamos, or divine marriage. His writings are influential in general psychology, but especially to those who have an interest in understanding the importance of dreams, symbols, and the unconscious archetypal forces (archetypes) that comprise all psychic life.
Both von Franz and Jung have contributed significantly to the subject and work of alchemy and its continued presence in psychology as well as contemporary culture. Among the volumes Jung wrote on alchemy, his magnum opus is Volume 14 of his Collected Works, Mysterium Coniunctionis.
Literature
Alchemy has had a long-standing relationship with art, seen both in alchemical texts and in mainstream entertainment. Literary alchemy appears throughout the history of English literature from Shakespeare to J. K. Rowling, and also in the popular Japanese manga Fullmetal Alchemist. In these works, characters or plot structures follow an alchemical magnum opus. In the 14th century, Chaucer began a trend of alchemical satire that can still be seen in recent fantasy works like those of the late Sir Terry Pratchett.
Visual artists had a similar relationship with alchemy. While some of them used alchemy as a source of satire, others worked with the alchemists themselves or integrated alchemical thought or symbols in their work. Music was also present in the works of alchemists and continues to influence popular performers. In the last hundred years, alchemists have been portrayed in a magical and spagyric role in fantasy fiction, film, television, novels, comics and video games.
Science
One goal of alchemy, the transmutation of base substances into gold, is now known to be impossible by chemical means but possible by physical means. Although not financially worthwhile, gold was synthesized in particle accelerators as early as 1941.
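One reaction consistent with the fast-neutron bombardment of mercury used in such early experiments is the (n,p) route sketched below; note that the product is a short-lived radioactive isotope of gold rather than stable gold:

```latex
{}^{198}_{\,80}\mathrm{Hg} \;+\; {}^{1}_{0}\mathrm{n} \;\longrightarrow\; {}^{198}_{\,79}\mathrm{Au} \;+\; {}^{1}_{1}\mathrm{p}
```

Mass number and charge balance on both sides (198 + 1 = 198 + 1 and 80 + 0 = 79 + 1), and the gold-198 produced decays back to mercury with a half-life of about 2.7 days, one reason accelerator-made gold is not financially worthwhile.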
See also
Alchemical symbol
Corentin Louis Kervran § Biological transmutation
Cupellation
Historicism
History of chemistry
List of alchemists
List of alchemical substances
Chemistry
Nuclear transmutation
Outline of alchemy
Porta Alchemica
Renaissance magic
Spagyric
Superseded theories in science
Synthesis of precious metals
Western esotericism
Notes
References
Citations
Sources used
Bibliography
Greco-Egyptian alchemy
Texts
Marcellin Berthelot and Charles-Émile Ruelle (eds.), Collection des anciens alchimistes grecs (CAAG), 3 vols., 1887–1888. Vol. 1: https://gallica.bnf.fr/ark:/12148/bpt6k96492923; Vol. 2: https://gallica.bnf.fr/ark:/12148/bpt6k9680734p; Vol. 3: https://gallica.bnf.fr/ark:/12148/bpt6k9634942s.
André-Jean Festugière, La Révélation d'Hermès Trismégiste, Paris, Les Belles Lettres, 2014 (OCLC 897235256).
Robert Halleux and Henri-Dominique Saffrey (eds.), Les alchimistes grecs, t. 1: Papyrus de Leyde – Papyrus de Stockholm – Recettes, Paris, Les Belles Lettres, 1981.
Otto Lagercrantz (ed.), Papyrus Graecus Holmiensis (P. Holm.): Recepte für Silber, Steine und Purpur, Uppsala, A.B. Akademiska Bokhandeln, 1913.
Michèle Mertens and Henri-Dominique Saffrey (eds.), Les alchimistes grecs, t. 4.1: Zosime de Panopolis. Mémoires authentiques, Paris, Les Belles Lettres, 1995.
Andrée Collinet and Henri-Dominique Saffrey (eds.), Les alchimistes grecs, t. 10: L'Anonyme de Zuretti ou l'Art sacré et divin de la chrysopée par un anonyme, Paris, Les Belles Lettres, 2000.
Andrée Collinet (ed.), Les alchimistes grecs, t. 11: Recettes alchimiques (Par. Gr. 2419; Holkhamicus 109) – Cosmas le Hiéromoine – Chrysopée, Paris, Les Belles Lettres, 2000.
Matteo Martelli (ed.), The Four Books of Pseudo-Democritus, Maney Publishing, 2014.
Studies
Dylan M. Burns, "μίξεώς τινι τέχνῃ κρείττονι: Alchemical Metaphor in the Paraphrase of Shem (NHC VII,1)", Aries 15 (2015), p. 79–106.
Alberto Camplani, "Procedimenti magico-alchemici e discorso filosofico ermetico", in Giuliana Lanata (ed.), Il Tardoantico alle soglie del Duemila, ETS, 2000, p. 73–98.
Alberto Camplani and Marco Zambon, "Il sacrificio come problema in alcune correnti filosofice di età imperiale", Annali di storia dell'esegesi 19 (2002), p. 59–99.
Régine Charron and Louis Painchaud, "'God is a Dyer': The Background and Significance of a Puzzling Motif in the Coptic Gospel According to Philip (CG II, 3)", Le Muséon 114 (2001), p. 41–50.
Régine Charron, "The Apocryphon of John (NHC II,1) and the Greco-Egyptian Alchemical Literature", Vigiliae Christianae 59 (2005), p. 438–456.
Philippe Derchain, "L'Atelier des Orfèvres à Dendara et les origines de l'alchimie", Chronique d'Égypte, vol. 65, no. 130, 1990, p. 219–242.
Korshi Dosoo, "A History of the Theban Magical Library", Bulletin of the American Society of Papyrologists 53 (2016), p. 251–274.
Olivier Dufault, Early Greek Alchemy, Patronage and Innovation in Late Antiquity, California Classical Studies, 2019.
Sergio Knipe, "Sacrifice and self-transformation in the alchemical writings of Zosimus of Panopolis", in Christopher Kelly, Richard Flower and Michael Stuart Williams (eds.), Unclassical Traditions. Volume II: Perspectives from East and West in Late Antiquity, Cambridge University Press, 2011, p. 59–69.
Kyle A. Fraser, "Zosimos of Panopolis and the Book of Enoch: Alchemy as Forbidden Knowledge", Aries 4.2 (2004), p. 125–147.
Kyle A. Fraser, "Baptized in Gnosis: The Spiritual Alchemy of Zosimos of Panopolis", Dionysius 25 (2007), p. 33–54.
Kyle A. Fraser, "Distilling Nature's Secrets: The Sacred Art of Alchemy", in John Scarborough and Paul Keyser (eds.), Oxford Handbook of Science and Medicine in the Classical World, Oxford University Press, 2018, p. 721–742.
Shannon Grimes, Becoming Gold: Zosimos of Panopolis and the Alchemical Arts in Roman Egypt, Auckland, Rubedo Press, 2018.
Paul T. Keyser, "Greco-Roman Alchemy and Coins of Imitation Silver", American Journal of Numismatics 7–8 (1995–1996), p. 209–234.
Paul Keyser, "The Longue Durée of Alchemy", in John Scarborough and Paul Keyser (eds.), Oxford Handbook of Science and Medicine in the Classical World, Oxford University Press, 2018, p. 409–430.
Jean Letrouit, "Chronologie des alchimistes grecs", in Didier Kahn and Sylvain Matton (eds.), Alchimie: art, histoire et mythes, SEHA-Archè, 1995, p. 11–93.
Jack Lindsay, The Origins of Alchemy in Greco-Roman Egypt, Barnes & Noble, 1970.
Paul Magdalino and Maria Mavroudi (eds.), The Occult Sciences in Byzantium, La Pomme d'or, 2006.
Matteo Martelli, "The Alchemical Art of Dyeing: The Fourfold Division of Alchemy and the Enochian Tradition", in Sven Dupré (ed.), Laboratories of Art, Springer, 2014.
Matteo Martelli, "Alchemy, Medicine and Religion: Zosimus of Panopolis and the Egyptian Priests", Religion in the Roman Empire 3.2 (2017), p. 202–220.
Gerasimos Merianos, "Alchemy", in A. Kaldellis and N. Siniossoglou (eds.), The Cambridge Intellectual History of Byzantium, Cambridge University Press, 2017, p. 234–251.
Efthymios Nikolaïdis (ed.), Greek Alchemy from Late Antiquity to Early Modernity, Brepols, 2019.
Daniel Stolzenberg, "Unpropitious Tinctures: Alchemy, Astrology & Gnosis According to Zosimos of Panopolis", Archives internationales d'histoire des sciences 49 (1999), p. 3–31.
Cristina Viano, "Byzantine Alchemy, or the Era of Systematization", in John Scarborough and Paul Keyser (eds.), Oxford Handbook of Science and Medicine in the Classical World, Oxford University Press, 2018, p. 943–964.
C. Vlachou et al., "Experimental investigation of silvering in late Roman coinage", Material Research Society Symposium Proceedings 712 (2002), p. II9.2.1–II9.2.9.
Early modern
Principe, Lawrence and William Newman. Alchemy Tried in the Fire: Starkey, Boyle, and the Fate of Helmontian Chymistry. University of Chicago Press, 2002.
External links
SHAC: Society for the History of Alchemy and Chemistry
ESSWE: European Society for the Study of Western Esotericism
Association for the Study of Esotericism
Esotericism
Hermeticism
Natural philosophy
History of science
|
https://en.wikipedia.org/wiki/ASCII
ASCII, abbreviated from American Standard Code for Information Interchange, is a character encoding standard for electronic communication. ASCII codes represent text in computers, telecommunications equipment, and other devices. Because of technical limitations of computer systems at the time it was invented, ASCII has just 128 code points, of which only 95 are printable characters, which severely limited its scope. Modern computer systems have evolved to use Unicode, which has millions of code points, but the first 128 of these are the same as the ASCII set.
The Internet Assigned Numbers Authority (IANA) prefers the name US-ASCII for this character encoding.
ASCII is one of the IEEE milestones.
Overview
ASCII was developed from telegraph code. Its first commercial use was in the Teletype Model 33 and the Teletype Model 35 as a seven-bit teleprinter code promoted by Bell data services. Work on the ASCII standard began in May 1961, with the first meeting of the American Standards Association's (ASA) (now the American National Standards Institute or ANSI) X3.2 subcommittee. The first edition of the standard was published in 1963, underwent a major revision during 1967, and experienced its most recent update during 1986. Compared to earlier telegraph codes, the proposed Bell code and ASCII were both ordered for more convenient sorting (i.e., alphabetization) of lists and added features for devices other than teleprinters.
The use of ASCII format for Network Interchange was described in 1969. That document was formally elevated to an Internet Standard in 2015.
Originally based on the (modern) English alphabet, ASCII encodes 128 specified characters into seven-bit integers as shown by the ASCII chart in this article. Ninety-five of the encoded characters are printable: these include the digits 0 to 9, lowercase letters a to z, uppercase letters A to Z, and punctuation symbols. In addition, the original ASCII specification included 33 non-printing control codes which originated with teleprinters; most of these are now obsolete, although a few are still commonly used, such as the carriage return, line feed, and tab codes.
For example, lowercase i would be represented in the ASCII encoding by binary 1101001 = hexadecimal 69 (i is the ninth letter) = decimal 105.
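A minimal C sketch of this correspondence (an illustration, not part of the standard; it assumes the execution character set is ASCII):

#include <stdio.h>

int main(void) {
    char c = 'i';
    /* On an ASCII system, 'i' is code point 105 (hex 69, binary 1101001). */
    printf("decimal %d, hex %X\n", c, c); /* prints: decimal 105, hex 69 */
    return 0;
}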
Despite being an American standard, ASCII does not have a code point for the cent (¢). It also does not support English terms with diacritical marks such as résumé and jalapeño, or proper nouns with diacritical marks such as Beyoncé.
History
The American Standard Code for Information Interchange (ASCII) was developed under the auspices of a committee of the American Standards Association (ASA), called the X3 committee, by its X3.2 (later X3L2) subcommittee, and later by that subcommittee's X3.2.4 working group (now INCITS). The ASA later became the United States of America Standards Institute (USASI) and ultimately became the American National Standards Institute (ANSI).
With the other special characters and control codes filled in, ASCII was published as ASA X3.4-1963, leaving 28 code positions without any assigned meaning, reserved for future standardization, and one unassigned control code. There was some debate at the time whether there should be more control characters rather than the lowercase alphabet. The indecision did not last long: during May 1963 the CCITT Working Party on the New Telegraph Alphabet proposed to assign lowercase characters to sticks 6 and 7, and International Organization for Standardization TC 97 SC 2 voted during October to incorporate the change into its draft standard. The X3.2.4 task group voted its approval for the change to ASCII at its May 1963 meeting. Locating the lowercase letters in sticks 6 and 7 caused the characters to differ in bit pattern from the upper case by a single bit, which simplified case-insensitive character matching and the construction of keyboards and printers.
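A small C illustration of that single-bit relationship (a sketch valid only for the unaccented letters A–Z and a–z on an ASCII system):

#include <stdio.h>

int main(void) {
    /* 'A' is 0x41 and 'a' is 0x61; they differ only in bit 5 (0x20). */
    char lower = 'A' | 0x20;          /* forces the case bit on:  'a' */
    char upper = 'a' & ~0x20;         /* forces the case bit off: 'A' */
    printf("%c %c\n", lower, upper);  /* prints: a A */
    return 0;
}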
The X3 committee made other changes, including other new characters (the brace and vertical bar characters), renaming some control characters (SOM became start of header (SOH)) and moving or removing others (RU was removed). ASCII was subsequently updated as USAS X3.4-1967, then USAS X3.4-1968, ANSI X3.4-1977, and finally, ANSI X3.4-1986.
Revisions of the ASCII standard:
ASA X3.4-1963
ASA X3.4-1965 (approved, but not published, nevertheless used by IBM 2260 & 2265 Display Stations and IBM 2848 Display Control)
USAS X3.4-1967
USAS X3.4-1968
ANSI X3.4-1977
ANSI X3.4-1986
ANSI X3.4-1986 (R1992)
ANSI X3.4-1986 (R1997)
ANSI INCITS 4-1986 (R2002)
ANSI INCITS 4-1986 (R2007)
(ANSI) INCITS 4-1986[R2012]
(ANSI) INCITS 4-1986[R2017]
In the X3.15 standard, the X3 committee also addressed how ASCII should be transmitted (least significant bit first) and recorded on perforated tape. They proposed a 9-track standard for magnetic tape and attempted to deal with some punched card formats.
Design considerations
Bit width
The X3.2 subcommittee designed ASCII based on the earlier teleprinter encoding systems. Like other character encodings, ASCII specifies a correspondence between digital bit patterns and character symbols (i.e. graphemes and control characters). This allows digital devices to communicate with each other and to process, store, and communicate character-oriented information such as written language. Before ASCII was developed, the encodings in use included 26 alphabetic characters, 10 numerical digits, and from 11 to 25 special graphic symbols. To include all these, and control characters compatible with the Comité Consultatif International Téléphonique et Télégraphique (CCITT) International Telegraph Alphabet No. 2 (ITA2) standard of 1924, FIELDATA (1956), and early EBCDIC (1963), more than 64 codes were required for ASCII.
ITA2 was in turn based on the 5-bit telegraph code that Émile Baudot invented in 1870 and patented in 1874.
The committee debated the possibility of a shift function (like in ITA2), which would allow more than 64 codes to be represented by a six-bit code. In a shifted code, some character codes determine choices between options for the following character codes. It allows compact encoding, but is less reliable for data transmission, as an error in transmitting the shift code typically makes a long part of the transmission unreadable. The standards committee decided against shifting, and so ASCII required at least a seven-bit code.
The committee considered an eight-bit code, since eight bits (octets) would allow two four-bit patterns to efficiently encode two digits with binary-coded decimal. However, it would require all data transmission to send eight bits when seven could suffice. The committee voted to use a seven-bit code to minimize costs associated with data transmission. Since perforated tape at the time could record eight bits in one position, it also allowed for a parity bit for error checking if desired. Eight-bit machines (with octets as the native data type) that did not use parity checking typically set the eighth bit to 0.
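As a sketch of how the spare eighth bit could carry even parity over the seven data bits (illustrative only; the function name is invented and the standard itself does not define this computation):

/* Set bit 7 so the total number of 1 bits in the byte is even. */
unsigned char add_even_parity(unsigned char c) {
    unsigned char p = 0;
    for (unsigned char t = c & 0x7F; t != 0; t >>= 1)
        p ^= t & 1;                 /* p = XOR of the seven data bits */
    return (unsigned char)((c & 0x7F) | (p << 7));
}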
Internal organization
The code itself was patterned so that most control codes were together and all graphic codes were together, for ease of identification. The first two so-called ASCII sticks (32 positions) were reserved for control characters. The "space" character had to come before graphics to make sorting easier, so it became position 20hex; for the same reason, many special signs commonly used as separators were placed before digits. The committee decided it was important to support uppercase 64-character alphabets, and chose to pattern ASCII so it could be reduced easily to a usable 64-character set of graphic codes, as was done in the DEC SIXBIT code (1963). Lowercase letters were therefore not interleaved with uppercase. To keep options available for lowercase letters and other graphics, the special and numeric codes were arranged before the letters, and the letter A was placed in position 41hex to match the draft of the corresponding British standard. The digits 0–9 are prefixed with 011, and the remaining 4 bits correspond to their respective values in binary, making conversion with binary-coded decimal straightforward (for example, 5 is encoded as 0110101, where the trailing 0101 is 5 in binary).
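That layout makes digit conversion a one-step operation, as in this C sketch (the function name is illustrative; it assumes ASCII digit codes):

/* ASCII digits '0'..'9' occupy 0x30..0x39, so the low four bits
   are the digit's binary value. */
int digit_value(char c) {
    return c & 0x0F;   /* equivalent to c - '0' for '0'..'9' */
}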
Many of the non-alphanumeric characters were positioned to correspond to their shifted position on typewriters; an important subtlety is that these were based on mechanical typewriters, not electric typewriters. Mechanical typewriters followed the de facto standard set by the Remington No. 2 (1878), the first typewriter with a shift key, on which the shifted values of 23456789- were "#$%_&'(). Early typewriters omitted 0 and 1, using O (capital letter o) and l (lowercase letter L) instead, but the 1! and 0) pairs became standard once 0 and 1 became common. Thus, in ASCII !"#$% were placed in the second stick, positions 1–5, corresponding to the digits 1–5 in the adjacent stick. The parentheses could not correspond to 9 and 0, however, because the place corresponding to 0 was taken by the space character. This was accommodated by removing _ (underscore) from 6 and shifting the remaining characters, which corresponded to many European typewriters that placed the parentheses with 8 and 9. This discrepancy from typewriters led to bit-paired keyboards, notably the Teletype Model 33, which used the left-shifted layout corresponding to ASCII, differently from traditional mechanical typewriters.
Electric typewriters, notably the IBM Selectric (1961), used a somewhat different layout that became the de facto standard on computers following the IBM PC (1981), especially the Model M keyboard (1984), and thus shift values for symbols on modern keyboards do not correspond as closely to the ASCII table as earlier keyboards did. The /? pair also dates to the No. 2, and the ,< .> pairs were used on some keyboards (others, including the No. 2, did not shift , (comma) or . (full stop) so they could be used in uppercase without unshifting). However, ASCII split the ;: pair (dating to the No. 2), and rearranged mathematical symbols (varied conventions, commonly -* =+) to :* ;+ -=.
Some then-common typewriter characters were not included, notably ½ ¼ ¢, while ^ ` ~ were included as diacritics for international use, and < > for mathematical use, together with the simple line characters \ | (in addition to common /). The @ symbol was not used in continental Europe and the committee expected it would be replaced by an accented À in the French variation, so the @ was placed in position 40hex, right before the letter A.
The control codes felt essential for data transmission were the start of message (SOM), end of address (EOA), end of message (EOM), end of transmission (EOT), "who are you?" (WRU), "are you?" (RU), a reserved device control (DC0), synchronous idle (SYNC), and acknowledge (ACK). These were positioned to maximize the Hamming distance between their bit patterns.
Character order
ASCII-code order is also called ASCIIbetical order. Collation of data is sometimes done in this order rather than "standard" alphabetical order (collating sequence). The main deviations in ASCII order are:
All uppercase come before lowercase letters; for example, "Z" precedes "a"
Digits and many punctuation marks come before letters
An intermediate order converts uppercase letters to lowercase before comparing ASCII values.
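A C sketch of that intermediate, case-folded comparison (a strcasecmp-style illustration with an invented name, not a standardized routine):

#include <ctype.h>

/* Plain ASCIIbetical order puts "Zebra" before "apple" because every
   uppercase code precedes every lowercase code; folding both strings
   to lowercase first yields the case-insensitive intermediate order. */
int ascii_ci_compare(const char *a, const char *b) {
    while (*a != '\0' && tolower((unsigned char)*a) == tolower((unsigned char)*b)) {
        a++;
        b++;
    }
    return tolower((unsigned char)*a) - tolower((unsigned char)*b);
}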
Character set
Character groups
Control characters
ASCII reserves the first 32 code points (numbers 0–31 decimal) and the last one (number 127 decimal) for control characters. These are codes intended to control peripheral devices (such as printers), or to provide meta-information about data streams, such as those stored on magnetic tape. Despite their name, these code points do not represent printable characters although for debugging purposes, "placeholder" symbols (such as those given in ISO 2047 and its predecessors) are assigned.
For example, character 0x0A represents the "line feed" function (which causes a printer to advance its paper), and character 0x08 represents "backspace". RFC 2822 refers to control characters that do not include carriage return, line feed or white space as non-whitespace control characters. Except for the control characters that prescribe elementary line-oriented formatting, ASCII does not define any mechanism for describing the structure or appearance of text within a document. Other schemes, such as markup languages, address page and document layout and formatting.
The original ASCII standard used only short descriptive phrases for each control character. The ambiguity this caused was sometimes intentional, for example where a character would be used slightly differently on a terminal link than on a data stream, and sometimes accidental, for example with the meaning of "delete".
Probably the most influential single device affecting the interpretation of these characters was the Teletype Model 33 ASR, which was a printing terminal with an available paper tape reader/punch option. Paper tape was a very popular medium for long-term program storage until the 1980s, less costly and in some ways less fragile than magnetic tape. In particular, the Teletype Model 33 machine assignments for codes 17 (control-Q, DC1, also known as XON), 19 (control-S, DC3, also known as XOFF), and 127 (delete) became de facto standards. The Model 33 was also notable for taking the description of control-G (code 7, BEL, meaning audibly alert the operator) literally, as the unit contained an actual bell which it rang when it received a BEL character. Because the keytop for the O key also showed a left-arrow symbol (from ASCII-1963, which had this character instead of underscore), a noncompliant use of code 15 (control-O, shift in), interpreted as "delete previous character", was also adopted by many early timesharing systems but eventually fell into disuse.
When a Teletype 33 ASR equipped with the automatic paper tape reader received a control-S (XOFF, an abbreviation for transmit off), it caused the tape reader to stop; receiving control-Q (XON, transmit on) caused the tape reader to resume. This so-called flow control technique became adopted by several early computer operating systems as a "handshaking" signal warning a sender to stop transmission because of impending buffer overflow; it persists to this day in many systems as a manual output control technique. On some systems, control-S retains its meaning, but control-Q is replaced by a second control-S to resume output.
The 33 ASR also could be configured to employ control-R (DC2) and control-T (DC4) to start and stop the tape punch; on some units equipped with this function, the corresponding control character lettering on the keycap above the letter was TAPE and TAPE with an overbar, respectively.
Delete vs backspace
The Teletype could not move its typehead backwards, so it did not have a key on its keyboard to send a BS (backspace). Instead, there was a key marked RUBOUT that sent code 127 (DEL). The purpose of this key was to erase mistakes in a manually-input paper tape: the operator had to push a button on the tape punch to back it up, then type the rubout, which punched all holes and replaced the mistake with a character that was intended to be ignored. Teletypes were commonly used with the less-expensive computers from Digital Equipment Corporation (DEC); these systems had to use what keys were available, and thus the DEL character was assigned to erase the previous character. Because of this, DEC video terminals (by default) sent the DEL character for the key marked "Backspace" while the separate key marked "Delete" sent an escape sequence; many other competing terminals sent a BS character for the backspace key.
The Unix terminal driver could only use one character to erase the previous character; this could be set to BS or DEL, but not both, resulting in recurring situations of ambiguity where users had to decide depending on what terminal they were using (shells that allow line editing, such as ksh, bash, and zsh, understand both). The assumption that no key sent a BS character allowed control+H to be used for other purposes, such as the "help" prefix command in GNU Emacs.
Escape
Many more of the control characters have been assigned meanings quite different from their original ones. The "escape" character (ESC, code 27), for example, was intended originally to allow sending of other control characters as literals instead of invoking their meaning, an "escape sequence". This is the same meaning of "escape" encountered in URL encodings, C language strings, and other systems where certain characters have a reserved meaning. Over time this interpretation has been co-opted and has eventually been changed.
In modern usage, an ESC sent to the terminal usually indicates the start of a command sequence usually in the form of a so-called "ANSI escape code" (or, more properly, a "Control Sequence Introducer") from ECMA-48 (1972) and its successors, beginning with ESC followed by a "[" (left-bracket) character. In contrast, an ESC sent from the terminal is most often used as an out-of-band character used to terminate an operation or special mode, as in the TECO and vi text editors. In graphical user interface (GUI) and windowing systems, ESC generally causes an application to abort its current operation or to exit (terminate) altogether.
End of line
The inherent ambiguity of many control characters, combined with their historical usage, created problems when transferring "plain text" files between systems. The best example of this is the newline problem on various operating systems. Teletype machines required that a line of text be terminated with both "carriage return" (which moves the printhead to the beginning of the line) and "line feed" (which advances the paper one line without moving the printhead). The name "carriage return" comes from the fact that on a manual typewriter the carriage holding the paper moves while the typebars that strike the ribbon remain stationary. The entire carriage had to be pushed (returned) to the right in order to position the paper for the next line.
DEC operating systems (OS/8, RT-11, RSX-11, RSTS, TOPS-10, etc.) used both characters to mark the end of a line so that the console device (originally Teletype machines) would work. By the time so-called "glass TTYs" (later called CRTs or "dumb terminals") came along, the convention was so well established that backward compatibility necessitated continuing to follow it. When Gary Kildall created CP/M, he was inspired by some of the command line interface conventions used in DEC's RT-11 operating system.
Until the introduction of PC DOS in 1981, IBM had no influence in this because their 1970s operating systems used EBCDIC encoding instead of ASCII, and they were oriented toward punch-card input and line printer output on which the concept of "carriage return" was meaningless. IBM's PC DOS (also marketed as MS-DOS by Microsoft) inherited the convention by virtue of being loosely based on CP/M, and Windows in turn inherited it from MS-DOS.
Requiring two characters to mark the end of a line introduces unnecessary complexity and ambiguity as to how to interpret each character when encountered by itself. To simplify matters, plain text data streams, including files, on Multics used line feed (LF) alone as a line terminator. Unix and Unix-like systems, and Amiga systems, adopted this convention from Multics. On the other hand, the original Macintosh OS, Apple DOS, and ProDOS used carriage return (CR) alone as a line terminator; however, since Apple has now replaced these obsolete operating systems with the Unix-based macOS operating system, they now use line feed (LF) as well. The Radio Shack TRS-80 also used a lone CR to terminate lines.
Computers attached to the ARPANET included machines running operating systems such as TOPS-10 and TENEX using CR-LF line endings; machines running operating systems such as Multics using LF line endings; and machines running operating systems such as OS/360 that represented lines as a character count followed by the characters of the line and which used EBCDIC rather than ASCII encoding. The Telnet protocol defined an ASCII "Network Virtual Terminal" (NVT), so that connections between hosts with different line-ending conventions and character sets could be supported by transmitting a standard text format over the network. Telnet used ASCII along with CR-LF line endings, and software using other conventions would translate between the local conventions and the NVT. The File Transfer Protocol adopted the Telnet protocol, including use of the Network Virtual Terminal, for use when transmitting commands and transferring data in the default ASCII mode. This adds complexity to implementations of those protocols, and to other network protocols, such as those used for E-mail and the World Wide Web, on systems not using the NVT's CR-LF line-ending convention.
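One common normalization, sketched in C under the assumption that CR-LF pairs should collapse to a bare LF while lone CRs pass through unchanged (the function name is illustrative):

#include <stdio.h>

/* Copy in to out, rewriting CR-LF line endings as LF alone. */
void crlf_to_lf(FILE *in, FILE *out) {
    int c, prev = 0;
    while ((c = getc(in)) != EOF) {
        if (prev == '\r' && c != '\n')
            putc('\r', out);   /* lone CR: emit it after all */
        if (c != '\r')
            putc(c, out);      /* CR is held back until we see what follows */
        prev = c;
    }
    if (prev == '\r')
        putc('\r', out);       /* trailing CR at end of stream */
}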
End of file/stream
The PDP-6 monitor, and its PDP-10 successor TOPS-10, used control-Z (SUB) as an end-of-file indication for input from a terminal. Some operating systems such as CP/M tracked file length only in units of disk blocks, and used control-Z to mark the end of the actual text in the file. For these reasons, EOF, or end-of-file, was used colloquially and conventionally as a three-letter acronym for control-Z instead of SUBstitute. The end-of-text character (ETX), also known as control-C, was inappropriate for a variety of reasons, while using control-Z as the control character to end a file is analogous to the letter Z's position at the end of the alphabet, and serves as a very convenient mnemonic aid. A historically common and still prevalent convention uses the ETX character to interrupt and halt a program via an input data stream, usually from a keyboard.
The Unix terminal driver uses the end-of-transmission character (EOT), also known as control-D, to indicate the end of a data stream.
In the C programming language, and in Unix conventions, the null character is used to terminate text strings; such null-terminated strings can be known in abbreviation as ASCIZ or ASCIIZ, where here Z stands for "zero".
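For illustration, a minimal C version of the classic length scan over an ASCIZ string (a sketch of the convention with an invented name, not a library reference):

#include <stddef.h>

/* Walk the bytes of a null-terminated string until the zero byte. */
size_t asciz_length(const char *s) {
    const char *p = s;
    while (*p != '\0')
        p++;
    return (size_t)(p - s);
}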
Control code chart
Other representations might be used by specialist equipment, for example ISO 2047 graphics or hexadecimal numbers.
Printable characters
Codes 20hex to 7Ehex, known as the printable characters, represent letters, digits, punctuation marks, and a few miscellaneous symbols. There are 95 printable characters in total.
Code 20hex, the "space" character, denotes the space between words, as produced by the space bar of a keyboard. Since the space character is considered an invisible graphic (rather than a control character) it is listed in the table below instead of in the previous section.
Code 7Fhex corresponds to the non-printable "delete" (DEL) control character and is therefore omitted from this chart; it is covered in the previous section's chart. Earlier versions of ASCII used the up arrow instead of the caret (5Ehex) and the left arrow instead of the underscore (5Fhex).
Usage
ASCII was first used commercially during 1963 as a seven-bit teleprinter code for American Telephone & Telegraph's TWX (TeletypeWriter eXchange) network. TWX originally used the earlier five-bit ITA2, which was also used by the competing Telex teleprinter system. Bob Bemer introduced features such as the escape sequence. His British colleague Hugh McGregor Ross helped to popularize this work according to Bemer, "so much so that the code that was to become ASCII was first called the Bemer–Ross Code in Europe". Because of his extensive work on ASCII, Bemer has been called "the father of ASCII".
On March 11, 1968, US President Lyndon B. Johnson mandated that all computers purchased by the United States Federal Government support ASCII, stating:
I have also approved recommendations of the Secretary of Commerce [Luther H. Hodges] regarding standards for recording the Standard Code for Information Interchange on magnetic tapes and paper tapes when they are used in computer operations.
All computers and related equipment configurations brought into the Federal Government inventory on and after July 1, 1969, must have the capability to use the Standard Code for Information Interchange and the formats prescribed by the magnetic tape and paper tape standards when these media are used.
ASCII was the most common character encoding on the World Wide Web until December 2007, when UTF-8 encoding surpassed it; UTF-8 is backward compatible with ASCII.
Variants and derivations
As computer technology spread throughout the world, different standards bodies and corporations developed many variations of ASCII to facilitate the expression of non-English languages that used Roman-based alphabets. One could class some of these variations as "ASCII extensions", although some misuse that term to represent all variants, including those that do not preserve ASCII's character-map in the 7-bit range. Furthermore, the ASCII extensions have also been mislabelled as ASCII.
7-bit codes
From early in its development, ASCII was intended to be just one of several national variants of an international character code standard.
Other international standards bodies have ratified character encodings such as ISO 646 (1967) that are identical or nearly identical to ASCII, with extensions for characters outside the English alphabet and symbols used outside the United States, such as the symbol for the United Kingdom's pound sterling (£); e.g. with code page 1104. Almost every country needed an adapted version of ASCII, since ASCII suited the needs of only the US and a few other countries. For example, Canada had its own version that supported French characters.
Many other countries developed variants of ASCII to include non-English letters (e.g. é, ñ, ß, Ł), currency symbols (e.g. £, ¥), etc. See also YUSCII (Yugoslavia).
Each national variant would share most characters in common with ASCII, but assign other locally useful characters to several code points reserved for "national use". However, the four years that elapsed between the publication of ASCII-1963 and ISO's first acceptance of an international recommendation during 1967 caused ASCII's choices for the national use characters to seem to be de facto standards for the world, causing confusion and incompatibility once other countries did begin to make their own assignments to these code points.
ISO/IEC 646, like ASCII, is a 7-bit character set. It does not make any additional codes available, so the same code points encoded different characters in different countries. Escape codes were defined to indicate which national variant applied to a piece of text, but they were rarely used, so it was often impossible to know what variant to work with and, therefore, which character a code represented, and in general, text-processing systems could cope with only one variant anyway.
Because the bracket and brace characters of ASCII were assigned to "national use" code points that were used for accented letters in other national variants of ISO/IEC 646, a German, French, or Swedish, etc. programmer using their national variant of ISO/IEC 646, rather than ASCII, had to write, and thus read, something such as
ä aÄiÜ = 'Ön'; ü
instead of
{ a[i] = '\n'; }
C trigraphs were created to solve this problem for ANSI C, although their late introduction and inconsistent implementation in compilers limited their use. Many programmers kept their computers on US-ASCII, so plain-text in Swedish, German etc. (for example, in e-mail or Usenet) contained "{, }" and similar variants in the middle of words, something those programmers got used to. For example, a Swedish programmer mailing another programmer asking if they should go for lunch, could get "N{ jag har sm|rg}sar" as the answer, which should be "Nä jag har smörgåsar" meaning "No I've got sandwiches".
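For reference, the braces-and-brackets statement shown earlier spelled with trigraphs, in which ??< stands for {, ??> for }, ??( for [, ??) for ], and ??/ for the backslash:

??< a??(i??) = '??/n'; ??>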
In Japan and Korea, a variation of ASCII is still used, in which the backslash (5C hex) is rendered as ¥ (a yen sign, in Japan) or ₩ (a won sign, in Korea). This means that, for example, the file path C:\Users\Smith is shown as C:¥Users¥Smith (in Japan) or C:₩Users₩Smith (in Korea).
In Europe, teletext character sets, which are variants of ASCII, are used for broadcast TV subtitles, defined by World System Teletext and broadcast using the DVB-TXT standard for embedding teletext into DVB transmissions. In the case that the subtitles were initially authored for teletext and converted, the derived subtitle formats are constrained to the same character sets.
8-bit codes
Eventually, as 8-, 16-, and 32-bit (and later 64-bit) computers began to replace 12-, 18-, and 36-bit computers as the norm, it became common to use an 8-bit byte to store each character in memory, providing an opportunity for extended, 8-bit relatives of ASCII. In most cases these developed as true extensions of ASCII, leaving the original character-mapping intact, but adding additional character definitions after the first 128 (i.e., 7-bit) characters.
Other encodings based on ASCII include ISCII (India) and VISCII (Vietnam). Although these encodings are sometimes referred to as ASCII, true ASCII is defined strictly only by the ANSI standard.
Most early home computer systems developed their own 8-bit character sets containing line-drawing and game glyphs, and often filled in some or all of the control characters from 0 to 31 with more graphics. Kaypro CP/M computers used the "upper" 128 characters for the Greek alphabet.
The PETSCII code Commodore International used for their 8-bit systems is probably unique among post-1970 codes in being based on ASCII-1963 instead of the more common ASCII-1967, which is found, for example, on the ZX Spectrum computer. Atari 8-bit computers and Galaksija computers also used ASCII variants.
The IBM PC defined code page 437, which replaced the control characters with graphic symbols such as smiley faces, and mapped additional graphic characters to the upper 128 positions. Operating systems such as DOS supported these code pages, and manufacturers of IBM PCs supported them in hardware. Digital Equipment Corporation developed the Multinational Character Set (DEC-MCS) for use in the popular VT220 terminal as one of the first extensions designed more for international languages than for block graphics. The Macintosh defined Mac OS Roman and PostScript defined another character set: both sets contained "international" letters, typographic symbols and punctuation marks instead of graphics, more like modern character sets.
The ISO/IEC 8859 standard (derived from the DEC-MCS) finally provided a standard that most systems copied (at least as accurately as they copied ASCII, but with many substitutions). A popular further extension designed by Microsoft, Windows-1252 (often mislabeled as ISO-8859-1), added the typographic punctuation marks needed for traditional text printing. ISO-8859-1, Windows-1252, and the original 7-bit ASCII were the most common character encodings until 2008 when UTF-8 became more common.
ISO/IEC 4873 introduced 32 additional control codes defined in the 80–9F hexadecimal range, as part of extending the 7-bit ASCII encoding to become an 8-bit system.
Unicode
Unicode and the ISO/IEC 10646 Universal Character Set (UCS) have a much wider array of characters and their various encoding forms have begun to supplant ISO/IEC 8859 and ASCII rapidly in many environments. While ASCII is limited to 128 characters, Unicode and the UCS support more characters by separating the concepts of unique identification (using natural numbers called code points) and encoding (to 8-, 16-, or 32-bit binary formats, called UTF-8, UTF-16, and UTF-32, respectively).
ASCII was incorporated into the Unicode (1991) character set as the first 128 symbols, so the 7-bit ASCII characters have the same numeric codes in both sets. This allows UTF-8 to be backward compatible with 7-bit ASCII, as a UTF-8 file containing only ASCII characters is identical to an ASCII file containing the same sequence of characters. Even more importantly, forward compatibility is ensured as software that recognizes only 7-bit ASCII characters as special and does not alter bytes with the highest bit set (as is often done to support 8-bit ASCII extensions such as ISO-8859-1) will preserve UTF-8 data unchanged.
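A short C check capturing that property (a sketch; the function name is illustrative): a buffer whose bytes all stay below 0x80 is simultaneously valid ASCII and byte-for-byte identical valid UTF-8.

#include <stdbool.h>
#include <stddef.h>

/* True if every byte is a 7-bit value, i.e. plain ASCII,
   which is also unchanged under UTF-8 encoding. */
bool is_plain_ascii(const unsigned char *buf, size_t len) {
    for (size_t i = 0; i < len; i++)
        if (buf[i] >= 0x80)
            return false;
    return true;
}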
See also
3568 ASCII, an asteroid named after the character encoding
Basic Latin (Unicode block) (ASCII as a subset of Unicode)
HTML decimal character rendering
Jargon File, a glossary of computer programmer slang which includes a list of common slang names for ASCII characters
List of computer character sets
List of Unicode characters
https://en.wikipedia.org/wiki/Animation
Animation is a filmmaking technique by which still images are manipulated to create moving images. In traditional animation, images are drawn or painted by hand on transparent celluloid sheets (cels) to be photographed and exhibited on film. Animation has been recognized as an artistic medium, specifically within the entertainment industry. Many animations are computer animations made with computer-generated imagery (CGI). Stop motion animation, in particular claymation, has continued to exist alongside these other forms.
Animation is contrasted with live-action film, although the two do not exist in isolation. Many moviemakers have produced films that are a hybrid of the two. As CGI increasingly approximates photographic imagery, filmmakers can easily composite 3D animations into their film rather than using practical effects for showy visual effects (VFX).
General overview
Computer animation can be very detailed 3D animation, while 2D computer animation (which may have the look of traditional animation) can be used for stylistic reasons, low bandwidth, or faster real-time renderings. Other common animation methods apply a stop motion technique to two- and three-dimensional objects like paper cutouts, puppets, or clay figures.
A cartoon is an animated film, usually a short film, featuring an exaggerated visual style. The style takes inspiration from comic strips, often featuring anthropomorphic animals, superheroes, or the adventures of human protagonists. Especially with animals that form a natural predator/prey relationship (e.g. cats and mice, coyotes and birds), the action often centers on violent pratfalls such as falls, collisions, and explosions that would be lethal in real life.
The illusion of animation—as in motion pictures in general—has traditionally been attributed to the persistence of vision and later to the phi phenomenon and beta movement, but the exact neurological causes are still uncertain. The illusion of motion caused by a rapid succession of images that minimally differ from each other, with unnoticeable interruptions, is a stroboscopic effect. While animators traditionally used to draw each part of the movements and changes of figures on transparent cels that could be moved over a separate background, computer animation is usually based on programming paths between key frames to maneuver digitally created figures throughout a digitally created environment.
Analog mechanical animation media that rely on the rapid display of sequential images include the phénakisticope, zoetrope, flip book, praxinoscope, and film. Television and video are popular electronic animation media that originally were analog and now operate digitally. For display on computers, technology such as the animated GIF and Flash animation were developed.
In addition to short films, feature films, television series, animated GIFs, and other media dedicated to the display of moving images, animation is also prevalent in video games, motion graphics, user interfaces, and visual effects.
The physical movement of image parts through simple mechanics—for instance, moving images in magic lantern shows—can also be considered animation. The mechanical manipulation of three-dimensional puppets and objects to emulate living beings has a very long history in automata. Electronic automata were popularized by Disney as animatronics.
Etymology
The word "animation" stems from the Latin "animātiōn", stem of "animātiō", meaning "a bestowing of life". The earlier meaning of the English word is "liveliness" and has been in use much longer than the meaning of "moving image medium".
History
Before cinematography
Hundreds of years before the introduction of true animation, people all over the world enjoyed shows with moving figures that were physically manipulated (manually, or sometimes mechanically) in puppetry, automata, shadow play, and the magic lantern (especially in phantasmagoria shows).
In 1833, the stroboscopic disc (better known as the phénakisticope) introduced the principle of modern animation, which would also be applied in the zoetrope (introduced in 1866), the flip book (1868), the praxinoscope (1877) and film.
Silent era
When cinematography eventually broke through in the 1890s, the wonder of the realistic details in the new medium was seen as its biggest accomplishment. It took years before animation found its way to the cinemas. The successful short The Haunted Hotel (1907) by J. Stuart Blackton popularized stop-motion and reportedly inspired Émile Cohl to create Fantasmagorie (1908), regarded as the oldest known example of a complete traditional (hand-drawn) animation on standard cinematographic film. Other great artistic and very influential short films were created by Ladislas Starevich with his puppet animations since 1910 and by Winsor McCay with detailed hand-drawn animation in films such as Little Nemo (1911) and Gertie the Dinosaur (1914).
During the 1910s, the production of animated "cartoons" became an industry in the US. Successful producer John Randolph Bray and animator Earl Hurd patented the cel animation process that dominated the animation industry for the rest of the century. Felix the Cat, who debuted in 1919, became the first fully realized animal character in the history of American animation.
American golden age
In 1928, Steamboat Willie, featuring Mickey Mouse and Minnie Mouse, popularized film with synchronized sound and put Walt Disney's studio at the forefront of the animation industry. Although Disney Animation's actual output relative to total global animation output has always been very small, the studio has overwhelmingly dominated the "aesthetic norms" of animation ever since.
The enormous success of Mickey Mouse is seen as the start of the golden age of American animation that would last until the 1960s. The United States dominated the world market of animation with a plethora of cel-animated theatrical shorts. Several studios would introduce characters that would become very popular and would have long-lasting careers, including Walt Disney Productions' Goofy (1932) and Donald Duck (1934); Fleischer Studios/Paramount Cartoon Studios' Out of the Inkwell star Koko the Clown (1918), Bimbo and Betty Boop (1930), Popeye (1933) and Casper (1945); Warner Bros. Cartoons' Looney Tunes stars Porky Pig (1935), Daffy Duck (1937), Elmer Fudd (1937–1940), Bugs Bunny (1938–1940), Tweety (1942) and Wile E. Coyote and Road Runner (1949); MGM cartoon studio's Tom and Jerry (1940) and Droopy; Walter Lantz Productions/Universal Studio Cartoons' Woody Woodpecker (1940); Terrytoons/20th Century Fox's Mighty Mouse (1942); and United Artists' Pink Panther (1963).
Features before CGI
In 1917, Italian-Argentine director Quirino Cristiani made the first feature-length animated film, El Apóstol (now lost), which became a critical and commercial success. It was followed by Cristiani's Sin dejar rastros in 1918, but one day after its premiere the film was confiscated by the government.
After working on it for three years, Lotte Reiniger released the German feature-length silhouette animation Die Abenteuer des Prinzen Achmed in 1926, the oldest extant animated feature.
In 1937, Walt Disney Studios premiered their first animated feature, Snow White and the Seven Dwarfs, still one of the highest-grossing traditionally animated features. The Fleischer studios followed this example in 1939 with Gulliver's Travels with some success. Partly due to foreign markets being cut off by the Second World War, Disney's next features Pinocchio and Fantasia (both 1940) and Fleischer Studios' second animated feature Mr. Bug Goes to Town (1941–1942) failed at the box office. For decades afterward, Disney would be the only American studio to regularly produce animated features, until Ralph Bakshi became the first to also release more than a handful of features. Sullivan-Bluth Studios began to regularly produce animated features starting with An American Tail in 1986.
Although relatively few titles became as successful as Disney's features, other countries developed their own animation industries that produced both short and feature theatrical animations in a wide variety of styles, relatively often including stop motion and cutout animation techniques. Russia's Soyuzmultfilm animation studio, founded in 1936, produced 20 films (including shorts) per year on average and reached 1,582 titles in 2018. China, Czechoslovakia / Czech Republic, Italy, France, and Belgium were other countries that more than occasionally released feature films, while Japan became a true powerhouse of animation production, with its own recognizable and influential anime style of effective limited animation.
Television
Animation became very popular on television since the 1950s, when television sets started to become common in most developed countries. Cartoons were mainly programmed for children, on convenient time slots, and especially US youth spent many hours watching Saturday-morning cartoons. Many classic cartoons found a new life on the small screen, and by the end of the 1950s the production of new animated cartoons started to shift from theatrical releases to TV series. Hanna-Barbera Productions was especially prolific and had huge hit series, such as The Flintstones (1960–1966) (the first prime time animated series), Scooby-Doo (since 1969) and Belgian co-production The Smurfs (1981–1989). The constraints of American television programming and the demand for an enormous quantity resulted in cheaper and quicker limited animation methods and much more formulaic scripts. Quality dwindled until more daring animation surfaced in the late 1980s and early 1990s with hit series such as The Simpsons (which debuted as shorts in 1987 before becoming its own series in 1989) and SpongeBob SquarePants (since 1999), as part of a "renaissance" of American animation.
While US animated series also spawned successes internationally, many other countries produced their own child-oriented programming, relatively often preferring stop motion and puppetry over cel animation. Japanese anime TV series became very successful internationally since the 1960s, and European producers looking for affordable cel animators relatively often started co-productions with Japanese studios, resulting in hit series such as Barbapapa (The Netherlands/Japan/France 1973–1977), Wickie und die starken Männer/小さなバイキング ビッケ (Vicky the Viking) (Austria/Germany/Japan 1974), Maya the Bee (Japan/Germany 1975) and The Jungle Book (Italy/Japan 1989).
Switch from cels to computers
Computer animation was gradually developed since the 1940s. 3D wireframe animation started popping up in the mainstream in the 1970s, with an early (short) appearance in the sci-fi thriller Futureworld (1976).
The Rescuers Down Under was the first feature film to be completely created digitally without a camera. It was produced in a style very similar to traditional cel animation on the Computer Animation Production System (CAPS), developed by The Walt Disney Company in collaboration with Pixar in the late 1980s.
The so-called 3D style, more often associated with computer animation, became the dominant technique following the success of Pixar's Toy Story (1995), the first computer-animated feature in this style.
Most of the cel animation studios switched to producing mostly computer-animated films around the 1990s, as it proved cheaper and more profitable. Not only was the very popular 3D animation style generated with computers, but so were most of the films and series with a more traditional hand-crafted appearance, in which the charming characteristics of cel animation could be emulated with software, while new digital tools helped develop new styles and effects.
Economic status
In 2010, the animation market was estimated to be worth circa US$80 billion. By 2020, the value had increased to an estimated US$270 billion. Animated feature-length films returned the highest gross margins (around 52%) of all film genres between 2004 and 2013. Animation as an art and industry continues to thrive as of the early 2020s.
Education, propaganda and commercials
The clarity of animation makes it a powerful tool for instruction, while its total malleability also allows exaggeration that can be employed to convey strong emotions and to thwart reality. It has therefore been widely used for other purposes than mere entertainment.
During World War II, animation was widely exploited for propaganda. Many American studios, including Warner Bros. and Disney, lent their talents and their cartoon characters to convey to the public certain war values. Some countries, including China, Japan and the United Kingdom, produced their first feature-length animation for their war efforts.
Animation has been very popular in television commercials, both due to its graphic appeal, and the humour it can provide. Some animated characters in commercials have survived for decades, such as Snap, Crackle and Pop in advertisements for Kellogg's cereals. Tex Avery was the producer of the first Raid "Kills Bugs Dead" commercials in 1966, which were very successful for the company.
Other media, merchandise and theme parks
Apart from their success in movie theaters and television series, many cartoon characters would also prove lucrative when licensed for all kinds of merchandise and for other media.
Animation has traditionally been very closely related to comic books. While many comic book characters found their way to the screen (which is often the case in Japan, where many manga are adapted into anime), original animated characters also commonly appear in comic books and magazines. Somewhat similarly, characters and plots for video games (an interactive form of animation that became its own medium) have been derived from films and vice versa.
Some of the original content produced for the screen can be used and marketed in other media. Stories and images can easily be adapted into children's books and other printed media. Songs and music have appeared on records and as streaming media.
While very many animation companies commercially exploit their creations outside moving image media, The Walt Disney Company is the best known and most extreme example. Since first being licensed for a children's writing tablet in 1929, their Mickey Mouse mascot has been depicted on an enormous range of products, as have many other Disney characters. This may have influenced some pejorative use of Mickey's name, but licensed Disney products sell well, and the so-called Disneyana has many avid collectors, and even a dedicated Disneyana Fan Club (since 1984).
Disneyland opened in 1955 and features many attractions that were based on Disney's cartoon characters. Its enormous success spawned several other Disney theme parks and resorts. Disney's earnings from the theme parks have relatively often been higher than those from their movies.
Criticism
Criticism of animation has been common in media and cinema since its inception. With its popularity, a large amount of criticism has arisen, especially of animated feature-length films. Criticisms regarding cultural representation and psychological effects on children have been raised around the animation industry, which some claim has remained politically unchanged and stagnant since its inception into mainstream culture.
Awards
As with any other form of media, animation has instituted awards for excellence in the field. Many are part of general or regional film award programs, like China's Golden Rooster Award for Best Animation (since 1981). Awards programs dedicated to animation, with many categories, include ASIFA-Hollywood's Annie Awards, the Emile Awards in Europe and the Anima Mundi awards in Brazil.
Academy Awards
Apart from Academy Awards for Best Animated Short Film (since 1932) and Best Animated Feature (since 2002), animated movies have been nominated and rewarded in other categories, relatively often for Best Original Song and Best Original Score.
Beauty and the Beast was the first animated film nominated for Best Picture, in 1991. Up (2009) and Toy Story 3 (2010) also received Best Picture nominations, after the academy expanded the number of nominees from five to ten.
Production
The creation of non-trivial animation works (i.e., longer than a few seconds) has developed as a form of filmmaking, with certain unique aspects. Traits common to both live-action and animated feature-length films are labor intensity and high production costs.
The most important difference is that once a film is in the production phase, the marginal cost of one more shot is higher for animated films than live-action films. It is relatively easy for a director to ask for one more take during principal photography of a live-action film, but every take on an animated film must be manually rendered by animators (although the task of rendering slightly different takes has been made less tedious by modern computer animation). It is pointless for a studio to pay the salaries of dozens of animators to spend weeks creating a visually dazzling five-minute scene if that scene fails to effectively advance the plot of the film. Thus, animation studios starting with Disney began the practice in the 1930s of maintaining story departments where storyboard artists develop every single scene through storyboards, then handing the film over to the animators only after the production team is satisfied that all the scenes make sense as a whole. While live-action films are now also storyboarded, they enjoy more latitude to depart from storyboards (i.e., real-time improvisation).
Another problem unique to animation is the requirement to maintain a film's consistency from start to finish, even as films have grown longer and teams have grown larger. Animators, like all artists, necessarily have individual styles, but must subordinate their individuality in a consistent way to whatever style is employed on a particular film. Since the early 1980s, teams of about 500 to 600 people, of whom 50 to 70 are animators, typically have created feature-length animated films. It is relatively easy for two or three artists to match their styles; synchronizing those of dozens of artists is more difficult.
This problem is usually solved by having a separate group of visual development artists develop an overall look and palette for each film before the animation begins. Character designers on the visual development team draw model sheets to show how each character should look with different facial expressions, posed in different positions, and viewed from different angles. On traditionally animated projects, maquettes were often sculpted to further help the animators see how characters would look from different angles.
Unlike live-action films, animated films were traditionally developed beyond the synopsis stage through the storyboard format; the storyboard artists would then receive credit for writing the film. In the early 1960s, animation studios began hiring professional screenwriters to write screenplays (while also continuing to use story departments) and screenplays had become commonplace for animated films by the late 1980s.
Techniques
Traditional
Traditional animation (also called cel animation or hand-drawn animation) was the process used for most animated films of the 20th century. The individual frames of a traditionally animated film are photographs of drawings, first drawn on paper. To create the illusion of movement, each drawing differs slightly from the one before it. The animators' drawings are traced or photocopied onto transparent acetate sheets called cels, which are filled in with paints in assigned colors or tones on the side opposite the line drawings. The completed character cels are photographed one-by-one against a painted background by a rostrum camera onto motion picture film.
The traditional cel animation process became obsolete by the beginning of the 21st century. Today, animators' drawings and the backgrounds are either scanned into or drawn directly into a computer system. Various software programs are used to color the drawings and simulate camera movement and effects. The final animated piece is output to one of several delivery media, including traditional 35 mm film and newer media with digital video. The "look" of traditional cel animation is still preserved, and the character animators' work has remained essentially the same over the past 90 years. Some animation producers have used the term "tradigital" (a play on the words "traditional" and "digital") to describe cel animation that uses significant computer technology.
Examples of traditionally animated feature films include Pinocchio (United States, 1940), Animal Farm (United Kingdom, 1954), Lucky and Zorba (Italy, 1998), and The Illusionist (British-French, 2010). Traditionally animated films produced with the aid of computer technology include The Lion King (US, 1994), The Prince of Egypt (US, 1998), Akira (Japan, 1988), Spirited Away (Japan, 2001), The Triplets of Belleville (France, 2003), and The Secret of Kells (Irish-French-Belgian, 2009).
Full
Full animation is the process of producing high-quality traditionally animated films that regularly use detailed drawings and plausible movement, resulting in smooth animation. Fully animated films can be made in a variety of styles, from more realistically animated works like those produced by the Walt Disney studio (The Little Mermaid, Beauty and the Beast, Aladdin, The Lion King) to the more 'cartoon' styles of the Warner Bros. animation studio. Many of the Disney animated features are examples of full animation, as are non-Disney works such as The Secret of NIMH (US, 1982), The Iron Giant (US, 1999), and Nocturna (Spain, 2007). Fully animated films are often animated on "twos", sometimes on "ones", which means that 12 to 24 drawings are required for a single second of film.
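For a rough sense of the labor involved (an illustrative calculation, not a figure from production records): at the standard 24 frames per second, animating "on twos" means 12 unique drawings per second, so a 90-minute feature would require about 12 × 60 × 90 = 64,800 drawings, and animating "on ones" doubles that to roughly 129,600.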
Limited
Limited animation involves the use of less detailed or more stylized drawings and methods of movement, usually producing a choppy or "skippy" style of animation. Limited animation uses fewer drawings per second, thereby limiting the fluidity of the animation. This is a more economical technique. Pioneered by the artists at the American studio United Productions of America, limited animation can be used as a method of stylized artistic expression, as in Gerald McBoing-Boing (US, 1951), Yellow Submarine (UK, 1968), and certain anime produced in Japan. Its primary use, however, has been in producing cost-effective animated content for media for television (the work of Hanna-Barbera, Filmation, and other TV animation studios) and later the Internet (web cartoons).
Rotoscoping
Rotoscoping is a technique patented by Max Fleischer in 1917 where animators trace live-action movement, frame by frame. The source film can be directly copied from actors' outlines into animated drawings, as in The Lord of the Rings (US, 1978), or used in a stylized and expressive manner, as in Waking Life (US, 2001) and A Scanner Darkly (US, 2006). Some other examples are Fire and Ice (US, 1983), Heavy Metal (1981), and Aku no Hana (Japan, 2013).
Live-action blending
Live-action/animation is a technique combining hand-drawn characters into live action shots or live-action actors into animated shots. One of the earlier uses was in Koko the Clown when Koko was drawn over live-action footage. Walt Disney and Ub Iwerks created a series of Alice Comedies (1923–1927), in which a live-action girl enters an animated world. Other examples include Allegro Non Troppo (Italy, 1976), Who Framed Roger Rabbit (US, 1988), Volere volare (Italy 1991), Space Jam (US, 1996) and Osmosis Jones (US, 2001).
Stop motion
Stop-motion animation describes animation created by physically manipulating real-world objects and photographing them one frame of film at a time to create the illusion of movement. There are many different types of stop-motion animation, usually named after the materials used to create the animation. Computer software is widely available to create this type of animation; traditional stop-motion animation is usually less expensive but more time-consuming to produce than current computer animation.
Puppet animation Typically involves stop-motion puppet figures interacting in a constructed environment, in contrast to real-world interaction in model animation. The puppets generally have an armature inside them to keep them steady and to constrain their motion to particular joints. Examples include The Tale of the Fox (France, 1937), The Nightmare Before Christmas (US, 1993), Corpse Bride (US, 2005), Coraline (US, 2009), the films of Jiří Trnka and the adult animated sketch-comedy television series Robot Chicken (US, 2005–present).
Puppetoon Films created using techniques developed by George Pal; they are puppet-animated films that typically use a different version of a puppet for different frames, rather than manipulating one existing puppet.
Clay animation or Plasticine animation (often called claymation, which, however, is a trademarked name) uses figures made of clay or a similar malleable material to create stop-motion animation. The figures may have an armature or wire frame inside, similar to the related puppet animation (above), that can be manipulated to pose the figures. Alternatively, the figures may be made entirely of clay, as in the films of Bruce Bickford, where clay creatures morph into a variety of different shapes. Examples of clay-animated works include The Gumby Show (US, 1957–1967), Mio Mao (Italy, 1974–2005), Morph shorts (UK, 1977–2000), Wallace and Gromit shorts (UK, as of 1989), Jan Švankmajer's Dimensions of Dialogue (Czechoslovakia, 1982), and The Trap Door (UK, 1984). Films include Wallace & Gromit: The Curse of the Were-Rabbit, Chicken Run and The Adventures of Mark Twain.
Strata-cut animation Most commonly a form of clay animation in which a long bread-like "loaf" of clay, internally packed tight and loaded with varying imagery, is sliced into thin sheets, with the animation camera taking a frame of the end of the loaf for each cut, eventually revealing the movement of the internal images within.
Cutout animation A type of stop-motion animation produced by moving two-dimensional pieces of material such as paper or cloth. Examples include Terry Gilliam's animated sequences from Monty Python's Flying Circus (UK, 1969–1974); Fantastic Planet (France/Czechoslovakia, 1973); Tale of Tales (Russia, 1979); Matt Stone and Trey Parker's first cutout-animated South Park short (1992) and the pilot episode of the adult television sitcom series South Park (US, 1997), which occasionally used the technique in later episodes; and the music video Live for the moment by the band Verona Riots (produced by Alberto Serrano and Nívola Uyá, Spain, 2014).
Silhouette animation A variant of cutout animation in which the characters are backlit and only visible as silhouettes. Examples include The Adventures of Prince Achmed (Weimar Republic, 1926) and Princes et Princesses (France, 2000).
Model animation Stop-motion animation created to interact with and exist as a part of a live-action world. Intercutting, matte effects and split screens are often employed to blend stop-motion characters or objects with live actors and settings. Examples include the work of Ray Harryhausen, as seen in films such as Jason and the Argonauts (1963), and the work of Willis H. O'Brien on films such as King Kong (1933).
Go motion A variant of model animation that uses various techniques to create motion blur between frames of film, which is not present in traditional stop motion. The technique was invented by Industrial Light & Magic and Phil Tippett to create special effect scenes for the film Star Wars: Episode V – The Empire Strikes Back (1980). Another example is the dragon named "Vermithrax" from the 1981 film Dragonslayer.
Object animation The use of regular inanimate objects in stop-motion animation, as opposed to specially created items.
Graphic animation Uses non-drawn flat visual graphic material (photographs, newspaper clippings, magazines, etc.), which are sometimes manipulated frame by frame to create movement. At other times, the graphics remain stationary, while the stop-motion camera is moved to create on-screen action.
Brickfilm A subgenre of object animation involving the use of Lego or other similar brick toys to make an animation. Brickfilms have had a recent boost in popularity with the advent of video-sharing sites such as YouTube and the availability of cheap cameras and animation software.
Pixilation Involves the use of live humans as stop-motion characters. This allows for a number of surreal effects, including disappearances and reappearances, people appearing to slide across the ground, and other effects. Examples of pixilation include The Secret Adventures of Tom Thumb and Angry Kid shorts, and the Academy Award-winning Neighbours by Norman McLaren.
Computer
Computer animation encompasses a variety of techniques, the unifying factor being that the animation is created digitally on a computer. 2D animation techniques tend to focus on image manipulation while 3D techniques usually build virtual worlds in which characters and objects move and interact. 3D animation can create images that seem real to the viewer.
2D
2D animation figures are created or edited on the computer using 2D bitmap graphics and 2D vector graphics. This includes automated computerized versions of traditional animation techniques, interpolated morphing, onion skinning and interpolated rotoscoping.
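The common thread of these automated techniques is interpolation ("tweening"): the computer generates the in-between frames from artist-defined key positions. A minimal sketch, assuming simple linear interpolation between two 2D keyframes (real systems use splines and easing curves):

def lerp(a, b, t):
    # Linearly interpolate between values a and b for t in [0, 1].
    return a + (b - a) * t

def tween(key_start, key_end, num_frames):
    # Generate in-between (x, y) positions from two key positions.
    return [(lerp(key_start[0], key_end[0], i / (num_frames - 1)),
             lerp(key_start[1], key_end[1], i / (num_frames - 1)))
            for i in range(num_frames)]

# Move a point from (0, 0) to (100, 50) over 5 frames.
for frame in tween((0, 0), (100, 50), 5):
    print(frame)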
2D animation has many applications, including After Effects Animation, analog computer animation, Flash animation, and PowerPoint animation. Cinemagraphs are still photographs in the form of an animated GIF file of which part is animated.
Final line advection animation is a technique used in 2D animation, to give artists and animators more influence and control over the final product as everything is done within the same department. Speaking about using this approach in Paperman, John Kahrs said that "Our animators can change things, actually erase away the CG underlayer if they want, and change the profile of the arm."
3D
3D animation is digitally modeled and manipulated by an animator. The 3D model maker usually starts by creating a 3D polygon mesh for the animator to manipulate. A mesh typically includes many vertices that are connected by edges and faces, which give the visual appearance of form to a 3D object or 3D environment. Sometimes, the mesh is given an internal digital skeletal structure called an armature that can be used to control the mesh by weighting the vertices. This process is called rigging and can be used in conjunction with key frames to create movement.
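The vertex-weighting step can be sketched as linear blend skinning: each vertex follows a weighted combination of the transforms of the bones (armature joints) that influence it. A deliberately simplified 2D illustration with hypothetical bone names (production rigs use full 3D transformation matrices):

# Per-bone transforms, reduced here to plain translations.
bones = {"upper_arm": (0.0, 2.0), "forearm": (1.5, 0.5)}

# Each vertex carries weights that sum to 1 over the bones influencing it.
vertices = [
    {"pos": (0.0, 0.0), "weights": {"upper_arm": 1.0}},
    {"pos": (1.0, 0.0), "weights": {"upper_arm": 0.5, "forearm": 0.5}},
    {"pos": (2.0, 0.0), "weights": {"forearm": 1.0}},
]

def skin(vertex):
    # Deform a vertex by the weighted sum of its bones' translations.
    x, y = vertex["pos"]
    dx = sum(w * bones[b][0] for b, w in vertex["weights"].items())
    dy = sum(w * bones[b][1] for b, w in vertex["weights"].items())
    return (x + dx, y + dy)

print([skin(v) for v in vertices])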
Other techniques can be applied, such as mathematical functions (e.g., gravity, particle simulations), simulated fur or hair, and effects such as fire and water simulations. These techniques fall under the category of 3D dynamics.
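A particle simulation of the kind mentioned above reduces, at its core, to integrating an acceleration at each time step. A minimal explicit-Euler sketch (the constants are illustrative; production systems add collisions, drag, and more stable integrators):

GRAVITY = -9.81   # m/s^2, acting along the y axis
DT = 1.0 / 24.0   # one film frame per simulation step

particles = [{"y": 10.0, "vy": 0.0} for _ in range(3)]

for frame in range(24):  # simulate one second of free fall
    for p in particles:
        p["vy"] += GRAVITY * DT  # accumulate acceleration
        p["y"] += p["vy"] * DT   # advance position (explicit Euler)

print(particles[0])  # each particle has fallen roughly 5 m after 1 s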
Terms
Cel-shaded animation is used to mimic traditional animation using computer software. The shading looks stark, with less blending of colors; a minimal sketch of the underlying tone quantization follows this list of terms. Examples include Skyland (2007, France), The Iron Giant (1999, United States), Futurama (1999, United States), Appleseed Ex Machina (2007, Japan), The Legend of Zelda: The Wind Waker (2002, Japan), and The Legend of Zelda: Breath of the Wild (2017, Japan).
Machinima – Films created by screen capturing in video games and virtual worlds. The practice originated with the software introductions of the 1980s demoscene, as well as the 1990s recordings of the first-person shooter video game Quake.
Motion capture is used when live-action actors wear special suits that allow computers to copy their movements into CG characters. Examples include The Polar Express (2004, US), Beowulf (2007, US), A Christmas Carol (2009, US), The Adventures of Tintin (2011, US), and Kochadaiiyaan (2014, India).
Computer animation is used primarily for animation that attempts to resemble real life, using advanced rendering that mimics in detail skin, plants, water, fire, clouds, etc. Examples include Up (2009, US) and How to Train Your Dragon (2010, US).
Physically based animation is animation using computer simulations.
Analog animation is closely associated with the analog horror genre; it animates distorted, erupting signals driven by real-life audio sources, alongside cryptic messages and minimal visuals. Examples include Calls (2021).
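As noted above, the stark, banded look of cel shading comes from quantizing a continuous lighting value into a few flat tones instead of blending it smoothly. A minimal sketch of that quantization step (the band count and lighting model are illustrative assumptions):

def cel_shade(intensity, bands=3):
    # Quantize a light intensity in [0, 1] into one of `bands` flat tones.
    step = min(int(intensity * bands), bands - 1)
    return (step + 0.5) / bands  # midpoint tone of the band

# A smooth ramp of light collapses into three flat tones.
print([round(cel_shade(i / 10), 2) for i in range(11)])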
Mechanical
Animatronics is the use of mechatronics to create machines that seem animate rather than robotic.
Audio-Animatronics and Autonomatronics are forms of robotic animation, combined with 3-D animation, created by Walt Disney Imagineering for shows and attractions at Disney theme parks; the figures move and make noise (generally a recorded speech or song). They are fixed to whatever supports them: they can sit and stand, but they cannot walk. An Audio-Animatron is different from an android-type robot in that it uses prerecorded movements and sounds, rather than responding to external stimuli. In 2009, Disney created an interactive version of the technology called Autonomatronics.
Linear Animation Generator is a form of animation by using static picture frames installed in a tunnel or a shaft. The animation illusion is created by putting the viewer in a linear motion, parallel to the installed picture frames. The concept and the technical solution were invented in 2007 by Mihai Girlovan in Romania.
Chuckimation is a type of animation created by the makers of the television series Action League Now! in which characters/props are thrown, or "chucked", from off camera or wiggled around to simulate talking by unseen hands.
The magic lantern used mechanical slides to project moving images, probably since Christiaan Huygens invented this early image projector in 1659.
Other
Hydrotechnics: a technique that includes lights, water, fire, fog, and lasers, with high-definition projections on mist screens.
Drawn on film animation: a technique where footage is produced by creating the images directly on film stock; for example, by Norman McLaren, Len Lye and Stan Brakhage.
Paint-on-glass animation: a technique for making animated films by manipulating slow drying oil paints on sheets of glass, for example by Aleksandr Petrov.
Erasure animation: a technique using traditional 2D media, photographed over time as the artist manipulates the image. For example, William Kentridge is famous for his charcoal erasure films, and Piotr Dumała for his auteur technique of animating scratches on plaster.
Pinscreen animation: makes use of a screen filled with movable pins that can be moved in or out by pressing an object onto the screen. The screen is lit from the side so that the pins cast shadows. The technique has been used to create animated films with a range of textural effects difficult to achieve with traditional cel animation.
Sand animation: sand is moved around on a back- or front-lighted piece of glass to create each frame for an animated film. This creates an interesting effect when animated because of the light contrast.
Flip book: a flip book (sometimes, especially in British English, called a flick book) is a book with a series of pictures that vary gradually from one page to the next, so that when the pages are turned rapidly, the pictures appear to animate by simulating motion or some other change. Flip books are often illustrated books for children, but they may also be geared towards adults and employ a series of photographs rather than drawings. Flip books are not always separate books; they may appear as an added feature in ordinary books or magazines, often in the page corners. Software packages and websites are also available that convert digital video files into custom-made flip books.
Character animation
Multi-sketching
Special effects animation
See also
Animated war film
Animation department
Animated series
Anime
Architectural animation
Avar
Independent animation
International Animation Day
International Animated Film Association
International Tournée of Animation
List of film-related topics
Motion graphic design
Society for Animation Studies
Twelve basic principles of animation
Wire-frame model
External links
The making of an 8-minute cartoon short
"Animando", a 12-minute film demonstrating 10 different animation techniques (and teaching how to use them) (archived 1 October 2009).
https://en.wikipedia.org/wiki/Apollo
Apollo or Apollon is one of the Olympian deities in classical Greek and Roman religion and Greek and Roman mythology. Apollo has been recognized as a god of archery, music and dance, truth and prophecy, healing and diseases, the Sun and light, poetry, and more. One of the most important and complex of the Greek gods, he is the son of Zeus and Leto, and the twin brother of Artemis, goddess of the hunt. He is considered to be the most beautiful god and is represented as the ideal of the kouros (ephebe, or a beardless, athletic youth). Apollo is known in Greek-influenced Etruscan mythology as Apulu.
As the patron deity of Delphi (Apollo Pythios), Apollo is an oracular god—the prophetic deity of the Delphic Oracle and also the deity of ritual purification. His oracles were often consulted for guidance in various matters. He was in general seen as the god who affords help and wards off evil, and is referred to as the "averter of evil".
Medicine and healing are associated with Apollo, whether through the god himself or mediated through his son Asclepius. Apollo delivered people from epidemics, yet he is also a god who could bring ill health and deadly plague with his arrows. The invention of archery itself is credited to Apollo and his sister Artemis. Apollo is usually described as carrying a silver or golden bow and a quiver of silver or golden arrows.
As the god of mousike, Apollo presides over all music, songs, dance and poetry. He is the inventor of string-music and the frequent companion of the Muses, functioning as their chorus leader in celebrations. The lyre is a common attribute of Apollo. Protection of the young is one of the best attested facets of his panhellenic cult persona. As a kourotrophos, a protector of the young, Apollo is concerned with the health and education of children, and he presided over their passage into adulthood. Long hair, which was the prerogative of boys, was cut at the coming of age and dedicated to Apollo. The god himself is depicted with long, uncut hair to symbolise his eternal youth.
Apollo is an important pastoral deity, and was the patron of herdsmen and shepherds. Protection of herds, flocks and crops from diseases, pests and predators were his primary rustic duties. On the other hand, Apollo also encouraged the founding of new towns and the establishment of civil constitutions, is associated with dominion over colonists, and was the giver of laws. His oracles were often consulted before setting laws in a city. Apollo Agyieus was the protector of the streets, public places and home entrances.
In Hellenistic times, especially during the 3rd century BCE, as Apollo Helios he became identified among Greeks with Helios, the personification of the Sun. In Latin texts, however, there was no conflation of Apollo with Sol among the classical Latin poets until the 1st century CE. Apollo and Helios/Sol remained separate beings in literary and mythological texts until the 5th century CE.
Etymology
Apollo (Attic, Ionic, and Homeric Greek: Apollōn; Doric: Apellōn; Arcadocypriot: Apeilōn; Aeolic: Aploun; Latin: Apollō)
The name Apollo—unlike the related older name Paean—is generally not found in the Linear B (Mycenean Greek) texts, although there is a possible attestation in the lacunose form ]pe-rjo-[ on the KN E 842 tablet, though it has also been suggested that the name might actually read "Hyperion" ([u]-pe-rjo-[ne]).
The etymology of the name is uncertain. The spelling Apollōn (in Classical Attic) had almost superseded all other forms by the beginning of the common era, but the Doric form, Apellōn, is more archaic, as it is derived from an earlier *Apeljōn. It is probably a cognate of the Doric month Apellaios and of the offerings (apellaia) at the initiation of the young men during the family festival (apellai). According to some scholars, the words are derived from the Doric word apella, which originally meant "wall", "fence for animals" and later "assembly within the limits of the square". Apella is the name of the popular assembly in Sparta, corresponding to the ecclesia. R. S. P. Beekes rejected the connection of the theonym with the noun apellai and suggested a Pre-Greek proto-form *Apalyun.
Several instances of popular etymology are attested by ancient authors. Thus, the Greeks most often associated Apollo's name with the Greek verb apollymi, "to destroy". Plato in Cratylus connects the name with apolysis, "redemption", with apolousis, "purification", and with haploun, "simple", in particular in reference to the Thessalian form of the name, Aploun, and finally with aeiballon, "ever-shooting". Hesychius connects the name Apollo with the Doric apella, which means "assembly", so that Apollo would be the god of political life, and he also gives the explanation sekos, "fold", in which case Apollo would be the god of flocks and herds. In the ancient Macedonian language pella means "stone", and some toponyms may be derived from this word: Pella (the capital of ancient Macedonia) and Pellēnē (Pellene).
The Hittite form Apaliunas is attested in the Manapa-Tarhunta letter. The Hittite testimony reflects an early form *Apeljōn, which may also be surmised from the comparison of Cypriot Apeilōn with Doric Apellōn. The name of the Lydian god Qλdãns /kʷʎðãns/ may reflect an earlier /kʷalyán-/ before palatalization, syncope, and the pre-Lydian sound change *y > d. Note the labiovelar in place of the labial /p/ found in pre-Doric Ἀπέλjων and Hittite Apaliunas.
A Luwian etymology suggested for Apaliunas makes Apollo "The One of Entrapment", perhaps in the sense of "Hunter".
Greco-Roman epithets
Apollo's chief epithet was Phoebus (Phoibos), literally "bright". It was very commonly used by both the Greeks and Romans for Apollo's role as the god of light. Like other Greek deities, he had a number of other epithets applied to him, reflecting the variety of roles, duties, and aspects ascribed to the god. However, while Apollo has a great number of appellations in Greek myth, only a few occur in Latin literature.
Sun
Aegletes (Αἰγλήτης, Aiglētēs), from the Greek for "light of the Sun"
Helius (Helios), literally "Sun"
Lyceus (Lykeios), from a Proto-Greek word meaning "light". The meaning of the epithet "Lyceus" later became associated with Apollo's mother Leto, who was the patron goddess of Lycia and who was identified with the wolf.
Phanaeus (Phanaios), literally "giving or bringing light"
Phoebus (Phoibos), literally "bright", his most commonly used epithet by both the Greeks and Romans
Sol (Roman), "Sun" in Latin
Wolf
Lycegenes (Lukēgenēs), literally "born of a wolf" or "born of Lycia"
Lycoctonus (Lykoktonos), from the Greek for "wolf" and "to kill"
Origin and birth
Apollo's birthplace was Mount Cynthus on the island of Delos.
Cynthius (Kunthios), literally "Cynthian"
Cynthogenes (Kynthogenēs), literally "born of Cynthus"
Delius (Δήλιος, Delios), literally "Delian"
Didymaeus (Didymaios), from δίδυμος, "twin", as the twin of Artemis
Place of worship
Delphi and Actium were his primary places of worship.
Acraephius (Akraiphios, literally "Acraephian") or Acraephiaeus (Akraiphiaios), "Acraephian", from the Boeotian town of Acraephia, reputedly founded by his son Acraepheus.
Actiacus (Aktiakos), literally "Actian", after Actium
Delphinius (Delphinios), literally "Delphic", after Delphi (Δελφοί). An etiology in the Homeric Hymns associated this with dolphins.
Epactaeus, meaning "god worshipped on the coast", in Samos.
Pythius (Puthios, from Πυθώ, Pythō), from the region around Delphi
Smintheus (Smintheus), "Sminthian"—that is, "of the town of Sminthos or Sminthe" near the Troad town of Hamaxitus
Napaian Apollo (Ἀπόλλων Ναπαῖος), from the city of Nape on the island of Lesbos
Eutresites, from the city of Eutresis.
Healing and disease
Acesius (Akesios), from the Greek for "healing". Acesius was the epithet of Apollo worshipped in Elis, where he had a temple in the agora.
Acestor (Akestōr), literally "healer"
Culicarius (Roman), from Latin culicārius, "of midges"
Iatrus (Iātros), literally "physician"
Medicus (Roman), "physician" in Latin. A temple was dedicated to Apollo Medicus in Rome, probably next to the temple of Bellona.
Paean (Paiān), physician, healer
Parnopius (Parnopios), from the Greek for "locust"
Founder and protector
Agyieus (Aguīeus), from the Greek for "street", for his role in protecting roads and homes
Alexicacus (Alexikakos), literally "warding off evil"
Apotropaeus (Apotropaios), from the Greek for "to avert"
Archegetes (Arkhēgetēs), literally "founder"
Averruncus (Roman; from Latin āverruncare), "to avert"
Clarius (Klārios), from the Doric for "allotted lot"
Epicurius (Epikourios), from the Greek for "to aid"
Genetor (Genetōr), literally "ancestor"
Nomius (Nomios), literally "pastoral"
Nymphegetes (Numphēgetēs), from the Greek for "Nymph" and "leader", for his role as a protector of shepherds and pastoral life
Patroos, from the Greek for "related to one's father", for his role as father of Ion and founder of the Ionians, as worshipped at the Temple of Apollo Patroos in Athens
Sauroctunos, "lizard killer", possibly a reference to his killing of Python
Prophecy and truth
Coelispex (Roman), from Latin coelum, "sky", and specere, "to look at"
Iatromantis (Iātromantis), from the Greek for "physician" and "prophet", referring to his role as a god both of healing and of prophecy
Leschenorius (Leskhēnorios), from the Greek for "converser"
Loxias (Loxias), from the Greek for "to say", historically associated with the word for "ambiguous"
Manticus (Mantikos), literally "prophetic"
Proopsios, meaning "foreseer" or "first seen"
Music and arts
Musagetes (Doric Mousāgetās), from the Greek for "Muse" and "leader"
Musegetes (Mousēgetēs), as the preceding
Archery
Aphetor (Aphētōr), from the Greek for "to let loose"
Aphetorus (Aphētoros), as the preceding
Arcitenens (Roman), literally "bow-carrying"
Argyrotoxus (Argyrotoxos), literally "with silver bow"
Clytotoxus (Klytótoxos), "he who is famous for his bow", the renowned archer.
Hecaërgus (Hekaergos), literally "far-shooting"
Hecebolus (Hekēbolos), "far-shooting"
Ismenius (Ismēnios), literally "of Ismenus", after Ismenus, the son of Amphion and Niobe, whom he struck with an arrow
Appearance
Acersecomes (Akersekómēs), "he who has unshorn hair", the eternal ephebe.
Chrysocomes (Khrusokómēs), literally "he who has golden hair".
Amazons
Amazonius: Pausanias in the Description of Greece writes that near Pyrrhichus there was a sanctuary of Apollo called Amazonius, with an image of the god said to have been dedicated by the Amazons.
Celtic epithets and cult titles
Apollo was worshipped throughout the Roman Empire. In the traditionally Celtic lands, he was most often seen as a healing and sun god. He was often equated with Celtic gods of similar character.
Apollo Atepomarus ("the great horseman" or "possessing a great horse"). Apollo was worshipped at Mauvières (Indre). Horses were, in the Celtic world, closely linked to the Sun.
Apollo Belenus ("bright" or "brilliant"). This epithet was given to Apollo in parts of Gaul, Northern Italy and Noricum (part of modern Austria). Apollo Belenus was a healing and sun god.
Apollo Cunomaglus ("hound lord"). A title given to Apollo at a shrine at Nettleton Shrub, Wiltshire. May have been a god of healing. Cunomaglus himself may originally have been an independent healing god.
Apollo Grannus. Grannus was a healing spring god, later equated with Apollo.
Apollo Maponus. A god known from inscriptions in Britain. This may be a local fusion of Apollo and Maponus.
Apollo Moritasgus ("masses of sea water"). An epithet for Apollo at Alesia, where he was worshipped as the god of healing and, possibly, of physicians.
Apollo Vindonnus ("clear light"). Apollo Vindonnus had a temple at Essarois, near Châtillon-sur-Seine in present-day Burgundy. He was a god of healing, especially of the eyes.
Apollo Virotutis ("benefactor of mankind"). Apollo Virotutis was worshipped, among other places, at Fins d'Annecy (Haute-Savoie) and at Jublains (Maine-et-Loire).
Origins
Apollo is considered the most Hellenic (Greek) of the Olympian gods.
The cult centers of Apollo in Greece, Delphi and Delos, date from the 8th century BCE. The Delos sanctuary was primarily dedicated to Artemis, Apollo's twin sister. At Delphi, Apollo was venerated as the slayer of the monstrous serpent Python. For the Greeks, Apollo was the most Greek of all the gods, and through the centuries he acquired different functions. In Archaic Greece he was the prophet, the oracular god who in older times was connected with "healing". In Classical Greece he was the god of light and of music, but in popular religion he had a strong function to keep away evil. Walter Burkert discerned three components in the prehistory of Apollo worship, which he termed "a Dorian-northwest Greek component, a Cretan-Minoan component, and a Syro-Hittite component."
Healer and god-protector from evil
In classical times, his major function in popular religion was to keep away evil, and he was therefore called "apotropaios" ("averting evil") and "alexikakos" ("keeping off ill"). Apollo also had many epithets relating to his function as a healer. Some commonly-used examples are "paion" (literally "healer" or "helper"), "epikourios" ("succouring"), "oulios" ("healer, baleful") and "loimios" ("of the plague"). In later writers, the word "paion", usually spelled "Paean", becomes a mere epithet of Apollo in his capacity as a god of healing.
Apollo in his aspect of "healer" has a connection to the primitive god Paean, who did not have a cult of his own. Paean serves as the healer of the gods in the Iliad, and seems to have originated in a pre-Greek religion. It is suggested, though unconfirmed, that he is connected to the Mycenaean figure pa-ja-wo-ne, attested in Linear B. Paean was the personification of holy songs sung by "seer-doctors", which were supposed to cure disease.
Homer used the word Paeon to refer both to the god and to the song of apotropaic thanksgiving or triumph. Such songs were originally addressed to Apollo and afterwards to other gods: to Dionysus, to Apollo Helios, to Apollo's son Asclepius the healer. About the 4th century BCE, the paean became merely a formula of adulation; its object was either to implore protection against disease and misfortune or to offer thanks after such protection had been rendered. It was in this way that Apollo had become recognized as the god of music. Apollo's role as the slayer of the Python led to his association with battle and victory; hence it became the Roman custom for a paean to be sung by an army on the march and before entering into battle, when a fleet left the harbour, and also after a victory had been won.
In the Iliad, Apollo is the healer among the gods, but he is also the bringer of disease and death with his arrows, similar to the function of the Vedic god of disease Rudra. He sends a plague to the Achaeans. Knowing that Apollo can prevent a recurrence of the plague he sent, they purify themselves in a ritual and offer him a large sacrifice of cows, called a hecatomb.
Dorian origin
The Homeric Hymn to Apollo depicts Apollo as an intruder from the north. The connection with the northern-dwelling Dorians and their initiation festival apellai is reinforced by the month Apellaios in northwest Greek calendars. The family festival apellai was dedicated to Apollo (Doric: Apellon). Apellaios is the month of these rites, and Apellon is the "megistos kouros" (the great Kouros). However, this explains only the Doric form of the name, which is connected with the Ancient Macedonian word pella, "stone". Stones played an important part in the cult of the god, especially in the oracular shrine of Delphi (Omphalos).
Minoan origin
George Huxley regarded the identification of Apollo with the Minoan deity Paiawon, worshipped in Crete, to have originated at Delphi. In the Homeric Hymn, Apollo appeared as a dolphin and carried Cretan priests to Delphi, where they evidently transferred their religious practices. Apollo Delphinios or Delphidios was a sea-god especially worshipped in Crete and in the islands. Apollo's sister Artemis, who was the Greek goddess of hunting, is identified with Britomartis (Diktynna), the Minoan "Mistress of the animals". In her earliest depictions she was accompanied by the "Master of the animals", a bow-wielding god of hunting whose name has been lost; aspects of this figure may have been absorbed into the more popular Apollo.
Anatolian origin
A non-Greek origin of Apollo has long been assumed in scholarship. The name of Apollo's mother Leto has Lydian origin, and she was worshipped on the coasts of Asia Minor. The oracular cult of inspiration was probably introduced into Greece from Anatolia, which is the origin of Sibyl, and where some of the oldest oracular shrines originated. Omens, symbols, purifications, and exorcisms appear in old Assyro-Babylonian texts. These rituals spread into the empire of the Hittites, and from there into Greece.
Homer pictures Apollo on the side of the Trojans, fighting against the Achaeans, during the Trojan War. He is pictured as a terrible god, less trusted by the Greeks than other gods. The god seems to be related to Appaliunas, a tutelary god of Wilusa (Troy) in Asia Minor, but the word is not complete. The stones found in front of the gates of Homeric Troy were the symbols of Apollo. A western Anatolian origin may also be bolstered by references to the parallel worship of Artimus (Artemis) and Qλdãns, whose name may be cognate with the Hittite and Doric forms, in surviving Lydian texts. However, recent scholars have cast doubt on the identification of Qλdãns with Apollo.
The Greeks gave him the name agyieus as the protector god of public places and houses who wards off evil, and his symbol was a tapered stone or column. However, while Greek festivals were usually celebrated at the full moon, all the feasts of Apollo were celebrated on the seventh day of the month, and the emphasis given to that day (sibutu) indicates a Babylonian origin.
The Late Bronze Age (from 1700 to 1200 BCE) Hittite and Hurrian Aplu was a god of plague, invoked during plague years. Here we have an apotropaic situation, where a god originally bringing the plague was invoked to end it. Aplu, meaning "the son of", was a title given to the god Nergal, who was linked to the Babylonian sun god Shamash. Homer interprets Apollo as a terrible god who brings death and disease with his arrows, but who can also heal, possessing a magic art that separates him from the other Greek gods. In the Iliad, his priest prays to Apollo Smintheus, the mouse god who retains an older agricultural function as the protector from field rats. All these functions, including the function of the healer-god Paean, who seems to have a Mycenean origin, are fused in the cult of Apollo.
Proto-Indo-European
The Vedic Rudra has some functions similar to those of Apollo. The terrible god is called "the archer", and the bow is also an attribute of Shiva. Rudra could bring diseases with his arrows, but he was able to free people of them, and his alternative form Shiva is a healer and physician god. However, the Indo-European component of Apollo does not explain his strong relation to omens, exorcisms, and the oracular cult.
Oracular cult
Unusually among the Olympic deities, Apollo had two cult sites that had widespread influence: Delos and Delphi. In cult practice, Delian Apollo and Pythian Apollo (the Apollo of Delphi) were so distinct that they might both have shrines in the same locality. Lycia was sacred to the god, and for this reason Apollo was also called Lycian. Apollo's cult was already fully established when written sources commenced, about 650 BCE. Apollo became extremely important to the Greek world as an oracular deity in the archaic period, and the frequency of theophoric names such as Apollodorus or Apollonios and cities named Apollonia testify to his popularity. Oracular sanctuaries to Apollo were established in other sites. In the 2nd and 3rd century CE, those at Didyma and Claros pronounced the so-called "theological oracles", in which Apollo confirms that all deities are aspects or servants of an all-encompassing, highest deity. "In the 3rd century, Apollo fell silent. Julian the Apostate (359–361) tried to revive the Delphic oracle, but failed."
Oracular shrines
Apollo had a famous oracle in Delphi, and other notable ones in Claros and Didyma. His oracular shrine in Abae in Phocis, where he bore the toponymic epithet Abaeus (, Apollon Abaios), was important enough to be consulted by Croesus.
His oracular shrines include:
Abae in Phocis.
Bassae in the Peloponnese.
At Clarus, on the west coast of Asia Minor; as at Delphi, there was a holy spring which gave off a pneuma, from which the priests drank.
In Corinth, the Oracle of Corinth came from the town of Tenea, from prisoners supposedly taken in the Trojan War.
At Khyrse, in Troad, the temple was built for Apollo Smintheus.
In Delos, there was an oracle to the Delian Apollo during summer. The Hieron (Sanctuary) of Apollo, adjacent to the Sacred Lake, was the place where the god was said to have been born.
In Delphi, the Pythia became filled with the pneuma of Apollo, said to come from a spring inside the Adyton.
In Didyma, an oracle on the coast of Anatolia, southwest of Lydian (Luwian) Sardis, in which priests from the lineage of the Branchidae received inspiration by drinking from a healing spring located in the temple. It was believed to have been founded by Branchus, son or lover of Apollo.
In Hierapolis Bambyce, Syria (modern Manbij), according to the treatise De Dea Syria, the sanctuary of the Syrian Goddess contained a robed and bearded image of Apollo. Divination was based on spontaneous movements of this image.
At Patara, in Lycia, there was a seasonal winter oracle of Apollo, said to have been the place where the god went from Delos. As at Delphi the oracle at Patara was a woman.
In Segesta in Sicily.
Oracles were also given by sons of Apollo.
In Oropus, north of Athens, the oracle Amphiaraus was said to be the son of Apollo; Oropus also had a sacred spring.
In Labadea, east of Delphi, Trophonius, another son of Apollo, killed his brother and fled to the cave where he was afterwards also consulted as an oracle.
Temples of Apollo
Many temples were dedicated to Apollo in Greece and the Greek colonies. They show the spread of the cult of Apollo and the evolution of Greek architecture, which was mostly based on the rightness of form and on mathematical relations. Some of the earliest temples, especially in Crete, do not belong to any Greek order. It seems that the first peripteral temples were rectangular wooden structures. The different wooden elements were considered divine, and their forms were preserved in the marble or stone elements of the temples of Doric order. The Greeks used standard types because they believed that the world of objects was a series of typical forms which could be represented in several instances. The temples should be canonic, and the architects were trying to achieve this esthetic perfection. From the earliest times there were certain rules strictly observed in rectangular peripteral and prostyle buildings. The first buildings were built narrowly in order to hold the roof, and when the dimensions changed some mathematical relations became necessary in order to keep the original forms. This probably influenced the theory of numbers of Pythagoras, who believed that behind the appearance of things there was the permanent principle of mathematics.
The Doric order dominated during the 6th and the 5th century BC, but there was a mathematical problem regarding the position of the triglyphs, which could not be solved without changing the original forms. The order was almost abandoned for the Ionic order, but the Ionic capital also posed an insoluble problem at the corner of a temple. Both orders were gradually abandoned for the Corinthian order during the Hellenistic age and under Rome.
The most important temples are:
Greek temples
Thebes, Greece: The oldest temple, probably dedicated to Apollo Ismenius, was built in the 9th century BC. It seems that it was a curvilinear building. The Doric temple was built in the early 7th century BC, but only some small parts have been found. A festival called Daphnephoria was celebrated every ninth year in honour of Apollo Ismenius (or Galaxius). The people held laurel branches (daphnai), and at the head of the procession walked a youth (the chosen priest of Apollo), who was called "daphnephoros".
Eretria: According to the Homeric hymn to Apollo, the god arrived at the plain, seeking a location to establish his oracle. The first temple of Apollo Daphnephoros, "Apollo, laurel-bearer", or "carrying off Daphne", is dated to 800 BC. The temple was a curvilinear hecatompedon (a hundred feet long). In a smaller building were kept the bases of the laurel branches which were used for the first building. Another temple, probably peripteral, was built in the 7th century BC, with an inner row of wooden columns over its Geometric predecessor. It was rebuilt peripteral around 510 BC, with the stylobate measuring 21.00 x 43.00 m. The number of pteron columns was 6 x 14.
Dreros (Crete). The temple of Apollo Delphinios dates from the 7th century BC, or probably from the middle of the 8th century BC. According to the legend, Apollo appeared as a dolphin and carried Cretan priests to the port of Delphi. The dimensions of the plan are 10.70 x 24.00 m and the building was not peripteral. It contains column-bases of the Minoan type, which may be considered the predecessors of the Doric columns.
Gortyn (Crete). A temple of Pythian Apollo was built in the 7th century BC. The plan measured 19.00 x 16.70 m and it was not peripteral. The walls were solid, made from limestone, and there was a single door on the east side.
Thermon (West Greece): The Doric temple of Apollo Thermios was built in the middle of the 7th century BC. It was built on an older curvilinear building dating perhaps from the 10th century BC, on which a peristyle was added. The temple was narrow, and the number of pteron columns (probably wooden) was 5 x 15. There was a single row of inner columns. It measures 12.13 x 38.23 m at the stylobate, which was made from stones.
Corinth: A Doric temple was built in the 6th century BC. The temple's stylobate measures 21.36 x 53.30 m, and the number of pteron columns was 6 x 15. There was a double row of inner columns. The style is similar to the Temple of Alcmeonidae at Delphi. The Corinthians were considered to be the inventors of the Doric order.
Napes (Lesbos): An Aeolic temple probably of Apollo Napaios was built in the 7th century BC. Some special capitals with floral ornament have been found, which are called Aeolic, and it seems that they were borrowed from the East.
Cyrene, Libya: The oldest Doric temple of Apollo was built in . The number of pteron columns was 6 x 11, and it measures 16.75 x 30.05 m at the stylobate. There was a double row of sixteen inner columns on stylobates. The capitals were made from stone.
Naukratis: An Ionic temple was built in the early 6th century BC. Only some fragments have been found and the earlier ones, made from limestone, are identified among the oldest of the Ionic order.
Syracuse, Sicily: A Doric temple was built at the beginning of the 6th century BC. The temple's stylobate measures 21.47 x 55.36 m and the number of pteron columns was 6 x 17. It was the first temple in the Greek west built completely out of stone. A second row of columns was added, obtaining the effect of an inner porch.
Selinus (Sicily): The Doric Temple C dates from 550 BC, and it was probably dedicated to Apollo. The temple's stylobate measures 10.48 x 41.63 m and the number of pteron columns was 6 x 17. There was a portico with a second row of columns, which is also attested for the temple at Syracuse.
Delphi: The first temple dedicated to Apollo was built in the 7th century BC. According to the legend, it was a wooden structure made of laurel branches. The "Temple of Alcmeonidae" was built in and it is the oldest Doric temple with significant marble elements. The temple's stylobate measures 21.65 x 58.00 m, and the number of pteron columns was 6 x 15. A festival similar to Apollo's festival at Thebes, Greece, was celebrated every nine years. A boy was sent to the temple, who walked on the sacred road and returned carrying a laurel branch (daphnephoros). The maidens participated with joyful songs.
Chios: An Ionic temple of Apollo Phanaios was built at the end of the 6th century BC. Only some small parts have been found and the capitals had floral ornament.
Abae (Phocis). The temple was destroyed by the Persians in the invasion of Xerxes in 480 BC, and later by the Boeotians. It was rebuilt by Hadrian. The oracle was in use from early Mycenaean times to the Roman period, and shows the continuity of Mycenaean and Classical Greek religion.
Bassae (Peloponnesus): A temple dedicated to Apollo Epikourios ("Apollo the helper"), was built in 430 BC, designed by Iktinos. It combined Doric and Ionic elements, and the earliest use of a column with a Corinthian capital in the middle. The temple is of a relatively modest size, with the stylobate measuring 14.5 x 38.3 metres containing a Doric peristyle of 6 x 15 columns. The roof left a central space open to admit light and air.
Delos: A temple probably dedicated to Apollo, not peripteral, was built in the late 7th century BC, with a plan measuring 10.00 x 15.60 m. The Doric Great Temple of Apollo was built in . The temple's stylobate measures 13.72 x 29.78 m, and the number of pteron columns was 6 x 13. Marble was extensively used.
Ambracia: A Doric peripteral temple dedicated to Apollo Pythios Sotir was built in 500 BC, at the centre of the Greek city Arta. Only some parts have been found, and it seems that the temple was built on earlier sanctuaries dedicated to Apollo. The temple measures 20.75 x 44.00 m at the stylobate. The foundation which supported the statue of the god still exists.
Didyma (near Miletus): The gigantic Ionic temple of Apollo Didymaios started around 540 BC. The construction ceased and then it was restarted in 330 BC. The temple is dipteral, with an outer row of 10 x 21 columns, and it measures 28.90 x 80.75 m at the stylobate.
Clarus (near ancient Colophon): According to the legend, the famous seer Calchas, on his return from Troy, came to Clarus. He challenged the seer Mopsus and died when he lost. The Doric temple of Apollo Clarius was probably built in the 3rd century BC, and it was peripteral with 6 x 11 columns. It was reconstructed at the end of the Hellenistic period, and later by the emperor Hadrian, but Pausanias claims that it was still incomplete in the 2nd century AD.
Hamaxitus (Troad): In the Iliad, Chryses, the priest of Apollo, addresses the god with the epithet Smintheus (Lord of Mice), related to the god's ancient role as bringer of disease (plague). Recent excavations indicate that the Hellenistic temple of Apollo Smintheus was constructed in 150–125 BC, but the symbol of the mouse god was used on coinage probably from the 4th century BC. The temple measures 40.00 x 23.00 m at the stylobate, and the number of pteron columns was 8 x 14.
Pythion: this was the name of a shrine of Apollo at Athens near the Ilisos river. It was created by Peisistratos, and tripods were placed there by those who had won in the cyclic chorus at the Thargelia.
Setae (Lydia): The temple of Apollo Aksyros located in the city.
Apollonia Pontica: There were two temples of Apollo Healer in the city. One from the Late Archaic period and the other from the Early Classical period.
Ikaros island in the Persian Gulf (modern Failaka Island): There was a temple of Apollo on the island.
Argos in Cyprus: There was a temple of Apollo Erithios (Ἐριθίου Ἀπόλλωνος ἱερῷ).
The temple and oracle of Apollo at Eutresis.
Etruscan and Roman temples
Veii (Etruria): The temple of Apollo was built in the late 6th century BC, indicating the spread of Apollo's cult (as Aplu) in Etruria. There was a prostyle porch, which is called Tuscan, and a triple cella 18.50 m wide.
Falerii Veteres (Etruria): A temple of Apollo was built probably in the 4th-3rd century BC. Parts of a terracotta capital, and a terracotta base have been found. It seems that the Etruscan columns were derived from the archaic Doric. A cult of Apollo Soranus is attested by one inscription found near Falerii.
Pompeii (Italy): The cult of Apollo was widespread in the region of Campania since the 6th century BC. The temple was built in 120 BC, but its beginnings lie in the 6th century BC. It was reconstructed after an earthquake in AD 63. It demonstrates a mixing of styles which formed the basis of Roman architecture. The columns in front of the cella formed a Tuscan prostyle porch, and the cella is situated unusually far back. The peripteral colonnade of 48 Ionic columns was placed in such a way that the emphasis was given to the front side.
Rome: The temple of Apollo Sosianus and the temple of Apollo Medicus. The first temple building dates to 431 BC, and was dedicated to Apollo Medicus (the doctor), after a plague of 433 BC. It was rebuilt by Gaius Sosius, probably in 34 BC. Only three columns with Corinthian capitals exist today. It seems that the cult of Apollo had existed in this area since at least the mid-5th century BC.
Rome: The temple of Apollo Palatinus was located on the Palatine hill within the sacred boundary of the city. It was dedicated by Augustus in 28 BC. The façade of the original temple was Ionic and it was constructed from solid blocks of marble. Many famous statues by Greek masters were on display in and around the temple, including a marble statue of the god at the entrance and a statue of Apollo in the cella.
Melite (modern Mdina, Malta): A Temple of Apollo was built in the city in the 2nd century AD. Its remains were discovered in the 18th century, and many of its architectural fragments were dispersed among private collections or reworked into new sculptures. Parts of the temple's podium were rediscovered in 2002.
Mythology
In the myths, Apollo is the son of Zeus, the king of the gods, and Leto, his previous wife or one of his mistresses. Apollo often appears in the myths, plays and hymns either directly or indirectly through his oracles. As Zeus' favorite son, he had direct access to the mind of Zeus and was willing to reveal this knowledge to humans. A divinity beyond human comprehension, he appears both as a beneficial and a wrathful god.
Birth
Homeric Hymn to Apollo
Pregnant with the offspring of Zeus, Leto wandered through many lands wanting to give birth to Apollo. However, all the lands rejected her out of fear. Upon reaching Delos, Leto requested the island to shelter her, promising that in return her son would bring fame and prosperity to the island. Delos then revealed to Leto that Apollo was rumoured to be the god who would "greatly lord it among gods and men all over the fruitful earth". For this reason, all the lands were fearful, and Delos feared that Apollo would cast her aside once he was born. Hearing this, Leto swore on the river Styx that if she were allowed to give birth on the island, her son would honour Delos the most amongst all the other lands. Assured by this, Delos agreed to assist Leto. All goddesses except Hera then came to aid Leto.
However, Hera had tricked Eileithyia, the goddess of childbirth, into staying on Olympus, due to which Leto was unable to give birth. The goddesses then convinced Iris to go and bring Eileithyia by offering her a necklace of amber 9 yards (8.2 m) long. Iris did accordingly and persuaded Eileithyia to step onto the island. Thus, clutching a palm tree, Leto finally gave birth after labouring for nine days and nine nights, with Apollo "leaping forth" from his mother's womb. The goddesses washed the newborn, covered him in a white garment and fastened golden bands around him. As Leto was unable to feed him, Themis, the goddess of divine law, fed him nectar and ambrosia. Upon tasting the divine food, the child broke free of the bands fastened onto him and declared that he would be the master of the lyre and archery, and would interpret the will of Zeus to humankind. He then started to walk, which caused the island to be filled with gold.
Callimachus' hymn to Delos
The island Delos used to be a woman named Asteria, who jumped into the waters to escape the advances of Zeus and became a free-floating island of the same name. When Leto got pregnant, Hera was told that Leto would give birth to a son who would become dearer to Zeus than Ares. Enraged by this, Hera watched over the heavens and sent out Ares and Iris to prevent Leto from giving birth on the earth. Ares stationed himself over the mainland and Iris over the islands; they both threatened all the lands and prevented them from helping Leto.
When Leto arrived at Thebes, the fetal Apollo prophesied from his mother's womb that in the future he would punish a slanderous woman in Thebes (Niobe), and so he did not want to be born there. Leto then went to Thessaly and sought out the help of the river nymphs, whose father was the river Peneus. Though he was initially fearful and reluctant, Peneus later decided to let Leto give birth in his waters. He did not change his mind even when Ares produced a terrifying sound and threatened to hurl mountain peaks into the river. But in the end, Leto declined his help, as she did not want him to suffer for her sake.
After being turned away from various lands, Apollo spoke again from the womb, asking Leto to look at the floating island in front of her and expressing his wish to be born there. When Leto approached Asteria, all the other islands fled. But Asteria welcomed Leto without any fear of Hera. Walking on the island, she sat down against a palm tree and asked Apollo to be born. During the childbirth, swans circled the island seven times, which is why Apollo would later play the seven-stringed lyre. When Apollo finally "leapt forth" from his mother's womb, the nymphs of the island sang a hymn to Eileithyia that was heard up to the heavens. The moment Apollo was born, the entire island, including the trees and the waters, turned to gold. Asteria bathed the newborn, swaddled him and fed him with her breast milk. Since then, the island became rooted and was called Delos.
Hera was no longer angry, as Zeus had managed to calm her down, and she held no grudge against Asteria because Asteria had rejected Zeus in the past.
Pindar's fragments
Pindar is the earliest source who explicitly calls Apollo and Artemis twins. Here, Asteria is also stated to be Leto's sister. Wanting to escape Zeus' advances, she flung herself into the sea and became a floating rock called Ortygia until the twins were born. When Leto stepped on the rock, four pillars with adamantine bases rose from the earth and held up the rock. When Apollo and Artemis were born, their bodies shone radiantly, and a chant was sung by Eileithyia and Lachesis, one of the three Moirai.
Pseudo-Hyginus
Scorning the advances of Zeus, Asteria transformed herself into a bird and jumped into the sea. From her, an island rose which was called Ortygia.
When Hera discovered that Leto was pregnant with Zeus' child, she decreed that Leto could give birth only in a place where the sun did not shine. During this time, the monster Python also started hounding Leto with the intent of killing her, because he had foreseen his death coming at the hands of Leto's offspring. However, on Zeus' orders, Boreas carried Leto away and entrusted her to Poseidon. To protect her, Poseidon took her to the island of Ortygia and covered it with waves so that the sun would not shine on it. Leto gave birth clinging to an olive tree, and henceforth the island was called Delos.
Other variations of Apollo's birth include:
Aelian states that it took Leto twelve days and twelve nights to travel from Hyperborea to Delos. Leto changed herself into a she-wolf before giving birth. This is given as the reason why Homer describes Apollo as the "wolf-born god".
Libanius wrote that neither land nor visible islands would receive Leto, but by the will of Zeus Delos then became visible, and thus received Leto and the children.
According to Strabo, the Curetes helped Leto by creating loud noises with their weapons; by thus frightening Hera, they concealed Leto's childbirth.
Theognis wrote that the island was filled with ambrosial fragrance when Apollo was born, and the Earth laughed with joy.
While in some accounts Apollo's birth itself fixed the floating Delos to the earth, there are accounts of Apollo securing Delos to the bottom of the ocean a little while later.
This island became sacred to Apollo and was one of the major cult centres of the god.
Apollo was born on the seventh day (hebdomagenes) of the month Thargelion—according to Delian tradition—or of the month Bysios—according to Delphian tradition. The seventh and twentieth, the days of the new and full moon, were ever afterwards held sacred to him.
The general consensus is that Artemis was born first and subsequently assisted with the birth of Apollo.
Hyperborea
Hyperborea, the mystical land of eternal spring, venerated Apollo above all the gods. The Hyperboreans always sang and danced in his honor and hosted Pythian games. There, a vast forest of beautiful trees was called "the garden of Apollo". Apollo spent the winter months among the Hyperboreans, leaving his shrine in Delphi under the care of Dionysus. His absence from the world caused coldness and this was marked as his annual death. No prophecies were issued during this time. He returned to the world during the beginning of the spring. The Theophania festival was held in Delphi to celebrate his return.
However, Diodorus Siculus states that Apollo visited Hyperborea every nineteen years. This nineteen-year period was called by the Greeks the "year of Meton", the time period in which the stars returned to their initial positions. During these visits, Apollo played on the cithara and danced continuously from the vernal equinox until the rising of the Pleiades.
Hyperborea was also Leto's birthplace. It is said that Leto came to Delos from Hyperborea accompanied by a pack of wolves. Henceforth, Hyperborea became Apollo's winter home and wolves became sacred to him. His intimate connection to wolves is evident from his epithet Lyceus, meaning wolf-like. But Apollo was also the wolf-slayer in his role as the god who protected flocks from predators. The Hyperborean worship of Apollo bears the strongest marks of Apollo being worshipped as the sun god. Shamanistic elements in Apollo's cult are often linked to his Hyperborean origin, and he is likewise speculated to have originated as a solar shaman. Shamans like Abaris and Aristeas, who hailed from Hyperborea, were also followers of Apollo.
In myths, the amber tears Apollo shed when his son Asclepius died mixed with the waters of the river Eridanos, which surrounded Hyperborea. Apollo also buried in Hyperborea the arrow which he had used to kill the Cyclopes. He later gave this arrow to Abaris.
Childhood and youth
Growing up, Apollo was nursed by the nymphs Korythalia and Aletheia, the personification of truth. As a child, Apollo is said to have built a foundation and an altar on Delos using the horns of the goats that his sister Artemis hunted. Since he learnt the art of building when young, he later came to be known as Archegetes, the founder (of towns) and a god who guided men to build new cities. From his father Zeus, Apollo had also received a golden chariot drawn by swans.
In his early years, when Apollo spent his time herding cows, he was reared by the Thriae, the bee nymphs, who trained him and enhanced his prophetic skills. Apollo is also said to have invented the lyre, and along with Artemis, the art of archery. He then taught humans the art of healing and archery. Phoebe, his grandmother, gave the oracular shrine of Delphi to Apollo as a birthday gift. Themis inspired him to be the oracular voice of Delphi thereon.
Python
Python, a chthonic serpent-dragon, was a child of Gaia and the guardian of the Delphic Oracle, whose death was foretold by Apollo when he was still in Leto's womb. Python was the nurse of the giant Typhon. In most of the traditions, Apollo was still a child when he killed Python.
Python was sent by Hera to hunt the pregnant Leto to death, and assaulted her. To avenge the trouble given to his mother, Apollo went in search of Python and killed it in the sacred cave at Delphi with the bow and arrows that he had received from Hephaestus. The Delphian nymphs who were present encouraged Apollo during the battle with the cry "Hie Paean". After Apollo was victorious, they also brought him gifts and gave the Corycian cave to him. According to Homer, Apollo encountered and killed the Python when he was looking for a place to establish his shrine.
According to another version, when Leto was in Delphi, Python attacked her. Apollo defended his mother and killed Python. Euripides, in his Iphigenia in Tauris, gives an account of his fight with Python and the event's aftermath.
A detailed account of Apollo's conflict with Gaia and Zeus' intervention on behalf of his young son is also given.
Apollo also demanded that all other methods of divination be made inferior to his, a wish that Zeus granted him readily. Because of this, Athena, who had been practicing divination by throwing pebbles, cast her pebbles away in displeasure.
However, Apollo had committed a blood murder and had to be purified. Because Python was a child of Gaia, Gaia wanted Apollo to be banished to Tartarus as a punishment. Zeus did not agree, and instead exiled his son from Olympus and instructed him to get purified. Apollo had to serve as a slave for nine years. After the servitude was over, as per his father's order, he travelled to the Vale of Tempe to bathe in the waters of Peneus. There Zeus himself performed purificatory rites on Apollo. Purified, Apollo was escorted by his half-sister Athena to Delphi, where the oracular shrine was finally handed over to him by Gaia. According to a variation, Apollo had also travelled to Crete, where Carmanor purified him. Apollo later established the Pythian games to propitiate Gaia. Henceforth, Apollo became the god who cleansed himself from the sin of murder, made men aware of their guilt, and purified them.
Soon after, Zeus instructed Apollo to go to Delphi and establish his law. But Apollo, disobeying his father, went to the land of Hyperborea and stayed there for a year. He returned only after the Delphians sang hymns to him and pleaded with him to come back. Zeus, pleased with his son's integrity, gave Apollo the seat next to him on his right side. He also gave Apollo various gifts, like a golden tripod, a golden bow and arrows, a golden chariot and the city of Delphi.
Soon after his return, Apollo needed to recruit people to Delphi. So, when he spotted a ship sailing from Crete, he sprang aboard in the form of a dolphin. The crew was awed into submission and followed a course that led the ship to Delphi. There Apollo revealed himself as a god. Initiating them into his service, he instructed them to keep righteousness in their hearts. The Pythia was Apollo's high priestess and his mouthpiece through whom he gave prophecies, and was arguably his constant favorite among the mortals.
Tityos
Hera once again sent a giant, Tityos, to rape Leto. This time Apollo shot him with his arrows and attacked him with his golden sword. According to another version, Artemis also aided him in protecting their mother by attacking Tityos with her arrows. After the battle, Zeus finally intervened and hurled Tityos down to Tartarus. There, he was pegged to the rock floor, his body stretched over a vast area, where a pair of vultures feasted daily on his liver.
Admetus
Admetus was the king of Pherae, who was known for his hospitality. When Apollo was exiled from Olympus for killing Python, he served as a herdsman under Admetus, who was then young and unmarried. Apollo is said to have shared a romantic relationship with Admetus during his stay. After completing his years of servitude, Apollo went back to Olympus as a god.
Because Admetus had treated Apollo well, the god conferred great benefits on him in return. Apollo's mere presence is said to have made the cattle give birth to twins. Apollo helped Admetus win the hand of Alcestis, the daughter of King Pelias, by taming a lion and a boar to draw Admetus' chariot. He was present during their wedding to give his blessings. When Admetus angered the goddess Artemis by forgetting to give her the due offerings, Apollo came to the rescue and calmed his sister. When Apollo learnt of Admetus' approaching death, he convinced or tricked the Fates into letting Admetus live past his time.
According to another version, or perhaps some years later, when Zeus struck down Apollo's son Asclepius with a lightning bolt for resurrecting the dead, Apollo in revenge killed the Cyclopes, who had fashioned the bolt for Zeus. Apollo would have been banished to Tartarus for this, but his mother Leto intervened, and reminding Zeus of their old love, pleaded with him not to kill their son. Zeus obliged and sentenced Apollo to one year of hard labor once again under Admetus.
The love between Apollo and Admetus was a favored topic of Roman poets like Ovid and Servius.
Niobe
The fate of Niobe was prophesied by Apollo while he was still in Leto's womb. Niobe was the queen of Thebes and wife of Amphion. She displayed hubris when she boasted that she was superior to Leto because she had fourteen children (Niobids), seven male and seven female, while Leto had only two. She further mocked Apollo's effeminate appearance and Artemis' manly appearance. Leto, insulted by this, told her children to punish Niobe. Accordingly, Apollo killed Niobe's sons, and Artemis her daughters. According to some versions of the myth, among the Niobids, Chloris and her brother Amyclas were not killed because they prayed to Leto. Amphion, at the sight of his dead sons, either killed himself or was killed by Apollo after swearing revenge.
A devastated Niobe fled to Mount Sipylos in Asia Minor and turned into stone as she wept. Her tears formed the river Achelous. Zeus had turned all the people of Thebes to stone and so no one buried the Niobids until the ninth day after their death, when the gods themselves entombed them.
When Chloris married and had children, Apollo granted her son Nestor the years he had taken away from the Niobids. Hence, Nestor was able to live for three generations.
Building the walls of Troy
Once Apollo and Poseidon served under the Trojan king Laomedon in accordance with Zeus' words. Apollodorus states that the gods willingly went to the king disguised as humans in order to check his hubris. Apollo guarded the cattle of Laomedon in the valleys of Mount Ida, while Poseidon built the walls of Troy. Other versions make both Apollo and Poseidon the builders of the wall. In Ovid's account, Apollo completes his task by playing his tunes on his lyre.
In Pindar's odes, the gods took a mortal named Aeacus as their assistant. When the work was completed, three snakes rushed against the wall, and though the two that attacked the sections of the wall built by the gods fell down dead, the third forced its way into the city through the portion of the wall built by Aeacus. Apollo immediately prophesied that Troy would fall at the hands of Aeacus' descendants, the Aeacidae: his son Telamon joined Heracles when he besieged the city during Laomedon's rule, and later his great-grandson Neoptolemus was present in the wooden horse that led to the downfall of Troy.
However, the king not only refused to give the gods the wages he had promised, but also threatened to bind their feet and hands, and sell them as slaves. Angered by the unpaid labour and the insults, Apollo infected the city with a pestilence and Poseidon sent the sea monster Cetus. To deliver the city from it, Laomedon had to sacrifice his daughter Hesione (who would later be saved by Heracles).
During his stay in Troy, Apollo had a lover named Ourea, who was a nymph and daughter of Poseidon. Together they had a son named Ileus, whom Apollo loved dearly.
Trojan War
Apollo sided with the Trojans during the war waged against them by the Greeks.
During the war, the Greek king Agamemnon captured Chryseis, the daughter of Apollo's priest Chryses, and refused to return her. Angered by this, Apollo shot arrows infected with the plague into the Greek encampment. He demanded that they return the girl, and the Achaeans (Greeks) complied, indirectly causing the anger of Achilles, which is the theme of the Iliad.
Receiving the aegis from Zeus, Apollo entered the battlefield as per his father's command, causing great terror to the enemy with his war cry. He pushed the Greeks back and destroyed many of the soldiers. He is described as "the rouser of armies" because he rallied the Trojan army when they were falling apart.
When Zeus allowed the other gods to get involved in the war, Apollo was provoked by Poseidon to a duel. However, Apollo declined to fight him, saying that he would not fight his uncle for the sake of mortals.
When the Greek hero Diomedes injured the Trojan hero Aeneas, Aphrodite tried to rescue him, but Diomedes injured her as well. Apollo then enveloped Aeneas in a cloud to protect him. He repelled the attacks Diomedes made on him and gave the hero a stern warning to abstain from attacking a god. Aeneas was then taken to Pergamos, a sacred spot in Troy, where he was healed.
After the death of Sarpedon, a son of Zeus, Apollo rescued the corpse from the battlefield as per his father's wish and cleaned it. He then gave it to Sleep (Hypnos) and Death (Thanatos). Apollo had also once convinced Athena to stop the war for that day, so that the warriors could rest for a while.
The Trojan hero Hector (who, according to some, was the god's own son by Hecuba) was favored by Apollo. When Hector was severely injured, Apollo healed him and encouraged him to take up his arms. During a duel with Achilles, when Hector was about to lose, Apollo hid Hector in a cloud of mist to save him. When the Greek warrior Patroclus tried to get into the fort of Troy, he was stopped by Apollo. Encouraging Hector to attack Patroclus, Apollo stripped the armour of the Greek warrior and broke his weapons. Patroclus was eventually killed by Hector. At last, after Hector's fated death, Apollo protected his corpse from Achilles' attempt to mutilate it by creating a magical cloud over the corpse, shielding it from the rays of the sun.
Apollo held a grudge against Achilles throughout the war because Achilles had murdered his son Tenes before the war began and brutally slaughtered his son Troilus in his own temple. Not only did Apollo save Hector from Achilles, he also tricked Achilles by disguising himself as a Trojan warrior and driving him away from the gates.
Finally, Apollo caused Achilles' death by guiding an arrow shot by Paris into Achilles' heel. In some versions, Apollo himself killed Achilles by taking the disguise of Paris.
Apollo helped many Trojan warriors, including Agenor, Polydamas, and Glaucus, on the battlefield. Though he greatly favored the Trojans, Apollo was bound to follow the orders of Zeus and served his father loyally during the war.
Nurturer of the young
Apollo Kourotrophos is the god who nurtures and protects children and the young, especially boys. He oversees their education and their passage into adulthood. Education is said to have originated from Apollo and the Muses. Many myths have him train his children. It was a custom for boys to cut and dedicate their long hair to Apollo after reaching adulthood.
Chiron, the abandoned centaur, was fostered by Apollo, who instructed him in medicine, prophecy, archery and more. Chiron would later become a great teacher himself.
Asclepius in his childhood gained much knowledge pertaining to medicinal arts from his father. However, he was later entrusted to Chiron for further education.
Anius, Apollo's son by Rhoeo, was abandoned by his mother soon after his birth. Apollo brought him up and educated him in mantic arts. Anius later became the priest of Apollo and the king of Delos.
Iamus was the son of Apollo and Evadne. When Evadne went into labour, Apollo sent the Moirai to assist his lover. After the child was born, Apollo sent snakes to feed the child some honey. When Iamus reached the age of education, Apollo took him to Olympia and taught him many arts, including the ability to understand and explain the languages of birds.
Idmon was educated by Apollo to be a seer. Even though he foresaw the death that awaited him on his journey with the Argonauts, he embraced his destiny and died a brave death. To commemorate his son's bravery, Apollo commanded the Boeotians to build a town around the tomb of the hero and to honor him.
Apollo adopted Carnus, the abandoned son of Zeus and Europa. He reared the child with the help of his mother Leto and educated him to be a seer.
When his son Melaneus reached the age of marriage, Apollo asked the princess Stratonice to be his son's bride and carried her away from her home when she agreed.
Apollo saved a shepherd boy (name unknown) from death in a large deep cave, by means of vultures. To thank him, the shepherd built Apollo a temple under the name Vulturius.
God of music
Immediately after his birth, Apollo demanded a lyre and invented the paean, thus becoming the god of music. As the divine singer, he is the patron of poets, singers and musicians. The invention of string music is attributed to him. Plato said that the innate ability of humans to take delight in music, rhythm and harmony is the gift of Apollo and the Muses. According to Socrates, ancient Greeks believed that Apollo is the god who directs the harmony and makes all things move together, both for the gods and the humans. For this reason, he was called Homopolon before the Homo was replaced by A. Apollo's harmonious music delivered people from their pain, and hence, like Dionysus, he is also called the liberator. The swans, which were considered to be the most musical among the birds, were believed to be the "singers of Apollo". They are Apollo's sacred birds and acted as his vehicle during his travel to Hyperborea. Aelian says that when the singers would sing hymns to Apollo, the swans would join the chant in unison.
Among the Pythagoreans, the study of mathematics and music were connected to the worship of Apollo, their principal deity. Their belief was that music purifies the soul, just as medicine purifies the body. They also believed that music was subject to the same mathematical laws of harmony as the mechanics of the cosmos, an idea that evolved into what is known as the music of the spheres.
Apollo appears as the companion of the Muses, and as Musagetes ("leader of Muses") he leads them in dance. They spend their time on Parnassus, which is one of their sacred places. Apollo is also the lover of the Muses and by them he became the father of famous musicians like Orpheus and Linus.
Apollo is often found delighting the immortal gods with his songs and music on the lyre. In his role as the god of banquets, he was always present to play music at the weddings of the gods, such as the marriage of Eros and Psyche and that of Peleus and Thetis. He is a frequent guest of the Bacchanalia, and many ancient ceramics depict him being at ease amidst the maenads and satyrs. Apollo also participated in musical contests when challenged by others. He was the victor in all those contests, but he tended to punish his opponents severely for their hubris.
Apollo's lyre
The invention of the lyre is attributed either to Hermes or to Apollo himself. A distinction has been made that Hermes invented the lyre made of tortoise shell, whereas the lyre Apollo invented was a regular lyre.
Myths tell that the infant Hermes stole a number of Apollo's cows and took them to a cave in the woods near Pylos, covering their tracks. In the cave, he found a tortoise and killed it, then removed the insides. He used one of the cow's intestines and the tortoise shell and made his lyre.
Upon discovering the theft, Apollo confronted Hermes and asked him to return his cattle. When Hermes acted innocent, Apollo took the matter to Zeus. Zeus, having seen the events, sided with Apollo and ordered Hermes to return the cattle. Hermes then began to play music on the lyre he had invented. Apollo fell in love with the instrument and offered to exchange the cattle for the lyre. Apollo thus became the master of the lyre.
According to other versions, Apollo had invented the lyre himself, whose strings he tore in repenting of the excess punishment he had given to Marsyas. Hermes' lyre, therefore, would be a reinvention.
Contest with Pan
Once Pan had the audacity to compare his music with that of Apollo and to challenge the god of music to a contest. The mountain-god Tmolus was chosen to umpire. Pan blew on his pipes, and with his rustic melody gave great satisfaction to himself and his faithful follower, Midas, who happened to be present. Then, Apollo struck the strings of his lyre. It was so beautiful that Tmolus at once awarded the victory to Apollo, and everyone was pleased with the judgement. Only Midas dissented and questioned the justice of the award. Apollo did not want to suffer such a depraved pair of ears any longer, and caused them to become the ears of a donkey.
Contest with Marsyas
Marsyas was a satyr who was punished by Apollo for his hubris. He had found an aulos on the ground, tossed away after being invented by Athena because it made her cheeks puffy. Athena had also placed a curse upon the instrument, that whoever would pick it up would be severely punished. When Marsyas played the flute, everyone became frenzied with joy. This led Marsyas to think that he was better than Apollo, and he challenged the god to a musical contest. The contest was judged by the Muses, or the nymphs of Nysa. Athena was also present to witness the contest.
Marsyas taunted Apollo for "wearing his hair long, for having a fair face and smooth body, for his skill in so many arts", and mocked him further.
The Muses and Athena sniggered at this comment. The contestants agreed to take turns displaying their skills and the rule was that the victor could "do whatever he wanted" to the loser.
According to one account, after the first round, they both were deemed equal by the Nysiads. But in the next round, Apollo decided to play on his lyre and add his melodious voice to his performance. Marsyas argued against this, saying that Apollo would have an advantage and accused Apollo of cheating. But Apollo replied that since Marsyas played the flute, which needed air blown from the throat, it was similar to singing, and that either they both should get an equal chance to combine their skills or none of them should use their mouths at all. The nymphs decided that Apollo's argument was just. Apollo then played his lyre and sang at the same time, mesmerising the audience. Marsyas could not do this. Apollo was declared the winner and, angered with Marsyas' haughtiness and his accusations, decided to flay the satyr.
According to another account, Marsyas played his flute out of tune at one point and accepted his defeat. Out of shame, he assigned to himself the punishment of being skinned for a wine sack. Another variation is that Apollo played his instrument upside down. Marsyas could not do this with his instrument. So the Muses who were the judges declared Apollo the winner. Apollo hung Marsyas from a tree to flay him.
Apollo flayed Marsyas alive in a cave near Celaenae in Phrygia for his hubris in challenging a god. He then gave the rest of his body for proper burial and nailed Marsyas' flayed skin to a nearby pine-tree as a lesson to the others. Marsyas' blood turned into the river Marsyas. But Apollo soon repented: distressed at what he had done, he tore the strings of his lyre and threw it away. The lyre was later discovered by the Muses and Apollo's sons Linus and Orpheus. The Muses fixed the middle string, Linus the string struck with the forefinger, and Orpheus the lowest string and the one next to it. They took it back to Apollo, but the god, who had decided to stay away from music for a while, laid away both the lyre and the pipes at Delphi and joined Cybele in her wanderings as far as Hyperborea.
Contest with Cinyras
Cinyras was a ruler of Cyprus, who was a friend of Agamemnon. Cinyras promised to assist Agamemnon in the Trojan war, but did not keep his promise. Agamemnon cursed Cinyras. He invoked Apollo and asked the god to avenge the broken promise. Apollo then had a lyre-playing contest with Cinyras, and defeated him. Either Cinyras committed suicide when he lost, or was killed by Apollo.
Patron of sailors
Apollo functions as the patron and protector of sailors, one of the duties he shares with Poseidon. In the myths, he is seen helping heroes who pray to him for a safe journey.
When Apollo spotted a ship of Cretan sailors that were caught in a storm, he quickly assumed the shape of a dolphin and guided their ship safely to Delphi.
When the Argonauts faced a terrible storm, Jason prayed to his patron, Apollo, to help them. Apollo used his bow and golden arrow to shed light upon an island, where the Argonauts soon took shelter. This island was renamed "Anaphe", which means "He revealed it".
Apollo helped the Greek hero Diomedes, to escape from a great tempest during his journey homeward. As a token of gratitude, Diomedes built a temple in honor of Apollo under the epithet Epibaterius ("the embarker").
During the Trojan War, Odysseus came to the Trojan camp to return Chryseis, the daughter of Apollo's priest Chryses, and brought many offerings to Apollo. Pleased with this, Apollo sent gentle breezes that helped Odysseus return safely to the Greek camp.
Arion was a poet who was kidnapped by some sailors for the rich prizes he possessed. Arion requested them to let him sing for the last time, to which the sailors consented. Arion began singing a song in praise of Apollo, seeking the god's help. Consequently, numerous dolphins surrounded the ship and when Arion jumped into the water, the dolphins carried him away safely.
Wars
Trojan War
Apollo played a pivotal role in the entire Trojan War. He sided with the Trojans, and sent a terrible plague to the Greek camp, which indirectly led to the conflict between Achilles and Agamemnon. He killed the Greek heroes Patroclus, Achilles, and numerous Greek soldiers. He also helped many Trojan heroes, the most important one being Hector. After the end of the war, Apollo and Poseidon together cleaned the remains of the city and the camps.
Telegony war
A war broke out between the Brygoi and the Thesprotians, who had the support of Odysseus. The gods Athena and Ares came to the battlefield and took sides. Athena helped the hero Odysseus while Ares fought alongside the Brygoi. When Odysseus lost, Athena and Ares entered into a direct duel. To end the fight between the gods and the terror it created, Apollo intervened and stopped the duel.
Indian war
When Zeus suggested that Dionysus defeat the Indians in order to earn a place among the gods, Dionysus declared war against the Indians and travelled to India along with his army of Bacchantes and satyrs. Among the warriors was Aristaeus, Apollo's son. Apollo armed his son with his own hands and gave him a bow and arrows and fitted a strong shield to his arm. After Zeus urged Apollo to join the war, he went to the battlefield. Seeing several of his nymphs and Aristaeus drowning in a river, he took them to safety and healed them. He taught Aristaeus more useful healing arts and sent him back to help the army of Dionysus.
Theban war
During the war between the sons of Oedipus, Apollo favored Amphiaraus, a seer and one of the leaders in the war. Though saddened that the seer was fated to be doomed in the war, Apollo made Amphiaraus' last hours glorious by "lighting his shield and his helm with starry gleam". When Hypseus tried to kill the hero with a spear, Apollo directed the spear towards the charioteer of Amphiaraus instead. Then Apollo himself replaced the charioteer and took the reins in his hands. He deflected many spears and arrows away from them. He also killed many of the enemy warriors like Melaneus, Antiphus, Aetion, Polites and Lampus. At last, when the moment of departure came, Apollo expressed his grief with tears in his eyes and bid farewell to Amphiaraus, who was soon engulfed by the Earth.
Slaying of giants
Apollo killed the giants Python and Tityos, who had assaulted his mother Leto.
Gigantomachy
During the gigantomachy, Apollo and Heracles blinded the giant Ephialtes by shooting him in his eyes, Apollo shooting his left and Heracles his right. He also killed Porphyrion, the king of giants, using his bow and arrows.
Aloadae
The Aloadae, namely Otis and Ephialtes, were twin giants who decided to wage war upon the gods. They attempted to storm Mt. Olympus by piling up mountains, and threatened to fill the sea with mountains and inundate dry land. They even dared to seek the hand of Hera and Artemis in marriage. Angered by this, Apollo killed them by shooting them with arrows. According to another tale, Apollo killed them by sending a deer between them; as they tried to kill it with their javelins, they accidentally stabbed each other and died.
Phorbas
Phorbas was a savage giant king of the Phlegyans, described as having swine-like features. He wished to plunder Delphi for its wealth. He seized the roads to Delphi and started harassing the pilgrims. He captured the old people and children and sent them to his army to be held for ransom. He challenged the young and sturdy men to boxing matches, only to cut off their heads when he defeated them, and he hung the severed heads on an oak tree. Finally, Apollo came to put an end to this cruelty. He entered a boxing contest with Phorbas and killed him with a single blow.
Other stories
In the first Olympic games, Apollo defeated Ares and became the victor in wrestling. He outran Hermes in the race and won first place.
Apollo divides the months into summer and winter. He rides on the back of a swan to the land of the Hyperboreans during the winter months, and the absence of warmth in winter is due to his departure. During his absence, Delphi was under the care of Dionysus, and no prophecies were given during winter.
Periphas
Periphas was an Attican king and a priest of Apollo. He was noble, just, and rich, and performed all his duties justly. Because of this, the people were very fond of him and began honouring him to the same extent as Zeus. At one point, they worshipped Periphas in place of Zeus and set up shrines and temples for him. This annoyed Zeus, who decided to annihilate the entire family of Periphas. But because Periphas was a just king and a good devotee, Apollo intervened and requested his father to spare him. Zeus considered Apollo's words and agreed to let him live, but metamorphosed Periphas into an eagle and made the eagle the king of birds. When Periphas' wife requested Zeus to let her stay with her husband, Zeus turned her into a vulture and fulfilled her wish.
Molpadia and Parthenos
Molpadia and Parthenos were the sisters of Rhoeo, a former lover of Apollo. One day, they were put in charge of watching their father's ancestral wine jar but they fell asleep while performing this duty. While they were asleep, the wine jar was broken by the swine their family kept. When the sisters woke up and saw what had happened, they threw themselves off a cliff in fear of their father's wrath. Apollo, who was passing by, caught them and carried them to two different cities in Chersonesus, Molpadia to Castabus and Parthenos to Bubastus. He turned them into goddesses and they both received divine honors. Molpadia's name was changed to Hemithea upon her deification.
Prometheus
Prometheus was the titan who was punished by Zeus for stealing fire. He was bound to a rock, where each day an eagle was sent to eat Prometheus' liver, which would then grow back overnight to be eaten again the next day. Seeing his plight, Apollo pleaded with Zeus to release the kind Titan, while Artemis and Leto stood behind him with tears in their eyes. Zeus, moved by Apollo's words and the tears of the goddesses, finally sent Heracles to free Prometheus.
Heracles
After Heracles (then named Alcides) was struck with madness and killed his family, he sought to purify himself and consulted the oracle of Apollo. Apollo, through the Pythia, commanded him to serve king Eurystheus for twelve years and complete the ten tasks the king would give him. Only then would Alcides be absolved of his sin. Apollo also renamed him Heracles.
To complete his third task, Heracles had to capture the Ceryneian Hind, a hind sacred to Artemis, and bring it back alive. After Heracles had chased the hind for one year, the animal eventually tired, and when it tried crossing the river Ladon, he captured it. While he was taking it back, he was confronted by Apollo and Artemis, who were angered at Heracles for this act. However, Heracles soothed the goddess and explained his situation to her. After much pleading, Artemis permitted him to take the hind and told him to return it later.
After he was freed from his servitude to Eurystheus, Heracles came into conflict with Iphitus, a prince of Oechalia, and murdered him. Soon after, he contracted a terrible disease. He consulted the oracle of Apollo once again, in the hope of ridding himself of it. The Pythia, however, refused to give any prophecy. In anger, Heracles snatched the sacred tripod and started walking away, intending to start his own oracle. Apollo did not tolerate this and stopped Heracles; a duel ensued between them. Artemis rushed to support Apollo, while Athena supported Heracles. Soon, Zeus threw his thunderbolt between the fighting brothers and separated them. He reprimanded Heracles for this violation and asked Apollo to give a solution to Heracles. Apollo then ordered the hero to serve under Omphale, queen of Lydia, for one year in order to purify himself.
After their reconciliation, Apollo and Heracles together founded the city of Gythion.
Plato's concept of soulmates
A long time ago, there were three kinds of human beings: male, descended from the sun; female, descended from the earth; and androgynous, descended from the moon. Each human being was completely round, with four arms and four legs, two identical faces on opposite sides of a head with four ears, and all else to match. They were powerful and unruly. Otis and Ephialtes even dared to scale Mount Olympus.
To check their insolence, Zeus devised a plan to humble them and improve their manners instead of completely destroying them. He cut them all in two and asked Apollo to make the necessary repairs, giving humans the individual shape they still have now. Apollo turned their heads and necks around towards their wounds, pulled together their skin at the abdomen, and sewed it together at the middle, forming what we now call the navel. He smoothed the wrinkles and shaped the chest, but made sure to leave a few wrinkles on the abdomen and around the navel so that humans might be reminded of their punishment.
The rock of Leukas
Leukatas was believed to be a white-colored rock jutting out from the island of Leukas into the sea. It was present in the sanctuary of Apollo Leukates. A leap from this rock was believed to have put an end to the longings of love.
Once, Aphrodite fell deeply in love with Adonis, a young man of great beauty who was later accidentally killed by a boar. Heartbroken, Aphrodite wandered looking for the rock of Leukas. When she reached the sanctuary of Apollo in Argos, she confided in him her love and sorrow. Apollo then brought her to the rock of Leukas and asked her to throw herself from the top of the rock. She did so and was freed from her love. When she sought the reason behind this, Apollo told her that Zeus, before taking another lover, would sit on this rock to free himself from his love for Hera.
Another tale relates that a man named Nireus, who fell in love with the cult statue of Athena, came to the rock and jumped in order to free himself of his passion. After jumping, he fell into the net of a fisherman; when he was pulled out, he found a box filled with gold. He fought with the fisherman and took the gold, but Apollo appeared to him in the night in a dream and warned him not to appropriate gold which belonged to others.
It was an ancestral custom among the Leukadians to fling a criminal from this rock every year, at the sacrifice performed in honor of Apollo, for the sake of averting evil. However, a number of men would be stationed all around below the rock to catch the criminal and take him beyond the borders, exiling him from the island. This was the same rock from which, according to legend, Sappho took her suicidal leap.
Slaying of Titans
Once Hera, out of spite, aroused the Titans to wage war against Zeus and take away his throne. Accordingly, when the Titans tried to climb Mount Olympus, Zeus, with the help of Apollo, Artemis, and Athena, defeated them and cast them into Tartarus.
Female lovers
Love affairs ascribed to Apollo are a late development in Greek mythology. Their vivid anecdotal qualities have made some of them favorites of painters since the Renaissance, the result being that they stand out more prominently in the modern imagination.
Daphne was a nymph who scorned Apollo's advances and ran away from him. When Apollo chased her in order to persuade her, she changed herself into a laurel tree. According to other versions, she cried for help during the chase, and Gaia helped her by taking her in and placing a laurel tree in her place. According to Roman poet Ovid, the chase was brought about by Cupid, who hit Apollo with a golden arrow of love and Daphne with a leaden arrow of hatred. The myth explains the origin of the laurel and the connection of Apollo with the laurel and its leaves, which his priestess employed at Delphi. The leaves became the symbol of victory and laurel wreaths were given to the victors of the Pythian games.
Apollo is said to have been the lover of all nine Muses, and not being able to choose one of them, decided to remain unwed. He fathered the Corybantes by the Muse Thalia, Orpheus by Calliope, Linus of Thrace by Calliope or Urania and Hymenaios (Hymen) by one of the Muses.
In the Great Eoiae that is attributed to Hesiod, Scylla is the daughter of Apollo and Hecate.
Cyrene was a Thessalian princess whom Apollo loved. In her honor, he built the city Cyrene and made her its ruler. She was later granted longevity by Apollo, who turned her into a nymph. The couple had two sons, Aristaeus and Idmon.
Evadne was a nymph daughter of Poseidon and a lover of Apollo. They had a son, Iamus. During the childbirth, Apollo sent Eileithyia, the goddess of childbirth, to assist her.
Rhoeo, a princess of the island of Naxos was loved by Apollo. Out of affection for her, Apollo turned her sisters into goddesses. On the island Delos she bore Apollo a son named Anius. Not wanting to have the child, she entrusted the infant to Apollo and left. Apollo raised and educated the child on his own.
Ourea, a daughter of Poseidon, fell in love with Apollo when he and Poseidon were serving the Trojan king Laomedon. They both united on the day the walls of Troy were built. She bore to Apollo a son, whom Apollo named Ileus, after the city of his birth, Ilion (Troy). Ileus was very dear to Apollo.
Thero, daughter of Phylas, a maiden as beautiful as the moonbeams, was loved by the radiant Apollo, and she loved him in return. Through their union, she became the mother of Chaeron, who was famed as "the tamer of horses". He later built the city Chaeronea.
Hyrie or Thyrie was the mother of Cycnus. Apollo turned both the mother and son into swans when they jumped into a lake and tried to kill themselves.
Hecuba was the wife of King Priam of Troy, and Apollo had a son with her named Troilus. An oracle prophesied that Troy would not be defeated as long as Troilus reached the age of twenty alive. He was ambushed and killed by Achilles, and Apollo avenged his death by killing Achilles. After the sack of Troy, Hecuba was taken to Lycia by Apollo.
Coronis was the daughter of Phlegyas, King of the Lapiths. While pregnant with Asclepius, Coronis fell in love with Ischys, son of Elatus, and slept with him. When Apollo found out about her infidelity through his prophetic powers, or thanks to his raven who informed him, he sent his sister, Artemis, to kill Coronis. Apollo rescued the baby by cutting open Coronis' belly and gave it to the centaur Chiron to raise.
Dryope, the daughter of Dryops, was impregnated by Apollo in the form of a snake. She gave birth to a son named Amphissus.
In Euripides' play Ion, Apollo fathered Ion by Creusa, wife of Xuthus. He used his powers to conceal her pregnancy from her father. Later, when Creusa left Ion to die in the wild, Apollo asked Hermes to save the child and bring him to the oracle at Delphi, where he was raised by a priestess.
Apollo loved and kidnapped an Oceanid nymph, Melia. Her father Oceanus sent one of his sons, Caanthus, to find her, but Caanthus could not take her back from Apollo, so he burned Apollo's sanctuary. In retaliation, Apollo shot and killed Caanthus.
Male lovers
Hyacinth (or Hyacinthus), a beautiful and athletic Spartan prince, was one of Apollo's favourite lovers. The pair was practicing throwing the discus when a discus thrown by Apollo was blown off course by the jealous Zephyrus and struck Hyacinthus in the head, killing him instantly. Apollo was said to be filled with grief. Out of Hyacinthus' blood, Apollo created a flower named after him as a memorial to his death, and his tears stained the flower petals with the interjection αἰαῖ, meaning alas. He was later resurrected and taken to heaven. The festival Hyacinthia was a national celebration of Sparta, which commemorated the death and rebirth of Hyacinthus.
Another male lover was Cyparissus, a descendant of Heracles. Apollo gave him a tame deer as a companion but Cyparissus accidentally killed it with a javelin as it lay asleep in the undergrowth. Cyparissus was so saddened by its death that he asked Apollo to let his tears fall forever. Apollo granted the request by turning him into the Cypress named after him, which was said to be a sad tree because the sap forms droplets like tears on the trunk.
Admetus, the king of Pherae, was also Apollo's lover. During his exile, which lasted either for one year or nine years, Apollo served Admetus as a herdsman. The romantic nature of their relationship was first described by Callimachus of Alexandria, who wrote that Apollo was "fired with love" for Admetus. Plutarch lists Admetus as one of Apollo's lovers and says that Apollo served Admetus because he doted upon him. The Latin poet Ovid said that, even though he was a god, Apollo forsook his pride and stayed as a servant for the sake of Admetus. Tibullus describes Apollo's love for the king as servitium amoris (slavery of love) and asserts that Apollo became his servant not by force but by choice. He would also make cheese and serve it to Admetus. His domestic actions caused embarrassment to his family.
When Admetus wanted to marry princess Alcestis, Apollo provided a chariot pulled by a lion and a boar he had tamed. This satisfied Alcestis' father and he let Admetus marry his daughter. Further, Apollo saved the king from Artemis' wrath and also convinced the Moirai to postpone Admetus' death once.
Branchus, a shepherd, one day came across Apollo in the woods. Captivated by the god's beauty, he kissed Apollo. Apollo requited his affections and wanting to reward him, bestowed prophetic skills on him. His descendants, the Branchides, were an influential clan of prophets.
Other male lovers of Apollo include:
Adonis, who is said to have been the lover of both Apollo and Aphrodite. He behaved as a man with Aphrodite and as a woman with Apollo.
Atymnius, otherwise known as a beloved of Sarpedon
Boreas, the god of the north wind
Cinyras, king of Cyprus and the priest of Aphrodite
Helenus, a Trojan prince (son of Priam and Hecuba). He received from Apollo an ivory bow with which he later wounded Achilles in the hand.
Hippolytus of Sicyon (not the same as Hippolytus, the son of Theseus)
Hymenaios, the son of Magnes
Iapis, to whom Apollo taught the art of healing
Phorbas, the dragon slayer (probably the son of Triopas)
Children
Apollo sired many children, from mortal women and nymphs as well as the goddesses. His children grew up to be physicians, musicians, poets, seers or archers. Many of his sons founded new cities and became kings.
Asclepius is the most famous son of Apollo. His skills as a physician surpassed those of Apollo. Zeus killed him for bringing back the dead, but upon Apollo's request, he was resurrected as a god.
Aristaeus was placed under the care of Chiron after his birth. He became the god of beekeeping, cheese-making, animal husbandry and more. He was ultimately given immortality for the benefits he bestowed upon humanity. The Corybantes, spear-clashing, dancing demigods, were also counted among Apollo's children.
The sons of Apollo who participated in the Trojan War include the Trojan princes Hector and Troilus, as well as Tenes, the king of Tenedos, all three of whom were killed by Achilles over the course of the war.
Apollo's children who became musicians and bards include Orpheus, Linus, Ialemus, Hymenaeus, Philammon, Eumolpus and Eleuther. Apollo fathered three daughters, Apollonis, Borysthenis and Cephisso, who formed a group of minor Muses, the "Musa Apollonides". They were nicknamed Nete, Mese and Hypate after the highest, middle and lowest strings of his lyre. Phemonoe was a seer and poet who invented the hexameter.
Apis, Idmon, Iamus, Tenerus, Mopsus, Galeus, Telmessus and others were gifted seers. Anius, Pythaeus and Ismenus lived as high priests. Most of them were trained by Apollo himself.
Arabus, Delphos, Dryops, Miletos, Tenes, Epidaurus, Ceos, Lycoras, Syrus, Pisus, Marathus, Megarus, Patarus, Acraepheus, Cicon, Chaeron and many other sons of Apollo, under the guidance of his words, founded eponymous cities.
He also had a son named Chrysorrhoas who was a mechanic artist. His other daughters include Eurynome, Chariclo wife of Chiron, Eurydice the wife of Orpheus, Eriopis, famous for her beautiful hair, Melite the heroine, Pamphile the silk weaver, Parthenos, and by some accounts, Phoebe, Hilyra and Scylla. Apollo turned Parthenos into a constellation after her early death.
Additionally, Apollo fostered and educated Chiron, the centaur who later became the greatest teacher and educated many demigods, including Apollo's sons. Apollo also fostered Carnus, the son of Zeus and Europa.
Failed love attempts
Marpessa was kidnapped by Idas but was loved by Apollo as well. Zeus made her choose between them, and she chose Idas on the grounds that Apollo, being immortal, would tire of her when she grew old.
Sinope, a nymph, was approached by the amorous Apollo. She made him promise that he would grant to her whatever she would ask for, and then cleverly asked him to let her stay a virgin. Apollo kept his promise and went back.
Bolina was admired by Apollo but she refused him and jumped into the sea. To save her from death, Apollo turned her into a nymph.
Castalia was a nymph whom Apollo loved. She fled from him and dove into the spring at Delphi, at the base of Mt. Parnassos, which was then named after her. Water from this spring was sacred; it was used to clean the Delphian temples and inspire the priestesses.
Cassandra was a daughter of Hecuba and Priam. Apollo wished to court her. Cassandra promised to return his love on one condition: he should give her the power to see the future. Apollo fulfilled her wish, but she went back on her word and rejected him soon after. Angered that she broke her promise, Apollo cursed her so that even though she would see the future, no one would ever believe her prophecies.
The Sibyl of Cumae, like Cassandra, promised Apollo her love if he would give her a boon. The Sibyl took a handful of sand and asked Apollo to grant her as many years of life as the grains of sand she held in her hands. Apollo granted her wish, but the Sibyl went back on her word. Although she did live an extended life as Apollo had promised, he had not given her agelessness along with it, so she shrivelled and shrank until only her voice remained.
Hestia, the goddess of the hearth, rejected both Apollo's and Poseidon's marriage proposals and swore that she would always stay unmarried.
In one version of the prophet Tiresias's origins, he was originally a woman who promised Apollo to sleep with him if he would give her music lessons. Apollo granted her wish, but then she went back on her word and refused him. Apollo in anger turned her into a man.
Female counterparts
Artemis
Artemis, as the sister of Apollo, is thea apollousa; that is, as a female divinity she represented the same idea that Apollo did as a male divinity. In the pre-Hellenic period, their relationship was described as one between husband and wife, and there seems to have been a tradition which actually described Artemis as the wife of Apollo. However, this relationship was never sexual but spiritual, which is why both are depicted as unmarried in the Hellenic period.
Artemis, like her brother, is armed with a bow and arrows. She is the cause of sudden deaths of women. She also is the protector of the young, especially girls. Though she has nothing to do with oracles, music or poetry, she sometimes led the female chorus on Olympus while Apollo sang. The laurel (daphne) was sacred to both. Artemis Daphnaia had her temple among the Lacedemonians, at a place called Hypsoi.
Apollo Daphnephoros had a temple in Eretria, a "place where the citizens are to take the oaths". In later times when Apollo was regarded as identical with the sun or Helios, Artemis was naturally regarded as Selene or the moon.
Hecate
Hecate, the goddess of witchcraft and magic, is the chthonic counterpart of Apollo. The two are cousins, since their mothers, Leto and Asteria, are sisters. One of Apollo's epithets, Hecatos, is the masculine form of Hecate, and both names mean "working from afar". While Apollo presided over the prophetic powers and magic of light and heaven, Hecate presided over the prophetic powers and magic of night and chthonian darkness. If Hecate is the "gate-keeper", Apollo Agyieus is the "door-keeper". Hecate is the goddess of crossroads and Apollo is the god and protector of streets.
The oldest evidence found for Hecate's worship is at Apollo's temple in Miletos. There, Hecate was taken to be Apollo's sister counterpart in the absence of Artemis. Hecate's lunar nature makes her the goddess of the waning moon and contrasts and complements, at the same time, Apollo's solar nature.
Athena
As a deity of knowledge and great power, Apollo was seen as the male counterpart of Athena. Being Zeus' favorite children, they were given more powers and duties. Apollo and Athena often took up the role of protectors of cities, and were patrons of some of the important cities. Athena was the principal goddess of Athens; Apollo was the principal god of Sparta.
As patrons of arts, Apollo and Athena were companions of the Muses, the former a much more frequent companion than the latter. Apollo was sometimes called the son of Athena and Hephaestus.
In the Trojan War, as Zeus' executive, Apollo is seen holding the aegis like Athena usually does. Apollo's decisions were usually approved by his sister Athena, and they both worked to establish the law and order set forth by Zeus.
Apollo in the Oresteia
In Aeschylus' Oresteia trilogy, Clytemnestra kills her husband, King Agamemnon, because he had sacrificed their daughter Iphigenia to proceed with the Trojan War. Apollo gives an order through the Oracle at Delphi that Agamemnon's son, Orestes, is to kill Clytemnestra and Aegisthus, her lover. Orestes and Pylades carry out the revenge, and consequently Orestes is pursued by the Erinyes, or Furies (female personifications of vengeance).
Apollo and the Furies argue about whether the matricide was justified; Apollo holds that the bond of marriage is sacred and Orestes was avenging his father, whereas the Erinyes say that the bond of blood between mother and son is more meaningful than the bond of marriage. They invade his temple, and he drives them away. He says that the matter should be brought before Athena. Apollo promises to protect Orestes, as Orestes has become Apollo's supplicant. Apollo advocates Orestes at the trial, and ultimately Athena rules in favor of Apollo.
Roman Apollo
The Roman worship of Apollo was adopted from the Greeks. As a quintessentially Greek god, Apollo had no direct Roman equivalent, although later Roman poets often referred to him as Phoebus. There was a tradition that the Delphic oracle was consulted as early as the period of the kings of Rome during the reign of Tarquinius Superbus.
On the occasion of a pestilence in the 430s BCE, Apollo's first temple at Rome was established in the Flaminian fields, replacing an older cult site there known as the "Apollinare". During the Second Punic War in 212 BCE, the Ludi Apollinares ("Apollonian Games") were instituted in his honor, on the instructions of a prophecy attributed to one Marcius. In the time of Augustus, who considered himself under the special protection of Apollo and was even said to be his son, his worship developed and he became one of the chief gods of Rome.
After the Battle of Actium, which was fought near a sanctuary of Apollo, Augustus enlarged Apollo's temple, dedicated a portion of the spoils to him, and instituted quinquennial games in his honour. He also erected a new temple to the god on the Palatine hill. Sacrifices and prayers on the Palatine to Apollo and Diana formed the culmination of the Secular Games, held in 17 BCE to celebrate the dawn of a new era.
Festivals
The chief Apollonian festival was the Pythian Games held every four years at Delphi and was one of the four great Panhellenic Games. Also of major importance was the Delia held every four years on Delos.
Athenian annual festivals included the Boedromia, Metageitnia, Pyanepsia, and Thargelia.
Spartan annual festivals were the Carneia and the Hyacinthia.
Every nine years, Thebes held the Daphnephoria.
Attributes and symbols
Apollo's most common attributes were the bow and arrow. Other attributes of his included the kithara (an advanced version of the common lyre), the plectrum and the sword. Another common emblem was the sacrificial tripod, representing his prophetic powers. The Pythian Games were held in Apollo's honor every four years at Delphi. The bay laurel plant was used in expiatory sacrifices and in making the crown of victory at these games.
The palm tree was also sacred to Apollo because he had been born under one in Delos. Animals sacred to Apollo included wolves, dolphins, roe deer, swans, cicadas (symbolizing music and song), ravens, hawks, crows (Apollo had hawks and crows as his messengers), snakes (referencing Apollo's function as the god of prophecy), mice and griffins, mythical eagle–lion hybrids of Eastern origin.
Homer and Porphyry wrote that Apollo had a hawk as his messenger. In many myths Apollo is transformed into a hawk. In addition, Claudius Aelianus wrote that in Ancient Egypt people believed that hawks were sacred to the god and that according to the ministers of Apollo in Egypt there were certain men called "hawk-keepers" (ἱερακοβοσκοί) who fed and tended the hawks belonging to the god. Eusebius wrote that the second appearance of the moon is held sacred in the city of Apollo in Egypt and that the city's symbol is a man with a hawklike face (Horus). Claudius Aelianus wrote that Egyptians called Apollo Horus in their own language.
As god of colonization, Apollo gave oracular guidance on colonies, especially during the height of colonization, 750–550 BCE. According to Greek tradition, he helped Cretan or Arcadian colonists found the city of Troy. However, this story may reflect a cultural influence which had the reverse direction: Hittite cuneiform texts mention an Asia Minor god called Appaliunas or Apalunas in connection with the city of Wilusa attested in Hittite inscriptions, which is now generally regarded as being identical with the Greek Ilion by most scholars. In this interpretation, Apollo's title of Lykegenes can simply be read as "born in Lycia", which effectively severs the god's supposed link with wolves (possibly a folk etymology).
In literary contexts, Apollo represents harmony, order, and reason—characteristics contrasted with those of Dionysus, god of wine, who represents ecstasy and disorder. The contrast between the roles of these gods is reflected in the adjectives Apollonian and Dionysian. However, the Greeks thought of the two qualities as complementary: the two gods are brothers, and when Apollo at winter left for Hyperborea, he would leave the Delphic oracle to Dionysus. This contrast appears to be shown on the two sides of the Borghese Vase.
Apollo is often associated with the Golden Mean. This is the Greek ideal of moderation and a virtue that opposes gluttony.
In antiquity, Apollo was associated with the planet Mercury. The ancient Greeks believed that the Mercury observed in the morning was a different planet from the one observed in the evening, because each twilight Mercury would appear farther from the Sun as it set than it had the night before. The morning planet was called Apollo and the evening one Hermes/Mercury, until the Greeks realised they were the same; thereafter the name Hermes/Mercury was kept and Apollo was dropped.
Apollo in the arts
Apollo is a common theme in Greek and Roman art and also in the art of the Renaissance. The earliest Greek word for a statue is "delight" (agalma), and the sculptors tried to create forms which would inspire such guiding vision. Maurice Bowra notes that the Greek artist puts into a god the highest degree of power and beauty that can be imagined. The sculptors derived this from observations of human beings, but they also embodied in concrete form issues beyond the reach of ordinary thought.
The naked bodies of the statues are associated with the cult of the body, which was essentially a religious activity. The muscular frames and limbs combined with slim waists indicate the Greek desire for health, and the physical capacity which was necessary in the hard Greek environment. The statues of Apollo and the other gods present them in their full youth and strength. "In the balance and relation of their limbs, such figures express their whole character, mental and physical, and reveal their central being ... the radiant reality of youth in its heyday."
Archaic sculpture
Numerous free-standing statues of male youths from Archaic Greece exist, and were once thought to be representations of Apollo, though later discoveries indicated that many represented mortals. In 1895, V. I. Leonardos proposed the term kouros ("male youth") to refer to those from Keratea; this usage was later expanded by Henri Lechat in 1904 to cover all statues of this format.
The earliest examples of life-sized statues of Apollo may be two figures from the Ionic sanctuary on the island of Delos. Such statues were found across the Greek-speaking world; the preponderance of these were found at the sanctuaries of Apollo, with more than one hundred from the sanctuary of Apollo Ptoios in Boeotia alone. Significantly rarer are the life-sized bronze statues. One of the few originals which survived into the present day—so rare that its discovery in 1959 was described as "a miracle" by Ernst Homann-Wedeking—is the masterpiece bronze Piraeus Apollo. It was found in Piraeus, a port city close to Athens, and is believed to have come from north-eastern Peloponnesus. It is the only surviving large-scale Peloponnesian statue.
Classical sculpture
The famous Apollo of Mantua and its variants are early forms of the Apollo Citharoedus statue type, in which the god holds the cithara, a sophisticated seven-stringed variant of the lyre, in his left arm. While none of the Greek originals have survived, several Roman copies from approximately the late 1st or early 2nd century exist, of which an example is the Apollo Barberini.
Hellenistic Greece-Rome
Apollo, as a handsome beardless young man, is often depicted with a cithara (as Apollo Citharoedus) or bow in his hand, or reclining on a tree (the Apollo Lykeios and Apollo Sauroctonos types). The Apollo Belvedere is a marble sculpture that was rediscovered in the late 15th century; for centuries it epitomized the ideals of Classical Antiquity for Europeans, from the Renaissance through the 19th century. The marble is a Hellenistic or Roman copy of a bronze original by the Greek sculptor Leochares, made between 330 and 320 BCE.
The life-size so-called "Adonis" found in 1780 on the site of a villa suburbana near the Via Labicana in the Roman suburb of Centocelle is identified as an Apollo by modern scholars. In the late 2nd century CE floor mosaic from El Djem, Roman Thysdrus, he is identifiable as Apollo Helios by his effulgent halo, though now even a god's divine nakedness is concealed by his cloak, a mark of increasing conventions of modesty in the later Empire.
Another haloed Apollo in mosaic, from Hadrumentum, is in the museum at Sousse. The conventions of this representation, head tilted, lips slightly parted, large-eyed, curling hair cut in locks grazing the neck, were developed in the 3rd century BCE to depict Alexander the Great. Some time after this mosaic was executed, the earliest depictions of Christ would also be beardless and haloed.
Modern reception
Apollo often appears in modern and popular culture due to his status as the god of music, dance and poetry.
Postclassical art and literature
Dance and music
Apollo has featured in dance and music in modern culture. Percy Bysshe Shelley composed a "Hymn of Apollo" (1820), and the god's instruction of the Muses formed the subject of Igor Stravinsky's Apollon musagète (1927–1928). In 1978, the Canadian band Rush released the album Hemispheres, which includes the songs "Apollo: Bringer of Wisdom" and "Dionysus: Bringer of Love".
Books
Apollo has been portrayed in modern literature, such as when Charles Handy, in Gods of Management (1978), uses Greek gods as a metaphor for various types of organizational culture. Apollo represents a "role" culture where order, reason, and bureaucracy prevail. In 2016, author Rick Riordan published the first book in the Trials of Apollo series, following it with four more books in the series in 2017, 2018, 2019 and 2020.
Film
Apollo has been depicted in modern films—for instance, by Keith David in the 1997 animated feature film Hercules, by Luke Evans in the 2010 action film Clash of the Titans, and by Dimitri Lekkos in the 2010 film Percy Jackson & the Olympians: The Lightning Thief.
Video games
Apollo has appeared in many modern video games. Apollo appears as a minor character in Santa Monica Studio's 2010 action-adventure game God of War III with his bow being used by Peirithous. He also appears in the 2014 Hi-Rez Studios Multiplayer Online Battle Arena game Smite as a playable character.
Psychology and philosophy
In the philosophical discussion of the arts, a distinction is sometimes made between the Apollonian and Dionysian impulses, where the former is concerned with imposing intellectual order and the latter with chaotic creativity. Friedrich Nietzsche argued that a fusion of the two was most desirable. Psychologist Carl Jung's Apollo archetype represents what he saw as the disposition in people to over-intellectualise and maintain emotional distance.
Spaceflight
In spaceflight, the 1960s and 1970s NASA program for orbiting and landing astronauts on the Moon was named after Apollo by NASA manager Abe Silverstein, reportedly because the image of Apollo riding his chariot across the Sun suited the grand scale of the proposed program.
Genealogy
See also
Darrhon
Dryad
Epirus
Family tree of the Greek gods
Phoebus (disambiguation)
Sibylline oracles
Tegyra
Temple of Apollo (disambiguation)
Notes
References
Sources
Primary sources
Aelian, On Animals, Volume II: Books 6–11. Translated by A. F. Scholfield. Loeb Classical Library 447. Cambridge, MA: Harvard University Press, 1958.
Aeschylus, The Eumenides in Aeschylus, with an English translation by Herbert Weir Smyth, Ph. D. in two volumes, Vol 2, Cambridge, Massachusetts, Harvard University Press, 1926, Online version at the Perseus Digital Library.
Antoninus Liberalis, The Metamorphoses of Antoninus Liberalis translated by Francis Celoria (Routledge 1992). Online version at the Topos Text Project.
Apollodorus, Apollodorus, The Library, with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. Online version at the Perseus Digital Library.
Apollonius of Rhodes, Apollonius Rhodius: the Argonautica, translated by Robert Cooper Seaton, W. Heinemann, 1912. Internet Archive.
Callimachus, Callimachus and Lycophron with an English Translation by A. W. Mair; Aratus, with an English Translation by G. R. Mair, London: W. Heinemann, New York: G. P. Putnam 1921. Online version at Harvard University Press. Internet Archive.
Cicero, Marcus Tullius, De Natura Deorum in Cicero in Twenty-eight Volumes, XIX De Natura Deorum; Academica, with an English translation by H. Rackham, Cambridge, Massachusetts: Harvard University Press; London: William Heinemann, Ltd, 1967. Internet Archive.
Diodorus Siculus, Library of History, Volume III: Books 4.59-8, translated by C. H. Oldfather, Loeb Classical Library No. 340. Cambridge, Massachusetts, Harvard University Press, 1939. Online version at Harvard University Press. Online version by Bill Thayer.
Herodotus, Herodotus, with an English translation by A. D. Godley. Cambridge. Harvard University Press. 1920. Online version available at The Perseus Digital Library.
Hesiod, Theogony, in The Homeric Hymns and Homerica with an English Translation by Hugh G. Evelyn-White, Cambridge, MA., Harvard University Press; London, William Heinemann Ltd. 1914. Online version at the Perseus Digital Library.
Homeric Hymn 3 to Apollo in The Homeric Hymns and Homerica with an English Translation by Hugh G. Evelyn-White, Cambridge, MA., Harvard University Press; London, William Heinemann Ltd. 1914. Online version at the Perseus Digital Library.
Homeric Hymn 4 to Hermes, in The Homeric Hymns and Homerica with an English Translation by Hugh G. Evelyn-White, Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1914. Online version at the Perseus Digital Library.
Homer, The Iliad with an English Translation by A.T. Murray, PhD in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library.
Homer; The Odyssey with an English Translation by A.T. Murray, PH.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1919. Online version at the Perseus Digital Library.
Hyginus, Gaius Julius, De Astronomica, in The Myths of Hyginus, edited and translated by Mary A. Grant, Lawrence: University of Kansas Press, 1960. Online version at ToposText.
Hyginus, Gaius Julius, Fabulae, in The Myths of Hyginus, edited and translated by Mary A. Grant, Lawrence: University of Kansas Press, 1960. Online version at ToposText.
Livy, The History of Rome, Books I and II With An English Translation. Cambridge. Cambridge, Mass., Harvard University Press; London, William Heinemann, Ltd. 1919.
Nonnus, Dionysiaca; translated by Rouse, W H D, I Books I-XV. Loeb Classical Library No. 344, Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1940. Internet Archive
Nonnus, Dionysiaca; translated by Rouse, W H D, II Books XVI-XXXV. Loeb Classical Library No. 345, Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1940. Internet Archive
Statius, Thebaid. Translated by Mozley, J H. Loeb Classical Library Volumes. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1928.
Strabo, The Geography of Strabo. Edition by H.L. Jones. Cambridge, Mass.: Harvard University Press; London: William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library.
Sophocles, Oedipus Rex
Palaephatus, On Unbelievable Tales 46. Hyacinthus (330 BCE)
Ovid, Metamorphoses, Brookes More, Boston, Cornhill Publishing Co. 1922. Online version at the Perseus Digital Library. 10. 162–219 (1–8 CE)
Pausanias, Pausanias Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1918. Online version at the Perseus Digital Library.
Philostratus the Elder, Imagines, in Philostratus the Elder, Imagines. Philostratus the Younger, Imagines. Callistratus, Descriptions. Translated by Arthur Fairbanks. Loeb Classical Library No. 256. Cambridge, Massachusetts: Harvard University Press, 1931. Online version at Harvard University Press. Internet Archive 1926 edition. i.24 Hyacinthus (170–245 CE)
Philostratus the Younger, Imagines, in Philostratus the Elder, Imagines. Philostratus the Younger, Imagines. Callistratus, Descriptions. Translated by Arthur Fairbanks. Loeb Classical Library No. 256. Cambridge, Massachusetts: Harvard University Press, 1931. Online version at Harvard University Press. Internet Archive 1926 edition. 14. Hyacinthus (170–245 CE)
Pindar, Odes, Diane Arnson Svarlien. 1990. Online version at the Perseus Digital Library.
Plutarch. Lives, Volume I: Theseus and Romulus. Lycurgus and Numa. Solon and Publicola. Translated by Bernadotte Perrin. Loeb Classical Library No. 46. Cambridge, Massachusetts: Harvard University Press, 1914. Online version at Harvard University Press. Numa at the Perseus Digital Library.
Pseudo-Plutarch, De fluviis, in Plutarch's morals, Volume V, edited and translated by William Watson Goodwin, Boston: Little, Brown & Co., 1874. Online version at the Perseus Digital Library.
Lucian, Dialogues of the Dead. Dialogues of the Sea-Gods. Dialogues of the Gods. Dialogues of the Courtesans, translated by M. D. MacLeod, Loeb Classical Library No. 431, Cambridge, Massachusetts, Harvard University Press, 1961. Online version at Harvard University Press. Internet Archive.
First Vatican Mythographer, 197. Thamyris et Musae
Tzetzes, John, Chiliades, editor Gottlieb Kiessling, F.C.G. Vogel, 1826. Google Books. (English translation: Book I by Ana Untila; Books II–IV, by Gary Berkowitz; Books V–VI by Konstantino Ramiotis; Books VII–VIII by Vasiliki Dogani; Books IX–X by Jonathan Alexander; Books XII–XIII by Nikolaos Giallousis. Internet Archive).
Valerius Flaccus, Argonautica, translated by J. H. Mozley, Loeb Classical Library No. 286. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1928. Online version at Harvard University Press. Online translated text available at theoi.com.
Vergil, Aeneid. Theodore C. Williams. trans. Boston. Houghton Mifflin Co. 1910. Online version at the Perseus Digital Library.
Secondary sources
Athanassakis, Apostolos N., and Benjamin M. Wolkow, The Orphic Hymns, Johns Hopkins University Press; First Printing edition (May 29, 2013). Google Books.
M. Bieber, 1964. Alexander the Great in Greek and Roman Art. Chicago.
Hugh Bowden, 2005. Classical Athens and the Delphic Oracle: Divination and Democracy. Cambridge University Press.
Walter Burkert, 1985. Greek Religion (Harvard University Press) III.2.5 passim
Fontenrose, Joseph Eddy, Python: A Study of Delphic Myth and Its Origins, University of California Press, 1959.
Gantz, Timothy, Early Greek Myth: A Guide to Literary and Artistic Sources, Johns Hopkins University Press, 1996. Two volumes.
Miranda J. Green, 1997. Dictionary of Celtic Myth and Legend, Thames and Hudson.
Grimal, Pierre, The Dictionary of Classical Mythology, Wiley-Blackwell, 1996.
Hard, Robin, The Routledge Handbook of Greek Mythology: Based on H.J. Rose's "Handbook of Greek Mythology", Psychology Press, 2004. Google Books.
Karl Kerenyi, 1953. Apollon: Studien über Antiken Religion und Humanität revised edition.
Kerényi, Karl 1951, The Gods of the Greeks, Thames and Hudson, London.
Mertens, Dieter; Schutzenberger, Margareta. Città e monumenti dei Greci d'Occidente: dalla colonizzazione alla crisi di fine V secolo a.C.. Roma L'Erma di Bretschneider, 2006. .
Martin Nilsson, 1955. Die Geschichte der Griechische Religion, vol. I. C.H. Beck.
Parada, Carlos, Genealogical Guide to Greek Mythology, Jonsered, Paul Åströms Förlag, 1993.
Pauly–Wissowa, Realencyclopädie der klassischen Altertumswissenschaft: II, "Apollon". The best repertory of cult sites (Burkert).
Peck, Harry Thurston, Harpers Dictionary of Classical Antiquities, New York. Harper and Brothers. 1898. Online version at the Perseus Digital Library.
Pfeiff, K.A., 1943. Apollon: Wandlung seines Bildes in der griechischen Kunst. Traces the changing iconography of Apollo.
Robertson, D. S. (1945), A Handbook of Greek and Roman Architecture, Cambridge University Press.
Smith, William; Dictionary of Greek and Roman Biography and Mythology, London (1873). "Apollo"
Smith, William, A Dictionary of Greek and Roman Antiquities. William Smith, LLD. William Wayte. G. E. Marindin. Albemarle Street, London. John Murray. 1890. Online version at the Perseus Digital Library.
Spivey, Nigel (1997), Greek Art, Phaidon Press Ltd.
External links
Apollo at the Greek Mythology Link, by Carlos Parada
The Warburg Institute Iconographic Database (ca 1650 images of Apollo)
|
https://en.wikipedia.org/wiki/Alaska
|
Alaska is a non-contiguous U.S. state on the northwest extremity of North America. It borders British Columbia and Yukon in Canada to the east and it shares a western maritime border in the Bering Strait with Russia's Chukotka Autonomous Okrug. The Chukchi and Beaufort Seas of the Arctic Ocean lie to the north and the Pacific Ocean lies to the south. Technically a semi-exclave of the U.S., Alaska is the largest exclave in the world.
Alaska is the largest U.S. state by area, comprising more total area than the next three largest states of Texas, California and Montana combined and is the seventh-largest subnational division in the world. It is the third-least populous and most sparsely populated U.S. state, but with a population of 736,081 as of 2020, is the continent's most populous territory located mostly north of the 60th parallel, with more than quadruple the combined populations of Northern Canada and Greenland. The state capital of Juneau is the second-largest city in the United States by area. The former capital of Alaska, Sitka, is the largest U.S. city by area. The state's most populous city is Anchorage. Approximately half of Alaska's residents live within the Anchorage metropolitan area.
Indigenous people have lived in Alaska for thousands of years, and it is widely believed that the region served as the entry point for the initial settlement of North America by way of the Bering land bridge. The Russian Empire was the first to actively colonize the area beginning in the 18th century, eventually establishing Russian America, which spanned most of the current state, and promoted and maintained a native Alaskan Creole population. The expense and logistical difficulty of maintaining this distant possession prompted its sale to the U.S. in 1867 for US$7.2 million. The area went through several administrative changes before becoming organized as a territory on May 11, 1912. It was admitted as the 49th state of the U.S. on January 3, 1959.
Abundant natural resources have enabled Alaska—with one of the smallest state economies—to have one of the highest per capita incomes, with commercial fishing, and the extraction of natural gas and oil, dominating Alaska's economy. U.S. Armed Forces bases and tourism also contribute to the economy; more than half the state is federally-owned land containing national forests, national parks, and wildlife refuges. It is among the most irreligious states, one of the first to legalize recreational marijuana, and is known for its libertarian-leaning political culture, generally supporting the Republican Party in national elections. The Indigenous population of Alaska is proportionally the highest of any U.S. state, at over 15 percent. Various Indigenous languages are spoken, and Alaskan Natives are influential in local and state politics.
Etymology
The name "Alaska" () was introduced in the Russian colonial period when it was used to refer to the Alaska Peninsula. It was derived from an Aleut-language idiom, , meaning "the mainland" or, more literally, "the object towards which the action of the sea is directed".
History
Pre-colonization
Numerous indigenous peoples occupied Alaska for thousands of years before the arrival of European peoples to the area. Linguistic and DNA studies done here have provided evidence for the settlement of North America by way of the Bering land bridge. At the Upward Sun River site in the Tanana Valley in Alaska, remains of a six-week-old infant were found. The baby's DNA showed that she belonged to a population that was genetically separate from other native groups present elsewhere in the New World at the end of the Pleistocene. Ben Potter, the University of Alaska Fairbanks archaeologist who unearthed the remains at the Upward Sun River site in 2013, named this new group Ancient Beringians.
The Tlingit people developed a society with a matrilineal kinship system of property inheritance and descent in what is today Southeast Alaska, along with parts of British Columbia and the Yukon. Also in Southeast were the Haida, now well known for their unique arts. The Tsimshian people came to Alaska from British Columbia in 1887, when President Grover Cleveland, and later the U.S. Congress, granted them permission to settle on Annette Island and found the town of Metlakatla. All three of these peoples, as well as other indigenous peoples of the Pacific Northwest Coast, experienced smallpox outbreaks from the late 18th through the mid-19th century, with the most devastating epidemics occurring in the 1830s and 1860s, resulting in high fatalities and social disruption.
The Aleutian Islands are still home to the Aleut people's seafaring society, although they were the first Native Alaskans to be exploited by the Russians. Western and Southwestern Alaska are home to the Yup'ik, while their cousins the Alutiiq ~ Sugpiaq live in what is now Southcentral Alaska. The Gwich'in people of the northern Interior region are Athabaskan and primarily known today for their dependence on the caribou within the much-contested Arctic National Wildlife Refuge. The North Slope and Little Diomede Island are occupied by the widespread Inupiat people.
Colonization
Some researchers believe the first Russian settlement in Alaska was established in the 17th century. According to this hypothesis, in 1648 several koches of Semyon Dezhnyov's expedition were driven ashore in Alaska by a storm and founded this settlement. This hypothesis is based on the testimony of Chukchi geographer Nikolai Daurkin, who had visited Alaska in 1764–1765 and who had reported on a village on the Kheuveren River, populated by "bearded men" who "pray to the icons". Some modern researchers associate Kheuveren with the Koyuk River.
The first European vessel to reach Alaska is generally held to be the St. Gabriel under the authority of the surveyor M. S. Gvozdev and assistant navigator I. Fyodorov on August 21, 1732, during an expedition of Siberian Cossack A. F. Shestakov and Russian explorer Dmitry Pavlutsky (1729–1735). Another European contact with Alaska occurred in 1741, when Vitus Bering led an expedition for the Russian Navy aboard the St. Peter. After his crew returned to Russia with sea otter pelts judged to be the finest fur in the world, small associations of fur traders began to sail from the shores of Siberia toward the Aleutian Islands. The first permanent European settlement was founded in 1784.
Between 1774 and 1800, Spain sent several expeditions to Alaska to assert its claim over the Pacific Northwest. In 1789, a Spanish settlement and fort were built in Nootka Sound. These expeditions gave names to places such as Valdez, Bucareli Sound, and Cordova. Later, the Russian-American Company carried out an expanded colonization program during the early-to-mid-19th century. Sitka, renamed New Archangel from 1804 to 1867, on Baranof Island in the Alexander Archipelago in what is now Southeast Alaska, became the capital of Russian America. It remained the capital after the colony was transferred to the United States. The Russians never fully colonized Alaska, and the colony was never very profitable. Evidence of Russian settlement in names and churches survive throughout southeastern Alaska.
William H. Seward, the 24th United States Secretary of State, negotiated the Alaska Purchase (referred to pejoratively as Seward's Folly) with the Russians in 1867 for $7.2 million. Russia's contemporary ruler Tsar Alexander II, the Emperor of the Russian Empire, King of Poland and Grand Duke of Finland, also planned the sale; the purchase was made on March 30, 1867. Six months later the commissioners arrived in Sitka and the formal transfer was arranged; the formal flag-raising took place at Fort Sitka on October 18, 1867. In the ceremony 250 uniformed U.S. soldiers marched to the governor's house at "Castle Hill", where the Russian troops lowered the Russian flag and the U.S. flag was raised. This event is celebrated as Alaska Day, a legal holiday on October 18.
Alaska was loosely governed by the military initially, and was administered as a district starting in 1884, with a governor appointed by the United States president. A federal district court was headquartered in Sitka. For most of Alaska's first decade under the United States flag, Sitka was the only community inhabited by American settlers. They organized a "provisional city government", which was Alaska's first municipal government, but not in a legal sense. Legislation allowing Alaskan communities to legally incorporate as cities did not come about until 1900, and home rule for cities was extremely limited or unavailable until statehood took effect in 1959.
Alaska as an incorporated U.S. territory
Starting in the 1890s and stretching in some places to the early 1910s, gold rushes in Alaska and the nearby Yukon Territory brought thousands of miners and settlers to Alaska. Alaska was officially incorporated as an organized territory in 1912. Alaska's capital, which had been in Sitka until 1906, was moved north to Juneau. Construction of the Alaska Governor's Mansion began that same year. European immigrants from Norway and Sweden also settled in southeast Alaska, where they entered the fishing and logging industries.
During World War II, the Aleutian Islands Campaign focused on Attu, Agattu and Kiska, all of which were occupied by the Empire of Japan. During the Japanese occupation, a white American civilian and two United States Navy personnel were killed at Attu and Kiska respectively, and a total of nearly 50 Aleut civilians and eight sailors were interned in Japan. About half of the Aleuts died during the period of internment. Unalaska/Dutch Harbor and Adak became significant bases for the United States Army, United States Army Air Forces and United States Navy. The United States Lend-Lease program involved flying American warplanes through Canada to Fairbanks and then Nome; Soviet pilots took possession of these aircraft, ferrying them to fight the German invasion of the Soviet Union. The construction of military bases contributed to the population growth of some Alaskan cities.
Statehood
Statehood for Alaska was an important cause of James Wickersham early in his tenure as a congressional delegate. Decades later, the statehood movement gained its first real momentum following a territorial referendum in 1946. The Alaska Statehood Committee and Alaska's Constitutional Convention would soon follow. Statehood supporters also found themselves fighting major battles against political foes, mostly in the U.S. Congress but also within Alaska. Statehood was approved by the U.S. Congress on July 7, 1958; Alaska was officially proclaimed a state on January 3, 1959.
Good Friday earthquake
On March 27, 1964, the massive Good Friday earthquake killed 133 people and destroyed several villages and portions of large coastal communities, mainly by the resultant tsunamis and landslides. It was the second-most-powerful earthquake in recorded history, with a moment magnitude of 9.2 (more than a thousand times as powerful as the 1989 San Francisco earthquake). The time of day (5:36 pm), time of year (spring) and location of the epicenter were all cited as factors in potentially sparing thousands of lives, particularly in Anchorage.
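The "more than a thousand times" comparison follows from the standard relation that each whole unit of moment magnitude corresponds to a factor of about 10^1.5 (roughly 32) in released seismic energy. A quick back-of-the-envelope check in Python (the 6.9 magnitude for the 1989 Loma Prieta quake is the commonly cited figure, not taken from this article):

# Energy ratio between two earthquakes via the standard magnitude-energy
# relation: released energy scales as 10**(1.5 * moment magnitude).
def energy_ratio(m1: float, m2: float) -> float:
    return 10 ** (1.5 * (m1 - m2))

# 1964 Good Friday quake (M 9.2) vs 1989 Loma Prieta quake (M 6.9)
print(f"{energy_ratio(9.2, 6.9):,.0f}x")  # ~2,818x -- well over a thousand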
Lasting four minutes and thirty-eight seconds, the magnitude 9.2 megathrust earthquake remains the most powerful earthquake recorded in North American history, and the second most powerful earthquake recorded in world history. Six hundred miles (970 km) of fault ruptured at once and moved up to 60 feet (18 m), releasing about 500 years of stress buildup. Soil liquefaction, fissures, landslides, and other ground failures caused major structural damage in several communities and much damage to property. Anchorage sustained great destruction or damage to many inadequately earthquake-engineered houses, buildings, and infrastructure (paved streets, sidewalks, water and sewer mains, electrical systems, and other human-made equipment), particularly in the several landslide zones along Knik Arm. Two hundred miles (320 km) southwest, some areas near Kodiak were permanently raised by 30 feet (9.1 m). Southeast of Anchorage, areas around the head of Turnagain Arm near Girdwood and Portage dropped as much as 8 feet (2.4 m), requiring reconstruction and fill to raise the Seward Highway above the new high tide mark.
In Prince William Sound, Port Valdez suffered a massive underwater landslide, resulting in the deaths of 32 people between the collapse of the Valdez city harbor and docks, and inside the ship that was docked there at the time. Nearby, a tsunami destroyed the village of Chenega, killing 23 of the 68 people who lived there; survivors out-ran the wave, climbing to high ground. Post-quake tsunamis severely affected Whittier, Seward, Kodiak, and other Alaskan communities, as well as people and property in British Columbia, Washington, Oregon, and California. Tsunamis also caused damage in Hawaii and Japan. Evidence of motion directly related to the earthquake was also reported from Florida and Texas.
Alaska had never experienced a major disaster in a highly populated area before, and had very limited resources for dealing with the effects of such an event. In Anchorage, at the urging of geologist Lidia Selkregg, the City of Anchorage and the Alaska State Housing Authority appointed a team of 40 scientists, including geologists, soil scientists, and engineers, to assess the damage done by the earthquake to the city. The team, called the Engineering and Geological Evaluation Group, was headed by Ruth A. M. Schmidt, a geology professor at the University of Alaska Anchorage. The team of scientists came into conflict with local developers and downtown business owners who wanted to immediately rebuild; the scientists wanted to identify future dangers to ensure that rebuilt infrastructure would be safe. The team produced a report on May 8, 1964, just a little more than a month after the earthquake.
The United States military, which has a large active presence in Alaska, also stepped in to assist within moments of the end of the quake. The U.S. Army rapidly re-established communications with the lower 48 states, deployed troops to assist the citizens of Anchorage, and dispatched a convoy to Valdez. On the advice of military and civilian leaders, President Lyndon B. Johnson declared all of Alaska a major disaster area the day after the quake. The U.S. Navy and U.S. Coast Guard deployed ships to isolated coastal communities to assist with immediate needs. Bad weather and poor visibility hampered air rescue and observation efforts the day after the quake, but on Sunday the 29th the situation improved and rescue helicopters and observation aircraft were deployed. A military airlift immediately began shipping relief supplies to Alaska, eventually delivering large quantities of food and other supplies. Broadcast journalist Genie Chance assisted in recovery and relief efforts, staying on the KENI airwaves over Anchorage for more than 24 continuous hours as the voice of calm from her temporary post within the Anchorage Public Safety Building. She was effectively designated the public safety officer by the city's police chief. Chance provided breaking news of the catastrophic events that continued to develop following the magnitude 9.2 earthquake, and she served as the voice of the public safety office, coordinating response efforts, connecting available resources to needs around the community, disseminating information about shelters and prepared food rations, passing messages of well-being between loved ones, and helping to reunite families.
In the longer term, the U.S. Army Corps of Engineers led the effort to rebuild roads, clear debris, and establish new townsites for communities that had been completely destroyed, at a cost of $110 million. The West Coast and Alaska Tsunami Warning Center was formed as a direct response to the disaster. Federal disaster relief funds paid for reconstruction as well as financially supporting the devastated infrastructure of Alaska's government, spending hundreds of millions of dollars that helped keep Alaska financially solvent until the discovery of massive oil deposits at Prudhoe Bay. At the order of the U.S. Defense Department, the Alaska National Guard founded the Alaska Division of Emergency Services to respond to any future disasters.
Alaska oil boom
The 1968 discovery of oil at Prudhoe Bay and the 1977 completion of the Trans-Alaska Pipeline System led to an oil boom. Royalty revenues from oil have funded large state budgets from 1980 onward.
Oil production was not the only economic value of Alaska's land, however. In the second half of the 20th century, Alaska discovered tourism as an important source of revenue. Tourism became popular after World War II, when military personnel stationed in the region returned home praising its natural splendor. The Alcan Highway, built during the war, and the Alaska Marine Highway System, completed in 1963, made the state more accessible than before. Tourism became increasingly important in Alaska, and today over 1.4 million people visit the state each year.
With tourism more vital to the economy, environmentalism also rose in importance. The Alaska National Interest Lands Conservation Act (ANILCA) of 1980 added 53.7 million acres (217,000 km2) to the National Wildlife Refuge system, parts of 25 rivers to the National Wild and Scenic Rivers system, 3.3 million acres (13,000 km2) to National Forest lands, and 43.6 million acres (176,000 km2) to National Park land. Because of the Act, Alaska now contains two-thirds of all American national parklands. Today, more than half of Alaskan land is owned by the Federal Government.
In 1989, the Exxon Valdez hit a reef in Prince William Sound, spilling more than 11 million US gallons (42 megalitres) of crude oil over 1,100 miles (1,800 km) of coastline. Today, the battle between philosophies of development and conservation is seen in the contentious debate over oil drilling in the Arctic National Wildlife Refuge and the proposed Pebble Mine.
Geography
Located at the northwest corner of North America, Alaska is the northernmost and westernmost state in the United States, but it also has the most easterly longitude in the United States because the Aleutian Islands extend into the Eastern Hemisphere. Alaska is the only non-contiguous U.S. state on continental North America; about 500 miles (800 km) of British Columbia (Canada) separates Alaska from Washington. It is technically part of the continental U.S., but is not usually included in the colloquial use of the term; Alaska is not part of the contiguous U.S., often called "the Lower 48". The capital city, Juneau, is situated on the mainland of the North American continent but is not connected by road to the rest of the North American highway system.
The state is bordered by Canada's Yukon and British Columbia to the east (making it the only state to border only a Canadian territory); the Gulf of Alaska and the Pacific Ocean to the south and southwest; the Bering Sea, Bering Strait, and Chukchi Sea to the west; and the Arctic Ocean to the north. Alaska's territorial waters touch Russia's territorial waters in the Bering Strait, as the Russian Big Diomede Island and Alaskan Little Diomede Island are only about 2.5 miles (4 km) apart. Alaska has a longer coastline than all the other U.S. states combined.
At 663,268 square miles (1,717,856 km2) in total area, Alaska is by far the largest state in the United States. Alaska is more than twice the size of the second-largest U.S. state (Texas), and it is larger than the next three largest states (Texas, California, and Montana) combined. Alaska is the seventh-largest subnational division in the world. If it were an independent nation, it would be the 18th-largest country in the world, almost the same size as Iran.
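Those comparisons can be sanity-checked against the commonly cited total (land-and-water) areas of the four states; the figures in this sketch are approximate census values supplied for illustration, not numbers drawn from this article:

# Approximate total areas in square miles (commonly cited census figures).
areas = {"Alaska": 663_268, "Texas": 268_596, "California": 163_695, "Montana": 147_040}
next_three = areas["Texas"] + areas["California"] + areas["Montana"]
print(round(areas["Alaska"] / areas["Texas"], 2))  # ~2.47: more than twice Texas
print(areas["Alaska"] > next_three)                # True: bigger than the three combined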
With its myriad islands, Alaska has nearly 34,000 miles (54,720 km) of tidal shoreline. The Aleutian Islands chain extends west from the southern tip of the Alaska Peninsula. Many active volcanoes are found in the Aleutians and in coastal regions. Unimak Island, for example, is home to Mount Shishaldin, an occasionally smoldering volcano that rises to nearly 10,000 feet (3,000 m) above the North Pacific. The chain of volcanoes extends to Mount Spurr, west of Anchorage on the mainland. Geologists have identified Alaska as part of Wrangellia, a large region consisting of multiple states and Canadian provinces in the Pacific Northwest, which is actively undergoing continent building.
One of the world's largest tides occurs in Turnagain Arm, just south of Anchorage, where tidal differences can be more than 35 feet (10.7 m).
Alaska has more than three million lakes. Marshlands and wetland permafrost cover vast areas, mostly in the northern, western and southwestern flatlands. Glacier ice covers about five percent of Alaska; the Bering Glacier alone, the largest glacier in North America, covers roughly 2,000 square miles (5,200 km2).
Regions
There are no officially defined borders demarcating the various regions of Alaska, but the state is most commonly broken up into five or six regions:
South Central
The most populous region of Alaska, containing Anchorage, the Matanuska-Susitna Valley and the Kenai Peninsula. Rural, mostly unpopulated areas south of the Alaska Range and west of the Wrangell Mountains also fall within the definition of South Central, as do the Prince William Sound area and the communities of Cordova and Valdez.
Southeast
Also referred to as the Panhandle or Inside Passage, this is the region of Alaska closest to the contiguous states. As such, this was where most of the initial non-indigenous settlement occurred in the years following the Alaska Purchase. The region is dominated by the Alexander Archipelago as well as the Tongass National Forest, the largest national forest in the United States. It contains the state capital Juneau, the former capital Sitka, and Ketchikan, at one time Alaska's largest city. The Alaska Marine Highway provides a vital surface transportation link throughout the area, as only three communities (Haines, Hyder and Skagway) enjoy direct connections to the contiguous North American road system.
Interior
The Interior is the largest region of Alaska; much of it is uninhabited wilderness. Fairbanks is the only large city in the region. Denali National Park and Preserve is located here. Denali, formerly Mount McKinley, is the highest mountain in North America, and is also located here.
North Slope
The North Slope is mostly tundra peppered with small villages. The area is known for its massive reserves of crude oil and contains both the National Petroleum Reserve–Alaska and the Prudhoe Bay Oil Field. The city of Utqiaġvik, formerly known as Barrow, is the northernmost city in the United States and is located here. The Northwest Arctic area, anchored by Kotzebue and also containing the Kobuk River valley, is often regarded as being part of this region. However, the respective Inupiat of the North Slope and of the Northwest Arctic seldom consider themselves to be one people.
Southwest
Southwest Alaska is a sparsely inhabited region stretching some 500 miles (800 km) inland from the Bering Sea. Most of the population lives along the coast. Kodiak Island is also located in Southwest. The massive Yukon–Kuskokwim Delta, one of the largest river deltas in the world, is here. Portions of the Alaska Peninsula are considered part of the Southwest, with the Aleutian Islands often (but not always) being grouped in as well.
Aleutian Islands
While primarily part of Southwest Alaska when grouped economically, the Aleutian Islands are sometimes recognized as a group apart from the rest of the region due to their geographic separation from the continent. More than 300 small volcanic islands make up this chain, which stretches more than 1,200 miles (1,900 km) into the Pacific Ocean. Some of these islands fall in the Eastern Hemisphere, but the International Date Line was drawn west of 180° to keep the whole state, and thus the entire North American continent, within the same legal day. Two of the islands, Attu and Kiska, were occupied by Japanese forces during World War II.
Land ownership
According to an October 1998 report by the United States Bureau of Land Management, approximately 65% of Alaska is owned and managed by the U.S. federal government as public lands, including a multitude of national forests, national parks, and national wildlife refuges. Of these, the Bureau of Land Management manages 87 million acres (35 million hectares), or 23.8% of the state. The Arctic National Wildlife Refuge, managed by the United States Fish and Wildlife Service, is the world's largest wildlife refuge.
Of the remaining land area, the state of Alaska owns 101 million acres (41 million hectares), its entitlement under the Alaska Statehood Act. A portion of that acreage is occasionally ceded to organized boroughs, under the statutory provisions pertaining to newly formed boroughs. Smaller portions are set aside for rural subdivisions and other homesteading-related opportunities. These are not very popular due to the often remote and roadless locations. The University of Alaska, as a land grant university, also owns substantial acreage which it manages independently.
Another 44 million acres (18 million hectares) are owned by 12 regional, and scores of local, Native corporations created under the Alaska Native Claims Settlement Act (ANCSA) of 1971. Regional Native corporation Doyon, Limited often promotes itself as the largest private landowner in Alaska in advertisements and other communications. Provisions of ANCSA allowing the corporations' land holdings to be sold on the open market starting in 1991 were repealed before they could take effect. Effectively, the corporations hold title (including subsurface title in many cases, a privilege denied to individual Alaskans) but cannot sell the land. Individual Native allotments can be and are sold on the open market, however.
Various private interests own the remaining land, totaling about one percent of the state. Alaska is, by a large margin, the state with the smallest percentage of private land ownership when Native corporation holdings are excluded.
Alaska Heritage Resources Survey
The Alaska Heritage Resources Survey (AHRS) is a restricted inventory of all reported historic and prehistoric sites within the U.S. state of Alaska; it is maintained by the Office of History and Archaeology. The survey's inventory of cultural resources includes objects, structures, buildings, sites, districts, and travel ways, with a general provision that they are more than fifty years old. To date, more than 35,000 sites have been reported.
Cities, towns and boroughs
Alaska is not divided into counties, as most of the other U.S. states are, but into boroughs. Delegates to the Alaska Constitutional Convention wanted to avoid the pitfalls of the traditional county system and adopted their own unique model. Many of the more densely populated parts of the state are part of Alaska's 16 boroughs, which function somewhat similarly to counties in other states. However, unlike county-equivalents in the other 49 states, the boroughs do not cover the entire land area of the state. The area not part of any borough is referred to as the Unorganized Borough.
The Unorganized Borough has no government of its own, but the U.S. Census Bureau in cooperation with the state divided the Unorganized Borough into 11 census areas solely for the purposes of statistical analysis and presentation. A recording district is a mechanism for management of the public record in Alaska. The state is divided into 34 recording districts which are centrally administered under a state recorder. All recording districts use the same acceptance criteria, fee schedule, etc., for accepting documents into the public record.
Whereas many U.S. states use a three-tiered system of decentralization—state/county/township—most of Alaska uses only two tiers—state/borough. Owing to the low population density, most of the land is located in the Unorganized Borough. As the name implies, it has no intermediate borough government but is administered directly by the state government. As of 2000, 57.71% of Alaska's area had this status, with 13.05% of the population.
Anchorage merged the city government with the Greater Anchorage Area Borough in 1975 to form the Municipality of Anchorage, containing the city proper and the communities of Eagle River, Chugiak, Peters Creek, Girdwood, Bird, and Indian. Fairbanks has a separate borough (the Fairbanks North Star Borough) and municipality (the City of Fairbanks).
The state's most populous city is Anchorage, home to 291,247 people in 2020. The richest location in Alaska by per capita income is Denali ($42,245). Yakutat City, Sitka, Juneau, and Anchorage are the four largest cities in the U.S. by area.
Cities and census-designated places (by population)
As reflected in the 2020 United States census, Alaska has a total of 355 incorporated cities and census-designated places (CDPs). The tally of cities includes four unified municipalities, essentially the equivalent of a consolidated city–county. The majority of these communities are located in the rural expanse of Alaska known as "The Bush" and are unconnected to the contiguous North American road network. The table at the bottom of this section lists roughly the 100 largest cities and census-designated places in Alaska, in population order.
Of Alaska's 2020 U.S. census population figure of 733,391, 16,655 people, or 2.27% of the population, did not live in an incorporated city or census-designated place. Approximately three-quarters of that figure were people who live in urban and suburban neighborhoods on the outskirts of the city limits of Ketchikan, Kodiak, Palmer and Wasilla. CDPs have not been established for these areas by the United States Census Bureau, except that seven CDPs were established for the Ketchikan-area neighborhoods in the 1980 Census (Clover Pass, Herring Cove, Ketchikan East, Mountain Point, North Tongass Highway, Pennock Island and Saxman East), but have not been used since. The remaining population was scattered throughout Alaska, both within organized boroughs and in the Unorganized Borough, in largely remote areas.
Climate
The climate in south and southeastern Alaska is a mid-latitude oceanic climate (Köppen climate classification: Cfb), and a subarctic oceanic climate (Köppen Cfc) in the northern parts. On an annual basis, the southeast is both the wettest and warmest part of Alaska, with milder temperatures in the winter and high precipitation throughout the year. Juneau averages over 50 inches (130 cm) of precipitation a year, and Ketchikan averages over 150 inches (380 cm). This is also the only region in Alaska in which the average daytime high temperature is above freezing during the winter months.

The climate of Anchorage and south central Alaska is mild by Alaskan standards due to the region's proximity to the seacoast. While the area gets less rain than southeast Alaska, it gets more snow, and days tend to be clearer. On average, Anchorage receives 16 inches (41 cm) of precipitation a year, with around 75 inches (190 cm) of snow, although there are areas in the south central region which receive far more snow. It is a subarctic climate (Köppen: Dfc) due to its brief, cool summers.
The climate of western Alaska is determined in large part by the Bering Sea and the Gulf of Alaska. It is a subarctic oceanic climate in the southwest and a continental subarctic climate farther north. The temperature is somewhat moderate considering how far north the area is. This region has a tremendous amount of variety in precipitation. An area stretching from the northern side of the Seward Peninsula to the Kobuk River valley (i.e., the region around Kotzebue Sound) is technically a desert, with portions receiving less than 10 inches (25 cm) of precipitation annually. On the other extreme, some locations between Dillingham and Bethel average around 100 inches (250 cm) of precipitation.
The climate of the interior of Alaska is subarctic. Some of the highest and lowest temperatures in Alaska occur around the area near Fairbanks. The summers may have temperatures reaching into the 90s °F (the low-to-mid 30s °C), while in the winter, the temperature can fall below −60 °F (−51 °C). Precipitation is sparse in the Interior, often less than 10 inches (25 cm) a year, but what precipitation falls in the winter tends to stay the entire winter.
The highest and lowest recorded temperatures in Alaska are both in the Interior. The highest is 100 °F (38 °C) in Fort Yukon (which is just inside the Arctic Circle) on June 27, 1915, making Alaska tied with Hawaii as the state with the lowest high temperature in the United States. The lowest official Alaska temperature is −80 °F (−62 °C) in Prospect Creek on January 23, 1971, one degree above the lowest temperature recorded in continental North America (in Snag, Yukon, Canada).
The climate in the extreme north of Alaska is Arctic (Köppen: ET) with long, very cold winters and short, cool summers. Even in July, the average low temperature in Utqiaġvik is 34 °F (1 °C). Precipitation is light in this part of Alaska, with many places averaging less than 10 inches (25 cm) per year, mostly as snow which stays on the ground almost the entire year.
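The Köppen labels used in this section follow mechanically from monthly climate normals. The sketch below implements a simplified version of the temperature rules only (it hard-codes the "f" precipitation letter, omits the 22 °C warm-summer threshold, and uses made-up illustrative monthly means rather than measured data):

# Simplified Koppen temperature classification, enough to separate the
# tundra (ET), subarctic (Dfc/Cfc), and milder oceanic labels used above.
def koppen_temperature_class(monthly_c: list[float]) -> str:
    warmest, coldest = max(monthly_c), min(monthly_c)
    if warmest < 10:                       # no month reaches 10 C: polar group
        return "ET" if warmest >= 0 else "EF"
    group = "D" if coldest <= -3 else "C"  # continental vs temperate winters
    third = "b" if sum(t >= 10 for t in monthly_c) >= 4 else "c"
    return group + "f" + third             # "f" assumes year-round precipitation

print(koppen_temperature_class([-1, 0, 2, 5, 8, 11, 14, 13, 9, 6, 2, 0]))            # Cfc
print(koppen_temperature_class([-23, -19, -11, 0, 9, 15, 17, 14, 8, -4, -16, -21]))  # Dfc
print(koppen_temperature_class([-26, -27, -25, -17, -7, 1, 5, 4, 0, -9, -18, -24]))  # ET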
Flora and fauna
Demographics
The United States Census Bureau found in the 2020 United States census that the population of Alaska was 733,391 on April 1, 2020, a 3.3% increase since the 2010 United States census. According to the 2010 United States census, the U.S. state of Alaska had a population of 710,231, a 13.3% increase from 626,932 at the 2000 U.S. census.
In 2020, Alaska ranked as the 48th largest state by population, ahead of only Vermont and Wyoming. Alaska is the least densely populated state, and one of the most sparsely populated areas in the world, at roughly 1.1 inhabitants per square mile (0.4/km2); the next-least-dense state, Wyoming, has about 5.9 (2.3/km2). Alaska is by far the largest U.S. state by area, and the tenth wealthiest (per capita income). Due to its population size, it is one of 14 U.S. states that still have only one telephone area code.
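A quick check of those density figures from the census counts (the state areas here are the commonly cited totals, supplied for illustration rather than taken from this article):

# People per square mile from 2020 census populations and total areas.
alaska = 733_391 / 663_268    # ~1.1 per sq mi
wyoming = 576_851 / 97_813    # ~5.9 per sq mi: the next most sparsely populated
print(f"Alaska: {alaska:.2f}/sq mi, Wyoming: {wyoming:.2f}/sq mi")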
According to HUD's 2022 Annual Homeless Assessment Report, there were an estimated 2,320 homeless people in Alaska.
Race and ethnicity
The 2019 American Community Survey estimated 60.2% of the population was non-Hispanic white, 3.7% black or African American, 15.6% American Indian or Alaska Native, 6.5% Asian, 1.4% Native Hawaiian and other Pacific Islander, 7.5% two or more races, and 7.3% Hispanic or Latin American of any race. The survey also estimated that 7.8% of the total population was foreign-born from 2015 to 2019. In 2015, 61.3% was non-Hispanic white, 3.4% black or African American, 13.3% American Indian or Alaska Native, 6.2% Asian, 0.9% Native Hawaiian and other Pacific Islander, 0.3% some other race, and 7.7% multiracial. Hispanics and Latin Americans were 7% of the state population in 2015. From 2015 to 2019, the largest Hispanic and Latin American groups were Mexican Americans, Puerto Ricans, and Cuban Americans. The largest Asian groups living in the state were Filipinos, Korean Americans, and Japanese and Chinese Americans.
The state was 66.7% white (64.1% non-Hispanic white), 14.8% American Indian and Alaska Native, 5.4% Asian, 3.3% black or African American, 1.0% Native Hawaiian and other Pacific Islander, 1.6% from some other race, and 7.3% from two or more races in 2010. Hispanics or Latin Americans of any race made up 5.5% of the population in 2010. As of 2011, 50.7% of Alaska's population younger than one year of age belonged to minority groups (i.e., did not have two parents of non-Hispanic white ancestry). In 1960, the United States Census Bureau reported Alaska's population as 77.2% white, 3% black, and 18.8% American Indian and Alaska Native.
In 2018, the top countries of origin for Alaska's immigrants were the Philippines, Mexico, Canada, Thailand and Korea.
Languages
According to the 2011 American Community Survey, 83.4% of people over the age of five spoke only English at home. About 3.5% spoke Spanish at home, 2.2% spoke another Indo-European language, about 4.3% spoke an Asian language (including Tagalog), and about 5.3% spoke other languages at home. In 2019, the American Community Survey determined 83.7% spoke only English, and 16.3% spoke a language other than English. The most spoken European language after English was Spanish, spoken by approximately 4.0% of the state population. Collectively, Asian and Pacific Islander languages were spoken by 5.6% of Alaskans. Since 2010, about 5.2% of Alaskans have spoken one of the state's 20 indigenous languages, known locally as "native languages".
The Alaska Native Language Center at the University of Alaska Fairbanks claims that at least 20 Alaskan native languages exist, some with several dialects. Most of Alaska's native languages belong to either the Eskimo–Aleut or Na-Dene language families; however, some languages are thought to be isolates (e.g. Haida) or have not yet been classified (e.g. Tsimshianic). Nearly all of Alaska's native languages have been classified as threatened, shifting, moribund, nearly extinct, or dormant.
In October 2014, the governor of Alaska signed a bill declaring the state's 20 indigenous languages to have official status. This bill gave them symbolic recognition as official languages, though they have not been adopted for official use within the government. The 20 languages that were included in the bill are:
Inupiaq
Siberian Yupik
Central Alaskan Yup'ik
Alutiiq
Unangax
Dena'ina
Deg Xinag
Holikachuk
Koyukon
Upper Kuskokwim
Gwich'in
Tanana
Upper Tanana
Tanacross
Hän
Ahtna
Eyak
Tlingit
Haida
Tsimshian
Religion
Multiple surveys have ranked Alaska among the most irreligious states.
According to statistics collected by the Association of Religion Data Archives (ARDA) from 2010, about 34% of Alaska residents were members of religious congregations. Of the religious population, 100,960 people identified as evangelical Protestants; 50,866 as Roman Catholic; and 32,550 as mainline Protestants. Roughly 4% were Mormon, 0.5% Jewish, 0.5% Muslim, 1% Buddhist, 0.2% Baháʼí, and 0.5% Hindu. The largest religious denominations in Alaska were the Roman Catholic Church, with 50,866 adherents; non-denominational Evangelicals, with 38,070 adherents; The Church of Jesus Christ of Latter-day Saints, with 32,170 adherents; and the Southern Baptist Convention, with 19,891 adherents. Alaska has been identified, along with Washington and Oregon in the Pacific Northwest, as being among the least religious states in the United States in terms of church membership.
The Pew Research Center in 2014 determined 62% of the adult population practiced Christianity. Protestantism was the largest Christian tradition, dominated by Evangelicalism. Mainline Protestants were the second-largest Protestant group, followed by predominantly African American churches. The Roman Catholic Church remained the largest single Christian tradition practiced in Alaska. The religiously unaffiliated made up the largest non-Christian cohort; atheists made up 5% of the population, and the largest non-Christian religion was Buddhism. In 2020, the Public Religion Research Institute (PRRI) determined 57% of adults were Christian. By 2022, the Christian share had increased to 77% of the population according to the PRRI.
Through the Association of Religion Data Archives in 2020, its Christian population was dominated by non/inter-denominational Protestantism as the single largest Christian cohort, with 73,930 adherents. Roman Catholics were second with 40,280 members; across its Christian population, non-denominational Christians had an adherence rate of 100.81 per 1,000 residents, and Catholics 54.92 per 1,000 residents. Per the 2014 Pew study, religion was seen as very important to 41% of the population, and 29% considered it somewhat important. In 2014, Pew determined roughly 55% believed in God with absolute certainty, and 24% believed fairly certainly. Consistent with the separate 2020 ARDA study, the 2014 Pew study showed 30% attended religious services once a week, 34% once or twice a month, and 36% seldom or never. In 2018, The Gospel Coalition published an article using Pew data and determined that non-churchgoing Christians nationwide did not attend religious services often for the following reasons: practicing the faith in other ways, not finding a house of worship they liked, disliking sermons and feeling unwelcome, and logistics.
In 1795, the first Russian Orthodox Church was established in Kodiak. Intermarriage with Alaskan Natives helped the Russian immigrants integrate into society. As a result, an increasing number of Russian Orthodox churches gradually became established within Alaska. Alaska also has the largest Quaker population (by percentage) of any state. In 2009, there were 6,000 Jews in Alaska (for whom observance of halakha may pose special problems). Alaskan Hindus often share venues and celebrations with members of other Asian religious communities, including Sikhs and Jains. In 2010, Alaskan Hindus established the Sri Ganesha Temple of Alaska, making it the first Hindu Temple in Alaska and the northernmost Hindu Temple in the world. There are an estimated 2,000–3,000 Hindus in Alaska. The vast majority of Hindus live in Anchorage or Fairbanks.
Estimates for the number of Muslims in Alaska range from 2,000 to 5,000; in 2020, ARDA estimated there were 400 Muslims in the state. The Islamic Community Center of Anchorage began efforts in the late 1990s to construct a mosque in Anchorage. They broke ground on a building in south Anchorage in 2010 and were nearing completion in late 2014. When completed, it was the first mosque in the state and one of the northernmost mosques in the world. There is also a Baháʼí center, with 690 adherents in 2020. Additionally, there were 469 adherents of Hinduism and Yoga altogether in 2020, and a small number of Buddhists were present.
Economy
As of October 2022, Alaska had a total employment of 316,900. The number of employer establishments was 21,077.
The 2018 gross state product was $55 billion, 48th in the U.S. Its per capita personal income for 2018 was $73,000, ranking 7th in the nation. According to a 2013 study by Phoenix Marketing International, Alaska had the fifth-largest number of millionaires per capita in the United States, with a ratio of 6.75 percent. The oil and gas industry dominates the Alaskan economy, with more than 80% of the state's revenues derived from petroleum extraction. Alaska's main export product (excluding oil and natural gas) is seafood, primarily salmon, cod, pollock and crab.
Agriculture represents a very small fraction of the Alaskan economy. Agricultural production is primarily for consumption within the state and includes nursery stock, dairy products, vegetables, and livestock. Manufacturing is limited, with most foodstuffs and general goods imported from elsewhere.
Employment is primarily in government and industries such as natural resource extraction, shipping, and transportation. Military bases are a significant component of the economy in the Fairbanks North Star, Anchorage and Kodiak Island boroughs. Federal subsidies are also an important part of the economy, allowing the state to keep taxes low. Its industrial outputs are crude petroleum, natural gas, coal, gold, precious metals, zinc and other mining products, seafood processing, timber and wood products. There is also a growing service and tourism sector; tourists contribute to the economy by supporting local lodging.
Energy
Alaska has vast energy resources, although its oil reserves have been largely depleted. Major oil and gas reserves were found in the Alaska North Slope (ANS) and Cook Inlet basins, but according to the Energy Information Administration, by February 2014 Alaska had fallen to fourth place in the nation in crude oil production after Texas, North Dakota, and California. Prudhoe Bay on Alaska's North Slope is still the second highest-yielding oil field in the United States, typically producing about , although by early 2014 North Dakota's Bakken Formation was producing over . Prudhoe Bay was the largest conventional oil field ever discovered in North America, but was much smaller than Canada's enormous Athabasca oil sands field, which by 2014 was producing about of unconventional oil, and had hundreds of years of producible reserves at that rate.
The Trans-Alaska Pipeline can transport and pump up to of crude oil per day, more than any other crude oil pipeline in the United States. Additionally, substantial coal deposits are found in Alaska's bituminous, sub-bituminous, and lignite coal basins. The United States Geological Survey estimates that there are of undiscovered, technically recoverable gas from natural gas hydrates on the Alaskan North Slope. Alaska also offers some of the highest hydroelectric power potential in the country from its numerous rivers. Large swaths of the Alaskan coastline offer wind and geothermal energy potential as well.
Alaska's economy depends heavily on increasingly expensive diesel fuel for heating, transportation, electric power and light. Although wind and hydroelectric power are abundant and underdeveloped, proposals for statewide energy systems (e.g. with special low-cost electric interties) were judged uneconomical (at the time of the report, 2001) due to low (less than 50¢/gal) fuel prices, long distances and low population. The cost of a gallon of gas in urban Alaska today is usually thirty to sixty cents higher than the national average; prices in rural areas are generally significantly higher but vary widely depending on transportation costs, seasonal usage peaks, nearby petroleum development infrastructure and many other factors.
Permanent Fund
The Alaska Permanent Fund is a constitutionally authorized appropriation of oil revenues, established by voters in 1976 to manage a surplus in state petroleum revenues, largely in anticipation of the recently constructed Trans-Alaska Pipeline System. The fund was originally proposed by Governor Keith Miller on the eve of the 1969 Prudhoe Bay lease sale, out of fear that the legislature would spend the entire proceeds of the sale (which amounted to $900 million) at once. It was later championed by Governor Jay Hammond and Kenai state representative Hugh Malone. It has served as an attractive political prospect ever since, diverting revenues which would normally be deposited into the general fund.
The Alaska Constitution was written so as to discourage dedicating state funds for a particular purpose. The Permanent Fund has become the rare exception to this, mostly due to the political climate of distrust existing during the time of its creation. From its initial principal of $734,000, the fund has grown to $50 billion as a result of oil royalties and capital investment programs. Most, if not all, of the principal is invested conservatively outside Alaska. This has led to frequent calls by Alaskan politicians for the fund to make investments within Alaska, though such a stance has never gained momentum.
Starting in 1982, dividends from the fund's annual growth have been paid out each year to eligible Alaskans, ranging from an initial $1,000 in 1982 (equal to three years' payout, as the distribution of payments was held up in a lawsuit over the distribution scheme) to $3,269 in 2008 (which included a one-time $1,200 "Resource Rebate"). Every year, the state legislature takes out 8% of the fund's earnings, returns 3% to the principal for inflation-proofing, and distributes the remaining 5% to all qualifying Alaskans. To qualify for the Permanent Fund Dividend, one must have lived in the state for a minimum of 12 months, maintain constant residency subject to allowable absences, and not be subject to court judgments or criminal convictions which fall under various disqualifying classifications or may subject the payment amount to civil garnishment.
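As a back-of-the-envelope illustration of that split, the following Python sketch uses hypothetical earnings and population figures; the fund's actual statutory calculation is more involved than this simple percentage division:

def dividend_per_person(annual_earnings, eligible_residents):
    # Split described above: 8% of earnings withdrawn, of which 3 points
    # return to the principal for inflation-proofing and 5 points are paid out.
    withdrawal = 0.08 * annual_earnings
    inflation_proofing = 0.03 * annual_earnings
    payout_pool = withdrawal - inflation_proofing
    return payout_pool / eligible_residents

# Hypothetical figures: $4 billion in earnings, 600,000 qualifying residents.
print(round(dividend_per_person(4_000_000_000, 600_000), 2))  # 333.33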
The Permanent Fund is often considered to be one of the leading examples of a basic income policy in the world.
Cost of living
The cost of goods in Alaska has long been higher than in the contiguous 48 states. Federal government employees, particularly United States Postal Service (USPS) workers and active-duty military members, receive a Cost of Living Allowance, usually set at 25% of base pay, because the cost of living, while it has declined somewhat, remains among the highest in the country.
Rural Alaska suffers from extremely high prices for food and consumer goods compared to the rest of the country, due to the relatively limited transportation infrastructure.
Agriculture and fishing
Due to the northern climate and short growing season, relatively little farming occurs in Alaska. Most farms are in either the Matanuska Valley, about northeast of Anchorage, or on the Kenai Peninsula, about southwest of Anchorage. The short 100-day growing season limits the crops that can be grown, but the long sunny summer days make for productive growing seasons. The primary crops are potatoes, carrots, lettuce, and cabbage.
The Tanana Valley is another notable agricultural locus, especially the Delta Junction area, about southeast of Fairbanks, with a sizable concentration of farms growing agronomic crops; these farms mostly lie north and east of Fort Greely. This area was largely set aside and developed under a state program spearheaded by Hammond during his second term as governor. Delta-area crops consist predominantly of barley and hay. West of Fairbanks lies another concentration of small farms catering to restaurants, the hotel and tourist industry, and community-supported agriculture.
Alaskan agriculture has experienced a surge in growth of market gardeners, small farms and farmers' markets in recent years, with the highest percentage increase (46%) in the nation in growth in farmers' markets in 2011, compared to 17% nationwide. The peony industry has also taken off, as the growing season allows farmers to harvest during a gap in supply elsewhere in the world, thereby filling a niche in the flower market.
Alaska, with no counties, lacks county fairs; however, a small assortment of state and local fairs (with the Alaska State Fair in Palmer the largest) is held mostly in the late summer. The fairs are mostly located in communities with historic or current agricultural activity, and feature local farmers exhibiting produce in addition to more high-profile commercial activities such as carnival rides, concerts and food. "Alaska Grown" is used as an agricultural slogan.
Alaska has an abundance of seafood, with the primary fisheries in the Bering Sea and the North Pacific. Seafood is one of the few food items that is often cheaper within the state than outside it. Many Alaskans take advantage of salmon seasons to harvest portions of their household diet while fishing for subsistence, as well as sport. This includes fish taken by hook, net or wheel.
Hunting for subsistence, primarily caribou, moose, and Dall sheep is still common in the state, particularly in remote Bush communities. An example of a traditional native food is Akutaq, the Eskimo ice cream, which can consist of reindeer fat, seal oil, dried fish meat and local berries.
Alaska's reindeer herding is concentrated on Seward Peninsula, where wild caribou can be prevented from mingling and migrating with the domesticated reindeer.
Most food in Alaska is transported into the state from "Outside" (the other 49 US states), and shipping costs make food in the cities relatively expensive. In rural areas, subsistence hunting and gathering are essential activities because imported food is prohibitively expensive. Although most small towns and villages in Alaska lie along the coastline, the cost of importing food to remote villages can be high because of the terrain and difficult road conditions, which change dramatically with climate and precipitation. Transport costs can reach 50¢ per pound ($1.10/kg) or more in some remote areas during the most difficult times, if these locations can be reached at all in such weather and terrain conditions. The cost of delivering a gallon of milk is about $3.50 in many villages, where per capita income can be $20,000 or less. Fuel cost per gallon is routinely twenty to thirty cents higher than the contiguous United States average, with only Hawaii having higher prices.
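For the freight rates quoted above, per-pound and per-kilogram prices convert as in this minimal Python sketch (the 50¢ figure is simply the example from the text):

LB_PER_KG = 2.20462  # pounds per kilogram

def per_kg(cost_per_lb):
    # Convert a per-pound freight rate to a per-kilogram rate.
    return cost_per_lb * LB_PER_KG

print(round(per_kg(0.50), 2))  # 0.50 $/lb is about 1.10 $/kg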
Culture
Some of Alaska's popular annual events are the Iditarod Trail Sled Dog Race from Anchorage to Nome, World Ice Art Championships in Fairbanks, the Blueberry Festival and Alaska Hummingbird Festival in Ketchikan, the Sitka Whale Fest, and the Stikine River Garnet Fest in Wrangell. The Stikine River attracts the largest springtime concentration of American bald eagles in the world.
The Alaska Native Heritage Center celebrates the rich heritage of Alaska's 11 cultural groups. Their purpose is to encourage cross-cultural exchanges among all people and enhance self-esteem among Native people. The Alaska Native Arts Foundation promotes and markets Native art from all regions and cultures in the State, using the internet.
Music
Influences on music in Alaska include the traditional music of Alaska Natives as well as folk music brought by later immigrants from Russia and Europe. Prominent musicians from Alaska include singer Jewel, traditional Aleut flautist Mary Youngblood, folk singer-songwriter Libby Roderick, Christian music singer-songwriter Lincoln Brewster, metal/post hardcore band 36 Crazyfists and the groups Pamyua and Portugal. The Man.
There are many established music festivals in Alaska, including the Alaska Folk Festival, the Fairbanks Summer Arts Festival, the Anchorage Folk Festival, the Athabascan Old-Time Fiddling Festival, the Sitka Jazz Festival, and the Sitka Summer Music Festival. The most prominent orchestra in Alaska is the Anchorage Symphony Orchestra, though the Fairbanks Symphony Orchestra and Juneau Symphony are also notable. The Anchorage Opera is currently the state's only professional opera company, though there are several volunteer and semi-professional organizations in the state as well.
The official state song of Alaska is "Alaska's Flag", which was adopted in 1955; it celebrates the flag of Alaska.
Alaska on film and television
The 1983 Disney movie Never Cry Wolf was at least partially shot in Alaska. The 1991 film White Fang, based on Jack London's 1906 novel and starring Ethan Hawke, was filmed in and around Haines. Steven Seagal's 1994 On Deadly Ground, co-starring Michael Caine, was filmed in part at the Worthington Glacier near Valdez.
Many reality television shows are filmed in Alaska. In 2011, the Anchorage Daily News found ten set in the state.
Sports
Public health and public safety
The Alaska State Troopers are Alaska's statewide police force. They have a long and storied history, but were not an official organization until 1941. Before the force was officially organized, law enforcement in Alaska was handled by various federal agencies. Larger towns usually have their own local police and some villages rely on "Public Safety Officers" who have police training but do not carry firearms. In much of the state, the troopers serve as the only police force available. In addition to enforcing traffic and criminal law, wildlife Troopers enforce hunting and fishing regulations. Due to the varied terrain and wide scope of the Troopers' duties, they employ a wide variety of land, air, and water patrol vehicles.
Many rural communities in Alaska are considered "dry", having outlawed the importation of alcoholic beverages. Suicide rates for rural residents are higher than for urban residents.
Domestic abuse and other violent crimes are also at high levels in the state; this is in part linked to alcohol abuse. Alaska has the highest rate of sexual assault in the nation, especially in rural areas. The average age of sexual assault victims is 16. In four out of five cases, the suspects were relatives, friends or acquaintances.
Health insurance
CVS Health and Premera account for 47% and 46% of private health insurance, respectively. Premera and Moda Health offer insurance on the federally-run Affordable Care Exchange.
Healthcare facilities
Providence Alaska Medical Center in Anchorage is the largest hospital in the state as of 2021; Anchorage also hosts Alaska Regional Hospital and Alaska Native Medical Center.
Alaska's other major cities, such as Fairbanks and Juneau, also have local hospitals. In Southeast Alaska, the Southeast Alaska Regional Health Consortium runs healthcare facilities across 27 communities as of 2022, including hospitals in Sitka and Wrangell; although it originally served Native Americans only, it has expanded access and combined with other local facilities over time.
Education
The Alaska Department of Education and Early Development administers many school districts in Alaska. In addition, the state operates a boarding school, Mt. Edgecumbe High School in Sitka, and provides partial funding for other boarding schools, including Nenana Student Living Center in Nenana and The Galena Interior Learning Academy in Galena.
There are more than a dozen colleges and universities in Alaska. Accredited universities in Alaska include the University of Alaska Anchorage, University of Alaska Fairbanks, University of Alaska Southeast, and Alaska Pacific University. Alaska is the only state that has no collegiate athletic programs that are members of NCAA Division I, although both Alaska-Fairbanks and Alaska-Anchorage maintain single sport membership in Division I for men's ice hockey.
The Alaska Department of Labor and Workforce Development operates AVTEC, Alaska's Institute of Technology. Campuses in Seward and Anchorage offer one-week to 11-month training programs in areas as diverse as Information Technology, Welding, Nursing, and Mechanics.
Alaska has had a problem with a "brain drain". Many of its young people, including most of the highest academic achievers, leave the state after high school graduation and do not return. Alaska has no law school or medical school. The University of Alaska has attempted to combat this by offering partial four-year scholarships to the top 10% of Alaska high school graduates via the Alaska Scholars Program.
Since 1998, schools in rural Alaska must have at least 10 students to retain state funding, and campuses that fall below that minimum close. The rule stemmed from the loss of oil revenues that had previously propped up smaller rural schools. In 2015, there was a proposal to raise the minimum to 25, but state legislators largely did not support it.
Transportation
Roads
Alaska has few road connections compared to the rest of the U.S. The state's road system covers a relatively small area of the state, linking the central population centers to the Alaska Highway, the principal route out of the state through Canada. The state capital, Juneau, is not accessible by road, only by ferry or air; this has spurred decades of debate about moving the capital to a city on the road system, or building a road connection from Haines. The western part of Alaska has no road system connecting its communities with the rest of the state.
The Interstate Highways in Alaska consist of a total of . One unique feature of the Alaska Highway system is the Anton Anderson Memorial Tunnel, an active Alaska Railroad tunnel recently upgraded to provide a paved roadway link between the isolated community of Whittier on Prince William Sound and the Seward Highway about southeast of Anchorage at Portage. At , the tunnel was the longest road tunnel in North America until 2007; it remains the longest combination road and rail tunnel in North America.
Rail
Built around 1915, the Alaska Railroad (ARR) played a key role in the development of Alaska through the 20th century. It links North Pacific shipping lanes to Interior Alaska via tracks that run from Seward through South Central Alaska, passing through Anchorage, Eklutna, Wasilla, Talkeetna, Denali, and Fairbanks, with spurs to Whittier, Palmer and North Pole. The cities, towns, villages, and region served by ARR tracks are known statewide as "The Railbelt". In recent years, the ever-improving paved highway system has begun to eclipse the railroad's importance in Alaska's economy.
The railroad played a vital role in Alaska's development, moving freight into Alaska while transporting natural resources southward, such as coal from the Usibelli coal mine near Healy to Seward and gravel from the Matanuska Valley to Anchorage. It is well known for its summertime tour passenger service.
The Alaska Railroad was one of the last railroads in North America to use cabooses in regular service and still uses them on some gravel trains. It continues to offer one of the last flag stop routes in the country. A stretch of about of track along an area north of Talkeetna remains inaccessible by road; the railroad provides the only transportation to rural homes and cabins in the area. Until construction of the Parks Highway in the 1970s, the railroad provided the only land access to most of the region along its entire route.
In northern Southeast Alaska, the White Pass and Yukon Route also partly runs through the state from Skagway northwards into Canada (British Columbia and Yukon Territory), crossing the border at White Pass Summit. This line is now mainly used by tourists, often arriving by cruise liner at Skagway. It was featured in the 1983 BBC television series Great Little Railways.
These two railroads are connected neither to each other nor to any other railroad. The nearest link to the North American railway network is the northwest terminus of the Canadian National Railway at Prince Rupert, British Columbia, several hundred miles to the southeast. In 2000, the U.S. Congress authorized $6 million to study the feasibility of a rail link between Alaska, Canada, and the lower 48 states. As of 2021, the Alaska-Alberta Railway Development Corporation had been placed into receivership.
Some private companies provide car float service between Whittier and Seattle.
Marine transport
Many cities, towns and villages in the state do not have road or highway access; the only modes of access involve travel by air, river, or the sea.
Alaska's well-developed state-owned ferry system (known as the Alaska Marine Highway) serves the cities of Southeast Alaska, the Gulf Coast and the Alaska Peninsula. The ferries transport vehicles as well as passengers. The system also operates a ferry service from Bellingham, Washington, and Prince Rupert, British Columbia, in Canada through the Inside Passage to Skagway. The Inter-Island Ferry Authority also serves as an important marine link for many communities in the Prince of Wales Island region of Southeast Alaska and works in concert with the Alaska Marine Highway.
In recent years, cruise lines have created a summertime tourism market, mainly connecting the Pacific Northwest to Southeast Alaska and, to a lesser degree, towns along Alaska's gulf coast. The population of Ketchikan for example fluctuates dramatically on many days—up to four large cruise ships can dock there at the same time.
Air transport
Cities not served by road, sea, or river can be reached only by air, foot, dogsled, or snowmachine, accounting for Alaska's extremely well-developed bush air services, an Alaskan novelty. Anchorage and, to a lesser extent, Fairbanks are served by many major airlines. Because of limited highway access, air travel remains the most efficient form of transportation in and out of the state. Anchorage recently completed extensive remodeling and construction at Ted Stevens Anchorage International Airport to help accommodate the upsurge in tourism (in 2012–2013, Alaska received almost two million visitors).
Making regular flights to most villages and towns within the state commercially viable is difficult, so they are heavily subsidized by the federal government through the Essential Air Service program. Alaska Airlines is the only major airline offering in-state travel with jet service (sometimes in combination cargo and passenger Boeing 737-400s) from Anchorage and Fairbanks to regional hubs like Bethel, Nome, Kotzebue, Dillingham, Kodiak, and other larger communities as well as to major Southeast and Alaska Peninsula communities.
The bulk of remaining commercial flight offerings come from small regional commuter airlines such as Ravn Alaska, PenAir, and Frontier Flying Service. The smallest towns and villages must rely on scheduled or chartered bush flying services using general aviation aircraft such as the Cessna Caravan, the most popular aircraft in use in the state. Much of this service can be attributed to the Alaska bypass mail program which subsidizes bulk mail delivery to Alaskan rural communities. The program requires 70% of that subsidy to go to carriers who offer passenger service to the communities.
Many communities have small air taxi services. These operations originated from the demand for customized transport to remote areas. Perhaps the most quintessentially Alaskan plane is the bush seaplane. The world's busiest seaplane base is Lake Hood, located next to Ted Stevens Anchorage International Airport, where flights bound for remote villages without an airstrip carry passengers, cargo, and many items from stores and warehouse clubs.
In 2006, Alaska had the highest number of pilots per capita of any U.S. state. As of 2020, there were 8,795 active pilot certificates in the state: 2,507 private, 1,496 commercial, 2,180 airline transport, and 2,239 student certificates. In addition, 3,987 pilots held an instrument rating and 1,511 were flight instructors.
Other transport
Another Alaskan transportation method is the dogsled. In modern times (that is, since the mid-to-late 1920s), dog mushing is more a sport than a true means of transportation. Various races are held around the state, but the best known is the Iditarod Trail Sled Dog Race, a trail from Anchorage to Nome (although the route varies from year to year, the official distance is set at ). The race commemorates the famous 1925 serum run to Nome, in which mushers and dogs like Togo and Balto took much-needed medicine to the diphtheria-stricken community of Nome when all other means of transportation had failed. Mushers from all over the world come to Anchorage each March to compete for cash, prizes, and prestige. The "Serum Run" is another sled dog race that more accurately follows the route of the famous 1925 relay, leaving from the community of Nenana (southwest of Fairbanks) to Nome.
In areas not served by road or rail, primary transportation in summer is by all-terrain vehicle and in winter by snowmobile or "snow machine", as it is commonly referred to in Alaska.
Data transport
Alaska's internet and other data transport systems are provided largely through the two major telecommunications companies: GCI and Alaska Communications. GCI owns and operates what it calls the Alaska United Fiber Optic system and, as of late 2011, Alaska Communications advertised that it has "two fiber optic paths to the lower 48 and two more across Alaska". In January 2011, it was reported that a $1 billion project to connect Asia and rural Alaska was being planned, aided in part by $350 million in stimulus from the federal government.
Law and government
State government
Like all other U.S. states, Alaska is governed as a republic, with three branches of government: an executive branch consisting of the governor of Alaska and their appointees which head executive departments; a legislative branch consisting of the Alaska House of Representatives and Alaska Senate; and a judicial branch consisting of the Alaska Supreme Court and lower courts.
The state of Alaska employs approximately 16,000 people statewide.
The Alaska Legislature consists of a 40-member House of Representatives and a 20-member Senate. Senators serve four-year terms and House members two. The governor of Alaska serves four-year terms. The lieutenant governor runs separately from the governor in the primaries, but during the general election, the nominee for governor and nominee for lieutenant governor run together on the same ticket.
Alaska's court system has four levels: the Alaska Supreme Court, the Alaska Court of Appeals, the superior courts and the district courts. The superior and district courts are trial courts. Superior courts are courts of general jurisdiction, while district courts hear only certain types of cases, including misdemeanor criminal cases and civil cases valued up to $100,000.
The Supreme Court and the Court of Appeals are appellate courts. The Court of Appeals is required to hear appeals from certain lower-court decisions, including those regarding criminal prosecutions, juvenile delinquency, and habeas corpus. The Supreme Court hears civil appeals and may in its discretion hear criminal appeals.
State politics
Although in its early years of statehood Alaska was a Democratic state, since the early 1970s it has been characterized as Republican-leaning. Local political communities have often worked on issues related to land use development, fishing, tourism, and individual rights. Alaska Natives, while organized in and around their communities, have been active within the Native corporations. These have been given ownership over large tracts of land, which require stewardship.
Alaska was formerly the only state in which possession of one ounce or less of marijuana in one's home was completely legal under state law, though the federal law remains in force.
The state has an independence movement favoring a vote on secession from the United States, represented by the Alaskan Independence Party.
Six Republicans and four Democrats have served as governor of Alaska. In addition, Republican governor Wally Hickel was elected to the office for a second term in 1990 after leaving the Republican party and briefly joining the Alaskan Independence Party ticket just long enough to be reelected. He officially rejoined the Republican party in 1994.
Alaska's voter initiative legalizing marijuana took effect on February 24, 2015, making Alaska the third U.S. state, after Colorado and Washington, to legalize recreational marijuana; Washington, D.C., had also done so. The new law allows people over 21 to consume small amounts of cannabis. The first legal marijuana store opened in Valdez in October 2016.
Voter registration
Taxes
To finance state government operations, Alaska depends primarily on petroleum revenues and federal subsidies. This allows it to have the lowest individual tax burden in the United States. It is one of five states with no sales tax, one of seven states with no individual income tax, and—along with New Hampshire—one of two that has neither. The Department of Revenue Tax Division reports regularly on the state's revenue sources. The department also issues an annual summary of its operations, including new state laws that directly affect the tax division. In 2014, the Tax Foundation ranked Alaska as having the fourth most "business friendly" tax policy, behind only Wyoming, South Dakota, and Nevada.
While Alaska has no state sales tax, 89 municipalities collect a local sales tax, from 1.0 to 7.5%, typically 3–5%. Other local taxes levied include raw fish taxes, hotel, motel, and bed-and-breakfast 'bed' taxes, severance taxes, liquor and tobacco taxes, gaming (pull tabs) taxes, tire taxes and fuel transfer taxes. A part of the revenue collected from certain state taxes and license fees (such as petroleum, aviation motor fuel, telephone cooperative) is shared with municipalities in Alaska.
The fall in oil prices after the fracking boom in the early 2010s has decimated Alaska's state treasury, which has historically received about 85 percent of its revenue from taxes and fees imposed on oil and gas companies. The state government has had to drastically reduce its budget, and has brought its budget shortfall from over $2 billion in 2016 to under $500 million by 2018. In 2020, Alaska's state government budget was $4.8 billion, while projected government revenues were only $4.5 billion.
Federal politics
Alaska regularly supports Republicans in presidential elections and has done so since statehood. Republicans have won the state's electoral college votes in all but one election that it has participated in (1964). No state has voted for a Democratic presidential candidate fewer times. Alaska was carried by Democratic nominee Lyndon B. Johnson during his landslide election in 1964, while the 1960 and 1968 elections were close. Since 1972, however, Republicans have carried the state by large margins. In 2008, Republican John McCain defeated Democrat Barack Obama in Alaska, 59.49% to 37.83%. McCain's running mate was Sarah Palin, the state's governor and the first Alaskan on a major party ticket. Obama lost Alaska again in 2012, but he captured 40% of the state's vote in that election, making him the first Democrat to do so since 1968. In 2020, Joe Biden received 42.77% of the vote for president, marking the high point for a Democratic presidential candidate since Johnson's 1964 victory.
The Alaska Bush, central Juneau, midtown and downtown Anchorage, and the areas surrounding the University of Alaska Fairbanks campus and Ester have been strongholds of the Democratic Party. The Matanuska-Susitna Borough, the majority of Fairbanks (including North Pole and the military base), and South Anchorage typically have the strongest Republican showing.
Elections
Alaska has a long history of primary defeats for incumbent U.S. senators: Ernest Gruening, Mike Gravel and Lisa Murkowski all lost their parties' nominations when seeking re-election, although Murkowski went on to win re-election through a write-in campaign. Despite this, Alaska has had some long-serving members of Congress, with Ted Stevens serving as U.S. senator for 40 years and Don Young serving as the at-large representative for 49 years.
In the 2020 election cycle, Alaskan voters approved Ballot Measure 2. The measure passed by a margin of 1.1%, or about 4,000 votes. It requires campaigns to disclose the original source and any intermediaries for campaign contributions over $2,000. It also establishes non-partisan blanket primaries for statewide elections (as in Washington state and California) and ranked-choice voting (as in Maine). Measure 2 makes Alaska the third state with blanket ("jungle") primaries for all statewide races, the second state with ranked-choice voting, and the only state with both.
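Ranked-choice voting as adopted here is an instant-runoff procedure: ballots count toward their highest-ranked surviving candidate, and the weakest candidate is repeatedly eliminated until someone holds a majority. The following Python sketch is a simplified illustration of that counting rule, not Alaska's official tabulation procedure; the ballot format and arbitrary tie-breaking are assumptions:

from collections import Counter

def instant_runoff(ballots):
    # Each ballot is a list of candidate names, ranked from most to least preferred.
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot toward its highest-ranked surviving candidate.
        tallies = Counter(ballot[0] for ballot in ballots if ballot)
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > sum(tallies.values()):
            return leader  # the leader holds a majority of the active ballots
        # Eliminate the candidate with the fewest first-choice votes
        # (ties are broken arbitrarily in this sketch).
        loser = min(candidates, key=lambda c: tallies.get(c, 0))
        candidates.discard(loser)
        ballots = [[c for c in ballot if c in candidates] for ballot in ballots]

# Example: "B" wins once the weakest candidate is eliminated and ballots transfer.
print(instant_runoff([["A", "B"], ["B"], ["C", "B"], ["B", "A"]]))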
The first race to use the new system was the 2022 special election to fill Alaska's only U.S. House seat, left vacant by the death of Don Young. It was won by Mary Peltola, the first Democrat to win the seat since 1972 and the first Alaska Native ever elected to the United States Congress.
See also
Index of Alaska-related articles
Outline of Alaska
List of boroughs and census areas in Alaska
USS Alaska, 4 ships
Algae
Algae (singular: alga) is an informal term for a large and diverse group of photosynthetic, eukaryotic organisms. It is a polyphyletic grouping that includes species from multiple distinct clades. Included organisms range from unicellular microalgae, such as Chlorella, Prototheca and the diatoms, to multicellular forms, such as the giant kelp, a large brown alga which may grow up to 50 m in length. Most are aquatic and lack many of the distinct cell and tissue types, such as stomata, xylem and phloem, that are found in land plants. The largest and most complex marine algae are called seaweeds, while the most complex freshwater forms are the Charophyta, a division of green algae which includes, for example, Spirogyra and stoneworts. Algae that are carried by water are plankton, specifically phytoplankton.
Algae constitute a polyphyletic group since they do not include a common ancestor, and although their plastids seem to have a single origin, from cyanobacteria, they were acquired in different ways. Green algae are examples of algae that have primary chloroplasts derived from endosymbiotic cyanobacteria. Diatoms and brown algae are examples of algae with secondary chloroplasts derived from an endosymbiotic red alga. Algae exhibit a wide range of reproductive strategies, from simple asexual cell division to complex forms of sexual reproduction.
Algae lack the various structures that characterize land plants, such as the phyllids (leaf-like structures) of bryophytes, rhizoids of non-vascular plants, and the roots, leaves, and other organs found in tracheophytes (vascular plants). Most are phototrophic, although some are mixotrophic, deriving energy both from photosynthesis and uptake of organic carbon either by osmotrophy, myzotrophy, or phagotrophy. Some unicellular species of green algae, many golden algae, euglenids, dinoflagellates, and other algae have become heterotrophs (also called colorless or apochlorotic algae), sometimes parasitic, relying entirely on external energy sources and have limited or no photosynthetic apparatus. Some other heterotrophic organisms, such as the apicomplexans, are also derived from cells whose ancestors possessed plastids, but are not traditionally considered as algae. Algae have photosynthetic machinery ultimately derived from cyanobacteria that produce oxygen as a by-product of photosynthesis, unlike other photosynthetic bacteria such as purple and green sulfur bacteria. Fossilized filamentous algae from the Vindhya basin have been dated back to 1.6 to 1.7 billion years ago.
Because of the wide range of algal types, they have increasingly varied industrial and traditional applications in human society. Traditional seaweed-farming practices have existed for thousands of years and have strong traditions in East Asian food cultures. More modern algaculture applications extend these food traditions to other uses, including cattle feed, bioremediation and pollution control, transforming sunlight into algae fuels or other chemicals used in industrial processes, and medical and scientific applications. A 2020 review found that these applications of algae could play an important role in carbon sequestration to mitigate climate change while providing lucrative value-added products for global economies.
Etymology and study
The singular alga is the Latin word for 'seaweed' and retains that meaning in English. The etymology is obscure. Although some speculate that it is related to a Latin word meaning 'be cold', no reason is known to associate seaweed with temperature. A more likely source is a word meaning 'binding, entwining'.
The Ancient Greek word for 'seaweed' could mean either the seaweed (probably red algae) or a red dye derived from it. Its Latinization meant primarily the cosmetic rouge. The etymology is uncertain, but a strong candidate has long been some word related to the Biblical word for 'paint' (if not that word itself), a cosmetic eye-shadow used by the ancient Egyptians and other inhabitants of the eastern Mediterranean. It could be any color: black, red, green, or blue.
The study of algae is most commonly called phycology; the term algology is falling out of use.
Classifications
One definition of algae is that they "have chlorophyll as their primary photosynthetic pigment and lack a sterile covering of cells around their reproductive cells". On the other hand, the colorless Prototheca under Chlorophyta are all devoid of any chlorophyll. Although cyanobacteria are often referred to as "blue-green algae", most authorities exclude all prokaryotes, including cyanobacteria, from the definition of algae.
The algae contain chloroplasts that are similar in structure to cyanobacteria. Chloroplasts contain circular DNA like that in cyanobacteria and are interpreted as representing reduced endosymbiotic cyanobacteria. However, the exact origin of the chloroplasts differs among separate lineages of algae, reflecting their acquisition during different endosymbiotic events. Many algal groups contain some members that are no longer photosynthetic; some retain plastids, but not chloroplasts, while others have lost plastids entirely.
(Cladogram: phylogeny based on plastid rather than nucleocytoplasmic genealogy.)
Linnaeus, in Species Plantarum (1753), the starting point for modern botanical nomenclature, recognized 14 genera of algae, of which only four are currently considered among algae. In Systema Naturae, Linnaeus described the genera Volvox and Corallina, and a species of Acetabularia (as Madrepora), among the animals.
In 1768, Samuel Gottlieb Gmelin (1744–1774) published the Historia Fucorum, the first work dedicated to marine algae and the first book on marine biology to use the then new binomial nomenclature of Linnaeus. It included elaborate illustrations of seaweed and marine algae on folded leaves.
W. H. Harvey (1811–1866) and Lamouroux (1813) were the first to divide macroscopic algae into four divisions based on their pigmentation. This is the first use of a biochemical criterion in plant systematics. Harvey's four divisions are: red algae (Rhodospermae), brown algae (Melanospermae), green algae (Chlorospermae), and Diatomaceae.
At this time, microscopic algae were discovered and reported by a different group of workers (e.g., O. F. Müller and Ehrenberg) studying the Infusoria (microscopic organisms). Unlike macroalgae, which were clearly viewed as plants, microalgae were frequently considered animals because they are often motile. Even the nonmotile (coccoid) microalgae were sometimes merely seen as stages of the lifecycle of plants, macroalgae, or animals.
Although used as a taxonomic category in some pre-Darwinian classifications, e.g., Linnaeus (1753), de Jussieu (1789), Horaninow (1843), Agassiz (1859), Wilson & Cassin (1864), in further classifications, the "algae" are seen as an artificial, polyphyletic group.
Throughout the 20th century, most classifications treated the following groups as divisions or classes of algae: cyanophytes, rhodophytes, chrysophytes, xanthophytes, bacillariophytes, phaeophytes, pyrrhophytes (cryptophytes and dinophytes), euglenophytes, and chlorophytes. Later, many new groups were discovered (e.g., Bolidophyceae), and others were splintered from older groups: charophytes and glaucophytes (from chlorophytes), many heterokontophytes (e.g., synurophytes from chrysophytes, or eustigmatophytes from xanthophytes), haptophytes (from chrysophytes), and chlorarachniophytes (from xanthophytes).
With the abandonment of plant-animal dichotomous classification, most groups of algae (sometimes all) were included in Protista, later also abandoned in favour of Eukaryota. However, as a legacy of the older plant life scheme, some groups that were also treated as protozoans in the past still have duplicated classifications (see ambiregnal protists).
Some parasitic algae (e.g., the green algae Prototheca and Helicosporidium, parasites of metazoans, or Cephaleuros, parasites of plants) were originally classified as fungi, sporozoans, or protistans of incertae sedis, while others (e.g., the green algae Phyllosiphon and Rhodochytrium, parasites of plants, or the red algae Pterocladiophila and Gelidiocolax mammillatus, parasites of other red algae, or the dinoflagellates Oodinium, parasites of fish) had their relationship with algae conjectured early. In other cases, some groups were originally characterized as parasitic algae (e.g., Chlorochytrium), but later were seen as endophytic algae. Some filamentous bacteria (e.g., Beggiatoa) were originally seen as algae. Furthermore, groups like the apicomplexans are also parasites derived from ancestors that possessed plastids, but are not included in any group traditionally seen as algae.
Relationship to land plants
The first land plants probably evolved from shallow freshwater charophyte algae much like Chara almost 500 million years ago. These probably had an isomorphic alternation of generations and were probably filamentous. Fossils of isolated land plant spores suggest land plants may have been around as long as 475 million years ago.
Morphology
A range of algal morphologies is exhibited, and convergence of features in unrelated groups is common. The only groups to exhibit three-dimensional multicellular thalli are the reds and browns, and some chlorophytes. Apical growth is constrained to subsets of these groups: the florideophyte reds, various browns, and the charophytes. The form of charophytes is quite different from those of reds and browns, because they have distinct nodes, separated by internode 'stems'; whorls of branches reminiscent of the horsetails occur at the nodes. Conceptacles are another polyphyletic trait; they appear in the coralline algae and the Hildenbrandiales, as well as the browns.
Most of the simpler algae are unicellular flagellates or amoeboids, but colonial and nonmotile forms have developed independently among several of the groups. Some of the more common organizational levels, more than one of which may occur in the lifecycle of a species, are
Colonial: small, regular groups of motile cells
Capsoid: individual non-motile cells embedded in mucilage
Coccoid: individual non-motile cells with cell walls
Palmelloid: nonmotile cells embedded in mucilage
Filamentous: a string of connected nonmotile cells, sometimes branching
Parenchymatous: cells forming a thallus with partial differentiation of tissues
In three lines, even higher levels of organization have been reached, with full tissue differentiation. These are the brown algae, some of which may reach 50 m in length (kelps); the red algae; and the green algae. The most complex forms are found among the charophyte algae (see Charales and Charophyta), in a lineage that eventually led to the higher land plants. The innovation that defines these nonalgal plants is the presence of female reproductive organs with protective cell layers that protect the zygote and developing embryo. Hence, the land plants are referred to as the Embryophytes.
Turfs
The term algal turf is commonly used but poorly defined. Algal turfs are thick, carpet-like beds of seaweed that retain sediment and compete with foundation species like corals and kelps, and they are usually less than 15 cm tall. Such a turf may consist of one or more species, and will generally cover an area in the order of a square metre or more. Some common characteristics are listed:
Algae that form aggregations that have been described as turfs include diatoms, cyanobacteria, chlorophytes, phaeophytes and rhodophytes. Turfs are often composed of numerous species at a wide range of spatial scales, but monospecific turfs are frequently reported.
Turfs can be morphologically highly variable over geographic scales and even within species on local scales and can be difficult to identify in terms of the constituent species.
Turfs have been defined as short algae, but this has been used to describe height ranges from less than 0.5 cm to more than 10 cm. In some regions, the descriptions approached heights which might be described as canopies (20 to 30 cm).
Physiology
Many algae, particularly species of the Characeae, have served as model experimental organisms to understand the mechanisms of the water permeability of membranes, osmoregulation, turgor regulation, salt tolerance, cytoplasmic streaming, and the generation of action potentials.
Phytohormones are found not only in higher plants, but in algae, too.
Symbiotic algae
Some species of algae form symbiotic relationships with other organisms. In these symbioses, the algae supply photosynthates (organic substances) to the host organism, which in turn provides protection to the algal cells. The host organism derives some or all of its energy requirements from the algae. Examples are:
Lichens
Lichens are defined by the International Association for Lichenology to be "an association of a fungus and a photosynthetic symbiont resulting in a stable vegetative body having a specific structure". The fungi, or mycobionts, are mainly from the Ascomycota with a few from the Basidiomycota. In nature, they do not occur separate from lichens. It is unknown when they began to associate. One mycobiont associates with the same phycobiont species, rarely two, from the green algae, except that alternatively, the mycobiont may associate with a species of cyanobacteria (hence "photobiont" is the more accurate term). A photobiont may be associated with many different mycobionts or may live independently; accordingly, lichens are named and classified as fungal species. The association is termed a morphogenesis because the lichen has a form and capabilities not possessed by the symbiont species alone (they can be experimentally isolated). The photobiont possibly triggers otherwise latent genes in the mycobiont.
Trentepohlia is an example of a common green alga genus worldwide that can grow on its own or be lichenised. Lichens thus share some of their habitat, and often a similar appearance, with specialized species of algae (aerophytes) that grow on exposed surfaces such as tree trunks and rocks and sometimes discolor them.
Coral reefs
Coral reefs are accumulated from the calcareous exoskeletons of marine invertebrates of the order Scleractinia (stony corals). These animals metabolize sugar and oxygen to obtain energy for their cell-building processes, including secretion of the exoskeleton, with water and carbon dioxide as byproducts. Dinoflagellates (algal protists) are often endosymbionts in the cells of the coral-forming marine invertebrates, where they accelerate host-cell metabolism by generating sugar and oxygen immediately available through photosynthesis using incident light and the carbon dioxide produced by the host. Reef-building stony corals (hermatypic corals) require endosymbiotic algae from the genus Symbiodinium to be in a healthy condition. The loss of Symbiodinium from the host is known as coral bleaching, a condition which leads to the deterioration of a reef.
Sea sponges
Endosymbiotic green algae live close to the surface of some sponges, for example, breadcrumb sponges (Halichondria panicea). The alga is thus protected from predators; the sponge is provided with oxygen and sugars, which can account for 50 to 80% of sponge growth in some species.
Life cycle
Rhodophyta, Chlorophyta, and Heterokontophyta, the three main algal divisions, have life cycles which show considerable variation and complexity. In general, an asexual phase exists where the seaweed's cells are diploid, a sexual phase where the cells are haploid, followed by fusion of the male and female gametes. Asexual reproduction permits efficient population increases, but less variation is possible. Commonly, in sexual reproduction of unicellular and colonial algae, two specialized, sexually compatible, haploid gametes make physical contact and fuse to form a zygote. To ensure a successful mating, the development and release of gametes is highly synchronized and regulated; pheromones may play a key role in these processes. Sexual reproduction allows for more variation and provides the benefit of efficient recombinational repair of DNA damages during meiosis, a key stage of the sexual cycle. However, sexual reproduction is more costly than asexual reproduction. Meiosis has been shown to occur in many different species of algae.
Numbers
The Algal Collection of the US National Herbarium (located in the National Museum of Natural History) consists of approximately 320,500 dried specimens, which, although not exhaustive (no exhaustive collection exists), gives an idea of the order of magnitude of the number of algal species (that number remains unknown). Estimates vary widely. For example, according to one standard textbook, in the British Isles the UK Biodiversity Steering Group Report estimated there to be 20,000 algal species in the UK. Another checklist reports only about 5,000 species. Regarding the difference of about 15,000 species, the text concludes: "It will require many detailed field surveys before it is possible to provide a reliable estimate of the total number of species ..."
Regional and group estimates have been made, as well:
5,000–5,500 species of red algae worldwide
"some 1,300 in Australian Seas"
400 seaweed species for the western coastline of South Africa, and 212 species from the coast of KwaZulu-Natal. Some of these are duplicates, as the range extends across both coasts, and the total recorded is probably about 500 species. Most of these are listed in List of seaweeds of South Africa. These exclude phytoplankton and crustose corallines.
669 marine species from California (US)
642 in the check-list of Britain and Ireland
and so on, but lacking any scientific basis or reliable sources, these numbers have no more credibility than the British ones mentioned above. Most estimates also omit microscopic algae, such as phytoplankton.
The most recent estimate suggests 72,500 algal species worldwide.
Distribution
The distribution of algal species has been fairly well studied since the founding of phytogeography in the mid-19th century. Algae spread mainly by the dispersal of spores analogously to the dispersal of Plantae by seeds and spores. This dispersal can be accomplished by air, water, or other organisms. Due to this, spores can be found in a variety of environments: fresh and marine waters, air, soil, and in or on other organisms. Whether a spore is to grow into an organism depends on the combination of the species and the environmental conditions where the spore lands.
The spores of freshwater algae are dispersed mainly by running water and wind, as well as by living carriers. However, not all bodies of water can carry all species of algae, as the chemical composition of certain water bodies limits the algae that can survive within them. Marine spores are often spread by ocean currents. Ocean water presents many vastly different habitats based on temperature and nutrient availability, resulting in phytogeographic zones, regions, and provinces.
To some degree, the distribution of algae is subject to floristic discontinuities caused by geographical features, such as Antarctica, long distances of ocean or general land masses. It is, therefore, possible to identify species occurring by locality, such as "Pacific algae" or "North Sea algae". When they occur out of their localities, hypothesizing a transport mechanism is usually possible, such as the hulls of ships. For example, Ulva reticulata and U. fasciata travelled from the mainland to Hawaii in this manner.
Mapping is possible for select species only: "there are many valid examples of confined distribution patterns." For example, Clathromorphum is an arctic genus and is not mapped far south of there. However, scientists regard the overall data as insufficient due to the "difficulties of undertaking such studies."
Ecology
Algae are prominent in bodies of water, common in terrestrial environments, and found in unusual environments, such as on snow and ice. Seaweeds grow mostly in shallow marine waters, under deep; however, some such as Navicula pennata have been recorded to a depth of . A type of algae, Ancylonema nordenskioeldii, was found in Greenland in areas known as the "Dark Zone", where it increased the rate at which the ice sheet melts. The same alga was found in the Italian Alps after pink ice appeared on parts of the Presena glacier.
The various sorts of algae play significant roles in aquatic ecology. Microscopic forms that live suspended in the water column (phytoplankton) provide the food base for most marine food chains. In very high densities (algal blooms), these algae may discolor the water and outcompete, poison, or asphyxiate other life forms.
Algae can be used as indicator organisms to monitor pollution in various aquatic systems. In many cases, algal metabolism is sensitive to various pollutants. Due to this, the species composition of algal populations may shift in the presence of chemical pollutants. To detect these changes, algae can be sampled from the environment and maintained in laboratories with relative ease.
On the basis of their habitat, algae can be categorized as: aquatic (planktonic, benthic, marine, freshwater, lentic, lotic), terrestrial, aerial (subaerial), lithophytic, halophytic (or euryhaline), psammon, thermophilic, cryophilic, epibiont (epiphytic, epizoic), endosymbiont (endophytic, endozoic), parasitic, calcifilic or lichenic (phycobiont).
Cultural associations
In classical Chinese, the word 藻 is used both for "algae" and (in the modest tradition of the imperial scholars) for "literary talent". The third island in Kunming Lake beside the Summer Palace in Beijing is known as the Zaojian Tang Dao (藻鑒堂島), which thus simultaneously means "Island of the Algae-Viewing Hall" and "Island of the Hall for Reflecting on Literary Talent".
Cultivation
Seaweed farming
Bioreactors
Uses
Agar
Agar, a gelatinous substance derived from red algae, has a number of commercial uses. It is a good medium on which to grow bacteria and fungi, as most microorganisms cannot digest agar.
Alginates
Alginic acid, or alginate, is extracted from brown algae. Its uses range from gelling agents in food, to medical dressings. Alginic acid also has been used in the field of biotechnology as a biocompatible medium for cell encapsulation and cell immobilization. Molecular cuisine is also a user of the substance for its gelling properties, by which it becomes a delivery vehicle for flavours.
Between 100,000 and 170,000 wet tons of Macrocystis are harvested annually in Mexico for alginate extraction and abalone feed.
Energy source
To be competitive and independent of fluctuating support from (local) policy in the long run, biofuels should equal or beat the cost level of fossil fuels. Here, algae-based fuels hold great promise, owing directly to their potential to produce more biomass per unit area per year than any other feedstock. The break-even point for algae-based biofuels is estimated to occur by 2025.
Fertilizer
For centuries, seaweed has been used as a fertilizer; George Owen of Henllys, writing in the 16th century, referred to the use of drift weed in South Wales.
Today, algae are used by humans in many ways; for example, as fertilizers, soil conditioners, and livestock feed. Aquatic and microscopic species are cultured in clear tanks or ponds and are either harvested or used to treat effluents pumped through the ponds. Algaculture on a large scale is an important type of aquaculture in some places. Maerl is commonly used as a soil conditioner.
Nutrition
Naturally growing seaweeds are an important source of food, especially in Asia, leading some to label them as superfoods. They provide many vitamins including: A, B1, B2, B6, niacin, and C, and are rich in iodine, potassium, iron, magnesium, and calcium. In addition, commercially cultivated microalgae, including both algae and cyanobacteria, are marketed as nutritional supplements, such as spirulina, Chlorella and the vitamin-C supplement from Dunaliella, high in beta-carotene.
Algae are national foods of many nations: China consumes more than 70 species, including fat choy, a cyanobacterium considered a vegetable; Japan, over 20 species, such as nori and aonori; Ireland, dulse; Chile, cochayuyo. Laver is used to make laverbread in Wales, and green laver is eaten in Korea. Algae are also eaten along the west coast of North America from California to British Columbia, in Hawaii, and by the Māori of New Zealand. Sea lettuce and badderlocks are salad ingredients in Scotland, Ireland, Greenland, and Iceland. Algae are being considered as a potential solution to the world hunger problem.
Two popular forms of algae are used in cuisine:
Chlorella: This alga is found in freshwater and contains photosynthetic pigments in its chloroplast. It is high in iron, zinc, magnesium, vitamin B2, and omega-3 fatty acids, and it contains all nine essential amino acids that the body does not produce on its own.
Spirulina: Otherwise known as a cyanobacterium (a prokaryote, or "blue-green alga").
The oils from some algae have high levels of unsaturated fatty acids. For example, Parietochloris incisa is high in arachidonic acid, which reaches up to 47% of its triglyceride pool. Some varieties of algae favored by vegetarians and vegans contain the long-chain, essential omega-3 fatty acids docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA). Fish oil contains omega-3 fatty acids, but the original source is algae (microalgae in particular), which are eaten by marine life such as copepods and passed up the food chain. Algae have emerged in recent years as a popular source of omega-3 fatty acids for vegetarians, who cannot get long-chain EPA and DHA from other vegetarian sources such as flaxseed oil, which contains only the short-chain alpha-linolenic acid (ALA).
Pollution control
Sewage can be treated with algae, reducing the use of large amounts of toxic chemicals that would otherwise be needed.
Algae can be used to capture fertilizers in runoff from farms. When subsequently harvested, the enriched algae can be used as fertilizer.
Aquaria and ponds can be filtered using algae, which absorb nutrients from the water in a device called an algae scrubber, also known as an algae turf scrubber.
Agricultural Research Service scientists found that 60–90% of nitrogen runoff and 70–100% of phosphorus runoff can be captured from manure effluents using a horizontal algae scrubber, also called an algal turf scrubber (ATS). Scientists developed the ATS, which consists of shallow, 100-foot raceways of nylon netting where algae colonies can form, and studied its efficacy for three years. They found that algae can readily be used to reduce the nutrient runoff from agricultural fields and increase the quality of water flowing into rivers, streams, and oceans. Researchers collected and dried the nutrient-rich algae from the ATS and studied its potential as an organic fertilizer. They found that cucumber and corn seedlings grew just as well using ATS organic fertilizer as they did with commercial fertilizers. Algae scrubbers, using bubbling upflow or vertical waterfall versions, are now also being used to filter aquaria and ponds.
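As a back-of-the-envelope illustration of what such capture rates mean in practice, consider the following minimal Python sketch. The effluent quantities here are invented for illustration; only the 60–90% nitrogen and 70–100% phosphorus capture ranges come from the study described above.

def captured_range(nutrient_kg, low_rate, high_rate):
    # Return the (low, high) mass in kilograms captured by the scrubber
    # for a given range of capture efficiencies.
    return nutrient_kg * low_rate, nutrient_kg * high_rate

# Hypothetical effluent carrying 100 kg of nitrogen and 40 kg of phosphorus:
print(captured_range(100, 0.60, 0.90))  # nitrogen: about 60-90 kg captured
print(captured_range(40, 0.70, 1.00))   # phosphorus: about 28-40 kg captured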
Polymers
Various polymers can be created from algae, which can be especially useful in the creation of bioplastics. These include hybrid plastics, cellulose-based plastics, poly-lactic acid, and bio-polyethylene. Several companies have begun to produce algae polymers commercially, including for use in flip-flops and in surf boards.
Bioremediation
The alga Stichococcus bacillaris has been seen to colonize silicone resins used at archaeological sites, biodegrading the synthetic substance.
Pigments
The natural pigments (carotenoids and chlorophylls) produced by algae can be used as alternatives to chemical dyes and coloring agents.
The presence of some individual algal pigments, together with specific pigment concentration ratios, is taxon-specific: analysis of their concentrations with various analytical methods, particularly high-performance liquid chromatography, can therefore offer deep insight into the taxonomic composition and relative abundance of natural algae populations in sea water samples.
Stabilizing substances
Carrageenan, from the red alga Chondrus crispus, is used as a stabilizer in milk products.
See also
AlgaeBase
AlgaePARC
Eutrophication
Iron fertilization
Marimo algae
Microbiofuels
Microphyte
Photobioreactor
Phycotechnology
Plant
Toxoid – anatoxin
References
External links
AlgaeBase – a database of all algal names, including images, nomenclature, taxonomy, distribution, bibliography, uses, and extracts
|
https://en.wikipedia.org/wiki/Abacus
|
The abacus (plural abaci or abacuses), also called a counting frame, is a hand-operated calculating tool of unknown origin used since ancient times in the ancient Near East, Europe, China, and Russia, millennia before the adoption of the Hindu-Arabic numeral system.
The abacus consists of a two-dimensional array of slidable beads (or similar objects). In their earliest designs, the beads could be loose on a flat surface or sliding in grooves. Later the beads were made to slide on rods and built into a frame, allowing faster manipulation.
Each rod typically represents one digit of a multi-digit number laid out using a positional numeral system such as base ten (though some cultures used different numerical bases). Roman and East Asian abacuses use a system resembling bi-quinary coded decimal, with a top deck (containing one or two beads) representing fives and a bottom deck (containing four or five beads) representing ones. Natural numbers are normally used, but some abacuses allow simple fractional components (as on the Roman abacus), and a decimal point can be imagined for fixed-point arithmetic.
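The bi-quinary layout can be made concrete with a short sketch. This is illustrative Python, not a description of any particular historical device, and the function names are invented; each rod's digit splits into five-valued "upper" beads and one-valued "lower" beads, as on a 1:4 soroban.

def digit_to_beads(digit):
    # Split one decimal digit into (upper, lower) bead counts:
    # each upper bead counts five, each lower bead counts one.
    if not 0 <= digit <= 9:
        raise ValueError("a rod holds a single decimal digit")
    return digit // 5, digit % 5

def number_to_rods(n):
    # One (upper, lower) pair per rod, most significant rod first.
    return [digit_to_beads(int(c)) for c in str(n)]

print(number_to_rods(1907))  # [(0, 1), (1, 4), (0, 0), (1, 2)]

Reading a rod is the inverse, value = 5 × upper + lower, which is why one upper and four lower beads suffice for the digits 0 through 9.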
Any particular abacus design supports multiple methods to perform calculations, including addition, subtraction, multiplication, division, and square and cube roots. The beads are first arranged to represent a number, then are manipulated to perform a mathematical operation with another number, and their final position can be read as the result (or can be used as the starting number for subsequent operations).
In the ancient world, abacuses were a practical calculating tool. Although calculators and computers are commonly used today instead of abacuses, abacuses remain in everyday use in some countries. The abacus has an advantage of not requiring a writing implement and paper (needed for algorism) or an electric power source. Merchants, traders, and clerks in some parts of Eastern Europe, Russia, China, and Africa use abacuses. The abacus remains in common use as a scoring system in non-electronic table games. Others may use an abacus due to visual impairment that prevents the use of a calculator. The abacus is still used to teach the fundamentals of mathematics to children in most countries.
Etymology
The word abacus dates to at least AD 1387, when a Middle English work borrowed the word from a Latin term for a sandboard abacus. The Latin word is derived from the ancient Greek abax, which means something without a base and, colloquially, any piece of rectangular material. Alternatively, without reference to ancient texts on etymology, it has been suggested that it means "a square tablet strewn with dust", or "drawing-board covered with dust (for the use of mathematics)" (the exact shape of the Latin perhaps reflects the genitive form of the Greek word, abakos). While the table-strewn-with-dust definition is popular, some argue the evidence is insufficient for that conclusion. Greek probably borrowed from a Northwest Semitic language like Phoenician, as evidenced by a cognate with the Hebrew word ʾābāq, "dust" (in the post-Biblical sense "sand used as a writing surface").
Both abacuses and abaci are used as plurals. The user of an abacus is called an abacist.
History
Mesopotamia
The Sumerian abacus appeared between 2700 and 2300 BC. It held a table of successive columns which delimited the orders of magnitude of the sexagesimal (base-60) number system.
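A sexagesimal column layout is easy to sketch in code. The following illustrative Python, which assumes nothing about the physical form of the Sumerian device, decomposes a number into the base-60 digits that successive columns would hold.

def to_base_60(n):
    # Base-60 digits, least significant column first.
    digits = []
    while n:
        n, remainder = divmod(n, 60)
        digits.append(remainder)
    return digits or [0]

# 7265 = 2*60**2 + 1*60 + 5
print(to_base_60(7265))  # [5, 1, 2]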
Some scholars point to a character in Babylonian cuneiform that may have been derived from a representation of the abacus. Historians of Old Babylonian mathematics, such as Ettore Carruccio, believe that the Old Babylonians "seem to have used the abacus for the operations of addition and subtraction; however, this primitive device proved difficult to use for more complex calculations".
Egypt
Greek historian Herodotus mentioned the abacus in Ancient Egypt. He wrote that the Egyptians manipulated the pebbles from right to left, opposite in direction to the Greek left-to-right method. Archaeologists have found ancient disks of various sizes that are thought to have been used as counters. However, wall depictions of this instrument are yet to be discovered.
Persia
Around 600 BC, Persians first began to use the abacus, during the Achaemenid Empire. Under the Parthian, Sassanian, and later Iranian empires, scholars concentrated on exchanging knowledge and inventions with the countries around them – India, China, and the Roman Empire – which is how the abacus may have been exported to other countries.
Greece
The earliest archaeological evidence for the use of the Greek abacus dates to the 5th century BC. Demosthenes (384 BC–322 BC) complained that the need to use pebbles for calculations was too difficult. A play by Alexis from the 4th century BC mentions an abacus and pebbles for accounting, and both Diogenes and Polybius used the abacus as a metaphor for human behaviour, likening men to the pebbles on an abacus, which "sometimes stood for more and sometimes for less". The Greek abacus was a table of wood or marble, pre-set with small counters in wood or metal, for mathematical calculations. This Greek abacus was used in Achaemenid Persia, the Etruscan civilization, Ancient Rome, and the Western Christian world until the French Revolution.
A tablet found on the Greek island of Salamis in 1846 AD (the Salamis Tablet) dates to 300 BC, making it the oldest counting board discovered so far. It is a slab of white marble on which are 5 groups of markings. In the tablet's center is a set of 5 parallel lines equally divided by a vertical line, capped with a semicircle at the intersection of the bottom-most horizontal line and the single vertical line. Below these lines is a wide space with a horizontal crack dividing it. Below this crack is another group of eleven parallel lines, again divided into two sections by a line perpendicular to them, but with the semicircle at the top of the intersection; the third, sixth and ninth of these lines are marked with a cross where they intersect with the vertical line. Also from this time frame, the Darius Vase was unearthed in 1851. It was covered with pictures, including a "treasurer" holding a wax tablet in one hand while manipulating counters on a table with the other.
Rome
The normal method of calculation in ancient Rome, as in Greece, was by moving counters on a smooth table. Originally pebbles (Latin: calculi) were used. Marked lines indicated units, fives, tens, etc. as in the Roman numeral system.
Writing in the 1st century BC, Horace refers to the wax abacus, a board covered with a thin layer of black wax on which columns and figures were inscribed using a stylus.
One example of archaeological evidence of the Roman abacus, shown nearby in reconstruction, dates to the 1st century AD. It has eight long grooves containing up to five beads in each and eight shorter grooves having either one or no beads in each. The groove marked I indicates units, X tens, and so on up to millions. The beads in the shorter grooves denote fives (five units, five tens, etc.) resembling a bi-quinary coded decimal system related to the Roman numerals. The short grooves on the right may have been used for marking Roman "ounces" (i.e. fractions).
Medieval Europe
The Roman system of 'counter casting' was used widely in medieval Europe, and persisted in limited use into the nineteenth century. Wealthy abacists used decorative minted counters, called jetons.
Due to Pope Sylvester II's reintroduction of the abacus with modifications, it became widely used in Europe again during the 11th century. This abacus used beads on wires, unlike the traditional Roman counting boards, which meant it could be used much faster and was more easily moved.
China
The earliest known written documentation of the Chinese abacus dates to the 2nd century BC.
The Chinese abacus, also known as the suanpan (算盤/算盘, lit. "calculating tray"), comes in various lengths and widths, depending on the operator. It usually has more than seven rods. There are two beads on each rod in the upper deck and five beads each in the bottom one, to represent numbers in a bi-quinary coded decimal-like system. The beads are usually rounded and made of hardwood. The beads are counted by moving them up or down towards the beam; beads moved toward the beam are counted, while those moved away from it are not. One of the top beads is 5, while one of the bottom beads is 1. Each rod has a number under it, showing the place value. The suanpan can be reset to the starting position instantly by a quick movement along the horizontal axis to spin all the beads away from the horizontal beam at the center.
The prototype of the Chinese abacus appeared during the Han dynasty, with oval beads. The Song dynasty and earlier used the 1:4 type, a four-bead abacus similar in form and bead shape to the modern abacus commonly known as the Japanese-style abacus.
In the early Ming dynasty, the abacus began to appear in a 1:5 ratio. The upper deck had one bead and the bottom had five beads. In the late Ming dynasty, the abacus styles appeared in a 2:5 ratio. The upper deck had two beads, and the bottom had five.
Various calculation techniques were devised for the suanpan, enabling efficient calculations. Some schools teach students how to use it.
In the long scroll Along the River During the Qingming Festival, painted by Zhang Zeduan during the Song dynasty (960–1279), a suanpan is clearly visible beside an account book and doctor's prescriptions on the counter of an apothecary's (Feibao).
The similarity of the Roman abacus to the Chinese one suggests that one could have inspired the other, given evidence of a trade relationship between the Roman Empire and China. However, no direct connection has been demonstrated, and the similarity of the abacuses may be coincidental, both ultimately arising from counting with five fingers per hand. Where the Roman model (like most modern Korean and Japanese) has 4 plus 1 bead per decimal place, the standard suanpan has 5 plus 2. Incidentally, this allows use with a hexadecimal numeral system (or any base up to 18) which may have been used for traditional Chinese measures of weight. (Instead of running on wires as in the Chinese, Korean, and Japanese models, the Roman model used grooves, presumably making arithmetic calculations much slower.)
Another possible source of the suanpan is Chinese counting rods, which operated with a decimal system but lacked the concept of zero as a placeholder. The zero was probably introduced to the Chinese in the Tang dynasty (618–907) when travel in the Indian Ocean and the Middle East would have provided direct contact with India, allowing them to acquire the concept of zero and the decimal point from Indian merchants and mathematicians.
India
The Abhidharmakośabhāṣya of Vasubandhu (316-396), a Sanskrit work on Buddhist philosophy, says that the second-century CE philosopher Vasumitra said that "placing a wick (Sanskrit vartikā) on the number one (ekāṅka) means it is a one while placing the wick on the number hundred means it is called a hundred, and on the number one thousand means it is a thousand". It is unclear exactly what this arrangement may have been. Around the 5th century, Indian clerks were already finding new ways of recording the contents of the abacus. Hindu texts used the term śūnya (zero) to indicate the empty column on the abacus.
Japan
In Japan, the abacus is called soroban (lit. "counting tray"). It was imported from China in the 14th century. It was probably in use by the working class a century or more before the ruling class adopted it, as the class structure obstructed such changes. The 1:4 abacus, which removes the seldom-used second and fifth bead, became popular in the 1940s.
Today's Japanese abacus is a 1:4 type, four-bead abacus, introduced from China in the Muromachi era. It has one bead on the upper deck and four beads on the lower deck. As on the Chinese and Korean abacus, the upper bead is worth five and each lower bead is worth one, so any decimal digit can be expressed; hence the 1:4 design. The beads are always in the shape of a diamond. Quotient division is generally used instead of the traditional division method, so that digits are handled consistently in both multiplication and division. Later, Japan had a 3:5 abacus called 天三算盤, which is now in the Ize Rongji collection of Shansi Village in Yamagata City. Japan also used a 2:5 type abacus.
The four-bead abacus spread and became common around the world, and improvements to the Japanese abacus arose in various places. In China, an abacus with an aluminium frame and plastic beads was used; pressing its "clearing" button, set beside the beads, put the upper beads in the upper position and the lower beads in the lower position.
The abacus is still manufactured in Japan, even with the proliferation, practicality, and affordability of pocket electronic calculators. The use of the soroban is still taught in Japanese primary schools as part of mathematics, primarily as an aid to faster mental calculation. Using visual imagery of a soroban, one can complete a calculation as quickly as with a physical instrument.
Korea
The Chinese abacus migrated from China to Korea around 1400 AD. Koreans call it jupan (주판), supan (수판) or jusan (주산). The four-bead (1:4) abacus was introduced during the Goryeo Dynasty; the 5:1 abacus was introduced to Korea from China during the Ming Dynasty.
Native America
Some sources mention the use of an abacus called a nepohualtzintzin in ancient Aztec culture. This Mesoamerican abacus used a 5-digit base-20 system. The word nepōhualtzintzin comes from Nahuatl and is formed from the roots ne (personal), pōhual or pōhualli (the account), and tzintzin (small similar elements); its complete meaning was taken as "counting with small similar elements". Its use was taught in the Calmecac to the temalpouhqueh, students dedicated from childhood to taking the accounts of the skies.
The nepōhualtzintzin was divided into two main parts separated by a bar or intermediate cord. The left part held four beads per row, with unitary values (1, 2, 3, and 4) in the first row; the right side held three beads with values of 5, 10, and 15, respectively. The value of a bead in any upper row is found by multiplying the corresponding first-row value by 20 for each row above the first.
The device featured 13 rows of 7 beads, 91 in total. This was a basic number for this culture, with a close relation to natural phenomena, the underworld, and the cycles of the heavens. One nepōhualtzintzin (91) represented the number of days that a season of the year lasts, two nepōhualtzintzin (182) the number of days of the corn's cycle from sowing to harvest, three nepōhualtzintzin (273) the number of days of a baby's gestation, and four nepōhualtzintzin (364) completed a cycle and approximated one year. When translated into modern computer arithmetic, the nepōhualtzintzin spanned a range on the order of 10 to the 18th power in floating point, which precisely calculated large and small amounts, although rounding off was not allowed.
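One way to read the bead-value rule above is as a base-20 positional scheme. The following Python sketch is an interpretation of that description, not a reconstruction of any surviving instrument; the function name and argument conventions are invented.

def bead_value(side, position, row):
    # Value of a single nepohualtzintzin bead.
    #   side     -- "left" (unit beads worth 1 each) or "right" (beads worth 5 each)
    #   position -- 1-based position of the bead on its side
    #   row      -- 0-based row; each row above the first multiplies values by 20
    base = position if side == "left" else 5 * position
    return base * 20 ** row

print(bead_value("left", 3, 0))   # 3: third unit bead, first row
print(bead_value("right", 3, 1))  # 300: the 15-bead of the second row (15 * 20)

Each row can thus express 0 to 19 times 20 to the power of its row index (4 from the left side plus 15 from the right), so thirteen rows span numbers up to 20 to the 13th power minus one, consistent with the very large range the text attributes to the device.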
The rediscovery of the nepōhualtzintzin was due to the Mexican engineer David Esparza Hidalgo, who in his travels throughout Mexico found diverse engravings and paintings of this instrument and reconstructed several of them in gold, jade, encrustations of shell, and other materials. Very old nepōhualtzintzin are attributed to the Olmec culture, and some bracelets of Mayan origin have been found, as well as a diversity of forms and materials in other cultures.
Sanchez wrote in Arithmetic in Maya that another base 5, base 4 abacus had been found in the Yucatán Peninsula that also computed calendar data. This was a finger abacus, on one hand, 0, 1, 2, 3, and 4 were used; and on the other hand 0, 1, 2, and 3 were used. Note the use of zero at the beginning and end of the two cycles.
The quipu of the Incas was a system of colored knotted cords used to record numerical data, like advanced tally sticks – but not used to perform calculations. Calculations were carried out using a yupana (Quechua for "counting tool"), which was still in use after the conquest of Peru. The working principle of a yupana is unknown, but in 2001 the Italian mathematician De Pasquale proposed an explanation. By comparing the form of several yupanas, researchers found that calculations were based on the Fibonacci sequence 1, 1, 2, 3, 5, using powers of 10, 20, and 40 as place values for the different fields in the instrument. Using the Fibonacci sequence would keep the number of grains within any one field at a minimum.
Russia
The Russian abacus, the schoty (a plural form, from the Russian for "counting"), usually has a single slanted deck, with ten beads on each wire, except for one wire with four beads for quarter-ruble fractions; this 4-bead wire was introduced for quarter-kopeks, which were minted until 1916. The Russian abacus is used vertically, with each wire running horizontally. The wires are usually bowed upward in the center, to keep the beads pinned to either side. It is cleared when all the beads are moved to the right; during manipulation, beads are moved to the left. For easy viewing, the middle two beads on each wire (the 5th and 6th beads) are usually of a different color from the other eight. Likewise, the left bead of the thousands wire (and the millions wire, if present) may have a different color.
The Russian abacus was in use in shops and markets throughout the former Soviet Union, and its use was taught in most schools until the 1990s. Even the 1874 invention of the mechanical calculator, the Odhner arithmometer, did not replace it in Russia; according to Yakov Perelman, some businessmen attempting to import calculators into the Russian Empire were known to leave in despair after watching a skilled abacus operator. Likewise, the mass production of Felix arithmometers from 1924 did not significantly reduce abacus use in the Soviet Union. The Russian abacus began to lose popularity only after the mass production of domestic microcalculators in 1974.
The Russian abacus was brought to France around 1820 by mathematician Jean-Victor Poncelet, who had served in Napoleon's army and had been a prisoner of war in Russia. The abacus had fallen out of use in western Europe in the 16th century with the rise of decimal notation and algorismic methods. To Poncelet's French contemporaries, it was something new. Poncelet used it, not for any applied purpose, but as a teaching and demonstration aid. The Turks and the Armenian people used abacuses similar to the Russian schoty. It was named a coulba by the Turks and a choreb by the Armenians.
School abacus
Around the world, abacuses have been used in pre-schools and elementary schools as an aid in teaching the numeral system and arithmetic.
In Western countries, a bead frame similar to the Russian abacus but with straight wires and a vertical frame is common (see image).
The wireframe may be used either with positional notation like other abacuses (thus the 10-wire version may represent numbers up to 9,999,999,999), or each bead may represent one unit (e.g. 74 can be represented by shifting all beads on 7 wires and 4 beads on the 8th wire, so numbers up to 100 may be represented). In the bead frame shown, the gap between the 5th and 6th wire, corresponding to the color change between the 5th and the 6th bead on each wire, suggests the latter use. In teaching, multiplication such as 6 times 7 may be represented by shifting 7 beads on each of 6 wires, as in the sketch below.
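The two reading conventions can be contrasted in a few lines of illustrative Python; the helper names here are invented for demonstration.

def positional_value(wires):
    # Each wire is one decimal digit, most significant wire first.
    value = 0
    for digit in wires:
        value = value * 10 + digit
    return value

def unit_value(wires):
    # Every shifted bead counts one unit, regardless of wire.
    return sum(wires)

print(positional_value([7, 4]))    # 74 on two wires, read positionally
print(unit_value([10] * 7 + [4]))  # 74 as 7 full wires plus 4 beads
print(unit_value([7] * 6))         # 42: "6 times 7" as 7 beads on each of 6 wires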
The red-and-white abacus is used in contemporary primary schools for a wide range of number-related lessons. The twenty bead version, referred to by its Dutch name rekenrek ("calculating frame"), is often used, either on a string of beads or on a rigid framework.
Feynman vs the abacus
Physicist Richard Feynman was noted for his facility in mathematical calculations. He wrote about an encounter in Brazil with a Japanese abacus expert, who challenged him to speed contests between Feynman's pen and paper and the abacus. The abacus was much faster for addition, somewhat faster for multiplication, but Feynman was faster at division. When the abacus was used for a really difficult challenge, i.e. cube roots, Feynman won easily. However, the number chosen at random was close to a number Feynman happened to know was an exact cube, allowing him to use approximate methods.
Neurological analysis
Learning how to calculate with the abacus may improve capacity for mental calculation. Abacus-based mental calculation (AMC), which was derived from the abacus, is the act of performing calculations, including addition, subtraction, multiplication, and division, in the mind by manipulating an imagined abacus. It is a high-level cognitive skill that runs calculations with an effective algorithm. People doing long-term AMC training show higher numerical memory capacity and experience more effectively connected neural pathways. They are able to retrieve memory to deal with complex processes. AMC involves both visuospatial and visuomotor processing that generate the visual abacus and move the imaginary beads. Since it only requires that the final position of beads be remembered, it takes less memory and less computation time.
Binary abacus
The binary abacus is used to explain how computers manipulate numbers. The abacus shows how numbers, letters, and signs can be stored in a binary system on a computer, or via ASCII. The device consists of a series of beads on parallel wires arranged in three separate rows. The beads represent a switch on the computer in either an "on" or "off" position.
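As an illustration of the binary idea, here is a minimal Python sketch; the mapping of beads to bits is an assumption for demonstration, not a description of any specific teaching device. A character's ASCII code maps directly to a row of on/off bead positions.

def char_to_beads(ch, width=8):
    # Map a character's ASCII code to bead positions,
    # most significant bit first: 1 = bead "on", 0 = bead "off".
    code = ord(ch)
    return [(code >> i) & 1 for i in range(width - 1, -1, -1)]

print(char_to_beads("A"))  # 'A' is 65 = 0b01000001 -> [0, 1, 0, 0, 0, 0, 0, 1]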
Visually impaired users
An adapted abacus, invented by Tim Cranmer and called a Cranmer abacus, is commonly used by visually impaired users. A piece of soft fabric or rubber is placed behind the beads, keeping them in place while the users manipulate them. The device is then used to perform the mathematical functions of multiplication, division, addition, subtraction, square root, and cube root.
Although blind students have benefited from talking calculators, the abacus is often taught to these students in early grades. Blind students can also complete mathematical assignments using a braille-writer and Nemeth code (a type of braille code for mathematics) but large multiplication and long division problems are tedious. The abacus gives these students a tool to compute mathematical problems that equals the speed and mathematical knowledge required by their sighted peers using pencil and paper. Many blind people find this number machine a useful tool throughout life.
See also
Chinese Zhusuan
Chisanbop
Logical abacus
Mental abacus
Napier's bones
Sand table
Slide rule
Soroban
Suanpan
Notes
Footnotes
References
Further reading
External links
Tutorials
Min Multimedia
History
Curiosities
Abacus in Various Number Systems at cut-the-knot
Java applet of Chinese, Japanese and Russian abaci
An atomic-scale abacus
Examples of Abaci
Aztex Abacus
Indian Abacus
|
https://en.wikipedia.org/wiki/Bitumen
|
Bitumen is an immensely viscous constituent of petroleum. Depending on its exact composition it can be a sticky, black liquid or an apparently solid mass that behaves as a liquid over very large time scales. In the U.S., the material is commonly referred to as asphalt. Whether found in natural deposits or refined from petroleum, the substance is classed as a pitch. Prior to the 20th century the term asphaltum was in general use. The word derives from the ancient Greek ἄσφαλτος (ásphaltos), which referred to natural bitumen or pitch. The largest natural deposit of bitumen in the world, estimated to contain 10 million tons, is the Pitch Lake of southwest Trinidad.
About 70% of annual bitumen production is destined for road construction, its primary use. In this application bitumen is used to bind aggregate particles like gravel, forming a substance referred to as asphalt concrete, which is colloquially termed asphalt. Its other main uses lie in bituminous waterproofing products, such as roofing felt and roof sealant.
In material sciences and engineering the terms "asphalt" and "bitumen" are often used interchangeably and refer both to natural and manufactured forms of the substance, although there is regional variation as to which term is most common. Worldwide, geologists tend to favor the term "bitumen" for the naturally occurring material. For the manufactured material, which is a refined residue from the distillation process of selected crude oils, "bitumen" is the prevalent term in much of the world; however, in American English, "asphalt" is more commonly used. To help avoid confusion, the phrases "liquid asphalt", "asphalt binder", or "asphalt cement" are used in the U.S. Colloquially, various forms of asphalt are sometimes referred to as "tar", as in the name of the La Brea Tar Pits.
Naturally occurring bitumen is sometimes specified by the term "crude bitumen"; its viscosity is similar to that of cold molasses. The material obtained from the fractional distillation of crude oil is sometimes referred to as "refined bitumen". The Canadian province of Alberta has most of the world's reserves of natural bitumen in the Athabasca oil sands, which cover an area larger than England.
Terminology
Etymology
The Latin word traces to the Proto-Indo-European root *gʷet- ("pitch").
The expression "bitumen" has been traced to Sanskrit, where we find the words jatu, meaning "pitch", and jatu-krit, meaning "pitch creating" or "pitch producing" (referring to coniferous or resinous trees). The Latin equivalent is claimed by some to be originally gwitu-men (pertaining to pitch), and by others pixtumens (exuding or bubbling pitch), which was subsequently shortened to bitumen, thence passing via French into English. From the same root are derived the Anglo-Saxon word cwidu (mastic), the German word Kitt (cement or mastic) and the Old Norse word kvada.
The word "asphalt" is claimed to have been derived from the Akkadian term asphaltu or sphallo, meaning "to split". It was later adopted by the Homeric Greeks as the adjective ἀσφαλής, signifying "firm", "stable", "secure", with the corresponding verb ἀσφαλίζω, meaning "to make firm or stable", "to secure".
The word "asphalt" entered late Middle English from French asphalte, based on Late Latin asphalton, asphaltum, the latinisation of the Greek ásphaltos, ásphalton, a word meaning "asphalt/bitumen/pitch", which perhaps derives from the alpha privative ("not, without") and sphallein, "to cause to fall, baffle, (in passive) err, (in passive) be balked of".
The first use of asphalt by the ancients was as a cement to secure or join various objects, and it thus seems likely that the name itself was expressive of this application. Specifically, Herodotus mentioned that bitumen was brought to Babylon to build its gigantic fortification wall.
From the Greek, the word passed into late Latin, and thence into French (asphalte) and English ("asphaltum" and "asphalt"). In French, the term asphalte is used for naturally occurring asphalt-soaked limestone deposits, and for specialised manufactured products with fewer voids or greater bitumen content than the "asphaltic concrete" used to pave roads.
Modern terminology
Bitumen mixed with clay was usually called "asphaltum", but the term is less commonly used today.
In American English, "asphalt" is equivalent to the British "bitumen". However, "asphalt" is also commonly used as a shortened form of "asphalt concrete" (therefore equivalent to the British "asphalt" or "tarmac").
In Canadian English, the word "bitumen" is used to refer to the vast Canadian deposits of extremely heavy crude oil, while "asphalt" is used for the oil refinery product. Diluted bitumen (diluted with naphtha to make it flow in pipelines) is known as "dilbit" in the Canadian petroleum industry, while bitumen "upgraded" to synthetic crude oil is known as "syncrude", and syncrude blended with bitumen is called "synbit".
"Bitumen" is still the preferred geological term for naturally occurring deposits of the solid or semi-solid form of petroleum. "Bituminous rock" is a form of sandstone impregnated with bitumen. The oil sands of Alberta, Canada are a similar material.
Neither "asphalt" nor "bitumen" should be confused with tar or coal tars. Tar is the thick liquid product of the dry distillation and pyrolysis of organic hydrocarbons primarily sourced from vegetation masses, whether fossilized as with coal, or freshly harvested. The majority of bitumen, on the other hand, was formed naturally when vast quantities of organic animal materials were deposited by water and buried hundreds of metres deep at the diagenetic point, where the disorganized fatty hydrocarbon molecules joined in long chains in the absence of oxygen. Bitumen occurs as a solid or highly viscous liquid. It may even be mixed in with coal deposits. Bitumen, and coal using the Bergius process, can be refined into petrols such as gasoline, and bitumen may be distilled into tar, not the other way around.
Composition
Normal composition
The components of bitumen include four main classes of compounds:
Naphthene aromatics (naphthalene), consisting of partially hydrogenated polycyclic aromatic compounds
Polar aromatics, consisting of high molecular weight phenols and carboxylic acids produced by partial oxidation of the material
Saturated hydrocarbons; the percentage of saturated compounds in asphalt correlates with its softening point
Asphaltenes, consisting of high molecular weight phenols and heterocyclic compounds
Bitumen typically contains, elementally, 80% by weight of carbon, 10% hydrogen, and up to 6% sulfur; molecularly, it contains between 5 and 25% by weight of asphaltenes dispersed in 90% to 65% maltenes. Most natural bitumens also contain organosulfur compounds. Nickel and vanadium are found at <10 parts per million, as is typical of some petroleum. The substance is soluble in carbon disulfide. It is commonly modelled as a colloid, with asphaltenes as the dispersed phase and maltenes as the continuous phase. "It is almost impossible to separate and identify all the different molecules of bitumen, because the number of molecules with different chemical structure is extremely large".
Asphalt may be confused with coal tar, which is a visually similar black, thermoplastic material produced by the destructive distillation of coal. During the early and mid-20th century, when town gas was produced, coal tar was a readily available byproduct and extensively used as the binder for road aggregates. The addition of coal tar to macadam roads led to the word "tarmac", which is now used in common parlance to refer to road-making materials. However, since the 1970s, when natural gas succeeded town gas, bitumen has completely overtaken the use of coal tar in these applications. Other examples of this confusion include the La Brea Tar Pits and the Canadian oil sands, both of which actually contain natural bitumen rather than tar. "Pitch" is another term sometimes informally used to refer to asphalt, as in Pitch Lake.
Additives, mixtures and contaminants
For economic and other reasons, bitumen is sometimes sold combined with other materials, often without being labeled as anything other than simply "bitumen".
Of particular note is the use of re-refined engine oil bottoms ("REOB" or "REOBs") – the residue of recycled automotive engine oil collected from the bottoms of re-refining vacuum distillation towers – in the manufacture of asphalt. REOB contains various elements and compounds found in recycled engine oil: additives to the original oil and materials accumulating from its circulation in the engine (typically iron and copper). Some research has indicated a correlation between this adulteration of bitumen and poorer-performing pavement.
Occurrence
The majority of bitumen used commercially is obtained from petroleum. Nonetheless, large amounts of bitumen occur in concentrated form in nature. Naturally occurring deposits of bitumen were formed from the remains of ancient, microscopic algae (diatoms) and other once-living things, some dating to the Carboniferous period, when giant swamp forests dominated many parts of the Earth. The remains were deposited in the mud on the bottom of the ocean or lake where the organisms lived. Under the heat (above 50 °C) and pressure of burial deep in the earth, the remains were transformed into materials such as bitumen, kerogen, or petroleum.
Natural deposits of bitumen include lakes such as the Pitch Lake in Trinidad and Tobago and Lake Bermudez in Venezuela. Natural seeps occur in the La Brea Tar Pits and the McKittrick Tar Pits in California, as well as in the Dead Sea.
Bitumen also occurs in unconsolidated sandstones known as "oil sands" in Alberta, Canada, and the similar "tar sands" in Utah, US.
The Canadian province of Alberta has most of the world's reserves, in three huge deposits covering an area larger than England or New York state. These bituminous sands contain commercially established oil reserves giving Canada the third-largest oil reserves in the world. Although historically used without refining to pave roads, nearly all of the output is now used as raw material for oil refineries in Canada and the United States.
The world's largest deposit of natural bitumen, known as the Athabasca oil sands, is located in the McMurray Formation of Northern Alberta. This formation is from the early Cretaceous, and is composed of numerous lenses of oil-bearing sand with up to 20% oil. Isotopic studies show the oil deposits to be about 110 million years old. Two smaller but still very large formations occur in the Peace River oil sands and the Cold Lake oil sands, to the west and southeast of the Athabasca oil sands, respectively. Of the Alberta deposits, only parts of the Athabasca oil sands are shallow enough to be suitable for surface mining. The other 80% has to be produced by oil wells using enhanced oil recovery techniques like steam-assisted gravity drainage.
Much smaller heavy oil or bitumen deposits also occur in the Uinta Basin in Utah, US. The Tar Sand Triangle deposit, for example, is roughly 6% bitumen.
Bitumen may occur in hydrothermal veins. An example of this is within the Uinta Basin of Utah, in the US, where there is a swarm of laterally and vertically extensive veins composed of a solid hydrocarbon termed Gilsonite. These veins formed by the polymerization and solidification of hydrocarbons that were mobilized from the deeper oil shales of the Green River Formation during burial and diagenesis.
Bitumen is similar to the organic matter in carbonaceous meteorites. However, detailed studies have shown these materials to be distinct. The vast Alberta bitumen resources are considered to have started out as living material from marine plants and animals, mainly algae, that died millions of years ago when an ancient ocean covered Alberta. They were covered by mud, buried deeply over time, and gently cooked into oil by geothermal heat. Due to pressure from the rising of the Rocky Mountains in southwestern Alberta 80 to 55 million years ago, the oil was driven northeast hundreds of kilometres and trapped in underground sand deposits left behind by ancient river beds and ocean beaches, thus forming the oil sands.
History
Ancient times
The use of natural bitumen for waterproofing and as an adhesive dates at least to the fifth millennium BC, with a crop storage basket discovered at Mehrgarh, of the Indus Valley civilization, lined with it. By the 3rd millennium BC refined rock asphalt was in use in the region, and was used to waterproof the Great Bath in Mohenjo-daro.
In the ancient Near East, the Sumerians used natural bitumen deposits for mortar between bricks and stones, to cement parts of carvings, such as eyes, into place, for ship caulking, and for waterproofing. The Greek historian Herodotus said hot bitumen was used as mortar in the walls of Babylon.
The Euphrates Tunnel beneath the river Euphrates at Babylon, in the time of Queen Semiramis, was reportedly constructed of burnt bricks covered with bitumen as a waterproofing agent.
Bitumen was used by ancient Egyptians to embalm mummies. The Persian word for asphalt is moom, which is related to the English word mummy. The Egyptians' primary source of bitumen was the Dead Sea, which the Romans knew as Palus Asphaltites (Asphalt Lake).
In approximately 40 AD, Dioscorides described the Dead Sea material as Judaicum bitumen, and noted other places in the region where it could be found. The Sidon bitumen is thought to refer to material found at Hasbeya in Lebanon. Pliny also refers to bitumen being found in Epirus. Bitumen was a valuable strategic resource. It was the object of the first known battle for a hydrocarbon deposit – between the Seleucids and the Nabateans in 312 BC.
In the ancient Far East, natural bitumen was slowly boiled to get rid of the higher fractions, leaving a thermoplastic material of higher molecular weight that, when layered on objects, became hard upon cooling. This was used to cover objects that needed waterproofing, such as scabbards and other items. Statuettes of household deities were also cast with this type of material in Japan, and probably also in China.
In North America, archaeological recovery has indicated that bitumen was sometimes used to adhere stone projectile points to wooden shafts. In Canada, aboriginal people used bitumen seeping out of the banks of the Athabasca and other rivers to waterproof birch bark canoes, and also heated it in smudge pots to ward off mosquitoes in the summer.
Continental Europe
In 1553, Pierre Belon described in his work Observations that pissasphalto, a mixture of pitch and bitumen, was used in the Republic of Ragusa (now Dubrovnik, Croatia) for tarring of ships.
An 1838 edition of Mechanics Magazine cites an early use of asphalt in France. A pamphlet dated 1621, by "a certain Monsieur d'Eyrinys, states that he had discovered the existence (of asphaltum) in large quantities in the vicinity of Neufchatel", and that he proposed to use it in a variety of ways – "principally in the construction of air-proof granaries, and in protecting, by means of the arches, the water-courses in the city of Paris from the intrusion of dirt and filth", which at that time made the water unusable. "He expatiates also on the excellence of this material for forming level and durable terraces" in palaces, "the notion of forming such terraces in the streets not one likely to cross the brain of a Parisian of that generation".
But the substance was generally neglected in France until the revolution of 1830. In the 1830s there was a surge of interest, and asphalt became widely used "for pavements, flat roofs, and the lining of cisterns, and in England, some use had been made of it for similar purposes". Its rise in Europe was "a sudden phenomenon", after natural deposits were found "in France at Osbann (Bas-Rhin), the Parc (Ain) and the Puy-de-la-Poix (Puy-de-Dôme)", although it could also be made artificially. One of the earliest uses in France was the laying of about 24,000 square yards of Seyssel asphalt at the Place de la Concorde in 1835.
United Kingdom
Among the earlier uses of bitumen in the United Kingdom was for etching. William Salmon's Polygraphice (1673) provides a recipe for varnish used in etching, consisting of three ounces of virgin wax, two ounces of mastic, and one ounce of asphaltum. By the fifth edition in 1685, he had included more asphaltum recipes from other sources.
The first British patent for the use of asphalt was "Cassell's patent asphalte or bitumen" in 1834. Then on 25 November 1837, Richard Tappin Claridge patented the use of Seyssel asphalt (patent #7849), for use in asphalte pavement, having seen it employed in France and Belgium when visiting with Frederick Walter Simms, who worked with him on the introduction of asphalt to Britain. Dr T. Lamb Phipson writes that his father, Samuel Ryland Phipson, a friend of Claridge, was also "instrumental in introducing the asphalte pavement (in 1836)".
Claridge obtained a patent in Scotland on 27 March 1838, and obtained a patent in Ireland on 23 April 1838. In 1851, extensions for the 1837 patent and for both 1838 patents were sought by the trustees of a company previously formed by Claridge. Claridge's Patent Asphalte Company, formed in 1838 for the purpose of introducing to Britain "Asphalte in its natural state from the mine at Pyrimont Seysell in France", "laid one of the first asphalt pavements in Whitehall". Trials were made of the pavement in 1838 on the footway in Whitehall, the stable at Knightsbridge Barracks, "and subsequently on the space at the bottom of the steps leading from Waterloo Place to St. James Park". "The formation in 1838 of Claridge's Patent Asphalte Company (with a distinguished list of aristocratic patrons, and Marc and Isambard Brunel as, respectively, a trustee and consulting engineer), gave an enormous impetus to the development of a British asphalt industry". "By the end of 1838, at least two other companies, Robinson's and the Bastenne company, were in production", with asphalt being laid as paving at Brighton, Herne Bay, Canterbury, Kensington, the Strand, and a large floor area in Bunhill-row, while meantime Claridge's Whitehall paving "continue(d) in good order". The Bonnington Chemical Works manufactured asphalt using coal tar and by 1839 had installed it in Bonnington.
In 1838, there was a flurry of entrepreneurial activity involving bitumen, which had uses beyond paving. For example, bitumen could also be used for flooring, damp proofing in buildings, and for waterproofing of various types of pools and baths, both of which were also proliferating in the 19th century. One of the earliest surviving examples of its use can be seen at Highgate Cemetery where it was used in 1839 to seal the roof of the terrace catacombs. On the London stockmarket, there were various claims as to the exclusivity of bitumen quality from France, Germany and England. And numerous patents were granted in France, with similar numbers of patent applications being denied in England due to their similarity to each other. In England, "Claridge's was the type most used in the 1840s and 50s".
In 1914, Claridge's Company entered into a joint venture to produce tar-bound macadam, with materials manufactured through a subsidiary company called Clarmac Roads Ltd. Two products resulted, namely Clarmac, and Clarphalte, with the former being manufactured by Clarmac Roads and the latter by Claridge's Patent Asphalte Co., although Clarmac was more widely used. However, the First World War ruined the Clarmac Company, which entered into liquidation in 1915. The failure of Clarmac Roads Ltd had a flow-on effect to Claridge's Company, which was itself compulsorily wound up, ceasing operations in 1917, having invested a substantial amount of funds into the new venture, both at the outset and in a subsequent attempt to save the Clarmac Company.
Bitumen was thought in 19th century Britain to contain chemicals with medicinal properties. Extracts from bitumen were used to treat catarrh and some forms of asthma and as a remedy against worms, especially the tapeworm.
United States
The first use of bitumen in the New World was by aboriginal peoples. On the west coast, as early as the 13th century, the Tongva, Luiseño and Chumash peoples collected the naturally occurring bitumen that seeped to the surface above underlying petroleum deposits. All three groups used the substance as an adhesive. It is found on many different artifacts of tools and ceremonial items. For example, it was used on rattles to adhere gourds or turtle shells to rattle handles. It was also used in decorations. Small round shell beads were often set in asphaltum to provide decorations. It was used as a sealant on baskets to make them watertight for carrying water, possibly poisoning those who drank the water. Asphalt was used also to seal the planks on ocean-going canoes.
Asphalt was first used to pave streets in the 1870s. At first naturally occurring "bituminous rock" was used, such as at Ritchie Mines in Macfarlan in Ritchie County, West Virginia from 1852 to 1873. In 1876, asphalt-based paving was used to pave Pennsylvania Avenue in Washington DC, in time for the celebration of the national centennial.
In the horse-drawn era, US streets were mostly unpaved and covered with dirt or gravel. Especially where mud or trenching often made streets difficult to pass, pavements were sometimes made of diverse materials including wooden planks, cobble stones or other stone blocks, or bricks. Unpaved roads produced uneven wear and hazards for pedestrians. In the late 19th century with the rise of the popular bicycle, bicycle clubs were important in pushing for more general pavement of streets. Advocacy for pavement increased in the early 20th century with the rise of the automobile. Asphalt gradually became an ever more common method of paving. St. Charles Avenue in New Orleans was paved its whole length with asphalt by 1889.
In 1900, Manhattan alone had 130,000 horses, pulling streetcars, wagons, and carriages, and leaving their waste behind. They were not fast, and pedestrians could dodge and scramble their way across the crowded streets. Small towns continued to rely on dirt and gravel, but larger cities wanted much better streets. They looked to wood or granite blocks by the 1850s. In 1890, a third of Chicago's 2000 miles of streets were paved, chiefly with wooden blocks, which gave better traction than mud. Brick surfacing was a good compromise, but even better was asphalt paving, which was easy to install and to cut through to get at sewers. With London and Paris serving as models, Washington laid 400,000 square yards of asphalt paving by 1882; it became the model for Buffalo, Philadelphia and elsewhere. By the end of the century, American cities boasted 30 million square yards of asphalt paving, well ahead of brick. The streets became faster and more dangerous so electric traffic lights were installed. Electric trolleys (at 12 miles per hour) became the main transportation service for middle class shoppers and office workers until they bought automobiles after 1945 and commuted from more distant suburbs in privacy and comfort on asphalt highways.
Canada
Canada has the world's largest deposit of natural bitumen in the Athabasca oil sands, and Canadian First Nations along the Athabasca River had long used it to waterproof their canoes. In 1719, a Cree named Wa-Pa-Su brought a sample for trade to Henry Kelsey of the Hudson's Bay Company, who was the first recorded European to see it. However, it wasn't until 1787 that fur trader and explorer Alexander MacKenzie saw the Athabasca oil sands and said, "At about 24 miles from the fork (of the Athabasca and Clearwater Rivers) are some bituminous fountains into which a pole of 20 feet long may be inserted without the least resistance."
The value of the deposit was obvious from the start, but the means of extracting the bitumen was not. The nearest town, Fort McMurray, Alberta, was a small fur trading post, other markets were far away, and transportation costs were too high to ship the raw bituminous sand for paving. In 1915, Sidney Ells of the Federal Mines Branch experimented with separation techniques and used the product to pave 600 feet of road in Edmonton, Alberta. Other roads in Alberta were paved with material extracted from oil sands, but it was generally not economic. During the 1920s Dr. Karl A. Clark of the Alberta Research Council patented a hot-water oil separation process, and entrepreneur Robert C. Fitzsimmons built the Bitumount oil separation plant, which produced bitumen using Dr. Clark's method between 1925 and 1958. Most of the bitumen was used for waterproofing roofs, but other uses included fuels, lubrication oils, printers' ink, medicines, rust- and acid-proof paints, fireproof roofing, street paving, patent leather, and fence-post preservatives. Eventually Fitzsimmons ran out of money and the plant was taken over by the Alberta government. Today the Bitumount plant is a Provincial Historic Site.
Photography and art
Bitumen was used in early photographic technology. In 1826, or 1827, it was used by French scientist Joseph Nicéphore Niépce to make the oldest surviving photograph from nature. The bitumen was thinly coated onto a pewter plate which was then exposed in a camera. Exposure to light hardened the bitumen and made it insoluble, so that when it was subsequently rinsed with a solvent only the sufficiently light-struck areas remained. Many hours of exposure in the camera were required, making bitumen impractical for ordinary photography, but from the 1850s to the 1920s it was in common use as a photoresist in the production of printing plates for various photomechanical printing processes.
Bitumen was the nemesis of many artists during the 19th century. Although widely used for a time, it ultimately proved unstable for use in oil painting, especially when mixed with the most common diluents, such as linseed oil, varnish and turpentine. Unless thoroughly diluted, bitumen never fully solidifies and will in time corrupt the other pigments with which it comes into contact. The use of bitumen as a glaze to set in shadow or mixed with other colors to render a darker tone resulted in the eventual deterioration of many paintings, for instance those of Delacroix. Perhaps the most famous example of the destructiveness of bitumen is Théodore Géricault's Raft of the Medusa (1818–1819), where his use of bitumen caused the brilliant colors to degenerate into dark greens and blacks and the paint and canvas to buckle.
Modern use
Global use
The vast majority of refined bitumen is used in construction: primarily as a constituent of products used in paving and roofing applications. According to the requirements of the end use, bitumen is produced to specification. This is achieved either by refining or blending. It is estimated that the current world use of bitumen is approximately 102 million tonnes per year. Approximately 85% of all the bitumen produced is used as the binder in asphalt concrete for roads. It is also used in other paved areas such as airport runways, car parks and footways. Typically, the production of asphalt concrete involves mixing fine and coarse aggregates such as sand, gravel and crushed rock with asphalt, which acts as the binding agent. Other materials, such as recycled polymers (e.g., rubber tyres), may be added to the bitumen to modify its properties according to the application for which the bitumen is ultimately intended.
A further 10% of global bitumen production is used in roofing applications, where its waterproofing qualities are invaluable.
The remaining 5% of bitumen is used mainly for sealing and insulating purposes in a variety of building materials, such as pipe coatings, carpet tile backing and paint. Bitumen is applied in the construction and maintenance of many structures, systems, and components, such as the following:
Highways
Airport runways
Footways and pedestrian ways
Car parks
Racetracks
Tennis courts
Roofing
Damp proofing
Dams
Reservoir and pool linings
Soundproofing
Pipe coatings
Cable coatings
Paints
Building waterproofing
Tile underlying waterproofing
Newspaper ink production
and many other applications
Rolled asphalt concrete
The largest use of bitumen is for making asphalt concrete for road surfaces; this accounts for approximately 85% of the bitumen consumed in the United States. There are about 4,000 asphalt concrete mixing plants in the US, and a similar number in Europe.
Asphalt concrete pavement mixes are typically composed of 5% bitumen (known as asphalt cement in the US) and 95% aggregates (stone, sand, and gravel). Due to its highly viscous nature, bitumen must be heated so it can be mixed with the aggregates at the asphalt mixing facility. The temperature required varies depending upon characteristics of the bitumen and the aggregates, but warm-mix asphalt technologies allow producers to reduce the temperature required.
The weight of an asphalt pavement depends upon the aggregate type, the bitumen, and the air void content. An average example in the United States is about 112 pounds per square yard, per inch of pavement thickness.
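As a rough illustration of this rule of thumb, the sketch below estimates the tonnage for a hypothetical paving job. The job dimensions are assumptions chosen purely for the example, and actual unit weights vary with the mix design.

```python
# Estimate asphalt pavement tonnage from the rule-of-thumb unit weight
# quoted above: about 112 lb per square yard per inch of thickness.
# The job dimensions below are invented for illustration only.

LB_PER_SQYD_PER_INCH = 112  # typical US figure; varies with the mix

def pavement_tons(length_ft: float, width_ft: float, thickness_in: float) -> float:
    """Approximate pavement weight in US short tons (2,000 lb)."""
    area_sqyd = (length_ft * width_ft) / 9.0  # 9 square feet per square yard
    pounds = area_sqyd * thickness_in * LB_PER_SQYD_PER_INCH
    return pounds / 2000.0

# Example: one lane-mile (5,280 ft long, 12 ft wide) paved 3 inches thick.
print(f"{pavement_tons(5280, 12, 3):,.0f} tons")  # about 1,183 tons
```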
When maintenance is performed on asphalt pavements, such as milling to remove a worn or damaged surface, the removed material can be returned to a facility for processing into new pavement mixtures. The bitumen in the removed material can be reactivated and put back to use in new pavement mixes. With some 95% of paved roads being constructed of or surfaced with asphalt, a substantial amount of asphalt pavement material is reclaimed each year. According to industry surveys conducted annually by the Federal Highway Administration and the National Asphalt Pavement Association, more than 99% of the bitumen removed each year from road surfaces during widening and resurfacing projects is reused as part of new pavements, roadbeds, shoulders and embankments or stockpiled for future use.
Asphalt concrete paving is widely used in airports around the world. Its sturdiness and the speed with which it can be repaired make it well suited to runways.
Mastic asphalt
Mastic asphalt is a type of asphalt that differs from dense graded asphalt (asphalt concrete) in that it has a higher bitumen (binder) content, usually around 7–10% of the whole aggregate mix, as opposed to rolled asphalt concrete, which has only around 5% asphalt. This thermoplastic substance is widely used in the building industry for waterproofing flat roofs and tanking underground. Mastic asphalt is heated to a temperature of and is spread in layers to form an impervious barrier about thick.
Bitumen emulsion
Bitumen emulsions are colloidal mixtures of bitumen and water. Because of the different surface tensions of the two liquids, stable emulsions cannot be created simply by mixing, so various emulsifiers and stabilizers are added. Emulsifiers are amphiphilic molecules that differ in the charge of their polar head group. They reduce the surface tension of the emulsion and thus prevent bitumen particles from fusing. The charge of the emulsifier defines the type of emulsion: anionic (negatively charged) or cationic (positively charged). The concentration of the emulsifier is a critical parameter affecting the size of the bitumen particles: higher concentrations lead to smaller particles. Emulsifiers therefore have a great impact on the stability, viscosity, breaking strength, and adhesion of the bitumen emulsion. The size of bitumen particles is usually between 0.1 and 50 µm, with a main fraction between 1 µm and 10 µm. Laser diffraction techniques can be used to determine the particle size distribution quickly and easily. Cationic emulsifiers are primarily long-chain amines such as imidazolines, amido-amines, and diamines, which acquire a positive charge when an acid is added. Anionic emulsifiers are often fatty acids extracted from lignin, tall oil, or tree resin, saponified with bases such as NaOH, which creates a negative charge.
During storage, the bitumen particles in an emulsion sediment, agglomerate (flocculation), or fuse (coagulation), which makes bitumen emulsions inherently somewhat unstable. How fast this occurs depends on the formulation of the emulsion but also on storage conditions such as temperature and humidity. When an emulsified bitumen comes into contact with aggregates, the emulsifiers lose their effectiveness and the emulsion breaks down, forming an adhering bitumen film; this is referred to as "breaking". The bitumen particles coagulate and separate from the water, which evaporates, almost instantly creating a continuous bitumen film. Not all bitumen emulsions break at the same rate on contact with aggregates. This allows a classification into rapid-setting (RS), medium-setting (MS), and slow-setting (SS) emulsions, as well as application-specific optimization of the formulation for a wide field of applications. For example, slow-breaking emulsions ensure a longer processing time, which is particularly advantageous for fine aggregates.
Adhesion problems are reported for anionic emulsions in contact with quartz-rich aggregates, for which cationic emulsions, which achieve better adhesion, are substituted. The extensive range of bitumen emulsions is covered insufficiently by standardization. DIN EN 13808 for cationic bitumen emulsions has existed since July 2005; it describes a classification of bitumen emulsions based on letters and numbers, considering charge, viscosity, and the type of bitumen. The production process of bitumen emulsions is complex. Two methods are commonly used: the colloid mill method and the high internal phase ratio (HIPR) method. In the colloid mill method, a rotor moves at high speed within a stator while bitumen and a water-emulsifier mixture are added; the resulting shear forces generate bitumen particles between 5 µm and 10 µm coated with emulsifiers. The HIPR method is used to create smaller bitumen particles, monomodal and narrow particle size distributions, and very high bitumen concentrations. Here, a highly concentrated bitumen emulsion is produced first by moderate stirring and diluted afterward. In contrast to the colloid mill method, the aqueous phase is introduced into hot bitumen, enabling very high bitumen concentrations.
T The "High Internal Phase Ratio (HIPR)" method is used for creating smaller bitumen particles, monomodal, narrow particle size distributions, and very high bitumen concentrations. Here, a highly concentrated bitumen emulsion is produced first by moderate stirring and diluted afterward. In contrast to the "Colloid-Mill" method, the aqueous phase is introduced into hot bitumen, enabling very high bitumen concentrations (1).he "High Internal Phase Ratio (HIPR)" method is used for creating smaller bitumen particles, monomodal, narrow particle size distributions, and very high bitumen concentrations. Here, a highly concentrated bitumen emulsion is produced first by moderate stirring and diluted afterward. In contrast to the "Colloid-Mill" method, the aqueous phase is introduced into hot bitumen, enabling very high bitumen concentrations (1).
Bitumen emulsions are used in a wide variety of applications in road construction and building protection, primarily in cold recycling mixtures, adhesive coatings, and surface treatments. Due to their lower viscosity in comparison to hot bitumen, processing requires less energy and is associated with significantly less risk of fire and burns. Chipseal involves spraying the road surface with bitumen emulsion followed by a layer of crushed rock, gravel or crushed slag. Slurry seal is a mixture of bitumen emulsion and fine crushed aggregate that is spread on the surface of a road. Cold-mixed asphalt can also be made from bitumen emulsion to create pavements similar to hot-mixed asphalt, several inches in depth, and bitumen emulsions are also blended into recycled hot-mix asphalt to create low-cost pavements. Bitumen emulsion techniques are known to be useful for all classes of roads, and their use may also be possible in the following applications:
Asphalts for heavily trafficked roads, based on the use of polymer-modified emulsions
Warm emulsion-based mixtures, to improve both their maturation time and mechanical properties
Half-warm technology, in which aggregates are heated up to 100 °C, producing mixtures with properties similar to those of hot asphalts
High-performance surface dressing
Synthetic crude oil
Synthetic crude oil, also known as syncrude, is the output from a bitumen upgrader facility used in connection with oil sand production in Canada. Bituminous sands are mined using enormous (100-ton capacity) power shovels and loaded into even larger (400-ton capacity) dump trucks for movement to an upgrading facility. The process used to extract the bitumen from the sand is a hot water process originally developed by Dr. Karl Clark of the University of Alberta during the 1920s. After extraction from the sand, the bitumen is fed into a bitumen upgrader which converts it into a light crude oil equivalent. This synthetic substance is fluid enough to be transferred through conventional oil pipelines and can be fed into conventional oil refineries without any further treatment. By 2015 Canadian bitumen upgraders were producing over per day of synthetic crude oil, of which 75% was exported to oil refineries in the United States.
In Alberta, five bitumen upgraders produce synthetic crude oil and a variety of other products: The Suncor Energy upgrader near Fort McMurray, Alberta produces synthetic crude oil plus diesel fuel; the Syncrude Canada, Canadian Natural Resources, and Nexen upgraders near Fort McMurray produce synthetic crude oil; and the Shell Scotford Upgrader near Edmonton produces synthetic crude oil plus an intermediate feedstock for the nearby Shell Oil Refinery. A sixth upgrader, under construction in 2015 near Redwater, Alberta, will upgrade half of its crude bitumen directly to diesel fuel, with the remainder of the output being sold as feedstock to nearby oil refineries and petrochemical plants.
Non-upgraded crude bitumen
Canadian bitumen does not differ substantially from oils such as Venezuelan extra-heavy and Mexican heavy oil in chemical composition, and the real difficulty is moving the extremely viscous bitumen through oil pipelines to the refinery. Many modern oil refineries are extremely sophisticated and can process non-upgraded bitumen directly into products such as gasoline, diesel fuel, and refined asphalt without any preprocessing. This is particularly common in areas such as the US Gulf coast, where refineries were designed to process Venezuelan and Mexican oil, and in areas such as the US Midwest where refineries were rebuilt to process heavy oil as domestic light oil production declined. Given the choice, such heavy oil refineries usually prefer to buy bitumen rather than synthetic oil because the cost is lower, and in some cases because they prefer to produce more diesel fuel and less gasoline. By 2015 Canadian production and exports of non-upgraded bitumen exceeded that of synthetic crude oil at over per day, of which about 65% was exported to the United States.
Because of the difficulty of moving crude bitumen through pipelines, non-upgraded bitumen is usually diluted with natural-gas condensate in a form called dilbit or with synthetic crude oil, called synbit. However, to meet international competition, much non-upgraded bitumen is now sold as a blend of multiple grades of bitumen, conventional crude oil, synthetic crude oil, and condensate in a standardized benchmark product such as Western Canadian Select. This sour, heavy crude oil blend is designed to have uniform refining characteristics to compete with internationally marketed heavy oils such as Mexican Mayan or Arabian Dubai Crude.
Radioactive waste encapsulation matrix
Bitumen was used starting in the 1960s as a hydrophobic matrix aiming to encapsulate radioactive waste such as medium-activity salts (mainly soluble sodium nitrate and sodium sulfate) produced by the reprocessing of spent nuclear fuels, or radioactive sludges from sedimentation ponds. Bituminised radioactive waste containing highly radiotoxic alpha-emitting transuranic elements from nuclear reprocessing plants has been produced at industrial scale in France, Belgium and Japan, but this type of waste conditioning has been abandoned because of operational safety issues (risks of fire, as occurred in a bituminisation plant at Tokai Works in Japan) and long-term stability problems related to its geological disposal in deep rock formations. One of the main problems is the swelling of bitumen exposed to radiation and to water. Bitumen swelling is first induced by radiation because of the presence of hydrogen gas bubbles generated by alpha and gamma radiolysis. A second mechanism is swelling of the matrix when the encapsulated hygroscopic salts exposed to water or moisture start to rehydrate and dissolve. The high concentration of salt in the pore solution inside the bituminised matrix is then responsible for osmotic effects inside the matrix: the water moves in the direction of the concentrated salts, the bitumen acting as a semi-permeable membrane, and this also causes the matrix to swell. The swelling pressure due to the osmotic effect under constant volume can be as high as 200 bar. If not properly managed, this high pressure can cause fractures in the near field of a disposal gallery of bituminised medium-level waste. Once the bituminised matrix has been altered by swelling, encapsulated radionuclides are easily leached by contact with ground water and released into the geosphere. The high ionic strength of the concentrated saline solution also favours the migration of radionuclides in clay host rocks. The presence of chemically reactive nitrate can also affect the redox conditions prevailing in the host rock by establishing oxidizing conditions, preventing the reduction of redox-sensitive radionuclides. Under their higher valences, radionuclides of elements such as selenium, technetium, uranium, neptunium and plutonium have a higher solubility and are also often present in water as non-retarded anions. This makes the disposal of medium-level bituminised waste very challenging.
Different types of bitumen have been used: blown bitumen (partly oxidized with air oxygen at high temperature after distillation, and harder) and direct distillation bitumen (softer). Blown bitumens like Mexphalte, with a high content of saturated hydrocarbons, are more easily biodegraded by microorganisms than direct distillation bitumen, with a low content of saturated hydrocarbons and a high content of aromatic hydrocarbons.
Concrete encapsulation of radwaste is presently considered a safer alternative by the nuclear industry and the waste management organisations.
Other uses
Roofing shingles and roll roofing account for most of the remaining bitumen consumption. Other uses include cattle sprays, fence-post treatments, and waterproofing for fabrics. Bitumen is used to make Japan black, a lacquer known especially for its use on iron and steel, and it is also used in paint and marker inks by some exterior paint supply companies to increase the weather resistance and permanence of the paint or ink, and to make the color darker. Bitumen is also used to seal some alkaline batteries during the manufacturing process.
Production
About 40,000,000 tons were produced in 1984. It is obtained as the "heavy" (i.e., difficult to distill) fraction. Material with a boiling point greater than around 500 °C is considered asphalt. Vacuum distillation separates it from the other components in crude oil (such as naphtha, gasoline and diesel). The resulting material is typically further treated to extract small but valuable amounts of lubricants and to adjust the properties of the material to suit applications. In a de-asphalting unit, the crude bitumen is treated with either propane or butane in a supercritical phase to extract the lighter molecules, which are then separated. Further processing is possible by "blowing" the product: namely reacting it with oxygen. This step makes the product harder and more viscous.
Bitumen is typically stored and transported at temperatures around . Sometimes diesel oil or kerosene is mixed in before shipping to retain liquidity; upon delivery, these lighter materials are separated out of the mixture. This mixture is often called "bitumen feedstock", or BFS. Some dump trucks route the hot engine exhaust through pipes in the dump body to keep the material warm. The backs of tippers carrying asphalt, as well as some handling equipment, are also commonly sprayed with a releasing agent before filling to aid release. Diesel oil is no longer used as a release agent due to environmental concerns.
Oil sands
Naturally occurring crude bitumen impregnated in sedimentary rock is the prime feedstock for petroleum production from "oil sands", currently under development in Alberta, Canada. Canada has most of the world's supply of natural bitumen, covering 140,000 square kilometres (an area larger than England), giving it the second-largest proven oil reserves in the world. The Athabasca oil sands are the largest bitumen deposit in Canada and the only one accessible to surface mining, although recent technological breakthroughs have resulted in deeper deposits becoming producible by in situ methods. Because of oil price increases after 2003, producing bitumen became highly profitable, but as a result of the decline after 2014 it again became uneconomic to build new plants. By 2014, Canadian crude bitumen production averaged about per day and was projected to rise to per day by 2020. The total amount of crude bitumen in Alberta that could be extracted is estimated to be about , which at a rate of would last about 200 years.
Alternatives and bioasphalt
Although uncompetitive economically, bitumen can be made from nonpetroleum-based renewable resources such as sugar, molasses and rice, corn and potato starches. Bitumen can also be made from waste material by fractional distillation of used motor oil, which is sometimes otherwise disposed of by burning or dumping into landfills. Use of motor oil may cause premature cracking in colder climates, resulting in roads that need to be repaved more frequently.
Nonpetroleum-based asphalt binders can be made light-colored. Lighter-colored roads absorb less heat from solar radiation, reducing their contribution to the urban heat island effect. Parking lots that use bitumen alternatives are called green parking lots.
Albanian deposits
Selenizza is a naturally occurring solid hydrocarbon bitumen found in native deposits in Selenice, in Albania, the only European asphalt mine still in use. The bitumen is found in the form of veins, filling cracks in a more or less horizontal direction. The bitumen content varies from 83% to 92% (soluble in carbon disulphide), with a penetration value near to zero and a softening point (ring and ball) around 120 °C. The insoluble matter, consisting mainly of silica ore, ranges from 8% to 17%.
Albanian bitumen extraction has a long history and was practiced in an organized way by the Romans. After centuries of silence, the first mentions of Albanian bitumen appeared only in 1868, when the Frenchman Coquand published the first geological description of the deposits of Albanian bitumen. In 1875, the exploitation rights were granted to the Ottoman government, and in 1912 they were transferred to the Italian company Simsa. From 1945 the mine was exploited by the Albanian government, and since 2001 it has been managed by a French company, which organized the mining process for the manufacture of natural bitumen on an industrial scale.
Today the mine is predominantly exploited in an open pit quarry but several of the many underground mines (deep and extending over several km) still remain viable. Selenizza is produced primarily in granular form, after melting the bitumen pieces selected in the mine.
Selenizza is mainly used as an additive in the road construction sector. It is mixed with traditional bitumen to improve both the viscoelastic properties and the resistance to ageing. It may be blended with the hot bitumen in tanks, but its granular form allows it to be fed in the mixer or in the recycling ring of normal asphalt plants. Other typical applications include the production of mastic asphalts for sidewalks, bridges, car-parks and urban roads as well as drilling fluid additives for the oil and gas industry. Selenizza is available in powder or in granular material of various particle sizes and is packaged in sacks or in thermal fusible polyethylene bags.
A life-cycle assessment study of the natural selenizza compared with petroleum bitumen has shown that the environmental impact of the selenizza is about half the impact of the road asphalt produced in oil refineries in terms of carbon dioxide emission.
Recycling
Bitumen is a commonly recycled material in the construction industry. The two most common recycled materials that contain bitumen are reclaimed asphalt pavement (RAP) and reclaimed asphalt shingles (RAS). RAP is recycled at a greater rate than any other material in the United States, and typically contains approximately 5–6% bitumen binder. Asphalt shingles typically contain 20–40% bitumen binder.
Bitumen naturally becomes stiffer over time due to oxidation, evaporation, exudation, and physical hardening. For this reason, recycled asphalt is typically combined with virgin asphalt, softening agents, and/or rejuvenating additives to restore its physical and chemical properties.
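As an illustration of the binder accounting this implies, the sketch below estimates how much virgin binder a RAP-bearing mix still requires. The mix size, RAP fraction and target binder content are assumptions chosen for the example, and the sketch simplifies by treating the aged RAP binder as fully reactivated.

```python
# Rough binder accounting for an asphalt mix containing reclaimed
# asphalt pavement (RAP). Percentages are by weight; the 5.5% RAP
# binder content follows the typical 5-6% range quoted above. The
# assumption that all aged binder is reactivated is optimistic: in
# practice only part of it blends with the virgin binder.

def virgin_binder_needed(mix_tons: float, rap_fraction: float,
                         rap_binder: float = 0.055,
                         target_binder: float = 0.05) -> float:
    """Virgin binder (tons) required to reach the target binder content."""
    binder_from_rap = mix_tons * rap_fraction * rap_binder
    total_binder = mix_tons * target_binder
    return max(total_binder - binder_from_rap, 0.0)

# Example: 1,000 tons of mix containing 25% RAP.
print(f"{virgin_binder_needed(1000, 0.25):.2f} tons of virgin binder")
# 50 - 13.75 = 36.25 tons
```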
For information on the processing and performance of RAP and RAS, see Asphalt Concrete.
For information on the different types of RAS and associated health and safety concerns, see Asphalt Shingles.
For information on in-place recycling methods used to restore pavements and roadways, see Road Surface.
Economics
Although bitumen typically makes up only 4 to 5 percent of the pavement mixture by weight, as the pavement's binder it is also the most expensive component of the road-paving material.
During bitumen's early use in modern paving, oil refiners gave it away. However, bitumen is a highly traded commodity today. Its prices increased substantially in the early 21st Century. A U.S. government report states:
"In 2002, asphalt sold for approximately $160 per ton. By the end of 2006, the cost had doubled to approximately $320 per ton, and then it almost doubled again in 2012 to approximately $610 per ton."
The report indicates that an "average" 1-mile (1.6-kilometer)-long, four-lane highway would include "300 tons of asphalt," which, "in 2002 would have cost around $48,000. By 2006 this would have increased to $96,000 and by 2012 to $183,000... an increase of about $135,000 for every mile of highway in just 10 years."
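The report's arithmetic is easy to reproduce: the per-mile figures are simply the quoted per-ton prices multiplied by the 300-ton estimate.

```python
# Reproduce the report's per-mile cost arithmetic: 300 tons of asphalt
# per "average" mile of four-lane highway, at the quoted prices per ton.

TONS_PER_MILE = 300
PRICE_PER_TON = {2002: 160, 2006: 320, 2012: 610}  # US dollars

for year, price in PRICE_PER_TON.items():
    print(f"{year}: ${TONS_PER_MILE * price:,} per mile")
# 2002: $48,000 per mile
# 2006: $96,000 per mile
# 2012: $183,000 per mile

increase = TONS_PER_MILE * (PRICE_PER_TON[2012] - PRICE_PER_TON[2002])
print(f"Increase over ten years: ${increase:,}")  # $135,000
```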
Health and safety
People can be exposed to bitumen in the workplace by breathing in fumes or skin absorption. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit of 5 mg/m3 over a 15-minute period.
Bitumen is basically an inert material that must be heated or diluted to a point where it becomes workable for the production of materials for paving, roofing, and other applications. In examining the potential health hazards associated with bitumen, the International Agency for Research on Cancer (IARC) determined that it is the application parameters, predominantly temperature, that affect occupational exposure and the potential bioavailable carcinogenic hazard/risk of the bitumen emissions. In particular, temperatures greater than 199 °C (390 °F) were shown to produce a greater exposure risk than when bitumen was heated to lower temperatures, such as those typically used in asphalt pavement mix production and placement. IARC has classified paving asphalt fumes as a Group 2B possible carcinogen, indicating inadequate evidence of carcinogenicity in humans.
In 2020, scientists reported that bitumen currently is a significant and largely overlooked source of air pollution in urban areas, especially during hot and sunny periods.
A bitumen-like substance found in the Himalayas and known as shilajit is sometimes used as an Ayurveda medicine, but is not in fact a tar, resin or bitumen.
See also
Asphalt plant
Asphaltene
Bioasphalt
Bitumen-based fuel
Bituminous rocks
Blacktop
Cariphalte
Duxit
Macadam
Oil sands
Pitch drop experiment
Pitch (resin)
Road surface
Tar
Tarmac
Sealcoat
Stamped asphalt
Notes
References
Sources
Barth, Edwin J. (1962), Asphalt: Science and Technology, Gordon and Breach. .
External links
Pavement Interactive – Asphalt
CSU Sacramento, The World Famous Asphalt Museum!
National Institute for Occupational Safety and Health – Asphalt Fumes
Scientific American, "Asphalt", 20-Aug-1881, pp. 121
Amorphous solids
Building materials
Chemical mixtures
IARC Group 2B carcinogens
Pavements
Petroleum products
Road construction materials
|
https://en.wikipedia.org/wiki/Alphabet
|
An alphabet is a standardized set of basic written graphemes (called letters) representing phonemes, units of sounds that distinguish words, of certain spoken languages. Not all writing systems represent language in this way; in a syllabary, each character represents a syllable, and logographic systems use characters to represent words, morphemes, or other semantic units.
The Egyptians created the first alphabet in a technical sense. Their short uniliteral signs were used to write pronunciation guides for logograms (characters that represent a word or morpheme) and, later on, to write foreign words. This usage continued up to the 5th century AD. The Proto-Sinaitic script, which developed into the Phoenician alphabet, is considered the first fully phonemic script and the first alphabet proper, and is the ancestor of most modern alphabets, abjads, and abugidas, including Arabic, Cyrillic, Greek, Hebrew, Latin, and possibly Brahmic. It was created by Semitic-speaking workers and slaves in the Sinai Peninsula in modern-day Egypt, who selected a small number of hieroglyphs commonly seen in their Egyptian surroundings to describe the sounds, as opposed to the semantic values, of the Canaanite languages.
Peter T. Daniels distinguishes an abugida, a set of graphemes that represent consonantal base letters that diacritics modify to represent vowels, like in Devanagari and other South Asian scripts, an abjad, in which letters predominantly or exclusively represent consonants such as the original Phoenician, Hebrew or Arabic, and an alphabet, a set of graphemes that represent both consonants and vowels. In this narrow sense of the word, the first true alphabet was the Greek alphabet, which was based on the earlier Phoenician abjad.
Alphabets are usually associated with a standard ordering of letters. This makes them useful for purposes of collation, which allows words to be sorted in a specific order, commonly known as alphabetical order. It also means that their letters can be used as an alternative method of "numbering" ordered items, in such contexts as numbered lists and number placements. Some languages also have names for letters that begin with the letter's own sound, a practice known as acrophony. It is present in some modern scripts, such as Greek, and in many Semitic scripts, such as Arabic, Hebrew, and Syriac, and was used in some ancient alphabets, such as Phoenician. Not all scripts use acrophony; the Latin alphabet instead names each letter by adding a vowel before or after its sound. Some scripts, such as Cyrillic, once had acrophonic names but later abandoned them for a system similar to Latin.
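The spreadsheet-style labelling of ordered items (A through Z, then AA, AB, and so on) follows a simple pattern sometimes called bijective base-26. A minimal sketch, offered purely as an illustration of letters used as numbers:

```python
# Convert a 1-based index to an alphabetic label, as used for
# spreadsheet columns and lettered list items: 1 -> A, 26 -> Z,
# 27 -> AA, 28 -> AB, and so on (bijective base-26).

def letter_label(n: int) -> str:
    label = ""
    while n > 0:
        n, rem = divmod(n - 1, 26)  # shift to 0-based before dividing
        label = chr(ord("A") + rem) + label
    return label

print([letter_label(i) for i in (1, 2, 26, 27, 28, 703)])
# ['A', 'B', 'Z', 'AA', 'AB', 'AAA']
```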
Etymology
The English word alphabet came into Middle English from the Late Latin word alphabetum, which in turn originated in the Greek ἀλφάβητος (alphábētos); it was made from the first two letters of the Greek alphabet, alpha (α) and beta (β). The names for the Greek letters, in turn, came from the first two letters of the Phoenician alphabet: aleph, the word for ox, and bet, the word for house.
History
Ancient Near Eastern alphabets
The Ancient Egyptian writing system had a set of some 24 hieroglyphs that are called uniliterals, glyphs that each represent one sound. These glyphs were used as pronunciation guides for logograms, to write grammatical inflections, and, later, to transcribe loan words and foreign names. The script was still used fairly widely in the 4th century CE, but after the pagan temples were closed down it was forgotten in the 5th century, until the discovery of the Rosetta Stone. There was also the cuneiform script, which was used to write several ancient languages but primarily Sumerian. The last known use of cuneiform was in 75 CE, after which the script fell out of use.
In the Middle Bronze Age, an apparently "alphabetic" system known as the Proto-Sinaitic script appeared in Egyptian turquoise mines in the Sinai peninsula dated to the 15th century BCE, apparently left by Canaanite workers. In 1999, John and Deborah Darnell, American Egyptologists, discovered an earlier version of this first alphabet in the Wadi el-Hol valley in Egypt. The script dated to 1800 BCE and shows evidence of having been adapted from specific forms of Egyptian hieroglyphs that could be dated to 2000 BCE, strongly suggesting that the first alphabet had developed about that time. The script was based on letter appearances and names, believed to be based on Egyptian hieroglyphs. This script had no characters representing vowels. Originally, it probably was a syllabary, a script where syllables are represented with characters, with symbols that were not needed being removed. The best-attested Bronze Age alphabet is Ugaritic, invented in Ugarit (Syria) before the 15th century BCE. This was an alphabetic cuneiform script with 30 signs, including three that indicate the following vowel. This script was not used after the destruction of Ugarit in 1178 BCE. The Proto-Sinaitic script eventually developed into the Phoenician alphabet, conventionally called "Proto-Canaanite" before 1050 BCE. The oldest text in Phoenician script is an inscription on the sarcophagus of King Ahiram from around 1000 BCE. This script is the parent script of all western alphabets. By the tenth century BCE, two other forms distinguished themselves: Canaanite and Aramaic. The Aramaic gave rise to the Hebrew script.
The South Arabian alphabet, a sister script to the Phoenician alphabet, is the script from which the Ge'ez alphabet descended; Ge'ez is an abugida, a writing system where consonant-vowel sequences are written as units, and was used around the Horn of Africa. Vowel-less alphabets are called abjads, currently exemplified in scripts such as Arabic, Hebrew, and Syriac. The omission of vowels was not always a satisfactory solution, given the need to preserve sacred texts, so "weak" consonants came to be used to indicate vowels. These letters have a dual function, since they can also be used as pure consonants.
The Proto-Sinaitic script and the Ugaritic script were the first scripts with a limited number of signs instead of using many different signs for words, in contrast to the other widely used writing systems at the time, Cuneiform, Egyptian hieroglyphs, and Linear B. The Phoenician script was probably the first phonemic script, and it contained only about two dozen distinct letters, making it a script simple enough for traders to learn. Another advantage of the Phoenician alphabet was that it could write different languages since it recorded words phonemically.
The Phoenician script was spread across the Mediterranean by the Phoenicians. The Greek alphabet was the first alphabet in which vowels have independent letter forms separate from those of consonants. The Greeks chose letters representing sounds that did not exist in Phoenician to represent vowels. The syllabic Linear B, a script that was used by the Mycenaean Greeks from the 16th century BCE, had 87 symbols, including five vowels. In its early years, there were many variants of the Greek alphabet, causing many different alphabets to evolve from it.
European alphabets
The Greek alphabet, in Euboean form, was carried over by Greek colonists to the Italian peninsula by around 600 BCE, giving rise to many different alphabets used to write the Italic languages, like the Etruscan alphabet. One of these became the Latin alphabet, which spread across Europe as the Romans expanded their republic. After the fall of the Western Roman Empire, the alphabet survived in intellectual and religious works. It came to be used for the descendant languages of Latin (the Romance languages) and most of the other languages of western and central Europe. Today, it is the most widely used script in the world.
The Etruscan alphabet remained nearly unchanged for several hundred years, evolving only as the Etruscan language itself changed. The letters used for non-existent phonemes were dropped. Afterwards, however, the alphabet went through many changes. Its final classical form contained 20 letters, four of them vowels (a, e, i, and u), six fewer than the earlier forms. The script in its classical form was used until the 1st century CE. The Etruscan language itself was not used in imperial Rome, but the script was used for religious texts.
Some adaptations of the Latin alphabet have ligatures, in which two letters are combined into one, such as æ in Danish and Icelandic and Ȣ in Algonquian; borrowings from other alphabets, such as the thorn þ in Old English and Icelandic, which came from the Futhark runes; and modified existing letters, such as the eth ð of Old English and Icelandic, which is a modified d. Other alphabets only use a subset of the Latin alphabet, such as Hawaiian and Italian, which use the letters j, k, x, y, and w only in foreign words.
Another notable script is Elder Futhark, believed to have evolved out of one of the Old Italic alphabets. Elder Futhark gave rise to other alphabets known collectively as the Runic alphabets. The Runic alphabets were used for Germanic languages from 100 CE to the late Middle Ages, being engraved on stone and jewelry, although inscriptions are also occasionally found on bone and wood. These alphabets have since been replaced with the Latin alphabet, except for decorative use, in which the runes remained in use until the 20th century.
The Old Hungarian script was the writing system of the Hungarians. It was in use during the entire history of Hungary, albeit not as an official writing system. From the 19th century, it once again became more and more popular.
The Glagolitic alphabet was the initial script of the liturgical language Old Church Slavonic and became, together with the Greek uncial script, the basis of the Cyrillic script. Cyrillic is one of the most widely used modern alphabetic scripts and is notable for its use in Slavic languages and also for other languages within the former Soviet Union. Cyrillic alphabets include Serbian, Macedonian, Bulgarian, Russian, Belarusian, and Ukrainian. The Glagolitic alphabet is believed to have been created by Saints Cyril and Methodius, while the Cyrillic alphabet was created by Clement of Ohrid, their disciple. They feature many letters that appear to have been borrowed from or influenced by Greek and Hebrew.
Asian alphabets
Beyond the logographic Chinese writing, many phonetic scripts exist in Asia. The Arabic alphabet, Hebrew alphabet, Syriac alphabet, and other abjads of the Middle East are developments of the Aramaic alphabet.
Most alphabetic scripts of India and Eastern Asia descend from the Brahmi script, believed to be a descendant of Aramaic.
Hangul
In Korea, Sejong the Great created the Hangul alphabet in 1443 CE. Hangul is a unique alphabet: it is a featural alphabet, where the design of many of the letters comes from a sound's place of articulation, like P looking like the widened mouth and L looking like the tongue pulled in. The creation of Hangul was planned by the government of the day, and it places individual letters in syllable clusters with equal dimensions, in the same way as Chinese characters. This change allows for mixed-script writing, where one syllable always takes up one type space no matter how many letters get stacked into building that one sound-block.
Zhuyin
Zhuyin, sometimes referred to as Bopomofo, is a semi-syllabary used to transcribe Mandarin phonetically in the Republic of China. After the later establishment of the People's Republic of China and its adoption of Hanyu Pinyin, the use of Zhuyin today is limited, but it is still widely used in Taiwan. Zhuyin developed from a form of Chinese shorthand based on Chinese characters in the early 1900s and has elements of both an alphabet and a syllabary. Like an alphabet, the phonemes of syllable initials are represented by individual symbols, but like a syllabary, the phonemes of the syllable finals are not; each possible final (excluding the medial glide) has its own character, an example being luan written as ㄌㄨㄢ (l-u-an), where the last symbol ㄢ stands for the entire final -an. While Zhuyin is not a mainstream writing system, it is still often used in ways similar to a romanization system, for aiding pronunciation and as an input method for Chinese characters on computers and cellphones.
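The initial-medial-final decomposition can be made concrete with a toy mapping. The sketch below covers only the handful of symbols needed for the luan example above; a real transcription table spans all Mandarin initials, medials, and finals, so this subset is purely illustrative.

```python
# Toy Zhuyin assembly for the article's example: luan -> ㄌㄨㄢ.
# Only a few symbols are included; this is not a complete system.

INITIALS = {"l": "ㄌ", "b": "ㄅ", "m": "ㄇ"}
MEDIALS = {"i": "ㄧ", "u": "ㄨ"}
FINALS = {"an": "ㄢ", "en": "ㄣ", "ang": "ㄤ"}

def to_zhuyin(initial: str, medial: str, final: str) -> str:
    # Each component maps to one symbol; the whole final (e.g. -an)
    # is a single character, unlike an alphabet, which would spell a-n.
    return INITIALS[initial] + MEDIALS[medial] + FINALS[final]

print(to_zhuyin("l", "u", "an"))  # ㄌㄨㄢ
```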
Romanization
European alphabets, especially Latin and Cyrillic, have been adapted for many languages of Asia. Arabic is also widely used, sometimes as an abjad, as with Urdu and Persian, and sometimes as a complete alphabet, as with Kurdish and Uyghur.
Types
The term "alphabet" is used by linguists and paleographers in both a wide and a narrow sense. In a broader sense, an alphabet is a segmental script at the phoneme level—that is, it has separate glyphs for individual sounds and not for larger units such as syllables or words. In the narrower sense, some scholars distinguish "true" alphabets from two other types of segmental script, abjads, and abugidas. These three differ in how they treat vowels. Abjads have letters for consonants and leave most vowels unexpressed. Abugidas are also consonant-based but indicate vowels with diacritics, a systematic graphic modification of the consonants. The earliest known alphabet using this sense is the Wadi el-Hol script, believed to be an abjad. Its successor, Phoenician, is the ancestor of modern alphabets, including Arabic, Greek, Latin (via the Old Italic alphabet), Cyrillic (via the Greek alphabet), and Hebrew (via Aramaic).
Examples of present-day abjads are the Arabic and Hebrew scripts; true alphabets include Latin, Cyrillic, and Korean Hangul; and abugidas include the scripts used to write Tigrinya, Amharic, Hindi, and Thai. The Canadian Aboriginal syllabics are also an abugida, rather than a syllabary as their name would imply, because each glyph stands for a consonant and is modified by rotation to represent the following vowel. In a true syllabary, each consonant-vowel combination would be represented by a separate glyph.
All three types may be augmented with syllabic glyphs. Ugaritic, for example, is essentially an abjad but has syllabic letters for /ʔa, ʔi, ʔu/; these are the only times that vowels are indicated. Coptic has a letter for /ti/. Devanagari is typically an abugida augmented with dedicated letters for initial vowels, though some traditions use अ as a zero consonant as the graphic base for such vowels.
The boundaries between the three types of segmental scripts are not always clear-cut. For example, Sorani Kurdish is written in the Arabic script, which, when used for other languages, is an abjad. In Kurdish, writing the vowels is mandatory and whole letters are used, so the script is a true alphabet. Other languages may use a Semitic abjad with mandatory vowel diacritics, effectively making them abugidas. On the other hand, the Phagspa script of the Mongol Empire was based closely on the Tibetan abugida, but its vowel marks are written after the preceding consonant rather than as diacritic marks, although short a is not written, as in the Indic abugidas. The Ge'ez script, the source of the term "abugida" and now used for Amharic and Tigrinya, has assimilated its vowel modifications into the consonants to such a degree that it is no longer systematic and must be learned as a syllabary rather than as a segmental script. Even more extreme, the Pahlavi abjad eventually became logographic.
Thus the primary categorisation of alphabets reflects how they treat vowels. For tonal languages, further classification can be based on their treatment of tone, though names do not yet exist to distinguish the various types. Some alphabets disregard tone entirely, especially when it does not carry a heavy functional load, as in Somali and many other languages of Africa and the Americas. Most commonly, tones are indicated by diacritics, which is how vowels are treated in abugidas; this is the case for Vietnamese (a true alphabet) and Thai (an abugida). In Thai, the tone is determined primarily by a consonant, with diacritics for disambiguation. In the Pollard script, an abugida, vowels are indicated by diacritics, and the placing of the diacritic relative to the consonant is modified to indicate the tone. More rarely, a script may have separate letters for tones, as is the case for Hmong and Zhuang. In many of these scripts, regardless of whether letters or diacritics are used, the most common tone is not marked, just as the most common vowel is not marked in Indic abugidas. In Zhuyin, not only is one of the tones unmarked, but there is also a diacritic to indicate a lack of tone, like the virama of Indic.
Alphabetical order
Alphabets often come to be associated with a standard ordering of their letters; this is for collation—namely, for listing words and other items in alphabetical order.
Latin alphabets
The basic ordering of the Latin alphabet (A B C D E F G H I J K L M N O P Q R S T U V W X Y Z), which derives from the Northwest Semitic "Abgad" order, is well established, although languages using this alphabet have different conventions for their treatment of modified letters (such as the French é, à, and ô) and of certain combinations of letters (multigraphs). In French, these are not considered to be additional letters for collation. However, in Icelandic, the accented letters such as á, í, and ö are considered distinct letters representing different vowel sounds from those represented by their unaccented counterparts. In Spanish, ñ is considered a separate letter, but accented vowels such as á and é are not. The digraphs ll and ch were also formerly considered single letters and sorted separately after l and c, but in 1994 the tenth congress of the Association of Spanish Language Academies changed the collating order so that ll came to be sorted between lk and lm in the dictionary and ch between cg and ci; those digraphs were still formally designated as letters, but in 2010 the Real Academia Española changed this, so they are no longer considered letters at all.
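Locale-aware collation makes such differences concrete. The sketch below uses Python's standard locale module; the locale names are assumptions, and which locales are installed depends on the operating system.

```python
# Compare language-specific sort orders with the standard locale module.
# Locale names such as "es_ES.UTF-8" and "sv_SE.UTF-8" must be installed
# on the host system for this to run.
import locale

# Modern Spanish sorts ll simply as l + l.
locale.setlocale(locale.LC_COLLATE, "es_ES.UTF-8")
print(sorted(["luz", "llave", "lago"], key=locale.strxfrm))
# ['lago', 'llave', 'luz']

# Swedish treats å, ä and ö as distinct letters at the end of the alphabet.
locale.setlocale(locale.LC_COLLATE, "sv_SE.UTF-8")
print(sorted(["ärlig", "zebra", "apa"], key=locale.strxfrm))
# ['apa', 'zebra', 'ärlig']
```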
In German, words starting with sch- (which spells the German phoneme /ʃ/) are inserted between words with initial sca- and sci- (all incidentally loanwords) instead of appearing after initial sz, as though sch were a single letter. This contrasts with several languages, such as Albanian, in which dh-, ë-, gj-, ll-, nj-, rr-, th-, xh-, and zh- all represent phonemes, are considered separate single letters, and follow the letters d, e, g, l, n, r, t, x, and z respectively, as well as with Hungarian and Welsh. Further, German words with an umlaut are collated ignoring the umlaut, contrary to Turkish, which adopted the graphemes ö and ü and where a word like tüfek would come after tuz in the dictionary. An exception is the German telephone directory, where umlauts are sorted like ä = ae, since names such as Jäger also appear with the spelling Jaeger and are not distinguished in the spoken language.
The Danish and Norwegian alphabets end with æ—ø—å, whereas Swedish conventionally puts å—ä—ö at the end. However, æ phonetically corresponds with ä, as ø does with ö.
Early alphabets
It is unknown whether the earliest alphabets had a defined sequence. Some alphabets today, such as the Hanuno'o script, are learned one letter at a time, in no particular order, and are not used for collation where a definite order is required. However, a dozen Ugaritic tablets from the fourteenth century BCE preserve the alphabet in two sequences. One, the ABCDE order later used in Phoenician, has continued with minor changes in Hebrew, Greek, Armenian, Gothic, Cyrillic, and Latin; the other, HMĦLQ, was used in southern Arabia and is preserved today in Ethiopic. Both orders have therefore been stable for at least 3000 years.
Runic used an unrelated Futhark sequence, which was later simplified. Arabic usually uses its own sequence, although it retains the traditional abjadi order, which is used for numbering.
The Brahmic family of alphabets used in India uses a unique order based on phonology: The letters are arranged according to how and where the sounds get produced in the mouth. This organization is present in Southeast Asia, Tibet, Korean hangul, and even Japanese kana, which is not an alphabet.
Acrophony
In Phoenician, each letter was associated with a word that begins with that sound. This is called acrophony, and it continues to be used to varying degrees in Samaritan, Aramaic, Syriac, Hebrew, Greek, and Arabic.
Acrophony was abandoned in Latin, which referred to the letters by adding a vowel (usually "e", sometimes "a" or "u") before or after the consonant. Two exceptions were Y and Z, which were borrowed from the Greek alphabet rather than Etruscan and were known as Y Graeca "Greek Y" and zeta (from Greek); this discrepancy was inherited by many European languages, as in the term zed for Z in all forms of English other than American English. Over time names sometimes shifted or were added, as in double U for W ("double V" in French), the English name for Y, and the American zee for Z. Comparing them in English and French gives a clear reflection of the Great Vowel Shift: A, B, C, and D are pronounced /eɪ, biː, siː, diː/ in today's English, but in contemporary French they are /a, be, se, de/. The French names (from which the English names were derived) preserve the qualities of the English vowels before the Great Vowel Shift. By contrast, the names of F, L, M, N, and S (/ɛf, ɛl, ɛm, ɛn, ɛs/) remain the same in both languages because "short" vowels were largely unaffected by the Shift.
In Cyrillic, acrophony was originally present, using Slavic words: the names of the first three letters were azŭ, buky, and vědě, matching the Cyrillic collation order А, Б, В. However, this was later abandoned in favor of a system similar to Latin.
Orthography and pronunciation
When an alphabet is adopted or developed to represent a given language, an orthography generally comes into being, providing rules for the spelling of words, following the principle on which alphabets are based. These rules will map letters of the alphabet to the phonemes of the spoken language. In a perfectly phonemic orthography, there would be a consistent one-to-one correspondence between the letters and the phonemes, so that a writer could predict the spelling of a word given its pronunciation and a speaker would always know the pronunciation of a word given its spelling, and vice versa. However, this ideal is rarely achieved in practice. Some languages, such as Spanish and Finnish, come close to it; others, such as English, deviate from it to a much larger degree.
The pronunciation of a language often evolves independently of its writing system, and writing systems have been borrowed for languages they were not originally designed for, so the degree to which letters of an alphabet correspond to phonemes of a language varies greatly from one language to another.
Languages may fail to achieve a one-to-one correspondence between letters and sounds in any of several ways:
A language may represent a given phoneme by combinations of letters rather than just a single letter. Two-letter combinations are called digraphs, and three-letter groups are called trigraphs. German uses the tetragraphs (four letters) "tsch" for the phoneme /tʃ/ and (in a few borrowed words) "dsch" for /dʒ/. Kabardian also uses a tetragraph for one of its phonemes, namely "кхъу". Two letters representing one sound also occur in several instances in Hungarian (where, for instance, cs stands for [tʃ], sz for [s], zs for [ʒ], and dzs for [dʒ]); a sketch of how such multigraphs can be parsed appears after this list.
A language may represent the same phoneme with two or more different letters or combinations of letters. An example is Modern Greek, which may write the phoneme /i/ in six different ways: ι, η, υ, ει, οι, and υι.
A language may spell some words with unpronounced letters that exist for historical or other reasons. For example, the spelling of the Thai word for "beer" [เบียร์] retains a letter for the final consonant "r" present in the English word it borrows, but silences it.
Pronunciation of individual words may change according to the presence of surrounding words in a sentence, a phenomenon known as sandhi.
Different dialects of a language may use different phonemes for the same word.
A language may use different sets of symbols or rules for distinct vocabulary items, typically for foreign words. For example, the Japanese katakana syllabary is used for foreign words, and English has rules for spelling loanwords from other languages.
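As noted in the first item above, multigraphs complicate the mapping from letters to phonemes. Below is a minimal sketch of one common approach, greedy longest-match splitting, using the Hungarian multigraphs as an example; it deliberately ignores complications such as digraphs that straddle compound-word boundaries.

```python
# Split a Hungarian word into sound units by greedy longest-match,
# so that dzs, cs, sz and zs are kept as single units rather than
# being read letter by letter. Illustrative only: the inventory is
# incomplete and exceptional spellings are ignored.

MULTIGRAPHS = ["dzs", "cs", "sz", "zs"]  # longest first, so dzs wins over zs

def sound_units(word: str) -> list[str]:
    units, i = [], 0
    while i < len(word):
        for mg in MULTIGRAPHS:
            if word.startswith(mg, i):
                units.append(mg)
                i += len(mg)
                break
        else:  # no multigraph matched at this position
            units.append(word[i])
            i += 1
    return units

print(sound_units("dzsungel"))  # ['dzs', 'u', 'n', 'g', 'e', 'l']
print(sound_units("szabad"))    # ['sz', 'a', 'b', 'a', 'd']
```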
National languages sometimes elect to address the problem of dialects by associating the alphabet with the national standard. Some national languages, like Finnish, Armenian, Turkish, Russian, Serbo-Croatian (Serbian, Croatian, and Bosnian), and Bulgarian, have a very regular spelling system with a nearly one-to-one correspondence between letters and phonemes. Similarly, the Italian verb corresponding to 'spell (out)', compitare, is unknown to many Italians because spelling is usually trivial, as Italian spelling is highly phonemic. In standard Spanish, one can tell the pronunciation of a word from its spelling, but not vice versa, as phonemes sometimes can be represented in more than one way, but a given letter is consistently pronounced. French, with its silent letters, nasal vowels, and elision, may seem to lack much correspondence between spelling and pronunciation, but its rules on pronunciation, though complex, are consistent and predictable with a fair degree of accuracy.
At the other extreme are languages such as English, where pronunciations mostly have to be memorized as they do not correspond to the spelling consistently. For English, this is because the Great Vowel Shift occurred after the orthography was established and because English has acquired a large number of loanwords at different times, retaining their original spelling at varying levels. However, even English has general, albeit complex, rules that predict pronunciation from spelling, and these rules are usually successful; rules to predict spelling from pronunciation have a higher failure rate.
Sometimes, countries have the written language undergo a spelling reform to realign the writing with the contemporary spoken language. These can range from simple spelling changes and word forms to switching the entire writing system. For example, Turkey switched from the Arabic alphabet to a Latin-based Turkish alphabet; Kazakh changed from an Arabic script to a Cyrillic script under the influence of the Soviet Union and in 2021 began a transition to the Latin alphabet, similar to Turkish. The Cyrillic script used to be official in Uzbekistan and Turkmenistan before they switched to the Latin alphabet; Uzbekistan is also reforming its alphabet to use diacritics on the letters currently marked by apostrophes and on the letters that are digraphs.
The standard system of symbols used by linguists to represent sounds in any language, independently of orthography, is called the International Phonetic Alphabet.
See also
Abecedarium
Acrophony
Akshara
Alphabet book
Alphabet effect
Alphabet song
Alphabetical order
Butterfly Alphabet
Character encoding
Constructed script
Fingerspelling
NATO phonetic alphabet
Lipogram
List of writing systems
Pangram
Thoth
Transliteration
Unicode
References
Bibliography
Further reading
Josephine Quinn, "Alphabet Politics" (review of Silvia Ferrara, The Greatest Invention: A History of the World in Nine Mysterious Scripts, translated from the Italian by Todd Portnowitz, Farrar, Straus and Giroux, 2022, 289 pp.; and Johanna Drucker, Inventing the Alphabet: The Origins of Letters from Antiquity to the Present, University of Chicago Press, 2022, 380 pp.), The New York Review of Books, vol. LXX, no. 1 (19 January 2023), pp. 6, 8, 10.
External links
The Origins of abc
"Language, Writing and Alphabet: An Interview with Christophe Rico", Damqātum 3 (2007)
Michael Everson's Alphabets of Europe
Evolution of alphabets, animation by Prof. Robert Fradkin at the University of Maryland
How the Alphabet Was Born from Hieroglyphs—Biblical Archaeology Review
An Early Hellenic Alphabet
Museum of the Alphabet
The Alphabet, BBC Radio 4 discussion with Eleanor Robson, Alan Millard and Rosalind Thomas (In Our Time, 18 December 2003)
Orthography
|
https://en.wikipedia.org/wiki/Anatomy
|
Anatomy () is the branch of biology concerned with the study of the structure of organisms and their parts. Anatomy is a branch of natural science that deals with the structural organization of living things. It is an old science, having its beginnings in prehistoric times. Anatomy is inherently tied to developmental biology, embryology, comparative anatomy, evolutionary biology, and phylogeny, as these are the processes by which anatomy is generated, both over immediate and long-term timescales. Anatomy and physiology, which study the structure and function of organisms and their parts respectively, make a natural pair of related disciplines, and are often studied together. Human anatomy is one of the essential basic sciences that are applied in medicine.
Anatomy is a complex and dynamic field that is constantly evolving as new discoveries are made. In recent years, there has been a significant increase in the use of advanced imaging techniques, such as MRI and CT scans, which allow for more detailed and accurate visualizations of the body's structures.
The discipline of anatomy is divided into macroscopic and microscopic parts. Macroscopic anatomy, or gross anatomy, is the examination of an animal's body parts using unaided eyesight. Gross anatomy also includes the branch of superficial anatomy. Microscopic anatomy involves the use of optical instruments in the study of the tissues of various structures, known as histology, and also in the study of cells.
The history of anatomy is characterized by a progressive understanding of the functions of the organs and structures of the human body. Methods have also improved dramatically, advancing from the examination of animals by dissection of carcasses and cadavers (corpses) to 20th-century medical imaging techniques, including X-ray, ultrasound, and magnetic resonance imaging.
Etymology and definition
Derived from the Greek anatomē "dissection" (from anatémnō "I cut up, cut open" from ἀνά aná "up", and τέμνω témnō "I cut"), anatomy is the scientific study of the structure of organisms including their systems, organs and tissues. It includes the appearance and position of the various parts, the materials from which they are composed, and their relationships with other parts. Anatomy is quite distinct from physiology and biochemistry, which deal respectively with the functions of those parts and the chemical processes involved. For example, an anatomist is concerned with the shape, size, position, structure, blood supply and innervation of an organ such as the liver; while a physiologist is interested in the production of bile, the role of the liver in nutrition and the regulation of bodily functions.
The discipline of anatomy can be subdivided into a number of branches, including gross or macroscopic anatomy and microscopic anatomy. Gross anatomy is the study of structures large enough to be seen with the naked eye, and also includes superficial anatomy or surface anatomy, the study by sight of the external body features. Microscopic anatomy is the study of structures on a microscopic scale, along with histology (the study of tissues), and embryology (the study of an organism in its immature condition). Regional anatomy is the study of the interrelationships of all of the structures in a specific body region, such as the abdomen. In contrast, systemic anatomy is the study of the structures that make up a discrete body system—that is, a group of structures that work together to perform a unique body function, such as the digestive system.
Anatomy can be studied using both invasive and non-invasive methods with the goal of obtaining information about the structure and organization of organs and systems. Methods used include dissection, in which a body is opened and its organs studied, and endoscopy, in which a video camera-equipped instrument is inserted through a small incision in the body wall and used to explore the internal organs and other structures. Angiography using X-rays or magnetic resonance angiography are methods to visualize blood vessels.
The term "anatomy" is commonly taken to refer to human anatomy. However, substantially similar structures and tissues are found throughout the rest of the animal kingdom, and the term also includes the anatomy of other animals. The term zootomy is also sometimes used to specifically refer to non-human animals. The structure and tissues of plants are of a dissimilar nature and they are studied in plant anatomy.
Animal tissues
The kingdom Animalia contains multicellular organisms that are heterotrophic and motile (although some have secondarily adopted a sessile lifestyle). Most animals have bodies differentiated into separate tissues, and these animals are also known as eumetazoans. They have an internal digestive chamber with one or two openings; the gametes are produced in multicellular sex organs, and the zygotes include a blastula stage in their embryonic development. The eumetazoans do not include the sponges, which have undifferentiated cells.
Unlike plant cells, animal cells have neither a cell wall nor chloroplasts. Vacuoles, when present, are more numerous and much smaller than those in the plant cell. The body tissues are composed of numerous types of cells, including those found in muscles, nerves and skin. Each typically has a cell membrane formed of phospholipids, cytoplasm and a nucleus. All of the different cells of an animal are derived from the embryonic germ layers. Those simpler invertebrates which are formed from two germ layers of ectoderm and endoderm are called diploblastic, and the more developed animals whose structures and organs are formed from three germ layers are called triploblastic. All of a triploblastic animal's tissues and organs are derived from the three germ layers of the embryo: the ectoderm, mesoderm and endoderm.
Animal tissues can be grouped into four basic types: connective, epithelial, muscle and nervous tissue.
Connective tissue
Connective tissues are fibrous and made up of cells scattered among non-living material called the extracellular matrix. Connective tissue gives shape to organs and holds them in place. The main types are loose connective tissue, adipose tissue, fibrous connective tissue, cartilage and bone. The extracellular matrix contains proteins, the chief and most abundant of which is collagen. Collagen plays a major part in organizing and maintaining tissues. The matrix can be modified to form a skeleton to support or protect the body. An exoskeleton is a thickened, rigid cuticle which is stiffened by mineralization, as in crustaceans, or by the cross-linking of its proteins, as in insects. An endoskeleton is internal and present in all developed animals, as well as in many of those less developed.
Epithelium
Epithelial tissue is composed of closely packed cells, bound to each other by cell adhesion molecules, with little intercellular space. Epithelial cells can be squamous (flat), cuboidal or columnar and rest on a basal lamina, the upper layer of the basement membrane; the lower layer is the reticular lamina, which lies next to the connective tissue in the extracellular matrix secreted by the epithelial cells. There are many different types of epithelium, modified to suit a particular function. In the respiratory tract there is a ciliated epithelial lining, and in the small intestine the epithelial lining bears villi and microvilli. Skin consists of an outer layer of keratinized stratified squamous epithelium that covers the exterior of the vertebrate body. Keratinocytes make up about 95% of the cells in the skin. The epithelial cells on the external surface of the body typically secrete an extracellular matrix in the form of a cuticle. In simple animals this may just be a coat of glycoproteins. In more advanced animals, many glands are formed of epithelial cells.
Muscle tissue
Muscle cells (myocytes) form the active contractile tissue of the body. Muscle tissue functions to produce force and cause motion, either locomotion or movement within internal organs. Muscle is formed of contractile filaments and is separated into three main types: smooth muscle, skeletal muscle and cardiac muscle. Smooth muscle has no striations when examined microscopically. It contracts slowly but maintains contractility over a wide range of stretch lengths. It is found in such organs as sea anemone tentacles and the body wall of sea cucumbers. Skeletal muscle contracts rapidly but has a limited range of extension. It is used for the movement of appendages and jaws. Obliquely striated muscle is intermediate between the other two. The filaments are staggered, and this is the type of muscle found in earthworms, which can extend slowly or make rapid contractions. In higher animals striated muscles occur in bundles attached to bone to provide movement and are often arranged in antagonistic sets. Smooth muscle is found in the walls of the uterus, bladder, intestines, stomach, oesophagus, respiratory airways, and blood vessels. Cardiac muscle is found only in the heart, allowing it to contract and pump blood round the body.
Nervous tissue
Nervous tissue is composed of many nerve cells known as neurons which transmit information. In some slow-moving radially symmetrical marine animals such as ctenophores and cnidarians (including sea anemones and jellyfish), the nerves form a nerve net, but in most animals they are organized longitudinally into bundles. In simple animals, receptor neurons in the body wall cause a local reaction to a stimulus. In more complex animals, specialized receptor cells such as chemoreceptors and photoreceptors are found in groups and send messages along neural networks to other parts of the organism. Neurons can be connected together in ganglia. In higher animals, specialized receptors are the basis of sense organs and there is a central nervous system (brain and spinal cord) and a peripheral nervous system. The latter consists of sensory nerves that transmit information from sense organs and motor nerves that influence target organs. The peripheral nervous system is divided into the somatic nervous system which conveys sensation and controls voluntary muscle, and the autonomic nervous system which involuntarily controls smooth muscle, certain glands and internal organs, including the stomach.
Vertebrate anatomy
All vertebrates have a similar basic body plan and at some point in their lives, mostly in the embryonic stage, share the major chordate characteristics: a stiffening rod, the notochord; a dorsal hollow tube of nervous material, the neural tube; pharyngeal arches; and a tail posterior to the anus. The spinal cord is protected by the vertebral column and is above the notochord, and the gastrointestinal tract is below it. Nervous tissue is derived from the ectoderm, connective tissues are derived from mesoderm, and gut is derived from the endoderm. At the posterior end is a tail which continues the spinal cord and vertebrae but not the gut. The mouth is found at the anterior end of the animal, and the anus at the base of the tail. The defining characteristic of a vertebrate is the vertebral column, formed in the development of the segmented series of vertebrae. In most vertebrates the notochord becomes the nucleus pulposus of the intervertebral discs. However, a few vertebrates, such as the sturgeon and the coelacanth, retain the notochord into adulthood. Jawed vertebrates are typified by paired appendages, fins or legs, which may be secondarily lost. The limbs of vertebrates are considered to be homologous because the same underlying skeletal structure was inherited from their last common ancestor. This is one of the arguments put forward by Charles Darwin to support his theory of evolution.
Fish anatomy
The body of a fish is divided into a head, trunk and tail, although the divisions between the three are not always externally visible. The skeleton, which forms the support structure inside the fish, is either made of cartilage, in cartilaginous fish, or bone, in bony fish. The main skeletal element is the vertebral column, composed of articulating vertebrae which are lightweight yet strong. The ribs attach to the spine and there are no limbs or limb girdles. The main external features of the fish, the fins, are composed of either bony or soft spines called rays, which, with the exception of the caudal fins, have no direct connection with the spine. They are supported by the muscles which compose the main part of the trunk. The heart has two chambers and pumps the blood through the respiratory surfaces of the gills and on round the body in a single circulatory loop. The eyes are adapted for seeing underwater and have only local vision. There is an inner ear but no external or middle ear. Low frequency vibrations are detected by the lateral line system of sense organs that run along the length of the sides of fish, and these respond to nearby movements and to changes in water pressure.
Sharks and rays are basal fish with numerous primitive anatomical features similar to those of ancient fish, including skeletons composed of cartilage. Their bodies tend to be dorso-ventrally flattened, they usually have five pairs of gill slits and a large mouth set on the underside of the head. The dermis is covered with separate dermal placoid scales. They have a cloaca into which the urinary and genital passages open, but not a swim bladder. Cartilaginous fish produce a small number of large, yolky eggs. Some species are ovoviviparous and the young develop internally but others are oviparous and the larvae develop externally in egg cases.
The bony fish lineage shows more derived anatomical traits, often with major evolutionary changes from the features of ancient fish. They have a bony skeleton, are generally laterally flattened, have five pairs of gills protected by an operculum, and a mouth at or near the tip of the snout. The dermis is covered with overlapping scales. Bony fish have a swim bladder which helps them maintain a constant depth in the water column, but not a cloaca. They mostly spawn a large number of small eggs with little yolk which they broadcast into the water column.
Amphibian anatomy
Amphibians are a class of animals comprising frogs, salamanders and caecilians. They are tetrapods, but the caecilians and a few species of salamander have either no limbs or limbs that are much reduced in size. Their main bones are hollow and lightweight and are fully ossified; the vertebrae interlock with each other and have articular processes. Their ribs are usually short and may be fused to the vertebrae. Their skulls are mostly broad and short, and are often incompletely ossified. Their skin contains little keratin and lacks scales, but contains many mucous glands and, in some species, poison glands. The hearts of amphibians have three chambers, two atria and one ventricle. They have a urinary bladder, and nitrogenous waste products are excreted primarily as urea. Amphibians breathe by means of buccal pumping, a pump action in which air is first drawn into the buccopharyngeal region through the nostrils. The nostrils are then closed and the air is forced into the lungs by contraction of the throat. They supplement this with gas exchange through the skin, which needs to be kept moist.
In frogs the pelvic girdle is robust and the hind legs are much longer and stronger than the forelimbs. The feet have four or five digits and the toes are often webbed for swimming or have suction pads for climbing. Frogs have large eyes and no tail. Salamanders resemble lizards in appearance; their short legs project sideways, the belly is close to or in contact with the ground and they have a long tail. Caecilians superficially resemble earthworms and are limbless. They burrow by means of zones of muscle contractions which move along the body and they swim by undulating their body from side to side.
Reptile anatomy
Reptiles are a class of animals comprising turtles, tuataras, lizards, snakes and crocodiles. They are tetrapods, but the snakes and a few species of lizard either have no limbs or limbs that are much reduced in size. Their bones are better ossified and their skeletons stronger than those of amphibians. The teeth are conical and mostly uniform in size. The surface cells of the epidermis are modified into horny scales which create a waterproof layer. Reptiles are unable to use their skin for respiration as amphibians do, and have a more efficient respiratory system, drawing air into their lungs by expanding their chest walls. The heart resembles that of the amphibian, but there is a septum which more completely separates the oxygenated and deoxygenated bloodstreams. The reproductive system has evolved for internal fertilization, with a copulatory organ present in most species. The eggs are surrounded by amniotic membranes which prevent them from drying out, and they are laid on land or develop internally in some species. The bladder is small, as nitrogenous waste is excreted as uric acid.
Turtles are notable for their protective shells. They have an inflexible trunk encased in a horny carapace above and a plastron below. These are formed from bony plates embedded in the dermis, which are overlain by horny ones and are partially fused with the ribs and spine. The neck is long and flexible, and the head and the legs can be drawn back inside the shell. Many turtles are herbivorous, and in all turtles the typical reptile teeth have been replaced by sharp, horny plates. In aquatic species, the front legs are modified into flippers.
Tuataras superficially resemble lizards but the lineages diverged in the Triassic period. There is one living species, Sphenodon punctatus. The skull has two openings (fenestrae) on either side and the jaw is rigidly attached to the skull. There is one row of teeth in the lower jaw and this fits between the two rows in the upper jaw when the animal chews. The teeth are merely projections of bony material from the jaw and eventually wear down. The brain and heart are more primitive than those of other reptiles, and the lungs have a single chamber and lack bronchi. The tuatara has a well-developed parietal eye on its forehead.
Lizards have skulls with only one fenestra on each side, the lower bar of bone below the second fenestra having been lost. This results in the jaws being less rigidly attached which allows the mouth to open wider. Lizards are mostly quadrupeds, with the trunk held off the ground by short, sideways-facing legs, but a few species have no limbs and resemble snakes. Lizards have moveable eyelids, eardrums are present and some species have a central parietal eye.
Snakes are closely related to lizards, having branched off from a common ancestral lineage during the Cretaceous period, and they share many of the same features. The skeleton consists of a skull, a hyoid bone, spine and ribs though a few species retain a vestige of the pelvis and rear limbs in the form of pelvic spurs. The bar under the second fenestra has also been lost and the jaws have extreme flexibility allowing the snake to swallow its prey whole. Snakes lack moveable eyelids, the eyes being covered by transparent "spectacle" scales. They do not have eardrums but can detect ground vibrations through the bones of their skull. Their forked tongues are used as organs of taste and smell and some species have sensory pits on their heads enabling them to locate warm-blooded prey.
Crocodilians are large, low-slung aquatic reptiles with long snouts and large numbers of teeth. The head and trunk are dorso-ventrally flattened and the tail is laterally compressed. It undulates from side to side to force the animal through the water when swimming. The tough keratinized scales provide body armour and some are fused to the skull. The nostrils, eyes and ears are elevated above the top of the flat head enabling them to remain above the surface of the water when the animal is floating. Valves seal the nostrils and ears when it is submerged. Unlike other reptiles, crocodilians have hearts with four chambers allowing complete separation of oxygenated and deoxygenated blood.
Bird anatomy
Birds are tetrapods, but their hind limbs are used for walking or hopping while their front limbs are wings covered with feathers and adapted for flight. Birds are endothermic, have a high metabolic rate, a light skeletal system and powerful muscles. The long bones are thin, hollow and very light. Air sac extensions from the lungs occupy the centre of some bones. The sternum is wide and usually has a keel, and the caudal vertebrae are fused. There are no teeth, and the narrow jaws are adapted into a horn-covered beak. The eyes are relatively large, particularly in nocturnal species such as owls. They face forwards in predators and sideways in ducks.
The feathers are outgrowths of the epidermis and are found in localized bands from where they fan out over the skin. Large flight feathers are found on the wings and tail, contour feathers cover the bird's surface, and fine down occurs on young birds and under the contour feathers of water birds. The only cutaneous gland is the single uropygial gland near the base of the tail. This produces an oily secretion that waterproofs the feathers when the bird preens. There are scales on the legs and feet, and claws on the tips of the toes.
Mammal anatomy
Mammals are a diverse class of animals, mostly terrestrial but some are aquatic and others have evolved flapping or gliding flight. They mostly have four limbs, but some aquatic mammals have no limbs or limbs modified into fins, and the forelimbs of bats are modified into wings. The legs of most mammals are situated below the trunk, which is held well clear of the ground. The bones of mammals are well ossified and their teeth, which are usually differentiated, are coated in a layer of prismatic enamel. The teeth are shed once (milk teeth) during the animal's lifetime or not at all, as is the case in cetaceans. Mammals have three bones in the middle ear and a cochlea in the inner ear. They are clothed in hair and their skin contains glands which secrete sweat. Some of these glands are specialized as mammary glands, producing milk to feed the young. Mammals breathe with lungs and have a muscular diaphragm separating the thorax from the abdomen which helps them draw air into the lungs. The mammalian heart has four chambers, and oxygenated and deoxygenated blood are kept entirely separate. Nitrogenous waste is excreted primarily as urea.
Mammals are amniotes, and most are viviparous, giving birth to live young. Exceptions to this are the egg-laying monotremes, the platypus and the echidnas of Australia. Most other mammals have a placenta through which the developing foetus obtains nourishment, but in marsupials, the foetal stage is very short and the immature young is born and finds its way to its mother's pouch where it latches on to a nipple and completes its development.
Human anatomy
Humans have the overall body plan of a mammal. Humans have a head, neck, trunk (which includes the thorax and abdomen), two arms and hands, and two legs and feet.
Generally, students of certain biological sciences, paramedics, prosthetists and orthotists, physiotherapists, occupational therapists, nurses, podiatrists, and medical students learn gross anatomy and microscopic anatomy from anatomical models, skeletons, textbooks, diagrams, photographs, lectures and tutorials. In addition, medical students generally also learn gross anatomy through practical experience of dissection and inspection of cadavers. The study of microscopic anatomy (or histology) can be aided by practical experience examining histological preparations (or slides) under a microscope.
Human anatomy, physiology and biochemistry are complementary basic medical sciences, which are generally taught to medical students in their first year at medical school. Human anatomy can be taught regionally or systemically; that is, respectively, studying anatomy by bodily regions such as the head and chest, or studying by specific systems, such as the nervous or respiratory systems. The major anatomy textbook, Gray's Anatomy, has been reorganized from a systems format to a regional format, in line with modern teaching methods. A thorough working knowledge of anatomy is required by physicians, especially surgeons and doctors working in some diagnostic specialties, such as histopathology and radiology.
Academic anatomists are usually employed by universities, medical schools or teaching hospitals. They are often involved in teaching anatomy, and research into certain systems, organs, tissues or cells.
Invertebrate anatomy
Invertebrates constitute a vast array of living organisms ranging from the simplest unicellular eukaryotes such as Paramecium to such complex multicellular animals as the octopus, lobster and dragonfly. They constitute about 95% of the animal species. By definition, none of these creatures has a backbone. The cells of single-cell protozoans have the same basic structure as those of multicellular animals but some parts are specialized into the equivalent of tissues and organs. Locomotion is often provided by cilia or flagella or may proceed via the advance of pseudopodia, food may be gathered by phagocytosis, energy needs may be supplied by photosynthesis and the cell may be supported by an endoskeleton or an exoskeleton. Some protozoans can form multicellular colonies.
Metazoans are multicellular organisms in which different groups of cells serve different functions. The most basic types of metazoan tissues are epithelium and connective tissue, both of which are present in nearly all invertebrates. The outer surface of the epidermis is normally formed of epithelial cells, which secrete an extracellular matrix that provides support to the organism. An endoskeleton derived from the mesoderm is present in echinoderms, sponges and some cephalopods. Exoskeletons are derived from the epidermis and are composed of chitin in arthropods (insects, spiders, ticks, shrimps, crabs, lobsters). Calcium carbonate constitutes the shells of molluscs, brachiopods and some tube-building polychaete worms, and silica forms the exoskeleton of the microscopic diatoms and radiolaria. Other invertebrates may have no rigid structures, but the epidermis may secrete a variety of surface coatings such as the pinacoderm of sponges, the gelatinous cuticle of cnidarians (polyps, sea anemones, jellyfish) and the collagenous cuticle of annelids. The outer epithelial layer may include cells of several types, including sensory cells, gland cells and stinging cells. There may also be protrusions such as microvilli, cilia, bristles, spines and tubercles.
Marcello Malpighi, the father of microscopical anatomy, discovered that plants had tubules similar to those he saw in insects like the silk worm. He observed that when a ring-like portion of bark was removed from a trunk, a swelling occurred in the tissues above the ring, and he correctly interpreted this as growth stimulated by food coming down from the leaves and being captured above the ring.
Arthropod anatomy
Arthropods comprise the largest phylum in the animal kingdom with over a million known invertebrate species.
Insects possess segmented bodies supported by a hard, jointed outer covering, the exoskeleton, made mostly of chitin. The segments of the body are organized into three distinct parts: a head, a thorax and an abdomen. The head typically bears a pair of sensory antennae, a pair of compound eyes, one to three simple eyes (ocelli) and three sets of modified appendages that form the mouthparts. The thorax has three pairs of segmented legs, one pair for each of the three segments that compose the thorax, and one or two pairs of wings. The abdomen is composed of eleven segments, some of which may be fused, and houses the digestive, respiratory, excretory and reproductive systems. There is considerable variation between species and many adaptations to the body parts, especially wings, legs, antennae and mouthparts.
Spiders, an order of arachnids, have four pairs of legs and a body of two segments—a cephalothorax and an abdomen. Spiders have no wings and no antennae. They have mouthparts called chelicerae which are often connected to venom glands, as most spiders are venomous. They have a second pair of appendages called pedipalps attached to the cephalothorax. These have similar segmentation to the legs and function as organs of taste and smell. At the end of each male pedipalp is a spoon-shaped cymbium that acts to support the copulatory organ.
Other branches of anatomy
Superficial or surface anatomy is important as the study of anatomical landmarks that can be readily seen from the exterior contours of the body. It enables physicians or veterinary surgeons to gauge the position and anatomy of the associated deeper structures. Superficial is a directional term that indicates that structures are located relatively close to the surface of the body.
Comparative anatomy relates to the comparison of anatomical structures (both gross and microscopic) in different animals.
Artistic anatomy relates to anatomic studies for artistic reasons.
History
Ancient
Written around 1600 BCE, the Edwin Smith Papyrus, an Ancient Egyptian medical text, described the heart and its vessels, as well as the brain and its meninges and cerebrospinal fluid, and the liver, spleen, kidneys, uterus and bladder, and it showed the blood vessels diverging from the heart. The Ebers Papyrus (c. 1550 BCE) features a "treatise on the heart", with vessels carrying all the body's fluids to or from every member of the body.
Ancient Greek anatomy and physiology underwent great changes and advances throughout the early medieval world. Over time, this medical practice expanded through a continually developing understanding of the functions of organs and structures in the body. Important anatomical observations of the human body were made, which contributed towards the understanding of the brain, eye, liver, reproductive organs and nervous system.
The Hellenistic Egyptian city of Alexandria was the stepping-stone for Greek anatomy and physiology. Alexandria not only housed the biggest library of medical records and books on the liberal arts in the world during the time of the Greeks, but was also home to many medical practitioners and philosophers. Great patronage of the arts and sciences from the Ptolemaic dynasty of Egypt helped raise Alexandria to rival the cultural and scientific achievements of other Greek states.
Some of the most striking advances in early anatomy and physiology took place in Hellenistic Alexandria. Two of the most famous anatomists and physiologists of the third century BCE were Herophilus and Erasistratus. These two physicians helped pioneer human dissection for medical research, using the cadavers of condemned criminals, a practice considered taboo until the Renaissance; Herophilus was recognized as the first person to perform systematic dissections. Herophilus became known for his anatomical works, making impressive contributions to many branches of anatomy and many other aspects of medicine. Among these were the classification of the system of the pulse and the discoveries that human arteries have thicker walls than veins and that the atria are parts of the heart. Herophilus's knowledge of the human body provided vital input towards understanding the brain, eye, liver, reproductive organs and nervous system, and towards characterizing the course of disease. Erasistratus accurately described the structure of the brain, including the cavities and membranes, and made a distinction between its cerebrum and cerebellum. During his study in Alexandria, Erasistratus was particularly concerned with studies of the circulatory and nervous systems. He was able to distinguish the sensory from the motor nerves in the human body and believed that air entered the lungs and heart, from where it was carried throughout the body. His distinction between the arteries and veins, with the arteries thought to carry the air through the body while the veins carried the blood from the heart, was a notable anatomical proposal. Erasistratus was also responsible for naming and describing the function of the epiglottis and the valves of the heart, including the tricuspid. During the third century BCE, Greek physicians were able to differentiate nerves from blood vessels and tendons, and to realize that the nerves convey neural impulses. It was Herophilus who made the point that damage to motor nerves induced paralysis. Herophilus named the meninges and ventricles in the brain, appreciated the division between cerebellum and cerebrum, and recognized that the brain was the "seat of intellect" and not a "cooling chamber" as propounded by Aristotle. Herophilus is also credited with describing the optic, oculomotor, motor division of the trigeminal, facial, vestibulocochlear and hypoglossal nerves.
Great advances were also made during the third century BCE in the understanding of the digestive and reproductive systems. Herophilus discovered and described not only the salivary glands but also the small intestine and liver. He showed that the uterus is a hollow organ and described the ovaries and uterine tubes. He recognized that spermatozoa were produced by the testes and was the first to identify the prostate gland.
The anatomy of the muscles and skeleton is described in the Hippocratic Corpus, an Ancient Greek medical work written by unknown authors. In the 4th century BCE, Aristotle described vertebrate anatomy based on animal dissection, and Praxagoras identified the difference between arteries and veins. In the following century, Herophilos and Erasistratus produced more accurate anatomical descriptions based on vivisection of criminals in Alexandria during the Ptolemaic period.
In the 2nd century, Galen of Pergamum, an anatomist, clinician, writer and philosopher, wrote the final and highly influential anatomy treatise of ancient times. He compiled existing knowledge and studied anatomy through dissection of animals. He was one of the first experimental physiologists through his vivisection experiments on animals. Galen's drawings, based mostly on dog anatomy, became effectively the only anatomical textbook for the next thousand years. His work was known to Renaissance doctors only through Islamic Golden Age medicine until it was translated from the Greek some time in the 15th century.
Medieval to early modern
Anatomy developed little from classical times until the sixteenth century; as the historian Marie Boas writes, "Progress in anatomy before the sixteenth century is as mysteriously slow as its development after 1500 is startlingly rapid". Between 1275 and 1326, the anatomists Mondino de Luzzi, Alessandro Achillini and Antonio Benivieni at Bologna carried out the first systematic human dissections since ancient times. Mondino's Anatomy of 1316 was the first textbook in the medieval rediscovery of human anatomy. It describes the body in the order followed in Mondino's dissections, starting with the abdomen, then the thorax, then the head and limbs. It was the standard anatomy textbook for the next century.
Leonardo da Vinci (1452–1519) was trained in anatomy by Andrea del Verrocchio. He made use of his anatomical knowledge in his artwork, making many sketches of skeletal structures, muscles and organs of humans and other vertebrates that he dissected.
Andreas Vesalius (1514–1564), professor of anatomy at the University of Padua, is considered the founder of modern human anatomy. Originally from Brabant, Vesalius published the influential book De humani corporis fabrica ("the structure of the human body"), a large format book in seven volumes, in 1543. The accurate and intricately detailed illustrations, often in allegorical poses against Italianate landscapes, are thought to have been made by the artist Jan van Calcar, a pupil of Titian.
In England, anatomy was the subject of the first public lectures given in any science; these were given by the Company of Barbers and Surgeons in the 16th century, joined in 1583 by the Lumleian lectures in surgery at the Royal College of Physicians.
Late modern
In the United States, medical schools began to be set up towards the end of the 18th century. Classes in anatomy needed a continual stream of cadavers for dissection and these were difficult to obtain. Philadelphia, Baltimore and New York were all renowned for body snatching activity as criminals raided graveyards at night, removing newly buried corpses from their coffins. A similar problem existed in Britain where demand for bodies became so great that grave-raiding and even anatomy murder were practised to obtain cadavers. Some graveyards were in consequence protected with watchtowers. The practice was halted in Britain by the Anatomy Act of 1832, while in the United States, similar legislation was enacted after the physician William S. Forbes of Jefferson Medical College was found guilty in 1882 of "complicity with resurrectionists in the despoliation of graves in Lebanon Cemetery".
The teaching of anatomy in Britain was transformed by Sir John Struthers, Regius Professor of Anatomy at the University of Aberdeen from 1863 to 1889. He was responsible for setting up the system of three years of "pre-clinical" academic teaching in the sciences underlying medicine, especially anatomy. This system lasted until the reform of medical training in 1993 and 2003. As well as teaching, he collected many vertebrate skeletons for his museum of comparative anatomy, published over 70 research papers, and became famous for his public dissection of the Tay Whale. From 1822 the Royal College of Surgeons regulated the teaching of anatomy in medical schools. Medical museums provided examples in comparative anatomy and were often used in teaching. Ignaz Semmelweis investigated puerperal fever and discovered its cause. He noticed that the frequently fatal fever occurred more often in mothers examined by medical students than by midwives. The students went from the dissecting room to the hospital ward and examined women in childbirth. Semmelweis showed that when the trainees washed their hands in chlorinated lime before each clinical examination, the incidence of puerperal fever among the mothers fell dramatically.
Before the modern medical era, the main means for studying the internal structures of the body were dissection of the dead and inspection, palpation and auscultation of the living. It was the advent of microscopy that opened up an understanding of the building blocks that constitute living tissues. Technical advances in the development of achromatic lenses increased the resolving power of the microscope, and around 1839 Matthias Jakob Schleiden and Theodor Schwann identified the cell as the fundamental unit of organization of all living things. Study of small structures involved passing light through them, and the microtome was invented to provide sufficiently thin slices of tissue to examine. Staining techniques using artificial dyes were established to help distinguish between different types of tissue. Advances in the fields of histology and cytology began in the late 19th century, along with advances in surgical techniques that allowed the painless and safe removal of biopsy specimens. The invention of the electron microscope brought a great advance in resolving power and allowed research into the ultrastructure of cells and the organelles and other structures within them. About the same time, in the 1950s, the use of X-ray diffraction for studying the crystal structures of proteins, nucleic acids and other biological molecules gave rise to the new field of molecular anatomy.
Equally important advances have occurred in non-invasive techniques for examining the interior structures of the body. X-rays can be passed through the body and used in medical radiography and fluoroscopy to differentiate interior structures that have varying degrees of opaqueness. Magnetic resonance imaging, computed tomography, and ultrasound imaging have all enabled examination of internal structures in unprecedented detail to a degree far beyond the imagination of earlier generations.
See also
Anatomical model
Outline of human anatomy
Plastination
References
External links
Anatomy, In Our Time. BBC Radio 4. Melvyn Bragg with guests Ruth Richardson, Andrew Cunningham and Harold Ellis.
"Anatomy of the Human Body". 20th edition. 1918. Henry Gray
Anatomia Collection: anatomical plates 1522 to 1867 (digitized books and images)
Lyman, Henry Munson. The Book of Health (1898). Science History Institute Digital Collections.
Gunther von Hagens True Anatomy for New Ways of Teaching.
|
https://en.wikipedia.org/wiki/Ambiguity
|
Ambiguity is the type of meaning in which a phrase, statement, or resolution is not explicitly defined, making several interpretations plausible. A common aspect of ambiguity is uncertainty. It is thus an attribute of any idea or statement whose intended meaning cannot be definitively resolved, according to a rule or process with a finite number of steps. (The prefix ambi- reflects the idea of "two," as in "two meanings.")
The concept of ambiguity is generally contrasted with vagueness. In ambiguity, specific and distinct interpretations are permitted (although some may not be immediately obvious), whereas with vague information it is difficult to form any interpretation at the desired level of specificity.
Linguistic forms
Lexical ambiguity is contrasted with semantic ambiguity. The former represents a choice between a finite number of known and meaningful context-dependent interpretations. The latter represents a choice between any number of possible interpretations, none of which may have a standard agreed-upon meaning. This form of ambiguity is closely related to vagueness.
Ambiguity in human language is argued to reflect principles of efficient communication. Languages that communicate efficiently will avoid sending information that is redundant with information provided in the context. This can be shown mathematically to result in a system which is ambiguous when context is neglected. In this way, ambiguity is viewed as a generally useful feature of a linguistic system.
Linguistic ambiguity can be a problem in law, because the interpretation of written documents and oral agreements is often of paramount importance.
Lexical ambiguity
The lexical ambiguity of a word or phrase applies to it having more than one meaning in the language to which the word belongs. "Meaning" here refers to whatever should be represented by a good dictionary. For instance, the word "bank" has several distinct lexical definitions, including "financial institution" and "edge of a river". Or consider "apothecary". One could say "I bought herbs from the apothecary". This could mean one actually spoke to the apothecary (pharmacist) or went to the apothecary (pharmacy).
The context in which an ambiguous word is used often makes it clearer which of the meanings is intended. If, for instance, someone says "I put $100 in the bank", most people would not think someone used a shovel to dig in the mud. However, some linguistic contexts do not provide sufficient information to disambiguate a word used in them.
Lexical ambiguity can be addressed by algorithmic methods that automatically associate the appropriate meaning with a word in context, a task referred to as word-sense disambiguation.
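As an illustration, here is a minimal sketch using the simplified Lesk algorithm as implemented in NLTK (an assumption: NLTK and its WordNet corpus are installed; the example sentence is invented). Lesk picks the WordNet sense whose dictionary gloss overlaps most with the surrounding words, so results are only as good as that heuristic:

```python
import nltk
from nltk.wsd import lesk

# One-time download of the WordNet data the algorithm relies on:
# nltk.download("wordnet")

context = "I deposited the money at the bank on the corner".split()
sense = lesk(context, "bank", pos="n")  # choose a noun sense of "bank"
print(sense, "-", sense.definition() if sense else "no sense found")
```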
The use of multi-defined words requires the author or speaker to clarify their context, and sometimes elaborate on their specific intended meaning (in which case, a less ambiguous term should have been used). The goal of clear concise communication is that the receiver(s) have no misunderstanding about what was meant to be conveyed. An exception to this could include a politician whose "weasel words" and obfuscation are necessary to gain support from multiple constituents with mutually exclusive conflicting desires from his or her candidate of choice. Ambiguity is a powerful tool of political science.
More problematic are words whose multiple meanings express closely related concepts. "Good", for example, can mean "useful" or "functional" (That's a good hammer), "exemplary" (She's a good student), "pleasing" (This is good soup), "moral" (a good person versus the lesson to be learned from a story), "righteous", etc. "I have a good daughter" is not clear about which sense is intended. The various ways to apply prefixes and suffixes can also create ambiguity ("unlockable" can mean "capable of being opened" or "impossible to lock").
Semantic and syntactic ambiguity
Semantic ambiguity occurs when a word, phrase or sentence, taken out of context, has more than one interpretation. In "We saw her duck" (example due to Richard Nordquist), the words "her duck" can refer either
to the person's bird (the noun "duck", modified by the possessive pronoun "her"), or
to a motion she made (the verb "duck", the subject of which is the objective pronoun "her", object of the verb "saw").
Syntactic ambiguity arises when a sentence can have two (or more) different meanings because of the structure of the sentence—its syntax. This is often due to a modifying expression, such as a prepositional phrase, the application of which is unclear. "He ate the cookies on the couch", for example, could mean that he ate those cookies that were on the couch (as opposed to those that were on the table), or it could mean that he was sitting on the couch when he ate the cookies. "To get in, you will need an entrance fee of $10 or your voucher and your driver's license." This could mean that you need EITHER ten dollars OR BOTH your voucher and your license. Or it could mean that you need your license AND you need EITHER ten dollars OR a voucher. Only rewriting the sentence or placing appropriate punctuation can resolve a syntactic ambiguity.
For the notion of, and theoretic results about, syntactic ambiguity in artificial, formal languages (such as computer programming languages), see Ambiguous grammar.
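As a concrete sketch of the formal-language view, the following toy context-free grammar in NLTK (the grammar itself is invented for illustration) assigns the couch sentence above two parse trees, one per attachment of the prepositional phrase:

```python
import nltk

# The PP "on the couch" can attach to the verb phrase ("ate ... on the couch")
# or to the noun phrase ("the cookies on the couch"), giving two parses.
grammar = nltk.CFG.fromstring("""
S -> NP VP
VP -> V NP | VP PP
NP -> Pron | Det N | NP PP
PP -> P NP
Pron -> 'he'
V -> 'ate'
Det -> 'the'
N -> 'cookies' | 'couch'
P -> 'on'
""")

for tree in nltk.ChartParser(grammar).parse("he ate the cookies on the couch".split()):
    print(tree)  # prints two distinct trees, one per attachment
```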
Usually, semantic and syntactic ambiguity go hand in hand. The sentence "We saw her duck" is also syntactically ambiguous. Conversely, a sentence like "He ate the cookies on the couch" is also semantically ambiguous. Rarely, but occasionally, the different parsings of a syntactically ambiguous phrase result in the same meaning. For example, the command "Cook, cook!" can be parsed as "Cook (noun used as vocative), cook (imperative verb form)!", but also as "Cook (imperative verb form), cook (noun used as vocative)!". It is more common that a syntactically unambiguous phrase has a semantic ambiguity; for example, the lexical ambiguity in "Your boss is a funny man" is purely semantic, leading to the response "Funny ha-ha or funny peculiar?"
Spoken language can contain many more types of ambiguities which are called phonological ambiguities, where there is more than one way to compose a set of sounds into words. For example, "ice cream" and "I scream". Such ambiguity is generally resolved according to the context. A mishearing of such, based on incorrectly resolved ambiguity, is called a mondegreen.
Philosophy
Philosophers (and other users of logic) spend a lot of time and effort searching for and removing (or intentionally adding) ambiguity in arguments because it can lead to incorrect conclusions and can be used to deliberately conceal bad arguments. For example, a politician might say, "I oppose taxes which hinder economic growth", an example of a glittering generality. Some will think they oppose taxes in general because they hinder economic growth. Others may think they oppose only those taxes that they believe will hinder economic growth. In writing, the sentence can be rewritten to reduce possible misinterpretation, either by adding a comma after "taxes" (to convey the first sense) or by changing "which" to "that" (to convey the second sense) or by rewriting it in other ways. The devious politician hopes that each constituent will interpret the statement in the most desirable way, and think the politician supports everyone's opinion. However, the opposite can also be true—an opponent can turn a positive statement into a bad one if the speaker uses ambiguity (intentionally or not). The logical fallacies of amphiboly and equivocation rely heavily on the use of ambiguous words and phrases.
In continental philosophy (particularly phenomenology and existentialism), there is much greater tolerance of ambiguity, as it is generally seen as an integral part of the human condition. Martin Heidegger argued that the relation between the subject and object is ambiguous, as is the relation of mind and body, and part and whole. In Heidegger's phenomenology, Dasein is always in a meaningful world, but there is always an underlying background for every instance of signification. Thus, although some things may be certain, they have little to do with Dasein's sense of care and existential anxiety, e.g., in the face of death. In calling his work Being and Nothingness an "essay in phenomenological ontology" Jean-Paul Sartre follows Heidegger in defining the human essence as ambiguous, or relating fundamentally to such ambiguity. Simone de Beauvoir tries to base an ethics on Heidegger's and Sartre's writings (The Ethics of Ambiguity), where she highlights the need to grapple with ambiguity: "as long as there have been philosophers and they have thought, most of them have tried to mask it ... And the ethics which they have proposed to their disciples has always pursued the same goal. It has been a matter of eliminating the ambiguity by making oneself pure inwardness or pure externality, by escaping from the sensible world or being engulfed by it, by yielding to eternity or enclosing oneself in the pure moment." Ethics cannot be based on the authoritative certainty given by mathematics and logic, or prescribed directly from the empirical findings of science. She states: "Since we do not succeed in fleeing it, let us, therefore, try to look the truth in the face. Let us try to assume our fundamental ambiguity. It is in the knowledge of the genuine conditions of our life that we must draw our strength to live and our reason for acting". Other continental philosophers suggest that concepts such as life, nature, and sex are ambiguous. Corey Anton has argued that we cannot be certain what is separate from or unified with something else: language, he asserts, divides what is not, in fact, separate. Following Ernest Becker, he argues that the desire to 'authoritatively disambiguate' the world and existence has led to numerous ideologies and historical events such as genocide. On this basis, he argues that ethics must focus on 'dialectically integrating opposites' and balancing tension, rather than seeking a priori validation or certainty. Like the existentialists and phenomenologists, he sees the ambiguity of life as the basis of creativity.
Literature and rhetoric
In literature and rhetoric, ambiguity can be a useful tool. Groucho Marx's classic joke depends on a grammatical ambiguity for its humor, for example: "Last night I shot an elephant in my pajamas. How he got in my pajamas, I'll never know". Songs and poetry often rely on ambiguous words for artistic effect, as in the song title "Don't It Make My Brown Eyes Blue" (where "blue" can refer to the color, or to sadness).
In narrative, ambiguity can be introduced in several ways: through motive, plot, or character. F. Scott Fitzgerald uses the latter type of ambiguity with notable effect in his novel The Great Gatsby.
Mathematical notation
Mathematical notation is a helpful tool that eliminates many of the misunderstandings associated with natural language in physics and other sciences. Nonetheless, ambiguities of lexical, syntactic, and semantic origin persist in mathematical notation.
Names of functions
The ambiguity in the style of writing a function should not be confused with a multivalued function, which can (and should) be defined in a deterministic and unambiguous way. Several special functions still do not have established notations. Usually, conversion to another notation requires scaling the argument or the resulting value; sometimes the same name is used for the function in different conventions, causing confusion. Examples of such under-established functions (see the sketch after this list):
Sinc function
Elliptic integral of the third kind; translating an elliptic integral from MAPLE to Mathematica, one should replace the second argument by its square, see Talk:Elliptic integral#List of notations; when dealing with complex values, this may cause problems.
Exponential integral
Hermite polynomial
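A small illustration for the sinc case, assuming NumPy is available: numpy adopts the normalized convention sinc(x) = sin(πx)/(πx), while many physics texts use sin(x)/x, so converting between the two requires rescaling the argument, exactly as the list above warns:

```python
import numpy as np

x = 2.0
print(np.sinc(x))          # normalized convention: sin(pi*x)/(pi*x)
print(np.sin(x) / x)       # unnormalized convention: sin(x)/x
print(np.sinc(x / np.pi))  # rescaling the argument converts between the two
```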
Expressions
Ambiguous expressions often appear in physical and mathematical texts.
It is common practice to omit multiplication signs in mathematical expressions. Also, it is common to give the same name to a variable and a function, for example, f = f(x). Then, if one sees f = f(y + 1), there is no way to distinguish whether it means f multiplied by (y + 1) or the function f evaluated at argument equal to y + 1. In each case of the use of such notations, the reader is supposed to be able to perform the deduction and reveal the true meaning.
Creators of algorithmic languages try to avoid ambiguities. Many algorithmic languages (such as C++ and Fortran) require the character * as the symbol of multiplication. The Wolfram Language used in Mathematica allows the user to omit the multiplication symbol, but requires square brackets to indicate the argument of a function; square brackets are not allowed for grouping of expressions. Fortran, in addition, does not allow the use of the same name (identifier) for different objects, for example, for a function and a variable; in particular, the expression f=f(x) is qualified as an error.
The order of operations may depend on the context. In most programming languages, the operations of division and multiplication have equal priority and are executed from left to right. Until the last century, many editorials assumed that multiplication is performed first, so that, for example, a/bc is interpreted as a/(bc); in this case, the insertion of parentheses is required when translating such formulas to an algorithmic language. In addition, it is common to write an argument of a function without parentheses, which may also lead to ambiguity.
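A minimal Python sketch of that left-to-right convention (the numeric values are arbitrary); the parenthesized form is required to recover the a/(bc) reading of a/bc:

```python
a, b, c = 12.0, 3.0, 2.0
print(a / b * c)    # 8.0: equal precedence, evaluated left to right as (a/b)*c
print(a / (b * c))  # 2.0: explicit parentheses give the a/(bc) reading
```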
In the scientific journal style, one uses roman letters to denote elementary functions, whereas variables are written using italics.
For example, in mathematical journals the expression sin does not denote the sine function, but the product of the three variables s, i and n, although in the informal notation of a slide presentation it may stand for the sine function.
Commas in multi-component subscripts and superscripts are sometimes omitted; this is also potentially ambiguous notation.
For example, in the notation T_mnk, the reader can only infer from the context whether it means a single-index object, taken with the subscript equal to the product of the variables m, n and k, or a trivalent tensor.
Examples of potentially confusing ambiguous mathematical expressions
An expression such as sin²α/2 can be understood to mean either sin²(α/2) or (sin²α)/2. Often the author's intention can be understood from the context, in cases where only one of the two makes sense, but an ambiguity like this should be avoided, for example by writing sin²(α/2) or ½ sin²α.
The expression sin⁻¹α means arcsin(α) in several texts, though it might be thought to mean (sin α)⁻¹, since sinⁿα commonly means (sin α)ⁿ. Conversely, sin²α might seem to mean sin(sin α), as this exponentiation notation usually denotes function iteration: in general, f²(x) means f(f(x)). However, for trigonometric and hyperbolic functions, this notation conventionally means exponentiation of the result of function application.
The expression a/2b can be interpreted as meaning (a/2)b; however, it is more commonly understood to mean a/(2b).
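When typesetting, the ambiguity in the first example can be removed explicitly; a minimal LaTeX sketch of the two unambiguous renderings:

```latex
% Unambiguous renderings of the two readings of sin^2(alpha)/2:
\[
  \sin^{2}\left(\frac{\alpha}{2}\right)
  \qquad \mathrm{or} \qquad
  \frac{1}{2}\sin^{2}\alpha
\]
```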
Notations in quantum optics and quantum mechanics
It is common to define the coherent states in quantum optics with |α⟩ and states with a fixed number of photons with |n⟩. Then there is an "unwritten rule": the state is coherent if there are more Greek characters than Latin characters in the argument, and a photon state if the Latin characters dominate. The ambiguity becomes even worse if |x⟩ is used for the states with a certain value of the coordinate and |p⟩ means the state with a certain value of the momentum, as may be the case in books on quantum mechanics. Such ambiguities easily lead to confusion, especially if normalized dimensionless variables are used. The expression |1⟩ may mean a state with a single photon, or the coherent state with mean amplitude equal to 1, or a state with momentum equal to unity, and so on. The reader is supposed to guess from the context.
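Software makes the convention explicit where the bra-ket notation does not; a small sketch assuming the QuTiP library is installed:

```python
from qutip import basis, coherent, fidelity

N = 20                      # truncated Fock-space dimension
fock_one = basis(N, 1)      # |1> read as a one-photon (Fock) state
coh_one = coherent(N, 1.0)  # |1> read as a coherent state of amplitude 1
print(fidelity(fock_one, coh_one))  # well below 1: the two readings differ
```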
Ambiguous terms in physics and mathematics
Some physical quantities do not yet have established notations; their value (and sometimes even their dimension, as in the case of the Einstein coefficients) depends on the system of notations. Many terms are ambiguous, and each use of an ambiguous term should be preceded by a definition suitable for the specific case. As Ludwig Wittgenstein states in Tractatus Logico-Philosophicus: "... Only in the context of a proposition has a name meaning."
A highly confusing term is gain. For example, the sentence "the gain of a system should be doubled", without context, means close to nothing.
It may mean that the ratio of the output voltage of an electric circuit to the input voltage should be doubled.
It may mean that the ratio of the output power of an electric or optical circuit to the input power should be doubled.
It may mean that the gain of the laser medium should be doubled, for example, doubling the population of the upper laser level in a quasi-two level system (assuming negligible absorption of the ground-state).
The term intensity is ambiguous when applied to light. The term can refer to any of irradiance, luminous intensity, radiant intensity, or radiance, depending on the background of the person using the term.
Also, confusion may arise from the use of atomic percent as a measure of the concentration of a dopant, or of resolution of an imaging system as a measure of the size of the smallest detail that can still be resolved against a background of statistical noise. See also Accuracy and precision.
The Berry paradox arises as a result of systematic ambiguity in the meaning of terms such as "definable" or "nameable". Terms of this kind give rise to vicious circle fallacies. Other terms with this type of ambiguity are: satisfiable, true, false, function, property, class, relation, cardinal, and ordinal.
Mathematical interpretation of ambiguity
In mathematics and logic, ambiguity can be considered to be an instance of the logical concept of underdetermination—for example, X = Y leaves open what the value of X is—while its opposite is a self-contradiction, also called inconsistency, paradoxicalness, or oxymoron, or in mathematics an inconsistent system—such as X = 2, X = 3, which has no solution.
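A small sketch of the two notions with SymPy (assuming the library is available): an underdetermined equation admits a family of solutions, while an inconsistent system admits none:

```python
from sympy import Eq, solve, symbols

X, Y = symbols("X Y")
print(solve(Eq(X, Y), X))           # [Y]: underdetermined, X left open
print(solve([Eq(X, 2), Eq(X, 3)]))  # []: inconsistent system, no solution
```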
Logical ambiguity and self-contradiction is analogous to visual ambiguity and impossible objects, such as the Necker cube and impossible cube, or many of the drawings of M. C. Escher.
Constructed language
Some languages have been created with the intention of avoiding ambiguity, especially lexical ambiguity. Lojban and Loglan are two related languages which have been created for this purpose, focusing also on syntactic ambiguity. The languages can be both spoken and written. They are intended to provide greater technical precision than natural languages, although historically such attempts at language improvement have been criticized. Languages composed from many diverse sources contain much ambiguity and inconsistency. The many exceptions to syntax and semantic rules are time-consuming and difficult to learn.
Biology
In structural biology, ambiguity has been recognized as a problem for studying protein conformations. The analysis of a protein three-dimensional structure consists in dividing the macromolecule into subunits called domains. The difficulty of this task arises from the fact that different definitions of what a domain is can be used (e.g. folding autonomy, function, thermodynamic stability, or domain motions), which sometimes results in a single protein having different—yet equally valid—domain assignments.
Christianity and Judaism
Christianity and Judaism employ the concept of paradox synonymously with "ambiguity". Many Christians and Jews endorse Rudolf Otto's description of the sacred as 'mysterium tremendum et fascinans', the awe-inspiring mystery which fascinates humans. The apocryphal Book of Judith is noted for the "ingenious ambiguity" expressed by its heroine; for example, she says to the villain of the story, Holofernes, "my lord will not fail to achieve his purposes", without specifying whether my lord refers to the villain or to God.
The orthodox Catholic writer G. K. Chesterton regularly employed paradox to tease out the meanings in common concepts which he found ambiguous or to reveal meaning often overlooked or forgotten in common phrases: the title of one of his most famous books, Orthodoxy (1908), itself employed such a paradox.
Music
In music, pieces or sections which confound expectations and may be or are interpreted simultaneously in different ways are ambiguous, such as some polytonality, polymeter, other ambiguous meters or rhythms, and ambiguous phrasing, or (Stein 2005, p. 79) any aspect of music. The music of Africa is often purposely ambiguous. To quote Sir Donald Francis Tovey (1935, p. 195), "Theorists are apt to vex themselves with vain efforts to remove uncertainty just where it has a high aesthetic value."
Visual art
In visual art, certain images are visually ambiguous, such as the Necker cube, which can be interpreted in two ways. Perceptions of such objects remain stable for a time, then may flip, a phenomenon called multistable perception.
The opposite of such ambiguous images are impossible objects.
Pictures or photographs may also be ambiguous at the semantic level: the visual image is unambiguous, but the meaning and narrative may be ambiguous: is a certain facial expression one of excitement or fear, for instance?
Social psychology and the bystander effect
In social psychology, ambiguity is a factor used in determining people's responses to various situations. High levels of ambiguity in an emergency (e.g. an unconscious man lying on a park bench) make witnesses less likely to offer any sort of assistance, due to the fear that they may have misinterpreted the situation and acted unnecessarily. Alternatively, non-ambiguous emergencies (e.g. an injured person verbally asking for help) elicit more consistent intervention and assistance. With regard to the bystander effect, studies have shown that emergencies deemed ambiguous trigger the appearance of the classic bystander effect (wherein more witnesses decrease the likelihood of any of them helping) far more than non-ambiguous emergencies.
Computer science
In computer science, the SI prefixes kilo-, mega- and giga- were historically used in certain contexts to mean the first three powers of 1024 (1024, 1024² and 1024³), contrary to the metric system, in which these prefixes unambiguously mean one thousand, one million, and one billion. This usage is particularly prevalent with electronic memory devices (e.g. DRAM) addressed directly by a binary machine register, where a decimal interpretation makes no practical sense.
Subsequently, the Ki, Mi, and Gi prefixes were introduced so that binary prefixes could be written explicitly, also rendering k, M, and G unambiguous in texts conforming to the new standard—but this led to a new ambiguity in engineering documents lacking any outward trace of the binary prefixes (which would necessarily indicate the new style): whether the usage of k, M, and G remains ambiguous (old style) or not (new style). For example, 1 M (where M is ambiguously 1,000,000 or 1,048,576) is less uncertain than the engineering value 1.0e6 (defined to designate the interval 950,000 to 1,050,000). As non-volatile storage devices began to exceed 1 GB in capacity (where the ambiguity begins to routinely impact the second significant digit), GB and TB almost always mean 10⁹ and 10¹² bytes.
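To make the two conventions concrete, the following sketch (Python; the helper function and unit tables are illustrative, not a standard library API) renders one and the same byte count under both interpretations:

# Sketch: decimal (SI) vs binary (IEC) interpretation of storage prefixes.
# The unit factors are standard; the function name is illustrative only.

SI_UNITS = [("GB", 10**9), ("MB", 10**6), ("kB", 10**3)]
IEC_UNITS = [("GiB", 2**30), ("MiB", 2**20), ("KiB", 2**10)]

def format_bytes(n, units):
    """Render n bytes using the largest unit in the given table that fits."""
    for suffix, factor in units:
        if n >= factor:
            return f"{n / factor:.3f} {suffix}"
    return f"{n} B"

n = 1_000_000_000                  # a "1 GB" drive as marketed (decimal)
print(format_bytes(n, SI_UNITS))   # 1.000 GB
print(format_bytes(n, IEC_UNITS))  # 953.674 MiB -- the familiar "missing" space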
See also
References
External links
Collection of Ambiguous or Inconsistent/Incomplete Statements
Leaving out ambiguities when writing
Semantics
Mathematical notation
Concepts in epistemology
Barriers to critical thinking
Formal semantics (natural language)
|
https://en.wikipedia.org/wiki/Aardvark
|
The aardvark (Orycteropus afer) is a medium-sized, burrowing, nocturnal mammal native to Africa. It is the only living species of the order Tubulidentata, although other prehistoric species and genera of Tubulidentata are known. Unlike most other insectivores, it has a long snout, similar to that of a pig, which is used to sniff out food.
The aardvark is found over much of the southern two-thirds of the African continent, avoiding areas that are mainly rocky. A nocturnal feeder, it subsists on ants and termites, which it will dig out of their hills using its sharp claws and powerful legs. It also digs to create burrows in which to live and rear its young. The animal is listed as "least concern" by the IUCN, although its numbers are decreasing. Aardvarks are afrotheres, a clade which also includes elephants, manatees, and hyraxes.
Name and taxonomy
Name
The aardvark is sometimes colloquially called the "African ant bear", "anteater" (not to be confused with the South American anteaters), or the "Cape anteater" after the Cape of Good Hope. The name "aardvark" is Afrikaans; it comes from earlier Afrikaans and means "earth pig" or "ground pig" (aarde: "earth"; vark: "pig" or "young pig"), because of the animal's burrowing habits. The name Orycteropus means "burrowing foot", and the name afer refers to Africa. The name of the aardvark's order, Tubulidentata, comes from its tubule-style teeth.
Taxonomy
The aardvark is not closely related to the pig; rather, it is the sole extant representative of the obscure mammalian order Tubulidentata, in which it is usually considered to form one variable species of the genus Orycteropus, the sole surviving genus in the family Orycteropodidae. The aardvark is not closely related to the South American anteater, despite sharing some characteristics and a superficial resemblance. The similarities are the outcome of convergent evolution. The closest living relatives of the aardvark are the elephant shrews, tenrecs, and golden moles. Along with sirenians, hyraxes, elephants, and their extinct relatives, these animals form the superorder Afrotheria. Studies of the brain have shown similarities with Condylarthra.
Evolutionary history
Based on fossils, Bryan Patterson has concluded that early relatives of the aardvark appeared in Africa around the end of the Paleocene. The ptolemaiidans, a mysterious clade of mammals with uncertain affinities, may actually be stem-aardvarks, either as a sister clade to Tubulidentata or as a grade leading to true tubulidentates.
The first unambiguous tubulidentate was probably Myorycteropus africanus from Kenyan Miocene deposits. The earliest example from the genus Orycteropus was Orycteropus mauritanicus, found in Algeria in deposits from the middle Miocene, with an equally old version found in Kenya. Fossils from the aardvark have been dated to 5 million years, and have been located throughout Europe and the Near East.
The mysterious Pleistocene Plesiorycteropus from Madagascar was originally thought to be a tubulidentate that was descended from ancestors that entered the island during the Eocene. However, a number of subtle anatomical differences coupled with recent molecular evidence now lead researchers to believe that Plesiorycteropus is a relative of golden moles and tenrecs that achieved an aardvark-like appearance and ecological niche through convergent evolution.
Subspecies
The aardvark has seventeen poorly defined subspecies listed:
Orycteropus afer afer (Southern aardvark)
O. a. adametzi Grote, 1921 (Western aardvark)
O. a. aethiopicus Sundevall, 1843
O. a. angolensis Zukowsky & Haltenorth, 1957
O. a. erikssoni Lönnberg, 1906
O. a. faradjius Hatt, 1932
O. a. haussanus Matschie, 1900
O. a. kordofanicus Rothschild, 1927
O. a. lademanni Grote, 1911
O. a. leptodon Hirst, 1906
O. a. matschiei Grote, 1921
O. a. observandus Grote, 1921
O. a. ruvanensis Grote, 1921
O. a. senegalensis Lesson, 1840
O. a. somalicus Lydekker, 1908
O. a. wardi Lydekker, 1908
O. a. wertheri Matschie, 1898 (Eastern aardvark)
The 1911 Encyclopædia Britannica also mentions O. a. capensis or Cape ant-bear from South Africa.
Description
The aardvark is vaguely pig-like in appearance. Its body is stout with a prominently arched back and is sparsely covered with coarse hairs. The limbs are of moderate length, with the rear legs being longer than the forelegs. The front feet have lost the pollex (or 'thumb'), resulting in four toes, while the rear feet have all five toes. Each toe bears a large, robust nail which is somewhat flattened and shovel-like, and appears to be intermediate between a claw and a hoof. Whereas the aardvark is considered digitigrade, it appears at times to be plantigrade. This confusion happens because when it squats it stands on its soles. A contributing characteristic to the burrow digging capabilities of aardvarks is an endosteal tissue called compacted coarse cancellous bone (CCCB). The stress and strain resistance provided by CCCB allows aardvarks to create their burrows, ultimately leading to a favourable environment for plants and a variety of animals.

An aardvark's weight is typically between . An aardvark's length is usually between , and can reach lengths of when its tail (which can be up to ) is taken into account. It is tall at the shoulder, and has a girth of about . It is the largest member of the proposed clade Afroinsectiphilia. The aardvark is pale yellowish-grey in colour and often stained reddish-brown by soil. The aardvark's coat is thin, and the animal's primary protection is its tough skin. Its hair is short on its head and tail; however, its legs tend to have longer hair. The hair on the majority of its body is grouped in clusters of 3–4 hairs. The hair surrounding its nostrils is dense to help filter particulate matter out as it digs. Its tail is very thick at the base and gradually tapers.
Head
The greatly elongated head is set on a short, thick neck, and the end of the snout bears a disc, which houses the nostrils. It contains a thin but complete zygomatic arch. The head of the aardvark contains many unique and different features. One of the most distinctive characteristics of the Tubulidentata is their teeth. Instead of having a pulp cavity, each tooth has a cluster of thin, hexagonal, upright, parallel tubes of vasodentin (a modified form of dentine), with individual pulp canals, held together by cementum. The number of columns is dependent on the size of the tooth, with the largest having about 1,500. The teeth have no enamel coating and are worn away and regrow continuously. The aardvark is born with conventional incisors and canines at the front of the jaw, which fall out and are not replaced. Adult aardvarks have only cheek teeth at the back of the jaw, giving a much-reduced dental formula. These remaining teeth are peg-like and rootless and are of unique composition. The teeth consist of 14 upper and 12 lower jaw molars. The nasal area of the aardvark is another unique area, as it contains ten nasal conchae, more than any other placental mammal.
The sides of the nostrils are thick with hair. The tip of the snout is highly mobile and is moved by modified mimetic muscles. The fleshy dividing tissue between its nostrils probably has sensory functions, but it is uncertain whether they are olfactory or vibratory in nature. Its nose is made up of more turbinate bones than any other mammal, with between 9 and 11, compared to dogs with 4 to 5. With a large quantity of turbinate bones, the aardvark has more space for the moist epithelium, which is the location of the olfactory bulb. The nose contains nine olfactory bulbs, more than any other mammal. Its keen sense of smell is not just from the quantity of bulbs in the nose but also in the development of the brain, as its olfactory lobe is very developed. The snout resembles an elongated pig snout. The mouth is small and tubular, typical of species that feed on ants and termites. The aardvark has a long, thin, snakelike, protruding tongue (as much as long) and elaborate structures supporting a keen sense of smell. The ears, which are very effective, are disproportionately long, about long. The eyes are small for its head, and consist only of rods.
Digestive system
The aardvark's stomach has a muscular pyloric area that acts as a gizzard to grind swallowed food up, thereby rendering chewing unnecessary. Its cecum is large. Both sexes emit a strong smelling secretion from an anal gland. Its salivary glands are highly developed and almost completely ring the neck; their output is what causes the tongue to maintain its tackiness. The female has two pairs of teats in the inguinal region.
Genetically speaking, the aardvark is a living fossil, as its chromosomes are highly conserved, reflecting much of the early eutherian arrangement before the divergence of the major modern taxa.
Habitat and range
Aardvarks are found in sub-Saharan Africa, where suitable habitat (savannas, grasslands, woodlands and bushland) and food (i.e., ants and termites) is available. They spend the daylight hours in dark burrows to avoid the heat of the day. The only major habitat that they are not present in is swamp forest, as the high water table precludes digging to a sufficient depth. They also avoid terrain rocky enough to cause problems with digging. They have been documented as high as in Ethiopia. They are present throughout sub-Saharan Africa all the way to South Africa with few exceptions including the coastal areas of Namibia, Ivory Coast, and Ghana. They are not found in Madagascar.
Ecology and behaviour
Aardvarks live for up to 23 years in captivity. The aardvark's keen hearing warns it of predators: lions, leopards, cheetahs, African wild dogs, hyenas, and pythons. Some humans also hunt aardvarks for meat. Aardvarks can dig fast or run in zigzag fashion to elude enemies, but if all else fails, they will strike with their claws, tail and shoulders, sometimes flipping onto their backs and lying motionless except to lash out with all four feet. They are capable of causing substantial damage to unprotected areas of an attacker. They will also dig to escape; when pressed, they can dig extremely quickly.
Feeding
The aardvark is nocturnal and is a solitary creature that feeds almost exclusively on ants and termites (myrmecophagy); the only fruit eaten by aardvarks is the aardvark cucumber. In fact, the cucumber and the aardvark have a symbiotic relationship: the aardvark eats the subterranean fruit, then defecates the seeds near its burrows, where they grow rapidly due to the loose soil and fertile nature of the area. The time spent in the intestine of the aardvark helps the fertility of the seed, and the fruit provides needed moisture for the aardvark. Aardvarks avoid eating the African driver ant and red ants. Due to their stringent diet requirements, they require a large range to survive. An aardvark emerges from its burrow in the late afternoon or shortly after sunset, and forages over a considerable home range encompassing . While foraging for food, the aardvark will keep its nose to the ground and its ears pointed forward, which indicates that both smell and hearing are involved in the search for food. They zig-zag as they forage and will usually not repeat a route for 5–8 days, as they appear to allow time for the termite nests to recover before feeding on them again.
During a foraging period, they will stop to dig a "V" shaped trench with their forefeet and then sniff it profusely as a means to explore their location. When a concentration of ants or termites is detected, the aardvark digs into it with its powerful front legs, keeping its long ears upright to listen for predators, and takes up an astonishing number of insects with its long, sticky tongue—as many as 50,000 in one night have been recorded. Its claws enable it to dig through the extremely hard crust of a termite or ant mound quickly. It avoids inhaling the dust by sealing the nostrils. When successful, the aardvark's long (up to ) tongue licks up the insects; the termites' biting, or the ants' stinging attacks are rendered futile by the tough skin. After an aardvark visit at a termite mound, other animals will visit to pick up all the leftovers. Termite mounds alone do not provide enough food for the aardvark, so they look for termites that are on the move. When these insects move, they can form columns long and these tend to provide easy pickings with little effort exerted by the aardvark. These columns are more common in areas of livestock or other hoofed animals. The trampled grass and dung attract termites from the Odontotermes, Microtermes, and Pseudacanthotermes genera.
On a nightly basis they tend to be more active during the first portion of the night (roughly the four hours between 8:00 p.m. and 12:00 a.m.); however, they do not seem to prefer bright nights over dark ones. During adverse weather or if disturbed they will retreat to their burrow systems. They cover between per night; however, some studies have shown that they may traverse as far as in a night.
Aardvarks shift their circadian rhythms to more diurnal activity patterns in response to a reduced food supply. This survival tactic may signify an increased risk of imminent mortality.
Vocalisation
The aardvark is a rather quiet animal. However, it does make soft grunting sounds as it forages and loud grunts as it makes for its tunnel entrance. It makes a bleating sound if frightened. When it is threatened it will make for one of its burrows. If one is not close it will dig a new one rapidly. This new one will be short and require the aardvark to back out when the coast is clear.
Movement
The aardvark is known to be a good swimmer and has been witnessed successfully swimming in strong currents. It can dig a yard of tunnel in about five minutes, but otherwise moves fairly slowly.
When leaving the burrow at night, they pause at the entrance for about ten minutes, sniffing and listening. After this period of watchfulness, it will bound out and within seconds it will be away. It will then pause, prick its ears, twisting its head to listen, then jump and move off to start foraging.
Aside from digging out ants and termites, the aardvark also excavates burrows in which to live, which generally fall into one of three categories: burrows made while foraging, refuge and resting locations, and permanent homes. Temporary sites are scattered around the home range and are used as refuges, while the main burrow is also used for breeding. Main burrows can be deep and extensive, have several entrances and can be as long as . These burrows can be large enough for a person to enter. The aardvark changes the layout of its home burrow regularly, and periodically moves on and makes a new one. The old burrows are an important part of the African wildlife scene. Once vacated, they are inhabited by smaller animals like the African wild dog, ant-eating chat, Nycteris thebaica and warthogs. Other animals that use them are hares, mongooses, hyenas, owls, pythons, and lizards. Without these refuges many animals would die during wildfire season. Only mothers and young share burrows; the aardvark otherwise lives as a solitary creature or in small family groups. If attacked in the tunnel, it will escape by digging out of the tunnel, thereby placing the fresh fill between it and its predator, or, if it decides to fight, it will roll onto its back and attack with its claws. The aardvark has been known to sleep in a recently excavated ant nest, which also serves as protection from its predators.
Reproduction
Aardvarks pair only during the breeding season; after a gestation period of seven months, one cub weighing around is born during May–July. When born, the young has flaccid ears and many wrinkles. When nursing, it will nurse off each teat in succession. After two weeks, the folds of skin disappear and after three, the ears can be held upright. After 5–6 weeks, body hair starts growing. It is able to leave the burrow to accompany its mother after only two weeks and eats termites at 9 weeks, and is weaned between three months and 16 weeks. At six months of age, it is able to dig its own burrows, but it will often remain with the mother until the next mating season, and is sexually mature from approximately two years of age.
Conservation
Aardvarks were thought to have declining numbers; however, this may be because they are not readily seen. There are no definitive counts because of their nocturnal and secretive habits; however, their numbers seem to be stable overall. They are not considered common anywhere in Africa, but due to their large range, they maintain sufficient numbers. There may be a slight decrease in numbers in eastern, northern, and western Africa. Southern African numbers are not decreasing. The aardvark has received an official designation of least concern from the IUCN. However, it is a species in a precarious situation, as it is so dependent on such specific food; if a problem arises with the abundance of termites, the species as a whole would be affected drastically.
Recent research suggests that aardvarks may be particularly vulnerable to alterations in temperature caused by climate change. Droughts negatively impact the availability of termites and ants, which comprise the bulk of an aardvark's diet. Nocturnal species faced with resource scarcity may increase their diurnal activity to spare the energy costs of staying warm at night, but this comes at the cost of withstanding high temperatures during the day. A study on aardvarks in the Kalahari Desert saw that five out of six aardvarks being studied perished following a drought. Aardvarks that survive droughts can take long periods of time to regain health and optimal thermoregulatory physiology, reducing the reproductive potential of the species.
Aardvarks handle captivity well. The first zoo to have one was London Zoo in 1869, which had an animal from South Africa.
Mythology and popular culture
In African folklore, the aardvark is much admired because of its diligent quest for food and its fearless response to soldier ants. Hausa magicians make a charm from the heart, skin, forehead, and nails of the aardvark, which they then proceed to pound together with the root of a certain tree. Wrapped in a piece of skin and worn on the chest, the charm is said to give the owner the ability to pass through walls or roofs at night. The charm is said to be used by burglars and those seeking to visit young girls without their parents' permission. Also, some tribes, such as the Margbetu, Ayanda, and Logo, will use aardvark teeth to make bracelets, which are regarded as good luck charms. The meat, which has a resemblance to pork, is eaten in certain cultures. In the mythology of the Dagbon people of Ghana, the aardvark is believed to possess superpowers. The Dagombas believe this animal can transfigure into and interact with humans.
The ancient Egyptian god Set is usually depicted with the head of an unidentified animal, whose similarity to an aardvark has been noted in scholarship.
The titular character and his family from Arthur, an animated television series for children based on a book series and produced by WGBH, shown in more than 180 countries, are aardvarks. In the first book of the series, Arthur's Nose (1976), he has a long, aardvark-like nose, but in later books, his face becomes more rounded.
Otis the Aardvark was a puppet character used on Children's BBC programming.
An aardvark features as the antagonist in the cartoon The Ant and the Aardvark as well as in the Canadian animated series The Raccoons.
The supersonic fighter-bomber F-111/FB-111 was nicknamed the Aardvark because its long nose resembled the animal; the name also suited its nocturnal, low-level missions employing ordnance that could penetrate deep into the ground. In the US Navy, the squadron VF-114 was nicknamed the Aardvarks, flying F-4s and then F-14s. The squadron mascot was adapted from the animal in the comic strip B.C., which the F-4 was said to resemble.
Cerebus the Aardvark is a 300-issue comic book series by Dave Sim.
Footnotes
References
External links
IUCN/SSC Afrotheria Specialist Group
A YouTube video introducing the Bronx Zoo's aardvarks
"The Biology of the Aardvark (Orycteropus afer)" a diploma thesis (without images)
"The Biology of the Aardvark" (Orycteropus afer)" the thesis with images
Orycteropus
Mammals of Africa
Xerophiles
Myrmecophagous mammals
Mammals described in 1766
Extant Zanclean first appearances
Taxa named by Peter Simon Pallas
Afrikaans words and phrases
|
https://en.wikipedia.org/wiki/Adobe
|
Adobe is a building material made from earth and organic materials; the word adobe is Spanish for mudbrick. In some English-speaking regions of Spanish heritage, such as the Southwestern United States, the term is used to refer to any kind of earthen construction, or to various architectural styles like Pueblo Revival or Territorial Revival. Most adobe buildings are similar in appearance to cob and rammed earth buildings. Adobe is among the earliest building materials, and is used throughout the world.
Adobe architecture has been dated to before 5,100 B.C.
Description
Adobe bricks are rectangular prisms small enough that they can quickly air dry individually without cracking. They can be subsequently assembled, with the application of adobe mud to bond the individual bricks into a structure. There is no standard size, with substantial variations over the years and in different regions. In some areas a popular size measured weighing about ; in other contexts the size is weighing about . The maximum sizes can reach up to ; above this weight it becomes difficult to move the pieces, and it is preferred to ram the mud in situ, resulting in a different typology known as rammed earth.
Strength
In dry climates, adobe structures are extremely durable, and account for some of the oldest existing buildings in the world. Adobe buildings offer significant advantages due to their greater thermal mass, but they are known to be particularly susceptible to earthquake damage if they are not reinforced. Cases where adobe structures were widely damaged during earthquakes include the 1976 Guatemala earthquake, the 2003 Bam earthquake, and the 2010 Chile earthquake.
Distribution
Buildings made of sun-dried earth are common throughout the world (the Middle East, Western Asia, North Africa, West Africa, South America, Southwestern North America, and Southwestern and Eastern Europe). Adobe had been in use by indigenous peoples of the Americas in the Southwestern United States, Mesoamerica, and the Andes for several thousand years. Puebloan peoples built their adobe structures with handfuls or basketfuls of adobe, until the Spanish introduced them to making bricks. Adobe bricks were used in Spain from the Late Bronze and Iron Ages (eighth century BCE onwards). Its wide use can be attributed to its simplicity of design and manufacture, and its economics.
Etymology
The word adobe has existed for around 4,000 years with relatively little change in either pronunciation or meaning. The word can be traced from the Middle Egyptian () word ḏbt "mud brick" (with vowels unwritten). Middle Egyptian evolved into Late Egyptian and finally to Coptic (), where it appeared as ⲧⲱⲃⲉ tōbə. This was adopted into Arabic as aṭ-ṭawbu or aṭ-ṭūbu, with the definite article al- attached to the root tuba. This was assimilated into the Old Spanish language as adobe , probably via Mozarabic. English borrowed the word from Spanish in the early 18th century, still referring to mudbrick construction.
In more modern English usage, the term adobe has come to include a style of architecture popular in the desert climates of North America, especially in New Mexico, regardless of the construction method.
Composition
An adobe brick is a composite material made of earth mixed with water and an organic material such as straw or dung. The soil composition typically contains sand, silt and clay. Straw is useful in binding the brick together and allowing the brick to dry evenly, thereby preventing cracking due to uneven shrinkage rates through the brick. Dung offers the same advantage. The most desirable soil texture for producing the mud of adobe is 15% clay, 10–30% silt, and 55–75% fine sand. Another source quotes 15–25% clay and the remainder sand and coarser particles up to cobbles, with no deleterious effect. Modern adobe is stabilized with either emulsified asphalt or Portland cement up to 10% by weight.
No more than half the clay content should be expansive clays, with the remainder non-expansive illite or kaolinite. Too much expansive clay results in uneven drying through the brick, resulting in cracking, while too much kaolinite will make a weak brick. Typically the soils of the Southwest United States, where such construction has been widely used, are an adequate composition.
Material properties
Adobe walls are load bearing, i.e. they carry their own weight into the foundation rather than being carried by another structure, hence the adobe must have sufficient compressive strength. In the United States, most building codes call for a minimum compressive strength of 300 lbf/in² (2.07 N/mm²) for the adobe block. Adobe construction should be designed so as to avoid lateral structural loads that would cause bending loads. The building codes require that the building sustain a 1 g lateral acceleration earthquake load. Such an acceleration will cause lateral loads on the walls, resulting in shear and bending and inducing tensile stresses. To withstand such loads, the codes typically call for a tensile modulus of rupture of at least 50 lbf/in² (0.345 N/mm²) for the finished block.
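As a quick check of the unit conversions quoted above, here is a minimal sketch in Python; the threshold values come from the paragraph, while the script itself is merely illustrative:

# Unit check for the building-code minimums quoted above.
PSI_TO_N_PER_MM2 = 4.4482216 / (25.4 ** 2)  # lbf/in^2 -> N/mm^2

for name, psi in [("compressive strength", 300), ("modulus of rupture", 50)]:
    print(f"{name}: {psi} lbf/in^2 = {psi * PSI_TO_N_PER_MM2:.3f} N/mm^2")
# compressive strength: 300 lbf/in^2 = 2.068 N/mm^2  (the 2.07 quoted)
# modulus of rupture: 50 lbf/in^2 = 0.345 N/mm^2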
In addition to being an inexpensive material with a small resource cost, adobe can serve as a significant heat reservoir due to the thermal properties inherent in the massive walls typical in adobe construction. In climates typified by hot days and cool nights, the high thermal mass of adobe mediates the high and low temperatures of the day, moderating the temperature of the living space. The massive walls require a large and relatively long input of heat from the sun (radiation) and from the surrounding air (convection) before they warm through to the interior. After the sun sets and the temperature drops, the warm wall will continue to transfer heat to the interior for several hours due to the time-lag effect. Thus, a well-planned adobe wall of the appropriate thickness is very effective at controlling inside temperature through the wide daily fluctuations typical of desert climates, a factor which has contributed to its longevity as a building material.
Thermodynamic material properties show significant variation in the literature. Some experiments suggest that the standard consideration of conductivity is not adequate for this material, as its main thermodynamic property is inertia; they conclude that experimental tests should be performed over a longer period of time than usual, preferably with changing thermal jumps. One source gives an effective R-value for a north-facing 10-in wall of R0 = 10 hr·ft²·°F/Btu, corresponding to a thermal conductivity k = (10 in × 1 ft/12 in)/R0 = 0.33 Btu/(hr·ft·°F), or 0.57 W/(m·K), in agreement with the thermal conductivity reported by another source. To determine the total R-value of a wall, scale R0 by the thickness of the wall in inches divided by 10. The thermal resistance of adobe is also stated as an R-value for a 10-inch wall of R0 = 4.1 hr·ft²·°F/Btu. Another source provides the following properties: conductivity = 0.30 Btu/(hr·ft·°F) or 0.52 W/(m·K); specific heat capacity = 0.24 Btu/(lb·°F) or 1 kJ/(kg·K); and density = 106 lb/ft³ or 1700 kg/m³, giving a volumetric heat capacity of 25.4 Btu/(ft³·°F) or 1700 kJ/(m³·K). Using an average thermal conductivity of k = 0.32 Btu/(hr·ft·°F) or 0.55 W/(m·K), the thermal diffusivity is calculated to be 0.013 ft²/hr or 3.3×10⁻⁷ m²/s.
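A minimal sketch (Python) of the two calculations described above, assuming the representative property values quoted in the text; the function name is illustrative:

# R-value scaling and thermal diffusivity for an adobe wall,
# using the representative property values quoted in the text.

R0 = 4.1            # hr*ft^2*degF/Btu for a 10-inch wall

def r_value(thickness_in):
    """Scale the 10-inch reference R-value to another wall thickness."""
    return R0 * thickness_in / 10.0

k = 0.32            # Btu/(hr*ft*degF), average thermal conductivity
rho_c = 25.4        # Btu/(ft^3*degF), volumetric heat capacity
alpha = k / rho_c   # thermal diffusivity, ft^2/hr

print(f"R-value of a 24-inch wall: {r_value(24):.1f} hr*ft^2*degF/Btu")
print(f"thermal diffusivity: {alpha:.4f} ft^2/hr")  # ~0.013, as quoted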
Uses
Poured and puddled adobe walls
Poured and puddled adobe (puddled clay, piled earth), today called cob, is made by placing soft adobe in layers, rather than by making individual dried bricks or using a form. "Puddle" is a general term for a clay or clay and sand-based material worked into a dense, plastic state. These are the oldest methods of building with adobe in the Americas until holes in the ground were used as forms, and later wooden forms used to make individual bricks were introduced by the Spanish.
Adobe bricks
Bricks made from adobe are usually made by pressing the mud mixture into an open timber frame. In North America, the brick is typically about in size. The mixture is molded into the frame, which is removed after initial setting. After drying for a few hours, the bricks are turned on edge to finish drying. Slow drying in shade reduces cracking.
The same mixture, without straw, is used to make mortar and often plaster on interior and exterior walls. Some cultures used lime-based cement for the plaster to protect against rain damage.
Depending on the form into which the mixture is pressed, adobe can encompass nearly any shape or size, provided drying is even and the mixture includes reinforcement for larger bricks. Reinforcement can include manure, straw, cement, rebar, or wooden posts. Straw, cement, or manure added to a standard adobe mixture can produce a stronger, more crack-resistant brick. A test is done on the soil content first. To do so, a sample of the soil is mixed into a clear container with some water, creating an almost completely saturated liquid. The container is shaken vigorously for one minute. It is then allowed to settle for a day until the soil has settled into layers. Heavier particles settle out first: sand at the bottom, silt above that, and very fine clay and organic matter staying in suspension for days. After the water has cleared, percentages of the various particles can be determined. Fifty to 60 percent sand and 35 to 40 percent clay will yield strong bricks. The Cooperative State Research, Education, and Extension Service at New Mexico State University recommends a mix of not more than clay, not less than sand, and never more than silt.
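A small sketch (Python) of the arithmetic behind this jar test, assuming hypothetical measured layer thicknesses; the 50–60% sand and 35–40% clay acceptance ranges are the figures quoted above:

# Jar-test arithmetic: convert measured sediment layers into percentages
# and compare against the strong-brick ranges quoted above.
# Layer thicknesses in mm (hypothetical measurements from one jar):
layers = {"sand": 55.0, "silt": 8.0, "clay": 37.0}

total = sum(layers.values())
percent = {name: 100.0 * t / total for name, t in layers.items()}

ok = 50 <= percent["sand"] <= 60 and 35 <= percent["clay"] <= 40
for name, p in percent.items():
    print(f"{name}: {p:.1f}%")
print("suitable for strong bricks" if ok else "adjust the mix")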
During the Great Depression, designer and builder Hugh W. Comstock used cheaper materials and made a specialized adobe brick called "Bitudobe." His first adobe house was built in 1936. In 1948, he published the book Post-Adobe; Simplified Adobe Construction Combining A Rugged Timber Frame And Modern Stabilized Adobe, which described his method of construction, including how to make "Bitudobe." In 1938, he served as an adviser to the architects Franklin & Kump Associates, who built the Carmel High School, which used his Post-adobe system.
Adobe wall construction
The ground supporting an adobe structure should be compressed, as the weight of adobe wall is significant and foundation settling may cause cracking of the wall. Footing depth is to be below the ground frost level. The footing and stem wall are commonly 24 and 14 inches thick, respectively. Modern construction codes call for the use of reinforcing steel in the footing and stem wall. Adobe bricks are laid by course. Adobe walls usually never rise above two stories as they are load bearing and adobe has low structural strength. When creating window and door openings, a lintel is placed on top of the opening to support the bricks above. Atop the last courses of brick, bond beams made of heavy wood beams or modern reinforced concrete are laid to provide a horizontal bearing plate for the roof beams and to redistribute lateral earthquake loads to shear walls more able to carry the forces. To protect the interior and exterior adobe walls, finishes such as mud plaster, whitewash or stucco can be applied. These protect the adobe wall from water damage, but need to be reapplied periodically. Alternatively, the walls can be finished with other nontraditional plasters that provide longer protection. Bricks made with stabilized adobe generally do not need protection of plasters.
Adobe roof
The traditional adobe roof has been constructed using a mixture of soil/clay, water, sand and organic materials. The mixture was then formed and pressed into wood forms, producing rows of dried earth bricks that would then be laid across a support structure of wood and plastered into place with more adobe.
Depending on the materials available, a roof may be assembled using wood or metal beams to create a framework to begin layering adobe bricks. Depending on the thickness of the adobe bricks, the framework has been preformed using a steel framing and a layering of a metal fencing or wiring over the framework to allow an even load as masses of adobe are spread across the metal fencing like cob and allowed to air dry accordingly. This method was demonstrated with an adobe blend heavily impregnated with cement to allow even drying and prevent cracking.
The more traditional flat adobe roofs are functional only in dry climates that are not exposed to snow loads. The heaviest wooden beams, called vigas, lie atop the wall. Across the vigas lie smaller members called latillas and upon those brush is then laid. Finally, the adobe layer is applied.
To construct a flat adobe roof, beams of wood were laid to span the building, the ends of which were attached to the tops of the walls. Once the vigas, latillas and brush are laid, adobe bricks are placed. An adobe roof is often laid with bricks slightly larger in width to ensure a greater expanse is covered when placing the bricks onto the roof. Following each individual brick should be a layer of adobe mortar, recommended to be at least thick to make certain there is ample strength between the brick's edges and also to provide a relative moisture barrier during rain.
Roof design evolved around 1850 in the American Southwest. Three inches of adobe mud was applied on top of the latillas, then 18 inches of dry adobe dirt applied to the roof. The dirt was contoured into a low slope to a downspout, also known as a canal. When moisture was applied to the roof, the clay particles expanded to create a waterproof membrane. Once a year it was necessary to pull the weeds from the roof and re-slope the dirt as needed.
Depending on the materials, adobe roofs can be inherently fire-proof. The construction of a chimney can greatly influence the construction of the roof supports, creating an extra need for care in choosing the materials. The builders can make an adobe chimney by stacking simple adobe bricks in a similar fashion as the surrounding walls.
In 1927, the Uniform Building Code (UBC) was adopted in the United States. Local ordinances referencing the UBC added requirements to building with adobe. These included: restriction of adobe structures to one story in height, requirements for the adobe mix (compressive and shear strength), and new requirements stating that every building must be designed to withstand seismic activity, specifically lateral forces. By the 1980s, however, seismic-related changes in the California Building Code effectively ended solid-wall adobe construction in California, although post-and-beam adobe and veneers are still being used.
Adobe around the world
The largest structure ever made from adobe is the Arg-é Bam, built by the Achaemenid Empire. Other large adobe structures are the Huaca del Sol in Peru, with 100 million signed bricks, and the ciudadelas of Chan Chan and Tambo Colorado, both in Peru.
See also
used adobe walls
(waterproofing plaster)
Taq Kasra (also known as Ctesiphon Arch) in Iraq is the largest mud brick arch in the world, built beginning in 540 AD
References
External links
Soil-based building materials
Masonry
Adobe buildings and structures
Appropriate technology
Vernacular architecture
Sustainable building
Buildings and structures by construction material
Western (genre) staples and terminology
|
https://en.wikipedia.org/wiki/Ampere
|
The ampere (symbol: A), often shortened to amp, is the unit of electric current in the International System of Units (SI). One ampere is equal to 1 coulomb of charge moving past a point in 1 second, or about 6.241509×10¹⁸ electrons' worth of charge moving past a point in 1 second. It is named after the French mathematician and physicist André-Marie Ampère (1775–1836), considered the father of electromagnetism along with Danish physicist Hans Christian Ørsted.
As of the 2019 redefinition of the SI base units, the ampere is defined by fixing the elementary charge e to be exactly 1.602176634×10⁻¹⁹ coulombs, which means an ampere is an electric current equivalent to 10¹⁹ elementary charges moving every 1.602176634 seconds, or about 6.241509×10¹⁸ elementary charges moving in a second. Prior to the redefinition, the ampere was defined as the current passing through two parallel wires 1 metre apart that produces a magnetic force of 2×10⁻⁷ newtons per metre.
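The arithmetic behind the 2019 definition is simple enough to verify directly; in the sketch below (Python), the constant is exact by definition, while the formatting is merely illustrative:

# Elementary-charge arithmetic behind the 2019 ampere definition.
e = 1.602176634e-19      # C, exact by definition since 2019

charges_per_second = 1.0 / e            # at a current of 1 A
print(f"{charges_per_second:.9e} elementary charges per second")
# ~6.241509074e+18, i.e. one coulomb of charge per second

# Equivalently, 1e19 elementary charges pass every 1.602... seconds:
print(f"{1e19 * e:.9f} s per 1e19 charges")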
The earlier CGS system has two units of current, one structured similar to the SI's and the other using Coulomb's law as a fundamental relationship, with the unit of charge defined by measuring the force between two charged metal plates. The unit of current is then defined as one unit of charge per second. In SI, the unit of charge, the coulomb, is defined as the charge carried by one ampere during one second.
History
The ampere is named for French physicist and mathematician André-Marie Ampère (1775–1836), who studied electromagnetism and laid the foundation of electrodynamics. In recognition of Ampère's contributions to the creation of modern electrical science, an international convention, signed at the 1881 International Exposition of Electricity, established the ampere as a standard unit of electrical measurement for electric current.
The ampere was originally defined as one tenth of the unit of electric current in the centimetre–gram–second system of units. That unit, now known as the abampere, was defined as the amount of current that generates a force of two dynes per centimetre of length between two wires one centimetre apart. The size of the unit was chosen so that the units derived from it in the MKSA system would be conveniently sized.
The "international ampere" was an early realization of the ampere, defined as the current that would deposit of silver per second from a silver nitrate solution. Later, more accurate measurements revealed that this current is .
Since power is defined as the product of current and voltage, the ampere can alternatively be expressed in terms of the other units using the relationship P = IV, and thus 1 A = 1 W/V. Current can be measured by a multimeter, a device that can measure electrical voltage, current, and resistance.
Former definition in the SI
Until 2019, the SI defined the ampere as follows:
The ampere is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross-section, and placed one metre apart in vacuum, would produce between these conductors a force equal to 2×10⁻⁷ newtons per metre of length.
Ampère's force law states that there is an attractive or repulsive force between two parallel wires carrying an electric current. This force is used in the formal definition of the ampere.
The SI unit of charge, the coulomb, was then defined as "the quantity of electricity carried in 1 second by a current of 1 ampere". Conversely, a current of one ampere is one coulomb of charge going past a given point per second: 1 A = 1 C/s.
In general, charge was determined by steady current flowing for a time as Q = It.
This definition of the ampere was most accurately realised using a Kibble balance, but in practice the unit was maintained via Ohm's law from the units of electromotive force and resistance, the volt and the ohm, since the latter two could be tied to physical phenomena that are relatively easy to reproduce, the Josephson effect and the quantum Hall effect, respectively.
Techniques to establish the realisation of an ampere had a relative uncertainty of approximately a few parts in 10⁷, and involved realisations of the watt, the ohm and the volt.
Present definition
The 2019 redefinition of the SI base units defined the ampere by taking the fixed numerical value of the elementary charge e to be 1.602176634×10⁻¹⁹ when expressed in the unit C, which is equal to A⋅s, where the second is defined in terms of Δν_Cs, the unperturbed ground state hyperfine transition frequency of the caesium-133 atom.
The SI unit of charge, the coulomb, "is the quantity of electricity carried in 1 second by a current of 1 ampere". Conversely, a current of one ampere is one coulomb of charge going past a given point per second: 1 A = 1 C/s.
In general, charge is determined by steady current flowing for a time as Q = It.
Constant, instantaneous and average current are expressed in amperes (as in "the charging current is 1.2 A") and the charge accumulated (or passed through a circuit) over a period of time is expressed in coulombs (as in "the battery charge is "). The relation of the ampere (C/s) to the coulomb is the same as that of the watt (J/s) to the joule.
Units derived from the ampere
The international system of units (SI) is based on 7 SI base units: the second, metre, kilogram, kelvin, ampere, mole, and candela, representing 7 fundamental types of physical quantity, or "dimensions" (time, length, mass, temperature, electric current, amount of substance, and luminous intensity, respectively), with all other SI units being defined using these. These SI derived units can either be given special names (e.g. watt, volt, lux) or defined in terms of others (e.g. metre per second). The units with special names derived from the ampere include the coulomb, volt, ohm, siemens, farad, henry, weber, and tesla.
There are also some SI units that are frequently used in the context of electrical engineering and electrical appliances, but can be defined independently of the ampere, notably the hertz, joule, watt, candela, lumen, and lux.
SI prefixes
Like other SI units, the ampere can be modified by adding a prefix that multiplies it by a power of 10.
See also
Ammeter
Ampacity (current-carrying capacity)
Electric current
Electric shock
Hydraulic analogy
Magnetic constant
Orders of magnitude (current)
References
External links
The NIST Reference on Constants, Units, and Uncertainty
NIST Definition of ampere and μ0
SI base units
Units of electric current
|
https://en.wikipedia.org/wiki/Algorithm
|
In mathematics and computer science, an algorithm () is a finite sequence of rigorous instructions, typically used to solve a class of specific problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can use conditionals to divert the code execution through various routes (referred to as automated decision-making) and deduce valid inferences (referred to as automated reasoning), achieving automation eventually. Using human characteristics as descriptors of machines in metaphorical ways was already practiced by Alan Turing with terms such as "memory", "search" and "stimulus".
In contrast, a heuristic is an approach to problem solving that may not be fully specified or may not guarantee correct or optimal results, especially in problem domains where there is no well-defined correct or optimal result.
As an effective method, an algorithm can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.
History
Ancient algorithms
Since antiquity, step-by-step procedures for solving mathematical problems have been attested. This includes Babylonian mathematics (around 2500 BC), Egyptian mathematics (around 1550 BC), Indian mathematics (around 800 BC and later; e.g. Shulba Sutras, Kerala School, and Brāhmasphuṭasiddhānta), The Ifa Oracle (around 500 BC), Greek mathematics (around 240 BC, e.g. sieve of Eratosthenes and Euclidean algorithm), and Arabic mathematics (9th century, e.g. cryptographic algorithms for code-breaking based on frequency analysis).
Al-Khwārizmī and the term algorithm
Around 825, Muḥammad ibn Mūsā al-Khwārizmī wrote kitāb al-ḥisāb al-hindī ("Book of Indian computation") and kitab al-jam' wa'l-tafriq al-ḥisāb al-hindī ("Addition and subtraction in Indian arithmetic"). Both of these texts are now lost in the original Arabic. (However, his other book, on algebra, survives.)
In the early 12th century, Latin translations of said al-Khwarizmi texts involving the Hindu–Arabic numeral system and arithmetic appeared: Liber Alghoarismi de practica arismetrice (attributed to John of Seville) and Liber Algorismi de numero Indorum (attributed to Adelard of Bath). Hereby, alghoarismi or algorismi is the Latinization of Al-Khwarizmi's name; the text starts with the phrase Dixit Algorismi ("Thus spoke Al-Khwarizmi").
In 1240, Alexander of Villedieu wrote a Latin text titled Carmen de Algorismo ("Poem on Algorism"). The poem is a few hundred lines long and summarizes the art of calculating with the new styled Indian dice (Tali Indorum), or Hindu numerals.
English evolution of the word
Around 1230, the English word algorism is attested, and it was then used by Chaucer in 1391. English adopted the French term.
In the 15th century, under the influence of the Greek word ἀριθμός (arithmos, "number"; cf. "arithmetic"), the Latin word was altered to algorithmus.
In 1656, in the English dictionary Glossographia, it says:
Algorism ([Latin] algorismus) the Art or use of Cyphers, or of numbering by Cyphers; skill in accounting.
Augrime ([Latin] algorithmus) skil in accounting or numbring.
In 1658, in the first edition of The New World of English Words, it says:
Algorithme, (a word compounded of Arabick and Spanish,) the art of reckoning by Cyphers.
In 1706, in the sixth edition of The New World of English Words, it says:
Algorithm, the Art of computing or reckoning by numbers, which contains the five principle Rules of Arithmetick, viz. Numeration, Addition, Subtraction, Multiplication and Division; to which may be added Extraction of Roots: It is also call'd Logistica Numeralis.
Algorism, the practical Operation in the several Parts of Specious Arithmetick or Algebra; sometimes it is taken for the Practice of Common Arithmetick by the ten Numeral Figures.
In 1751, in the Young Algebraist's Companion, Daniel Fenning contrasts the terms algorism and algorithm as follows:
Algorithm signifies the first Principles, and Algorism the practical Part, or knowing how to put the Algorithm in Practice.
The term algorithm is attested to mean a "step-by-step procedure" in English.
In 1842, in the Dictionary of Science, Literature and Art, it says:
ALGORITHM, signifies the art of computing in reference to some particular subject, or in some particular way; as the algorithm of numbers; the algorithm of the differential calculus.
Machine usage
In 1928, a partial formalization of the modern concept of algorithms began with attempts to solve the Entscheidungsproblem (decision problem) posed by David Hilbert. Later formalizations were framed as attempts to define "effective calculability" or "effective method". Those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's Formulation 1 of 1936, and Alan Turing's Turing machines of 1936–37 and 1939.
Informal definition
One informal definition is "a set of rules that precisely defines a sequence of operations", which would include all computer programs (including programs that do not perform numeric calculations), and (for example) any prescribed bureaucratic procedure or cook-book recipe.
In general, a program is an algorithm only if it stops eventually—even though infinite loops may sometimes prove desirable.
A prototypical example of an algorithm is the Euclidean algorithm, which is used to determine the greatest common divisor of two integers; an example (there are others) is described by the flowchart above and as an example in a later section.
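As a concrete illustration, here is a minimal sketch of the Euclidean algorithm in Python, using the iterative remainder formulation (one of several equivalent variants):

def gcd(m, n):
    """Euclidean algorithm: greatest common divisor of two positive integers.

    Repeatedly replace the pair (m, n) by (n, m mod n); when the
    remainder reaches 0, the other member of the pair is the GCD.
    """
    while n != 0:
        m, n = n, m % n
    return m

print(gcd(1071, 462))  # 21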
Boolos and Jeffrey offer an informal meaning of the word "algorithm" in the following quotation:
An "enumerably infinite set" is one whose elements can be put into one-to-one correspondence with the integers. Thus Boolos and Jeffrey are saying that an algorithm implies instructions for a process that "creates" output integers from an arbitrary "input" integer or integers that, in theory, can be arbitrarily large. For example, an algorithm can be an algebraic equation such as y = m + n (i.e., two arbitrary "input variables" m and n that produce an output y), but various authors' attempts to define the notion indicate that the word implies much more than this, something on the order of (for the addition example):
Precise instructions (in a language understood by "the computer") for a fast, efficient, "good" process that specifies the "moves" of "the computer" (machine or human, equipped with the necessary internally contained information and capabilities) to find, decode, and then process arbitrary input integers/symbols m and n, symbols + and = ... and "effectively" produce, in a "reasonable" time, output-integer y at a specified place and in a specified format.
The concept of algorithm is also used to define the notion of decidability—a notion that is central for explaining how formal systems come into being starting from a small set of axioms and rules. In logic, the time that an algorithm requires to complete cannot be measured, as it is not apparently related to the customary physical dimension. From such uncertainties, that characterize ongoing work, stems the unavailability of a definition of algorithm that suits both concrete (in some sense) and abstract usage of the term.
Most algorithms are intended to be implemented as computer programs. However, algorithms are also implemented by other means, such as in a biological neural network (for example, the human brain implementing arithmetic or an insect looking for food), in an electrical circuit, or in a mechanical device.
Formalization
Algorithms are essential to the way computers process data. Many computer programs contain algorithms that detail the specific instructions a computer should perform—in a specific order—to carry out a specified task, such as calculating employees' paychecks or printing students' report cards. Thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system. Authors who assert this thesis include Minsky (1967), Savage (1987), and Gurevich (2000).
Turing machines can define computational processes that do not terminate. The informal definitions of algorithms generally require that the algorithm always terminates. This requirement renders the task of deciding whether a formal procedure is an algorithm impossible in the general case—due to a major theorem of computability theory known as the halting problem.
Typically, when an algorithm is associated with processing information, data can be read from an input source, written to an output device and stored for further processing. Stored data are regarded as part of the internal state of the entity performing the algorithm. In practice, the state is stored in one or more data structures.
For some of these computational processes, the algorithm must be rigorously defined: and specified in the way it applies in all possible circumstances that could arise. This means that any conditional steps must be systematically dealt with, case by case; the criteria for each case must be clear (and computable).
Because an algorithm is a precise list of precise steps, the order of computation is always crucial to the functioning of the algorithm. Instructions are usually assumed to be listed explicitly, and are described as starting "from the top" and going "down to the bottom"—an idea that is described more formally by flow of control.
So far, the discussion on the formalization of an algorithm has assumed the premises of imperative programming. This is the most common conception—one that attempts to describe a task in discrete, "mechanical" means. Associated with this conception of formalized algorithms is the assignment operation, which sets the value of a variable. It derives from the intuition of "memory" as a scratchpad. An example of such an assignment can be found below.
For some alternate conceptions of what constitutes an algorithm, see functional programming and logic programming.
Expressing algorithms
Algorithms can be expressed in many kinds of notation, including natural languages, pseudocode, flowcharts, drakon-charts, programming languages or control tables (processed by interpreters). Natural language expressions of algorithms tend to be verbose and ambiguous, and are rarely used for complex or technical algorithms. Pseudocode, flowcharts, drakon-charts and control tables are structured ways to express algorithms that avoid many of the ambiguities common in the statements based on natural language. Programming languages are primarily intended for expressing algorithms in a form that can be executed by a computer, but are also often used as a way to define or document algorithms.
There is a wide variety of representations possible and one can express a given Turing machine program as a sequence of machine tables (see finite-state machine, state-transition table and control table for more), as flowcharts and drakon-charts (see state diagram for more), or as a form of rudimentary machine code or assembly code called "sets of quadruples" (see Turing machine for more).
Representations of algorithms can be classed into three accepted levels of Turing machine description, as follows:
1 High-level description
"...prose to describe an algorithm, ignoring the implementation details. At this level, we do not need to mention how the machine manages its tape or head."
2 Implementation description
"...prose used to define the way the Turing machine uses its head and the way that it stores data on its tape. At this level, we do not give details of states or transition function."
3 Formal description
Most detailed, "lowest level", gives the Turing machine's "state table".
For an example of the simple algorithm "Add m+n" described in all three levels, see Examples.
Design
Algorithm design refers to a method or a mathematical process for problem-solving and engineering algorithms. The design of algorithms is part of many solution theories, such as divide-and-conquer or dynamic programming within operations research. Techniques for designing and implementing algorithm designs are also called algorithm design patterns, with examples including the template method pattern and the decorator pattern.
One of the most important aspects of algorithm design is resource (run-time, memory usage) efficiency; big O notation is used to describe, for example, an algorithm's run-time growth as the size of its input increases.
Typical steps in the development of algorithms:
Problem definition
Development of a model
Specification of the algorithm
Designing an algorithm
Checking the correctness of the algorithm
Analysis of algorithm
Implementation of algorithm
Program testing
Documentation preparation
Computer algorithms
"Elegant" (compact) programs, "good" (fast) programs : The notion of "simplicity and elegance" appears informally in Knuth and precisely in Chaitin:
Knuth: " ... we want good algorithms in some loosely defined aesthetic sense. One criterion ... is the length of time taken to perform the algorithm .... Other criteria are adaptability of the algorithm to computers, its simplicity, and elegance, etc."
Chaitin: " ... a program is 'elegant,' by which I mean that it's the smallest possible program for producing the output that it does"
Chaitin prefaces his definition with: "I'll show you can't prove that a program is 'elegant'"—such a proof would solve the halting problem (ibid).
Algorithm versus function computable by an algorithm: For a given function multiple algorithms may exist. This is true even without expanding the instruction set available to the programmer. Rogers observes that "It is ... important to distinguish between the notion of algorithm, i.e. procedure, and the notion of function computable by algorithm, i.e. mapping yielded by procedure. The same function may have several different algorithms".
Unfortunately, there may be a tradeoff between goodness (speed) and elegance (compactness)—an elegant program may take more steps to complete a computation than one less elegant. An example that uses Euclid's algorithm appears below.
Computers (and computors), models of computation: A computer (or human "computer") is a restricted type of machine, a "discrete deterministic mechanical device" that blindly follows its instructions. Melzak's and Lambek's primitive models reduced this notion to four elements: (i) discrete, distinguishable locations, (ii) discrete, indistinguishable counters, (iii) an agent, and (iv) a list of instructions that are effective relative to the capability of the agent.
Minsky describes a more congenial variation of Lambek's "abacus" model in his "Very Simple Bases for Computability". Minsky's machine proceeds sequentially through its five (or six, depending on how one counts) instructions unless either a conditional IF-THEN GOTO or an unconditional GOTO changes program flow out of sequence. Besides HALT, Minsky's machine includes three assignment (replacement, substitution) operations: ZERO (e.g. the contents of location replaced by 0: L ← 0), SUCCESSOR (e.g. L ← L+1), and DECREMENT (e.g. L ← L − 1). Rarely must a programmer write "code" with such a limited instruction set. But Minsky shows (as do Melzak and Lambek) that his machine is Turing complete with only four general types of instructions: conditional GOTO, unconditional GOTO, assignment/replacement/substitution, and HALT. However, a few different assignment instructions (e.g. DECREMENT, INCREMENT, and ZERO/CLEAR/EMPTY for a Minsky machine) are also required for Turing-completeness; their exact specification is somewhat up to the designer. The unconditional GOTO is convenient; it can be constructed by initializing a dedicated location to zero e.g. the instruction " Z ← 0 "; thereafter the instruction IF Z=0 THEN GOTO xxx is unconditional.
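To make such a machine concrete, the following is a minimal sketch in C of a counter-machine interpreter built from exactly these instruction types (assignment operations, conditional GOTO, unconditional GOTO, and HALT). The names Ins, run, and the small addition program are illustrative inventions for this sketch, not Minsky's notation.

#include <stdio.h>

// A toy Minsky-style counter machine: ZERO (L <- 0), INC (L <- L+1),
// DEC (L <- L-1), JZ (IF L = 0 THEN GOTO addr), JMP (GOTO addr), HALT.
enum Op { ZERO, INC, DEC, JZ, JMP, HALT };
struct Ins { enum Op op; int loc; int addr; };

void run(const struct Ins *prog, long *reg) {
    int pc = 0;                                    // program flow proceeds sequentially
    for (;;) {
        struct Ins i = prog[pc++];
        switch (i.op) {
        case ZERO: reg[i.loc] = 0; break;                        // L <- 0
        case INC:  reg[i.loc] += 1; break;                       // L <- L+1
        case DEC:  if (reg[i.loc] > 0) reg[i.loc] -= 1; break;   // L <- L-1
        case JZ:   if (reg[i.loc] == 0) pc = i.addr; break;      // conditional GOTO
        case JMP:  pc = i.addr; break;                           // unconditional GOTO
        case HALT: return;
        }
    }
}

int main(void) {
    // Add reg[0] to reg[1]: decrement location 0 and increment location 1
    // until location 0 is empty.
    struct Ins add[] = {
        { JZ,  0, 4 },   // 0: if reg0 = 0 goto 4
        { DEC, 0, 0 },   // 1: reg0 <- reg0 - 1
        { INC, 1, 0 },   // 2: reg1 <- reg1 + 1
        { JMP, 0, 0 },   // 3: goto 0
        { HALT, 0, 0 },  // 4: stop
    };
    long reg[2] = { 3, 4 };
    run(add, reg);
    printf("3 + 4 = %ld\n", reg[1]);   // prints 7
    return 0;
}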
Simulation of an algorithm: computer (computor) language: Knuth advises the reader that "the best way to learn an algorithm is to try it . . . immediately take pen and paper and work through an example". But what about a simulation or execution of the real thing? The programmer must translate the algorithm into a language that the simulator/computer/computor can effectively execute. Stone gives an example of this: when computing the roots of a quadratic equation the computer must know how to take a square root. If it does not, then the algorithm, to be effective, must provide a set of rules for extracting a square root.
This means that the programmer must know a "language" that is effective relative to the target computing agent (computer/computor).
But what model should be used for the simulation? Van Emde Boas observes "even if we base complexity theory on abstract instead of concrete machines, the arbitrariness of the choice of a model remains. It is at this point that the notion of simulation enters". When speed is being measured, the instruction set matters. For example, the subprogram in Euclid's algorithm to compute the remainder would execute much faster if the programmer had a "modulus" instruction available rather than just subtraction (or worse: just Minsky's "decrement").
Structured programming, canonical structures: Per the Church–Turing thesis, any algorithm can be computed by a model known to be Turing complete, and per Minsky's demonstrations, Turing completeness requires only four instruction types—conditional GOTO, unconditional GOTO, assignment, HALT. Kemeny and Kurtz observe that, while "undisciplined" use of unconditional GOTOs and conditional IF-THEN GOTOs can result in "spaghetti code", a programmer can write structured programs using only these instructions; on the other hand, "it is also possible, and not too hard, to write badly structured programs in a structured language". Tausworthe augments the three Böhm–Jacopini canonical structures (SEQUENCE, IF-THEN-ELSE, and WHILE-DO) with two more: DO-WHILE and CASE. An additional benefit of a structured program is that it lends itself to proofs of correctness using mathematical induction.
Canonical flowchart symbols: The graphical aide called a flowchart offers a way to describe and document an algorithm (and a computer program corresponding to it). Like the program flow of a Minsky machine, a flowchart always starts at the top of a page and proceeds down. Its primary symbols are only four: the directed arrow showing program flow, the rectangle (SEQUENCE, GOTO), the diamond (IF-THEN-ELSE), and the dot (OR-tie). The Böhm–Jacopini canonical structures are made of these primitive shapes. Sub-structures can "nest" in rectangles, but only if a single exit occurs from the superstructure. The symbols and their use to build the canonical structures are shown in the diagram.
Examples
Algorithm example
One of the simplest algorithms finds the largest number in a list of numbers in random order. Finding the solution requires looking at every number in the list. From this follows a simple algorithm, which can be stated in a high-level description in English prose, as:
High-level description:
If there are no numbers in the set, then there is no highest number.
Assume the first number in the set is the largest number in the set.
For each remaining number in the set: if this number is larger than the current largest number, consider this number to be the largest number in the set.
When there are no numbers left in the set to iterate over, consider the current largest number to be the largest number of the set.
(Quasi-)formal description:
Written in prose but much closer to the high-level language of a computer program, the following is the more formal coding of the algorithm in pseudocode or pidgin code:
Input: A list of numbers L.
Output: The largest number in the list L.
if L.size = 0 return null
largest ← L[0]
for each item in L, do
if item > largest, then
largest ← item
return largest
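The same algorithm can also be rendered in a C-family language, matching the program style used later in this article. This is a minimal sketch: it returns the index of the largest element, using -1 to play the role of the pseudocode's "null" for an empty list, since C has no null number; the name largestIndex is an illustrative invention.

// Find the largest number in a list: a C rendering of the pseudocode above.
int largestIndex(const int L[], int size) {
    if (size == 0) return -1;          // no numbers: no highest number
    int largest = 0;                   // assume the first number is the largest
    for (int i = 1; i < size; i++) {
        if (L[i] > L[largest]) {
            largest = i;               // a larger number has been found
        }
    }
    return largest;                    // index of the largest number
}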
Euclid's algorithm
In mathematics, the Euclidean algorithm or Euclid's algorithm, is an efficient method for computing the greatest common divisor (GCD) of two integers (numbers), the largest number that divides them both without a remainder. It is named after the ancient Greek mathematician Euclid, who first described it in his Elements (c. 300 BC). It is one of the oldest algorithms in common use. It can be used to reduce fractions to their simplest form, and is a part of many other number-theoretic and cryptographic calculations.
Euclid poses the problem thus: "Given two numbers not prime to one another, to find their greatest common measure". He defines "A number [to be] a multitude composed of units": a counting number, a positive integer not including zero. To "measure" is to place a shorter measuring length s successively (q times) along longer length l until the remaining portion r is less than the shorter length s. In modern words, remainder r = l − q×s, q being the quotient, or remainder r is the "modulus", the integer-fractional part left over after the division.
For Euclid's method to succeed, the starting lengths must satisfy two requirements: (i) the lengths must not be zero, AND (ii) the subtraction must be "proper"; i.e., a test must guarantee that the smaller of the two numbers is subtracted from the larger (or the two can be equal so their subtraction yields zero).
Euclid's original proof adds a third requirement: the two lengths must not be prime to one another. Euclid stipulated this so that he could construct a reductio ad absurdum proof that the two numbers' common measure is in fact the greatest. While Nicomachus' algorithm is the same as Euclid's, when the numbers are prime to one another, it yields the number "1" for their common measure. So, to be precise, the following is really Nicomachus' algorithm.
Computer language for Euclid's algorithm
Only a few instruction types are required to execute Euclid's algorithm—some logical tests (conditional GOTO), unconditional GOTO, assignment (replacement), and subtraction.
A location is symbolized by upper case letter(s), e.g. S, A, etc.
The varying quantity (number) in a location is written in lower case letter(s) and (usually) associated with the location's name. For example, location L at the start might contain the number l = 3009.
An inelegant program for Euclid's algorithm
The following algorithm is framed as Knuth's four-step version of Euclid's and Nicomachus', but, rather than using division to find the remainder, it uses successive subtractions of the shorter length s from the remaining length r until r is less than s. The high-level description, shown in boldface, is adapted from Knuth 1973:2–4:
INPUT:
[Into two locations L and S put the numbers l and s that represent the two lengths]:
INPUT L, S
[Initialize R: make the remaining length r equal to the starting/initial/input length l]:
R ← L
E0: [Ensure r ≥ s.]
[Ensure the smaller of the two numbers is in S and the larger in R]:
IF R > S THEN
the contents of L is the larger number so skip over the exchange-steps 4, 5 and 6:
GOTO step 7
ELSE
swap the contents of R and S.
L ← R (this first step is redundant, but is useful for later discussion).
R ← S
S ← L
E1: [Find remainder]: Until the remaining length r in R is less than the shorter length s in S, repeatedly subtract the measuring number s in S from the remaining length r in R.
IF S > R THEN
done measuring so
GOTO 10
ELSE
measure again,
R ← R − S
[Remainder-loop]:
GOTO 7.
E2: [Is the remainder zero?]: EITHER (i) the last measure was exact, the remainder in R is zero, and the program can halt, OR (ii) the algorithm must continue: the last measure left a remainder in R less than measuring number in S.
IF R = 0 THEN
done so
GOTO step 15
ELSE
CONTINUE TO step 11,
E3: [Interchange s and r]: The nut of Euclid's algorithm. Use remainder r to measure what was previously smaller number s; L serves as a temporary location.
L ← R
R ← S
S ← L
[Repeat the measuring process]:
GOTO 7
OUTPUT:
[Done. S contains the greatest common divisor]:
PRINT S
DONE:
HALT, END, STOP.
An elegant program for Euclid's algorithm
The flowchart of "Elegant" can be found at the top of this article. In the (unstructured) Basic language, the steps are numbered, and the instruction LET [] = [] is the assignment instruction symbolized by ←.
5 REM Euclid's algorithm for greatest common divisor
6 PRINT "Type two integers greater than 0"
10 INPUT A,B
20 IF B=0 THEN GOTO 80
30 IF A > B THEN GOTO 60
40 LET B=B-A
50 GOTO 20
60 LET A=A-B
70 GOTO 20
80 PRINT A
90 END
How "Elegant" works: In place of an outer "Euclid loop", "Elegant" shifts back and forth between two "co-loops", an A > B loop that computes A ← A − B, and a B ≤ A loop that computes B ← B − A. This works because, when at last the minuend M is less than or equal to the subtrahend S (Difference = Minuend − Subtrahend), the minuend can become s (the new measuring length) and the subtrahend can become the new r (the length to be measured); in other words the "sense" of the subtraction reverses.
The following version can be used with programming languages from the C-family:
// Euclid's algorithm for greatest common divisor
#include <stdlib.h>   // for abs()

int euclidAlgorithm(int A, int B) {
    A = abs(A);
    B = abs(B);
    while (B != 0) {         // note: loops forever if A = 0 and B != 0 (discussed below)
        while (A > B) {
            A = A - B;       // the A > B co-loop
        }
        B = B - A;           // the B <= A co-loop
    }
    return A;
}
Testing the Euclid algorithms
Does an algorithm do what its author wants it to do? A few test cases usually give some confidence in the core functionality. But tests are not enough. For test cases, one source uses 3009 and 884. Knuth suggested 40902, 24140. Another interesting case is the two relatively prime numbers 14157 and 5950.
But "exceptional cases" must be identified and tested. Will "Inelegant" perform properly when R > S, S > R, R = S? Ditto for "Elegant": B > A, A > B, A = B? (Yes to all). What happens when one number is zero, both numbers are zero? ("Inelegant" computes forever in all cases; "Elegant" computes forever when A = 0.) What happens if negative numbers are entered? Fractional numbers? If the input numbers, i.e. the domain of the function computed by the algorithm/program, is to include only positive integers including zero, then the failures at zero indicate that the algorithm (and the program that instantiates it) is a partial function rather than a total function. A notable failure due to exceptions is the Ariane 5 Flight 501 rocket failure (June 4, 1996).
Proof of program correctness by use of mathematical induction: Knuth demonstrates the application of mathematical induction to an "extended" version of Euclid's algorithm, and he proposes "a general method applicable to proving the validity of any algorithm". Tausworthe proposes that a measure of the complexity of a program be the length of its correctness proof.
Measuring and improving the Euclid algorithms
Elegance (compactness) versus goodness (speed): With only six core instructions, "Elegant" is the clear winner, compared to "Inelegant" at thirteen instructions. However, "Inelegant" is faster (it arrives at HALT in fewer steps). Algorithm analysis indicates why this is the case: "Elegant" does two conditional tests in every subtraction loop, whereas "Inelegant" only does one. As the algorithm (usually) requires many loop-throughs, on average much time is wasted doing a "B = 0?" test that is needed only after the remainder is computed.
Can the algorithms be improved?: Once the programmer judges a program "fit" and "effective"—that is, it computes the function intended by its author—then the question becomes, can it be improved?
The compactness of "Inelegant" can be improved by the elimination of five steps. But Chaitin proved that compacting an algorithm cannot be automated by a generalized algorithm; rather, it can only be done heuristically; i.e., by exhaustive search (examples to be found at Busy beaver), trial and error, cleverness, insight, application of inductive reasoning, etc. Observe that steps 4, 5 and 6 are repeated in steps 11, 12 and 13. Comparison with "Elegant" provides a hint that these steps, together with steps 2 and 3, can be eliminated. This reduces the number of core instructions from thirteen to eight, which makes it "more elegant" than "Elegant", at nine steps.
The speed of "Elegant" can be improved by moving the "B=0?" test outside of the two subtraction loops. This change calls for the addition of three instructions (B = 0?, A = 0?, GOTO). Now "Elegant" computes the example-numbers faster; whether this is always the case for any given A, B, and R, S would require a detailed analysis.
Algorithmic analysis
It is frequently important to know how much of a particular resource (such as time or storage) is theoretically required for a given algorithm. Methods have been developed for the analysis of algorithms to obtain such quantitative answers (estimates); for example, an algorithm which adds up the elements of a list of n numbers would have a time requirement of O(n), using big O notation. At all times the algorithm only needs to remember two values: the sum of all the elements so far, and its current position in the input list. Therefore, it is said to have a space requirement of O(1), if the space required to store the input numbers is not counted, or O(n) if it is counted.
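The summation algorithm just described can be written out to make the resource accounting visible; this is an illustrative sketch, with names chosen for this article rather than drawn from any source.

// Adds up the elements of a list of n numbers: O(n) time.
// Only two values are remembered, the running sum and the current position,
// so the extra space is O(1) (the input array itself is not counted).
long sum(const int xs[], int n) {
    long total = 0;                  // the sum of all elements so far
    for (int i = 0; i < n; i++) {    // i is the current position in the input
        total = total + xs[i];
    }
    return total;
}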
Different algorithms may complete the same task with a different set of instructions in less or more time, space, or 'effort' than others. For example, a binary search algorithm (with cost O(log n)) outperforms a sequential search (cost O(n)) when used for table lookups on sorted lists or arrays.
Formal versus empirical
The analysis and study of algorithms is a discipline of computer science, and is often practiced abstractly, without the use of a specific programming language or implementation. In this sense, algorithm analysis resembles other mathematical disciplines in that it focuses on the underlying properties of the algorithm and not on the specifics of any particular implementation. Usually pseudocode is used for analysis as it is the simplest and most general representation. However, most algorithms are ultimately implemented on particular hardware/software platforms, and their algorithmic efficiency is eventually put to the test using real code. For the solution of a "one off" problem, the efficiency of a particular algorithm may not have significant consequences (unless n is extremely large), but for algorithms designed for fast interactive, commercial, or long-life scientific usage it may be critical. Scaling from small n to large n frequently exposes inefficient algorithms that are otherwise benign.
Empirical testing is useful because it may uncover unexpected interactions that affect performance. Benchmarks may be used to compare before/after potential improvements to an algorithm after program optimization.
Empirical tests cannot replace formal analysis, though, and are not trivial to perform in a fair manner.
Execution efficiency
To illustrate the potential improvements possible even in well-established algorithms, a recent significant innovation, relating to FFT algorithms (used heavily in the field of image processing), can decrease processing time up to 1,000 times for applications like medical imaging. In general, speed improvements depend on special properties of the problem, which are very common in practical applications. Speedups of this magnitude enable computing devices that make extensive use of image processing (like digital cameras and medical equipment) to consume less power.
Classification
There are various ways to classify algorithms, each with its own merits.
By implementation
One way to classify algorithms is by implementation means.
Recursion
A recursive algorithm is one that invokes (makes reference to) itself repeatedly until a certain condition (also known as a termination condition) matches, which is a method common to functional programming. Iterative algorithms use repetitive constructs like loops and sometimes additional data structures like stacks to solve the given problems. Some problems are naturally suited for one implementation or the other. For example, the Towers of Hanoi puzzle is well understood using a recursive implementation. Every recursive version has an equivalent (but possibly more or less complex) iterative version, and vice versa.
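As a small illustration, Euclid's algorithm (shown earlier in iterative form) has a natural recursive formulation; this sketch uses the remainder operation and the hypothetical name gcdRecursive.

// A recursive formulation of Euclid's algorithm.
// Assumes non-negative inputs; the termination condition is B = 0.
int gcdRecursive(int A, int B) {
    if (B == 0) return A;            // termination condition reached
    return gcdRecursive(B, A % B);   // recur on a strictly smaller instance
}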
Serial, parallel or distributed
Algorithms are usually discussed with the assumption that computers execute one instruction of an algorithm at a time. Those computers are sometimes called serial computers. An algorithm designed for such an environment is called a serial algorithm, as opposed to parallel algorithms or distributed algorithms. Parallel algorithms take advantage of computer architectures where multiple processors can work on a problem at the same time, whereas distributed algorithms use multiple machines connected by a computer network. Parallel and distributed algorithms divide the problem into more symmetrical or asymmetrical subproblems and collect the results back together. For example, an algorithm that splits its work across the cores of a multi-core CPU is a parallel algorithm. The resource consumption in such algorithms is not only processor cycles on each processor but also the communication overhead between the processors. Some sorting algorithms can be parallelized efficiently, but their communication overhead is expensive. Iterative algorithms are generally parallelizable, but some problems have no parallel algorithms and are called inherently serial problems.
Deterministic or non-deterministic
Deterministic algorithms solve the problem with exact decisions at every step, whereas non-deterministic algorithms solve problems via guessing, although typical guesses are made more accurate through the use of heuristics.
Exact or approximate
While many algorithms reach an exact solution, approximation algorithms seek an approximation that is close to the true solution. The approximation can be reached by either using a deterministic or a random strategy. Such algorithms have practical value for many hard problems. One example of a problem solved approximately is the knapsack problem, where there is a set of given items, each with some weight and some value. The goal is to pack the knapsack so as to maximize the total value carried, while the total weight carried may be no more than some fixed number X. So, the solution must consider the weights of items as well as their value.
Quantum algorithm
Quantum algorithms run on a realistic model of quantum computation. The term is usually used for those algorithms which seem inherently quantum, or use some essential feature of quantum computing such as quantum superposition or quantum entanglement.
By design paradigm
Another way of classifying algorithms is by their design methodology or paradigm. There are a number of paradigms, each different from the others. Furthermore, each of these categories includes many different types of algorithms. Some common paradigms are:
Brute-force or exhaustive search
Brute force is a method of problem-solving that involves systematically trying every possible option until the optimal solution is found. This approach can be very time-consuming, as it requires going through every possible combination of variables. It is often used when other methods are unavailable or too complex. Brute force can solve a variety of problems, including finding the shortest path between two points and cracking passwords.
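Applied to the GCD problem from earlier in the article, brute force would simply try every candidate divisor. A sketch, assuming positive inputs and the invented name gcdBruteForce:

// Brute-force GCD: systematically try every candidate from the smaller
// input downward until one divides both. Correct, but far slower than
// Euclid's algorithm. Assumes A and B are positive.
int gcdBruteForce(int A, int B) {
    int c = (A < B) ? A : B;                       // no common divisor can exceed this
    while (c > 1 && (A % c != 0 || B % c != 0)) {
        c = c - 1;                                 // try the next smaller candidate
    }
    return c;                                      // 1 divides everything
}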
Divide and conquer
A divide-and-conquer algorithm repeatedly reduces an instance of a problem to one or more smaller instances of the same problem (usually recursively) until the instances are small enough to solve easily. Merge sorting is one example: the data is divided into segments, each segment is sorted, and the sorted segments are merged in the conquer phase to sort the entire data set. A simpler variant of divide and conquer is called a decrease-and-conquer algorithm, which solves an identical subproblem and uses the solution of this subproblem to solve the bigger problem. Divide and conquer divides the problem into multiple subproblems, so the conquer stage is more complex than in decrease-and-conquer algorithms. An example of a decrease-and-conquer algorithm is the binary search algorithm.
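A sketch of the binary search just mentioned, which decreases the instance by half at each step and so runs in O(log n) time on a sorted array; the name binarySearch is conventional but the exact signature here is this sketch's choice.

// Decrease and conquer: binary search on a sorted array.
// Returns the index of key, or -1 if key is absent.
int binarySearch(const int xs[], int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;      // midpoint, written to avoid overflow
        if (xs[mid] == key) return mid;
        if (xs[mid] < key) lo = mid + 1;   // discard the lower half
        else               hi = mid - 1;   // discard the upper half
    }
    return -1;                             // key not present
}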
Search and enumeration
Many problems (such as playing chess) can be modeled as problems on graphs. A graph exploration algorithm specifies rules for moving around a graph and is useful for such problems. This category also includes search algorithms, branch and bound enumeration and backtracking.
Randomized algorithm
Such algorithms make some choices randomly (or pseudo-randomly). They can be very useful in finding approximate solutions for problems where finding exact solutions can be impractical (see heuristic method below). For some of these problems, it is known that the fastest approximations must involve some randomness. Whether every problem solvable in polynomial time by nondeterministic "guessing" can also be solved deterministically in polynomial time is the open question known as the P versus NP problem. There are two large classes of such algorithms:
Monte Carlo algorithms return a correct answer with high probability. E.g. RP is the subclass of these that run in polynomial time.
Las Vegas algorithms always return the correct answer, but their running time is only probabilistically bound, e.g. ZPP.
Reduction of complexity
This technique involves solving a difficult problem by transforming it into a better-known problem for which we have (hopefully) asymptotically optimal algorithms. The goal is to find a reducing algorithm whose complexity is not dominated by the resulting reduced algorithm's. For example, one selection algorithm for finding the median in an unsorted list involves first sorting the list (the expensive portion) and then pulling out the middle element in the sorted list (the cheap portion). This technique is also known as transform and conquer.
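The selection-by-sorting example can be sketched directly, here using C's standard qsort as the better-known reduced problem; the function names are illustrative. For the five-element array {7, 1, 9, 3, 5}, median returns 5.

#include <stdlib.h>   // for qsort() and size_t

// Comparison function for qsort: orders ints ascending.
static int cmpInt(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

// Transform and conquer: reduce "find the median" to "sort", the expensive
// portion, then read off the middle element, the cheap portion.
int median(int xs[], size_t n) {
    qsort(xs, n, sizeof xs[0], cmpInt);
    return xs[n / 2];
}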
Back tracking
In this approach, multiple solutions are built incrementally and abandoned when it is determined that they cannot lead to a valid full solution.
Optimization problems
For optimization problems there is a more specific classification of algorithms; an algorithm for such problems may fall into one or more of the general categories described above as well as into one of the following:
Linear programming
When searching for optimal solutions to a linear function subject to linear equality and inequality constraints, the constraints of the problem can be used directly in producing the optimal solutions. There are algorithms that can solve any problem in this category, such as the popular simplex algorithm. Problems that can be solved with linear programming include the maximum flow problem for directed graphs. If a problem additionally requires that one or more of the unknowns must be an integer then it is classified in integer programming. A linear programming algorithm can solve such a problem if it can be proved that all restrictions for integer values are superficial, i.e., the solutions satisfy these restrictions anyway. In the general case, a specialized algorithm or an algorithm that finds approximate solutions is used, depending on the difficulty of the problem.
Dynamic programming
When a problem shows optimal substructures—meaning the optimal solution to a problem can be constructed from optimal solutions to subproblems—and overlapping subproblems, meaning the same subproblems are used to solve many different problem instances, a quicker approach called dynamic programming avoids recomputing solutions that have already been computed. For example, in the Floyd–Warshall algorithm, the shortest path to a goal from a vertex in a weighted graph can be found by using the shortest path to the goal from all adjacent vertices. Dynamic programming and memoization go together. The main difference between dynamic programming and divide and conquer is that subproblems are more or less independent in divide and conquer, whereas subproblems overlap in dynamic programming. The difference between dynamic programming and straightforward recursion is in caching or memoization of recursive calls. When subproblems are independent and there is no repetition, memoization does not help; hence dynamic programming is not a solution for all complex problems. By using memoization or maintaining a table of subproblems already solved, dynamic programming reduces the exponential nature of many problems to polynomial complexity.
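A standard small illustration (not tied to Floyd–Warshall) is the Fibonacci sequence, whose overlapping subproblems make naive recursion exponential; memoization caches each solved subproblem. The table bound of 93 entries is this sketch's arbitrary choice, kept within the range of a 64-bit integer.

#include <stdio.h>

// Dynamic programming by memoization: each Fibonacci subproblem is solved
// once and cached, turning exponential recursion into linear time.
// Valid for 0 <= n <= 92; larger values overflow a 64-bit integer.
static long long memo[93];   // 0 marks "not yet computed" (fib(n) > 0 for n >= 1)

long long fib(int n) {
    if (n <= 1) return n;
    if (memo[n] != 0) return memo[n];    // overlapping subproblem: reuse it
    memo[n] = fib(n - 1) + fib(n - 2);   // optimal substructure
    return memo[n];
}

int main(void) {
    printf("%lld\n", fib(50));   // 12586269025, computed in about 50 additions
    return 0;
}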
The greedy method
A greedy algorithm is similar to a dynamic programming algorithm in that it works by examining substructures, in this case not of the problem but of a given solution. Such algorithms start with some solution, which may be given or have been constructed in some way, and improve it by making small modifications. For some problems a greedy strategy finds the optimal solution, while for others it stops at a local optimum, that is, at a solution that the algorithm cannot improve but which is not the optimum. The most popular use of greedy algorithms is finding the minimum spanning tree, where the optimal solution is attainable with this method. Kruskal's, Prim's, and Sollin's algorithms solve this optimization problem greedily; Huffman coding is another classic greedy algorithm.
The heuristic method
In optimization problems, heuristic algorithms can be used to find a solution close to the optimal solution in cases where finding the optimal solution is impractical. These algorithms work by getting closer and closer to the optimal solution as they progress. In principle, if run for an infinite amount of time, they will find the optimal solution. Their merit is that they can find a solution very close to the optimal solution in a relatively short time. Such algorithms include local search, tabu search, simulated annealing, and genetic algorithms. Some of them, like simulated annealing, are non-deterministic algorithms while others, like tabu search, are deterministic. When a bound on the error of the non-optimal solution is known, the algorithm is further categorized as an approximation algorithm.
By field of study
Every field of science has its own problems and needs efficient algorithms. Related problems in one field are often studied together. Some example classes are search algorithms, sorting algorithms, merge algorithms, numerical algorithms, graph algorithms, string algorithms, computational geometric algorithms, combinatorial algorithms, medical algorithms, machine learning, cryptography, data compression algorithms and parsing techniques.
Fields tend to overlap with each other, and algorithm advances in one field may improve those of other, sometimes completely unrelated, fields. For example, dynamic programming was invented for optimization of resource consumption in industry but is now used in solving a broad range of problems in many fields.
By complexity
Algorithms can be classified by the amount of time they need to complete compared to their input size:
Constant time: if the time needed by the algorithm is the same, regardless of the input size. E.g. an access to an array element.
Logarithmic time: if the time is a logarithmic function of the input size. E.g. binary search algorithm.
Linear time: if the time is proportional to the input size. E.g. the traverse of a list.
Polynomial time: if the time is a power of the input size. E.g. the bubble sort algorithm has quadratic time complexity.
Exponential time: if the time is an exponential function of the input size. E.g. Brute-force search.
Some problems may have multiple algorithms of differing complexity, while other problems might have no algorithms or no known efficient algorithms. There are also mappings from some problems to other problems. Owing to this, it was found to be more suitable to classify the problems themselves instead of the algorithms into equivalence classes based on the complexity of the best possible algorithms for them.
Continuous algorithms
The adjective "continuous" when applied to the word "algorithm" can mean:
An algorithm operating on data that represents continuous quantities, even though this data is represented by discrete approximations—such algorithms are studied in numerical analysis; or
An algorithm in the form of a differential equation that operates continuously on the data, running on an analog computer.
Algorithm = Logic + Control
In logic programming, algorithms are viewed as having both "a logic component, which specifies the knowledge to be used in solving problems, and a control component, which determines the problem-solving strategies by means of which that knowledge is used."
The Euclidean algorithm illustrates this view of an algorithm. Here is a logic programming representation, using :- to represent "if", and the relation gcd(A, B, C) to represent the function gcd(A, B) = C:
gcd(A, A, A).
gcd(A, B, C) :- A > B, gcd(A-B, B, C).
gcd(A, B, C) :- B > A, gcd(A, B-A, C).
In the logic programming language Ciao the gcd relation can be represented directly in functional notation:
gcd(A, A) := A.
gcd(A, B) := gcd(A-B, B) :- A > B.
gcd(A, B) := gcd(A, B-A) :- B > A.
The Ciao implementation translates the functional notation into a relational representation in Prolog, extracting the embedded subtractions, A-B and B-A, as separate conditions:
gcd(A, A, A).
gcd(A, B, C) :- A > B, A' is A-B, gcd(A', B, C).
gcd(A, B, C) :- B > A, B' is B-A, gcd(A, B', C).
The resulting program has a purely logical (and "declarative") reading, as a recursive (or inductive) definition, which is independent of how the logic is used to solve problems:
The gcd of A and A is A.
The gcd of A and B is C, if A > B and A' is A-B and the gcd of A' and B is C.
The gcd of A and B is C, if B > A and B' is B-A and the gcd of A and B' is C.
Different problem-solving strategies turn the logic into different algorithms. In theory, given a pair of integers A and B, forward (or "bottom-up") reasoning could be used to generate all instances of the gcd relation, terminating when the desired gcd of A and B is generated. Of course, forward reasoning is entirely useless in this case. But in other cases, such as the definition of the Fibonacci sequence and Datalog, forward reasoning can be an efficient problem-solving strategy. (See for example the logic program for computing Fibonacci numbers in Algorithm = Logic + Control.)
In contrast with the inefficiency of forward reasoning in this example, backward (or "top-down") reasoning using SLD resolution turns the logic into the Euclidean algorithm:
To find the gcd C of two given numbers A and B:
If A = B, then C = A.
If A > B, then let A' = A-B and find the gcd of A' and B, which is C.
If B > A, then let B' = B-A and find the gcd of A and B', which is C.
One of the advantages of the logic programming representation of the algorithm is that its purely logical reading makes it easier to verify that the algorithm is correct relative to the standard non-recursive definition of gcd. Here is the standard definition written in Prolog:
gcd(A, B, C) :- divides(C, A), divides(C, B),
forall((divides(D, A), divides(D, B)), D =< C).
divides(C, Number) :-
between(1, Number, C), 0 is Number mod C.
This definition, which is the specification of the Euclidean algorithm, is also executable in Prolog: Backward reasoning treats the specification as the brute-force algorithm that iterates through all of the integers C between 1 and A, checking whether C divides both A and B, and then for each such C iterates again through all of the integers D between 1 and A, until it finds a C such that C is greater than or equal to all of the D that also divide both A and B. Although this algorithm is hopelessly inefficient, it shows that formal specifications can often be written in logic programming form, and they can be executed by Prolog, to check that they correctly represent informal requirements.
Legal issues
Algorithms, by themselves, are not usually patentable. In the United States, a claim consisting solely of simple manipulations of abstract concepts, numbers, or signals does not constitute "processes" (USPTO 2006), so algorithms are not patentable (as in Gottschalk v. Benson). However practical applications of algorithms are sometimes patentable. For example, in Diamond v. Diehr, the application of a simple feedback algorithm to aid in the curing of synthetic rubber was deemed patentable. The patenting of software is controversial, and there are criticized patents involving algorithms, especially data compression algorithms, such as Unisys's LZW patent.
Additionally, some cryptographic algorithms have export restrictions (see export of cryptography).
History: Development of the notion of "algorithm"
Ancient Near East
The earliest evidence of algorithms is found in the Babylonian mathematics of ancient Mesopotamia (modern Iraq). A Sumerian clay tablet found in Shuruppak near Baghdad and dated to c. 2500 BC described the earliest division algorithm. During the Hammurabi dynasty (c. 1800 – c. 1600 BC), Babylonian clay tablets described algorithms for computing formulas. Algorithms were also used in Babylonian astronomy. Babylonian clay tablets describe and employ algorithmic procedures to compute the time and place of significant astronomical events.
Algorithms for arithmetic are also found in ancient Egyptian mathematics, dating back to the Rhind Mathematical Papyrus (c. 1550 BC). Algorithms were later used in ancient Hellenistic mathematics. Two examples are the Sieve of Eratosthenes, which was described in the Introduction to Arithmetic by Nicomachus, and the Euclidean algorithm, which was first described in Euclid's Elements (c. 300 BC).
Discrete and distinguishable symbols
Tally-marks: To keep track of their flocks, their sacks of grain and their money the ancients used tallying: accumulating stones or marks scratched on sticks or making discrete symbols in clay. Through the Babylonian and Egyptian use of marks and symbols, eventually Roman numerals and the abacus evolved (Dilson, p. 16–41). Tally marks appear prominently in unary numeral system arithmetic used in Turing machine and Post–Turing machine computations.
Manipulation of symbols as "place holders" for numbers: algebra
Muhammad ibn Mūsā al-Khwārizmī, a Persian mathematician, wrote the Al-jabr in the 9th century. The terms "algorism" and "algorithm" are derived from the name al-Khwārizmī, while the term "algebra" is derived from the book Al-jabr. In Europe, the word "algorithm" was originally used to refer to the sets of rules and techniques used by Al-Khwarizmi to solve algebraic equations, before later being generalized to refer to any set of rules or techniques. This eventually culminated in Leibniz's notion of the calculus ratiocinator.
Cryptographic algorithms
The first cryptographic algorithm for deciphering encrypted code was developed by Al-Kindi, a 9th-century Arab mathematician, in A Manuscript On Deciphering Cryptographic Messages. He gave the first description of cryptanalysis by frequency analysis, the earliest codebreaking algorithm.
Mechanical contrivances with discrete states
The clock: Bolter credits the invention of the weight-driven clock as "The key invention [of Europe in the Middle Ages]", in particular, the verge escapement that provides us with the tick and tock of a mechanical clock. "The accurate automatic machine" led immediately to "mechanical automata" beginning in the 13th century and finally to "computational machines"—the difference engine and analytical engines of Charles Babbage and Countess Ada Lovelace, mid-19th century. Lovelace is credited with the first creation of an algorithm intended for processing on a computer—Babbage's analytical engine, the first device considered a real Turing-complete computer instead of just a calculator—and is sometimes called "history's first programmer" as a result, though a full implementation of Babbage's second device would not be realized until decades after her lifetime.
Logical machines 1870 – Stanley Jevons' "logical abacus" and "logical machine": The technical problem was to reduce Boolean equations when presented in a form similar to what is now known as Karnaugh maps. Jevons (1880) describes first a simple "abacus" of "slips of wood furnished with pins, contrived so that any part or class of the [logical] combinations can be picked out mechanically ... More recently, however, I have reduced the system to a completely mechanical form, and have thus embodied the whole of the indirect process of inference in what may be called a Logical Machine". His machine came equipped with "certain moveable wooden rods" and "at the foot are 21 keys like those of a piano [etc.] ...". With this machine he could analyze a "syllogism or any other simple logical argument".
This machine he displayed in 1870 before the Fellows of the Royal Society. Another logician John Venn, however, in his 1881 Symbolic Logic, turned a jaundiced eye to this effort: "I have no high estimate myself of the interest or importance of what are sometimes called logical machines ... it does not seem to me that any contrivances at present known or likely to be discovered really deserve the name of logical machines"; see more at Algorithm characterizations. But not to be outdone he too presented "a plan somewhat analogous, I apprehend, to Prof. Jevon's abacus ... [And] [a]gain, corresponding to Prof. Jevons's logical machine, the following contrivance may be described. I prefer to call it merely a logical-diagram machine ... but I suppose that it could do very completely all that can be rationally expected of any logical machine".
Jacquard loom, Hollerith punch cards, telegraphy and telephony – the electromechanical relay: Bell and Newell (1971) indicate that the Jacquard loom (1801), precursor to Hollerith cards (punch cards, 1887), and "telephone switching technologies" were the roots of a tree leading to the development of the first computers. By the mid-19th century the telegraph, the precursor of the telephone, was in use throughout the world, its discrete and distinguishable encoding of letters as "dots and dashes" a common sound. By the late 19th century the ticker tape (c. 1870s) was in use, as was the use of Hollerith cards in the 1890 U.S. census. Then came the teleprinter (c. 1910) with its punched-paper use of Baudot code on tape.
Telephone-switching networks of electromechanical relays (invented 1835) were behind the work of George Stibitz (1937), the inventor of the digital adding device. As he worked in Bell Laboratories, he observed the "burdensome" use of mechanical calculators with gears. "He went home one evening in 1937 intending to test his idea... When the tinkering was over, Stibitz had constructed a binary adding device".
The mathematician Martin Davis observes the particular importance of the electromechanical relay (with its two "binary states" open and closed):
"It was only with the development, beginning in the 1930s, of electromechanical calculators using electrical relays, that machines were built having the scope Babbage had envisioned."
Mathematics during the 19th century up to the mid-20th century
Symbols and rules: In rapid succession, the mathematics of George Boole (1847, 1854), Gottlob Frege (1879), and Giuseppe Peano (1888–1889) reduced arithmetic to a sequence of symbols manipulated by rules. Peano's The principles of arithmetic, presented by a new method (1888) was "the first attempt at an axiomatization of mathematics in a symbolic language".
But van Heijenoort gives Frege (1879) this kudos: Frege's is "perhaps the most important single work ever written in logic ... in which we see a 'formula language', that is a lingua characterica, a language written with special symbols, 'for pure thought', that is, free from rhetorical embellishments ... constructed from specific symbols that are manipulated according to definite rules". The work of Frege was further simplified and amplified by Alfred North Whitehead and Bertrand Russell in their Principia Mathematica (1910–1913).
The paradoxes: At the same time a number of disturbing paradoxes appeared in the literature, in particular, the Burali-Forti paradox (1897), the Russell paradox (1902–03), and the Richard paradox (1905). The resultant considerations led to Kurt Gödel's paper (1931)—he specifically cites the paradox of the liar—that completely reduces rules of recursion to numbers.
Effective calculability: In an effort to solve the Entscheidungsproblem defined precisely by Hilbert in 1928, mathematicians first set about to define what was meant by an "effective method" or "effective calculation" or "effective calculability" (i.e., a calculation that would succeed). In rapid succession the following appeared: the λ-calculus of Alonzo Church, Stephen Kleene and J.B. Rosser; a finely honed definition of "general recursion" from the work of Gödel acting on suggestions of Jacques Herbrand (cf. Gödel's Princeton lectures of 1934) and subsequent simplifications by Kleene; Church's proof that the Entscheidungsproblem was unsolvable; Emil Post's definition of effective calculability as a worker mindlessly following a list of instructions to move left or right through a sequence of rooms and while there either mark or erase a paper or observe the paper and make a yes-no decision about the next instruction; Alan Turing's proof that the Entscheidungsproblem was unsolvable by use of his "a- [automatic-] machine", in effect almost identical to Post's "formulation"; J. Barkley Rosser's definition of "effective method" in terms of "a machine"; Kleene's proposal of a precursor to the "Church thesis" that he called "Thesis I"; and, a few years later, Kleene's renaming of his thesis as "Church's Thesis" and his proposal of "Turing's Thesis".
Emil Post (1936) and Alan Turing (1936–37, 1939)
Emil Post (1936) described the actions of a "computer" (human being) as follows:
"...two concepts are involved: that of a symbol space in which the work leading from problem to answer is to be carried out, and a fixed unalterable set of directions.
His symbol space would be
"a two-way infinite sequence of spaces or boxes ... The problem solver or worker is to move and work in this symbol space, being capable of being in, and operating in but one box at a time. ... a box is to admit of but two possible conditions, i.e., being empty or unmarked, and having a single mark in it, say a vertical stroke.
"One box is to be singled out and called the starting point. ... a specific problem is to be given in symbolic form by a finite number of boxes [i.e., INPUT] being marked with a stroke. Likewise, the answer [i.e., OUTPUT] is to be given in symbolic form by such a configuration of marked boxes...
"A set of directions applicable to a general problem sets up a deterministic process when applied to each specific problem. This process terminates only when it comes to the direction of type (C ) [i.e., STOP]". See more at Post–Turing machine
Alan Turing's work preceded that of Stibitz (1937); it is unknown whether Stibitz knew of the work of Turing. Turing's biographer believed that Turing's use of a typewriter-like model derived from a youthful interest: "Alan had dreamt of inventing typewriters as a boy; Mrs. Turing had a typewriter, and he could well have begun by asking himself what was meant by calling a typewriter 'mechanical'". Given the prevalence at the time of Morse code, telegraphy, ticker tape machines, and teletypewriters, it is quite possible that all were influences on Turing during his youth.
Turing—his model of computation is now called a Turing machine—begins, as did Post, with an analysis of a human computer that he whittles down to a simple set of basic motions and "states of mind". But he continues a step further and creates a machine as a model of computation of numbers.
"Computing is normally done by writing certain symbols on paper. We may suppose this paper is divided into squares like a child's arithmetic book...I assume then that the computation is carried out on one-dimensional paper, i.e., on a tape divided into squares. I shall also suppose that the number of symbols which may be printed is finite...
"The behavior of the computer at any moment is determined by the symbols which he is observing, and his "state of mind" at that moment. We may suppose that there is a bound B to the number of symbols or squares that the computer can observe at one moment. If he wishes to observe more, he must use successive observations. We will also suppose that the number of states of mind which need be taken into account is finite...
"Let us imagine that the operations performed by the computer to be split up into 'simple operations' which are so elementary that it is not easy to imagine them further divided."
Turing's reduction yields the following:
"The simple operations must therefore include:
"(a) Changes of the symbol on one of the observed squares
"(b) Changes of one of the squares observed to another square within L squares of one of the previously observed squares.
"It may be that some of these change necessarily invoke a change of state of mind. The most general single operation must, therefore, be taken to be one of the following:
"(A) A possible change (a) of symbol together with a possible change of state of mind.
"(B) A possible change (b) of observed squares, together with a possible change of state of mind"
"We may now construct a machine to do the work of this computer."
A few years later, Turing expanded his analysis (thesis, definition) with this forceful expression of it:
"A function is said to be "effectively calculable" if its values can be found by some purely mechanical process. Though it is fairly easy to get an intuitive grasp of this idea, it is nevertheless desirable to have some more definite, mathematical expressible definition ... [he discusses the history of the definition pretty much as presented above with respect to Gödel, Herbrand, Kleene, Church, Turing, and Post] ... We may take this statement literally, understanding by a purely mechanical process one which could be carried out by a machine. It is possible to give a mathematical description, in a certain normal form, of the structures of these machines. The development of these ideas leads to the author's definition of a computable function, and to an identification of computability † with effective calculability...
"† We shall use the expression "computable function" to mean a function calculable by a machine, and we let "effectively calculable" refer to the intuitive idea without particular identification with any one of these definitions".
J. B. Rosser (1939) and S. C. Kleene (1943)
J. Barkley Rosser defined an "effective [mathematical] method" in the following manner (italicization added):
"'Effective method' is used here in the rather special sense of a method each step of which is precisely determined and which is certain to produce the answer in a finite number of steps. With this special meaning, three different precise definitions have been given to date. [his footnote #5; see discussion immediately below]. The simplest of these to state (due to Post and Turing) says essentially that an effective method of solving certain sets of problems exists if one can build a machine which will then solve any problem of the set with no human intervention beyond inserting the question and (later) reading the answer. All three definitions are equivalent, so it doesn't matter which one is used. Moreover, the fact that all three are equivalent is a very strong argument for the correctness of any one." (Rosser 1939:225–226)
Rosser's footnote No. 5 references the work of (1) Church and Kleene and their definition of λ-definability, in particular, Church's use of it in his An Unsolvable Problem of Elementary Number Theory (1936); (2) Herbrand and Gödel and their use of recursion, in particular, Gödel's use in his famous paper On Formally Undecidable Propositions of Principia Mathematica and Related Systems I (1931); and (3) Post (1936) and Turing (1936–37) in their mechanism-models of computation.
Stephen C. Kleene defined as his now-famous "Thesis I" known as the Church–Turing thesis. But he did this in the following context (boldface in original):
"12. Algorithmic theories... In setting up a complete algorithmic theory, what we do is to describe a procedure, performable for each set of values of the independent variables, which procedure necessarily terminates and in such manner that from the outcome we can read a definite answer, "yes" or "no," to the question, "is the predicate value true?"" (Kleene 1943:273)
History after 1950
A number of efforts have been directed toward further refinement of the definition of "algorithm", and activity is on-going because of issues surrounding, in particular, foundations of mathematics (especially the Church–Turing thesis) and philosophy of mind (especially arguments about artificial intelligence). For more, see Algorithm characterizations.
See also
Abstract machine
ALGOL
Algorithm engineering
Algorithm characterizations
Algorithmic bias
Algorithmic composition
Algorithmic entities
Algorithmic synthesis
Algorithmic technique
Algorithmic topology
Garbage in, garbage out
Introduction to Algorithms (textbook)
Government by algorithm
List of algorithms
List of algorithm general topics
Regulation of algorithms
Theory of computation
Computability theory
Computational complexity theory
Computational mathematics
Notes
Bibliography
Bell, C. Gordon and Newell, Allen (1971), Computer Structures: Readings and Examples, McGraw–Hill Book Company, New York. .
Includes a bibliography of 56 references.
Cf. Chapter 3 Turing machines where they discuss "certain enumerable sets not effectively (mechanically) enumerable".
Campagnolo, M.L., Moore, C., and Costa, J.F. (2000) An analog characterization of the subrecursive functions. In Proc. of the 4th Conference on Real Numbers and Computers, Odense University, pp. 91–109
Reprinted in The Undecidable, p. 89ff. The first expression of "Church's Thesis". See in particular page 100 (The Undecidable) where he defines the notion of "effective calculability" in terms of "an algorithm", and he uses the word "terminates", etc.
Reprinted in The Undecidable, p. 110ff. Church shows that the Entscheidungsproblem is unsolvable in about 3 pages of text and 3 pages of footnotes.
Davis gives commentary before each article. Papers of Gödel, Alonzo Church, Turing, Rosser, Kleene, and Emil Post are included; those cited in the article are listed here by author's name.
Davis offers concise biographies of Leibniz, Boole, Frege, Cantor, Hilbert, Gödel and Turing with von Neumann as the show-stealing villain. Very brief bios of Joseph-Marie Jacquard, Babbage, Ada Lovelace, Claude Shannon, Howard Aiken, etc.
Yuri Gurevich, Sequential Abstract State Machines Capture Sequential Algorithms, ACM Transactions on Computational Logic, Vol 1, no 1 (July 2000), pp. 77–111. Includes bibliography of 33 sources.
Cf. Chapter "The Spirit of Truth" for a history leading to, and a discussion of, his proof.
Presented to the American Mathematical Society, September 1935. Reprinted in The Undecidable, p. 237ff. Kleene's definition of "general recursion" (known now as mu-recursion) was used by Church in his 1935 paper An Unsolvable Problem of Elementary Number Theory that proved the "decision problem" to be "undecidable" (i.e., a negative result).
Reprinted in The Undecidable, p. 255ff. Kleene refined his definition of "general recursion" and proceeded in his chapter "12. Algorithmic theories" to posit "Thesis I" (p. 274); he would later repeat this thesis (in Kleene 1952:300) and name it "Church's Thesis"(Kleene 1952:317) (i.e., the Church thesis).
Kosovsky, N.K. Elements of Mathematical Logic and its Application to the theory of Subrecursive Algorithms, LSU Publ., Leningrad, 1981
Markov, A.A. (1954), Theory of Algorithms, translated by Jacques J. Schorr-Kon and PST staff. Imprint Moscow, Academy of Sciences of the USSR, 1954 [i.e., Jerusalem, Israel Program for Scientific Translations, 1961; available from the Office of Technical Services, U.S. Dept. of Commerce, Washington]. 444 pp. Added title page in Russian. Translation of Works of the Mathematical Institute, Academy of Sciences of the USSR, vol. 42. Original title: Teoriya algorifmov. [QA248.M2943 Dartmouth College library. U.S. Dept. of Commerce, Office of Technical Services, number OTS.]
Minsky, Marvin (1967), Computation: Finite and Infinite Machines, Prentice-Hall, Englewood Cliffs, NJ. Minsky expands his "...idea of an algorithm – an effective procedure..." in chapter 5.1 Computability, Effective Procedures and Algorithms. Infinite machines.
Post, Emil (1936), "Finite Combinatory Processes, Formulation I", Journal of Symbolic Logic, vol. 1, pp. 103–105. Reprinted in The Undecidable, pp. 289ff. Post defines a simple algorithmic-like process of a man writing marks or erasing marks and going from box to box and eventually halting, as he follows a list of simple instructions. This is cited by Kleene as one source of his "Thesis I", the so-called Church–Turing thesis.
Rosser, J. Barkley (1939), "An Informal Exposition of Proofs of Gödel's Theorem and Church's Theorem", Journal of Symbolic Logic, vol. 4. Reprinted in The Undecidable, p. 223ff. Herein is Rosser's famous definition of "effective method": "...a method each step of which is precisely predetermined and which is certain to produce the answer in a finite number of steps... a machine which will then solve any problem of the set with no human intervention beyond inserting the question and (later) reading the answer" (pp. 225–226, The Undecidable).
Stone, Harold S. (1972), Introduction to Computer Organization and Data Structures, McGraw–Hill, New York. Cf. in particular the first chapter, titled: Algorithms, Turing Machines, and Programs. His succinct informal definition: "...any sequence of instructions that can be obeyed by a robot, is called an algorithm" (p. 4).
Turing, Alan M. (1936–37), "On Computable Numbers, with an Application to the Entscheidungsproblem", Proceedings of the London Mathematical Society, series 2, vol. 42, pp. 230–265. Corrections, ibid, vol. 43 (1937), pp. 544–546. Reprinted in The Undecidable, p. 116ff. Turing's famous paper, written while he was a fellow of King's College, Cambridge.
Turing, Alan M. (1939), "Systems of Logic Based on Ordinals", Proceedings of the London Mathematical Society, series 2, vol. 45, pp. 161–228. Reprinted in The Undecidable, pp. 155ff. Turing's paper that defined "the oracle" was his PhD thesis while at Princeton.
United States Patent and Trademark Office (2006), 2106.02 Mathematical Algorithms: 2100 Patentability, Manual of Patent Examining Procedure (MPEP). Latest revision August 2006.
Zaslavsky, Claudia (1970), "Mathematics of the Yoruba People and of Their Neighbors in Southern Nigeria", The Two-Year College Mathematics Journal, vol. 1, no. 2, pp. 76–99. https://doi.org/10.2307/3027363
Further reading
Knuth, Donald E. (2000). Selected Papers on Analysis of Algorithms . Stanford, California: Center for the Study of Language and Information.
Knuth, Donald E. (2010). Selected Papers on Design of Algorithms . Stanford, California: Center for the Study of Language and Information.
External links
Dictionary of Algorithms and Data Structures – National Institute of Standards and Technology
Algorithm repositories
The Stony Brook Algorithm Repository – State University of New York at Stony Brook
Collected Algorithms of the ACM – Associations for Computing Machinery
The Stanford GraphBase – Stanford University
https://en.wikipedia.org/wiki/Anthophyta
The anthophytes are a paraphyletic grouping of plant taxa bearing flower-like reproductive structures. The group, once thought to be a clade, contained the angiosperms (the extant flowering plants, such as roses and grasses) as well as the Gnetales and the extinct Bennettitales.
Detailed morphological and molecular studies have shown that the group is not actually monophyletic: the proposed floral homologies of the gnetophytes and the angiosperms evolved in parallel. This makes it easier to reconcile molecular clock data suggesting that the angiosperms diverged from the gymnosperms around 320–300 mya.
Some more recent studies have used the word anthophyte to describe a hypothetical group which includes the angiosperms and a variety of extinct seed plant groups (with various suggestions including at least some of the following groups: glossopterids, corystosperms, Petriellales, Pentoxylales, Bennettitales and Caytoniales), but not the Gnetales.
https://en.wikipedia.org/wiki/Mouthwash
Mouthwash, mouth rinse, oral rinse, or mouth bath is a liquid which is held in the mouth passively or swirled around the mouth by contraction of the perioral muscles and/or movement of the head, and may be gargled, where the head is tilted back and the liquid bubbled at the back of the mouth.
Usually mouthwashes are antiseptic solutions intended to reduce the microbial load in the mouth, although other mouthwashes might be given for other reasons such as for their analgesic, anti-inflammatory or anti-fungal action. Additionally, some rinses act as saliva substitutes to neutralize acid and keep the mouth moist in xerostomia (dry mouth). Cosmetic mouthrinses temporarily control or reduce bad breath and leave the mouth with a pleasant taste.
Rinsing with water or mouthwash after brushing with a fluoride toothpaste can reduce the availability of salivary fluoride. This can lower the anti-cavity re-mineralization and antibacterial effects of fluoride. Fluoridated mouthwash may mitigate this effect or, in high concentrations, increase available fluoride, but it is not as cost-effective as leaving the fluoride toothpaste on the teeth after brushing. A group of experts discussing post-brushing rinsing in 2012 found that, although clear guidance is given in many public health advice publications to "spit, avoid rinsing with water/excessive rinsing with water", they believed there was a limited evidence base for this best practice.
Use
Common use involves rinsing the mouth with about 20–50 ml (0.7–1.7 fl oz) of mouthwash. The wash is typically swished or gargled for about half a minute and then spat out. Most companies suggest not drinking water immediately after using mouthwash. In some brands, the expectorate is stained, so that one can see the bacteria and debris.
Mouthwash should not be used immediately after brushing the teeth so as not to wash away the beneficial fluoride residue left from the toothpaste. Similarly, the mouth should not be rinsed out with water after brushing. Patients were told to "spit don't rinse" after toothbrushing as part of a National Health Service campaign in the UK. A fluoride mouthrinse can be used at a different time of the day to brushing.
Gargling is where the head is tilted back, allowing the mouthwash to sit in the back of the mouth while exhaling, causing the liquid to bubble. Gargling is practiced in Japan for perceived prevention of viral infection, commonly with infusions or tea. In some cultures, gargling is usually done in private, typically in a bathroom at a sink, so the liquid can be rinsed away.
Dangerous misuse
Drinking mouthwash can quickly cause serious harm and even death, owing to its high alcohol content and other harmful substances. It is a common cause of death among homeless people during winter months, because a person can feel warmer after drinking it.
Effects
The most commonly used mouthwashes are commercial antiseptics, which are used at home as part of an oral hygiene routine. Mouthwashes combine ingredients to treat a variety of oral conditions. Variations are common, and mouthwash has no standard formulation, so its use and recommendation involves concerns about patient safety. Some manufacturers of mouthwash state that their antiseptic and antiplaque mouthwashes kill the bacterial plaque that causes cavities, gingivitis, and bad breath. It is, however, generally agreed that the use of mouthwash does not eliminate the need for both brushing and flossing. The American Dental Association asserts that regular brushing and proper flossing are enough in most cases, in addition to regular dental check-ups, although they approve many mouthwashes.
For many patients, however, the mechanical methods could be tedious and time-consuming, and, additionally, some local conditions may render them especially difficult. Chemotherapeutic agents, including mouthwashes, could have a key role as adjuncts to daily home care, preventing and controlling supragingival plaque, gingivitis and oral malodor.
Minor and transient side effects of mouthwashes are very common, such as taste disturbance, tooth staining, sensation of a dry mouth, etc. Alcohol-containing mouthwashes may make dry mouth and halitosis worse, as they dry out the mouth. Soreness, ulceration and redness may sometimes occur (e.g., aphthous stomatitis or allergic contact stomatitis) if the person is allergic or sensitive to mouthwash ingredients, such as preservatives, coloring, flavors and fragrances. Such effects might be reduced or eliminated by diluting the mouthwash with water, using a different mouthwash (e.g. saltwater), or foregoing mouthwash entirely.
Prescription mouthwashes are used prior to and after oral surgery procedures, such as tooth extraction, or to treat the pain associated with mucositis caused by radiation therapy or chemotherapy. They are also prescribed for aphthous ulcers, other oral ulcers, and other mouth pain. "Magic mouthwashes" are prescription mouthwashes compounded in a pharmacy from a list of ingredients specified by a doctor. Despite a lack of evidence that prescription mouthwashes are more effective in decreasing the pain of oral lesions, many patients and prescribers continue to use them. There has been only one controlled study to evaluate the efficacy of magic mouthwash; it shows no difference in efficacy between the most common magic-mouthwash formulation, on the one hand, and commercial mouthwashes (such as chlorhexidine) or a saline/baking soda solution, on the other. Current guidelines suggest that saline solution is just as effective as magic mouthwash in pain relief and in shortening the healing time of oral mucositis from cancer therapies.
History
The first known references to mouth rinsing are in Ayurveda, for the treatment of gingivitis. Later, in the Greek and Roman periods, mouth rinsing following mechanical cleansing became common among the upper classes, and Hippocrates recommended a mixture of salt, alum, and vinegar. The Jewish Talmud, dating back about 1,800 years, suggests a cure for gum ailments containing "dough water" and olive oil. The ancient Chinese also gargled salt water, tea and wine as a form of mouthwash after meals, due to the antiseptic properties of those liquids.
Before Europeans came to the Americas, Native North American and Mesoamerican cultures used mouthwashes, often made from plants such as Coptis trifolia. Indeed, Aztec dentistry was more advanced than European dentistry of the age. Peoples of the Americas used salt water mouthwashes for sore throats, and other mouthwashes for problems such as teething and mouth ulcers.
Anton van Leeuwenhoek, the famous 17th century microscopist, discovered living organisms (living, because they were mobile) in deposits on the teeth (what we now call dental plaque). He also found organisms in water from the canal next to his home in Delft. He experimented with samples by adding vinegar or brandy and found that this resulted in the immediate immobilization or killing of the organisms suspended in the water. He then rinsed his own mouth, and that of somebody else, with a mouthwash containing vinegar or brandy, and found that living organisms remained in the dental plaque. He concluded, correctly, that the mouthwash either did not reach, or was not present long enough, to kill the plaque organisms.
In 1892, the German Richard Seifert invented the mouthwash product Odol, which was produced by company founder Karl August Lingner (1861–1916) in Dresden.
That remained the state of affairs until the late 1960s when Harald Loe (at the time a professor at the Royal Dental College in Aarhus, Denmark) demonstrated that a chlorhexidine compound could prevent the build-up of dental plaque. The reason for chlorhexidine's effectiveness is that it strongly adheres to surfaces in the mouth and thus remains present in effective concentrations for many hours.
Since then commercial interest in mouthwashes has been intense and several newer products claim effectiveness in reducing the build-up in dental plaque and the associated severity of gingivitis, in addition to fighting bad breath. Many of these solutions aim to control the volatile sulfur compound–creating anaerobic bacteria that live in the mouth and excrete substances that lead to bad breath and unpleasant mouth taste. For example, the number of mouthwash variants in the United States of America has grown from 15 (1970) to 66 (1998) to 113 (2012).
Research
Research in the field of microbiotas shows that only a limited set of microbes cause tooth decay, with most of the bacteria in the human mouth being harmless. Focused attention on cavity-causing bacteria such as Streptococcus mutans has led to research into new mouthwash treatments that prevent these bacteria from initially growing. While current mouthwash treatments must be used with a degree of frequency to prevent these bacteria from regrowing, future treatments could provide a viable long-term solution.
A clinical trial and laboratory studies have shown that alcohol-containing mouthwash could reduce the growth of Neisseria gonorrhoeae in the pharynx. However, subsequent trials have found that there was no difference in gonorrhoea cases among men using daily mouthwash compared to those who did not use mouthwash for 12 weeks.
Ingredients
Alcohol
Alcohol is added to mouthwash not to destroy bacteria but to act as a carrier agent for essential active ingredients such as menthol, eucalyptol and thymol, which help to penetrate plaque. Sometimes a significant amount of alcohol (up to 27% vol) is added, as a carrier for the flavor, to provide "bite". Because of the alcohol content, it is possible to fail a breathalyzer test after rinsing, although breath alcohol levels return to normal after 10 minutes. In addition, alcohol is a drying agent, which encourages bacterial activity in the mouth, releasing more malodorous volatile sulfur compounds. Therefore, alcohol-containing mouthwash may temporarily worsen halitosis in those who already have it, or, indeed, be the sole cause of halitosis in other individuals.
It is hypothesized that the alcohol in mouthwashes acts as a carcinogen (cancer-inducing agent), although there is no scientific consensus about this. One review argued that such a link exists.
The same researchers also state that the risk of acquiring oral cancer rises almost five times for users of alcohol-containing mouthwash who neither smoke nor drink (with a higher rate of increase for those who do). In addition, the authors highlight side effects from several mainstream mouthwashes that included dental erosion and accidental poisoning of children. The review garnered media attention and conflicting opinions from other researchers. Yinka Ebo of Cancer Research UK disputed the findings, concluding that "there is still not enough evidence to suggest that using mouthwash that contains alcohol will increase the risk of mouth cancer". Studies conducted in 1985, 1995, 2003, and 2012 did not support an association between alcohol-containing mouth rinses and oral cancer. Andrew Penman, chief executive of The Cancer Council New South Wales, called for further research on the matter. In a March 2009 brief, the American Dental Association said "the available evidence does not support a connection between oral cancer and alcohol-containing mouthrinse". Many newer brands of mouthwash are alcohol-free, not just in response to consumer concerns about oral cancer, but also to cater for religious groups who abstain from alcohol consumption.
Benzydamine (analgesic)
In painful oral conditions such as aphthous stomatitis, analgesic mouthrinses (e.g. benzydamine mouthwash, or "Difflam") are sometimes used to ease pain, commonly used before meals to reduce discomfort while eating.
Benzoic acid
Benzoic acid acts as a buffer.
Betamethasone
Betamethasone is sometimes used as an anti-inflammatory, corticosteroid mouthwash. It may be used for severe inflammatory conditions of the oral mucosa such as the severe forms of aphthous stomatitis.
Cetylpyridinium chloride (antiseptic, antimalodor)
Mouthwash containing cetylpyridinium chloride (e.g. 0.05%) is used in some specialized mouthwashes for halitosis. Cetylpyridinium chloride mouthwash has less anti-plaque effect than chlorhexidine and may cause staining of teeth, or sometimes an oral burning sensation or ulceration.
Chlorhexidine digluconate and hexetidine (antiseptic)
Chlorhexidine digluconate is a chemical antiseptic and is used in a 0.05–0.2% solution as a mouthwash. There is no evidence to support that higher concentrations are more effective in controlling dental plaque and gingivitis. A randomized clinical trial conducted at Rabat University in Morocco found better plaque inhibition when a 0.12% chlorhexidine rinse with an alcohol base was used, compared to an alcohol-free 0.1% chlorhexidine mouthrinse.
Chlorhexidine has good substantivity (the ability of a mouthwash to bind to hard and soft tissues in the mouth). It has anti-plaque action, and also some anti-fungal action. It is especially effective against Gram-negative rods. The proportion of Gram-negative rods increase as gingivitis develops, so it is also used to reduce gingivitis. It is sometimes used as an adjunct to prevent dental caries and to treat periodontal disease, although it does not penetrate into periodontal pockets well. Chlorhexidine mouthwash alone is unable to prevent plaque, so it is not a substitute for regular toothbrushing and flossing. Instead, chlorhexidine mouthwash is more effective when used as an adjunctive treatment with toothbrushing and flossing. In the short term, if toothbrushing is impossible due to pain, as may occur in primary herpetic gingivostomatitis, chlorhexidine mouthwash is used as a temporary substitute for other oral hygiene measures. It is not suited for use in acute necrotizing ulcerative gingivitis, however. Rinsing with chlorhexidine mouthwash before and after a tooth extraction may reduce the risk of a dry socket. Other uses of chlorhexidine mouthwash include prevention of oral candidiasis in immunocompromised persons, treatment of denture-related stomatitis, mucosal ulceration/erosions and oral mucosal lesions, general burning sensation and many other uses.
Chlorhexidine mouthwash is known to have minor adverse effects. Chlorhexidine binds to tannins, meaning that prolonged use in persons who consume coffee, tea or red wine is associated with extrinsic staining (i.e. removable staining) of teeth. A systematic review of commercial chlorhexidine products with anti-discoloration systems (ADSs) found that the ADSs were able to reduce tooth staining without affecting the beneficial effects of chlorhexidine. Chlorhexidine mouthwash can also cause taste disturbance or alteration. Chlorhexidine is rarely associated with other issues like overgrowth of enterobacteria in persons with leukemia, desquamation, irritation, and stomatitis of oral mucosa, salivary gland pain and swelling, and hypersensitivity reactions including anaphylaxis.
Hexetidine also has anti-plaque, analgesic, astringent and anti-malodor properties, but is considered an inferior alternative to chlorhexidine.
Edible oils
In traditional Ayurvedic medicine, the use of oil mouthwashes is called "Kavala" ("oil swishing") or "Gandusha", and this practice has more recently been re-marketed by the complementary and alternative medicine industry as "oil pulling". Its promoters claim it works by "pulling out" "toxins", which are known as ama in Ayurvedic medicine, and thereby reducing inflammation. Ayurvedic literature claims that oil pulling is capable of improving oral and systemic health, including a benefit in conditions such as headaches, migraines, diabetes mellitus, asthma, and acne, as well as whitening teeth.
Oil pulling has received little study and there is little evidence to support claims made by the technique's advocates. When compared with chlorhexidine in one small study, it was found to be less effective at reducing oral bacterial load, and the other health claims of oil pulling have failed scientific verification or have not been investigated. There is a report of lipid pneumonia caused by accidental inhalation of the oil during oil pulling.
The mouth is rinsed with approximately one tablespoon of oil for 10–20 minutes then spat out. Sesame oil, coconut oil and ghee are traditionally used, but newer oils such as sunflower oil are also used.
Essential oils
Essential oil constituents with some antibacterial properties include phenolic compounds and monoterpenes such as eucalyptol, eugenol, hinokitiol, menthol, phenol, and thymol.
Essential oils are oils which have been extracted from plants. Mouthwashes based on essential oils could be more effective than traditional mouthcare as anti-gingival treatments. They have been found effective in reducing halitosis, and are being used in several commercial mouthwashes.
Fluoride (anticavity)
Anti-cavity mouthwashes use sodium fluoride to protect against tooth decay. Fluoride-containing mouthwashes are used as prevention for dental caries for individuals who are considered at higher risk for tooth decay, whether due to xerostomia related to salivary dysfunction or side effects of medication, to not drinking fluoridated water, or to being physically unable to care for their oral needs (brushing and flossing), and as treatment for those with dentinal hypersensitivity, gingival recession/ root exposure.
Flavoring agents and xylitol
Flavoring agents include sweeteners such as sorbitol, sucralose, sodium saccharin, and xylitol, which stimulate salivary function through their sweetness and taste and help restore the mouth to a neutral level of acidity.
Xylitol rinses double as a bacterial inhibitor and have been used as a substitute for alcohol to avoid the dryness of mouth associated with alcohol.
Hydrogen peroxide
Hydrogen peroxide can be used as an oxidizing mouthwash (e.g. Peroxyl, 1.5%). It kills anaerobic bacteria, and also has a mechanical cleansing action when it froths as it comes into contact with debris in mouth. It is often used in the short term to treat acute necrotising ulcerative gingivitis. Side effects can occur with prolonged use, including hypertrophy of the lingual papillae.
Lactoperoxidase (saliva substitute)
Enzymes and non-enzymatic proteins, such as lactoperoxidase, lysozyme, and lactoferrin, have been used in mouthwashes (e.g., Biotene) to reduce levels of oral bacteria, and, hence, of the acids produced by these bacteria.
Lidocaine/xylocaine
Oral lidocaine is useful for the treatment of mucositis symptoms (inflammation of mucous membranes) induced by radiation or chemotherapy. There is evidence that lidocaine anesthetic mouthwash has the potential to be systemically absorbed, when it was tested in patients with oral mucositis who underwent a bone marrow transplant.
Methyl salicylate
Methyl salicylate functions as an antiseptic, antiinflammatory, and analgesic agent, a flavoring, and a fragrance. Methyl salicylate has some anti-plaque action, but less than chlorhexidine. Methyl salicylate does not stain teeth.
Nystatin
Nystatin suspension is an antifungal ingredient used for the treatment of oral candidiasis.
Potassium oxalate
A randomized clinical trial found promising results in controlling and reducing dentine hypersensitivity when potassium oxalate mouthwash was used in conjunction with toothbrushing.
Povidone/iodine (PVP-I)
A 2005 study found that gargling three times a day with simple water or with a povidone-iodine solution was effective in preventing upper respiratory infection and decreasing the severity of symptoms if contracted. Other sources attribute the benefit to a simple placebo effect.
PVP-I in general covers "a wider virucidal spectrum, covering both enveloped and nonenveloped viruses, than the other commercially available antiseptics", which includes the novel SARS-CoV-2 virus.
Sanguinarine
Sanguinarine-containing mouthwashes are marketed as anti-plaque and anti-malodor treatments. Sanguinarine is a toxic alkaloid herbal extract, obtained from plants such as Sanguinaria canadensis (bloodroot), Argemone mexicana (Mexican prickly poppy), and others. However, its use is strongly associated with the development of leukoplakia (a white patch in the mouth), usually in the buccal sulcus. This type of leukoplakia has been termed "sanguinaria-associated keratosis", and more than 80% of people with leukoplakia in the vestibule of the mouth have used this substance. Upon stopping contact with the causative substance, the lesions may persist for years. Although this type of leukoplakia may show dysplasia, the potential for malignant transformation is unknown. Ironically, elements within the complementary and alternative medicine industry promote the use of sanguinaria as a therapy for cancer.
Sodium bicarbonate (baking soda)
Sodium bicarbonate is sometimes combined with salt to make a simple homemade mouthwash, indicated for any of the reasons that a saltwater mouthwash might be used. Pre-mixed mouthwashes of 1% sodium bicarbonate and 1.5% sodium chloride in aqueous solution are marketed, although pharmacists will easily be able to produce such a formulation from the base ingredients when required. Sodium bicarbonate mouthwash is sometimes used to remove viscous saliva and to aid visualization of the oral tissues during examination of the mouth.
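Percent concentrations of this kind are weight/volume figures: 1% w/v means 1 g of solute per 100 ml of solution. A minimal arithmetic sketch (the 500 ml batch size below is an assumption for illustration only):

```python
# Percent w/v = grams of solute per 100 ml of solution.
def grams_needed(percent_wv: float, volume_ml: float) -> float:
    return percent_wv * volume_ml / 100.0

volume = 500.0  # ml, an assumed batch size
print(grams_needed(1.0, volume))   # 5.0 g sodium bicarbonate (1% w/v)
print(grams_needed(1.5, volume))   # 7.5 g sodium chloride (1.5% w/v)
```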
Sodium chloride (salt)
Saline has a mechanical cleansing action and an antiseptic action, as it is a hypertonic solution in relation to bacteria, which undergo lysis. The heat of the solution produces a therapeutic increase in blood flow (hyperemia) to the surgical site, promoting healing. Hot saltwater mouthwashes also encourage the draining of pus from dental abscesses. In contrast, if heat is applied on the side of the face (e.g., hot water bottle) rather than inside the mouth, it may cause a dental abscess to drain extra-orally, which is later associated with an area of fibrosis on the face (see cutaneous sinus of dental origin).
Saltwater mouthwashes are also routinely used after oral surgery, to keep food debris out of healing wounds and to prevent infection. Some oral surgeons consider saltwater mouthwashes the mainstay of wound cleanliness after surgery. In dental extractions, hot saltwater mouthbaths should start about 24 hours after a dental extraction. The term mouth bath implies that the liquid is passively held in the mouth, rather than vigorously swilled around (which could dislodge a blood clot). Once the blood clot has stabilized, the mouthwash can be used more vigorously. These mouthwashes tend to be advised for use about 6 times per day, especially after meals (to remove food from the socket).
Sodium lauryl sulfate (foaming agent)
Sodium lauryl sulfate (SLS) is used as a foaming agent in many oral hygiene products, including many mouthwashes. It may be advisable to use mouthwash at least an hour after brushing with toothpaste when the toothpaste contains SLS, since the anionic compounds in the SLS toothpaste can deactivate cationic agents present in the mouthwash.
Sucralfate
Sucralfate is a mucosal coating agent, composed of an aluminum salt of sulfated sucrose. It is not recommended for use in the prevention of oral mucositis in head and neck cancer patients receiving radiotherapy or chemoradiation, due to a lack of efficacy found in a well-designed, randomized controlled trial.
Tetracycline (antibiotic)
Tetracycline is an antibiotic which may sometimes be used as a mouthwash in adults (it causes red staining of teeth in children). It is sometimes used for herpetiform ulceration (an uncommon type of aphthous stomatitis), but prolonged use may lead to oral candidiasis, as the fungal population of the mouth overgrows in the absence of enough competing bacteria. Similarly, minocycline mouthwashes of 0.5% concentration can relieve symptoms of recurrent aphthous stomatitis. Erythromycin is similar.
Tranexamic acid
A 4.8% tranexamic acid solution is sometimes used as an antifibrinolytic mouthwash to prevent bleeding during and after oral surgery in persons with coagulopathies (clotting disorders) or who are taking anticoagulants (blood thinners such as warfarin).
Triclosan
Triclosan is a non-ionic chlorinated bisphenol antiseptic found in some mouthwashes. When used in mouthwash (e.g. 0.03%), there is moderate substantivity, broad-spectrum anti-bacterial action, some anti-fungal action, and a significant anti-plaque effect, especially when combined with a copolymer or zinc citrate. Triclosan does not cause staining of the teeth. The safety of triclosan has been questioned.
Zinc
Astringents like zinc chloride provide a pleasant-tasting sensation and shrink tissues. Zinc, when used in combination with other antiseptic agents, can limit the buildup of tartar.
See also
Sodium fluoride/malic acid
Virucide
External links
Article on Bad-Breath Prevention Products – from MSNBC
Mayo Clinic Q&A on Magic Mouthwash for chemotherapy sores
American Dental Association article on mouthwash
https://en.wikipedia.org/wiki/Asteroid
An asteroid is a minor planet—an object that is neither a true planet nor a comet—that orbits within the inner Solar System. They are rocky, metallic or icy bodies with no atmosphere. Sizes and shapes of asteroids vary significantly, ranging from 1-meter rocks to a dwarf planet almost 1000 km in diameter.
Of the roughly one million known asteroids, the greatest number are located between the orbits of Mars and Jupiter, approximately 2 to 4 AU from the Sun, in the main asteroid belt. Asteroids are generally classified into three types: C-type, M-type, and S-type. These were named after and are generally identified with carbonaceous, metallic, and silicaceous compositions, respectively. The size of asteroids varies greatly; the largest, Ceres, is almost 1,000 km across and qualifies as a dwarf planet. The total mass of all the asteroids combined is only 3% that of Earth's Moon. The majority of main belt asteroids follow slightly elliptical, stable orbits, revolving in the same direction as the Earth and taking from three to six years to complete a full circuit of the Sun.
Asteroids have been historically observed from Earth; the Galileo spacecraft provided the first close observation of an asteroid. Several dedicated missions to asteroids were subsequently launched by NASA and JAXA, with plans for other missions in progress. NASA's NEAR Shoemaker studied Eros, and Dawn observed Vesta and Ceres. JAXA's missions Hayabusa and Hayabusa2 studied and returned samples of Itokawa and Ryugu, respectively. OSIRIS-REx studied Bennu, collecting a sample in 2020 which was delivered back to Earth in 2023. NASA's Lucy, launched in 2021, will study ten different asteroids, two from the main belt and eight Jupiter trojans. Psyche, launched in October 2023, will study a metallic asteroid of the same name.
Near-Earth asteroids can threaten all life on the planet; an asteroid impact event caused the Cretaceous–Paleogene extinction. Various asteroid deflection strategies have been proposed; the Double Asteroid Redirection Test (DART) spacecraft, launched in 2021, intentionally crashed into Dimorphos in September 2022 and successfully altered its orbit.
History of observations
Only one asteroid, 4 Vesta, which has a relatively reflective surface, is normally visible to the naked eye. When favorably positioned, 4 Vesta can be seen in dark skies. Rarely, small asteroids passing close to Earth may be visible to the naked eye for a short amount of time. The Minor Planet Center has data on 1,199,224 minor planets in the inner and outer Solar System, of which about 614,690 have enough information to be given numbered designations.
Discovery of Ceres
In 1772, German astronomer Johann Elert Bode, citing Johann Daniel Titius, published a numerical procession known as the Titius–Bode law (now discredited). Except for an unexplained gap between Mars and Jupiter, Bode's formula seemed to predict the orbits of the known planets. He wrote the following explanation for the existence of a "missing planet":
This latter point seems in particular to follow from the astonishing relation which the known six planets observe in their distances from the Sun. Let the distance from the Sun to Saturn be taken as 100, then Mercury is separated by 4 such parts from the Sun. Venus is 4 + 3 = 7. The Earth 4 + 6 = 10. Mars 4 + 12 = 16. Now comes a gap in this so orderly progression. After Mars there follows a space of 4 + 24 = 28 parts, in which no planet has yet been seen. Can one believe that the Founder of the universe had left this space empty? Certainly not. From here we come to the distance of Jupiter by 4 + 48 = 52 parts, and finally to that of Saturn by 4 + 96 = 100 parts.
Bode's formula predicted another planet would be found with an orbital radius near 2.8 astronomical units (AU), or 420 million km, from the Sun. The Titius–Bode law gained credibility with William Herschel's discovery of Uranus near the predicted distance for a planet beyond Saturn. In 1800, a group headed by Franz Xaver von Zach, editor of the German astronomical journal Monatliche Correspondenz (Monthly Correspondence), sent requests to 24 experienced astronomers (whom he dubbed the "celestial police"), asking that they combine their efforts and begin a methodical search for the expected planet. Although they did not discover Ceres, they later found the asteroids 2 Pallas, 3 Juno and 4 Vesta.
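In Bode's units (Saturn = 100, i.e. tenths of an AU), the progression quoted above is 4 for Mercury and 4 + 3·2^n thereafter. The short sketch below is purely illustrative; it reproduces the quoted sequence, including the 28 (2.8 AU) slot where Ceres was later found.

```python
# The (now-discredited) Titius–Bode progression quoted above.
# Units: Saturn = 100; dividing by 10 gives distances in AU.
def titius_bode(n: int) -> float:
    """n = -1 gives Mercury's 4; n = 0, 1, 2, ... give 4 + 3 * 2**n."""
    return 4.0 if n == -1 else 4.0 + 3.0 * 2 ** n

names = ["Mercury", "Venus", "Earth", "Mars", "(gap)", "Jupiter", "Saturn"]
for n, name in zip(range(-1, 6), names):
    d = titius_bode(n)
    print(f"{name:8s} {d:5.0f}  ->  {d / 10:4.1f} AU")
# The n = 3 term, 28 (2.8 AU), is the "missing planet" slot.
```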
One of the astronomers selected for the search was Giuseppe Piazzi, a Catholic priest at the Academy of Palermo, Sicily. Before receiving his invitation to join the group, Piazzi discovered Ceres on 1 January 1801. He was searching for "the 87th [star] of the Catalogue of the Zodiacal stars of Mr la Caille", but found that "it was preceded by another". Instead of a star, Piazzi had found a moving star-like object, which he first thought was a comet:
The light was a little faint, and of the colour of Jupiter, but similar to many others which generally are reckoned of the eighth magnitude. Therefore I had no doubt of its being any other than a fixed star. [...] The evening of the third, my suspicion was converted into certainty, being assured it was not a fixed star. Nevertheless before I made it known, I waited till the evening of the fourth, when I had the satisfaction to see it had moved at the same rate as on the preceding days.
Piazzi observed Ceres a total of 24 times, the final time on 11 February 1801, when illness interrupted his work. He announced his discovery on 24 January 1801 in letters to only two fellow astronomers, his compatriot Barnaba Oriani of Milan and Bode in Berlin. He reported it as a comet but "since its movement is so slow and rather uniform, it has occurred to me several times that it might be something better than a comet". In April, Piazzi sent his complete observations to Oriani, Bode, and French astronomer Jérôme Lalande. The information was published in the September 1801 issue of the Monatliche Correspondenz.
By this time, the apparent position of Ceres had changed (mostly due to Earth's motion around the Sun) and was too close to the Sun's glare for other astronomers to confirm Piazzi's observations. Toward the end of the year, Ceres should have been visible again, but after such a long time it was difficult to predict its exact position. To recover Ceres, mathematician Carl Friedrich Gauss, then 24 years old, developed an efficient method of orbit determination. In a few weeks, he predicted the path of Ceres and sent his results to von Zach. On 31 December 1801, von Zach and fellow celestial policeman Heinrich W. M. Olbers found Ceres near the predicted position and thus recovered it. At 2.8 AU from the Sun, Ceres appeared to fit the Titius–Bode law almost perfectly; however, when Neptune was discovered in 1846, it was 8 AU closer than the law predicted, leading most astronomers to conclude that the law was a coincidence. Piazzi named the newly discovered object Ceres Ferdinandea, "in honor of the patron goddess of Sicily and of King Ferdinand of Bourbon".
Further search
Three other asteroids (2 Pallas, 3 Juno, and 4 Vesta) were discovered by von Zach's group over the next few years, with Vesta found in 1807. No new asteroids were discovered until 1845. Amateur astronomer Karl Ludwig Hencke started searching for new asteroids in 1830, and fifteen years later, while looking for Vesta, he found the asteroid later named 5 Astraea. It was the first new asteroid discovery in 38 years. Carl Friedrich Gauss was given the honor of naming the asteroid. After this, other astronomers joined in; 15 asteroids were found by the end of 1851. In 1868, when James Craig Watson discovered the 100th asteroid, the French Academy of Sciences engraved the faces of Karl Theodor Robert Luther, John Russell Hind, and Hermann Goldschmidt, the three most successful asteroid-hunters of the time, on a commemorative medallion marking the event.
In 1891, Max Wolf pioneered the use of astrophotography to detect asteroids, which appeared as short streaks on long-exposure photographic plates. This dramatically increased the rate of detection compared with earlier visual methods: Wolf alone discovered 248 asteroids, beginning with 323 Brucia, whereas only slightly more than 300 had been discovered up to that point. It was known that there were many more, but most astronomers did not bother with them, some calling them "vermin of the skies", a phrase variously attributed to Eduard Suess and Edmund Weiss. Even a century later, only a few thousand asteroids were identified, numbered and named.
19th and 20th centuries
In the past, asteroids were discovered by a four-step process. First, a region of the sky was photographed by a wide-field telescope, or astrograph. Pairs of photographs were taken, typically one hour apart. Multiple pairs could be taken over a series of days. Second, the two films or plates of the same region were viewed under a stereoscope. A body in orbit around the Sun would move slightly between the pair of films. Under the stereoscope, the image of the body would seem to float slightly above the background of stars. Third, once a moving body was identified, its location would be measured precisely using a digitizing microscope. The location would be measured relative to known star locations.
These first three steps do not constitute asteroid discovery: the observer has only found an apparition, which receives a provisional designation made up of the year of discovery, a letter representing the half-month of discovery, and finally a letter and a number indicating the discovery's sequential number within that half-month. The last step is sending the locations and time of observations to the Minor Planet Center, where computer programs determine whether an apparition ties together earlier apparitions into a single orbit. If so, the object receives a catalogue number and the observer of the first apparition with a calculated orbit is declared the discoverer, and granted the honor of naming the object subject to the approval of the International Astronomical Union.
Naming
By 1851, the Royal Astronomical Society decided that asteroids were being discovered at such a rapid rate that a different system was needed to categorize or name asteroids. In 1852, when de Gasparis discovered the twentieth asteroid, Benjamin Valz gave it a name and a number designating its rank among asteroid discoveries, 20 Massalia. Sometimes asteroids were discovered and not seen again. So, starting in 1892, new asteroids were listed by the year and a capital letter indicating the order in which the asteroid's orbit was calculated and registered within that specific year. For example, the first two asteroids discovered in 1892 were labeled 1892A and 1892B. However, there were not enough letters in the alphabet for all of the asteroids discovered in 1893, so 1893Z was followed by 1893AA. A number of variations of these methods were tried, including designations that included year plus a Greek letter in 1914. A simple chronological numbering system was established in 1925.
Currently all newly discovered asteroids receive a provisional designation consisting of the year of discovery and an alphanumeric code indicating the half-month of discovery and the sequence within that half-month. Once an asteroid's orbit has been confirmed, it is given a number, and later it may also be given a name. The formal naming convention uses parentheses around the number, e.g. (433) Eros, but dropping the parentheses is quite common. Informally, it is also common to drop the number altogether, or to drop it after the first mention when a name is repeated in running text. In addition, names can be proposed by the asteroid's discoverer, within guidelines established by the International Astronomical Union.
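As a sketch of the provisional-designation scheme just described (the helper function below is hypothetical; the lettering rules it encodes, two letters per month for the half-month, the letter I skipped, and a trailing number counting complete 25-letter cycles, follow the IAU convention):

```python
# Sketch of the IAU provisional-designation scheme described above.
# The letter I is skipped; the trailing number counts complete
# 25-letter cycles of the second letter within the half-month.
LETTERS = "ABCDEFGHJKLMNOPQRSTUVWXYZ"  # no "I"

def provisional_designation(year: int, month: int, day: int, seq: int) -> str:
    """seq is the 1-based order of discovery within the half-month."""
    half_month = (month - 1) * 2 + (0 if day <= 15 else 1)  # 0..23 -> A..Y
    cycle, pos = divmod(seq - 1, 25)
    return f"{year} {LETTERS[half_month]}{LETTERS[pos]}{cycle if cycle else ''}"

print(provisional_designation(2014, 1, 1, 1))     # 2014 AA
print(provisional_designation(1998, 3, 20, 129))  # 1998 FD5
```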
Symbols
The first asteroids to be discovered were assigned iconic symbols like the ones traditionally used to designate the planets. By 1855 there were two dozen asteroid symbols, which often occurred in multiple variants.
In 1851, after the fifteenth asteroid, Eunomia, had been discovered, Johann Franz Encke made a major change in the upcoming 1854 edition of the Berliner Astronomisches Jahrbuch (BAJ, Berlin Astronomical Yearbook). He introduced a disk (circle), a traditional symbol for a star, as the generic symbol for an asteroid. The circle was then numbered in order of discovery to indicate a specific asteroid. The numbered-circle convention was quickly adopted by astronomers, and the next asteroid to be discovered (16 Psyche, in 1852) was the first to be designated in that way at the time of its discovery. However, Psyche was given an iconic symbol as well, as were a few other asteroids discovered over the next few years. 20 Massalia was the first asteroid that was not assigned an iconic symbol, and no iconic symbols were created after the 1855 discovery of 37 Fides.
Terminology
The first discovered asteroid, Ceres, was originally considered a new planet. It was followed by the discovery of other similar bodies, which with the equipment of the time appeared to be points of light like stars, showing little or no planetary disc, though readily distinguishable from stars due to their apparent motions. This prompted the astronomer Sir William Herschel to propose the term asteroid, coined in Greek as ἀστεροειδής, or asteroeidēs, meaning 'star-like, star-shaped', and derived from the Ancient Greek astēr 'star, planet'. In the early second half of the 19th century, the terms asteroid and planet (not always qualified as "minor") were still used interchangeably.
Traditionally, small bodies orbiting the Sun were classified as comets, asteroids, or meteoroids, with anything smaller than one meter across being called a meteoroid. The term asteroid never had a formal definition, with the broader term small Solar System bodies being preferred by the International Astronomical Union (IAU). As no IAU definition exists, asteroid can be defined as "an irregularly shaped rocky body orbiting the Sun that does not qualify as a planet or a dwarf planet under the IAU definitions of those terms".
When found, asteroids were seen as a class of objects distinct from comets, and there was no unified term for the two until small Solar System body was coined in 2006. The main difference between an asteroid and a comet is that a comet shows a coma due to sublimation of near-surface ices by solar radiation. A few objects have ended up being dual-listed because they were first classified as minor planets but later showed evidence of cometary activity. Conversely, some (perhaps all) comets are eventually depleted of their surface volatile ices and become asteroid-like. A further distinction is that comets typically have more eccentric orbits than most asteroids; "asteroids" with notably eccentric orbits are probably dormant or extinct comets.
For almost two centuries, from the discovery of Ceres in 1801 until the discovery of the first centaur, 2060 Chiron in 1977, all known asteroids spent most of their time at or within the orbit of Jupiter, though a few such as 944 Hidalgo ventured far beyond Jupiter for part of their orbit. When astronomers started finding more small bodies that permanently resided further out than Jupiter, now called centaurs, they numbered them among the traditional asteroids. There was debate over whether these objects should be considered asteroids or given a new classification. Then, when the first trans-Neptunian object (other than Pluto), 15760 Albion, was discovered in 1992, and especially when large numbers of similar objects started turning up, new terms were invented to sidestep the issue: Kuiper-belt object, trans-Neptunian object, scattered-disc object, and so on. They inhabit the cold outer reaches of the Solar System where ices remain solid and comet-like bodies are not expected to exhibit much cometary activity; if centaurs or trans-Neptunian objects were to venture close to the Sun, their volatile ices would sublimate, and traditional approaches would classify them as comets and not asteroids.
The innermost of these are the Kuiper-belt objects, called "objects" partly to avoid the need to classify them as asteroids or comets. They are thought to be predominantly comet-like in composition, though some may be more akin to asteroids. Furthermore, most do not have the highly eccentric orbits associated with comets, and the ones so far discovered are larger than traditional comet nuclei. (The much more distant Oort cloud is hypothesized to be the main reservoir of dormant comets.) Other recent observations, such as the analysis of the cometary dust collected by the Stardust probe, are increasingly blurring the distinction between comets and asteroids, suggesting "a continuum between asteroids and comets" rather than a sharp dividing line.
The minor planets beyond Jupiter's orbit are sometimes also called "asteroids", especially in popular presentations. However, it is becoming increasingly common for the term asteroid to be restricted to minor planets of the inner Solar System. Therefore, this article will restrict itself for the most part to the classical asteroids: objects of the asteroid belt, Jupiter trojans, and near-Earth objects.
When the IAU introduced the class small Solar System bodies in 2006 to include most objects previously classified as minor planets and comets, they created the class of dwarf planets for the largest minor planets—those that have enough mass to have become ellipsoidal under their own gravity. According to the IAU, "the term 'minor planet' may still be used, but generally, the term 'Small Solar System Body' will be preferred." Currently only the largest object in the asteroid belt, Ceres, at about 940 km across, has been placed in the dwarf planet category.
Formation
Many asteroids are the shattered remnants of planetesimals, bodies within the young Sun's solar nebula that never grew large enough to become planets. It is thought that planetesimals in the asteroid belt evolved much like the rest of the objects in the solar nebula until Jupiter neared its current mass, at which point excitation from orbital resonances with Jupiter ejected over 99% of planetesimals in the belt. Simulations and a discontinuity in spin rate and spectral properties suggest that asteroids larger than approximately 120 km in diameter accreted during that early era, whereas smaller bodies are fragments from collisions between asteroids during or after the Jovian disruption. Ceres and Vesta grew large enough to melt and differentiate, with heavy metallic elements sinking to the core, leaving rocky minerals in the crust.
In the Nice model, many Kuiper-belt objects are captured in the outer asteroid belt, at distances greater than 2.6 AU. Most were later ejected by Jupiter, but those that remained may be the D-type asteroids, and possibly include Ceres.
Distribution within the Solar System
Various dynamical groups of asteroids have been discovered orbiting in the inner Solar System. Their orbits are perturbed by the gravity of other bodies in the Solar System and by the Yarkovsky effect. Significant populations include:
Asteroid belt
The majority of known asteroids orbit within the asteroid belt between the orbits of Mars and Jupiter, generally in relatively low-eccentricity (i.e. not very elongated) orbits. This belt is estimated to contain between 1.1 and 1.9 million asteroids larger than 1 km in diameter, and millions of smaller ones. These asteroids may be remnants of the protoplanetary disk, and in this region the accretion of planetesimals into planets during the formative period of the Solar System was prevented by large gravitational perturbations by Jupiter.
Contrary to popular imagery, the asteroid belt is mostly empty. The asteroids are spread over such a large volume that reaching an asteroid without aiming carefully would be improbable. Nonetheless, hundreds of thousands of asteroids are currently known, and the total number ranges in the millions or more, depending on the lower size cutoff. Over 200 asteroids are known to be larger than 100 km, and a survey in the infrared wavelengths has shown that the asteroid belt has between 700,000 and 1.7 million asteroids with a diameter of 1 km or more. The absolute magnitudes of most of the known asteroids are between 11 and 19, with the median at about 16.
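An absolute magnitude H can be translated into an approximate diameter once an albedo is assumed, using the standard relation D = 1329 km × 10^(−H/5) / √p. A minimal sketch (the albedo of 0.1 is an assumed, typical value; real asteroid albedos range from a few percent to several tens of percent):

```python
import math

# Standard conversion from absolute magnitude H to diameter D,
# given an assumed geometric albedo p: D = 1329 km * 10**(-H/5) / sqrt(p).
def diameter_km(H: float, albedo: float = 0.1) -> float:
    return 1329.0 / math.sqrt(albedo) * 10 ** (-H / 5)

for H in (11, 16, 19):  # the range and median quoted above
    print(f"H = {H}: ~{diameter_km(H):.1f} km")  # ~26.5, ~2.7, ~0.7 km
```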
The total mass of the asteroid belt is estimated to be about 2.39×10^21 kg, which is just 3% of the mass of the Moon; the mass of the Kuiper Belt and Scattered Disk is over 100 times as large. The four largest objects, Ceres, Vesta, Pallas, and Hygiea, account for an estimated 62% of the belt's total mass, with 39% accounted for by Ceres alone.
Trojans
Trojans are populations that share an orbit with a larger planet or moon, but do not collide with it because they orbit in one of the two Lagrangian points of stability, L4 and L5, which lie 60° ahead of and behind the larger body.
In the Solar System, most known trojans share the orbit of Jupiter. They are divided into the Greek camp at L4 (ahead of Jupiter) and the Trojan camp at L5 (trailing Jupiter). More than a million Jupiter trojans larger than one kilometer are thought to exist, of which more than 7,000 are currently catalogued. In other planetary orbits only nine Mars trojans, 28 Neptune trojans, two Uranus trojans, and two Earth trojans have been found to date. A temporary Venus trojan is also known. Numerical orbital dynamics stability simulations indicate that Saturn and Uranus probably do not have any primordial trojans.
Near-Earth asteroids
Near-Earth asteroids, or NEAs, are asteroids that have orbits that pass close to that of Earth. Asteroids that actually cross Earth's orbital path are known as Earth-crossers. A total of 28,772 near-Earth asteroids are known; 878 of them have a diameter of one kilometer or larger.
A small number of NEAs are extinct comets that have lost their volatile surface materials, although having a faint or intermittent comet-like tail does not necessarily result in a classification as a near-Earth comet, making the boundaries somewhat fuzzy. The rest of the near-Earth asteroids are driven out of the asteroid belt by gravitational interactions with Jupiter.
Many asteroids have natural satellites (minor-planet moons). At least 85 NEAs are known to have one moon or more, including three known to have two moons. The asteroid 3122 Florence, one of the largest potentially hazardous asteroids with a diameter of about 4.5 km, has two moons measuring roughly 100–300 m across, which were discovered by radar imaging during the asteroid's 2017 approach to Earth.
Near-Earth asteroids are divided into groups based on their semi-major axis (a), perihelion distance (q), and aphelion distance (Q), as summarized in the sketch after this list:
The Atiras or Apoheles have orbits strictly inside Earth's orbit: an Atira asteroid's aphelion distance (Q) is smaller than Earth's perihelion distance (0.983 AU). That is, Q < 0.983 AU, which implies that the asteroid's semi-major axis is also less than 0.983 AU.
The Atens have a semi-major axis of less than 1 AU and cross Earth's orbit. Mathematically, a < 1.0 AU and Q > 0.983 AU. (0.983 AU is Earth's perihelion distance.)
The Apollos have a semi-major axis of more than 1 AU and cross Earth's orbit. Mathematically, a > 1.0 AU and q < 1.017 AU. (1.017 AU is Earth's aphelion distance.)
The Amors have orbits strictly outside Earth's orbit: an Amor asteroid's perihelion distance (q) is greater than Earth's aphelion distance (1.017 AU). Amor asteroids are also near-Earth objects, so q < 1.3 AU. In summary, 1.017 AU < q < 1.3 AU. (This implies that the asteroid's semi-major axis (a) is also larger than 1.017 AU.) Some Amor asteroid orbits cross the orbit of Mars.
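A minimal sketch of this classification, using the thresholds above (Earth's perihelion 0.983 AU, Earth's aphelion 1.017 AU, and the conventional near-Earth cutoff q < 1.3 AU); the function and the sample orbits are illustrative only:

```python
# Sketch of the near-Earth asteroid groups defined above (all values in AU).
def classify_nea(a: float, q: float, Q: float) -> str:
    """a: semi-major axis, q: perihelion distance, Q: aphelion distance."""
    if Q < 0.983:
        return "Atira"   # entirely inside Earth's orbit
    if a < 1.0:
        return "Aten"    # Earth-crossing with a < 1 AU
    if q < 1.017:
        return "Apollo"  # Earth-crossing with a > 1 AU
    if q < 1.3:
        return "Amor"    # outside Earth's orbit but still near-Earth
    return "not a near-Earth asteroid"

print(classify_nea(a=0.74, q=0.50, Q=0.98))  # Atira
print(classify_nea(a=1.08, q=0.22, Q=1.94))  # Apollo
```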
Martian moons
It is unclear whether the Martian moons Phobos and Deimos are captured asteroids or formed as a result of an impact event on Mars. Phobos and Deimos both have much in common with carbonaceous C-type asteroids, with spectra, albedo, and density very similar to those of C- or D-type asteroids. Based on this similarity, one hypothesis is that both moons may be captured main-belt asteroids. Both moons have very circular orbits which lie almost exactly in Mars's equatorial plane; hence a capture origin requires a mechanism for circularizing the initially highly eccentric orbit and adjusting its inclination into the equatorial plane, most probably by a combination of atmospheric drag and tidal forces, although it is not clear whether sufficient time was available for this to occur for Deimos. Capture also requires dissipation of energy, and the current Martian atmosphere is too thin to capture a Phobos-sized object by atmospheric braking. Geoffrey A. Landis has pointed out that capture could have occurred if the original body was a binary asteroid that separated under tidal forces.
Phobos could be a second-generation Solar System object that coalesced in orbit after Mars formed, rather than forming concurrently out of the same birth cloud as Mars.
Another hypothesis is that Mars was once surrounded by many Phobos- and Deimos-sized bodies, perhaps ejected into orbit around it by a collision with a large planetesimal. The high porosity of the interior of Phobos (based on the density of 1.88 g/cm3, voids are estimated to comprise 25 to 35 percent of Phobos's volume) is inconsistent with an asteroidal origin. Observations of Phobos in the thermal infrared suggest a composition containing mainly phyllosilicates, which are well known from the surface of Mars. The spectra are distinct from those of all classes of chondrite meteorites, again pointing away from an asteroidal origin. Both sets of findings support an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's moon.
Characteristics
Size distribution
Asteroids vary greatly in size, from almost 1,000 km for the largest down to rocks just 1 meter across, below which an object is classified as a meteoroid. The three largest are very much like miniature planets: they are roughly spherical, have at least partly differentiated interiors, and are thought to be surviving protoplanets. The vast majority, however, are much smaller and are irregularly shaped; they are thought to be either battered planetesimals or fragments of larger bodies.
The dwarf planet Ceres is by far the largest asteroid, with a diameter of about 940 km. The next largest are 4 Vesta and 2 Pallas, both with diameters of just over 500 km. Vesta is the brightest of the four main-belt asteroids that can, on occasion, be visible to the naked eye. On some rare occasions, a near-Earth asteroid may briefly become visible without technical aid; see 99942 Apophis.
The mass of all the objects of the asteroid belt, lying between the orbits of Mars and Jupiter, is estimated to be about 2.39×10^21 kg, ≈ 3.25% of the mass of the Moon. Of this, Ceres comprises about 9.4×10^20 kg, some 40% of the total. Adding in the next three most massive objects, Vesta (11%), Pallas (8.5%), and Hygiea (3–4%), brings this figure up to a bit over 60%, whereas the next seven most-massive asteroids bring the total up to 70%. The number of asteroids increases rapidly as their individual masses decrease.
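A quick arithmetic check of the fractions quoted above (the figures are approximate and rounded):

```python
# Arithmetic check of the quoted mass fractions (approximate values).
fractions = {"Ceres": 0.40, "Vesta": 0.11, "Pallas": 0.085, "Hygiea": 0.035}
print(f"four largest: {sum(fractions.values()):.0%}")  # ~63%, "a bit over 60%"

belt_mass = 2.39e21  # kg, about 3.25% of the Moon's 7.35e22 kg
print(f"Ceres: ~{belt_mass * fractions['Ceres']:.2e} kg")  # ~9.6e+20 kg
```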
The number of asteroids decreases markedly with increasing size. Although the size distribution generally follows a power law, there are 'bumps' at about 5 km and 100 km, where more asteroids than expected from such a curve are found. Most asteroids larger than approximately 120 km in diameter are primordial (surviving from the accretion epoch), whereas most smaller asteroids are products of fragmentation of primordial asteroids. The primordial population of the main belt was probably 200 times what it is today.
Largest asteroids
The three largest objects in the asteroid belt, Ceres, Vesta, and Pallas, are intact protoplanets that share many characteristics common to planets, and are atypical compared to the majority of irregularly shaped asteroids. The fourth-largest asteroid, Hygiea, appears nearly spherical although it may have an undifferentiated interior, like the majority of asteroids. The four largest asteroids constitute half the mass of the asteroid belt.
Ceres is the only asteroid that appears to have a plastic shape under its own gravity and hence the only one that is a dwarf planet. Its absolute magnitude, around 3.32, is much brighter than that of any other asteroid, and it may possess a surface layer of ice. Like the planets, Ceres is differentiated: it has a crust, a mantle and a core. No meteorites from Ceres have been found on Earth.
Vesta, too, has a differentiated interior, though it formed inside the Solar System's frost line, and so is devoid of water; its composition is mainly of basaltic rock with minerals such as olivine. Aside from the large crater at its southern pole, Rheasilvia, Vesta also has an ellipsoidal shape. Vesta is the parent body of the Vestian family and other V-type asteroids, and is the source of the HED meteorites, which constitute 5% of all meteorites on Earth.
Pallas is unusual in that, like Uranus, it rotates on its side, with its axis of rotation tilted at high angles to its orbital plane. Its composition is similar to that of Ceres: high in carbon and silicon, and perhaps partially differentiated. Pallas is the parent body of the Palladian family of asteroids.
Hygiea is the largest carbonaceous asteroid and, unlike the other largest asteroids, lies relatively close to the plane of the ecliptic. It is the largest member and presumed parent body of the Hygiean family of asteroids. Because there is no sufficiently large crater on the surface to be the source of that family, as there is on Vesta, it is thought that Hygiea may have been completely disrupted in the collision that formed the Hygiean family and recoalesced after losing a bit less than 2% of its mass. Observations taken with the Very Large Telescope's SPHERE imager in 2017 and 2018 revealed that Hygiea has a nearly spherical shape, which is consistent either with its being in hydrostatic equilibrium (or formerly having been in hydrostatic equilibrium) or with its having been disrupted and recoalesced.
Internal differentiation of large asteroids is possibly related to their lack of natural satellites, as satellites of main belt asteroids are mostly believed to form from collisional disruption, creating a rubble pile structure.
Rotation
Measurements of the rotation rates of large asteroids in the asteroid belt show that there is an upper limit. Very few asteroids with a diameter larger than 100 meters have a rotation period of less than 2.2 hours. For asteroids rotating faster than approximately this rate, the centrifugal force at the surface is greater than the gravitational force, so any loose surface material would be flung out. However, a solid object should be able to rotate much more rapidly. This suggests that most asteroids with a diameter over 100 meters are rubble piles formed through the accumulation of debris after collisions between asteroids.
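Where the 2.2-hour figure comes from can be sketched with a short calculation: for a strengthless spinning sphere, loose material at the equator is shed once the centrifugal acceleration ω²R exceeds the self-gravity (4/3)πGρR, giving a critical period P = √(3π/Gρ) that depends only on the bulk density. A minimal sketch, assuming a density of 2 g/cm³ (an illustrative value, not one from the text):

```python
import math

# Critical spin period of a strengthless sphere: setting the
# centrifugal acceleration w^2*R equal to self-gravity
# (4/3)*pi*G*rho*R gives P = sqrt(3*pi/(G*rho)).

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def critical_period_hours(density_kg_m3):
    return math.sqrt(3 * math.pi / (G * density_kg_m3)) / 3600

print(f"{critical_period_hours(2000):.1f} h")
# -> ~2.3 h for an assumed 2 g/cm^3, near the observed 2.2 h limit
```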
Color
Asteroids become darker and redder with age due to space weathering. However, evidence suggests most of the color change occurs rapidly, in the first hundred thousand years, limiting the usefulness of spectral measurements for determining the age of asteroids.
Surface features
Except for the "big four" (Ceres, Pallas, Vesta, and Hygiea), asteroids are likely to be broadly similar in appearance, if irregular in shape. 253 Mathilde is a rubble pile saturated with craters with diameters the size of the asteroid's radius. Earth-based observations of 511 Davida, one of the largest asteroids after the big four, reveal a similarly angular profile, suggesting it is also saturated with radius-size craters. Medium-sized asteroids such as Mathilde and 243 Ida that have been observed up close also reveal a deep regolith covering the surface. Of the big four, Pallas and Hygiea are practically unknown. Vesta has compression fractures encircling a radius-size crater at its south pole but is otherwise a spheroid.
The Dawn spacecraft revealed that Ceres has a heavily cratered surface, but with fewer large craters than expected. Models based on the formation of the current asteroid belt had suggested Ceres should possess 10 to 15 craters larger than in diameter. The largest confirmed crater on Ceres, Kerwan Basin, is across. The most likely reason for this is viscous relaxation of the crust slowly flattening out larger impacts.
Composition
Asteroids are classified by their characteristic reflectance spectra, with the majority falling into three main groups: C-type, M-type, and S-type. These were named after and are generally identified with carbonaceous (carbon-rich), metallic, and silicaceous (stony) compositions, respectively. The physical composition of asteroids is varied and in most cases poorly understood. Ceres appears to be composed of a rocky core covered by an icy mantle, while Vesta is thought to have a nickel-iron core, olivine mantle, and basaltic crust. Thought to be the largest undifferentiated asteroid, 10 Hygiea seems to have a uniformly primitive composition of carbonaceous chondrite, but it may actually be a differentiated asteroid that was globally disrupted by an impact and then reassembled. Other asteroids appear to be the remnant cores or mantles of proto-planets, high in rock and metal. Most small asteroids are believed to be piles of rubble held together loosely by gravity, although the largest are probably solid. Some asteroids have moons or are co-orbiting binaries: rubble piles, moons, binaries, and scattered asteroid families are thought to be the results of collisions that disrupted a parent asteroid, or possibly a planet.
In the main asteroid belt, there appear to be two primary populations of asteroid: a dark, volatile-rich population, consisting of the C-type and P-type asteroids, with albedos less than 0.10 and densities under , and a dense, volatile-poor population, consisting of the S-type and M-type asteroids, with albedos over 0.15 and densities greater than 2.7 g/cm3. Within these populations, larger asteroids are denser, presumably due to compression. There appears to be minimal macro-porosity (interstitial vacuum) in the score of asteroids with masses greater than .
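As a sketch of the two-population split described above, a toy classifier using the quoted albedo and density thresholds (the density bound missing from the text for the dark population is simply omitted):

```python
# A toy classifier for the two main-belt populations described above.
# Thresholds are the albedo/density boundaries quoted in the text.

def population(albedo, density_g_cm3):
    if albedo < 0.10:
        return "dark, volatile-rich (C/P-type)"
    if albedo > 0.15 and density_g_cm3 > 2.7:
        return "dense, volatile-poor (S/M-type)"
    return "intermediate / unclassified"

print(population(0.06, 1.9))   # -> dark, volatile-rich (C/P-type)
print(population(0.20, 3.0))   # -> dense, volatile-poor (S/M-type)
```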
Composition is calculated from three primary sources: albedo, surface spectrum, and density. The last can only be determined accurately by observing the orbits of moons the asteroid might have. So far, every asteroid with moons has turned out to be a rubble pile, a loose conglomeration of rock and metal that may be half empty space by volume. The investigated asteroids are as large as 280 km in diameter, and include 121 Hermione (268×186×183 km) and 87 Sylvia (384×262×232 km). Few asteroids are larger than 87 Sylvia, and none of them have moons. The fact that such large asteroids as Sylvia may be rubble piles, presumably due to disruptive impacts, has important consequences for the formation of the Solar System: computer simulations of collisions involving solid bodies show them destroying each other as often as merging, but colliding rubble piles are more likely to merge. This means that the cores of the planets could have formed relatively quickly.
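The density determination described above reduces to Kepler's third law plus a volume model. A minimal sketch using Sylvia's dimensions from the text; the moon's orbital radius and period below are illustrative placeholder values, not measured orbital elements:

```python
import math

# Mass from a satellite's orbit via Kepler's third law,
# M = 4*pi^2*a^3 / (G*T^2), then bulk density from a
# triaxial-ellipsoid volume model.

G = 6.674e-11  # m^3 kg^-1 s^-2

def mass_from_moon(a_m, period_s):
    return 4 * math.pi**2 * a_m**3 / (G * period_s**2)

def ellipsoid_volume_m3(a_km, b_km, c_km):
    # semi-axes are half the overall dimensions; 1 km^3 = 1e9 m^3
    return (4 / 3) * math.pi * (a_km / 2) * (b_km / 2) * (c_km / 2) * 1e9

m = mass_from_moon(1.35e6, 3.65 * 86400)  # hypothetical moon orbit
v = ellipsoid_volume_m3(384, 262, 232)    # 87 Sylvia's dimensions, from above
print(f"bulk density ~ {m / v / 1000:.2f} g/cm^3")
# -> ~1.2 g/cm^3: low enough to imply large voids, consistent
#    with the half-empty rubble pile described above
```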
Water
Scientists hypothesize that some of the first water brought to Earth was delivered by asteroid impacts after the collision that produced the Moon. In 2009, the presence of water ice was confirmed on the surface of 24 Themis using NASA's Infrared Telescope Facility. The surface of the asteroid appears completely covered in ice. As this ice layer sublimates, it may be replenished by a reservoir of ice under the surface. Organic compounds were also detected on the surface. The presence of ice on 24 Themis lends this hypothesis plausibility.
In October 2013, water was detected on an extrasolar body for the first time, on an asteroid orbiting the white dwarf GD 61. On 22 January 2014, European Space Agency (ESA) scientists reported the detection, for the first definitive time, of water vapor on Ceres, the largest object in the asteroid belt. The detection was made by using the far-infrared abilities of the Herschel Space Observatory. The finding is unexpected because comets, not asteroids, are typically considered to "sprout jets and plumes". According to one of the scientists, "The lines are becoming more and more blurred between comets and asteroids."
Findings have shown that solar winds can react with the oxygen in the upper layer of asteroids and create water. It has been estimated that "every cubic metre of irradiated rock could contain up to 20 litres"; the study was conducted using atom probe tomography, and the figures were obtained for the S-type asteroid Itokawa.
Acfer 049, a meteorite discovered in Algeria in 1990, was shown in 2019 to have an ultraporous lithology (UPL): a porous texture that could have formed through the removal of ice that once filled the pores, suggesting that UPLs "represent fossils of primordial ice".
Organic compounds
Asteroids contain traces of amino acids and other organic compounds, and some speculate that asteroid impacts may have seeded the early Earth with the chemicals necessary to initiate life, or may have even brought life itself to Earth (an event called "panspermia"). In August 2011, a report, based on NASA studies with meteorites found on Earth, was published suggesting DNA and RNA components (adenine, guanine and related organic molecules) may have been formed on asteroids and comets in outer space.
In November 2019, scientists reported detecting, for the first time, sugar molecules, including ribose, in meteorites, suggesting that chemical processes on asteroids can produce some fundamentally essential bio-ingredients important to life, and supporting the notion of an RNA world prior to a DNA-based origin of life on Earth, and possibly, as well, the notion of panspermia.
Classification
Asteroids are commonly categorized according to two criteria: the characteristics of their orbits, and features of their reflectance spectrum.
Orbital classification
Many asteroids have been placed in groups and families based on their orbital characteristics. Apart from the broadest divisions, it is customary to name a group of asteroids after the first member of that group to be discovered. Groups are relatively loose dynamical associations, whereas families are tighter and result from the catastrophic break-up of a large parent asteroid sometime in the past. Families are more common and easier to identify within the main asteroid belt, but several small families have been reported among the Jupiter trojans. Main belt families were first recognized by Kiyotsugu Hirayama in 1918 and are often called Hirayama families in his honor.
About 30–35% of the bodies in the asteroid belt belong to dynamical families, each thought to have a common origin in a past collision between asteroids. A family has also been associated with the plutoid dwarf planet .
Some asteroids have unusual horseshoe orbits that are co-orbital with Earth or another planet. Examples are 3753 Cruithne and . The first instance of this type of orbital arrangement was discovered between Saturn's moons Epimetheus and Janus. Sometimes these horseshoe objects temporarily become quasi-satellites for a few decades or a few hundred years, before returning to their earlier status. Both Earth and Venus are known to have quasi-satellites.
Such objects, if associated with Earth or Venus or even hypothetically Mercury, are a special class of Aten asteroids. However, such objects could be associated with the outer planets as well.
Spectral classification
In 1975, an asteroid taxonomic system based on color, albedo, and spectral shape was developed by Chapman, Morrison, and Zellner. These properties are thought to correspond to the composition of the asteroid's surface material. The original classification system had three categories: C-types for dark carbonaceous objects (75% of known asteroids), S-types for stony (silicaceous) objects (17% of known asteroids) and U for those that did not fit into either C or S. This classification has since been expanded to include many other asteroid types. The number of types continues to grow as more asteroids are studied.
The two most widely used taxonomies are the Tholen classification and the SMASS classification. The former was proposed in 1984 by David J. Tholen and was based on data collected from an eight-color asteroid survey performed in the 1980s. This resulted in 14 asteroid categories. In 2002, the Small Main-Belt Asteroid Spectroscopic Survey resulted in a modified version of the Tholen taxonomy with 24 different types. Both systems have three broad categories of C, S, and X asteroids, where X consists mostly of metallic asteroids, such as the M-type. There are also several smaller classes.
The proportion of known asteroids falling into the various spectral types does not necessarily reflect the proportion of all asteroids that are of that type; some types are easier to detect than others, biasing the totals.
Problems
Originally, spectral designations were based on inferences of an asteroid's composition. However, the correspondence between spectral class and composition is not always very good, and a variety of classifications are in use. This has led to significant confusion. Although asteroids of different spectral classifications are likely to be composed of different materials, there are no assurances that asteroids within the same taxonomic class are composed of the same (or similar) materials.
Active asteroids
Active asteroids are objects that have asteroid-like orbits but show comet-like visual characteristics. That is, they show comae, tails, or other visual evidence of mass-loss (like a comet), but their orbit remains within Jupiter's orbit (like an asteroid). These bodies were originally designated main-belt comets (MBCs) in 2006 by astronomers David Jewitt and Henry Hsieh, but this name implies they are necessarily icy in composition like a comet and that they only exist within the main-belt, whereas the growing population of active asteroids shows that this is not always the case.
The first active asteroid discovered was 7968 Elst–Pizarro. It was discovered (as an asteroid) in 1979, but was then found to have a tail by Eric Elst and Guido Pizarro in 1996 and given the cometary designation 133P/Elst–Pizarro. Another notable object is 311P/PanSTARRS: observations made by the Hubble Space Telescope revealed that it had six comet-like tails. The tails are suspected to be streams of material ejected as a result of the rubble-pile asteroid spinning fast enough to shed material.
By smashing into the asteroid Dimorphos, NASA's Double Asteroid Redirection Test spacecraft made it an active asteroid. Scientists had proposed that some active asteroids are the result of impact events, but no one had ever observed the activation of an asteroid. The DART mission activated Dimorphos under precisely known and carefully observed impact conditions, enabling the detailed study of the formation of an active asteroid for the first time. Observations show that Dimorphos lost approximately 1 million kilograms after the collision. The impact produced a dust plume that temporarily brightened the Didymos system and developed a -long dust tail that persisted for several months.
Observation and exploration
Until the age of space travel, objects in the asteroid belt could only be observed with large telescopes, their shapes and terrain remaining a mystery. The best modern ground-based telescopes and the Earth-orbiting Hubble Space Telescope can only resolve a small amount of detail on the surfaces of the largest asteroids. Limited information about the shapes and compositions of asteroids can be inferred from their light curves (variation in brightness during rotation) and their spectral properties. Sizes can be estimated by timing the lengths of star occultations (when an asteroid passes directly in front of a star). Radar imaging can yield good information about asteroid shapes and orbital and rotational parameters, especially for near-Earth asteroids. Spacecraft flybys can provide much more data than any ground- or space-based observations, and sample-return missions give insights into regolith composition.
Ground-based observations
As asteroids are rather small and faint objects, the data that can be obtained from ground-based observations (GBO) are limited. By means of ground-based optical telescopes, the visual magnitude can be obtained; when converted into absolute magnitude, it gives a rough estimate of the asteroid's size. Light-curve measurements can also be made by GBO; when collected over a long period of time, they allow an estimate of the rotational period, the pole orientation (sometimes), and a rough estimate of the asteroid's shape. Spectral data (both visible-light and near-infrared spectroscopy) give information about the object's composition, which is used to classify the observed asteroids. Such observations are limited in that they provide information only about the thin surface layer (up to several micrometers). As planetologist Patrick Michel writes:
Mid- to thermal-infrared observations, along with polarimetry measurements, are probably the only data that give some indication of actual physical properties. Measuring the heat flux of an asteroid at a single wavelength gives an estimate of the dimensions of the object; these measurements have lower uncertainty than measurements of the reflected sunlight in the visible-light spectral region. If the two measurements can be combined, both the effective diameter and the geometric albedo—the latter being a measure of the brightness at zero phase angle, that is, when illumination comes from directly behind the observer—can be derived. In addition, thermal measurements at two or more wavelengths, plus the brightness in the visible-light region, give information on the thermal properties. The thermal inertia, which is a measure of how fast a material heats up or cools off, of most observed asteroids is lower than the bare-rock reference value but greater than that of the lunar regolith; this observation indicates the presence of an insulating layer of granular material on their surface. Moreover, there seems to be a trend, perhaps related to the gravitational environment, that smaller objects (with lower gravity) have a small regolith layer consisting of coarse grains, while larger objects have a thicker regolith layer consisting of fine grains. However, the detailed properties of this regolith layer are poorly known from remote observations. Moreover, the relation between thermal inertia and surface roughness is not straightforward, so one needs to interpret the thermal inertia with caution.
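The conversion from brightness to size mentioned above is commonly expressed through the standard relation D(km) = 1329/√p_V × 10^(−H/5), linking diameter D, geometric albedo p_V, and absolute magnitude H. A minimal sketch; the albedo used for Ceres here is an assumed illustrative value:

```python
# Standard absolute-magnitude-to-diameter conversion:
# D(km) = 1329 / sqrt(p_V) * 10**(-H/5)

def diameter_km(h_mag, albedo):
    return 1329 / albedo**0.5 * 10 ** (-h_mag / 5)

# Ceres: H ~ 3.32 (quoted earlier); p_V ~ 0.09 assumed for illustration
print(f"{diameter_km(3.32, 0.09):.0f} km")
# -> ~960 km, close to Ceres's actual size
```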
Near-Earth asteroids that come into close vicinity of the planet can be studied in more detail with radar, which provides information about the asteroid's surface (for example, it can reveal the presence of craters and boulders). Such observations have been conducted by the Arecibo Observatory in Puerto Rico (305-meter dish) and the Goldstone Observatory in California (70-meter dish). Radar observations can also be used for accurate determination of the orbital and rotational dynamics of observed objects.
Space-based observations
Both space- and ground-based observatories have conducted asteroid search programs; the space-based searches are expected to detect more objects because there is no atmosphere to interfere and because they can observe larger portions of the sky. NEOWISE observed more than 100,000 asteroids of the main belt, while the Spitzer Space Telescope observed more than 700 near-Earth asteroids. These observations determined rough sizes for the majority of observed objects, but provided limited detail about surface properties (such as regolith depth and composition, angle of repose, cohesion, and porosity).
Asteroids have also been studied by the Hubble Space Telescope, for example by tracking colliding asteroids in the main belt, observing the break-up of an asteroid, observing an active asteroid with six comet-like tails, and observing asteroids chosen as targets of dedicated missions.
Space probe missions
According to Patrick Michel:
The internal structure of asteroids is inferred only from indirect evidence: bulk densities measured by spacecraft, the orbits of natural satellites in the case of asteroid binaries, and the drift of an asteroid's orbit due to the Yarkovsky thermal effect. A spacecraft near an asteroid is perturbed enough by the asteroid's gravity to allow an estimate of the asteroid's mass. The volume is then estimated using a model of the asteroid's shape. Mass and volume allow the derivation of the bulk density, whose uncertainty is usually dominated by the errors made on the volume estimate. The internal porosity of asteroids can be inferred by comparing their bulk density with that of their assumed meteorite analogues; dark asteroids seem to be more porous (>40%) than bright ones. The nature of this porosity is unclear.
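The remark that volume errors dominate the density uncertainty follows from first-order error propagation, σ_ρ/ρ ≈ √((σ_M/M)² + (σ_V/V)²). A minimal sketch with illustrative (not measured) error figures:

```python
import math

# First-order propagation of relative errors into the bulk density.
# The 2% mass and 10% volume errors are illustrative values chosen
# to show the volume term dominating.

def density_rel_err(mass_rel_err, volume_rel_err):
    return math.sqrt(mass_rel_err**2 + volume_rel_err**2)

print(f"{density_rel_err(0.02, 0.10):.1%}")
# -> ~10.2%: almost entirely set by the volume uncertainty
```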
Dedicated missions
The first asteroid to be photographed in close-up was 951 Gaspra in 1991, followed in 1993 by 243 Ida and its moon Dactyl, all of which were imaged by the Galileo probe en route to Jupiter. Other asteroids briefly visited by spacecraft en route to other destinations include 9969 Braille (by Deep Space 1 in 1999), 5535 Annefrank (by Stardust in 2002), 2867 Šteins and 21 Lutetia (by the Rosetta probe in 2008), and 4179 Toutatis (by China's lunar orbiter Chang'e 2, which flew within in 2012).
The first dedicated asteroid probe was NASA's NEAR Shoemaker, which photographed 253 Mathilde in 1997, before entering into orbit around 433 Eros, finally landing on its surface in 2001. It was the first spacecraft to successfully orbit and land on an asteroid. From September to November 2005, the Japanese Hayabusa probe studied 25143 Itokawa in detail and returned samples of its surface to Earth on 13 June 2010, the first asteroid sample-return mission. In 2007, NASA launched the Dawn spacecraft, which orbited 4 Vesta for a year, and observed the dwarf planet Ceres for three years.
Hayabusa2, a probe launched by JAXA in 2014, orbited its target asteroid 162173 Ryugu for more than a year and took samples that were delivered to Earth in 2020. The spacecraft is now on an extended mission and is expected to arrive at a new target in 2031.
NASA launched OSIRIS-REx in 2016, a sample-return mission to the asteroid 101955 Bennu. In 2021, the probe departed the asteroid with a sample from its surface, which was delivered to Earth in September 2023. The spacecraft continues on an extended mission, designated OSIRIS-APEX, to explore the near-Earth asteroid Apophis in 2029.
In 2021, NASA launched Double Asteroid Redirection Test (DART), a mission to test technology for defending Earth against potential hazardous objects. DART deliberately crashed into the minor-planet moon Dimorphos of the double asteroid Didymos in September 2022 to assess the potential of a spacecraft impact to deflect an asteroid from a collision course with Earth. In October, NASA declared DART a success, confirming it had shortened Dimorphos' orbital period around Didymos by about 32 minutes.
Planned missions
Currently, several asteroid-dedicated missions are planned by NASA, JAXA, ESA, and CNSA.
NASA's Lucy, launched in 2021, will visit eight asteroids: one from the main belt and seven Jupiter trojans. It is the first mission to the trojans; the main mission will begin in 2027.
NASA's Psyche, launched in October 2023, will study the large metallic asteroid of the same name, and will arrive there in 2029.
ESA's Hera, planned for launch in 2024, will study the results of the DART impact. It will measure the size and morphology of the crater, and the momentum transferred by the impact, to determine the efficiency of the deflection produced by DART.
JAXA's DESTINY+ is a mission for a flyby of the Geminids meteor shower parent body 3200 Phaethon, as well as various minor bodies. Its launch is planned for 2024.
CNSA's Tianwen-2 is planned to launch in 2025. It will use solar electric propulsion to explore the co-orbital near-Earth asteroid 469219 Kamoʻoalewa and the active asteroid 311P/PanSTARRS. The spacecraft will collect samples of the regolith of Kamoʻoalewa.
Asteroid mining
The concept of asteroid mining was proposed in the 1970s. Matt Anderson defines successful asteroid mining as "the development of a mining program that is both financially self-sustaining and profitable to its investors". It has been suggested that asteroids might be used as a source of materials that may be rare or exhausted on Earth, or materials for constructing space habitats. Materials that are heavy and expensive to launch from Earth may someday be mined from asteroids and used for space manufacturing and construction.
As resource depletion on Earth becomes more real, the idea of extracting valuable elements from asteroids and returning these to Earth for profit, or using space-based resources to build solar-power satellites and space habitats, becomes more attractive. Hypothetically, water processed from ice could refuel orbiting propellant depots.
From the astrobiological perspective, asteroid prospecting could provide scientific data for the search for extraterrestrial intelligence (SETI). Some astrophysicists have suggested that if advanced extraterrestrial civilizations employed asteroid mining long ago, the hallmarks of these activities might be detectable.
Mining Ceres is also considered a possibility. As the largest body in the asteroid belt, Ceres could become the main base and transport hub for future asteroid mining infrastructure, allowing mineral resources to be transported to Mars, the Moon, and Earth. Because of its small escape velocity combined with large amounts of water ice, it also could serve as a source of water, fuel, and oxygen for ships going through and beyond the asteroid belt. Transportation from Mars or the Moon to Ceres would be even more energy-efficient than transportation from Earth to the Moon.
Threats to Earth
There is increasing interest in identifying asteroids whose orbits cross Earth's, and that could, given enough time, collide with Earth. The three most important groups of near-Earth asteroids are the Apollos, Amors, and Atens.
The near-Earth asteroid 433 Eros had been discovered as long ago as 1898, and the 1930s brought a flurry of similar objects. In order of discovery, these were: 1221 Amor, 1862 Apollo, 2101 Adonis, and finally 69230 Hermes, which approached within 0.005 AU of Earth in 1937. Astronomers began to realize the possibilities of Earth impact.
Two events in later decades increased the alarm: the increasing acceptance of the Alvarez hypothesis that an impact event resulted in the Cretaceous–Paleogene extinction, and the 1994 observation of Comet Shoemaker-Levy 9 crashing into Jupiter. The U.S. military also declassified the information that its military satellites, built to detect nuclear explosions, had detected hundreds of upper-atmosphere impacts by objects ranging from one to ten meters across.
All of these considerations helped spur the launch of highly efficient surveys, consisting of charge-coupled device (CCD) cameras and computers directly connected to telescopes. It has been estimated that 89% to 96% of near-Earth asteroids one kilometer or larger in diameter have been discovered. A list of teams using such systems includes:
Lincoln Near-Earth Asteroid Research (LINEAR)
Near-Earth Asteroid Tracking (NEAT)
Spacewatch
Lowell Observatory Near-Earth-Object Search (LONEOS)
Catalina Sky Survey (CSS)
Pan-STARRS
NEOWISE
Asteroid Terrestrial-impact Last Alert System (ATLAS)
Campo Imperatore Near-Earth Object Survey (CINEOS)
Japanese Spaceguard Association
Asiago-DLR Asteroid Survey (ADAS)
The LINEAR system alone has discovered 147,132 asteroids. Among the surveys, 19,266 near-Earth asteroids have been discovered, including almost 900 more than in diameter.
In April 2018, the B612 Foundation reported "It is 100 percent certain we'll be hit [by a devastating asteroid], but we're not 100 percent sure when." In June 2018, the National Science and Technology Council warned that the United States is unprepared for an asteroid impact event, and has developed and released the "National Near-Earth Object Preparedness Strategy Action Plan" to better prepare. According to expert testimony in the United States Congress in 2013, NASA would require at least five years of preparation before a mission to intercept an asteroid could be launched.
The United Nations declared 30 June to be International Asteroid Day to educate the public about asteroids. The date of International Asteroid Day commemorates the anniversary of the Tunguska asteroid impact over Siberia, on 30 June 1908.
Chicxulub impact
The Chicxulub crater is an impact crater buried underneath the Yucatán Peninsula in Mexico. Its center is offshore near the communities of Chicxulub Puerto and Chicxulub Pueblo, after which the crater is named. It was formed when a large asteroid, about in diameter, struck the Earth. The crater is estimated to be in diameter and in depth. It is one of the largest confirmed impact structures on Earth, and the only one whose peak ring is intact and directly accessible for scientific research.
In the late 1970s, geologist Walter Alvarez and his father, Nobel Prize–winning scientist Luis Walter Alvarez, put forth their theory that the Cretaceous–Paleogene extinction was caused by an impact event. The main evidence of such an impact was contained in a thin layer of clay present in the K–Pg boundary in Gubbio, Italy. The Alvarezes and colleagues reported that it contained an abnormally high concentration of iridium, a chemical element rare on Earth but common in asteroids. Iridium levels in this layer were as much as 160 times above the background level. It was hypothesized that the iridium was spread into the atmosphere when the impactor was vaporized and settled across the Earth's surface among other material thrown up by the impact, producing the layer of iridium-enriched clay. At the time, consensus was not settled on what caused the Cretaceous–Paleogene extinction and the boundary layer, with theories including a nearby supernova, climate change, or a geomagnetic reversal. The Alvarezes' impact hypothesis was rejected by many paleontologists, who believed that the lack of fossils found close to the K–Pg boundary—the "three-meter problem"—suggested a more gradual die-off of fossil species.
There is broad consensus that the Chicxulub impactor was an asteroid with a carbonaceous chondrite composition, rather than a comet. The impactor was around in diameter—large enough that, if set at sea level, it would have reached taller than Mount Everest.
Asteroid deflection strategies
Various collision avoidance techniques have different trade-offs with respect to metrics such as overall performance, cost, failure risks, operations, and technology readiness. There are various methods for changing the course of an asteroid/comet. These can be differentiated by various types of attributes such as the type of mitigation (deflection or fragmentation), energy source (kinetic, electromagnetic, gravitational, solar/thermal, or nuclear), and approach strategy (interception, rendezvous, or remote station).
Strategies fall into two basic sets: fragmentation and delay. Fragmentation concentrates on rendering the impactor harmless by fragmenting it and scattering the fragments so that they miss the Earth or are small enough to burn up in the atmosphere. Delay exploits the fact that both the Earth and the impactor are in orbit. An impact occurs when both reach the same point in space at the same time, or more correctly when some point on Earth's surface intersects the impactor's orbit when the impactor arrives. Since the Earth is approximately 12,750 km in diameter and moves at approx. 30 km per second in its orbit, it travels a distance of one planetary diameter in about 425 seconds, or slightly over seven minutes. Delaying, or advancing the impactor's arrival by times of this magnitude can, depending on the exact geometry of the impact, cause it to miss the Earth.
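The 425-second figure above is simple arithmetic, reproduced here as a sketch:

```python
# How long Earth takes to move one of its own diameters along
# its orbit, i.e. the minimum shift in an impactor's arrival
# time that can turn a hit into a miss.

earth_diameter_km = 12_750
orbital_speed_km_s = 30

crossing_time_s = earth_diameter_km / orbital_speed_km_s
print(f"{crossing_time_s:.0f} s ~= {crossing_time_s / 60:.1f} min")
# -> 425 s, slightly over seven minutes, as stated above
```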
"Project Icarus" was one of the first projects designed in 1967 as a contingency plan in case of collision with 1566 Icarus. The plan relied on the new Saturn V rocket, which did not make its first flight until after the report had been completed. Six Saturn V rockets would be used, each launched at variable intervals from months to hours away from impact. Each rocket was to be fitted with a single 100-megaton nuclear warhead as well as a modified Apollo Service Module and uncrewed Apollo Command Module for guidance to the target. The warheads would be detonated 30 meters from the surface, deflecting or partially destroying the asteroid. Depending on the subsequent impacts on the course or the destruction of the asteroid, later missions would be modified or cancelled as needed. The "last-ditch" launch of the sixth rocket would be 18 hours prior to impact.
Fiction
Asteroids and the asteroid belt are a staple of science fiction stories. Asteroids play several potential roles in science fiction: as places human beings might colonize, resources for extracting minerals, hazards encountered by spacecraft traveling between two other points, and as a threat to life on Earth or other inhabited planets, dwarf planets, and natural satellites by potential impact.
See also
List of asteroid close approaches to Earth
List of exceptional asteroids
Lost minor planet
Meanings of minor-planet names
Notes
References
Further reading
External links
NASA Asteroid and Comet Watch site
|
https://en.wikipedia.org/wiki/ABBA
|
ABBA ( , ; formerly named Björn & Benny, Agnetha & Anni-Frid or Björn & Benny, Agnetha & Frida) are a Swedish pop supergroup formed in Stockholm in 1972 by Agnetha Fältskog, Björn Ulvaeus, Benny Andersson, and Anni-Frid Lyngstad. The group's name is an acronym of the first letters of their first names arranged as a palindrome. They are one of the most popular and successful musical groups of all time, and are one of the best-selling music acts in the history of popular music, topping the charts worldwide from 1974 to 1982, and in 2022.
In 1974, ABBA were Sweden's first winner of the Eurovision Song Contest with the song "Waterloo", which in 2005 was chosen as the best song in the competition's history as part of the 50th anniversary celebration of the contest. During the band's main active years, it consisted of two married couples: Fältskog and Ulvaeus, and Lyngstad and Andersson. As their popularity increased, their personal lives suffered, eventually resulting in the collapse of both marriages. The relationship changes were reflected in the group's music, with later compositions featuring darker and more introspective lyrics. After ABBA disbanded in December 1982, Andersson and Ulvaeus continued their success writing music for multiple audiences, including stage, musicals and movies, while Fältskog and Lyngstad pursued solo careers.
Ten years after the group broke up, a compilation, ABBA Gold, was released, becoming a worldwide best-seller. In 1999, ABBA's music was adapted into Mamma Mia!, a stage musical that toured worldwide and, as of April 2022, is still among the top-ten longest-running productions on both Broadway (closed in 2015) and the West End (still running). A film of the same name, released in 2008, became the highest-grossing film in the United Kingdom that year. A sequel, Mamma Mia! Here We Go Again, was released in 2018.
In 2016, the group reunited and started working on a digital avatar concert tour. Newly recorded songs were announced in 2018. Voyage, their first new album in 40 years, was released on 5 November 2021 to positive critical reviews and strong sales in numerous countries. ABBA Voyage, a concert residency featuring ABBA as virtual avatars, opened in May 2022 in London.
ABBA are among the best-selling music artists in history, with record sales estimated at between 150 million and 385 million worldwide, and were ranked the third-best-selling singles artists in the United Kingdom, with a total of 11.3 million singles sold by 3 November 2012. In May 2023, ABBA were awarded the BRIT Billion Award, which celebrates those who have surpassed the milestone of one billion UK streams in their career. ABBA were the first group from a non-English-speaking country to achieve consistent success in the charts of English-speaking countries, including the United Kingdom, Australia, the United States, the Republic of Ireland, Canada, New Zealand and South Africa. They are the best-selling Swedish band of all time and the best-selling band originating in continental Europe. ABBA had eight consecutive number-one albums in the UK. The group also enjoyed significant success in Latin America and recorded a collection of their hit songs in Spanish. ABBA were inducted into the Vocal Group Hall of Fame in 2002, and into the Rock and Roll Hall of Fame in 2010, the first recording artists to receive the latter honour from outside an Anglophone country. In 2015, their song "Dancing Queen" was inducted into the Recording Academy's Grammy Hall of Fame.
History
1958–1970: Before ABBA
Member origins and collaboration
Benny Andersson (born 16 December 1946 in Stockholm, Sweden) became (at age 18) a member of a popular Swedish pop-rock group, the Hep Stars, that performed, among other things, covers of international hits. The Hep Stars were known as "the Swedish Beatles". They also set up Hep House, their equivalent of Apple Corps. Andersson played the keyboard and eventually started writing original songs for his band, many of which became major hits, including "No Response", which hit number three in 1965, and "Sunny Girl", "Wedding", and "Consolation", all of which hit number one in 1966. Andersson also had a fruitful songwriting collaboration with Lasse Berghagen, with whom he wrote his first Svensktoppen entry, "Sagan om lilla Sofie" ("The tale of Little Sophie") in 1968.
Björn Ulvaeus (born 25 April 1945 in Gothenburg, Sweden) also began his musical career at the age of 18 (as a singer and guitarist), when he fronted the Hootenanny Singers, a popular Swedish folk–skiffle group. Ulvaeus started writing English-language songs for his group and even had a brief solo career alongside. The Hootenanny Singers and the Hep Stars sometimes crossed paths while touring. In June 1966, Ulvaeus and Andersson decided to write a song together. Their first attempt was "Isn't It Easy to Say", a song that was later recorded by the Hep Stars. Stig Anderson was the manager of the Hootenanny Singers and founder of the Polar Music label. He saw potential in the collaboration, and encouraged them to write more. The two also began playing occasionally with the other's bands on stage and on record, although it was not until 1969 that the pair wrote and produced some of their first real hits together: "Ljuva sextital" ("Sweet Sixties"), recorded by Brita Borg, and the Hep Stars' 1969 hit "Speleman" ("Fiddler").
Andersson wrote and submitted the song "Hej, Clown" for Melodifestivalen 1969, the national festival to select the Swedish entry to the Eurovision Song Contest. The song tied for first place, but re-voting relegated Andersson's song to second place. On that occasion Andersson briefly met his future spouse, singer Anni-Frid Lyngstad, who also participated in the contest. A month later, the two had become a couple. As their respective bands began to break up during 1969, Andersson and Ulvaeus teamed up and recorded their first album together in 1970, called Lycka ("Happiness"), which included original songs sung by both men. Their partners were often present in the recording studio, and sometimes added backing vocals; Fältskog even co-wrote a song with the two. Ulvaeus still occasionally recorded and performed with the Hootenanny Singers until the middle of 1974, and Andersson took part in producing their records.
Anni-Frid "Frida" Lyngstad (born 15 November 1945 in Bjørkåsen in Ballangen, Norway) sang from the age of 13 with various dance bands, and worked mainly in a jazz-oriented cabaret style. She also formed her own band, the Anni-Frid Four. In the middle of 1967, she won a national talent competition with "En ledig dag" ("A Day Off"), a Swedish version of the bossa nova song "A Day in Portofino", which is included in the EMI compilation Frida 1967–1972. The first prize was a recording contract with EMI Sweden and to perform live on the most popular TV shows in the country. This TV performance, among many others, is included in the -hour documentary Frida – The DVD. Lyngstad released several schlager style singles on EMI with mixed success. When Benny Andersson started to produce her recordings in 1971, she had her first number-one single, "Min egen stad" ("My Own Town"), written by Benny and featuring all the future ABBA members on backing vocals. Lyngstad toured and performed regularly in the folkpark circuit and made appearances on radio and TV. She had a second number-one single with "Man Vill Ju Leva Lite Dessemellan" in late 1972. She had met Ulvaeus briefly in 1963 during a talent contest, and Fältskog during a TV show in early 1968.
Lyngstad linked up with her future bandmates in 1969. On 1 March 1969, she participated in the Melodifestival, where she met Andersson for the first time. A few weeks later they met again during a concert tour in southern Sweden and they soon became a couple. Andersson produced her single "Peter Pan" in September 1969—her first collaboration with Benny & Björn, as they had written the song. Andersson would then produce Lyngstad's debut studio album, Frida, which was released in March 1971. Lyngstad also played in several revues and cabaret shows in Stockholm between 1969 and 1973. After ABBA formed, she recorded another successful album in 1975, Frida ensam, which included the original Swedish rendition of "Fernando", a hit on the Swedish radio charts before the English version was released by ABBA.
Agnetha Fältskog (born 5 April 1950 in Jönköping, Sweden) sang with a local dance band headed by Bernt Enghardt, who sent a demo recording of the band to Karl-Gerhard Lundkvist. The demo tape featured a song written and sung by Agnetha: "Jag var så kär" ("I Was So in Love"). Lundkvist was so impressed with her voice that he was convinced she would be a star. After going through considerable effort to locate the singer, he arranged for Agnetha to come to Stockholm and record two of her own songs. This led to Agnetha, at the age of 18, having a number-one record in Sweden with a self-composed song, which later went on to sell over 80,000 copies. She was soon noticed by critics and songwriters as a talented singer and songwriter of schlager-style songs. Fältskog's main inspiration in her early years was singers such as Connie Francis. Along with her own compositions, she recorded covers of foreign hits and performed them on tours in Swedish folkparks. Most of her biggest hits were self-composed, which was quite unusual for a female singer in the 1960s. Agnetha released four solo LPs between 1968 and 1971 and had many successful singles in the Swedish charts.
During filming of a Swedish TV special in May 1969, Fältskog met Ulvaeus and they married on 6 July 1971. Fältskog and Ulvaeus eventually were involved in each other's recording sessions, and soon even Andersson and Lyngstad added backing vocals to Fältskog's third studio album, Som jag är ("As I Am") (1970). In 1972, Fältskog starred as Mary Magdalene in the original Swedish production of Jesus Christ Superstar and attracted favourable reviews. Between 1967 and 1975, Fältskog released five studio albums.
First live performance and the start of "Festfolket"
An attempt at combining their talents occurred in April 1970 when the two couples went on holiday together to the island of Cyprus. What started as singing for fun on the beach ended up as an improvised live performance in front of the United Nations soldiers stationed on the island. Andersson and Ulvaeus were at this time recording their first album together, Lycka, which was to be released in September 1970. Fältskog and Lyngstad added backing vocals on several tracks during June, and the idea of their working together saw them launch a stage act, "Festfolket" (which translates from Swedish to "Party People" and in pronunciation also "engaged couples"), on 1 November 1970 in Gothenburg.
The cabaret show attracted generally negative reviews, except for the performance of the Andersson and Ulvaeus hit "Hej, gamle man" ("Hello, Old Man")–the first Björn and Benny recording to feature all four. They also performed solo numbers from respective albums, but the lukewarm reception convinced the foursome to shelve plans for working together for the time being, and each soon concentrated on individual projects again.
First record together "Hej, gamle man"
"Hej, gamle man", a song about an old Salvation Army soldier, became the quartet's first hit. The record was credited to Björn & Benny and reached number five on the sales charts and number one on Svensktoppen, staying on the latter chart (which was not a chart linked to sales or airplay) for 15 weeks.
It was during 1971 that the four artists began working together more, adding vocals to the others' recordings. Fältskog, Andersson and Ulvaeus toured together in May, while Lyngstad toured on her own. Frequent recording sessions brought the foursome closer together during the summer.
1970–1973: Forming the group
After the 1970 release of Lycka, two more singles credited to "Björn & Benny" were released in Sweden, "Det kan ingen doktor hjälpa" ("No Doctor Can Help with That") and "Tänk om jorden vore ung" ("Imagine If Earth Was Young"), with more prominent vocals by Fältskog and Lyngstad–and moderate chart success.
Fältskog and Ulvaeus, now married, started performing together with Andersson on a regular basis at the Swedish folkparks in the middle of 1971.
Stig Anderson, founder and owner of Polar Music, was determined to break into the mainstream international market with music by Andersson and Ulvaeus. "One day the pair of you will write a song that becomes a worldwide hit," he predicted. Stig Anderson encouraged Ulvaeus and Andersson to write a song for Melodifestivalen, and after two rejected entries in 1971, Andersson and Ulvaeus submitted their new song "Säg det med en sång" ("Say It with a Song") for the 1972 contest, choosing newcomer Lena Anderson to perform. The song came in third place, encouraging Stig Anderson, and became a hit in Sweden.
The first signs of foreign success came as a surprise, as the Andersson and Ulvaeus single "She's My Kind of Girl" was released through Epic Records in Japan in March 1972, giving the duo a Top 10 hit. Two more singles were released in Japan, "En Carousel" ("En Karusell" in Scandinavia, an earlier version of "Merry-Go-Round") and "Love Has Its Ways" (a song they wrote with Kōichi Morita).
First hit as Björn, Benny, Agnetha and Anni-Frid
Ulvaeus and Andersson persevered with their songwriting and experimented with new sounds and vocal arrangements. "People Need Love" was released in June 1972, featuring guest vocals by the women, who were now given much greater prominence. Stig Anderson released it as a single, credited to Björn & Benny, Agnetha & Anni-Frid. The song peaked at number 17 in the Swedish combined single and album charts, enough to convince them they were on to something.
"People Need Love" also became the first record to chart for the quartet in the United States, where it peaked at number 114 on the Cashbox singles chart and number 117 on the Record World singles chart. Labelled as Björn & Benny (with Svenska Flicka) meaning Swedish Girl, it was released there through Playboy Records. According to Stig Anderson, "People Need Love" could have been a much bigger American hit, but a small label like Playboy Records did not have the distribution resources to meet the demand for the single from retailers and radio programmers.
"Ring Ring"
In 1973, the band and their manager Stig Anderson decided to have another try at Melodifestivalen, this time with the song "Ring Ring". The studio sessions were handled by Michael B. Tretow, who experimented with a "wall of sound" production technique that became a distinctive new sound thereafter associated with ABBA. Stig Anderson arranged an English translation of the lyrics by Neil Sedaka and Phil Cody and they thought this would be a success. However, on 10 February 1973, the song came third in Melodifestivalen; thus it never reached the Eurovision Song Contest itself. Nevertheless, the group released their debut studio album, also called Ring Ring. The album did well and the "Ring Ring" single was a hit in many parts of Europe and also in South Africa. However, Stig Anderson felt that the true breakthrough could only come with a UK or US hit.
When Agnetha Fältskog gave birth to her daughter Linda in 1973, she was replaced for a short period by Inger Brundin on a trip to West Germany.
Official naming
In 1973, Stig Anderson, tired of unwieldy names, started to refer to the group privately and publicly as ABBA (a palindrome). At first, this was a play on words, as Abba is also the name of a well-known fish-canning company in Sweden, and itself an abbreviation. However, since the fish-canners were unknown outside Sweden, Anderson came to believe the name would work in international markets. A competition to find a suitable name for the group was held in a Gothenburg newspaper and it was officially announced in the summer that the group were to be known as "ABBA". The group negotiated with the canners for the rights to the name.
Fred Bronson reported for Billboard that Fältskog told him in a 1988 interview that "[ABBA] had to ask permission and the factory said, 'O.K., as long as you don't make us feel ashamed for what you're doing.'" "ABBA" is an acronym formed from the first letters of each group member's first name: Agnetha, Björn, Benny, Anni-Frid, although there has never been any official confirmation of who each letter in the sequence refers to. The earliest known example of "ABBA" written on paper is on a recording-session sheet from the Metronome Studio in Stockholm dated 16 October 1973. This was first written as "Björn, Benny, Agnetha & Frida", but was subsequently crossed out with "ABBA" written in large letters on top.
Official logo
Their official logo, distinct with the backward "B", was designed by Rune Söderqvist, who designed most of ABBA's record sleeves. The ambigram first appeared on the French compilation album, Golden Double Album, released in May 1976 by Disques Vogue, and would henceforth be used for all official releases.
The idea for the official logo arose during a velvet-jumpsuit photo shoot by the German photographer Heilemann for the teenage magazine Bravo. In the photo, the ABBA members held giant initial letters of their names. After the pictures were taken, Heilemann noticed that Benny Andersson had reversed his letter "B"; this prompted discussions about the mirrored "B", and the members of ABBA agreed on the mirrored letter. From 1976 onward, the first "B" in the logo version of the name was "mirror-image" reversed on the band's promotional material, thus becoming the group's registered trademark.
Following their acquisition of the group's catalogue, PolyGram began using variations of the ABBA logo, employing a different font. In 1992, Polygram added a crown emblem to it for the first release of the ABBA Gold: Greatest Hits compilation. After Universal Music purchased PolyGram (and, thus, ABBA's label Polar Music International), control of the group's catalogue returned to Stockholm. Since then, the original logo has been reinstated on all official products.
1973–1976: Breakthrough
Eurovision Song Contest 1974
As the group entered the Melodifestivalen with "Ring Ring" but failed to qualify as the 1973 Swedish entry, Stig Anderson immediately started planning for the 1974 contest. Ulvaeus, Andersson and Stig Anderson believed in the possibilities of using the Eurovision Song Contest as a way to make the music business aware of them as songwriters, as well as the band itself. In late 1973, they were invited by Swedish television to contribute a song for the Melodifestivalen 1974 and from a number of new songs, the upbeat song "Waterloo" was chosen; the group were now inspired by the growing glam rock scene in England.
ABBA won their nation's hearts on Swedish television on 9 February 1974, and with this third attempt were far more experienced and better prepared for the Eurovision Song Contest. Winning the 1974 Eurovision Song Contest on 6 April 1974 (and singing "Waterloo" in English instead of their native tongue) gave ABBA the chance to tour Europe and perform on major television shows; thus the band saw the "Waterloo" single chart in many European countries. Following their success at the Eurovision Song Contest, ABBA spent an evening of glory partying in the appropriately named first-floor Napoleon suite of The Grand Brighton Hotel.
"Waterloo" was ABBA's first major hit in numerous countries, becoming their first number-one single in nine western and northern European countries, including the big markets of the UK and West Germany, and in South Africa. It also made the top ten in several other countries, including rising to number three in Spain, number four in Australia and France, and number seven in Canada. In the United States, the song peaked at number six on the Billboard Hot 100 chart, paving the way for their first album and their first trip as a group there. Albeit a short promotional visit, it included their first performance on American television, The Mike Douglas Show. The album Waterloo only peaked at number 145 on the Billboard 200 chart, but received unanimous high praise from the US critics: Los Angeles Times called it "a compelling and fascinating debut album that captures the spirit of mainstream pop quite effectively ... an immensely enjoyable and pleasant project", while Creem characterised it as "a perfect blend of exceptional, lovable compositions".
ABBA's follow-up single, "Honey, Honey", peaked at number 27 on the US Billboard Hot 100, reached the top twenty in several other countries, and was a number-two hit in West Germany although it only reached the top 30 in Australia and the US. In the United Kingdom, ABBA's British record label, Epic, decided to re-release a remixed version of "Ring Ring" instead of "Honey, Honey", and a cover version of the latter by Sweet Dreams peaked at number 10. Both records debuted on the UK chart within one week of each other. "Ring Ring" failed to reach the Top 30 in the UK, increasing growing speculation that the group were simply a Eurovision one-hit wonder.
Post-Eurovision
In November 1974, ABBA embarked on their first European tour, playing dates in Denmark, West Germany and Austria. It was not as successful as the band had hoped, since most of the venues did not sell out. Due to a lack of demand, they were even forced to cancel a few shows, including a sole concert scheduled in Switzerland. The second leg of the tour, which took them through Scandinavia in January 1975, was very different. They played to full houses everywhere and finally got the reception they had aimed for. Live performances continued in the middle of 1975 when ABBA embarked on a fourteen open-air date tour of Sweden and Finland. Their Stockholm show at the Gröna Lund amusement park had an estimated audience of 19,200. Björn Ulvaeus later said, "If you look at the singles we released straight after Waterloo, we were trying to be more like The Sweet, a semi-glam rock group, which was stupid because we were always a pop group."
In late 1974, "So Long" was released as a single in the United Kingdom but received no airplay from Radio 1 and failed to chart in the UK; the only countries in which it was successful were Austria, Sweden and Germany, reaching the top ten in the first two and number 21 in Germany. In the middle of 1975, ABBA released "I Do, I Do, I Do, I Do, I Do", which again received little airplay on Radio 1, but did manage to climb to number 38 on the UK chart, while making the top five in several northern and western European countries, and number one in South Africa. Later that year, the release of their self-titled third studio album ABBA and the single "SOS" brought back their chart presence in the UK, where the single hit number six and the album peaked at number 13. "SOS" also became ABBA's second number-one single in Germany, their third in Australia and their first in France, and reached number two in several other European countries, including Italy.
Success was further solidified with "Mamma Mia" reaching number one in the United Kingdom, Germany and Australia and the top two in a few other western and northern European countries. In the United States, both "I Do, I Do, I Do, I Do, I Do" and "SOS" peaked at number 15 on the Billboard Hot 100 chart, with the latter picking up the BMI Award along the way as one of the most played songs on American radio in 1975. "Mamma Mia", however, stalled at number 32. In Canada, the three songs rose to numbers 12, 9 and 18, respectively.
The success of the group in the United States had until that time been limited to single releases. By early 1976, the group already had four Top 30 singles on the US charts, but the album market proved to be tough to crack. The eponymous ABBA album generated three American hits, but it only peaked at number 165 on the Cashbox album chart and number 174 on the Billboard 200 chart. Opinions were voiced, by Creem in particular, that in the US ABBA had endured "a very sloppy promotional campaign". Nevertheless, the group enjoyed warm reviews from the American press. Cashbox went as far as saying that "there is a recurrent thread of taste and artistry inherent in Abba's marketing, creativity and presentation that makes it almost embarrassing to critique their efforts", while Creem wrote: "SOS is surrounded on this LP by so many good tunes that the mind boggles."
In Australia, the airing of the music videos for "I Do, I Do, I Do, I Do, I Do" and "Mamma Mia" on the nationally broadcast TV pop show Countdown (which premiered in November 1974) saw the band rapidly gain enormous popularity, with Countdown becoming a key promoter of the group via their distinctive music videos. This sparked immense interest in ABBA in Australia, resulting in "I Do, I Do, I Do, I Do, I Do" staying at number one for three weeks, then "SOS" spending a week there, followed by "Mamma Mia" staying there for ten weeks, and the album holding down the number-one position for months. The three songs were also successful in nearby New Zealand, with the first two topping that chart and the third reaching number two.
1976–1981: Superstardom
Greatest Hits and Arrival
In March 1976, the band released the compilation album Greatest Hits. It became their first UK number-one album, and also took ABBA into the Top 50 on the US album charts for the first time, eventually selling more than a million copies there. Also included on Greatest Hits was a new single, "Fernando", which went to number one in at least thirteen countries worldwide, including the UK, Germany, France, Australia, South Africa and Mexico, and reached the top five in most other significant markets; its number-four peak in Canada made it the group's biggest hit there to date. The single went on to sell over 10 million copies worldwide.
In Australia, "Fernando" occupied the top position for a then record breaking 14 weeks (and stayed in the chart for 40 weeks), and was the longest-running chart-topper there for over 40 years until it was overtaken by Ed Sheeran's "Shape of You" in May 2017. It still remains as one of the best-selling singles of all time in Australia. Also in 1976, the group received its first international prize, with "Fernando" being chosen as the "Best Studio Recording of 1975". In the United States, "Fernando" reached the Top 10 of the Cashbox Top 100 singles chart and number 13 on the Billboard Hot 100. It topped the Billboard Adult Contemporary chart, ABBA's first American number-one single on any chart. At the same time, a compilation named The Very Best of ABBA was released in Germany, becoming a number-one album there whereas the Greatest Hits compilation which followed a few months later ascended to number two in Germany, despite all similarities with The Very Best album.
The group's fourth studio album, Arrival, a number-one best-seller in parts of Europe, the UK and Australia, and a number-three hit in Canada and Japan, represented a new level of accomplishment in both songwriting and studio work, prompting rave reviews from more rock-oriented UK music weeklies such as Melody Maker and New Musical Express, and mostly appreciative notices from US critics.
Hit after hit flowed from Arrival: "Money, Money, Money", another number-one in Germany, France, Australia and other countries of western and northern Europe, plus number two in the UK; and, "Knowing Me, Knowing You", ABBA's sixth consecutive German number-one, as well as another UK number-one, plus a top five hit in many other countries, although it was only a number nine hit in Australia and France. The real sensation was the first single, "Dancing Queen", not only topping the charts in loyal markets like the UK, Germany, Sweden, several other western and northern European countries, and Australia, but also reaching number-one in the United States, Canada, the Soviet Union and Japan, and the top ten in France, Spain and Italy. All three songs were number-one hits in Mexico. In South Africa, ABBA had astounding success with each of "Fernando", "Dancing Queen" and "Knowing Me, Knowing You" being among the top 20 best-selling singles for 1976–77. In 1977, Arrival was nominated for the inaugural BRIT Award in the category "Best International Album of the Year". By this time ABBA were popular in the UK, most of Europe, Australia, New Zealand and Canada. In Frida – The DVD, Lyngstad explains how she and Fältskog developed as singers, as ABBA's recordings grew more complex over the years.
The band's mainstream popularity in the United States would remain on a comparatively smaller scale, and "Dancing Queen" became ABBA's only Billboard Hot 100 number-one single (though it immediately became, and remains to this day, a major gay anthem), with "Knowing Me, Knowing You" later peaking at number seven; "Money, Money, Money", however, barely charted there or in Canada (where "Knowing Me, Knowing You" reached number five). They did, however, take three more singles to number one on other Billboard US charts, including the Billboard Adult Contemporary and Hot Dance Club Play charts. Nevertheless, Arrival finally became a true breakthrough release for ABBA on the US album market, where it peaked at number 20 on the Billboard 200 chart and was certified gold by the RIAA.
European and Australian tour
In January 1977, ABBA embarked on their first major tour. The group's status had changed dramatically and they were clearly regarded as superstars. They opened their much-anticipated tour in Oslo, Norway, on 28 January, mounting a lavishly produced spectacle that included a few scenes from their self-written mini-operetta The Girl with the Golden Hair. The concert attracted huge media attention from across Europe and Australia. They continued the tour through Western Europe, visiting Gothenburg, Copenhagen, Berlin, Cologne, Amsterdam, Antwerp, Essen, Hanover and Hamburg, and ending with shows in the United Kingdom in Manchester, Birmingham, Glasgow and two sold-out concerts at London's Royal Albert Hall. Tickets for these two shows were available only by mail application, and it was later revealed that the box office received 3.5 million requests for tickets, enough to fill the venue 580 times.
Along with praise ("ABBA turn out to be amazingly successful at reproducing their records", wrote Creem), there were complaints that "ABBA performed slickly...but with a zero personality coming across from a total of 16 people on stage" (Melody Maker). One of the Royal Albert Hall concerts was filmed as a reference for the filming of the Australian tour for what became ABBA: The Movie, though it is not exactly known how much of the concert was filmed.
After the European leg of the tour, in March 1977, ABBA played 11 dates in Australia before a total of 160,000 people. The opening concert in Sydney at the Sydney Showground on 3 March to an audience of 20,000 was marred by torrential rain with Lyngstad slipping on the wet stage during the concert. However, all four members would later recall this concert as the most memorable of their career.
Upon their arrival in Melbourne, a civic reception was held at the Melbourne Town Hall and ABBA appeared on the balcony to greet an enthusiastic crowd of 6,000. In Melbourne, the group gave three concerts at the Sidney Myer Music Bowl, each attended by 14,500 people, among them the Australian Prime Minister Malcolm Fraser and his family. At the first Melbourne concert, an additional 16,000 people gathered outside the fenced-off area to listen. In Adelaide, the group performed one concert at Football Park in front of 20,000 people, with another 10,000 listening outside. During the first of five concerts in Perth, there was a bomb scare and everyone had to evacuate the Entertainment Centre. The trip was accompanied by mass hysteria and unprecedented media attention ("Swedish ABBA stirs box-office in Down Under tour...and the media coverage of the quartet rivals that set to cover the upcoming Royal tour of Australia", wrote Variety), and is captured on film in ABBA: The Movie, directed by Lasse Hallström.
The Australian tour and the subsequent ABBA: The Movie also produced some ABBA lore. Fältskog's blonde good looks had long made her the band's "pin-up girl", a role she disdained. During the Australian tour, she performed in a skin-tight white jumpsuit, causing one Australian newspaper to use the headline "Agnetha's bottom tops dull show". When asked about this at a news conference, she replied: "Don't they have bottoms in Australia?"
ABBA: The Album
In December 1977, ABBA followed up Arrival with the more ambitious fifth album, ABBA: The Album, released to coincide with the debut of ABBA: The Movie. Although the album was less well received by UK reviewers, it did spawn more worldwide hits: "The Name of the Game" and "Take a Chance on Me", which both topped the UK charts and racked up impressive sales in most countries, although "The Name of the Game" was generally the more successful in the Nordic countries and Down Under, while "Take a Chance on Me" was more successful in North America and the German-speaking countries.
"The Name of the Game" was a number two hit in the Netherlands, Belgium and Sweden while also making the Top 5 in Finland, Norway, New Zealand and Australia, while only peaking at numbers 10, 12 and 15 in Mexico, the US and Canada. "Take a Chance on Me" was a number one hit in Austria, Belgium and Mexico, made the Top 3 in the US, Canada, the Netherlands, Germany and Switzerland, while only reaching numbers 12 and 14 in Australia and New Zealand, respectively. Both songs were Top 10 hits in countries as far afield as Rhodesia and South Africa, as well as in France. Although "Take a Chance on Me" did not top the American charts, it proved to be ABBA's biggest hit single there, selling more copies than "Dancing Queen". The drop in sales in Australia was felt to be inevitable by industry observers as an "Abba-Fever" that had existed there for almost three years could only last so long as adolescents would naturally begin to move away from a group so deified by both their parents and grandparents.
A third single, "Eagle", was released in continental Europe and Down Under becoming a number one hit in Belgium and a Top 10 hit in the Netherlands, Germany, Switzerland and South Africa, but barely charting Down Under. The B-side of "Eagle" was "Thank You for the Music", and it was belatedly released as an A-side single in both the United Kingdom and Ireland in 1983. "Thank You for the Music" has become one of the best loved and best known ABBA songs without being released as a single during the group's lifetime. ABBA: The Album topped the album charts in the UK, the Netherlands, New Zealand, Sweden, Norway, Switzerland, while ascending to the Top 5 in Australia, Germany, Austria, Finland and Rhodesia, and making the Top 10 in Canada and Japan. Sources also indicate that sales in Poland exceeded 1 million copies and that sales demand in Russia could not be met by the supply available. The album peaked at number 14 in the US.
Polar Music Studio formation
By 1978, ABBA were one of the biggest bands in the world. They converted a vacant cinema into the Polar Music Studio, a state-of-the-art studio in Stockholm. The studio was used by several other bands; notably, Genesis' Duke, Led Zeppelin's In Through the Out Door and Scorpions' Lovedrive were recorded there. During May 1978, the group went to the United States for a promotional campaign, performing alongside Andy Gibb on Olivia Newton-John's TV show. Recording sessions for the single "Summer Night City" were an uphill struggle, but upon release the song became another hit for the group. The track would set the stage for ABBA's foray into disco with their next album.
On 9 January 1979, the group performed "Chiquitita" at the Music for UNICEF Concert held at the United Nations General Assembly to celebrate UNICEF's Year of the Child. ABBA donated the copyright of this worldwide hit to UNICEF. The single was released the following week, and reached number one in ten countries.
North American and European tours
In mid-January 1979, Ulvaeus and Fältskog announced they were getting divorced. The news caused interest from the media and led to speculation about the band's future. ABBA assured the press and their fan base they were continuing their work as a group and that the divorce would not affect them. Nonetheless, the media continued to confront them with this in interviews. To escape the media swirl and concentrate on their writing, Andersson and Ulvaeus secretly travelled to Compass Point Studios in Nassau, Bahamas, where for two weeks they prepared their next album's songs.
The group's sixth studio album, Voulez-Vous, was released in April 1979, with its title track recorded at the famous Criteria Studios in Miami, Florida, with the assistance of recording engineer Tom Dowd, among others. The album topped the charts across Europe and in Japan and Mexico, hit the top 10 in Canada and Australia and the top 20 in the US. While none of the singles from the album reached number one on the UK chart, the lead single, "Chiquitita", and the fourth single, "I Have a Dream", both ascended to number two, and the other two, "Does Your Mother Know" and "Angeleyes" (with "Voulez-Vous", released as a double A-side), both made the top 5. All four singles reached number one in Belgium, although the last three did not chart in Sweden or Norway. "Chiquitita", half of whose royalties ABBA decided to donate to UNICEF following its performance at the Music for UNICEF Concert, topped the singles charts in the Netherlands, Switzerland, Finland, Spain, Mexico, South Africa, Rhodesia and New Zealand, rose to number two in Sweden, and made the top 5 in Germany, Austria, Norway and Australia, although it only reached number 29 in the US.
"I Have a Dream" was a sizeable hit reaching number one in the Netherlands, Switzerland, and Austria, number three in South Africa, and number four in Germany, although it only reached number 64 in Australia. In Canada, "I Have a Dream" became ABBA's second number one on the RPM Adult Contemporary chart (after "Fernando" hit the top previously) although it did not chart in the US. "Does Your Mother Know", a rare song in which Ulvaeus sings lead vocals, was a Top 5 hit in the Netherlands and Finland, and a Top 10 hit in Germany, Switzerland, Australia, although it only reached number 27 in New Zealand. It did better in North America than "Chiquitita", reaching number 12 in Canada and number 19 in the US, and made the Top 20 in Japan. "Voulez-Vous" was a Top 10 hit in the Netherlands and Switzerland, a Top 20 hit in Germany and Finland, but only peaked in the 80s in Australia, Canada and the US.
Also in 1979, the group released their second compilation album, Greatest Hits Vol. 2, which featured a brand new track: "Gimme! Gimme! Gimme! (A Man After Midnight)", which was a Top 3 hit in the UK, Belgium, the Netherlands, Germany, Austria, Switzerland, Finland and Norway, and returned ABBA to the Top 10 in Australia. Greatest Hits Vol. 2 went to number one in the UK, Belgium, Canada and Japan while making the Top 5 in several other countries, but only reaching number 20 in Australia and number 46 in the US. In the Soviet Union during the late 1970s, the group were paid in oil commodities because of an embargo on the rouble.
On 13 September 1979, ABBA began ABBA: The Tour at Northlands Coliseum in Edmonton, Canada, with a full house of 14,000. "The voices of the band, Agnetha's high sauciness combined with round, rich lower tones of Anni-Frid, were excellent...Technically perfect, melodically correct and always in perfect pitch...The soft lower voice of Anni-Frid and the high, edgy vocals of Agnetha were stunning", raved Edmonton Journal.
During the next four weeks they played a total of 17 sold-out dates, 13 in the United States and four in Canada. The last scheduled ABBA concert in the United States, in Washington, D.C., was cancelled due to emotional distress Fältskog had experienced during the flight from New York to Boston: the group's private plane was subjected to extreme weather conditions and was unable to land for an extended period, and the group took the stage at the Boston Music Hall 90 minutes late for that night's performance. The tour ended with a show at Maple Leaf Gardens in Toronto, Canada, before a capacity crowd of 18,000. "ABBA plays with surprising power and volume; but although they are loud, they're also clear, which does justice to the signature vocal sound... Anyone who's been waiting five years to see Abba will be well satisfied", wrote Record World. On 19 October 1979, the tour resumed in Western Europe, where the band played 23 sold-out gigs, including six sold-out nights at London's Wembley Arena.
Progression
In March 1980, ABBA travelled to Japan where upon their arrival at Narita International Airport, they were besieged by thousands of fans. The group performed eleven concerts to full houses, including six shows at Tokyo's Budokan. This tour was the last "on the road" adventure of their career.
In July 1980, ABBA released the single "The Winner Takes It All", the group's eighth UK chart-topper (and their first since 1978). The song is widely believed to be about Ulvaeus and Fältskog's marital tribulations, but Ulvaeus, who wrote the lyrics, has stated they were not about his own divorce, and Fältskog has repeatedly stated she was not the loser in their divorce. In the United States, the single peaked at number eight on the Billboard Hot 100 chart and became ABBA's second Billboard Adult Contemporary number one. At the end of 1980, French chanteuse Mireille Mathieu recorded the song as "Bravo tu as gagné", with French lyrics by Alain Boublil, in a version produced by Andersson and Ulvaeus with a slightly different backing track.
In November 1980, ABBA's seventh album Super Trouper was released, reflecting a change in ABBA's style, with more prominent use of synthesizers and increasingly personal lyrics. It set a record for the most pre-orders ever received for a UK album after one million copies were ordered before release. The second single from the album, "Super Trouper", also hit number one in the UK, becoming the group's ninth and final UK chart-topper. Another track from the album, "Lay All Your Love on Me", released in 1981 as a twelve-inch single only in selected territories, managed to top the Billboard Hot Dance Club Play chart and peaked at number seven on the UK Singles Chart, becoming, at the time, the highest-charting twelve-inch release in UK chart history.
Also in 1980, ABBA recorded a compilation of Spanish-language versions of their hits called Gracias Por La Música. This was released in Spanish-speaking countries as well as in Japan and Australia. The album became a major success, and along with the Spanish version of "Chiquitita", this signalled the group's breakthrough in Latin America. ABBA Oro: Grandes Éxitos, the Spanish equivalent of ABBA Gold: Greatest Hits, was released in 1999.
1981–1982: The Visitors and later performances
In January 1981, Ulvaeus married Lena Källersjö, and manager Stig Anderson celebrated his 50th birthday with a party. For this occasion, ABBA recorded the track "Hovas Vittne" (a pun on the Swedish name for Jehovah's Witness and Anderson's birthplace, Hova) as a tribute to him, and released it on only 200 red vinyl copies, to be distributed to the guests attending the party. This single has become a sought-after collectable. In mid-February 1981, Andersson and Lyngstad announced they were filing for divorce. Information surfaced that their marriage had been an uphill struggle for years, and that Andersson had already met another woman, Mona Nörklit, whom he married in November 1981.
Andersson and Ulvaeus had songwriting sessions in early 1981, and recording sessions began in mid-March. At the end of April, the group recorded a TV special, Dick Cavett Meets ABBA, with the US talk show host Dick Cavett. The Visitors, ABBA's eighth studio album, showed a songwriting maturity and depth of feeling distinctly lacking from their earlier recordings while still placing the band squarely in the pop genre, with catchy tunes and harmonies. Although not revealed at the time of its release, the album's title track, according to Ulvaeus, refers to the secret meetings held against the approval of totalitarian governments in Soviet-dominated states, while other tracks address topics like failed relationships, the threat of war, ageing, and loss of innocence. The album's only major single release, "One of Us", proved to be the last of ABBA's nine number-one singles in Germany, in December 1981, and the swansong of their sixteen top 5 singles on the South African chart. "One of Us" was also ABBA's final top 3 hit in the UK, reaching number three on the UK Singles Chart.
Although it topped the album charts across most of Europe, including Ireland, the UK and Germany, The Visitors was not as commercially successful as its predecessors, showing a commercial decline in previously loyal markets such as France, Australia and Japan. A track from the album, "When All Is Said and Done", was released as a single in North America, Australia and New Zealand, and fittingly became ABBA's final Top 40 hit in the US (debuting on the US charts on 31 December 1981), while also reaching the US Adult Contemporary Top 10, and number-four on the RPM Adult Contemporary chart in Canada. The song's lyrics, as with "The Winner Takes It All" and "One of Us", dealt with the painful experience of separating from a long-term partner, though it looked at the trauma more optimistically. With the now publicised story of Andersson and Lyngstad's divorce, speculation increased of tension within the band. Also released in the United States was the title track of The Visitors, which hit the Top Ten on the Billboard Hot Dance Club Play chart.
Later recording sessions
In the spring of 1982, songwriting sessions had started and the group came together for more recordings. Plans were not completely clear, but a new album was discussed and the prospect of a small tour suggested. The recording sessions in May and June 1982 were a struggle, and only three songs were eventually recorded: "You Owe Me One", "I Am the City" and "Just Like That". Andersson and Ulvaeus were not satisfied with the outcome, so the tapes were shelved and the group took a break for the summer.
Back in the studio again in early August, the group had changed plans for the rest of the year: they settled for a Christmas release of a double album compilation of all their past single releases to be named The Singles: The First Ten Years. New songwriting and recording sessions took place, and during October and December, they released the singles "The Day Before You Came"/"Cassandra" and "Under Attack"/"You Owe Me One", the A-sides of which were included on the compilation album. Neither single made the Top 20 in the United Kingdom, though "The Day Before You Came" became a Top 5 hit in many European countries such as Germany, the Netherlands and Belgium. The album went to number one in the UK and Belgium, Top 5 in the Netherlands and Germany and Top 20 in many other countries. "Under Attack", the group's final release before disbanding, was a Top 5 hit in the Netherlands and Belgium.
"I Am the City" and "Just Like That" were left unreleased on The Singles: The First Ten Years for possible inclusion on the next projected studio album, though this never came to fruition. "I Am the City" was eventually released on the compilation album More ABBA Gold in 1993, while "Just Like That" has been recycled in new songs with other artists produced by Andersson and Ulvaeus. A reworked version of the verses ended up in the musical Chess. The chorus section of "Just Like That" was eventually released on a retrospective box set in 1994, as well as in the ABBA Undeleted medley featured on disc 9 of The Complete Studio Recordings. Despite a number of requests from fans, Ulvaeus and Andersson are still refusing to release ABBA's version of "Just Like That" in its entirety, even though the complete version has surfaced on bootlegs.
The group travelled to London to promote The Singles: The First Ten Years in the first week of November 1982, appearing on Saturday Superstore and The Late, Late Breakfast Show, and also to West Germany in the second week, to perform on Show Express. On 19 November 1982, ABBA appeared for the last time in Sweden on the TV programme Nöjesmaskinen, and on 11 December 1982, they made their last performance ever, transmitted to the UK on Noel Edmonds' The Late, Late Breakfast Show, through a live link from a TV studio in Stockholm.
Later performances
Andersson and Ulvaeus began collaborating with Tim Rice in early 1983 on writing songs for the musical project Chess, while Fältskog and Lyngstad both concentrated on international solo careers. While Andersson and Ulvaeus were working on the musical, a further collaboration came with Abbacadabra, a children's musical using 14 ABBA songs that was produced in France for television. Alain Boublil, who wrote Les Misérables, and Daniel Boublil had been in touch with Stig Anderson about the project; the TV musical was aired over Christmas on French TV, and a Dutch version was later broadcast as well. Alain Boublil had previously also written the French lyrics for Mireille Mathieu's version of "The Winner Takes It All".
Lyngstad, who had recently moved to Paris, participated in the French version and recorded a single, "Belle", a duet with French singer Daniel Balavoine. The song was a vocal adaptation of ABBA's 1976 instrumental track "Arrival". As "Belle" sold well in France, Cameron Mackintosh wanted to stage an English-language version of the show in London, with the French lyrics translated by David Wood and Don Black; Andersson and Ulvaeus got involved in the project and contributed one new song, "I Am the Seeker". Abbacadabra premiered on 8 December 1983 at the Lyric Hammersmith Theatre in London, to mixed reviews and full houses for eight weeks, closing on 21 January 1984. Lyngstad was also involved in this production, recording "Belle" in English as "Time", a duet with actor and singer B. A. Robertson; the single, produced by Mike Batt, sold well. In May 1984, Lyngstad performed "I Have a Dream" with a children's choir at the United Nations Organisation Gala in Geneva, Switzerland.
All four members made what was then their final public appearance together, more as four friends than as ABBA, in January 1986, when they recorded a video of themselves performing an acoustic version of "Tivedshambo" (the first song written by their manager Stig Anderson) for a Swedish TV show honouring Anderson on his 55th birthday. The four had not seen each other for more than two years. That same year, they also performed privately at another friend's 40th birthday: that of their old tour manager, Claes af Geijerstam. They sang a self-written song titled "Der Kleine Franz" that would later resurface in Chess. Also in 1986, ABBA Live was released, featuring selections of live performances from the group's 1977 and 1979 tours. The four members were guests at the 50th birthday of Görel Hanser in 1999. Hanser was a long-time friend of all four, and a former secretary of Stig Anderson. Honouring her, ABBA performed the Swedish birthday song "Med en enkel tulipan" a cappella.
Andersson has on several occasions performed ABBA songs. In June 1992, he and Ulvaeus appeared with U2 at a Stockholm concert, singing the chorus of "Dancing Queen", and a few years later during the final performance of the B & B in Concert in Stockholm, Andersson joined the cast for an encore at the piano. Andersson frequently adds an ABBA song to the playlist when he performs with his BAO band. He also played the piano during new recordings of the ABBA songs "Like an Angel Passing Through My Room" with opera singer Anne Sofie von Otter, and "When All Is Said and Done" with Swede Viktoria Tolstoy. In 2002, Andersson and Ulvaeus both performed an a cappella rendition of the first verse of "Fernando" as they accepted their Ivor Novello award in London. Lyngstad performed and recorded an a cappella version of "Dancing Queen" with the Swedish group the Real Group in 1993, and also re-recorded "I Have a Dream" with Swiss singer Dan Daniell in 2003.
Break and reunion
ABBA never officially announced the end of the group or an indefinite break, but the group was long considered dissolved after their final public performance together in 1982. Their last public performance together as ABBA before their 2016 reunion was on the British TV programme The Late, Late Breakfast Show (live from Stockholm) on 11 December 1982. Reminiscing about "The Day Before You Came", Ulvaeus said: "we might have continued for a while longer if that had been a number one".
In January 1983, Fältskog started recording sessions for a solo album, as Lyngstad had successfully released her album Something's Going On some months earlier. Ulvaeus and Andersson, meanwhile, started songwriting sessions for the musical Chess. In interviews at the time, Ulvaeus and Andersson denied that ABBA had split ("Who are we without our ladies? Initials of Brigitte Bardot?"), and throughout 1983 and 1984 Lyngstad and Fältskog repeatedly claimed in interviews that ABBA would come together for a new album. Internal strife between the group and their manager escalated, and the band members sold their shares in Polar Music during 1983. Except for a TV appearance in 1986, the foursome did not come together publicly again until they were reunited at the Swedish premiere of the Mamma Mia! movie on 4 July 2008. The individual members' solo endeavours before and after their final public performance, coupled with the collapse of both marriages and the lack of significant group activity in the years that followed, strongly suggested that the group had broken up.
In an interview with the Sunday Telegraph following the premiere, Ulvaeus and Andersson said that there was nothing that could entice them back on stage again. Ulvaeus said: "We will never appear on stage again. [...] There is simply no motivation to re-group. Money is not a factor and we would like people to remember us as we were. Young, exuberant, full of energy and ambition. I remember Robert Plant saying Led Zeppelin were a cover band now because they cover all their own stuff. I think that hit the nail on the head."
However, on 3 January 2011, Fältskog, long considered to be the most reclusive member of the group and a major obstacle to any reunion, raised the possibility of reuniting for a one-off engagement, though she admitted that she had not yet brought the idea up to the other three members. In April 2013, she reiterated her hopes for a reunion during an interview with Die Zeit, stating: "If they ask me, I'll say yes."
In a May 2013 interview, Fältskog, aged 63 at the time, stated that an ABBA reunion would never occur: "I think we have to accept that it will not happen, because we are too old and each one of us has their own life. Too many years have gone by since we stopped, and there's really no meaning in putting us together again". Fältskog further explained that the band members remained on amicable terms: "It's always nice to see each other now and then and to talk a little and to be a little nostalgic." In an April 2014 interview, Fältskog, when asked about whether the band might reunite for a new recording said: "It's difficult to talk about this because then all the news stories will be: 'ABBA is going to record another song!' But as long as we can sing and play, then why not? I would love to, but it's up to Björn and Benny."
Resurgence of public interest
In the same year that the members of ABBA went their separate ways, Abbacadabra, the French "tribute" production (a children's TV musical using 14 ABBA songs), spawned new interest in the group's music.
After receiving little attention during the mid-to-late 1980s, ABBA's music experienced a resurgence in the early 1990s thanks to the UK synth-pop duo Erasure, whose Abba-esque, a four-track extended play of ABBA cover versions, topped several European charts in 1992. As U2 arrived in Stockholm for a concert in June of that year, the band paid homage to ABBA by inviting Björn Ulvaeus and Benny Andersson to join them on stage for a rendition of "Dancing Queen", playing guitar and keyboards. September 1992 saw the release of ABBA Gold: Greatest Hits, a new compilation album. The single "Dancing Queen" received radio airplay in the UK in the middle of 1992 to promote the album, and returned to the top 20 of the UK singles chart in August that year, this time peaking at number 16. With sales of 30 million, Gold is the best-selling ABBA album, as well as one of the best-selling albums worldwide. With sales of 5.5 million copies, it is the second-highest-selling album of all time in the UK, after Queen's Greatest Hits. More ABBA Gold: More ABBA Hits, a follow-up to Gold, was released in 1993.
In 1994, two Australian cult films caught the attention of the world's media, both focusing on admiration for ABBA: The Adventures of Priscilla, Queen of the Desert and Muriel's Wedding. The same year, Thank You for the Music, a four-disc box set comprising all the group's hits and stand-out album tracks, was released with the involvement of all four members. "By the end of the twentieth century," American critic Chuck Klosterman wrote a decade later, "it was far more contrarian to hate ABBA than to love them."
ABBA were soon recognised and embraced by other acts: Evan Dando of the Lemonheads recorded a cover version of "Knowing Me, Knowing You"; Sinéad O'Connor and Boyzone's Stephen Gately have recorded "Chiquitita"; and Tanita Tikaram, Blancmange and Steven Wilson paid tribute to "The Day Before You Came". Cliff Richard covered "Lay All Your Love on Me", while Dionne Warwick, Peter Cetera, Frank Sidebottom and Celebrity Skin recorded their versions of "SOS". US alternative-rock musician Marshall Crenshaw has also been known to play a version of "Knowing Me, Knowing You" in concert appearances, while songwriter Richard Daniel Roman has recognised ABBA as a major influence. Swedish metal guitarist Yngwie Malmsteen covered "Gimme! Gimme! Gimme! (A Man After Midnight)" with slightly altered lyrics.
Two different tribute albums of ABBA cover versions have been released. ABBA: A Tribute coincided with the 25th anniversary celebration and featured 17 songs, some of which were recorded especially for the release. Notable tracks include Go West's "One of Us", Army of Lovers' "Hasta Mañana", Information Society's "Lay All Your Love on Me", Erasure's "Take a Chance on Me" (with MC Kinky), and Lyngstad's a cappella duet with the Real Group on "Dancing Queen". A second, 12-track album was released in 1999, titled ABBAmania, with proceeds going to the Youth Music charity in England. It featured all-new cover versions: notable tracks were by Madness ("Money, Money, Money"), Culture Club ("Voulez-Vous"), the Corrs ("The Winner Takes It All") and Steps ("Lay All Your Love on Me", "I Know Him So Well"), plus a medley titled "Thank ABBA for the Music" performed by several artists and featured at the Brit Awards that same year.
In 1998, an ABBA tribute group was formed, the ABBA Teens, which was subsequently renamed the A-Teens to allow the group some independence. The group's first album, The ABBA Generation, consisting solely of ABBA covers reimagined as 1990s pop songs, was a worldwide success and so were subsequent albums. The group disbanded in 2004 due to a gruelling schedule and intentions to go solo. In Sweden, the growing recognition of the legacy of Andersson and Ulvaeus resulted in the 1998 B & B Concerts, a tribute concert (with Swedish singers who had worked with the songwriters through the years) showcasing not only their ABBA years, but hits both before and after ABBA. The concert was a success, and was ultimately released on CD. It later toured Scandinavia and even went to Beijing in the People's Republic of China for two concerts. In 2000 ABBA were reported to have turned down an offer of approximately one billion US dollars to do a reunion tour consisting of 100 concerts.
For the semi-final of the Eurovision Song Contest 2004, staged in Istanbul 30 years after ABBA had won the contest in Brighton, all four members made cameo appearances in a special comedy video made for the interval act, titled Our Last Video Ever. Other well-known stars such as Rik Mayall, Cher and Iron Maiden's Eddie also made appearances in the video. It was not included in the official DVD release of the 2004 Eurovision contest, but was issued as a separate DVD release, retitled The Last Video at the request of the former ABBA members. The video was made using puppet models of the members of the band. The video has surpassed 13 million views on YouTube as of November 2020.
In 2005, all four members of ABBA appeared at the Stockholm premiere of the musical Mamma Mia!. On 22 October 2005, at the 50th anniversary celebration of the Eurovision Song Contest, "Waterloo" was chosen as the best song in the competition's history. In the same month, American singer Madonna released the single "Hung Up", which contains a sample of the keyboard melody from ABBA's 1979 song "Gimme! Gimme! Gimme! (A Man After Midnight)"; the song was a smash hit, peaking at number one in at least 50 countries. On 4 July 2008, all four ABBA members were reunited at the Swedish premiere of the film Mamma Mia!. It was only the second time all of them had appeared together in public since 1986. During the appearance, they re-emphasised that they intended never to officially reunite, citing the opinion of Robert Plant that the re-formed Led Zeppelin was more like a cover band of itself than the original band. Ulvaeus stated that he wanted the band to be remembered as they were during the peak years of their success.
Gold returned to number-one in the UK album charts for the fifth time on 3 August 2008. On 14 August 2008, the Mamma Mia! The Movie film soundtrack went to number-one on the US Billboard charts, ABBA's first US chart-topping album. During the band's heyday, the highest album chart position they had ever achieved in America was number 14. In November 2008, all eight studio albums, together with a ninth of rare tracks, were released as The Albums. It hit several charts, peaking at number-four in Sweden and reaching the Top 10 in several other European territories.
In 2008, Sony Computer Entertainment Europe, in collaboration with Universal Music Group Sweden AB, released SingStar ABBA on both the PlayStation 2 and PlayStation 3 games consoles, as part of the SingStar music video games. The PS2 version features 20 ABBA songs, while 25 songs feature on the PS3 version.
On 22 January 2009, Fältskog and Lyngstad appeared together on stage to receive the Swedish music award "Rockbjörnen" (for "lifetime achievement"). In an interview, the two women expressed their gratitude for the honorary award and thanked their fans. On 25 November 2009, PRS for Music announced that the British public voted ABBA as the band they would most like to see re-form. On 27 January 2010, ABBAWORLD, a 25-room touring exhibition featuring interactive and audiovisual activities, debuted at Earls Court Exhibition Centre in London. According to the exhibition's website, ABBAWORLD is "approved and fully supported" by the band members.
"Mamma Mia" was released as one of the first few non-premium song selections for the online RPG game Bandmaster. On 17 May 2011, "Gimme! Gimme! Gimme!" was added as a non-premium song selection for the Bandmaster Philippines server. On 15 November 2011, Ubisoft released a dancing game called ABBA: You Can Dance for the Wii. In January 2012, Universal Music announced the re-release of ABBA's final album The Visitors, featuring a previously unheard track "From a Twinkling Star to a Passing Angel".
A book titled ABBA: The Official Photo Book was published in early 2014 to mark the 40th anniversary of the band's Eurovision victory. The book reveals that part of the reason for the band's outrageous costumes was that Swedish tax laws at the time allowed the cost of garish outfits that were not suitable for daily wear to be tax deductible.
2016–2022: Reunion, Voyage, and ABBAtars
On 20 January 2016, all four members of ABBA made a public appearance at Mamma Mia! The Party in Stockholm. On 6 June 2016, the quartet appeared together at a private party at Berns Salonger in Stockholm, which was held to celebrate the 50th anniversary of Andersson and Ulvaeus's first meeting. Fältskog and Lyngstad performed live, singing "The Way Old Friends Do" before they were joined on stage by Andersson and Ulvaeus.
British manager Simon Fuller announced in a statement in October 2016 that the group would be reuniting to work on a new "digital entertainment experience". The project would feature the members in "life-like" avatar form (the so-called ABBAtars), modelled on their late-1970s tours, and was set to launch by the spring of 2019.
In May 2017, a sequel to the 2008 movie Mamma Mia!, titled Mamma Mia! Here We Go Again, was announced; the film was released on 20 July 2018. Cher, who appeared in the movie, also released Dancing Queen, an ABBA cover album, in September 2018. In June 2017, a blue plaque was unveiled outside Brighton Dome to commemorate their 1974 Eurovision win.
On 27 April 2018, all four original members of ABBA made a joint announcement that they had recorded two new songs, titled "I Still Have Faith in You" and "Don't Shut Me Down", to feature in a TV special set to air later that year. In September 2018, Ulvaeus stated that the two new songs, as well as the TV special, now called ABBA: Thank You for the Music, An All-Star Tribute, would not be released until 2019. The TV special was ultimately scrapped in 2018, as Andersson and Ulvaeus rejected Fuller's project and instead partnered with the visual effects company Industrial Light & Magic to prepare the ABBAtars for a music video and a concert. In January 2019, it was revealed that neither song would be released before the summer. Andersson hinted at the possibility of a third song.
In June 2019, Ulvaeus announced that the first new song and video containing the ABBAtars would be released in November 2019. In September, he stated in an interview that there were now five new ABBA songs to be released in 2020. In early 2020, Andersson confirmed that he was aiming for the songs to be released in September 2020.
In April 2020, Ulvaeus gave an interview saying that, in the wake of the COVID-19 pandemic, the avatar project had been delayed. Five of the eight original songs written by Andersson for the new album had been recorded by the two female members, and the release of a new £15 million music video using previously unseen technology was under consideration. In May 2020, it was announced that ABBA's entire studio discography would be released on coloured vinyl for the first time, in a box set titled ABBA: The Studio Albums. In July 2020, Ulvaeus revealed that the release of the new ABBA recordings had been delayed until 2021.
On 22 September 2020, all four ABBA members reunited at Ealing Studios in London to continue working on the avatar project and filming for the tour. Ulvaeus confirmed that the avatar tour would be scheduled for 2022. Asked whether the new recordings were definitely coming out in 2021, Ulvaeus said: "There will be new music this year, that is definite, it's not a case anymore of it might happen, it will happen."
On 26 August 2021, a new website was launched under the title ABBA Voyage, prompting visitors to subscribe "to be the first in line to hear more about ABBA Voyage". Simultaneously, new ABBA Voyage social media accounts were launched, and billboards showing the date "02.09.21" began to appear around London, building anticipation for what would be revealed on that date. On 29 August, the band officially joined TikTok with a video of Benny Andersson playing "Dancing Queen" on the piano, and media reported that a new album would be announced on 2 September. On that date, Voyage, their first new album in 40 years, was announced for release on 5 November 2021, along with ABBA Voyage, a concert residency in a custom-built venue at Queen Elizabeth Olympic Park in London featuring motion-capture digital avatars of the four band members alongside a 10-piece live band, starting 27 May 2022. Fältskog stated that the Voyage album and tour were likely to be their last.
The announcement of the new album was accompanied by the release of the singles "I Still Have Faith in You" and "Don't Shut Me Down". The music video for "I Still Have Faith in You", featuring footage of the band during their performing years and a first look at the ABBAtars, earned over a million views in its first three hours. "Don't Shut Me Down" became the first ABBA release since October 1978 to top the singles chart in Sweden. In October 2021, the third single "Just a Notion" was released, and it was announced that ABBA would split for good after the release of Voyage. However, in an interview with BBC Radio 2 on 11 November, Lyngstad stated "don't be too sure" that Voyage is the final ABBA album. Also, in an interview with BBC News on 5 November, Andersson stated "if they [the ladies] twist my arm I might change my mind." The fourth single from the album, "Little Things", was released on 3 December.
In May 2022, after the premiere of ABBA Voyage, Andersson stated in an interview with Variety that "nothing is going to happen after this", confirming the residency as ABBA's final group collaboration. In April 2023, longtime ABBA guitarist Lasse Wellander died at the age of 70; Wellander played on seven of the group's nine studio albums, including Voyage.
Artistry
Recording process
ABBA were perfectionists in the studio, working on tracks until they got them right rather than leaving them to come back to later. They spent the bulk of their time in the studio; in separate 2021 interviews, Ulvaeus estimated they had toured for only about six months in total, while Andersson said they played fewer than 100 shows during the band's career.
The band created a basic rhythm track with a drummer, guitarist and bass player, and overlaid other arrangements and instruments. Vocals were then added, and orchestra overdubs were usually left until last.
Fältskog and Lyngstad contributed ideas at the studio stage. Andersson and Ulvaeus played them the backing tracks and they made comments and suggestions. According to Fältskog, she and Lyngstad had the final say in how the lyrics were shaped.
After vocals and overdubs were done, the band took up to five days to mix a song.
Fashion, style, videos, advertising campaigns
ABBA was widely noted for the colourful and trend-setting costumes its members wore. The reason for the wild costumes was Swedish tax law: the cost of the clothes was deductible only if they could not be worn other than for performances. In their early years, group member Anni-Frid Lyngstad designed and even hand sewed the outfits. Later, as their success grew, they used professional theatrical clothes designer Owe Sandström together with tailor Lars Wigenius with Lyngstad continuing to suggest ideas while co-ordinating the outfits with concert set designs. Choreography by Graham Tainton also contributed to their performance style.
The videos that accompanied some of the band's biggest hits are often cited as being among the earliest examples of the genre. Most of ABBA's videos (and ABBA: The Movie) were directed by Lasse Hallström, who would later direct the films My Life as a Dog, The Cider House Rules and Chocolat.
ABBA made videos because their songs were hits in many different countries and personal appearances were not always possible. This was also done in an effort to minimise travelling, particularly to countries that would have required extremely long flights. Fältskog and Ulvaeus had two young children and Fältskog, who was also afraid of flying, was very reluctant to leave her children for such a long time. ABBA's manager, Stig Anderson, realised the potential of showing a simple video clip on television to publicise a single or album, thereby allowing easier and quicker exposure than a concert tour. Some of these videos have become classics because of the 1970s-era costumes and early video effects, such as the grouping of the band members in different combinations of pairs, overlapping one singer's profile with the other's full face, and the contrasting of one member against another.
In 1976, ABBA participated in an advertising campaign to promote the Matsushita Electric Industrial Co.'s brand, National, in Australia. The campaign was also broadcast in Japan. Five commercial spots, each of approximately one minute, were produced, each presenting the "National Song" performed by ABBA using the melody and instrumental arrangements of "Fernando" and revised lyrics.
Political use of ABBA's music
In September 2010, band members Andersson and Ulvaeus criticised the right-wing Danish People's Party (DF) for using the ABBA song "Mamma Mia" (with modified lyrics referencing Pia Kjærsgaard) at rallies. The band threatened to file a lawsuit against the DF, saying they never allowed their music to be used politically and that they had absolutely no interest in supporting the party. Their record label Universal Music later said that no legal action would be taken because an agreement had been reached.
Success in the United States
During their active career, from 1972 to 1982, 20 of ABBA's singles entered the Billboard Hot 100; 14 of these made the Top 40 (13 on the Cashbox Top 100), with 10 making the Top 20 on both charts. Four of those singles reached the Top 10, including "Dancing Queen", which reached number one in April 1977. While "Fernando" and "SOS" did not break the Top 10 on the Billboard Hot 100 (reaching numbers 13 and 15 respectively), they did reach the Top 10 on the Cashbox ("Fernando") and Record World ("SOS") charts. Both "Dancing Queen" and "Take a Chance on Me" were certified gold by the Recording Industry Association of America for sales of over one million copies each.
The group also had 12 Top 20 singles on the Billboard Adult Contemporary chart with two of them, "Fernando" and "The Winner Takes It All", reaching number one. "Lay All Your Love on Me" was ABBA's fourth number-one single on a Billboard chart, topping the Hot Dance Club Play chart.
Ten ABBA albums have made their way into the top half of the Billboard 200 album chart, with eight reaching the Top 50, five reaching the Top 20 and one reaching the Top 10. In November 2021, Voyage became ABBA's highest-charting album on the Billboard 200 peaking at No. 2. Five albums received RIAA gold certification (more than 500,000 copies sold), while three acquired platinum status (selling more than one million copies).
The compilation album ABBA Gold: Greatest Hits topped the Billboard Top Pop Catalog Albums chart in August 2008 (15 years after it was first released in the US in 1993), becoming the group's first number-one album ever on any of the Billboard album charts. It has sold 6 million copies there.
On 15 March 2010, ABBA were inducted into the Rock and Roll Hall of Fame by Bee Gees members Barry Gibb and Robin Gibb. The ceremony was held at the Waldorf Astoria Hotel in New York City. The group were represented by Anni-Frid Lyngstad and Benny Andersson.
In November 2021, ABBA received their first ever Grammy nomination, for Record of the Year, with the single "I Still Have Faith in You" from the album Voyage.
Neither ABBA nor any of the band members are included in Rolling Stone's "100 Greatest Artists of All Time" list.
Members
Agnetha Fältskog – lead and backing vocals
Anni-Frid "Frida" Lyngstad – lead and backing vocals
Björn Ulvaeus – guitars, lead and backing vocals
Benny Andersson – keyboards, synthesizers, piano, accordion, backing and lead vocals
The members of ABBA were married as follows: Agnetha Fältskog and Björn Ulvaeus from 1971 to 1979; Benny Andersson and Anni-Frid Lyngstad from 1978 to 1981. For their subsequent marriages, see their articles.
In addition to the four members of ABBA, other musicians regularly played on their studio recordings, live appearances and concert performances. These include:
Rutger Gunnarsson (1972–1982) bass guitar and string arrangements
Ola Brunkert (1972–1981) drums
Mike Watson (1972–1980) bass guitar
Janne Schaffer (1972–1982) lead electric guitar
Roger Palm (1972–1979) drums
Malando Gassama (1973–1979) percussion
Lasse Wellander (1974–2021) lead electric guitar
Anders Eljas (1977) keyboards on tour and all the band's orchestration
Åke Sundqvist (1978–1982) percussion
Per Lindvall (1980–2021) drums
Discography
Studio albums
Ring Ring (1973)
Waterloo (1974)
ABBA (1975)
Arrival (1976)
The Album (1977)
Voulez-Vous (1979)
Super Trouper (1980)
The Visitors (1981)
Voyage (2021)
Tours
Concert tours
Swedish Folkpark Tour (1973)
European Tour (1974–1975)
European & Australian Tour (1977)
ABBA: The Tour (1979–1980)
Concert residencies
ABBA Voyage (2022–2024)
See also
ABBA: The Museum
ABBA City Walks – Stockholm City Museum
ABBAMAIL
List of ABBA tribute albums
List of best-selling music artists
List of Swedes in music
Music of Sweden
Popular music in Sweden
Further reading
Benny Andersson, Björn Ulvaeus, Judy Craymer: Mamma Mia! How Can I Resist You?: The Inside Story of Mamma Mia! and the Songs of ABBA. Weidenfeld & Nicolson, 2006
Carl Magnus Palm: ABBA – The Complete Recording Sessions, 1994
Carl Magnus Palm: From "ABBA" to "Mamma Mia!", 2000
Elisabeth Vincentelli: ABBA Treasures: A Celebration of the Ultimate Pop Group. Omnibus Press, 2010
Andrew Oldham, Tony Calder & Colin Irvin: ABBA: The Name of the Game, 1995
Jean-Marie Potiez: ABBA – The Book, 2000
Simon Sheridan: The Complete ABBA. Titan Books, 2012
Anna Henker (ed.), Astrid Heyde (ed.): Abba – Das Lexikon. Northern Europe Institut, Humboldt-University Berlin, 2015 (in German)
Steve Harnell (ed.): Classic Pop Presents Abba: A Celebration. Classic Pop Magazine (special edition), November 2016
Documentaries
A for ABBA. BBC, 20 July 1993
Thierry Lecuyer, Jean-Marie Potiez: Thank You ABBA. Willow Wil Studios/A2C Video, 1993
Barry Barnes: ABBA − The History. Polar Music International AB, 1999
Chris Hunt: The Winner Takes It All − The ABBA Story. Littlestar Services/Iambic Productions, 1999
Steve Cole, Chris Hunt: Super Troupers − Thirty Years of ABBA. BBC, 2004
The Joy of ABBA. BBC 4, 27 December 2013
Carl Magnus Palm, Roger Backlund: ABBA – When Four Became One. SVT, 2 January 2012
Carl Magnus Palm, Roger Backlund: ABBA – Absolute Image. SVT, 2 January 2012
ABBA – Bang a Boomerang. ABC 1, 30 January 2013
ABBA: When All Is Said and Done, 2017
Sunday Night (7 News), 1 October 2019
External links
The Secret Majesty of ABBA. Variety, 22 July 2018
ABBA's Essential, Influential Melancholy. NPR, 23 May 2015
What's Behind ABBA's Staying Power?. Smithsonian, 20 July 2018
ABBA – The Articles – ABBA news from throughout the world
1972 establishments in Sweden
Atlantic Records artists
English-language singers from Sweden
Epic Records artists
Eurodisco groups
Eurovision Song Contest entrants for Sweden
Eurovision Song Contest entrants of 1974
Eurovision Song Contest winners
Melodifestivalen contestants
Melodifestivalen winners
Musical groups disestablished in 1982
Musical groups established in 1972
Musical groups from Stockholm
Musical groups reestablished in 2016
Swedish musical quartets
Palindromes
RCA Records artists
Schlager groups
Swedish dance music groups
Swedish pop music groups
Swedish pop rock music groups
Swedish-language singers
Swedish co-ed groups
German-language singers
French-language singers
|
https://en.wikipedia.org/wiki/Argon
|
Argon is a chemical element with the symbol Ar and atomic number 18. It is in group 18 of the periodic table and is a noble gas. Argon is the third-most abundant gas in Earth's atmosphere, at 0.934% (9340 ppmv). It is more than twice as abundant as water vapor (which averages about 4000 ppmv, but varies greatly), 23 times as abundant as carbon dioxide (400 ppmv), and more than 500 times as abundant as neon (18 ppmv). Argon is the most abundant noble gas in Earth's crust, comprising 0.00015% of the crust.
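These abundance comparisons are simple ratios of the quoted mixing ratios; the following Python sketch, using only the figures in the paragraph above, checks the arithmetic:

```python
# Atmospheric mixing ratios quoted above, in ppmv
argon, water_vapor, co2, neon = 9340, 4000, 400, 18

print(f"vs water vapor: {argon / water_vapor:.1f}x")  # ~2.3, "more than twice"
print(f"vs CO2:         {argon / co2:.1f}x")          # ~23.4, "23 times"
print(f"vs neon:        {argon / neon:.0f}x")         # ~519, "more than 500 times"
```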
Nearly all of the argon in Earth's atmosphere is radiogenic argon-40, derived from the decay of potassium-40 in Earth's crust. In the universe, argon-36 is by far the most common argon isotope, as it is the most easily produced by stellar nucleosynthesis in supernovas.
The name "argon" is derived from the Greek word , neuter singular form of meaning 'lazy' or 'inactive', as a reference to the fact that the element undergoes almost no chemical reactions. The complete octet (eight electrons) in the outer atomic shell makes argon stable and resistant to bonding with other elements. Its triple point temperature of 83.8058 K is a defining fixed point in the International Temperature Scale of 1990.
Argon is extracted industrially by the fractional distillation of liquid air. Argon is mostly used as an inert shielding gas in welding and other high-temperature industrial processes where ordinarily unreactive substances become reactive; for example, an argon atmosphere is used in graphite electric furnaces to prevent the graphite from burning. Argon is also used in incandescent and fluorescent lighting, and in other gas-discharge tubes. Argon makes a distinctive blue-green gas laser. Argon is also used in fluorescent glow starters.
Characteristics
Argon has approximately the same solubility in water as oxygen and is 2.5 times more soluble in water than nitrogen. Argon is colorless, odorless, nonflammable and nontoxic as a solid, liquid or gas. Argon is chemically inert under most conditions and forms no confirmed stable compounds at room temperature.
Although argon is a noble gas, it can form some compounds under various extreme conditions. Argon fluorohydride (HArF), a compound of argon with fluorine and hydrogen that is stable below 17 K (−256 °C), has been demonstrated. Although the neutral ground-state chemical compounds of argon are presently limited to HArF, argon can form clathrates with water when atoms of argon are trapped in a lattice of water molecules. Ions, such as ArH+, and excited-state complexes, such as ArF, have been demonstrated. Theoretical calculation predicts several more argon compounds that should be stable but have not yet been synthesized.
History
Argon (Greek ἀργόν, neuter singular form of ἀργός, meaning "lazy" or "inactive") is named in reference to its chemical inactivity, a property that impressed the namers of this first noble gas to be discovered. An unreactive gas was suspected to be a component of air by Henry Cavendish in 1785.
Argon was first isolated from air in 1894 by Lord Rayleigh and Sir William Ramsay at University College London by removing oxygen, carbon dioxide, water, and nitrogen from a sample of clean air. They first accomplished this by replicating an experiment of Henry Cavendish's. They trapped a mixture of atmospheric air with additional oxygen in a test-tube (A) upside-down over a large quantity of dilute alkali solution (B), which in Cavendish's original experiment was potassium hydroxide, and conveyed a current through wires insulated by U-shaped glass tubes (CC) which sealed around the platinum wire electrodes, leaving the ends of the wires (DD) exposed to the gas and insulated from the alkali solution. The arc was powered by a battery of five Grove cells and a Ruhmkorff coil of medium size. The alkali absorbed the oxides of nitrogen produced by the arc and also carbon dioxide. They operated the arc until no more reduction of volume of the gas could be seen for at least an hour or two and the spectral lines of nitrogen disappeared when the gas was examined. The remaining oxygen was reacted with alkaline pyrogallate to leave behind an apparently non-reactive gas which they called argon.
Before isolating the gas, they had determined that nitrogen produced from chemical compounds was 0.5% lighter than nitrogen from the atmosphere. The difference was slight, but it was important enough to attract their attention for many months. They concluded that there was another gas in the air mixed in with the nitrogen. Argon was also encountered in 1882 through independent research of H. F. Newall and W. N. Hartley. Each observed new lines in the emission spectrum of air that did not match known elements.
Until 1957, the symbol for argon was "A", but now it is "Ar".
Occurrence
Argon constitutes 0.934% by volume and 1.288% by mass of Earth's atmosphere. Air is the primary industrial source of purified argon products. Argon is isolated from air by fractionation, most commonly by cryogenic fractional distillation, a process that also produces purified nitrogen, oxygen, neon, krypton and xenon. Earth's crust and seawater contain 1.2 ppm and 0.45 ppm of argon, respectively.
Isotopes
The main isotopes of argon found on Earth are 40Ar (99.6%), 36Ar (0.34%), and 38Ar (0.06%). Naturally occurring 40K, with a half-life of 1.25 billion years, decays to stable 40Ar (11.2%) by electron capture or positron emission, and also to stable 40Ca (88.8%) by beta decay. These properties and ratios are used to determine the age of rocks by K–Ar dating.
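These figures are exactly what the standard K–Ar age equation consumes. The following Python sketch is a minimal illustration, assuming the half-life and 11.2% branching fraction quoted above; the example 40Ar/40K ratio is invented purely for demonstration:

```python
import math

HALF_LIFE_K40 = 1.25e9                        # years (quoted above)
LAMBDA_TOTAL = math.log(2) / HALF_LIFE_K40    # total decay constant of 40K
BRANCH_TO_AR40 = 0.112                        # fraction of decays yielding 40Ar

def k_ar_age(ar40_k40_ratio: float) -> float:
    """K-Ar age in years from a measured radiogenic 40Ar/40K ratio.

    Standard K-Ar age equation:
        t = (1/lambda) * ln(1 + (lambda/lambda_Ar) * 40Ar/40K)
    """
    return math.log(1.0 + ar40_k40_ratio / BRANCH_TO_AR40) / LAMBDA_TOTAL

# Hypothetical rock with a radiogenic 40Ar/40K ratio of 0.0112:
print(f"{k_ar_age(0.0112):.3g} years")  # ~1.72e8 years, roughly 172 Myr
```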
In Earth's atmosphere, 39Ar is made by cosmic ray activity, primarily by neutron capture of 40Ar followed by two-neutron emission. In the subsurface environment, it is also produced through neutron capture by 39K, followed by proton emission. 37Ar is created from the neutron capture by 40Ca followed by an alpha particle emission as a result of subsurface nuclear explosions. It has a half-life of 35 days.
Between locations in the Solar System, the isotopic composition of argon varies greatly. Where the major source of argon is the decay of 40K in rocks, 40Ar will be the dominant isotope, as it is on Earth. Argon produced directly by stellar nucleosynthesis is dominated by the alpha-process nuclide 36Ar. Correspondingly, solar argon contains 84.6% 36Ar (according to solar wind measurements), and the ratio of the three isotopes 36Ar : 38Ar : 40Ar in the atmospheres of the outer planets is 8400 : 1600 : 1. This contrasts with the low abundance of primordial 36Ar in Earth's atmosphere, which is only 31.5 ppmv (= 9340 ppmv × 0.337%), comparable with that of neon (18.18 ppmv) on Earth and with interplanetary gases, measured by probes.
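The bracketed arithmetic at the end of this paragraph, and the outer-planet ratio, can be checked directly; a short Python sketch using only the numbers quoted above:

```python
# Terrestrial figures quoted above
total_ar_ppmv = 9340       # argon mixing ratio in Earth's atmosphere
ar36_share = 0.337 / 100   # 36Ar share of terrestrial argon

print(total_ar_ppmv * ar36_share)  # ~31.5 ppmv of primordial 36Ar

# Outer-planet ratio 36Ar : 38Ar : 40Ar = 8400 : 1600 : 1
ratio = {"36Ar": 8400, "38Ar": 1600, "40Ar": 1}
total = sum(ratio.values())
for iso, part in ratio.items():
    print(f"{iso}: {part / total:.2%}")   # 36Ar dominates at ~84%
```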
The atmospheres of Mars, Mercury and Titan (the largest moon of Saturn) contain argon, predominantly as 40Ar, and its content may be as high as 1.93% (Mars).
The predominance of radiogenic 40Ar is the reason the standard atomic weight of terrestrial argon is greater than that of the next element, potassium, a fact that was puzzling when argon was discovered. Mendeleev positioned the elements on his periodic table in order of atomic weight, but the inertness of argon suggested a placement before the reactive alkali metal. Henry Moseley later solved this problem by showing that the periodic table is actually arranged in order of atomic number (see History of the periodic table).
Compounds
Argon's complete octet of electrons indicates full s and p subshells. This full valence shell makes argon very stable and extremely resistant to bonding with other elements. Before 1962, argon and the other noble gases were considered to be chemically inert and unable to form compounds; however, compounds of the heavier noble gases have since been synthesized. The first argon compound, with tungsten pentacarbonyl, W(CO)5Ar, was isolated in 1975, although it was not widely recognised at that time. In August 2000, another argon compound, argon fluorohydride (HArF), was formed by researchers at the University of Helsinki by shining ultraviolet light onto frozen argon containing a small amount of hydrogen fluoride with caesium iodide. This discovery led to the recognition that argon could form weakly bound compounds, even though HArF was not the first. It is stable up to 17 kelvins (−256 °C). The metastable ArCF22+ dication, which is valence-isoelectronic with carbonyl fluoride and phosgene, was observed in 2010. Argon-36, in the form of argon hydride (argonium, ArH+) ions, has been detected in interstellar medium associated with the Crab Nebula supernova; this was the first noble-gas molecule detected in outer space.
Solid argon hydride (Ar(H2)2) has the same crystal structure as the MgZn2 Laves phase. It forms at pressures between 4.3 and 220 GPa, though Raman measurements suggest that the H2 molecules in Ar(H2)2 dissociate above 175 GPa.
Production
Argon is extracted industrially by the fractional distillation of liquid air in a cryogenic air separation unit, a process that separates liquid nitrogen, which boils at 77.3 K, from argon, which boils at 87.3 K, and liquid oxygen, which boils at 90.2 K. About 700,000 tonnes of argon are produced worldwide every year.
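Because the separation hinges on the spread of boiling points quoted above, the draw-off order falls out of a simple sort; a toy Python sketch, not a process model:

```python
# Normal boiling points from the paragraph above (kelvin)
boiling_points_k = {"nitrogen": 77.3, "argon": 87.3, "oxygen": 90.2}

# In a cryogenic column, the most volatile component (lowest boiling
# point) comes off first as the liquefied air is fractionated.
for gas, bp in sorted(boiling_points_k.items(), key=lambda kv: kv[1]):
    print(f"{gas:8s} boils at {bp} K")
```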
Applications
Argon has several desirable properties:
Argon is a chemically inert gas.
Argon is the cheapest alternative when nitrogen is not sufficiently inert.
Argon has low thermal conductivity.
Argon has electronic properties (ionization and/or the emission spectrum) desirable for some applications.
Other noble gases would be equally suitable for most of these applications, but argon is by far the cheapest. Argon is inexpensive, since it occurs naturally in air and is readily obtained as a byproduct of cryogenic air separation in the production of liquid oxygen and liquid nitrogen: the primary constituents of air are used on a large industrial scale. The other noble gases (except helium) are produced this way as well, but argon is the most plentiful by far. The bulk of argon applications arise simply because it is inert and relatively cheap.
Industrial processes
Argon is used in some high-temperature industrial processes where ordinarily non-reactive substances become reactive. For example, an argon atmosphere is used in graphite electric furnaces to prevent the graphite from burning.
For some of these processes, the presence of nitrogen or oxygen gases might cause defects within the material. Argon is used in some types of arc welding such as gas metal arc welding and gas tungsten arc welding, as well as in the processing of titanium and other reactive elements. An argon atmosphere is also used for growing crystals of silicon and germanium.
Argon is used in the poultry industry to asphyxiate birds, either for mass culling following disease outbreaks, or as a means of slaughter more humane than electric stunning. Argon is denser than air and displaces oxygen close to the ground during inert gas asphyxiation. Its non-reactive nature makes it suitable in a food product, and since it replaces oxygen within the dead bird, argon also enhances shelf life.
Argon is sometimes used for extinguishing fires where valuable equipment may be damaged by water or foam.
Scientific research
Liquid argon is used as the target for neutrino experiments and direct dark matter searches. The interaction between the hypothetical WIMPs and an argon nucleus produces scintillation light that is detected by photomultiplier tubes. Two-phase detectors containing argon gas are used to detect the ionized electrons produced during the WIMP–nucleus scattering. As with most other liquefied noble gases, argon has a high scintillation light yield (about 51 photons/keV), is transparent to its own scintillation light, and is relatively easy to purify. Compared to xenon, argon is cheaper and has a distinct scintillation time profile, which allows the separation of electronic recoils from nuclear recoils. On the other hand, its intrinsic beta-ray background is larger due to contamination, unless one uses argon from underground sources, which has much less contamination. Most of the argon in Earth's atmosphere was produced by electron capture of long-lived ( + e− → + ν) present in natural potassium within Earth. The activity in the atmosphere is maintained by cosmogenic production through the knockout reaction (n,2n) and similar reactions. The half-life of is only 269 years. As a result, the underground Ar, shielded by rock and water, has much less contamination. Dark-matter detectors currently operating with liquid argon include DarkSide, WArP, ArDM, microCLEAN and DEAP. Neutrino experiments include ICARUS and MicroBooNE, both of which use high-purity liquid argon in a time projection chamber for fine grained three-dimensional imaging of neutrino interactions.
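The 269-year half-life quoted above is what makes underground argon attractive: once shielded from cosmogenic production, the 39Ar activity decays exponentially. A minimal Python sketch of that decay, with an illustrative shielding time:

```python
HALF_LIFE_AR39 = 269.0  # years (quoted above)

def remaining_fraction(years_underground: float) -> float:
    """Fraction of the original 39Ar activity left after shielding."""
    return 0.5 ** (years_underground / HALF_LIFE_AR39)

# After ~1000 years underground, less than 8% of the 39Ar remains.
print(f"{remaining_fraction(1000.0):.3f}")
```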
At Linköping University, Sweden, the inert gas is being utilized in a vacuum chamber in which plasma is introduced to ionize metallic films. This process results in a film usable for manufacturing computer processors. The new process would eliminate the need for chemical baths and use of expensive, dangerous and rare materials.
Preservative
Argon is used to displace oxygen- and moisture-containing air in packaging material to extend the shelf-lives of the contents (argon has the European food additive code E938). Aerial oxidation, hydrolysis, and other chemical reactions that degrade the products are retarded or prevented entirely. High-purity chemicals and pharmaceuticals are sometimes packed and sealed in argon.
In winemaking, argon is used in a variety of activities to provide a barrier against oxygen at the liquid surface, which can spoil wine by fueling both microbial metabolism (as with acetic acid bacteria) and standard redox chemistry.
Argon is sometimes used as the propellant in aerosol cans.
Argon is also used as a preservative for such products as varnish, polyurethane, and paint, by displacing air to prepare a container for storage.
Since 2002, the American National Archives stores important national documents such as the Declaration of Independence and the Constitution within argon-filled cases to inhibit their degradation. Argon is preferable to the helium that had been used in the preceding five decades, because helium gas escapes through the intermolecular pores in most containers and must be regularly replaced.
Laboratory equipment
Argon may be used as the inert gas within Schlenk lines and gloveboxes. Argon is preferred to less expensive nitrogen in cases where nitrogen may react with the reagents or apparatus.
Argon may be used as the carrier gas in gas chromatography and in electrospray ionization mass spectrometry; it is the gas of choice for the plasma used in ICP spectroscopy. Argon is preferred for the sputter coating of specimens for scanning electron microscopy. Argon gas is also commonly used for sputter deposition of thin films as in microelectronics and for wafer cleaning in microfabrication.
Medical use
Cryosurgery procedures such as cryoablation use liquid argon to destroy tissue such as cancer cells. It is used in a procedure called "argon-enhanced coagulation", a form of argon plasma beam electrosurgery. The procedure carries a risk of producing gas embolism and has resulted in the death of at least one patient.
Blue argon lasers are used in surgery to weld arteries, destroy tumors, and correct eye defects.
Argon has also been used experimentally to replace nitrogen in the breathing or decompression mix known as Argox, to speed the elimination of dissolved nitrogen from the blood.
Lighting
Incandescent lights are filled with argon, to preserve the filaments at high temperature from oxidation. It is used for the specific way it ionizes and emits light, such as in plasma globes and calorimetry in experimental particle physics. Gas-discharge lamps filled with pure argon provide lilac/violet light; with argon and some mercury, blue light. Argon is also used for blue and green argon-ion lasers.
Miscellaneous uses
Argon is used for thermal insulation in energy-efficient windows. Argon is also used in technical scuba diving to inflate a dry suit because it is inert and has low thermal conductivity.
Argon is used as a propellant in the development of the Variable Specific Impulse Magnetoplasma Rocket (VASIMR). Compressed argon gas is allowed to expand, to cool the seeker heads of some versions of the AIM-9 Sidewinder missile and other missiles that use cooled thermal seeker heads. The gas is stored at high pressure.
Argon-39, with a half-life of 269 years, has been used for a number of applications, primarily ice core and ground water dating. Also, potassium–argon dating and related argon-argon dating are used to date sedimentary, metamorphic, and igneous rocks.
Argon has been used by athletes as a doping agent to simulate hypoxic conditions. In 2014, the World Anti-Doping Agency (WADA) added argon and xenon to the list of prohibited substances and methods, although at this time there is no reliable test for abuse.
Safety
Although argon is non-toxic, it is 38% more dense than air and therefore considered a dangerous asphyxiant in closed areas. It is difficult to detect because it is colorless, odorless, and tasteless. A 1994 incident, in which a man was asphyxiated after entering an argon-filled section of oil pipe under construction in Alaska, highlights the dangers of argon tank leakage in confined spaces and emphasizes the need for proper use, storage and handling.
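The 38% figure follows from the molar masses of argon and dry air under ideal-gas behaviour; the values in the sketch below are standard reference numbers, not taken from this article:

```python
MOLAR_MASS_AR = 39.948    # g/mol, argon (standard reference value)
MOLAR_MASS_AIR = 28.965   # g/mol, mean molar mass of dry air (standard value)

# At equal temperature and pressure, gas density is proportional to
# molar mass, so the ratio gives the excess density directly.
excess = MOLAR_MASS_AR / MOLAR_MASS_AIR - 1.0
print(f"argon is about {excess:.0%} denser than air")  # ~38%
```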
See also
Industrial gas
Oxygen–argon ratio, a ratio of two physically similar gases, which has importance in various sectors.
References
Further reading
On the triple point pressure at 69 kPa.
On the triple point temperature at 83.8058 K.
External links
Argon at The Periodic Table of Videos (University of Nottingham)
USGS Periodic Table – Argon
Diving applications: Why Argon?
Chemical elements
E-number additives
Noble gases
Industrial gases
|
https://en.wikipedia.org/wiki/Arsenic
|
Arsenic is a chemical element with the symbol As and atomic number 33. Arsenic occurs in many minerals, usually in combination with sulfur and metals, but also as a pure elemental crystal. Arsenic is a metalloid. It has various allotropes, but only the grey form, which has a metallic appearance, is important to industry.
The primary use of arsenic is in alloys of lead (for example, in car batteries and ammunition). Arsenic is a common n-type dopant in semiconductor electronic devices. It is also a component of the III–V compound semiconductor gallium arsenide. Arsenic and its compounds, especially the trioxide, are used in the production of pesticides, treated wood products, herbicides, and insecticides. These applications are declining with the increasing recognition of the toxicity of arsenic and its compounds.
A few species of bacteria are able to use arsenic compounds as respiratory metabolites. Trace quantities of arsenic are an essential dietary element in rats, hamsters, goats, chickens, and presumably other species. A role in human metabolism is not known. However, arsenic poisoning occurs in multicellular life if quantities are larger than needed. Arsenic contamination of groundwater is a problem that affects millions of people across the world.
The United States' Environmental Protection Agency states that all forms of arsenic are a serious risk to human health. The United States' Agency for Toxic Substances and Disease Registry ranked arsenic as number 1 in its 2001 Priority List of Hazardous Substances at Superfund sites. Arsenic is classified as a Group-A carcinogen.
Characteristics
Physical characteristics
The three most common arsenic allotropes are grey, yellow, and black arsenic, with grey being the most common. Grey arsenic (α-As, space group R-3m, No. 166) adopts a double-layered structure consisting of many interlocked, ruffled, six-membered rings. Because of weak bonding between the layers, grey arsenic is brittle and has a relatively low Mohs hardness of 3.5. Nearest and next-nearest neighbors form a distorted octahedral complex, with the three atoms in the same double-layer being slightly closer than the three atoms in the next. This relatively close packing leads to a high density of 5.73 g/cm3. Grey arsenic is a semimetal, but becomes a semiconductor with a bandgap of 1.2–1.4 eV if amorphized. Grey arsenic is also the most stable form.
Yellow arsenic is soft and waxy, and somewhat similar to tetraphosphorus (P4). Both have four atoms arranged in a tetrahedral structure in which each atom is bound to each of the other three atoms by a single bond. This unstable allotrope, being molecular, is the most volatile, least dense, and most toxic. Solid yellow arsenic is produced by rapid cooling of arsenic vapor, As4. It is rapidly transformed into grey arsenic by light. The yellow form has a density of 1.97 g/cm3. Black arsenic is similar in structure to black phosphorus.
Black arsenic can also be formed by cooling vapor at around 100–220 °C and by crystallization of amorphous arsenic in the presence of mercury vapors. It is glassy and brittle. Black arsenic is also a poor electrical conductor. As arsenic's triple point is at 3.628 MPa (35.81 atm), it does not have a melting point at standard pressure but instead sublimes from solid to vapor at 887 K (615 °C or 1137 °F).
Isotopes
Arsenic occurs in nature as one stable isotope, 75As, a monoisotopic element. As of 2003, at least 33 radioisotopes have also been synthesized, ranging in atomic mass from 60 to 92. The most stable of these is 73As with a half-life of 80.30 days. All other isotopes have half-lives of under one day, with the exception of 71As (t1/2=65.30 hours), 72As (t1/2=26.0 hours), 74As (t1/2=17.77 days), 76As (t1/2=26.26 hours), and 77As (t1/2=38.83 hours). Isotopes that are lighter than the stable 75As tend to decay by β+ decay, and those that are heavier tend to decay by β− decay, with some exceptions.
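For a side-by-side view, the mixed hour- and day-valued half-lives listed above can be put on a common scale; a small Python sketch using only the values quoted in this paragraph:

```python
# Half-lives of the longer-lived arsenic radioisotopes, in days
half_lives_days = {
    "73As": 80.30,
    "74As": 17.77,
    "71As": 65.30 / 24,   # quoted as 65.30 hours
    "72As": 26.0 / 24,    # quoted as 26.0 hours
    "76As": 26.26 / 24,   # quoted as 26.26 hours
    "77As": 38.83 / 24,   # quoted as 38.83 hours
}
for isotope, days in sorted(half_lives_days.items(), key=lambda kv: -kv[1]):
    print(f"{isotope}: {days:6.2f} days")
```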
At least 10 nuclear isomers have been described, ranging in atomic mass from 66 to 84. The most stable of arsenic's isomers is 68mAs with a half-life of 111 seconds.
Chemistry
Arsenic has a similar electronegativity and ionization energies to its lighter congener phosphorus and accordingly readily forms covalent molecules with most of the nonmetals. Though stable in dry air, arsenic forms a golden-bronze tarnish upon exposure to humidity which eventually becomes a black surface layer. When heated in air, arsenic oxidizes to arsenic trioxide; the fumes from this reaction have an odor resembling garlic. This odor can be detected on striking arsenide minerals such as arsenopyrite with a hammer. It burns in oxygen to form arsenic trioxide and arsenic pentoxide, which have the same structure as the more well-known phosphorus compounds, and in fluorine to give arsenic pentafluoride. Arsenic (and some arsenic compounds) sublimes upon heating at atmospheric pressure, converting directly to a gaseous form without an intervening liquid state at 887 K (615 °C). The triple point is 3.63 MPa and 1,090 K (817 °C). Arsenic makes arsenic acid with concentrated nitric acid, arsenous acid with dilute nitric acid, and arsenic trioxide with concentrated sulfuric acid; however, it does not react with water, alkalis, or non-oxidising acids. Arsenic reacts with metals to form arsenides, though these are not ionic compounds containing the As3− ion as the formation of such an anion would be highly endothermic and even the group 1 arsenides have properties of intermetallic compounds. Like germanium, selenium, and bromine, which like arsenic succeed the 3d transition series, arsenic is much less stable in the group oxidation state of +5 than its vertical neighbors phosphorus and antimony, and hence arsenic pentoxide and arsenic acid are potent oxidizers.
Compounds
Compounds of arsenic resemble in some respects those of phosphorus, which occupies the same group (column) of the periodic table. The most common oxidation states for arsenic are: −3 in the arsenides, which are alloy-like intermetallic compounds, +3 in the arsenites, and +5 in the arsenates and most organoarsenic compounds. Arsenic also bonds readily to itself, as seen in the square As4 rings in the mineral skutterudite. In the +3 oxidation state, arsenic is typically pyramidal owing to the influence of the lone pair of electrons.
Inorganic compounds
One of the simplest arsenic compounds is the trihydride, the highly toxic, flammable, pyrophoric arsine (AsH3). This compound is generally regarded as stable, since at room temperature it decomposes only slowly. At temperatures of 250–300 °C decomposition to arsenic and hydrogen is rapid. Several factors, such as humidity, presence of light and certain catalysts (namely aluminium) facilitate the rate of decomposition. It oxidises readily in air to form arsenic trioxide and water, and analogous reactions take place with sulfur and selenium instead of oxygen.
Arsenic forms colorless, odorless, crystalline oxides As2O3 ("white arsenic") and As2O5, which are hygroscopic and readily soluble in water to form acidic solutions. Arsenic(V) acid is a weak acid and its salts are called arsenates; these are the most common form of arsenic contamination of groundwater, a problem that affects many people. Synthetic arsenates include Scheele's Green (cupric hydrogen arsenate, acidic copper arsenate), calcium arsenate, and lead hydrogen arsenate. These three have been used as agricultural insecticides and poisons.
The protonation steps between the arsenate and arsenic acid are similar to those between phosphate and phosphoric acid. Unlike phosphorous acid, arsenous acid is genuinely tribasic, with the formula As(OH)3.
A broad variety of sulfur compounds of arsenic are known. Orpiment (As2S3) and realgar (As4S4) are somewhat abundant and were formerly used as painting pigments. In As4S4, arsenic has a formal oxidation state of +2; the cage features As–As bonds so that the total covalency of arsenic is still 3. Both orpiment and realgar, as well as As4S3, have selenium analogs; the analogous As2Te3 is known as the mineral kalgoorlieite, and the anion As2Te− is known as a ligand in cobalt complexes.
All trihalides of arsenic(III) are well known except the astatide, which is unknown. Arsenic pentafluoride (AsF5) is the only important pentahalide, reflecting the lower stability of the +5 oxidation state; even so, it is a very strong fluorinating and oxidizing agent. (The pentachloride is stable only below −50 °C, at which temperature it decomposes to the trichloride, releasing chlorine gas.)
Alloys
Arsenic is used as the group 5 element in the III-V semiconductors gallium arsenide, indium arsenide, and aluminium arsenide. The valence electron count of GaAs is the same as a pair of Si atoms, but the band structure is completely different which results in distinct bulk properties. Other arsenic alloys include the II-V semiconductor cadmium arsenide.
Organoarsenic compounds
A large variety of organoarsenic compounds are known. Several were developed as chemical warfare agents during World War I, including vesicants such as lewisite and vomiting agents such as adamsite. Cacodylic acid, which is of historic and practical interest, arises from the methylation of arsenic trioxide, a reaction that has no analogy in phosphorus chemistry. Cacodyl was the first organometallic compound known (even though arsenic is not a true metal) and was named from the Greek κακωδία "stink" for its offensive odor; it is very poisonous.
Occurrence and production
Arsenic comprises about 1.5 ppm (0.00015%) of the Earth's crust, and is the 53rd most abundant element. Typical background concentrations of arsenic do not exceed 3 ng/m3 in the atmosphere; 100 mg/kg in soil; 400 μg/kg in vegetation; 10 μg/L in freshwater and 1.5 μg/L in seawater.
Minerals with the formula MAsS and MAs2 (M = Fe, Ni, Co) are the dominant commercial sources of arsenic, together with realgar (an arsenic sulfide mineral) and native (elemental) arsenic. An illustrative mineral is arsenopyrite (FeAsS), which is structurally related to iron pyrite. Many minor As-containing minerals are known. Arsenic also occurs in various organic forms in the environment.
In 2014, China was the top producer of white arsenic with almost 70% world share, followed by Morocco, Russia, and Belgium, according to the British Geological Survey and the United States Geological Survey. Most arsenic refinement operations in the US and Europe have closed over environmental concerns. Arsenic is found in the smelter dust from copper, gold, and lead smelters, and is recovered primarily from copper refinement dust.
On roasting arsenopyrite in air, arsenic sublimes as arsenic(III) oxide leaving iron oxides, while roasting without air results in the production of gray arsenic. Further purification from sulfur and other chalcogens is achieved by sublimation in vacuum, in a hydrogen atmosphere, or by distillation from molten lead-arsenic mixture.
History
The word arsenic has its origin in the Syriac word zarnika, from Arabic al-zarnīḵ 'the orpiment', based on Persian zarnikh, meaning "yellow" (literally "gold-colored", from zar, "gold") and hence "(yellow) orpiment". It was adopted into Greek (using folk etymology) as arsenikon (ἀρσενικόν) – a neuter form of the Greek adjective arsenikos (ἀρσενικός), meaning "male", "virile". Latin speakers adopted the Greek term as arsenicum, which in French ultimately became arsenic, whence the English word "arsenic".
Arsenic sulfides (orpiment, realgar) and oxides have been known and used since ancient times. Zosimos (c. 300 AD) describes roasting sandarach (realgar) to obtain a cloud of arsenic (arsenic trioxide), which he then reduces to gray arsenic. As the symptoms of arsenic poisoning are not very specific, the substance was frequently used for murder until the advent in the 1830s of the Marsh test, a sensitive chemical test for its presence. (Another less sensitive but more general test is the Reinsch test.) Owing to its use by the ruling class to murder one another and its potency and discreetness, arsenic has been called the "poison of kings" and the "king of poisons". Arsenic became known as "the inheritance powder" due to its use in killing family members in the Renaissance era.
During the Bronze Age, arsenic was often included in the manufacture of bronze, making the alloy harder (so-called "arsenical bronze").
Jabir ibn Hayyan described the isolation of arsenic before 815 AD. Albertus Magnus (Albert the Great, 1193–1280) later isolated the element from a compound in 1250, by heating soap together with arsenic trisulfide. In 1649, Johann Schröder published two ways of preparing arsenic. Crystals of elemental (native) arsenic are found in nature, although rarely.
Cadet's fuming liquid (impure cacodyl), often claimed as the first synthetic organometallic compound, was synthesized in 1760 by Louis Claude Cadet de Gassicourt through the reaction of potassium acetate with arsenic trioxide.
In the Victorian era, women would eat "arsenic" ("white arsenic" or arsenic trioxide) mixed with vinegar and chalk to improve the complexion of their faces, making their skin paler (to show they did not work in the fields). The accidental use of arsenic in the adulteration of foodstuffs led to the Bradford sweet poisoning in 1858, which resulted in 21 deaths. From the late-18th century, wallpaper production began to use dyes made from arsenic, which was thought to increase the pigment's brightness. One account of the illness and 1821 death of Napoleon I implicates arsenic poisoning involving wallpaper.
Two arsenic pigments have been widely used since their discovery – Scheele's Green in 1775 and Paris Green in 1814. After the toxicity of arsenic became widely known, these chemicals were used less often as pigments and more often as insecticides. In the 1860s, an arsenic byproduct of dye production, London Purple, was widely used. This was a solid mixture of arsenic trioxide, aniline, lime, and ferrous oxide, insoluble in water and very toxic by inhalation or ingestion, but it was later replaced with Paris Green, another arsenic-based compound. With better understanding of the toxicology mechanism, two other compounds were used starting in the 1890s. Arsenite of lime and arsenate of lead were used widely as insecticides until the discovery of DDT in 1942.
Applications
Agricultural
The toxicity of arsenic to insects, bacteria, and fungi led to its use as a wood preservative. In the 1930s, a process of treating wood with chromated copper arsenate (also known as CCA or Tanalith) was invented, and for decades, this treatment was the most extensive industrial use of arsenic. An increased appreciation of the toxicity of arsenic led to a ban of CCA in consumer products in 2004, initiated by the European Union and United States. However, CCA remains in heavy use in other countries (such as on Malaysian rubber plantations).
Arsenic was also used in various agricultural insecticides and poisons. For example, lead hydrogen arsenate was a common insecticide on fruit trees, but contact with the compound sometimes resulted in brain damage among those working the sprayers. In the second half of the 20th century, monosodium methyl arsenate (MSMA) and disodium methyl arsenate (DSMA) – less toxic organic forms of arsenic – replaced lead arsenate in agriculture. These organic arsenicals were in turn phased out in the United States by 2013 in all agricultural activities except cotton farming.
The biogeochemistry of arsenic is complex and includes various adsorption and desorption processes. The toxicity of arsenic is connected to its solubility and is affected by pH. Arsenite () is more soluble than arsenate () and is more toxic; however, at a lower pH, arsenate becomes more mobile and toxic. It was found that addition of sulfur, phosphorus, and iron oxides to high-arsenite soils greatly reduces arsenic phytotoxicity.
Arsenic is used as a feed additive in poultry and swine production; in particular, it was used in the U.S. until 2015 to increase weight gain, improve feed efficiency, and prevent disease. An example is roxarsone, which had been used as a broiler starter by about 70% of U.S. broiler growers. In 2011, Alpharma, a subsidiary of Pfizer Inc., which produces roxarsone, voluntarily suspended sales of the drug in response to studies showing elevated levels of inorganic arsenic, a carcinogen, in treated chickens. A successor to Alpharma, Zoetis, continued to sell nitarsone until 2015, primarily for use in turkeys.
A 2006 study of the remains of the Australian racehorse, Phar Lap, determined that the 1932 death of the famous champion was caused by a massive overdose of arsenic. Sydney veterinarian Percy Sykes stated, "In those days, arsenic was quite a common tonic, usually given in the form of a solution (Fowler's Solution) ... It was so common that I'd reckon 90 per cent of the horses had arsenic in their system."
Medical use
During the 17th, 18th, and 19th centuries, a number of arsenic compounds were used as medicines, including arsphenamine (by Paul Ehrlich) and arsenic trioxide (by Thomas Fowler). Arsphenamine, as well as neosalvarsan, was indicated for syphilis, but has been superseded by modern antibiotics. However, arsenicals such as melarsoprol are still used for the treatment of trypanosomiasis, since although these drugs have the disadvantage of severe toxicity, the disease is almost uniformly fatal if untreated.
Arsenic trioxide has been used in a variety of ways since the 15th century, most commonly in the treatment of cancer, but also in medications as diverse as Fowler's solution in psoriasis. The US Food and Drug Administration in the year 2000 approved this compound for the treatment of patients with acute promyelocytic leukemia that is resistant to all-trans retinoic acid.
A 2008 paper reports success in locating tumors using arsenic-74 (a positron emitter). This isotope produces clearer PET scan images than the previous radioactive agent, iodine-124, because the body tends to transport iodine to the thyroid gland producing signal noise. Nanoparticles of arsenic have shown ability to kill cancer cells with lesser cytotoxicity than other arsenic formulations.
In subtoxic doses, soluble arsenic compounds act as stimulants, and were once popular in small doses as medicine in the mid-18th to 19th centuries; their use as stimulants was especially prevalent in sport animals such as racehorses and in work dogs.
Alloys
The main use of arsenic is in alloying with lead. Lead components in car batteries are strengthened by the presence of a very small percentage of arsenic. Dezincification of brass (a copper-zinc alloy) is greatly reduced by the addition of arsenic. "Phosphorus Deoxidized Arsenical Copper" with an arsenic content of 0.3% has an increased corrosion stability in certain environments. Gallium arsenide is an important semiconductor material, used in integrated circuits. Circuits made from GaAs are much faster (but also much more expensive) than those made from silicon. Unlike silicon, GaAs has a direct bandgap, and can be used in laser diodes and LEDs to convert electrical energy directly into light.
Military
After World War I, the United States built a stockpile of 20,000 tons of weaponized lewisite (ClCH=CHAsCl2), an organoarsenic vesicant (blister agent) and lung irritant. The stockpile was neutralized with bleach and dumped into the Gulf of Mexico in the 1950s. During the Vietnam War, the United States used Agent Blue, a mixture of sodium cacodylate and its acid form, as one of the rainbow herbicides to deprive North Vietnamese soldiers of foliage cover and rice.
Other uses
Copper acetoarsenite was used as a green pigment known under many names, including Paris Green and Emerald Green. It caused numerous arsenic poisonings. Scheele's Green, a copper arsenate, was used in the 19th century as a coloring agent in sweets.
Arsenic is used in bronzing and pyrotechnics.
As much as 2% of produced arsenic is used in lead alloys for lead shot and bullets.
Arsenic is added in small quantities to alpha-brass to make it dezincification-resistant. This grade of brass is used in plumbing fittings and other wet environments.
Arsenic is used for taxonomic sample preservation, and was historically used in embalming fluids.
Arsenic was used as an opacifier in ceramics, creating white glazes.
Until recently, arsenic was used in optical glass. Modern glass manufacturers, under pressure from environmentalists, have ceased using both arsenic and lead.
In computer chips, arsenic is used as an n-type dopant.
Biological role
Bacteria
Some species of bacteria obtain their energy in the absence of oxygen by oxidizing various fuels while reducing arsenate to arsenite. Under oxidative environmental conditions some bacteria use arsenite as fuel, which they oxidize to arsenate. The enzymes involved are known as arsenate reductases (Arr).
In 2008, bacteria were discovered that employ a version of photosynthesis in the absence of oxygen with arsenites as electron donors, producing arsenates (just as ordinary photosynthesis uses water as electron donor, producing molecular oxygen). Researchers conjecture that, over the course of history, these photosynthesizing organisms produced the arsenates that allowed the arsenate-reducing bacteria to thrive. One strain, PHS-1, has been isolated and is related to the gammaproteobacterium Ectothiorhodospira shaposhnikovii. The mechanism is unknown, but an encoded Arr enzyme may function in reverse to its known homologues.
In 2011, it was postulated that a strain of Halomonadaceae could be grown in the absence of phosphorus if that element were substituted with arsenic, exploiting the fact that the arsenate and phosphate anions are similar structurally. The study was widely criticised and subsequently refuted by independent research groups.
Essential trace element in higher animals
Arsenic is understood to be an essential trace mineral in birds as it is involved in the synthesis of methionine metabolites, with feeding recommendations being between 0.012 and 0.050 mg/kg.
Some evidence indicates that arsenic is an essential trace mineral in mammals. However, the biological function is not known.
Heredity
Arsenic has been linked to epigenetic changes, heritable changes in gene expression that occur without changes in DNA sequence. These include DNA methylation, histone modification, and RNA interference. Toxic levels of arsenic cause significant DNA hypermethylation of tumor suppressor genes p16 and p53, thus increasing risk of carcinogenesis. These epigenetic events have been studied in vitro using human kidney cells and in vivo using rat liver cells and peripheral blood leukocytes in humans. Inductively coupled plasma mass spectrometry (ICP-MS) is used to detect precise levels of intracellular arsenic and other arsenic bases involved in epigenetic modification of DNA. Studies investigating arsenic as an epigenetic factor can be used to develop precise biomarkers of exposure and susceptibility.
The Chinese brake fern (Pteris vittata) hyperaccumulates arsenic from the soil into its leaves and has a proposed use in phytoremediation.
Biomethylation
Inorganic arsenic and its compounds, upon entering the food chain, are progressively metabolized through a process of methylation. For example, the mold Scopulariopsis brevicaulis produces trimethylarsine if inorganic arsenic is present. The organic compound arsenobetaine is found in some marine foods such as fish and algae, and also in mushrooms in larger concentrations. The average person's intake is about 10–50 µg/day. Values of about 1000 µg are not unusual following consumption of fish or mushrooms, but there is little danger in eating fish because this arsenic compound is nearly non-toxic.
Environmental issues
Exposure
Naturally occurring sources of human exposure include volcanic ash, weathering of minerals and ores, and mineralized groundwater. Arsenic is also found in food, water, soil, and air. Arsenic is absorbed by all plants, but is more concentrated in leafy vegetables, rice, apple and grape juice, and seafood. An additional route of exposure is inhalation of atmospheric gases and dusts.
During the Victorian era, arsenic was widely used in home decor, especially wallpapers.
Occurrence in drinking water
Extensive arsenic contamination of groundwater has led to widespread arsenic poisoning in Bangladesh and neighboring countries. It is estimated that approximately 57 million people in the Bengal basin are drinking groundwater with arsenic concentrations elevated above the World Health Organization's standard of 10 parts per billion (ppb). However, a study of cancer rates in Taiwan suggested that significant increases in cancer mortality appear only at levels above 150 ppb. The arsenic in the groundwater is of natural origin, and is released from the sediment into the groundwater, caused by the anoxic conditions of the subsurface. This groundwater was used after local and western NGOs and the Bangladeshi government undertook a massive shallow tube well drinking-water program in the late twentieth century. This program was designed to prevent drinking of bacteria-contaminated surface waters, but failed to test for arsenic in the groundwater. Many other countries and districts in Southeast Asia, such as Vietnam and Cambodia, have geological environments that produce groundwater with a high arsenic content. Arsenicosis was reported in Nakhon Si Thammarat, Thailand, in 1987, and the Chao Phraya River probably contains high levels of naturally occurring dissolved arsenic without being a public health problem because much of the public uses bottled water. In Pakistan, more than 60 million people are exposed to arsenic-polluted drinking water, according to a 2017 report in Science. Podgorski's team investigated more than 1,200 samples; more than 66% exceeded the WHO guideline of 10 ppb.
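Surveys such as the Pakistani one reduce to comparing measured concentrations against a guideline value; a minimal screening sketch in Python, where the 10 ppb threshold is from this section and the sample values are invented for illustration:

```python
WHO_GUIDELINE_PPB = 10  # WHO drinking-water guideline quoted above

def share_exceeding(samples_ppb):
    """Fraction of well samples above the WHO guideline."""
    over = sum(1 for c in samples_ppb if c > WHO_GUIDELINE_PPB)
    return over / len(samples_ppb)

# Hypothetical well measurements (ppb), purely illustrative
samples = [2, 14, 55, 8, 160, 31, 9, 12]
print(f"{share_exceeding(samples):.0%} of samples exceed the guideline")
```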
Since the 1980s, residents of the Ba Men region of Inner Mongolia, China have been chronically exposed to arsenic through drinking water from contaminated wells. A 2009 research study observed an elevated presence of skin lesions among residents with well water arsenic concentrations between 5 and 10 µg/L, suggesting that arsenic-induced toxicity may occur at relatively low concentrations with chronic exposure. Overall, 20 of China's 34 provinces have high arsenic concentrations in the groundwater supply, potentially exposing 19 million people to hazardous drinking water.
A study by IIT Kharagpur found high levels of arsenic in the groundwater of 20% of India's land area, exposing more than 250 million people. States such as Punjab, Bihar, West Bengal, Assam, Haryana, Uttar Pradesh, and Gujarat have the highest land area exposed to arsenic.
In the United States, arsenic is most commonly found in the ground waters of the southwest. Parts of New England, Michigan, Wisconsin, Minnesota and the Dakotas are also known to have significant concentrations of arsenic in ground water. Increased levels of skin cancer have been associated with arsenic exposure in Wisconsin, even at levels below the 10 ppb drinking water standard. According to a recent film funded by the US Superfund, millions of private wells have unknown arsenic levels, and in some areas of the US, more than 20% of the wells may contain levels that exceed established limits.
Low-level exposure to arsenic at concentrations of 100 ppb (i.e., above the 10 ppb drinking water standard) compromises the initial immune response to H1N1 or swine flu infection according to NIEHS-supported scientists. The study, conducted in laboratory mice, suggests that people exposed to arsenic in their drinking water may be at increased risk for more serious illness or death from the virus.
Some Canadians are drinking water that contains inorganic arsenic. Water from privately dug wells is most at risk of containing inorganic arsenic, and preliminary well water analysis typically does not test for arsenic. Researchers at the Geological Survey of Canada have modeled relative variation in natural arsenic hazard potential for the province of New Brunswick. This study has important implications for potable water and health concerns relating to inorganic arsenic.
Epidemiological evidence from Chile shows a dose-dependent connection between chronic arsenic exposure and various forms of cancer, in particular when other risk factors, such as cigarette smoking, are present. These effects have been demonstrated at contaminations less than 50 ppb. Arsenic is itself a constituent of tobacco smoke.
Analyzing multiple epidemiological studies on inorganic arsenic exposure suggests a small but measurable increase in risk for bladder cancer at 10 ppb. According to Peter Ravenscroft of the Department of Geography at the University of Cambridge, roughly 80 million people worldwide consume between 10 and 50 ppb arsenic in their drinking water. If they all consumed exactly 10 ppb arsenic in their drinking water, the multiple-study analysis cited above would predict an additional 2,000 cases of bladder cancer alone. This represents a clear underestimate of the overall impact, since it does not include lung or skin cancer and explicitly underestimates the exposure. Those exposed to levels of arsenic above the current WHO standard should weigh the costs and benefits of arsenic remediation.
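The 2,000-case figure is straightforward arithmetic on the exposed population; the sketch below back-calculates the implied per-person excess risk from the numbers quoted in this paragraph:

```python
population = 80_000_000   # people drinking 10-50 ppb water (quoted above)
extra_cases = 2_000       # predicted additional bladder cancers at 10 ppb

# Implied excess bladder-cancer risk per person at exactly 10 ppb
excess_risk = extra_cases / population
print(f"implied excess risk: {excess_risk:.1e} per person")  # 2.5e-05
```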
Early (1973) evaluations of the processes for removing dissolved arsenic from drinking water demonstrated the efficacy of co-precipitation with either iron or aluminium oxides. In particular, iron as a coagulant was found to remove arsenic with an efficacy exceeding 90%. Several adsorptive media systems have been approved for use at point-of-service in a study funded by the United States Environmental Protection Agency (US EPA) and the National Science Foundation (NSF). A team of European and Indian scientists and engineers have set up six arsenic treatment plants in West Bengal based on in-situ remediation method (SAR Technology). This technology does not use any chemicals and arsenic is left in an insoluble form (+5 state) in the subterranean zone by recharging aerated water into the aquifer and developing an oxidation zone that supports arsenic oxidizing micro-organisms. This process does not produce any waste stream or sludge and is relatively cheap.
Another effective and inexpensive method to avoid arsenic contamination is to sink wells 500 feet or deeper to reach purer waters. A 2011 study funded by the US National Institute of Environmental Health Sciences' Superfund Research Program shows that deep sediments can remove arsenic and take it out of circulation. In this process, called adsorption, arsenic sticks to the surfaces of deep sediment particles and is naturally removed from the ground water.
Magnetic separations of arsenic at very low magnetic field gradients with high-surface-area and monodisperse magnetite (Fe3O4) nanocrystals have been demonstrated in point-of-use water purification. Using the high specific surface area of Fe3O4 nanocrystals, the mass of waste associated with arsenic removal from water has been dramatically reduced.
Epidemiological studies have suggested a correlation between chronic consumption of drinking water contaminated with arsenic and the incidence of all leading causes of mortality. The literature indicates that arsenic exposure is causative in the pathogenesis of diabetes.
Chaff-based filters have recently been shown to reduce the arsenic content of water to 3 µg/L. This may find applications in areas where the potable water is extracted from underground aquifers.
San Pedro de Atacama
For several centuries, the people of San Pedro de Atacama in Chile have been drinking water that is contaminated with arsenic, and some evidence suggests they have developed some immunity.
Hazard maps for contaminated groundwater
Around one-third of the world's population drinks water from groundwater resources. Of this, about 10 percent, approximately 300 million people, obtain water from groundwater resources that are contaminated with unhealthy levels of arsenic or fluoride. These trace elements derive mainly from minerals and ions in the ground.
Redox transformation of arsenic in natural waters
Arsenic is unique among the trace metalloids and oxyanion-forming trace metals (e.g. As, Se, Sb, Mo, V, Cr, U, Re). It is sensitive to mobilization at pH values typical of natural waters (pH 6.5–8.5) under both oxidizing and reducing conditions. Arsenic can occur in the environment in several oxidation states (−3, 0, +3 and +5), but in natural waters it is mostly found in inorganic forms as oxyanions of trivalent arsenite [As(III)] or pentavalent arsenate [As(V)]. Organic forms of arsenic are produced by biological activity, mostly in surface waters, but are rarely quantitatively important. Organic arsenic compounds may, however, occur where waters are significantly impacted by industrial pollution.
Arsenic may be solubilized by various processes. When pH is high, arsenic may be released from surface binding sites that lose their positive charge. When water level drops and sulfide minerals are exposed to air, arsenic trapped in sulfide minerals can be released into water. When organic carbon is present in water, bacteria are fed by directly reducing As(V) to As(III) or by reducing the element at the binding site, releasing inorganic arsenic.
The aquatic transformations of arsenic are affected by pH, reduction-oxidation potential, organic matter concentration and the concentrations and forms of other elements, especially iron and manganese. The main factors are pH and the redox potential. Generally, the main forms of arsenic under oxic conditions are H3AsO4, H2AsO4−, HAsO42−, and AsO43− at pH below 2, 2–7, 7–11 and above 11, respectively. Under reducing conditions, H3AsO3 is predominant at pH 2–9.
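The pH windows just listed amount to a lookup table for the dominant oxic species; a minimal Python sketch encoding the ranges from this paragraph:

```python
def dominant_oxic_species(ph: float) -> str:
    """Dominant arsenic species in oxic water, per the pH ranges above."""
    if ph < 2:
        return "H3AsO4"
    if ph <= 7:
        return "H2AsO4-"
    if ph <= 11:
        return "HAsO4(2-)"
    return "AsO4(3-)"

# Typical natural waters (pH 6.5-8.5) straddle the H2AsO4-/HAsO4(2-) boundary
print(dominant_oxic_species(6.5), dominant_oxic_species(8.5))
```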
Oxidation and reduction affects the migration of arsenic in subsurface environments. Arsenite is the most stable soluble form of arsenic in reducing environments and arsenate, which is less mobile than arsenite, is dominant in oxidizing environments at neutral pH. Therefore, arsenic may be more mobile under reducing conditions. The reducing environment is also rich in organic matter which may enhance the solubility of arsenic compounds. As a result, the adsorption of arsenic is reduced and dissolved arsenic accumulates in groundwater. That is why the arsenic content is higher in reducing environments than in oxidizing environments.
The presence of sulfur is another factor that affects the transformation of arsenic in natural water. Arsenic can precipitate when metal sulfides form. In this way, arsenic is removed from the water and its mobility decreases. When oxygen is present, bacteria oxidize reduced sulfur to generate energy, potentially releasing bound arsenic.
Redox reactions involving Fe also appear to be essential factors in the fate of arsenic in aquatic systems. The reduction of iron oxyhydroxides plays a key role in the release of arsenic to water. So arsenic can be enriched in water with elevated Fe concentrations. Under oxidizing conditions, arsenic can be mobilized from pyrite or iron oxides especially at elevated pH. Under reducing conditions, arsenic can be mobilized by reductive desorption or dissolution when associated with iron oxides. The reductive desorption occurs under two circumstances. One is when arsenate is reduced to arsenite which adsorbs to iron oxides less strongly. The other results from a change in the charge on the mineral surface which leads to the desorption of bound arsenic.
Some species of bacteria catalyze redox transformations of arsenic. Dissimilatory arsenate-respiring prokaryotes (DARP) speed up the reduction of As(V) to As(III). DARP use As(V) as the electron acceptor of anaerobic respiration and obtain energy to survive. Other organic and inorganic substances can be oxidized in this process. Chemoautotrophic arsenite oxidizers (CAO) and heterotrophic arsenite oxidizers (HAO) convert As(III) into As(V). CAO combine the oxidation of As(III) with the reduction of oxygen or nitrate, and use the energy obtained to fix CO2 and produce organic carbon. HAO cannot obtain energy from As(III) oxidation; this process may be an arsenic detoxification mechanism for the bacteria.
Equilibrium thermodynamic calculations predict that As(V) concentrations should be greater than As(III) concentrations in all but strongly reducing conditions, i.e. where SO42− reduction is occurring. However, abiotic redox reactions of arsenic are slow. Oxidation of As(III) by dissolved O2 is a particularly slow reaction. For example, Johnson and Pilson (1975) gave half-lives for the oxygenation of As(III) in seawater ranging from several months to a year. In other studies, As(V)/As(III) ratios were stable over periods of days or weeks during water sampling when no particular care was taken to prevent oxidation, again suggesting relatively slow oxidation rates. Cherry found from experimental studies that the As(V)/As(III) ratios were stable in anoxic solutions for up to 3 weeks but that gradual changes occurred over longer timescales. Sterile water samples have been observed to be less susceptible to speciation changes than non-sterile samples. Oremland found that the reduction of As(V) to As(III) in Mono Lake was rapidly catalyzed by bacteria, with rate constants ranging from 0.02 to 0.3 day−1.
Wood preservation in the US
As of 2002, US-based industries consumed 19,600 metric tons of arsenic. Ninety percent of this was used for treatment of wood with chromated copper arsenate (CCA). In 2007, 50% of the 5,280 metric tons of consumption was still used for this purpose. In the United States, the voluntary phasing-out of arsenic in production of consumer products and residential and general consumer construction products began on 31 December 2003, and alternative chemicals are now used, such as Alkaline Copper Quaternary, borates, copper azole, cyproconazole, and propiconazole.
Although discontinued, this application is also one of the most concerning to the general public. The vast majority of older pressure-treated wood was treated with CCA. CCA lumber is still in widespread use in many countries, and was heavily used during the latter half of the 20th century as a structural and outdoor building material. Although the use of CCA lumber was banned in many areas after studies showed that arsenic could leach out of the wood into the surrounding soil (from playground equipment, for instance), a risk is also presented by the burning of older CCA timber. The direct or indirect ingestion of wood ash from burnt CCA lumber has caused fatalities in animals and serious poisonings in humans; the lethal human dose is approximately 20 grams of ash. Scrap CCA lumber from construction and demolition sites may be inadvertently used in commercial and domestic fires. Protocols for safe disposal of CCA lumber are not consistent throughout the world. Widespread landfill disposal of such timber raises some concern, but other studies have shown no arsenic contamination in the groundwater.
Mapping of industrial releases in the US
One tool that maps the location (and other information) of arsenic releases in the United States is TOXMAP. TOXMAP is a Geographic Information System (GIS) from the Division of Specialized Information Services of the United States National Library of Medicine (NLM) funded by the US Federal Government. With marked-up maps of the United States, TOXMAP enables users to visually explore data from the United States Environmental Protection Agency's (EPA) Toxics Release Inventory and Superfund Basic Research Programs. TOXMAP's chemical and environmental health information is taken from NLM's Toxicology Data Network (TOXNET), PubMed, and from other authoritative sources.
Bioremediation
Physical, chemical, and biological methods have been used to remediate arsenic-contaminated water. Bioremediation is said to be cost-effective and environmentally friendly. Bioremediation of ground water contaminated with arsenic aims to convert arsenite, the form of arsenic more toxic to humans, to arsenate. Arsenate (+5 oxidation state) is the dominant form of arsenic in surface water, while arsenite (+3 oxidation state) is the dominant form in hypoxic to anoxic environments; arsenite is also more soluble and mobile than arsenate. Many species of bacteria can transform arsenite to arsenate in anoxic conditions by using arsenite as an electron donor, a useful reaction for ground water remediation. Another bioremediation strategy is phytoremediation, using plants that accumulate arsenic in their tissues, although the disposal of the contaminated plant material must then be considered.
Bioremediation requires careful evaluation and design in accordance with existing conditions. Some sites may require the addition of an electron acceptor while others require microbe supplementation (bioaugmentation). Regardless of the method used, only constant monitoring can prevent future contamination.
Arsenic removal
Coagulation and flocculation
Coagulation and flocculation are closely related processes commonly used to remove arsenate from water. Because arsenate ions carry a net negative charge, charge repulsion makes them settle slowly or not at all. In coagulation, a positively charged coagulant such as an iron or aluminium salt (commonly used salts: FeCl3, Fe2(SO4)3, Al2(SO4)3) neutralises the negatively charged arsenate, enabling it to settle. Flocculation follows, in which a flocculant bridges the smaller particles and allows the aggregates to precipitate out of the water. However, such methods may be ineffective for arsenite, because As(III) exists as uncharged arsenious acid, H3AsO3, at near-neutral pH.
The major drawbacks of coagulation and flocculation are the costly disposal of the arsenate-concentrated sludge and the possibility of secondary contamination of the environment. Moreover, coagulants such as iron salts may themselves introduce ion contamination exceeding safety levels.
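As a rough illustration of the quantities involved, the sketch below compares an assumed FeCl3 dose with the arsenate present in a hypothetical raw water; the dose and the arsenic level are illustrative assumptions, not design values:

```python
# Back-of-the-envelope comparison of an assumed FeCl3 coagulant dose with
# the arsenate actually present; an illustration, not a treatment design.
M_FECL3 = 162.2                      # g/mol
M_AS = 74.92                         # g/mol

dose_fecl3_mg_per_l = 10.0           # assumed coagulant dose, mg/L
arsenic_ug_per_l = 50.0              # assumed raw-water As level, ug/L

fe_umol = dose_fecl3_mg_per_l * 1000 / M_FECL3   # umol/L of Fe added
as_umol = arsenic_ug_per_l / M_AS                # umol/L of As present

print(f"Fe added  : {fe_umol:.1f} umol/L")
print(f"As present: {as_umol:.2f} umol/L")
print(f"Fe:As molar ratio ~ {fe_umol / as_umol:.0f}")
# The large excess (~90:1 here) reflects that removal works by sorption and
# co-precipitation onto hydroxide flocs rather than 1:1 charge pairing.
```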
Toxicity and precautions
Arsenic and many of its compounds are especially potent poisons. Small amounts of arsenic can be detected by pharmacopoeial methods, which include reduction of arsenic to arsine with the help of zinc, confirmed with mercuric chloride paper.
Classification
Elemental arsenic and arsenic sulfate and trioxide compounds are classified as "toxic" and "dangerous for the environment" in the European Union under directive 67/548/EEC.
The International Agency for Research on Cancer (IARC) recognizes arsenic and inorganic arsenic compounds as group 1 carcinogens, and the EU lists arsenic trioxide, arsenic pentoxide, and arsenate salts as category 1 carcinogens.
Arsenic is known to cause arsenicosis when present in drinking water, "the most common species being arsenate [As(V)] and arsenite [As(III)]".
Legal limits, food, and drink
In the United States since 2006, the maximum concentration in drinking water allowed by the Environmental Protection Agency (EPA) is 10 ppb, and the FDA set the same standard in 2005 for bottled water. The New Jersey Department of Environmental Protection set a drinking water limit of 5 ppb in 2006. The IDLH (immediately dangerous to life and health) value for arsenic metal and inorganic arsenic compounds is 5 mg/m3. The Occupational Safety and Health Administration has set the permissible exposure limit (PEL) to a time-weighted average (TWA) of 0.01 mg/m3, and the National Institute for Occupational Safety and Health (NIOSH) has set the recommended exposure limit (REL) to a 15-minute constant exposure of 0.002 mg/m3. The PEL for organic arsenic compounds is a TWA of 0.5 mg/m3.
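For quick screening, the US drinking-water limits quoted above can be collected into a small lookup table. A minimal Python sketch; the values are copied from the text and the function name is hypothetical:

```python
# US arsenic drinking-water limits quoted in the text, in ppb.
LIMITS_PPB = {
    "EPA drinking water": 10,
    "FDA bottled water": 10,
    "New Jersey drinking water": 5,
}

def check_water(sample_ppb: float) -> None:
    """Flag a water sample against each drinking-water limit."""
    for name, limit in LIMITS_PPB.items():
        status = "OVER" if sample_ppb > limit else "ok"
        print(f"{name:28s} limit {limit:>3} ppb -> {status}")

check_water(23.0)   # e.g. the FDA's former juice 'level of concern'
```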
In 2008, based on its ongoing testing of a wide variety of American foods for toxic chemicals, the U.S. Food and Drug Administration set the "level of concern" for inorganic arsenic in apple and pear juices at 23 ppb, based on non-carcinogenic effects, and began blocking importation of products in excess of this level; it also required recalls for non-conforming domestic products. In 2011, the nationally syndicated Dr. Oz television show broadcast a program highlighting tests performed by an independent lab hired by the producers. Though the methodology was disputed (it did not distinguish between organic and inorganic arsenic), the tests showed levels of arsenic up to 36 ppb. In response, the FDA tested the worst brand from the Dr. Oz show and found much lower levels. Ongoing testing found 95% of the apple juice samples were below the level of concern. Later testing by Consumer Reports showed inorganic arsenic at levels slightly above 10 ppb, and the organization urged parents to reduce consumption. In July 2013, on consideration of consumption by children, chronic exposure, and carcinogenic effect, the FDA established an "action level" of 10 ppb for apple juice, the same as the drinking water standard.
Concern about arsenic in rice in Bangladesh was raised in 2002, but at the time only Australia had a legal limit for food (one milligram per kilogram). Concern was raised about people who were eating U.S. rice exceeding WHO standards for personal arsenic intake in 2005. In 2011, the People's Republic of China set a food standard of 150 ppb for arsenic.
In the United States in 2012, testing by separate groups of researchers at the Children's Environmental Health and Disease Prevention Research Center at Dartmouth College (early in the year, focusing on urinary levels in children) and Consumer Reports (in November) found levels of arsenic in rice that resulted in calls for the FDA to set limits. The FDA released some testing results in September 2012, and as of July 2013, is still collecting data in support of a new potential regulation. It has not recommended any changes in consumer behavior.
Consumer Reports recommended:
That the EPA and FDA eliminate arsenic-containing fertilizer, drugs, and pesticides in food production;
That the FDA establish a legal limit for food;
That industry change production practices to lower arsenic levels, especially in food for children; and
That consumers test home water supplies, eat a varied diet, and cook rice with excess water, then drain it off (reducing inorganic arsenic by about one third, along with a slight reduction in vitamin content).
Evidence-based public health advocates also recommend that, given the lack of regulation or labeling for arsenic in the U.S., children should eat no more than 1.5 servings per week of rice and should not drink rice milk as part of their daily diet before age 5. They also offer recommendations for adults and infants on how to limit arsenic exposure from rice, drinking water, and fruit juice.
A 2014 World Health Organization advisory conference was scheduled to consider limits of 200–300 ppb for rice.
Reducing arsenic content in rice
In 2020, scientists assessed multiple preparation procedures of rice for their capacity to reduce arsenic content and preserve nutrients, recommending a procedure involving parboiling and water-absorption.
Occupational exposure limits
Ecotoxicity
Arsenic is bioaccumulative in many organisms, marine species in particular, but it does not appear to biomagnify significantly in food webs. In polluted areas, plant growth may be affected by root uptake of arsenate, which is a phosphate analog and therefore readily transported in plant tissues and cells. In polluted areas, uptake of the more toxic arsenite ion (found more particularly in reducing conditions) is likely in poorly-drained soils.
Toxicity in animals
Biological mechanism
Arsenic's toxicity comes from the affinity of arsenic(III) oxides for thiols. Thiols, in the form of cysteine residues and cofactors such as lipoic acid and coenzyme A, are situated at the active sites of many important enzymes.
Arsenic disrupts ATP production through several mechanisms. At the level of the citric acid cycle, arsenic inhibits lipoic acid, which is a cofactor for pyruvate dehydrogenase. By competing with phosphate, arsenate uncouples oxidative phosphorylation, thus inhibiting energy-linked reduction of NAD+, mitochondrial respiration, and ATP synthesis. Hydrogen peroxide production is also increased, which is speculated to generate reactive oxygen species and oxidative stress. These metabolic interferences lead to death from multi-system organ failure. The organ failure is presumed to result from necrotic cell death rather than apoptosis, since energy reserves are too depleted for apoptosis to occur.
Exposure risks and remediation
Occupational exposure and arsenic poisoning may occur in persons working in industries involving the use of inorganic arsenic and its compounds, such as wood preservation, glass production, nonferrous metal alloys, and electronic semiconductor manufacturing. Inorganic arsenic is also found in coke oven emissions associated with the smelter industry.
The conversion between As(III) and As(V) is a large factor in arsenic environmental contamination. According to Croal, Gralnick, Malasarn and Newman, "[the] understanding [of] what stimulates As(III) oxidation and/or limits As(V) reduction is relevant for bioremediation of contaminated sites" (Croal). The study of chemolithoautotrophic As(III) oxidizers and heterotrophic As(V) reducers can thus help clarify the oxidation and reduction of arsenic in the environment.
Treatment
Treatment of chronic arsenic poisoning is possible. British anti-lewisite (dimercaprol) is prescribed in doses of 5 mg/kg up to 300 mg every 4 hours for the first day, then every 6 hours for the second day, and finally every 8 hours for 8 additional days. However, the US Agency for Toxic Substances and Disease Registry (ATSDR) states that the long-term effects of arsenic exposure cannot be predicted. Blood, urine, hair, and nails may be tested for arsenic; however, these tests cannot foresee possible health outcomes from the exposure. Long-term exposure and consequent excretion through urine has been linked to bladder and kidney cancer, in addition to cancer of the liver, prostate, skin, lungs, and nasal cavity.
See also
Aqua Tofana
Arsenic and Old Lace
Arsenic biochemistry
Arsenic compounds
Arsenic poisoning
Arsenic toxicity
Arsenic trioxide
Fowler's solution
GFAJ-1
Grainger challenge
Hypothetical types of biochemistry
Organoarsenic chemistry
Toxic heavy metal
White arsenic
References
Bibliography
Further reading
External links
Arsenic Cancer Causing Substances, U.S. National Cancer Institute.
CTD's Arsenic page and CTD's Arsenicals page from the Comparative Toxicogenomics Database
Arsenic intoxication: general aspects and chelating agents, by Geir Bjørklund, Massimiliano Peana et al. Archives of Toxicology (2020) 94:1879–1897.
A Small Dose of Toxicology
Arsenic in groundwater Book on arsenic in groundwater by IAH's Netherlands Chapter and the Netherlands Hydrological Society
Contaminant Focus: Arsenic by the EPA.
Environmental Health Criteria for Arsenic and Arsenic Compounds, 2001 by the WHO.
National Institute for Occupational Safety and Health – Arsenic Page
Arsenic at The Periodic Table of Videos (University of Nottingham)
Chemical elements
Metalloids
Hepatotoxins
Pnictogens
Endocrine disruptors
IARC Group 1 carcinogens
Trigonal minerals
Minerals in space group 166
Teratogens
Fetotoxicants
Suspected testicular toxicants
Native element minerals
Chemical elements with rhombohedral structure
|
https://en.wikipedia.org/wiki/Antimony
|
Antimony is a chemical element with the symbol Sb (from Latin stibium) and atomic number 51. A lustrous gray metalloid, it is found in nature mainly as the sulfide mineral stibnite (Sb2S3). Antimony compounds have been known since ancient times and were powdered for use as medicine and cosmetics, often known by the Arabic name kohl. The earliest known description of the metalloid in the West was written in 1540 by Vannoccio Biringuccio.
China is the largest producer of antimony and its compounds, with most production coming from the Xikuangshan Mine in Hunan. The industrial methods for refining antimony from stibnite are roasting followed by reduction with carbon, or direct reduction of stibnite with iron.
The largest applications for metallic antimony are in alloys with lead and tin, which have improved properties for solders, bullets, and plain bearings. It improves the rigidity of lead-alloy plates in lead–acid batteries. Antimony trioxide is a prominent additive for halogen-containing flame retardants. Antimony is used as a dopant in semiconductor devices.
Characteristics
Properties
Antimony is a member of group 15 of the periodic table, one of the elements called pnictogens, and has an electronegativity of 2.05. In accordance with periodic trends, it is more electronegative than tin or bismuth, and less electronegative than tellurium or arsenic. Antimony is stable in air at room temperature, but reacts with oxygen if heated to produce antimony trioxide, Sb2O3.
Antimony is a silvery, lustrous gray metalloid with a Mohs scale hardness of 3, which is too soft to mark hard objects. Coins of antimony were issued in China's Guizhou province in 1931; durability was poor, and minting was soon discontinued. Antimony is resistant to attack by acids.
Four allotropes of antimony are known: a stable metallic form, and three metastable forms (explosive, black, and yellow). Elemental antimony is a brittle, silver-white, shiny metalloid. When slowly cooled, molten antimony crystallizes into a trigonal cell, isomorphic with the gray allotrope of arsenic. A rare explosive form of antimony can be formed from the electrolysis of antimony trichloride. When scratched with a sharp implement, an exothermic reaction occurs and white fumes are given off as metallic antimony forms; when rubbed with a pestle in a mortar, a strong detonation occurs. Black antimony is formed upon rapid cooling of antimony vapor. It has the same crystal structure as red phosphorus and black arsenic; it oxidizes in air and may ignite spontaneously. At 100 °C, it gradually transforms into the stable form. The yellow allotrope of antimony is the most unstable; it has been generated only by oxidation of stibine (SbH3) at −90 °C. Above this temperature and in ambient light, this metastable allotrope transforms into the more stable black allotrope.
Elemental antimony adopts a layered structure (space group R3̄m, No. 166) whose layers consist of fused, ruffled, six-membered rings. The nearest and next-nearest neighbors form an irregular octahedral complex, with the three atoms in each double layer slightly closer than the three atoms in the next. This relatively close packing leads to a high density of 6.697 g/cm3, but the weak bonding between the layers leads to the low hardness and brittleness of antimony.
Isotopes
Antimony has two stable isotopes: 121Sb with a natural abundance of 57.36% and 123Sb with a natural abundance of 42.64%. It also has 35 radioisotopes, of which the longest-lived is 125Sb with a half-life of 2.75 years. In addition, 29 metastable states have been characterized. The most stable of these is 120m1Sb with a half-life of 5.76 days. Isotopes that are lighter than the stable 123Sb tend to decay by β+ decay, and those that are heavier tend to decay by β− decay, with some exceptions. Antimony is the lightest element to have an isotope with an alpha decay branch, excluding 8Be and other light nuclides with beta-delayed alpha emission.
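Half-life figures like these convert into remaining fractions through N/N0 = 2^(-t/t1/2). A short Python illustration using the 2.75-year half-life of 125Sb stated above:

```python
# Remaining fraction of a radioisotope after t years: N/N0 = 2**(-t/t_half),
# evaluated for 125Sb (t_half = 2.75 years, as stated above).
def remaining_fraction(t_years: float, t_half_years: float = 2.75) -> float:
    return 2.0 ** (-t_years / t_half_years)

for t in (1, 2.75, 10, 30):
    print(f"after {t:>5} y: {remaining_fraction(t):.4f} of the 125Sb remains")
```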
Occurrence
The abundance of antimony in the Earth's crust is estimated at 0.2 parts per million, comparable to thallium at 0.5 parts per million and silver at 0.07 ppm. Even though this element is not abundant, it is found in more than 100 mineral species. Antimony is sometimes found natively (e.g. on Antimony Peak), but more frequently it is found in the sulfide stibnite (Sb2S3) which is the predominant ore mineral.
Compounds
Antimony compounds are often classified according to their oxidation state: Sb(III) and Sb(V). The +5 oxidation state is more common.
Oxides and hydroxides
Antimony trioxide is formed when antimony is burnt in air. In the gas phase, the molecule of the compound is Sb4O6, but it polymerizes upon condensing. Antimony pentoxide (Sb2O5) can be formed only by oxidation with concentrated nitric acid. Antimony also forms a mixed-valence oxide, antimony tetroxide (Sb2O4), which features both Sb(III) and Sb(V). Unlike oxides of phosphorus and arsenic, these oxides are amphoteric, do not form well-defined oxoacids, and react with acids to form antimony salts.
Antimonous acid is unknown, but its conjugate base, sodium antimonite, forms upon fusing sodium oxide and antimony trioxide. Transition metal antimonites are also known. Antimonic acid exists only as the hydrate HSb(OH)6, forming salts that contain the antimonate anion Sb(OH)6−. When a solution containing this anion is dehydrated, the precipitate contains mixed oxides.
The most important antimony ore is stibnite (Sb2S3). Other sulfide minerals include pyrargyrite (Ag3SbS3), zinkenite, jamesonite, and boulangerite. Antimony pentasulfide (Sb2S5) is non-stoichiometric and features antimony in the +3 oxidation state and S–S bonds. Several thioantimonides are known, such as [Sb6S10]2− and [Sb8S13]2−.
Halides
Antimony forms two series of halides: SbX3 and SbX5. The trihalides SbF3, SbCl3, SbBr3, and SbI3 are all molecular compounds having trigonal pyramidal molecular geometry.
The trifluoride SbF3 is prepared by the reaction of Sb2O3 with HF:
Sb2O3 + 6 HF → 2 SbF3 + 3 H2O
It is Lewis acidic and readily accepts fluoride ions to form the complex anions SbF4− and SbF52−. Molten SbF3 is a weak electrical conductor. The trichloride SbCl3 is prepared by dissolving Sb2S3 in hydrochloric acid:
Sb2S3 + 6 HCl → 2 SbCl3 + 3 H2S
Arsenic sulfides are not readily attacked by hydrochloric acid, so this method offers a route to As-free Sb.
The pentahalides SbF5 and SbCl5 have trigonal bipyramidal molecular geometry in the gas phase, but in the liquid phase SbF5 is polymeric, whereas SbCl5 is monomeric. SbF5 is a powerful Lewis acid used to make the superacid fluoroantimonic acid ("H2SbF7").
Oxyhalides are more common for antimony than for arsenic and phosphorus. Antimony trioxide dissolves in concentrated acid to form oxoantimonyl compounds such as SbOCl and (SbO)2SO4.
Antimonides, hydrides, and organoantimony compounds
Compounds in this class generally are described as derivatives of Sb3−. Antimony forms antimonides with metals, such as indium antimonide (InSb) and silver antimonide (Ag3Sb). The alkali metal and zinc antimonides, such as Na3Sb and Zn3Sb2, are more reactive. Treating these antimonides with acid produces the highly unstable gas stibine, SbH3:
Sb3− + 3 H+ → SbH3
Stibine can also be produced by treating salts with hydride reagents such as sodium borohydride. Stibine decomposes spontaneously at room temperature. Because stibine has a positive heat of formation, it is thermodynamically unstable and thus antimony does not react with hydrogen directly.
Organoantimony compounds are typically prepared by alkylation of antimony halides with Grignard reagents. A large variety of compounds are known with both Sb(III) and Sb(V) centers, including mixed chloro-organic derivatives, anions, and cations. Examples include triphenylstibine (Sb(C6H5)3) and pentaphenylantimony (Sb(C6H5)5).
History
Antimony(III) sulfide, Sb2S3, was recognized in predynastic Egypt as an eye cosmetic (kohl) as early as about 3100 BC, when the cosmetic palette was invented.
An artifact, said to be part of a vase, made of antimony dating to about 3000 BC was found at Telloh, Chaldea (part of present-day Iraq), and a copper object plated with antimony dating between 2500 BC and 2200 BC has been found in Egypt. Austen, at a lecture by Herbert Gladstone in 1892, commented that "we only know of antimony at the present day as a highly brittle and crystalline metal, which could hardly be fashioned into a useful vase, and therefore this remarkable 'find' (artifact mentioned above) must represent the lost art of rendering antimony malleable."
The British archaeologist Roger Moorey was unconvinced the artifact was indeed a vase, mentioning that Selimkhanov, after his analysis of the Tello object (published in 1975), "attempted to relate the metal to Transcaucasian natural antimony" (i.e. native metal) and that "the antimony objects from Transcaucasia are all small personal ornaments." This weakens the evidence for a lost art "of rendering antimony malleable."
The Roman scholar Pliny the Elder described several ways of preparing antimony sulfide for medical purposes in his treatise Natural History, around 77 AD. Pliny the Elder also made a distinction between "male" and "female" forms of antimony; the male form is probably the sulfide, while the female form, which is superior, heavier, and less friable, has been suspected to be native metallic antimony.
The Greek naturalist Pedanius Dioscorides mentioned that antimony sulfide could be roasted by heating by a current of air. It is thought that this produced metallic antimony.
Antimony was frequently described in alchemical manuscripts, including the Summa Perfectionis of Pseudo-Geber, written around the 14th century. A description of a procedure for isolating antimony is later given in the 1540 book De la pirotechnia by Vannoccio Biringuccio, predating the more famous 1556 book by Agricola, De re metallica. In this context Agricola has been often incorrectly credited with the discovery of metallic antimony. The book Currus Triumphalis Antimonii (The Triumphal Chariot of Antimony), describing the preparation of metallic antimony, was published in Germany in 1604. It was purported to be written by a Benedictine monk, writing under the name Basilius Valentinus in the 15th century; if it were authentic, which it is not, it would predate Biringuccio.
The metal antimony was known to German chemist Andreas Libavius in 1615 who obtained it by adding iron to a molten mixture of antimony sulfide, salt and potassium tartrate. This procedure produced antimony with a crystalline or starred surface.
With the advent of challenges to phlogiston theory, it was recognized that antimony is an element forming sulfides, oxides, and other compounds, as do other metals.
The first discovery of naturally occurring pure antimony in the Earth's crust was described by the Swedish scientist and local mine district engineer Anton von Swab in 1783; the type-sample was collected from the Sala Silver Mine in the Bergslagen mining district of Sala, Västmanland, Sweden.
Etymology
The medieval Latin form, from which the modern languages and late Byzantine Greek take their names for antimony, is antimonium. The origin of this is uncertain; all suggestions have some difficulty either of form or interpretation. The popular etymology, from ἀντίμοναχός anti-monachos or French antimoine, still has adherents; this would mean "monk-killer", and is explained by many early alchemists being monks, and antimony being poisonous. However, the low toxicity of antimony (see below) makes this unlikely.
Another popular etymology is the hypothetical Greek word ἀντίμόνος antimonos, "against aloneness", explained as "not found as metal", or "not found unalloyed". Edmund Oscar von Lippmann conjectured a hypothetical Greek word ανθήμόνιον anthemonion, which would mean "floret", and cites several examples of related Greek words (but not that one) which describe chemical or biological efflorescence.
The early uses of antimonium include the translations, in 1050–1100, by Constantine the African of Arabic medical treatises. Several authorities believe antimonium is a scribal corruption of some Arabic form; Meyerhof derives it from ithmid; other possibilities include athimar, the Arabic name of the metalloid, and a hypothetical as-stimmi, derived from or parallel to the Greek.
The standard chemical symbol for antimony (Sb) is credited to Jöns Jakob Berzelius, who derived the abbreviation from stibium.
The ancient words for antimony mostly have, as their chief meaning, kohl, the sulfide of antimony.
The Egyptians called antimony mśdmt or stm.
The Arabic word for the substance, as opposed to the cosmetic, can appear as إثمد ithmid, athmoud, othmod, or uthmod. Littré suggests the first form, which is the earliest, derives from stimmida, an accusative for stimmi. The Greek word, στίμμι (stimmi) is used by Attic tragic poets of the 5th century BC, and is possibly a loan word from Arabic or from Egyptian stm.
Production
Process
The extraction of antimony from ores depends on the quality and composition of the ore. Most antimony is mined as the sulfide; lower-grade ores are concentrated by froth flotation, while higher-grade ores are heated to 500–600 °C, the temperature at which stibnite melts and separates from the gangue minerals. Antimony can be isolated from the crude antimony sulfide by reduction with scrap iron:
Sb2S3 + 3 Fe → 2 Sb + 3 FeS
The sulfide is converted to an oxide by roasting. The product is further purified by vaporizing the volatile antimony(III) oxide, which is recovered. This sublimate is often used directly for the main applications, impurities being arsenic and sulfide. Antimony is isolated from the oxide by a carbothermal reduction:
2 Sb2O3 + 3 C → 4 Sb + 3 CO2
The lower-grade ores are reduced in blast furnaces while the higher-grade ores are reduced in reverberatory furnaces.
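The iron-reduction equation above fixes the ideal mass balance. A small Python sketch works it out for pure stibnite; real ores are lower grade and yield considerably less:

```python
# Stoichiometry of Sb2S3 + 3 Fe -> 2 Sb + 3 FeS as an idealized yield
# estimate for pure stibnite.
M_SB, M_S, M_FE = 121.76, 32.06, 55.85
M_SB2S3 = 2 * M_SB + 3 * M_S                 # ~339.7 g/mol

stibnite_g = 1000.0                          # 1 kg of pure Sb2S3
mol_sb2s3 = stibnite_g / M_SB2S3
sb_g = 2 * mol_sb2s3 * M_SB                  # metal recovered
fe_g = 3 * mol_sb2s3 * M_FE                  # scrap iron consumed

print(f"1 kg stibnite -> {sb_g:.0f} g Sb, consuming {fe_g:.0f} g Fe")
# ~717 g Sb and ~493 g Fe per kg of pure Sb2S3.
```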
Top producers and production volumes
In 2022, according to the US Geological Survey, China accounted for 54.5% of total antimony production, followed in second place by Russia with 18.2% and Tajikistan with 15.5%.
Chinese production of antimony is expected to decline as mines and smelters are closed down by the government as part of pollution control. Hurdles to economic production have risen, particularly since an environmental protection law took effect in January 2015 and revised "Emission Standards of Pollutants for Stannum, Antimony, and Mercury" came into force.
Reported production of antimony in China has fallen and is unlikely to increase in the coming years, according to the Roskill report. No significant antimony deposits in China have been developed for about ten years, and the remaining economic reserves are being rapidly depleted.
Reserves
Supply risk
For antimony-importing regions such as Europe and the U.S., antimony is considered to be a critical mineral for industrial manufacturing that is at risk of supply chain disruption. With global production coming mainly from China (74%), Tajikistan (8%), and Russia (4%), these sources are critical to supply.
European Union: Antimony is considered a critical raw material for defense, automotive, construction and textiles. The E.U. sources are 100% imported, coming mainly from Turkey (62%), Bolivia (20%) and Guatemala (7%).
United Kingdom: The British Geological Survey's 2015 risk list ranks antimony second highest (after rare earth elements) on the relative supply risk index.
United States: Antimony is a mineral commodity considered critical to the economic and national security. In 2022, no antimony was mined in the U.S.
Applications
Approximately 48% of antimony is consumed in flame retardants, 33% in lead–acid batteries, and 8% in plastics.
Flame retardants
Antimony is mainly used as the trioxide for flame-proofing compounds, always in combination with halogenated flame retardants except in halogen-containing polymers. The flame retarding effect of antimony trioxide is produced by the formation of halogenated antimony compounds, which react with hydrogen atoms, and probably also with oxygen atoms and OH radicals, thus inhibiting fire. Markets for these flame-retardants include children's clothing, toys, aircraft, and automobile seat covers. They are also added to polyester resins in fiberglass composites for such items as light aircraft engine covers. The resin will burn in the presence of an externally generated flame, but will extinguish when the external flame is removed.
Alloys
Antimony forms a highly useful alloy with lead, increasing its hardness and mechanical strength. For most applications involving lead, varying amounts of antimony are used as alloying metal. In lead–acid batteries, this addition improves plate strength and charging characteristics. For sailboats, lead keels are used to provide righting moment, ranging from 600 lbs to over 200 tons for the largest sailing superyachts; to improve hardness and tensile strength of the lead keel, antimony is mixed with lead between 2% and 5% by volume. Antimony is used in antifriction alloys (such as Babbitt metal), in bullets and lead shot, electrical cable sheathing, type metal (for example, for linotype printing machines), solder (some "lead-free" solders contain 5% Sb), in pewter, and in hardening alloys with low tin content in the manufacturing of organ pipes.
Other applications
Three other applications consume nearly all the rest of the world's supply. One application is as a stabilizer and catalyst for the production of polyethylene terephthalate. Another is as a fining agent to remove microscopic bubbles in glass, mostly for TV screens; antimony ions interact with oxygen, suppressing the latter's tendency to form bubbles. The third application is pigments.
In the 1990s antimony was increasingly being used in semiconductors as a dopant in n-type silicon wafers for diodes, infrared detectors, and Hall-effect devices. In the 1950s, the emitters and collectors of n-p-n alloy junction transistors were doped with tiny beads of a lead-antimony alloy. Indium antimonide (InSb) is used as a material for mid-infrared detectors.
Biology and medicine have few uses for antimony. Treatments containing antimony, known as antimonials, are used as emetics. Antimony compounds are used as antiprotozoan drugs. Potassium antimonyl tartrate, or tartar emetic, was once used as an anti-schistosomal drug from 1919 on. It was subsequently replaced by praziquantel. Antimony and its compounds are used in several veterinary preparations, such as anthiomaline and lithium antimony thiomalate, as a skin conditioner in ruminants. Antimony has a nourishing or conditioning effect on keratinized tissues in animals.
Antimony-based drugs, such as meglumine antimoniate, are also considered the drugs of choice for treatment of leishmaniasis in domestic animals. Besides having low therapeutic indices, the drugs have minimal penetration of the bone marrow, where some of the Leishmania amastigotes reside, and curing the disease – especially the visceral form – is very difficult. Elemental antimony as an antimony pill was once used as a medicine. It could be reused by others after ingestion and elimination.
Antimony(III) sulfide is used in the heads of some safety matches. Antimony sulfides help to stabilize the friction coefficient in automotive brake pad materials. Antimony is used in bullets, bullet tracers, paint, glass art, and as an opacifier in enamel. Antimony-124 is used together with beryllium in neutron sources; the gamma rays emitted by antimony-124 initiate the photodisintegration of beryllium. The emitted neutrons have an average energy of 24 keV. Natural antimony is used in startup neutron sources.
Historically, the powder derived from crushed antimony (kohl) has been applied to the eyes with a metal rod and with one's spittle, thought by the ancients to aid in curing eye infections. The practice is still seen in Yemen and in other Muslim countries.
Precautions
Antimony and many of its compounds are toxic, and the effects of antimony poisoning are similar to arsenic poisoning. The toxicity of antimony is far lower than that of arsenic; this might be caused by the significant differences of uptake, metabolism and excretion between arsenic and antimony. The uptake of antimony(III) or antimony(V) in the gastrointestinal tract is at most 20%. Antimony(V) is not quantitatively reduced to antimony(III) in the cell (in fact antimony(III) is oxidised to antimony(V) instead).
Since methylation of antimony does not occur, excretion of antimony(V) in urine is the main route of elimination. As with arsenic, the most serious effect of acute antimony poisoning is cardiotoxicity and the resulting myocarditis; it can also manifest as Adams–Stokes syndrome, which arsenic poisoning does not. Intoxication by antimony equivalent to 90 mg of antimony potassium tartrate dissolved from enamel has been reported to produce only short-term effects, whereas an intoxication with 6 g of antimony potassium tartrate was reported to result in death after three days.
Inhalation of antimony dust is harmful and in certain cases may be fatal; in small doses, antimony causes headaches, dizziness, and depression. Larger doses, or prolonged skin contact, may cause dermatitis or damage the kidneys and the liver, causing violent and frequent vomiting and leading to death in a few days.
Antimony is incompatible with strong oxidizing agents, strong acids, halogen acids, chlorine, or fluorine. It should be kept away from heat.
Antimony leaches from polyethylene terephthalate (PET) bottles into liquids. While levels observed for bottled water are below drinking water guidelines, fruit juice concentrates (for which no guidelines are established) produced in the UK were found to contain up to 44.7 µg/L of antimony, well above the EU limits for tap water of 5 µg/L. The guidelines are:
World Health Organization: 20 µg/L
Japan: 15 µg/L
United States Environmental Protection Agency, Health Canada and the Ontario Ministry of Environment: 6 µg/L
EU and German Federal Ministry of Environment: 5 µg/L
The tolerable daily intake (TDI) proposed by WHO is 6 µg antimony per kilogram of body weight. The immediately dangerous to life or health (IDLH) value for antimony is 50 mg/m3.
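The WHO tolerable daily intake converts into per-person allowances with simple arithmetic. In the sketch below the body weights are example values, and 44.7 µg/L is the highest UK juice-concentrate level quoted above:

```python
# WHO TDI for antimony: 6 ug per kg of body weight per day (quoted above).
TDI_UG_PER_KG = 6.0
JUICE_UG_PER_L = 44.7          # highest UK juice-concentrate level above

for label, kg in (("10 kg child", 10.0), ("70 kg adult", 70.0)):
    allowance_ug = TDI_UG_PER_KG * kg
    litres_to_tdi = allowance_ug / JUICE_UG_PER_L
    print(f"{label}: TDI = {allowance_ug:.0f} ug/day "
          f"(~{litres_to_tdi:.1f} L of that juice per day)")
```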
Toxicity
Certain compounds of antimony appear to be toxic, particularly antimony trioxide and antimony potassium tartrate. Effects may be similar to arsenic poisoning. Occupational exposure may cause respiratory irritation, pneumoconiosis, antimony spots on the skin, gastrointestinal symptoms, and cardiac arrhythmias. In addition, antimony trioxide is potentially carcinogenic to humans.
Adverse health effects have been observed in humans and animals following inhalation, oral, or dermal exposure to antimony and antimony compounds. Antimony toxicity typically occurs either due to occupational exposure, during therapy or from accidental ingestion. It is unclear if antimony can enter the body through the skin. The presence of low levels of antimony in saliva may also be associated with dental decay.
See also
Phase change memory
Notes
References
Cited sources
External links
Public Health Statement for Antimony
International Antimony Association vzw (i2a)
Chemistry in its element podcast (MP3) from the Royal Society of Chemistry's Chemistry World: Antimony
Antimony at The Periodic Table of Videos (University of Nottingham)
CDC – NIOSH Pocket Guide to Chemical Hazards – Antimony
Antimony Mineral data and specimen images
Chemical elements
Metalloids
Native element minerals
Nuclear materials
Pnictogens
Trigonal minerals
Minerals in space group 166
Materials that expand upon freezing
Chemical elements with rhombohedral structure
|
https://en.wikipedia.org/wiki/Actinium
|
Actinium is a chemical element with the symbol Ac and atomic number 89. It was first isolated by Friedrich Oskar Giesel in 1902, who gave it the name emanium; the element got its name by being wrongly identified with a substance André-Louis Debierne found in 1899 and called actinium. Actinium gave the name to the actinide series, a set of 15 elements between actinium and lawrencium in the periodic table. Together with polonium, radium, and radon, actinium was one of the first non-primordial radioactive elements to be isolated.
A soft, silvery-white radioactive metal, actinium reacts rapidly with oxygen and moisture in air, forming a white coating of actinium oxide that prevents further oxidation. As with most lanthanides and many actinides, actinium assumes oxidation state +3 in nearly all its chemical compounds. Actinium is found only in traces in uranium and thorium ores, as the isotopes 227Ac, which decays with a half-life of 21.772 years, predominantly emitting beta and sometimes alpha particles, and 228Ac, which is beta active with a half-life of 6.15 hours. One tonne of natural uranium in ore contains about 0.2 milligrams of actinium-227, and one tonne of thorium contains about 5 nanograms of actinium-228. The close similarity of the physical and chemical properties of actinium and lanthanum makes separation of actinium from the ore impractical. Instead, the element is prepared, in milligram amounts, by the neutron irradiation of 226Ra in a nuclear reactor. Owing to its scarcity, high price and radioactivity, actinium has no significant industrial use. Its current applications include a neutron source and an agent for radiation therapy.
History
André-Louis Debierne, a French chemist, announced the discovery of a new element in 1899. He separated it from pitchblende residues left by Marie and Pierre Curie after they had extracted radium. In 1899, Debierne described the substance as similar to titanium and (in 1900) as similar to thorium. Friedrich Oskar Giesel found in 1902 a substance similar to lanthanum and called it "emanium" in 1904. After a comparison of the substances' half-lives determined by Debierne, Harriet Brooks in 1904, and Otto Hahn and Otto Sackur in 1905, Debierne's chosen name for the new element was retained because it had seniority, despite the contradicting chemical properties he claimed for the element at different times.
Articles published in the 1970s and later suggest that Debierne's results published in 1904 conflict with those reported in 1899 and 1900. Furthermore, the now-known chemistry of actinium precludes its presence as anything other than a minor constituent of Debierne's 1899 and 1900 results; in fact, the chemical properties he reported make it likely that he had, instead, accidentally identified protactinium, which would not be discovered for another fourteen years, only to have it disappear due to its hydrolysis and adsorption onto his laboratory equipment. This has led some authors to advocate that Giesel alone should be credited with the discovery. A less confrontational vision of scientific discovery is proposed by Adloff. He suggests that hindsight criticism of the early publications should be mitigated by the then nascent state of radiochemistry: highlighting the prudence of Debierne's claims in the original papers, he notes that nobody can contend that Debierne's substance did not contain actinium. Debierne, who is now considered by the vast majority of historians as the discoverer, lost interest in the element and left the topic. Giesel, on the other hand, can rightfully be credited with the first preparation of radiochemically pure actinium and with the identification of its atomic number 89.
The name actinium originates from the Ancient Greek aktis, aktinos (ακτίς, ακτίνος), meaning beam or ray. Its symbol Ac is also used in abbreviations of other compounds that have nothing to do with actinium, such as acetyl, acetate and sometimes acetaldehyde.
Properties
Actinium is a soft, silvery-white, radioactive, metallic element. Its estimated shear modulus is similar to that of lead. Owing to its strong radioactivity, actinium glows in the dark with a pale blue light, which originates from the surrounding air ionized by the emitted energetic particles. Actinium has similar chemical properties to lanthanum and other lanthanides, and therefore these elements are difficult to separate when extracting from uranium ores. Solvent extraction and ion chromatography are commonly used for the separation.
The first element of the actinides, actinium gave the set its name, much as lanthanum had done for the lanthanides. The actinides are much more diverse than the lanthanides and therefore it was not until 1945 that the most significant change to Dmitri Mendeleev's periodic table since the recognition of the lanthanides, the introduction of the actinides, was generally accepted after Glenn T. Seaborg's research on the transuranium elements (although it had been proposed as early as 1892 by British chemist Henry Bassett).
Actinium reacts rapidly with oxygen and moisture in air, forming a white coating of actinium oxide that impedes further oxidation. As with most lanthanides and actinides, actinium exists in the oxidation state +3, and the Ac3+ ions are colorless in solutions. The oxidation state +3 originates from the [Rn] 6d17s2 electronic configuration of actinium, with three valence electrons that are easily donated to give the stable closed-shell structure of the noble gas radon. Although the 5f orbitals are unoccupied in an actinium atom, they can serve as valence orbitals in actinium complexes, and hence actinium is generally considered the first 5f element by authors working on it. Ac3+ is the largest of all known tripositive ions and its first coordination sphere contains approximately 10.9 ± 0.5 water molecules.
Chemical compounds
Due to actinium's intense radioactivity, only a limited number of actinium compounds are known. These include: AcF3, AcCl3, AcBr3, AcOF, AcOCl, AcOBr, Ac2S3, Ac2O3, AcPO4 and Ac(NO3)3. They all contain actinium in the oxidation state +3. In particular, the lattice constants of the analogous lanthanum and actinium compounds differ by only a few percent.
In tabulated crystallographic data for these compounds, a, b and c are lattice constants, No is the space group number and Z is the number of formula units per unit cell; densities were not measured directly but calculated from the lattice parameters.
Oxides
Actinium oxide (Ac2O3) can be obtained by heating the hydroxide at 500 °C or the oxalate at 1100 °C, in vacuum. Its crystal lattice is isotypic with the oxides of most trivalent rare-earth metals.
Halides
Actinium trifluoride can be produced either in solution or in solid reaction. The former reaction is carried out at room temperature, by adding hydrofluoric acid to a solution containing actinium ions. In the latter method, actinium metal is treated with hydrogen fluoride vapors at 700 °C in an all-platinum setup. Treating actinium trifluoride with ammonium hydroxide at 900–1000 °C yields oxyfluoride AcOF. Whereas lanthanum oxyfluoride can be easily obtained by burning lanthanum trifluoride in air at 800 °C for an hour, similar treatment of actinium trifluoride yields no AcOF and only results in melting of the initial product.
AcF3 + 2 NH3 + H2O → AcOF + 2 NH4F
Actinium trichloride is obtained by reacting actinium hydroxide or oxalate with carbon tetrachloride vapors at temperatures above 960 °C. Similar to oxyfluoride, actinium oxychloride can be prepared by hydrolyzing actinium trichloride with ammonium hydroxide at 1000 °C. However, in contrast to the oxyfluoride, the oxychloride could well be synthesized by igniting a solution of actinium trichloride in hydrochloric acid with ammonia.
Reaction of aluminium bromide and actinium oxide yields actinium tribromide:
Ac2O3 + 2 AlBr3 → 2 AcBr3 + Al2O3
and treating it with ammonium hydroxide at 500 °C results in the oxybromide AcOBr.
Other compounds
Actinium hydride was obtained by reduction of actinium trichloride with potassium at 300 °C, and its structure was deduced by analogy with the corresponding LaH2 hydride. The source of hydrogen in the reaction was uncertain.
Mixing monosodium phosphate (NaH2PO4) with a solution of actinium in hydrochloric acid yields white-colored actinium phosphate hemihydrate (AcPO4·0.5H2O), and heating actinium oxalate with hydrogen sulfide vapors at 1400 °C for a few minutes results in a black actinium sulfide, Ac2S3. The sulfide may possibly also be produced by treating actinium oxide with a mixture of hydrogen sulfide and carbon disulfide at 1000 °C.
Isotopes
Naturally occurring actinium is composed of two radioactive isotopes: 227Ac (from the radioactive family of 235U) and 228Ac (a granddaughter of 232Th). 227Ac decays mainly as a beta emitter with a very small decay energy, but in 1.38% of cases it emits an alpha particle, so it can readily be identified through alpha spectrometry. Thirty-three radioisotopes have been identified, the most stable being 227Ac with a half-life of 21.772 years, 225Ac with a half-life of 10.0 days, and 226Ac with a half-life of 29.37 hours. All remaining radioactive isotopes have half-lives below 10 hours, and the majority have half-lives shorter than one minute. The shortest-lived known isotope of actinium is 217Ac (half-life of 69 nanoseconds), which decays through alpha decay. Actinium also has two known meta states. The most significant isotopes for chemistry are 225Ac, 227Ac, and 228Ac.
Purified 227Ac comes into equilibrium with its decay products after about half a year. It decays according to its 21.772-year half-life, emitting mostly beta particles (98.62%) and some alpha particles (1.38%); the successive decay products are part of the actinium series. Owing to the low available amounts, the low energy of its beta particles (maximum 44.8 keV) and the low intensity of its alpha radiation, 227Ac is difficult to detect directly by its emission and is therefore traced via its decay products. The isotopes of actinium range in atomic weight from 204 u (204Ac) to 236 u (236Ac).
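The 21.772-year half-life implies a very high specific activity, which is one reason actinium is handled only in milligram amounts. A back-of-the-envelope Python calculation using A = λN:

```python
import math

# Specific activity A = lambda * N = (ln 2 / t_half) * (N_A / M),
# evaluated for 227Ac (t_half = 21.772 years, M ~ 227 g/mol).
N_A = 6.02214e23               # Avogadro constant, 1/mol
YEAR_S = 3.1557e7              # seconds per year

lam = math.log(2) / (21.772 * YEAR_S)        # decay constant, 1/s
atoms_per_gram = N_A / 227.0

activity_bq_per_g = lam * atoms_per_gram
print(f"227Ac: {activity_bq_per_g:.2e} Bq/g "
      f"(~{activity_bq_per_g / 3.7e10:.0f} Ci/g)")
# ~2.7e12 Bq/g, roughly 72 Ci per gram.
```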
Occurrence and synthesis
Actinium is found only in traces in uranium ores – one tonne of uranium in ore contains about 0.2 milligrams of 227Ac – and in thorium ores, which contain about 5 nanograms of 228Ac per one tonne of thorium. The actinium isotope 227Ac is a transient member of the uranium-actinium series decay chain, which begins with the parent isotope 235U (or 239Pu) and ends with the stable lead isotope 207Pb. The isotope 228Ac is a transient member of the thorium series decay chain, which begins with the parent isotope 232Th and ends with the stable lead isotope 208Pb. Another actinium isotope (225Ac) is transiently present in the neptunium series decay chain, beginning with 237Np (or 233U) and ending with thallium (205Tl) and near-stable bismuth (209Bi); even though all primordial 237Np has decayed away, it is continuously produced by neutron knock-out reactions on natural 238U.
The low natural concentration, and the close similarity of physical and chemical properties to those of lanthanum and other lanthanides, which are always abundant in actinium-bearing ores, render separation of actinium from the ore impractical, and complete separation was never achieved. Instead, actinium is prepared, in milligram amounts, by the neutron irradiation of 226Ra in a nuclear reactor.
226Ra + n → 227Ra; 227Ra → 227Ac + β− (half-life 42.2 min)
The reaction yield is about 2% of the radium weight. 227Ac can further capture neutrons, resulting in small amounts of 228Ac. After the synthesis, actinium is separated from radium and from the products of decay and other nuclear reactions, such as thorium, polonium, lead and bismuth. The extraction can be performed with thenoyltrifluoroacetone-benzene solution from an aqueous solution of the radiation products, and the selectivity to a certain element is achieved by adjusting the pH (to about 6.0 for actinium). An alternative procedure is anion exchange with an appropriate resin in nitric acid, which can result in a separation factor of 1,000,000 for radium and actinium vs. thorium in a two-stage process. Actinium can then be separated from radium, with a ratio of about 100, using a low cross-linking cation exchange resin and nitric acid as eluant.
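Taking the quoted ~2% conversion yield at face value, the batch arithmetic is straightforward; a trivial sketch in which the batch sizes are illustrative:

```python
# The text quotes a conversion yield of about 2% of the radium mass for
# the 226Ra neutron-irradiation route.
YIELD_FRACTION = 0.02

def ac227_mg(radium_g: float) -> float:
    """Approximate 227Ac recovered from an irradiated 226Ra batch."""
    return radium_g * YIELD_FRACTION * 1000.0

for ra_g in (0.1, 1.0, 5.0):
    print(f"{ra_g:>4} g 226Ra irradiated -> ~{ac227_mg(ra_g):.0f} mg 227Ac")
```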
225Ac was first produced artificially at the Institute for Transuranium Elements (ITU) in Germany using a cyclotron and at St George Hospital in Sydney using a linac in 2000. This rare isotope has potential applications in radiation therapy and is most efficiently produced by bombarding a radium-226 target with 20–30 MeV deuterium ions. This reaction also yields 226Ac which however decays with a half-life of 29 hours and thus does not contaminate 225Ac.
Actinium metal has been prepared by the reduction of actinium fluoride with lithium vapor in vacuum at a temperature between 1100 and 1300 °C. Higher temperatures resulted in evaporation of the product and lower ones lead to an incomplete transformation. Lithium was chosen among other alkali metals because its fluoride is most volatile.
Applications
Owing to its scarcity, high price and radioactivity, 227Ac currently has no significant industrial use, but 225Ac is currently being studied for use in cancer treatments such as targeted alpha therapies.
227Ac is highly radioactive and was therefore studied for use as an active element of radioisotope thermoelectric generators, for example in spacecraft. The oxide of 227Ac pressed with beryllium is also an efficient neutron source with the activity exceeding that of the standard americium-beryllium and radium-beryllium pairs. In all those applications, 227Ac (a beta source) is merely a progenitor which generates alpha-emitting isotopes upon its decay. Beryllium captures alpha particles and emits neutrons owing to its large cross-section for the (α,n) nuclear reaction:
9Be + 4He → 12C + n + γ
The 227AcBe neutron sources can be applied in a neutron probe – a standard device for measuring the quantity of water present in soil, as well as moisture/density for quality control in highway construction. Such probes are also used in well logging applications, in neutron radiography, tomography and other radiochemical investigations.
225Ac is applied in medicine to produce 213Bi in a reusable generator, or can be used alone as an agent for radiation therapy, in particular targeted alpha therapy (TAT). This isotope has a half-life of 10 days, making it much more suitable for radiation therapy than 213Bi (half-life 46 minutes). Additionally, 225Ac decays to nontoxic 209Bi rather than stable but toxic lead, which is the final product in the decay chains of several other candidate isotopes, namely 227Th, 228Th, and 230U. Not only 225Ac itself, but also its daughters, emit alpha particles which kill cancer cells in the body. The major difficulty with application of 225Ac was that intravenous injection of simple actinium complexes resulted in their accumulation in the bones and liver for a period of tens of years. As a result, after the cancer cells were quickly killed by alpha particles from 225Ac, the radiation from the actinium and its daughters might induce new mutations. To solve this problem, 225Ac was bound to a chelating agent, such as citrate, ethylenediaminetetraacetic acid (EDTA) or diethylene triamine pentaacetic acid (DTPA). This reduced actinium accumulation in the bones, but excretion from the body remained slow. Much better results were obtained with chelating agents such as HEHA or DOTA coupled to trastuzumab, a monoclonal antibody that interferes with the HER2/neu receptor. The latter delivery combination was tested on mice and proved effective against leukemia, lymphoma, breast, ovarian, neuroblastoma and prostate cancers.
The medium half-life of 227Ac (21.77 years) makes it a very convenient radioactive isotope for modeling the slow vertical mixing of oceanic waters. The associated processes cannot be studied with the required accuracy by direct measurements of current velocities (of the order of 50 meters per year). However, evaluation of the concentration depth-profiles for different isotopes allows the mixing rates to be estimated. The physics behind this method is as follows: oceanic waters contain homogeneously dispersed 235U. Its decay product, 231Pa, gradually precipitates to the bottom, so that its concentration first increases with depth and then stays nearly constant. 231Pa decays to 227Ac; however, the concentration of the latter isotope does not follow the 231Pa depth profile, but instead increases toward the sea bottom. This occurs because of the mixing processes which raise some additional 227Ac from the sea bottom. Thus analysis of both 231Pa and 227Ac depth profiles allows researchers to model the mixing behavior.
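Under the usual one-dimensional steady-state assumptions, vertical eddy diffusion balancing radioactive decay, K·d2C/dz2 = λC, gives a bottom-sourced excess 227Ac profile proportional to exp(-z/L) with e-folding height L = sqrt(K/λ). A sketch with order-of-magnitude diffusivities; the K values are assumptions:

```python
import math

# e-folding height L = sqrt(K / lambda) for bottom-sourced excess 227Ac
# under a 1-D steady-state diffusion-decay balance.
lam = math.log(2) / (21.772 * 3.1557e7)      # 227Ac decay constant, 1/s

for K in (1e-5, 1e-4):                       # diapycnal diffusivity, m^2/s
    L = math.sqrt(K / lam)
    print(f"K = {K:.0e} m^2/s -> L ~ {L:.0f} m")
# Fitting an observed near-bottom 227Ac excess profile to exp(-z/L) thus
# yields an estimate of the vertical mixing coefficient K.
```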
There are theoretical predictions that AcHx hydrides (at very high pressure) are candidates for near-room-temperature superconductivity, as their predicted Tc is significantly higher than that of H3S, possibly near 250 K.
Precautions
227Ac is highly radioactive and experiments with it are carried out in a specially designed laboratory equipped with a tight glove box. When actinium trichloride is administered intravenously to rats, about 33% of actinium is deposited into the bones and 50% into the liver. Its toxicity is comparable to, but slightly lower than that of americium and plutonium. For trace quantities, fume hoods with good aeration suffice; for gram amounts, hot cells with shielding from the intense gamma radiation emitted by 227Ac are necessary.
See also
Actinium series
Notes
References
Bibliography
Meyer, Gerd and Morss, Lester R. (1991) Synthesis of lanthanide and actinide compounds, Springer.
External links
Actinium at The Periodic Table of Videos (University of Nottingham)
NLM Hazardous Substances Databank – Actinium, Radioactive
Actinium in
Chemical elements
Chemical elements with face-centered cubic structure
Actinides
|
https://en.wikipedia.org/wiki/Americium
|
Americium is a synthetic radioactive chemical element with the symbol Am and atomic number 95. It is a transuranic member of the actinide series, located in the periodic table under the lanthanide element europium, and was thus by analogy named after the Americas.
Americium was first produced in 1944 by the group of Glenn T. Seaborg from Berkeley, California, at the Metallurgical Laboratory of the University of Chicago, as part of the Manhattan Project. Although it is the third element in the transuranic series, it was discovered fourth, after the heavier curium. The discovery was kept secret and only released to the public in November 1945. Most americium is produced by uranium or plutonium being bombarded with neutrons in nuclear reactors – one tonne of spent nuclear fuel contains about 100 grams of americium. It is widely used in commercial ionization chamber smoke detectors, as well as in neutron sources and industrial gauges. Several unusual applications, such as nuclear batteries or fuel for space ships with nuclear propulsion, have been proposed for the isotope 242mAm, but they are as yet hindered by the scarcity and high price of this nuclear isomer.
Americium is a relatively soft radioactive metal with a silvery appearance. Its most common isotopes are 241Am and 243Am. In chemical compounds, americium usually assumes the oxidation state +3, especially in solutions. Several other oxidation states are known, ranging from +2 to +7, and can be identified by their characteristic optical absorption spectra. The crystal lattices of solid americium and its compounds contain small intrinsic radiogenic defects, due to metamictization induced by self-irradiation with alpha particles, which accumulate with time; this can cause a drift of some material properties, more noticeable in older samples.
History
Although americium was likely produced in previous nuclear experiments, it was first intentionally synthesized, isolated and identified in late autumn 1944, at the University of California, Berkeley, by Glenn T. Seaborg, Leon O. Morgan, Ralph A. James, and Albert Ghiorso. They used a 60-inch cyclotron at the University of California, Berkeley. The element was chemically identified at the Metallurgical Laboratory (now Argonne National Laboratory) of the University of Chicago. Following the lighter neptunium, plutonium, and heavier curium, americium was the fourth transuranium element to be discovered. At the time, the periodic table had been restructured by Seaborg to its present layout, containing the actinide row below the lanthanide one. This led to americium being located right below its twin lanthanide element europium; it was thus by analogy named after the Americas: "The name americium (after the Americas) and the symbol Am are suggested for the element on the basis of its position as the sixth member of the actinide rare-earth series, analogous to europium, Eu, of the lanthanide series."
The new element was isolated from its oxides in a complex, multi-step process. First, plutonium-239 nitrate (239Pu(NO3)4) solution was coated on a platinum foil of about 0.5 cm2 area, the solution was evaporated, and the residue was converted into plutonium dioxide (PuO2) by calcining. After cyclotron irradiation, the coating was dissolved with nitric acid and then precipitated as the hydroxide using concentrated aqueous ammonia solution. The residue was dissolved in perchloric acid, and further separation was carried out by ion exchange, yielding a certain isotope of curium. The separation of curium and americium was so painstaking that the Berkeley group initially called those elements pandemonium (from Greek for all demons, or hell) and delirium (from Latin for madness).
Initial experiments yielded four americium isotopes: 241Am, 242Am, 239Am and 238Am. Americium-241 was obtained directly from plutonium upon absorption of two neutrons. It decays by emission of an α-particle to 237Np; the half-life of this decay was initially determined inaccurately and later corrected to 432.2 years.
The second isotope 242Am was produced upon neutron bombardment of the already-created 241Am. Upon rapid β-decay, 242Am converts into the isotope of curium 242Cm (which had been discovered previously). The half-life of this decay was initially determined at 17 hours, which was close to the presently accepted value of 16.02 h.
The discovery of americium and curium in 1944 was closely related to the Manhattan Project; the results were confidential and were declassified only in 1945. Seaborg leaked the synthesis of elements 95 and 96 on the U.S. radio show for children Quiz Kids five days before the official presentation at an American Chemical Society meeting on 11 November 1945, when one of the listeners asked whether any new transuranium element besides plutonium and neptunium had been discovered during the war. After the discovery of the americium isotopes 241Am and 242Am, their production and compounds were patented, listing only Seaborg as the inventor. The initial americium samples weighed only a few micrograms; they were barely visible and were identified by their radioactivity. The first substantial amounts of metallic americium, weighing 40–200 micrograms, were not prepared until 1951, by reduction of americium(III) fluoride with barium metal in high vacuum at 1100 °C.
Occurrence
The longest-lived and most common isotopes of americium, 241Am and 243Am, have half-lives of 432.2 and 7,370 years, respectively. Therefore, any primordial americium (americium that was present on Earth during its formation) should have decayed by now. Trace amounts of americium probably occur naturally in uranium minerals as a result of nuclear reactions, though this has not been confirmed.
Existing americium is concentrated in the areas used for the atmospheric nuclear weapons tests conducted between 1945 and 1980, as well as at the sites of nuclear incidents, such as the Chernobyl disaster. For example, analysis of the debris at the testing site of the first U.S. hydrogen bomb, Ivy Mike (1 November 1952, Enewetak Atoll), revealed high concentrations of various actinides, including americium; due to military secrecy, this result was not published until 1956. Trinitite, the glassy residue left on the desert floor near Alamogordo, New Mexico, after the plutonium-based Trinity nuclear bomb test on 16 July 1945, contains traces of americium-241. Elevated levels of americium were also detected at the crash site of a US Boeing B-52 bomber aircraft, which carried four hydrogen bombs, in 1968 in Greenland.
In other regions, the average radioactivity of surface soil due to residual americium is only about 0.01 picocuries per gram (0.37 mBq/g). Atmospheric americium compounds are poorly soluble in common solvents and mostly adhere to soil particles. Soil analysis revealed about 1,900 times higher concentration of americium inside sandy soil particles than in the water present in the soil pores; an even higher ratio was measured in loam soils.
Americium is produced mostly artificially, in small quantities, for research purposes. A tonne of spent nuclear fuel contains about 100 grams of various americium isotopes, mostly 241Am and 243Am. Their prolonged radioactivity is undesirable for disposal, so americium, together with other long-lived actinides, must be neutralized. The associated procedure may involve several steps, in which americium is first separated and then converted by neutron bombardment in special reactors to short-lived nuclides. This procedure, known as nuclear transmutation, is still being developed for americium. The transuranic elements from americium to fermium occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so.
Americium is also one of the elements that have been tentatively detected in Przybylski's Star.
Synthesis and extraction
Isotope nucleosynthesis
Americium has been produced in small quantities in nuclear reactors for decades, and kilograms of its 241Am and 243Am isotopes have been accumulated by now. Nevertheless, since it was first offered for sale in 1962, the price of 241Am has remained almost unchanged, owing to the very complex separation procedure. The heavier isotope 243Am is produced in much smaller amounts and is more difficult to separate, resulting in a considerably higher cost.
Americium is not synthesized directly from uranium – the most common reactor material – but from the plutonium isotope 239Pu. The latter needs to be produced first, according to the following nuclear process:
^{238}_{92}U ->[\ce{(n,\gamma)}] ^{239}_{92}U ->[\beta^-][23.5 \ \ce{min}] ^{239}_{93}Np ->[\beta^-][2.3565 \ \ce{d}] ^{239}_{94}Pu
The capture of two neutrons by 239Pu (a so-called (n,γ) reaction), followed by a β-decay, results in 241Am:
^{239}_{94}Pu ->[\ce{2(n,\gamma)}] ^{241}_{94}Pu ->[\beta^-][14.35 \ \ce{yr}] ^{241}_{95}Am
The plutonium present in spent nuclear fuel contains about 12% of 241Pu. Because it beta-decays to 241Am, 241Pu can be extracted and may be used to generate further 241Am. However, this process is rather slow: half of the original amount of 241Pu decays to 241Am after about 15 years, and the 241Am amount reaches a maximum after 70 years.
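These figures follow from the standard two-member decay-chain (Bateman) solution, using only the two half-lives quoted above. The following Python sketch is a minimal illustration of that calculation, not a production radiochemistry tool; the function name is ours:

import math

# Half-lives in years: Pu-241 beta-decays to Am-241, which itself decays much more slowly.
T_PU, T_AM = 14.35, 432.2
l_pu = math.log(2) / T_PU  # decay constant of Pu-241, 1/yr
l_am = math.log(2) / T_AM  # decay constant of Am-241, 1/yr

def am_fraction(t):
    # Two-member Bateman solution: fraction of the initial Pu-241 atoms
    # present as Am-241 after t years.
    return l_pu / (l_am - l_pu) * (math.exp(-l_pu * t) - math.exp(-l_am * t))

# Analytic time of maximum ingrowth: t* = ln(l_pu / l_am) / (l_pu - l_am)
t_peak = math.log(l_pu / l_am) / (l_pu - l_am)
print(f"Pu-241 remaining after 15 yr: {math.exp(-l_pu * 15):.0%}")  # ~48%
print(f"Am-241 peaks near {t_peak:.0f} yr at {am_fraction(t_peak):.0%} of initial atoms")

The computed peak near 73 years is consistent with the roughly 70 years stated above.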
The obtained 241Am can be used for generating heavier americium isotopes by further neutron capture inside a nuclear reactor. In a light water reactor (LWR), 79% of 241Am converts to 242Am and 10% to its nuclear isomer 242mAm:
^{241}_{95}Am ->[\ce{(n,\gamma)}] ^{242}_{95}Am
Americium-242 has a half-life of only 16 hours, which makes its further conversion to 243Am extremely inefficient. The latter isotope is produced instead in a process where 239Pu captures four neutrons under high neutron flux:
^{239}_{94}Pu ->[\ce{4(n,\gamma)}] \ ^{243}_{94}Pu ->[\beta^-][4.956 \ \ce{h}] ^{243}_{95}Am
Metal generation
Most synthesis routines yield a mixture of different actinide isotopes in oxide forms, from which the isotopes of americium can be separated. In a typical procedure, the spent reactor fuel (e.g. MOX fuel) is dissolved in nitric acid, and the bulk of uranium and plutonium is removed using a PUREX-type extraction (Plutonium–URanium EXtraction) with tributyl phosphate in a hydrocarbon. The lanthanides and remaining actinides are then separated from the aqueous residue (raffinate) by a diamide-based extraction to give, after stripping, a mixture of trivalent actinides and lanthanides. Americium compounds are then selectively extracted using multi-step chromatographic and centrifugation techniques with an appropriate reagent. A large amount of work has been done on the solvent extraction of americium. For example, a 2003 EU-funded project codenamed "EUROPART" studied triazines and other compounds as potential extraction agents, and a bis-triazinyl bipyridine complex was proposed in 2009 as such a reagent because it is highly selective for americium (and curium). Separation of americium from the highly similar curium can be achieved by treating a slurry of their hydroxides in aqueous sodium bicarbonate with ozone at elevated temperatures. Both Am and Cm are mostly present in solutions in the +3 valence state; whereas curium remains unchanged, americium oxidizes to soluble Am(IV) complexes, which can be washed away.
Metallic americium is obtained by reduction from its compounds. Americium(III) fluoride was first used for this purpose. The reaction was conducted using elemental barium as reducing agent in a water- and oxygen-free environment inside an apparatus made of tantalum and tungsten.
An alternative is the reduction of americium dioxide by metallic lanthanum or thorium:
3AmO2 + 4La -> 2La2O3 + 3Am
Physical properties
In the periodic table, americium is located to the right of plutonium, to the left of curium, and below the lanthanide europium, with which it shares many physical and chemical properties. Americium is a highly radioactive element. When freshly prepared, it has a silvery-white metallic lustre, but it slowly tarnishes in air. With a density of 12 g/cm3, americium is less dense than both curium (13.52 g/cm3) and plutonium (19.8 g/cm3), but denser than europium (5.264 g/cm3), mostly because of its higher atomic mass. Americium is relatively soft and easily deformable, and it has a significantly lower bulk modulus than the actinides before it: Th, Pa, U, Np and Pu. Its melting point of 1173 °C is significantly higher than those of plutonium (639 °C) and europium (826 °C), but lower than that of curium (1340 °C).
At ambient conditions, americium is present in its most stable α form, which has hexagonal crystal symmetry, space group P63/mmc with cell parameters a = 346.8 pm and c = 1124 pm, and four atoms per unit cell. The crystal consists of a double-hexagonal close packing with the layer sequence ABAC and so is isotypic with α-lanthanum and several actinides such as α-curium. The crystal structure of americium changes with pressure and temperature. When compressed at room temperature to 5 GPa, α-Am transforms to the β modification, which has face-centered cubic (fcc) symmetry, space group Fm3̄m and lattice constant a = 489 pm. This fcc structure is equivalent to the closest packing with the sequence ABC. Upon further compression to 23 GPa, americium transforms to an orthorhombic γ-Am structure similar to that of α-uranium. No further transitions are observed up to 52 GPa, except for the appearance of a monoclinic phase at pressures between 10 and 15 GPa. The literature is inconsistent on the status of this phase, and also sometimes lists the α, β and γ phases as I, II and III. The β-γ transition is accompanied by a 6% decrease in crystal volume; although theory also predicts a significant volume change for the α-β transition, it is not observed experimentally. The pressure of the α-β transition decreases with increasing temperature, and when α-americium is heated at ambient pressure, it changes at 770 °C into an fcc phase which is different from β-Am, and at 1075 °C it converts to a body-centered cubic structure. The pressure–temperature phase diagram of americium is thus rather similar to those of lanthanum, praseodymium and neodymium.
As with many other actinides, self-damage of the crystal structure due to alpha-particle irradiation is intrinsic to americium. It is observed as a broadening of X-ray diffraction peaks and is especially noticeable at low temperatures, where the mobility of the produced structural defects is relatively low. This effect makes some properties of americium, such as its electrical resistivity, somewhat uncertain. For americium-241, for example, the resistivity at 4.2 K increases with time from about 2 µOhm·cm to 10 µOhm·cm after 40 hours, and saturates at about 16 µOhm·cm after 140 hours. The effect is less pronounced at room temperature, owing to annihilation of radiation defects; likewise, heating a sample that has been kept at low temperatures for hours back to room temperature restores its resistivity. In fresh samples, the resistivity gradually increases with temperature from about 2 µOhm·cm at liquid-helium temperature to 69 µOhm·cm at room temperature; this behavior is similar to that of neptunium, uranium, thorium and protactinium, but different from that of plutonium and curium, which show a rapid rise up to 60 K followed by saturation. The room-temperature value for americium is lower than that of neptunium, plutonium and curium, but higher than for uranium, thorium and protactinium.
Americium is paramagnetic over a wide temperature range, from that of liquid helium to room temperature and above. This behavior is markedly different from that of its neighbor curium, which exhibits an antiferromagnetic transition at 52 K. The thermal expansion coefficient of americium is slightly anisotropic, differing between the shorter a axis and the longer c hexagonal axis. The enthalpy of dissolution of americium metal in hydrochloric acid at standard conditions has been measured; from it, the standard enthalpy of formation (ΔfH°) of the aqueous Am3+ ion and the standard potential of the Am3+/Am0 couple have been derived.
Chemical properties
Americium metal readily reacts with oxygen and dissolves in aqueous acids. The most stable oxidation state for americium is +3, and the chemistry of americium(III) has many similarities to the chemistry of lanthanide(III) compounds. For example, trivalent americium forms insoluble fluoride, oxalate, iodate, hydroxide, phosphate and other salts. Compounds of americium in the oxidation states 2, 4, 5, 6 and 7 have also been studied; this is the widest range observed among the actinide elements. The colors of americium compounds in aqueous solution are as follows: Am3+ (yellow-reddish), Am4+ (yellow-reddish), AmO2+ (yellow), AmO22+ (brown) and the Am(VII) species (dark green). The absorption spectra have sharp peaks, due to f-f transitions, in the visible and near-infrared regions. Typically, Am(III) has absorption maxima at ca. 504 and 811 nm, Am(V) at ca. 514 and 715 nm, and Am(VI) at ca. 666 and 992 nm.
Americium compounds with oxidation state +4 and higher are strong oxidizing agents, comparable in strength to the permanganate ion (MnO4−) in acidic solutions. Whereas the Am4+ ions are unstable in solutions and readily convert to Am3+, compounds such as americium dioxide (AmO2) and americium(IV) fluoride (AmF4) are stable in the solid state.
The pentavalent oxidation state of americium was first observed in 1951. In acidic aqueous solution the AmO2+ ion is unstable with respect to disproportionation. The reaction
3AmO2^+ + 4H^+ -> 2AmO2^2+ + Am^3+ + 2H2O
is typical. The chemistry of Am(V) and Am(VI) is comparable to the chemistry of uranium in those oxidation states. In particular, americium forms compounds comparable to uranates, and the AmO22+ ion is comparable to the uranyl ion, UO22+. Such compounds can be prepared by oxidation of Am(III) in dilute nitric acid with ammonium persulfate. Other oxidising agents that have been used include silver(I) oxide, ozone and sodium persulfate.
Chemical compounds
Oxygen compounds
Three americium oxides are known, with the oxidation states +2 (AmO), +3 (Am2O3) and +4 (AmO2). Americium(II) oxide has been prepared only in minute amounts and has not been characterized in detail. Americium(III) oxide is a red-brown solid with a melting point of 2205 °C. Americium(IV) oxide is the main form of solid americium and is used in nearly all its applications. Like most other actinide dioxides, it is a black solid with a cubic (fluorite) crystal structure.
The oxalate of americium(III), vacuum-dried at room temperature, has the chemical formula Am2(C2O4)3·7H2O. Upon heating in vacuum, it loses water at 240 °C and starts decomposing into AmO2 at 300 °C; the decomposition is complete at about 470 °C. The initial oxalate dissolves in nitric acid with a maximum solubility of 0.25 g/L.
Halides
Halides of americium are known for the oxidation states +2, +3 and +4, where the +3 is most stable, especially in solutions.
Reduction of Am(III) compounds with sodium amalgam yields Am(II) salts – the black halides AmCl2, AmBr2 and AmI2. They are very sensitive to oxygen, and oxidize in water, releasing hydrogen and converting back to the Am(III) state. AmCl2 crystallizes in an orthorhombic lattice and AmBr2 in a tetragonal one. The dihalides can also be prepared by reacting metallic americium with an appropriate mercury halide HgX2, where X = Cl, Br or I:
Am + HgX2 ->[400-500 ^\circ \ce C] AmX2 + Hg
Americium(III) fluoride (AmF3) is poorly soluble and precipitates upon reaction of Am3+ and fluoride ions in weak acidic solutions:
Am^3+ + 3F^- -> AmF3(v)
The tetravalent americium(IV) fluoride (AmF4) is obtained by reacting solid americium(III) fluoride with molecular fluorine:
2AmF3 + F2 -> 2AmF4
Another known form of solid tetravalent americium fluoride is KAmF5. Tetravalent americium has also been observed in the aqueous phase. For this purpose, black Am(OH)4 was dissolved in 15-M NH4F at an americium concentration of 0.01 M. The resulting reddish solution had a characteristic optical absorption spectrum similar to that of AmF4 but different from those of other oxidation states of americium. Heating the Am(IV) solution to 90 °C did not result in its disproportionation or reduction; however, a slow reduction to Am(III) was observed and attributed to self-irradiation of americium by alpha particles.
Most americium(III) halides form hexagonal crystals, with slight variation of the color and exact structure between the halogens. Thus, the chloride (AmCl3) is reddish, has a structure isotypic to uranium(III) chloride (space group P63/m), and melts at 715 °C. The fluoride is isotypic to LaF3 (space group P63/mmc) and the iodide to BiI3 (space group R3̄). The bromide is an exception, with the orthorhombic PuBr3-type structure and space group Cmcm. Crystals of the hexahydrate (AmCl3·6H2O) can be prepared by dissolving americium dioxide in hydrochloric acid and evaporating the liquid. These crystals are hygroscopic, with a yellow-reddish color and a monoclinic crystal structure.
Oxyhalides of americium in the form AmVIO2X2, AmVO2X, AmIVOX2 and AmIIIOX can be obtained by reacting the corresponding americium halide with oxygen or Sb2O3, and AmOCl can also be produced by vapor phase hydrolysis:
AmCl3 + H2O -> AmOCl + 2HCl
Chalcogenides and pnictides
The known chalcogenides of americium include the sulfide AmS2, selenides AmSe2 and Am3Se4, and tellurides Am2Te3 and AmTe2. The pnictides of americium (243Am) of the AmX type are known for the elements phosphorus, arsenic, antimony and bismuth. They crystallize in the rock-salt lattice.
Silicides and borides
Americium monosilicide (AmSi) and "disilicide" (nominally AmSix with 1.87 < x < 2.0) were obtained by reduction of americium(III) fluoride with elementary silicon in vacuum at 1050 °C (AmSi) and 1150−1200 °C (AmSix). AmSi is a black solid, isomorphic with LaSi, with orthorhombic crystal symmetry. AmSix has a bright silvery lustre and a tetragonal crystal lattice (space group I41/amd); it is isomorphic with PuSi2 and ThSi2. Borides of americium include AmB4 and AmB6. The tetraboride can be obtained by heating an oxide or halide of americium with magnesium diboride in vacuum or an inert atmosphere.
Organoamericium compounds
Analogous to uranocene, americium forms the organometallic compound amerocene with two cyclooctatetraene ligands, with the chemical formula (η8-C8H8)2Am. A cyclopentadienyl complex is also known that is likely to be stoichiometrically AmCp3.
Formation of complexes of the type Am(n-C3H7-BTP)3, where BTP stands for 2,6-di(1,2,4-triazin-3-yl)pyridine, in solutions containing n-C3H7-BTP and Am3+ ions has been confirmed by EXAFS. Some of these BTP-type complexes selectively interact with americium and are therefore useful for its selective separation from lanthanides and other actinides.
Biological aspects
Americium is an artificial element of recent origin and thus has no biological role. It is harmful to life. It has been proposed to use bacteria for removal of americium and other heavy metals from rivers and streams. For example, Enterobacteriaceae of the genus Citrobacter precipitate americium ions from aqueous solutions, binding them into a metal–phosphate complex at their cell walls. Several studies have been reported on the biosorption and bioaccumulation of americium by bacteria and fungi.
Fission
The isotope 242mAm (half-life 141 years) has the largest cross section for absorption of thermal neutrons (5,700 barns), which results in a small critical mass for a sustained nuclear chain reaction. The critical mass for a bare 242mAm sphere is about 9–14 kg (the uncertainty results from insufficient knowledge of its material properties). It can be lowered to 3–5 kg with a metal reflector and should become even smaller with a water reflector. Such a small critical mass is favorable for portable nuclear weapons, but none based on 242mAm are known yet, probably because of its scarcity and high price. The critical masses of the two readily available isotopes, 241Am and 243Am, are relatively high – 57.6 to 75.6 kg for 241Am and 209 kg for 243Am. Scarcity and high price so far also hinder the application of americium as a nuclear fuel in nuclear reactors.
There are proposals for very compact 10-kW high-flux reactors using as little as 20 grams of 242mAm. Such low-power reactors would be relatively safe to use as neutron sources for radiation therapy in hospitals.
Isotopes
About 19 isotopes and 8 nuclear isomers are known for americium. There are two long-lived alpha-emitters: 243Am, the most stable isotope, has a half-life of 7,370 years, and 241Am has a half-life of 432.2 years. The most stable nuclear isomer is 242m1Am, with a long half-life of 141 years. The half-lives of other isotopes and isomers range from 0.64 microseconds for 245m1Am to 50.8 hours for 240Am. As with most other actinides, the isotopes of americium with an odd number of neutrons have a relatively high rate of nuclear fission and a low critical mass.
Americium-241 decays to 237Np by emitting alpha particles of 5 different energies, mostly at 5.486 MeV (85.2%) and 5.443 MeV (12.8%). Because many of the resulting states are metastable, they also emit gamma rays with discrete energies between 26.3 and 158.5 keV.
Americium-242 is a short-lived isotope with a half-life of 16.02 h. It mostly (82.7%) converts by β-decay to 242Cm, but also by electron capture to 242Pu (17.3%). Both 242Cm and 242Pu transform via nearly the same decay chain through 238Pu down to 234U.
Nearly all (99.541%) of 242m1Am decays by internal conversion to 242Am and the remaining 0.459% by α-decay to 238Np. The latter subsequently decays to 238Pu and then to 234U.
Americium-243 transforms by α-emission into 239Np, which converts by β-decay to 239Pu, and the 239Pu changes into 235U by emitting an α-particle.
Applications
Ionization-type smoke detector
Americium is used in the most common type of household smoke detector, which uses 241Am in the form of americium dioxide as its source of ionizing radiation. This isotope is preferred over 226Ra because it emits 5 times more alpha particles and relatively little harmful gamma radiation.
The amount of americium in a typical new smoke detector is 1 microcurie (37 kBq) or 0.29 microgram. This amount declines slowly as the americium decays into neptunium-237, a different transuranic element with a much longer half-life (about 2.14 million years). With its half-life of 432.2 years, the americium in a smoke detector includes about 3% neptunium after 19 years, and about 5% after 32 years. The radiation passes through an ionization chamber, an air-filled space between two electrodes, and permits a small, constant current between the electrodes. Any smoke that enters the chamber absorbs the alpha particles, which reduces the ionization and affects this current, triggering the alarm. Compared to the alternative optical smoke detector, the ionization smoke detector is cheaper and can detect particles which are too small to produce significant light scattering; however, it is more prone to false alarms.
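Both quantitative claims above follow from the exponential decay law A = λN with the 432.2-year half-life; the short Python check below uses only standard constants and is illustrative rather than authoritative:

import math

T_HALF_YR = 432.2  # half-life of Am-241, years
lam = math.log(2) / (T_HALF_YR * 365.25 * 24 * 3600)  # decay constant, 1/s

# Mass of Am-241 with an activity of 1 microcurie (37 kBq): N = A / lambda
atoms = 37e3 / lam
print(f"1 uCi of Am-241 = {atoms * 241 / 6.022e23 * 1e6:.2f} micrograms")  # ~0.29

# Fraction converted to neptunium after t years: 1 - 2**(-t / T_half)
for t in (19, 32):
    print(f"after {t} years: {1 - 2 ** (-t / T_HALF_YR):.1%} neptunium")  # ~3%, ~5%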
Radionuclide
As 241Am has a roughly similar half-life to 238Pu (432.2 years vs. 87 years), it has been proposed as an active element of radioisotope thermoelectric generators, for example in spacecraft. Although americium produces less heat and electricity – the power yield is 114.7 mW/g for 241Am and 6.31 mW/g for 243Am (cf. 390 mW/g for 238Pu) – and its radiation poses more threat to humans owing to neutron emission, the European Space Agency is considering using americium for its space probes.
Another proposed space-related application of americium is as a fuel for spaceships with nuclear propulsion. It relies on the very high rate of nuclear fission of 242mAm, which can be maintained even in a micrometer-thick foil. The small thickness avoids the problem of self-absorption of the emitted radiation; this problem is pertinent to uranium or plutonium rods, in which only surface layers provide alpha particles. The fission products of 242mAm can either directly propel the spaceship or heat a thrusting gas. They can also transfer their energy to a fluid and generate electricity through a magnetohydrodynamic generator.
One more proposal utilizing the high nuclear fission rate of 242mAm is a nuclear battery. Its design relies not on the energy of the alpha particles emitted by americium but on their charge; that is, the americium acts as a self-sustaining "cathode". A single 3.2 kg charge of 242mAm in such a battery could provide about 140 kW of power over a period of 80 days. Even with all the potential benefits, the current applications of 242mAm are as yet hindered by the scarcity and high price of this particular nuclear isomer.
In 2019, researchers at the UK National Nuclear Laboratory and the University of Leicester demonstrated the use of heat generated by americium to illuminate a small light bulb. This technology could lead to systems to power missions with durations up to 400 years into interstellar space, where solar panels do not function.
Neutron source
The oxide of 241Am pressed with beryllium is an efficient neutron source. Here americium acts as the alpha source, and beryllium produces neutrons owing to its large cross-section for the (α,n) nuclear reaction:
^{241}_{95}Am -> ^{237}_{93}Np + ^{4}_{2}He + \gamma
^{9}_{4}Be + ^{4}_{2}He -> ^{12}_{6}C + ^{1}_{0}n + \gamma
The most widespread use of 241AmBe neutron sources is a neutron probe – a device used to measure the quantity of water present in soil, as well as moisture/density for quality control in highway construction. 241Am neutron sources are also used in well logging applications, as well as in neutron radiography, tomography and other radiochemical investigations.
Production of other elements
Americium is a starting material for the production of other transuranic elements and transactinides – for example, 82.7% of 242Am decays to 242Cm and 17.3% to 242Pu. In the nuclear reactor, 242Am is also up-converted by neutron capture to 243Am and 244Am, which transforms by β-decay to 244Cm:
^{243}_{95}Am ->[\ce{(n,\gamma)}] ^{244}_{95}Am ->[\beta^-][10.1 \ \ce{h}] ^{244}_{96}Cm
Irradiation of 241Am by 12C or 22Ne ions yields the isotopes 247Es (einsteinium) or 260Db (dubnium), respectively. Furthermore, the element berkelium (as the 243Bk isotope) was first intentionally produced and identified in 1949 by bombarding 241Am with alpha particles, by the same Berkeley group using the same 60-inch cyclotron. Similarly, nobelium was produced at the Joint Institute for Nuclear Research, Dubna, Russia, in 1965 in several reactions, one of which included irradiation of 243Am with 15N ions. In addition, one of the synthesis reactions for lawrencium, discovered by scientists at Berkeley and Dubna, included bombardment of 243Am with 18O.
Spectrometer
Americium-241 has been used as a portable source of both gamma rays and alpha particles for a number of medical and industrial uses. The 59.5409 keV gamma-ray emissions from 241Am in such sources can be used for indirect analysis of materials in radiography and X-ray fluorescence spectroscopy, as well as for quality control in fixed nuclear density gauges and nuclear densometers. For example, the element has been employed to gauge glass thickness to help create flat glass. Americium-241 is also suitable for calibration of gamma-ray spectrometers in the low-energy range, since its spectrum consists of nearly a single peak and a negligible Compton continuum (at least three orders of magnitude lower in intensity). Americium-241 gamma rays were also used for passive diagnosis of thyroid function, but this medical application is now obsolete.
Health concerns
As a highly radioactive element, americium and its compounds must be handled only in an appropriate laboratory under special arrangements. Although most americium isotopes predominantly emit alpha particles which can be blocked by thin layers of common materials, many of the daughter products emit gamma-rays and neutrons which have a long penetration depth.
If consumed, most of the americium is excreted within a few days, with only 0.05% absorbed in the blood, of which roughly 45% goes to the liver and 45% to the bones, and the remaining 10% is excreted. The uptake to the liver depends on the individual and increases with age. In the bones, americium is first deposited over cortical and trabecular surfaces and slowly redistributes over the bone with time. The biological half-life of 241Am is 50 years in the bones and 20 years in the liver, whereas in the gonads (testicles and ovaries) it remains permanently; in all these organs, americium promotes formation of cancer cells as a result of its radioactivity.
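When a nuclide is removed both by biological clearance and by its own radioactive decay, the two first-order rates add, giving an effective half-life 1/T_eff = 1/T_bio + 1/T_phys. A minimal Python sketch with the values from this paragraph (illustrative only; real dosimetry models are considerably more detailed):

def effective_half_life(t_bio, t_phys):
    # First-order removal rates add, so the effective half-life is the
    # harmonic combination of the biological and physical half-lives.
    return 1 / (1 / t_bio + 1 / t_phys)

T_PHYS = 432.2  # radioactive half-life of Am-241, years
print(f"bone:  ~{effective_half_life(50, T_PHYS):.0f} years")   # ~45
print(f"liver: ~{effective_half_life(20, T_PHYS):.0f} years")   # ~19

Because the radioactive half-life is so much longer, retention in the body is governed almost entirely by the biological clearance.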
Americium often enters landfills from discarded smoke detectors. The rules associated with the disposal of smoke detectors are relaxed in most jurisdictions. In 1994, 17-year-old David Hahn extracted the americium from about 100 smoke detectors in an attempt to build a breeder nuclear reactor. There have been a few cases of exposure to americium, the worst being that of chemical operations technician Harold McCluskey, who at the age of 64 was exposed to 500 times the occupational standard for americium-241 as a result of an explosion in his laboratory. McCluskey died at the age of 75 of an unrelated pre-existing disease.
See also
Actinides in the environment
Further reading
Nuclides and Isotopes – 14th Edition, GE Nuclear Energy, 1989.
External links
Americium at The Periodic Table of Videos (University of Nottingham)
ATSDR – Public Health Statement: Americium
World Nuclear Association – Smoke Detectors and Americium
Astatine
Astatine is a chemical element with the symbol At and atomic number 85. It is the rarest naturally occurring element in the Earth's crust, occurring only as the decay product of various heavier elements. All of astatine's isotopes are short-lived; the most stable is astatine-210, with a half-life of 8.1 hours. Consequently, a solid sample of the element has never been seen, because any macroscopic specimen would be immediately vaporized by the heat of its radioactivity.
The bulk properties of astatine are not known with certainty. Many of them have been estimated from its position on the periodic table as a heavier analog of iodine, and a member of the halogens (the group of elements including fluorine, chlorine, bromine, iodine and tennessine). However, astatine also falls roughly along the dividing line between metals and nonmetals, and some metallic behavior has also been observed and predicted for it. Astatine is likely to have a dark or lustrous appearance and may be a semiconductor or possibly a metal. Chemically, several anionic species of astatine are known and most of its compounds resemble those of iodine, but it also sometimes displays metallic characteristics and shows some similarities to silver.
The first synthesis of astatine was in 1940 by Dale R. Corson, Kenneth Ross MacKenzie, and Emilio G. Segrè at the University of California, Berkeley. They named it from the Ancient Greek ἄστατος (astatos) 'unstable'. Four isotopes of astatine were subsequently found to be naturally occurring, although much less than one gram is present at any given time in the Earth's crust. Neither the most stable isotope, astatine-210, nor the medically useful astatine-211 occur naturally; they are usually produced by bombarding bismuth-209 with alpha particles.
Characteristics
Astatine is an extremely radioactive element; all its isotopes have half-lives of 8.1 hours or less, decaying into other astatine isotopes, bismuth, polonium, or radon. Most of its isotopes are very unstable, with half-lives of seconds or less. Of the first 101 elements in the periodic table, only francium is less stable, and all the astatine isotopes more stable than the longest-lived francium isotopes are in any case synthetic and do not occur in nature.
The bulk properties of astatine are not known with any certainty. Research is limited by its short half-life, which prevents the creation of weighable quantities. A visible piece of astatine would immediately vaporize itself because of the heat generated by its intense radioactivity. It remains to be seen if, with sufficient cooling, a macroscopic quantity of astatine could be deposited as a thin film. Astatine is usually classified as either a nonmetal or a metalloid; metal formation has also been predicted.
Physical
Most of the physical properties of astatine have been estimated (by interpolation or extrapolation), using theoretically or empirically derived methods. For example, halogens get darker with increasing atomic weight – fluorine is nearly colorless, chlorine is yellow-green, bromine is red-brown, and iodine is dark gray/violet. Astatine is sometimes described as probably being a black solid (assuming it follows this trend), or as having a metallic appearance (if it is a metalloid or a metal).
Astatine sublimes less readily than does iodine, having a lower vapor pressure. Even so, half of a given quantity of astatine will vaporize in approximately an hour if put on a clean glass surface at room temperature. The absorption spectrum of astatine in the middle ultraviolet region has lines at 224.401 and 216.225 nm, suggestive of 6p to 7s transitions.
The structure of solid astatine is unknown. As an analog of iodine it may have an orthorhombic crystalline structure composed of diatomic astatine molecules, and be a semiconductor (with a band gap of 0.7 eV). Alternatively, if condensed astatine forms a metallic phase, as has been predicted, it may have a monatomic face-centered cubic structure; in this structure, it may well be a superconductor, like the similar high-pressure phase of iodine. Metallic astatine is expected to have a density of 8.91–8.95 g/cm3.
Evidence for (or against) the existence of diatomic astatine (At2) is sparse and inconclusive. Some sources state that it does not exist, or at least has never been observed, while other sources assert or imply its existence. Despite this controversy, many properties of diatomic astatine have been predicted; for example, its bond length and dissociation energy have been estimated, and its heat of vaporization (∆Hvap) predicted as 54.39 kJ/mol. Many values have been predicted for the melting and boiling points of astatine, but only for At2.
Chemical
The chemistry of astatine is "clouded by the extremely low concentrations at which astatine experiments have been conducted, and the possibility of reactions with impurities, walls and filters, or radioactivity by-products, and other unwanted nano-scale interactions". Many of its apparent chemical properties have been observed using tracer studies on extremely dilute astatine solutions, typically less than 10−10 mol·L−1. Some properties, such as anion formation, align with other halogens. Astatine has some metallic characteristics as well, such as plating onto a cathode, and coprecipitating with metal sulfides in hydrochloric acid. It forms complexes with EDTA, a metal chelating agent, and is capable of acting as a metal in antibody radiolabeling; in some respects, astatine in the +1 state is akin to silver in the same state. Most of the organic chemistry of astatine is, however, analogous to that of iodine. It has been suggested that astatine can form a stable monatomic cation in aqueous solution.
Astatine has an electronegativity of 2.2 on the revised Pauling scale – lower than that of iodine (2.66) and the same as hydrogen. In hydrogen astatide (HAt), the negative charge is predicted to be on the hydrogen atom, implying that this compound could be referred to as astatine hydride according to certain nomenclatures. That would be consistent with the electronegativity of astatine on the Allred–Rochow scale (1.9) being less than that of hydrogen (2.2). However, official IUPAC stoichiometric nomenclature is based on an idealized convention of determining the relative electronegativities of the elements by the mere virtue of their position within the periodic table. According to this convention, astatine is handled as though it is more electronegative than hydrogen, irrespective of its true electronegativity. The electron affinity of astatine, at 233 kJ mol−1, is 21% less than that of iodine. In comparison, the value of Cl (349) is 6.4% higher than F (328); Br (325) is 6.9% less than Cl; and I (295) is 9.2% less than Br. The marked reduction for At was predicted as being due to spin–orbit interactions. The first ionization energy of astatine is about 899 kJ mol−1, which continues the trend of decreasing first ionization energies down the halogen group (fluorine, 1681; chlorine, 1251; bromine, 1140; iodine, 1008).
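The percentage steps quoted for the electron affinities can be verified directly from the values themselves; a brief Python check (electron affinities in kJ/mol, as given above):

ea = {"F": 328, "Cl": 349, "Br": 325, "I": 295, "At": 233}  # kJ/mol
for later, earlier in [("Cl", "F"), ("Br", "Cl"), ("I", "Br"), ("At", "I")]:
    change = (ea[later] - ea[earlier]) / ea[earlier]
    print(f"{later} vs {earlier}: {change:+.1%}")  # +6.4%, -6.9%, -9.2%, -21.0%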
Compounds
Less reactive than iodine, astatine is the least reactive of the halogens; the chemical properties of tennessine, the next-heavier group 17 element, have not yet been investigated, however. Astatine compounds have been synthesized in nano-scale amounts and studied as intensively as possible before their radioactive disintegration. The reactions involved have been typically tested with dilute solutions of astatine mixed with larger amounts of iodine. Acting as a carrier, the iodine ensures there is sufficient material for laboratory techniques (such as filtration and precipitation) to work. Like iodine, astatine has been shown to adopt odd-numbered oxidation states ranging from −1 to +7.
Only a few compounds with metals have been reported, in the form of astatides of sodium, palladium, silver, thallium, and lead. Some characteristic properties of silver and sodium astatide, and of the other hypothetical alkali and alkaline earth astatides, have been estimated by extrapolation from other metal halides.
The formation of an astatine compound with hydrogen – usually referred to as hydrogen astatide – was noted by the pioneers of astatine chemistry. As mentioned, there are grounds for instead referring to this compound as astatine hydride. It is easily oxidized; acidification by dilute nitric acid gives the At0 or At+ forms, and the subsequent addition of silver(I) may only partially, at best, precipitate astatine as silver(I) astatide (AgAt). Iodine, in contrast, is not oxidized, and precipitates readily as silver(I) iodide.
Astatine is known to bind to boron, carbon, and nitrogen. Various boron cage compounds have been prepared with At–B bonds, these being more stable than At–C bonds. Astatine can replace a hydrogen atom in benzene to form astatobenzene C6H5At; this may be oxidized to C6H5AtCl2 by chlorine. By treating this compound with an alkaline solution of hypochlorite, C6H5AtO2 can be produced. The dipyridine-astatine(I) cation, [At(C5H5N)2]+, forms ionic compounds with perchlorate (a non-coordinating anion) and with nitrate, [At(C5H5N)2]NO3. This cation exists as a coordination complex in which two dative covalent bonds separately link the astatine(I) centre with each of the pyridine rings via their nitrogen atoms.
With oxygen, there is evidence of the species AtO− and AtO+ in aqueous solution, formed by the reaction of astatine with an oxidant such as elemental bromine or (in the last case) by sodium persulfate in a solution of perchloric acid. A species previously thought to be a simple oxyanion has since been determined to be a hydrolysis product of AtO+ (another such hydrolysis product being AtOOH). The well-characterized astatate anion, AtO3−, can be obtained by, for example, the oxidation of astatine with potassium hypochlorite in a solution of potassium hydroxide. Preparation of lanthanum triastatate, La(AtO3)3, following the oxidation of astatine by a hot Na2S2O8 solution, has been reported. Further oxidation of AtO3−, such as by xenon difluoride (in a hot alkaline solution) or periodate (in a neutral or alkaline solution), yields the perastatate ion AtO4−; this is only stable in neutral or alkaline solutions. Astatine is also thought to be capable of forming cations in salts with oxyanions such as iodate or dichromate; this is based on the observation that, in acidic solutions, monovalent or intermediate positive states of astatine coprecipitate with the insoluble salts of metal cations such as silver(I) iodate or thallium(I) dichromate.
Astatine may form bonds to the other chalcogens; known examples include S7At+ with sulfur, a coordination compound of selenourea with selenium, and an astatine–tellurium colloid with tellurium.
Astatine is known to react with its lighter homologs iodine, bromine, and chlorine in the vapor state; these reactions produce diatomic interhalogen compounds with the formulas AtI, AtBr, and AtCl. The first two compounds may also be produced in water – astatine reacts with an iodine/iodide solution to form AtI, whereas AtBr requires (aside from astatine) an iodine/iodine monobromide/bromide solution. An excess of iodide or bromide may lead to the AtI2− and AtBr2− ions, and in a chloride solution species such as AtCl2− or AtBrCl− may be produced via equilibrium reactions with the chlorides. Oxidation of the element with dichromate (in nitric acid solution) showed that adding chloride turned the astatine into a molecule likely to be either AtCl or AtOCl; similar anionic chloro complexes may also form. The polyhalides PdAtI2, CsAtI2, TlAtI2, and PbAtI are known or presumed to have been precipitated. In a plasma ion source mass spectrometer, the ions [AtI]+, [AtBr]+, and [AtCl]+ have been formed by introducing lighter halogen vapors into a helium-filled cell containing astatine, supporting the existence of stable neutral molecules in the plasma ion state. No astatine fluorides have been discovered yet. Their absence has been speculatively attributed to the extreme reactivity of such compounds, including the reaction of an initially formed fluoride with the walls of the glass container to form a non-volatile product. Thus, although the synthesis of an astatine fluoride is thought to be possible, it may require a liquid halogen fluoride solvent, as has already been used for the characterization of radon fluoride.
History
In 1869, when Dmitri Mendeleev published his periodic table, the space under iodine was empty; after Niels Bohr established the physical basis of the classification of chemical elements, it was suggested that the fifth halogen belonged there. Before its officially recognized discovery, it was called "eka-iodine" (from Sanskrit eka – "one") to imply it was one space under iodine (in the same manner as eka-silicon, eka-boron, and others). Scientists tried to find it in nature; given its extreme rarity, these attempts resulted in several false discoveries.
The first claimed discovery of eka-iodine was made by Fred Allison and his associates at the Alabama Polytechnic Institute (now Auburn University) in 1931. The discoverers named element 85 "alabamine", and assigned it the symbol Ab, designations that were used for a few years. In 1934, H. G. MacPherson of University of California, Berkeley disproved Allison's method and the validity of his discovery. There was another claim in 1937, by the chemist Rajendralal De. Working in Dacca in British India (now Dhaka in Bangladesh), he chose the name "dakin" for element 85, which he claimed to have isolated as the thorium series equivalent of radium F (polonium-210) in the radium series. The properties he reported for dakin do not correspond to those of astatine, and astatine's radioactivity would have prevented him from handling it in the quantities he claimed. Moreover, astatine is not found in the thorium series, and the true identity of dakin is not known.
In 1936, the team of Romanian physicist Horia Hulubei and French physicist Yvette Cauchois claimed to have discovered element 85 by observing its X-ray emission lines. In 1939, they published another paper which supported and extended previous data. In 1944, Hulubei published a summary of data he had obtained up to that time, claiming it was supported by the work of other researchers. He chose the name "dor", presumably from the Romanian for "longing" [for peace], as World War II had started five years earlier. As Hulubei was writing in French, a language which does not accommodate the "ine" suffix, dor would likely have been rendered in English as "dorine", had it been adopted. In 1947, Hulubei's claim was effectively rejected by the Austrian chemist Friedrich Paneth, who would later chair the IUPAC committee responsible for recognition of new elements. Even though Hulubei's samples did contain astatine-218, his means to detect it were too weak, by current standards, to enable correct identification; moreover, he could not perform chemical tests on the element. He had also been involved in an earlier false claim as to the discovery of element 87 (francium) and this is thought to have caused other researchers to downplay his work.
In 1940, the Swiss chemist Walter Minder announced the discovery of element 85 as the beta decay product of radium A (polonium-218), choosing the name "helvetium" (from , the Latin name of Switzerland). Berta Karlik and Traude Bernert were unsuccessful in reproducing his experiments, and subsequently attributed Minder's results to contamination of his radon stream (radon-222 is the parent isotope of polonium-218). In 1942, Minder, in collaboration with the English scientist Alice Leigh-Smith, announced the discovery of another isotope of element 85, presumed to be the product of thorium A (polonium-216) beta decay. They named this substance "anglo-helvetium", but Karlik and Bernert were again unable to reproduce these results.
Later in 1940, Dale R. Corson, Kenneth Ross MacKenzie, and Emilio Segrè isolated the element at the University of California, Berkeley. Instead of searching for the element in nature, the scientists created it by bombarding bismuth-209 with alpha particles in a cyclotron (particle accelerator) to produce, after emission of two neutrons, astatine-211. The discoverers, however, did not immediately suggest a name for the element. The reason for this was that at the time, an element created synthetically in "invisible quantities" that had not yet been discovered in nature was not seen as a completely valid one; in addition, chemists were reluctant to recognize radioactive isotopes as legitimately as stable ones. In 1943, astatine was found as a product of two naturally occurring decay chains by Berta Karlik and Traude Bernert, first in the so-called uranium series, and then in the actinium series. (Since then, astatine has also been found in a third decay chain, the neptunium series.) In 1946, Friedrich Paneth called for synthetic elements to finally be recognized, quoting, among other reasons, recent confirmation of their natural occurrence, and proposed that the discoverers of the newly discovered unnamed elements name those elements. In early 1947, Nature published the discoverers' suggestions; a letter from Corson, MacKenzie, and Segrè suggested the name "astatine", coming from the Ancient Greek ἄστατος (astatos) meaning "unstable", because of its propensity for radioactive decay, with the ending "-ine" found in the names of the four previously discovered halogens. The name was also chosen to continue the tradition of the four stable halogens, where the name referred to a property of the element.
Corson and his colleagues classified astatine as a metal on the basis of its analytical chemistry. Subsequent investigators reported iodine-like, cationic, or amphoteric behavior. In a 2003 retrospective, Corson wrote that "some of the properties [of astatine] are similar to iodine … it also exhibits metallic properties, more like its metallic neighbors Po and Bi."
Isotopes
There are 41 known isotopes of astatine, with mass numbers of 188 and 190–229. Theoretical modeling suggests that about 37 more isotopes could exist. No stable or long-lived astatine isotope has been observed, nor is one expected to exist.
Astatine's alpha decay energies follow the same trend as for other heavy elements. Lighter astatine isotopes have quite high energies of alpha decay, which become lower as the nuclei become heavier. Astatine-211 has a significantly higher energy than the previous isotope, because it has a nucleus with 126 neutrons, and 126 is a magic number corresponding to a filled neutron shell. Despite having a similar half-life to the previous isotope (8.1 hours for astatine-210 and 7.2 hours for astatine-211), the alpha decay probability is much higher for the latter: 41.81% against only 0.18%. The two following isotopes release even more energy, with astatine-213 releasing the most energy. For this reason, it is the shortest-lived astatine isotope. Even though heavier astatine isotopes release less energy, no long-lived astatine isotope exists, because of the increasing role of beta decay (electron emission). This decay mode is especially important for astatine; as early as 1950 it was postulated that all isotopes of the element undergo beta decay, though nuclear mass measurements indicate that 215At is in fact beta-stable, as it has the lowest mass of all isobars with A = 215. A beta decay mode has been found for all other astatine isotopes except for astatine-213, astatine-214, and astatine-216m. Astatine-210 and lighter isotopes exhibit beta plus decay (positron emission), astatine-216 and heavier isotopes exhibit beta minus decay, and astatine-212 decays via both modes, while astatine-211 undergoes electron capture.
The most stable isotope is astatine-210, which has a half-life of 8.1 hours. The primary decay mode is beta plus, to the relatively long-lived (in comparison to astatine isotopes) alpha emitter polonium-210. In total, only five isotopes have half-lives exceeding one hour (astatine-207 to -211). The least stable ground state isotope is astatine-213, with a half-life of 125 nanoseconds. It undergoes alpha decay to the extremely long-lived bismuth-209.
Astatine has 24 known nuclear isomers, which are nuclei with one or more nucleons (protons or neutrons) in an excited state. A nuclear isomer may also be called a "meta-state", meaning the system has more internal energy than the "ground state" (the state with the lowest possible internal energy), making the former likely to decay into the latter. There may be more than one isomer for each isotope. The most stable of these nuclear isomers is astatine-202m1, which has a half-life of about 3 minutes, longer than those of all the ground states bar those of isotopes 203–211 and 220. The least stable is astatine-214m1; its half-life of 265 nanoseconds is shorter than those of all ground states except that of astatine-213.
Natural occurrence
Astatine is the rarest naturally occurring element. The total amount of astatine in the Earth's crust (quoted mass 2.36 × 1025 grams) is estimated by some to be less than one gram at any given time. Other sources estimate the amount of ephemeral astatine, present on earth at any given moment, to be up to one ounce (about 28 grams).
Any astatine present at the formation of the Earth has long since disappeared; the four naturally occurring isotopes (astatine-215, -217, -218 and -219) are instead continuously produced as a result of the decay of radioactive thorium and uranium ores, and trace quantities of neptunium-237. The landmass of North and South America combined, to a depth of 16 kilometers (10 miles), contains only about one trillion astatine-215 atoms at any given time (around 3.5 × 10−10 grams). Astatine-217 is produced via the radioactive decay of neptunium-237. Primordial remnants of the latter isotope—due to its relatively short half-life of 2.14 million years—are no longer present on Earth. However, trace amounts occur naturally as a product of transmutation reactions in uranium ores. Astatine-218 was the first astatine isotope discovered in nature. Astatine-219, with a half-life of 56 seconds, is the longest lived of the naturally occurring isotopes.
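The mass quoted for that trillion-atom inventory is a direct mole-based conversion, easily checked in Python (standard constants only):

atoms = 1e12      # ~one trillion At-215 atoms
molar_mass = 215  # g/mol
print(f"{atoms * molar_mass / 6.022e23:.1e} g")  # ~3.6e-10 g, matching ~3.5e-10 g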
Isotopes of astatine are sometimes not listed as naturally occurring because of misconceptions that there are no such isotopes, or discrepancies in the literature. Astatine-216 has been counted as a naturally occurring isotope but reports of its observation (which were described as doubtful) have not been confirmed.
Synthesis
Formation
Astatine was first produced by bombarding bismuth-209 with energetic alpha particles, and this is still the major route used to create the relatively long-lived isotopes astatine-209 through astatine-211. Astatine is only produced in minuscule quantities, with modern techniques allowing production runs of up to 6.6 gigabecquerels (about 86 nanograms or 2.47 × 1014 atoms). Synthesis of greater quantities of astatine using this method is constrained by the limited availability of suitable cyclotrons and the prospect of melting the target. Solvent radiolysis due to the cumulative effect of astatine decay is a related problem. With cryogenic technology, microgram quantities of astatine could be generated via proton irradiation of thorium or uranium to yield radon-211, which in turn decays to astatine-211. Contamination with astatine-210 is expected to be a drawback of this method.
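The three production figures (activity, mass and atom count) are mutually consistent via the decay law A = λN with the 7.2-hour half-life of astatine-211, as this minimal Python sketch shows:

import math

lam = math.log(2) / (7.2 * 3600)        # decay constant of At-211, 1/s
activity = 6.6e9                        # stated production run, Bq

atoms = activity / lam                  # N = A / lambda
mass_ng = atoms * 211 / 6.022e23 * 1e9  # molar mass ~211 g/mol
print(f"{atoms:.2e} atoms, about {mass_ng:.0f} ng")  # ~2.47e14 atoms, ~86 ng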
The most important isotope is astatine-211, the only one in commercial use. To produce the bismuth target, the metal is sputtered onto a gold, copper, or aluminium surface at 50 to 100 milligrams per square centimeter. Bismuth oxide can be used instead; this is forcibly fused with a copper plate. The target is kept under a chemically neutral nitrogen atmosphere, and is cooled with water to prevent premature astatine vaporization. In a particle accelerator, such as a cyclotron, alpha particles are collided with the bismuth. Even though only one bismuth isotope is used (bismuth-209), the reaction may occur in three possible ways, producing astatine-209, astatine-210, or astatine-211. In order to eliminate undesired nuclides, the maximum energy of the particle accelerator is set to a value (optimally 29.17 MeV) above that for the reaction producing astatine-211 (to produce the desired isotope) and below the one producing astatine-210 (to avoid producing other astatine isotopes).
Separation methods
Since astatine is the main product of the synthesis, after its formation it must only be separated from the target and any significant contaminants. Several methods are available, "but they generally follow one of two approaches—dry distillation or [wet] acid treatment of the target followed by solvent extraction." The methods summarized below are modern adaptations of older procedures, as reviewed by Kugler and Keller. Pre-1985 techniques more often addressed the elimination of co-produced toxic polonium; this requirement is now mitigated by capping the energy of the cyclotron irradiation beam.
Dry
The astatine-containing cyclotron target is heated to a temperature of around 650 °C. The astatine volatilizes and is condensed in (typically) a cold trap. Higher temperatures of up to around 850 °C may increase the yield, at the risk of bismuth contamination from concurrent volatilization. Redistilling the condensate may be required to minimize the presence of bismuth (as bismuth can interfere with astatine labeling reactions). The astatine is recovered from the trap using one or more low concentration solvents such as sodium hydroxide, methanol or chloroform. Astatine yields of up to around 80% may be achieved. Dry separation is the method most commonly used to produce a chemically useful form of astatine.
Wet
The irradiated bismuth (or sometimes bismuth trioxide) target is first dissolved in, for example, concentrated nitric or perchloric acid. Following this first step, the acid can be distilled away to leave behind a white residue that contains both bismuth and the desired astatine product. This residue is then dissolved in a concentrated acid, such as hydrochloric acid. Astatine is extracted from this acid using an organic solvent such as dibutyl ether, diisopropyl ether (DIPE), or thiosemicarbazide. Using liquid-liquid extraction, the astatine product can be repeatedly washed with an acid, such as HCl, and extracted into the organic solvent layer. A separation yield of 93% using nitric acid has been reported, falling to 72% by the time purification procedures were completed (distillation of nitric acid, purging residual nitrogen oxides, and redissolving bismuth nitrate to enable liquid–liquid extraction). Wet methods involve "multiple radioactivity handling steps" and have not been considered well suited for isolating larger quantities of astatine. However, wet extraction methods are being examined for use in production of larger quantities of astatine-211, as it is thought that wet extraction methods can provide more consistency. They can enable the production of astatine in a specific oxidation state and may have greater applicability in experimental radiochemistry.
Uses and precautions
Several 211At-containing molecules and their experimental uses:
[211At]astatine-tellurium colloids – compartmental tumors
6-[211At]astato-2-methyl-1,4-naphthoquinol diphosphate – adenocarcinomas
211At-labeled methylene blue – melanomas
Meta-[211At]astatobenzyl guanidine – neuroendocrine tumors
5-[211At]astato-2'-deoxyuridine – various
211At-labeled biotin conjugates – various (pretargeting)
211At-labeled octreotide – somatostatin receptors
211At-labeled monoclonal antibodies and fragments – various
211At-labeled bisphosphonates – bone metastases
Newly formed astatine-211 is the subject of ongoing research in nuclear medicine. It must be used quickly as it decays with a half-life of 7.2 hours; this is long enough to permit multistep labeling strategies. Astatine-211 has potential for targeted alpha-particle therapy, since it decays either via emission of an alpha particle (to bismuth-207), or via electron capture (to an extremely short-lived nuclide, polonium-211, which undergoes further alpha decay), very quickly reaching its stable granddaughter lead-207. Polonium X-rays emitted as a result of the electron capture branch, in the range of 77–92 keV, enable the tracking of astatine in animals and patients. Although astatine-210 has a slightly longer half-life, it is wholly unsuitable because it usually undergoes beta plus decay to the extremely toxic polonium-210.
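The 7.2-hour half-life sets a hard clock on synthesis, labeling, and delivery. A minimal decay sketch (Python), using only the half-life quoted above:

```python
# Fraction of At-211 activity remaining after a given elapsed time.
HALF_LIFE_H = 7.2  # half-life of astatine-211, in hours (from the text)

def fraction_remaining(hours: float) -> float:
    return 0.5 ** (hours / HALF_LIFE_H)

for t in (1.0, 7.2, 24.0):
    print(f"after {t:>4} h: {fraction_remaining(t):.1%} remains")
# after 1 h ~91% remains; after 7.2 h exactly 50%; after 24 h ~10%
```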
The principal medicinal difference between astatine-211 and iodine-131 (a radioactive iodine isotope also used in medicine) is that iodine-131 emits high-energy beta particles, and astatine does not. Beta particles have much greater penetrating power through tissues than do the much heavier alpha particles. An average alpha particle released by astatine-211 can travel up to 70 µm through surrounding tissues; an average-energy beta particle emitted by iodine-131 can travel nearly 30 times as far, to about 2 mm. The short half-life and limited penetrating power of alpha radiation through tissues offers advantages in situations where the "tumor burden is low and/or malignant cell populations are located in close proximity to essential normal tissues." Significant morbidity in cell culture models of human cancers has been achieved with from one to ten astatine-211 atoms bound per cell.
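The quoted ranges give the factor of roughly 30 directly; a one-line check (Python) with the two figures from the text:

```python
alpha_range_um = 70     # mean range of an At-211 alpha particle in tissue
beta_range_um = 2000    # mean range of an I-131 beta particle (~2 mm)

print(beta_range_um / alpha_range_um)  # ~28.6, i.e. "nearly 30 times as far"
```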
Several obstacles have been encountered in the development of astatine-based radiopharmaceuticals for cancer treatment. World War II delayed research for close to a decade. Results of early experiments indicated that a cancer-selective carrier would need to be developed and it was not until the 1970s that monoclonal antibodies became available for this purpose. Unlike iodine, astatine shows a tendency to dehalogenate from molecular carriers such as these, particularly at sp3 carbon sites (less so from sp2 sites). Given the toxicity of astatine accumulated and retained in the body, this emphasized the need to ensure it remained attached to its host molecule. While astatine carriers that are slowly metabolized can be assessed for their efficacy, more rapidly metabolized carriers remain a significant obstacle to the evaluation of astatine in nuclear medicine. Mitigating the effects of astatine-induced radiolysis of labeling chemistry and carrier molecules is another area requiring further development. A practical application for astatine as a cancer treatment would potentially be suitable for a "staggering" number of patients; production of astatine in the quantities that would be required remains an issue.
Animal studies show that astatine, similarly to iodine—although to a lesser extent, perhaps because of its slightly more metallic nature—is preferentially (and dangerously) concentrated in the thyroid gland. Unlike iodine, astatine also shows a tendency to be taken up by the lungs and spleen, possibly because of in-body oxidation of At– to At+. If administered in the form of a radiocolloid it tends to concentrate in the liver. Experiments in rats and monkeys suggest that astatine-211 causes much greater damage to the thyroid gland than does iodine-131, with repetitive injection of the nuclide resulting in necrosis and cell dysplasia within the gland. Early research suggested that injection of astatine into female rodents caused morphological changes in breast tissue; this conclusion remained controversial for many years. General agreement was later reached that this was likely caused by the effect of breast tissue irradiation combined with hormonal changes due to irradiation of the ovaries. Trace amounts of astatine can be handled safely in fume hoods if they are well-aerated; biological uptake of the element must be avoided.
See also
Radiation protection
Notes
References
Bibliography
External links
Astatine at The Periodic Table of Videos (University of Nottingham)
Astatine: Halogen or Metal?
Chemical elements
Chemical elements with face-centered cubic structure
Halogens
Synthetic elements
|
https://en.wikipedia.org/wiki/Aluminium
|
Aluminium (aluminum in North American English) is a chemical element with the symbol Al and atomic number 13. Aluminium has a density lower than those of other common metals; about one-third that of steel. It has a great affinity towards oxygen, forming a protective layer of oxide on the surface when exposed to air. Aluminium visually resembles silver, both in its color and in its great ability to reflect light. It is soft, nonmagnetic and ductile. It has one stable isotope: 27Al, which is highly abundant, making aluminium the twelfth-most common element in the universe. The radioactivity of 26Al is used in radiometric dating.
Chemically, aluminium is a post-transition metal in the boron group; as is common for the group, aluminium forms compounds primarily in the +3 oxidation state. The aluminium cation Al3+ is small and highly charged; as such, it has more polarizing power, and bonds formed by aluminium have a more covalent character. The strong affinity of aluminium for oxygen leads to the common occurrence of its oxides in nature. Aluminium is found on Earth primarily in rocks in the crust, where it is the third-most abundant element, after oxygen and silicon, rather than in the mantle, and virtually never as the free metal. It is obtained industrially by mining bauxite, a sedimentary rock rich in aluminium minerals.
The discovery of aluminium was announced in 1825 by Danish physicist Hans Christian Ørsted. The first industrial production of aluminium was initiated by French chemist Henri Étienne Sainte-Claire Deville in 1856. Aluminium became much more available to the public with the Hall–Héroult process developed independently by French engineer Paul Héroult and American engineer Charles Martin Hall in 1886, and the mass production of aluminium led to its extensive use in industry and everyday life. In World Wars I and II, aluminium was a crucial strategic resource for aviation. In 1954, aluminium became the most produced non-ferrous metal, surpassing copper. In the 21st century, most aluminium was consumed in transportation, engineering, construction, and packaging in the United States, Western Europe, and Japan.
Despite its prevalence in the environment, no living organism is known to use aluminium salts for metabolism, but aluminium is well tolerated by plants and animals. Because of the abundance of these salts, the potential for a biological role for them is of interest, and studies continue.
Physical characteristics
Isotopes
Of aluminium isotopes, only 27Al is stable. This situation is common for elements with an odd atomic number. It is the only primordial aluminium isotope, i.e. the only one that has existed on Earth in its current form since the formation of the planet. Aluminium is therefore a mononuclidic element and its standard atomic weight is virtually the same as that of the isotope. This makes aluminium very useful in nuclear magnetic resonance (NMR), as its single stable isotope has a high NMR sensitivity. The standard atomic weight of aluminium is low in comparison with many other metals.
All other isotopes of aluminium are radioactive. The most stable of these is 26Al: while it was present along with stable 27Al in the interstellar medium from which the Solar System formed, having been produced by stellar nucleosynthesis as well, its half-life is only 717,000 years and therefore a detectable amount has not survived since the formation of the planet. However, minute traces of 26Al are produced from argon in the atmosphere by spallation caused by cosmic ray protons. The ratio of 26Al to 10Be has been used for radiodating of geological processes over 105 to 106 year time scales, in particular transport, deposition, sediment storage, burial times, and erosion. Most meteorite scientists believe that the energy released by the decay of 26Al was responsible for the melting and differentiation of some asteroids after their formation 4.55 billion years ago.
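The claim that no primordial 26Al survives follows directly from the numbers above: the age of the Solar System spans thousands of half-lives. A minimal sketch (Python) that works in logarithms to avoid numerical underflow:

```python
import math

HALF_LIFE_YR = 7.17e5   # half-life of 26Al, 717,000 years (from the text)
AGE_YR = 4.55e9         # age of the Solar System (from the text)

n_half_lives = AGE_YR / HALF_LIFE_YR
log10_fraction = -n_half_lives * math.log10(2)
print(f"{n_half_lives:.0f} half-lives -> surviving fraction ~10^{log10_fraction:.0f}")
# ~6346 half-lives: the surviving fraction is ~10^-1910, effectively zero
```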
The remaining isotopes of aluminium, with mass numbers ranging from 22 to 43, all have half-lives well under an hour. Three metastable states are known, all with half-lives under a minute.
Electron shell
An aluminium atom has 13 electrons, arranged in an electron configuration of [Ne] 3s2 3p1, with three electrons beyond a stable noble gas configuration. Accordingly, the combined first three ionization energies of aluminium are far lower than the fourth ionization energy alone. Such an electron configuration is shared with the other well-characterized members of its group, boron, gallium, indium, and thallium; it is also expected for nihonium. Aluminium can surrender its three outermost electrons in many chemical reactions (see below). The electronegativity of aluminium is 1.61 (Pauling scale).
A free aluminium atom has a radius of 143 pm. With the three outermost electrons removed, the radius shrinks to 39 pm for a 4-coordinated atom or 53.5 pm for a 6-coordinated atom. At standard temperature and pressure, aluminium atoms (when not affected by atoms of other elements) form a face-centered cubic crystal system bound by metallic bonding provided by atoms' outermost electrons; hence aluminium (at these conditions) is a metal. This crystal system is shared by many other metals, such as lead and copper; the size of a unit cell of aluminium is comparable to that of those other metals. The system, however, is not shared by the other members of its group; boron has ionization energies too high to allow metallization, thallium has a hexagonal close-packed structure, and gallium and indium have unusual structures that are not close-packed like those of aluminium and thallium. The few electrons that are available for metallic bonding in aluminium metal are a probable cause for it being soft with a low melting point and low electrical resistivity.
Bulk
Aluminium metal has an appearance ranging from silvery white to dull gray, depending on the surface roughness. Aluminium mirrors are the most reflective of all metal mirrors for the near ultraviolet and far infrared light, and one of the most reflective in the visible spectrum, nearly on par with silver, and the two therefore look similar. Aluminium is also good at reflecting solar radiation, although prolonged exposure to sunlight in air adds wear to the surface of the metal; this may be prevented if aluminium is anodized, which adds a protective layer of oxide on the surface.
The density of aluminium is 2.70 g/cm3, about 1/3 that of steel, much lower than other commonly encountered metals, making aluminium parts easily identifiable through their lightness. Aluminium's low density compared to most other metals arises from the fact that its nuclei are much lighter, while difference in the unit cell size does not compensate for this difference. The only lighter metals are the metals of groups 1 and 2, which apart from beryllium and magnesium are too reactive for structural use (and beryllium is very toxic). Aluminium is not as strong or stiff as steel, but the low density makes up for this in the aerospace industry and for many other applications where light weight and relatively high strength are crucial.
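The quoted density can be recovered from the face-centered cubic cell described above. A minimal sketch (Python); the lattice parameter of 404.95 pm is a standard literature value, not given in the text:

```python
AVOGADRO = 6.022e23      # atoms per mole
MOLAR_MASS_G = 26.98     # g/mol for aluminium
ATOMS_PER_CELL = 4       # face-centered cubic
A_CM = 404.95e-10        # assumed lattice parameter (404.95 pm) in cm

cell_mass_g = ATOMS_PER_CELL * MOLAR_MASS_G / AVOGADRO
density = cell_mass_g / A_CM ** 3
print(f"{density:.2f} g/cm^3")   # ~2.70, matching the quoted value
```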
Pure aluminium is quite soft and lacking in strength. In most applications various aluminium alloys are used instead because of their higher strength and hardness. The yield strength of pure aluminium is 7–11 MPa, while aluminium alloys have yield strengths ranging from 200 MPa to 600 MPa. Aluminium is ductile, with a percent elongation of 50–70%, and malleable, allowing it to be easily drawn and extruded. It is also easily machined and cast.
Aluminium is an excellent thermal and electrical conductor, having around 60% the conductivity of copper, both thermal and electrical, while having only 30% of copper's density. Aluminium is capable of superconductivity, with a superconducting critical temperature of 1.2 kelvin and a critical magnetic field of about 100 gauss (10 milliteslas). It is paramagnetic and thus essentially unaffected by static magnetic fields. The high electrical conductivity, however, means that it is strongly affected by alternating magnetic fields through the induction of eddy currents.
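Taken together, 60% of copper's conductivity at 30% of its density means aluminium carries roughly twice the current per unit mass; a quick check (Python) with the two ratios from the text:

```python
conductivity_vs_cu = 0.60   # aluminium's conductivity relative to copper
density_vs_cu = 0.30        # aluminium's density relative to copper

print(conductivity_vs_cu / density_vs_cu)  # 2.0: twice copper, per kilogram
```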
Chemistry
Aluminium combines characteristics of pre- and post-transition metals. Since it has few available electrons for metallic bonding, like its heavier group 13 congeners, it has the characteristic physical properties of a post-transition metal, with longer-than-expected interatomic distances. Furthermore, as Al3+ is a small and highly charged cation, it is strongly polarizing and bonding in aluminium compounds tends towards covalency; this behavior is similar to that of beryllium (Be2+), and the two display an example of a diagonal relationship.
The underlying core under aluminium's valence shell is that of the preceding noble gas, whereas those of its heavier congeners gallium, indium, thallium, and nihonium also include a filled d-subshell and in some cases a filled f-subshell. Hence, the inner electrons of aluminium shield the valence electrons almost completely, unlike those of aluminium's heavier congeners. As such, aluminium is the most electropositive metal in its group, and its hydroxide is in fact more basic than that of gallium. Aluminium also bears minor similarities to the metalloid boron in the same group: AlX3 compounds are valence isoelectronic to BX3 compounds (they have the same valence electronic structure), and both behave as Lewis acids and readily form adducts. Additionally, one of the main motifs of boron chemistry is regular icosahedral structures, and aluminium forms an important part of many icosahedral quasicrystal alloys, including the Al–Zn–Mg class.
Aluminium has a high chemical affinity to oxygen, which renders it suitable for use as a reducing agent in the thermite reaction. A fine powder of aluminium metal reacts explosively on contact with liquid oxygen; under normal conditions, however, aluminium forms a thin oxide layer (~5 nm at room temperature) that protects the metal from further corrosion by oxygen, water, or dilute acid, a process termed passivation. Because of its general resistance to corrosion, aluminium is one of the few metals that retains silvery reflectance in finely powdered form, making it an important component of silver-colored paints. Aluminium is not attacked by oxidizing acids because of its passivation. This allows aluminium to be used to store reagents such as nitric acid, concentrated sulfuric acid, and some organic acids.
In hot concentrated hydrochloric acid, aluminium reacts with water with evolution of hydrogen, and in aqueous sodium hydroxide or potassium hydroxide at room temperature to form aluminates—protective passivation under these conditions is negligible. Aqua regia also dissolves aluminium. Aluminium is corroded by dissolved chlorides, such as common sodium chloride, which is why household plumbing is never made from aluminium. The oxide layer on aluminium is also destroyed by contact with mercury due to amalgamation or with salts of some electropositive metals. As such, the strongest aluminium alloys are less corrosion-resistant due to galvanic reactions with alloyed copper, and aluminium's corrosion resistance is greatly reduced by aqueous salts, particularly in the presence of dissimilar metals.
Aluminium reacts with most nonmetals upon heating, forming compounds such as aluminium nitride (AlN), aluminium sulfide (Al2S3), and the aluminium halides (AlX3). It also forms a wide range of intermetallic compounds involving metals from every group on the periodic table.
Inorganic compounds
The vast majority of compounds, including all aluminium-containing minerals and all commercially significant aluminium compounds, feature aluminium in the oxidation state 3+. The coordination number of such compounds varies, but generally Al3+ is either six- or four-coordinate. Almost all compounds of aluminium(III) are colorless.
In aqueous solution, Al3+ exists as the hexaaqua cation [Al(H2O)6]3+, which has an approximate Ka of 10−5. Such solutions are acidic as this cation can act as a proton donor and progressively hydrolyze until a precipitate of aluminium hydroxide, Al(OH)3, forms. This is useful for clarification of water, as the precipitate nucleates on suspended particles in the water, hence removing them. Increasing the pH even further leads to the hydroxide dissolving again as aluminate, [Al(H2O)2(OH)4]−, is formed.
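With the quoted Ka of about 10−5, the acidity of an aluminium salt solution can be estimated with the usual weak-acid approximation; a minimal sketch (Python), where the 0.1 M concentration is an assumed example value:

```python
import math

KA = 1e-5    # acid constant of [Al(H2O)6]3+ (from the text)
C = 0.1      # assumed concentration of the aluminium salt, mol/L

h_plus = math.sqrt(KA * C)          # weak-acid approximation: [H+] = sqrt(Ka*C)
print(f"pH ~ {-math.log10(h_plus):.1f}")   # ~3.0: distinctly acidic
```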
Aluminium hydroxide forms both salts and aluminates and dissolves in acid and alkali, as well as on fusion with acidic and basic oxides. This behavior of Al(OH)3 is termed amphoterism and is characteristic of weakly basic cations that form insoluble hydroxides and whose hydrated species can also donate their protons. One effect of this is that aluminium salts with weak acids are hydrolyzed in water to the aquated hydroxide and the corresponding nonmetal hydride: for example, aluminium sulfide yields hydrogen sulfide. However, some salts like aluminium carbonate exist in aqueous solution but are unstable as such; and only incomplete hydrolysis takes place for salts with strong acids, such as the halides, nitrate, and sulfate. For similar reasons, anhydrous aluminium salts cannot be made by heating their "hydrates": hydrated aluminium chloride is in fact not AlCl3·6H2O but [Al(H2O)6]Cl3, and the Al–O bonds are so strong that heating is not sufficient to break them and form Al–Cl bonds instead:
2 [Al(H2O)6]Cl3 → Al2O3 + 6 HCl + 9 H2O
All four trihalides are well known. Unlike the structures of the three heavier trihalides, aluminium fluoride (AlF3) features six-coordinate aluminium, which explains its involatility and insolubility as well as high heat of formation. Each aluminium atom is surrounded by six fluorine atoms in a distorted octahedral arrangement, with each fluorine atom being shared between the corners of two octahedra. Such {AlF6} units also exist in complex fluorides such as cryolite, Na3AlF6. AlF3 has a high melting point and is made by reaction of aluminium oxide with hydrogen fluoride gas at elevated temperature.
With heavier halides, the coordination numbers are lower. The other trihalides are dimeric or polymeric with tetrahedral four-coordinate aluminium centers. Aluminium trichloride (AlCl3) has a layered polymeric structure below its melting point but transforms on melting to Al2Cl6 dimers. At higher temperatures those increasingly dissociate into trigonal planar AlCl3 monomers similar to the structure of BCl3. Aluminium tribromide and aluminium triiodide form Al2X6 dimers in all three phases and hence do not show such significant changes of properties upon phase change. These materials are prepared by treating aluminium metal with the halogen. The aluminium trihalides form many addition compounds or complexes; their Lewis acidic nature makes them useful as catalysts for the Friedel–Crafts reactions. Aluminium trichloride has major industrial uses involving this reaction, such as in the manufacture of anthraquinones and styrene; it is also often used as the precursor for many other aluminium compounds and as a reagent for converting nonmetal fluorides into the corresponding chlorides (a transhalogenation reaction).
Aluminium forms one stable oxide with the chemical formula Al2O3, commonly called alumina. It can be found in nature in the mineral corundum, α-alumina; there is also a γ-alumina phase. Its crystalline form, corundum, is very hard (Mohs hardness 9), has a high melting point, has very low volatility, and is chemically inert; as it is also a good electrical insulator, it is often used in abrasives (such as toothpaste), as a refractory material, and in ceramics, as well as being the starting material for the electrolytic production of aluminium metal. Sapphire and ruby are impure corundum contaminated with trace amounts of other metals. The two main oxide-hydroxides, AlO(OH), are boehmite and diaspore. There are three main trihydroxides: bayerite, gibbsite, and nordstrandite, which differ in their crystalline structure (polymorphs). Many other intermediate and related structures are also known. Most are produced from ores by a variety of wet processes using acid and base. Heating the hydroxides leads to formation of corundum. These materials are of central importance to the production of aluminium and are themselves extremely useful. Some mixed oxide phases are also very useful, such as spinel (MgAl2O4), Na-β-alumina (NaAl11O17), and tricalcium aluminate (Ca3Al2O6, an important mineral phase in Portland cement).
The only stable chalcogenides under normal conditions are aluminium sulfide (Al2S3), selenide (Al2Se3), and telluride (Al2Te3). All three are prepared by direct reaction of their elements at high temperature and quickly hydrolyze completely in water to yield aluminium hydroxide and the respective hydrogen chalcogenide. As aluminium is a small atom relative to these chalcogens, these have four-coordinate tetrahedral aluminium with various polymorphs having structures related to wurtzite, with two-thirds of the possible metal sites occupied either in an orderly (α) or random (β) fashion; the sulfide also has a γ form related to γ-alumina, and an unusual high-temperature hexagonal form where half the aluminium atoms have tetrahedral four-coordination and the other half have trigonal bipyramidal five-coordination.
Four pnictides – aluminium nitride (AlN), aluminium phosphide (AlP), aluminium arsenide (AlAs), and aluminium antimonide (AlSb) – are known. They are all III-V semiconductors isoelectronic to silicon and germanium, all of which but AlN have the zinc blende structure. All four can be made by high-temperature (and possibly high-pressure) direct reaction of their component elements.
Aluminium alloys well with most other metals (with the exception of most alkali metals and group 13 metals) and over 150 intermetallics with other metals are known. Preparation involves heating fixed metals together in certain proportion, followed by gradual cooling and annealing. Bonding in them is predominantly metallic and the crystal structure primarily depends on efficiency of packing.
There are few compounds with lower oxidation states. A few aluminium(I) compounds exist: AlF, AlCl, AlBr, and AlI exist in the gaseous phase when the respective trihalide is heated with aluminium, and at cryogenic temperatures. A stable derivative of aluminium monoiodide is the cyclic adduct formed with triethylamine, Al4I4(NEt3)4. Al2O and Al2S also exist but are very unstable. Very simple aluminium(II) compounds are invoked or observed in the reactions of Al metal with oxidants. For example, aluminium monoxide, AlO, has been detected in the gas phase after explosion and in stellar absorption spectra. More thoroughly investigated are compounds of the formula R4Al2 which contain an Al–Al bond and where R is a large organic ligand.
Organoaluminium compounds and related hydrides
A variety of compounds of empirical formula AlR3 and AlR1.5Cl1.5 exist. The aluminium trialkyls and triaryls are reactive, volatile, and colorless liquids or low-melting solids. They catch fire spontaneously in air and react with water, thus necessitating precautions when handling them. They often form dimers, unlike their boron analogues, but this tendency diminishes for branched-chain alkyls (e.g. isopropyl, isobutyl, neopentyl); for example, triisobutylaluminium exists as an equilibrium mixture of the monomer and dimer. These dimers, such as trimethylaluminium (Al2Me6), usually feature tetrahedral Al centers formed by dimerization with some alkyl group bridging between both aluminium atoms. They are hard acids and react readily with ligands, forming adducts. In industry, they are mostly used in alkene insertion reactions, as discovered by Karl Ziegler, most importantly in "growth reactions" that form long-chain unbranched primary alkenes and alcohols, and in the low-pressure polymerization of ethene and propene. There are also some heterocyclic and cluster organoaluminium compounds involving Al–N bonds.
The industrially most important aluminium hydride is lithium aluminium hydride (LiAlH4), which is used as a reducing agent in organic chemistry. It can be produced from lithium hydride and aluminium trichloride. The simplest hydride, aluminium hydride or alane, is not as important. It is a polymer with the formula (AlH3)n, in contrast to the corresponding boron hydride that is a dimer with the formula (BH3)2.
Natural occurrence
Space
Aluminium's per-particle abundance in the Solar System is 3.15 ppm (parts per million). It is the twelfth most abundant of all elements and third most abundant among the elements that have odd atomic numbers, after hydrogen and nitrogen. The only stable isotope of aluminium, 27Al, is the eighteenth most abundant nucleus in the Universe. It is created almost entirely after fusion of carbon in massive stars that will later become Type II supernovas: this fusion creates 26Mg, which, upon capturing free protons and neutrons, becomes aluminium. Some smaller quantities of 27Al are created in hydrogen burning shells of evolved stars, where 26Mg can capture free protons. Essentially all aluminium now in existence is 27Al. 26Al was present in the early Solar System with an abundance of 0.005% relative to 27Al, but its half-life of 717,000 years is too short for any original nuclei to survive; 26Al is therefore extinct. Unlike for 27Al, hydrogen burning is the primary source of 26Al, with the nuclide emerging after a nucleus of 25Mg catches a free proton. However, the trace quantities of 26Al that do exist are the most common gamma ray emitter in the interstellar gas; if the original 26Al were still present, gamma ray maps of the Milky Way would be brighter.
Earth
Overall, the Earth is about 1.59% aluminium by mass (seventh in abundance by mass). Aluminium occurs in greater proportion in the Earth's crust than in the Universe at large, because aluminium easily forms the oxide and becomes bound into rocks and stays in the Earth's crust, while less reactive metals sink to the core. In the Earth's crust, aluminium is the most abundant metallic element (8.23% by mass) and the third most abundant of all elements (after oxygen and silicon). A large number of silicates in the Earth's crust contain aluminium. In contrast, the Earth's mantle is only 2.38% aluminium by mass. Aluminium also occurs in seawater at a concentration of 2 μg/kg.
Because of its strong affinity for oxygen, aluminium is almost never found in the elemental state; instead it is found in oxides or silicates. Feldspars, the most common group of minerals in the Earth's crust, are aluminosilicates. Aluminium also occurs in the minerals beryl, cryolite, garnet, spinel, and turquoise. Impurities in Al2O3, such as chromium and iron, yield the gemstones ruby and sapphire, respectively. Native aluminium metal is extremely rare and can only be found as a minor phase in low oxygen fugacity environments, such as the interiors of certain volcanoes. Native aluminium has been reported in cold seeps in the northeastern continental slope of the South China Sea. It is possible that these deposits resulted from bacterial reduction of tetrahydroxoaluminate Al(OH)4−.
Although aluminium is a common and widespread element, not all aluminium minerals are economically viable sources of the metal. Almost all metallic aluminium is produced from the ore bauxite (AlOx(OH)3–2x). Bauxite occurs as a weathering product of low iron and silica bedrock in tropical climatic conditions. In 2017, most bauxite was mined in Australia, China, Guinea, and India.
History
The history of aluminium has been shaped by usage of alum. The first written record of alum, made by Greek historian Herodotus, dates back to the 5th century BCE. The ancients are known to have used alum as a dyeing mordant and for city defense. After the Crusades, alum, an indispensable good in the European fabric industry, was a subject of international commerce; it was imported to Europe from the eastern Mediterranean until the mid-15th century.
The nature of alum remained unknown. Around 1530, Swiss physician Paracelsus suggested alum was a salt of an earth of alum. In 1595, German doctor and chemist Andreas Libavius experimentally confirmed this. In 1722, German chemist Friedrich Hoffmann announced his belief that the base of alum was a distinct earth. In 1754, German chemist Andreas Sigismund Marggraf synthesized alumina by boiling clay in sulfuric acid and subsequently adding potash.
Attempts to produce aluminium metal date back to 1760. The first successful attempt, however, was completed in 1824 by Danish physicist and chemist Hans Christian Ørsted. He reacted anhydrous aluminium chloride with potassium amalgam, yielding a lump of metal looking similar to tin. He presented his results and demonstrated a sample of the new metal in 1825. In 1827, German chemist Friedrich Wöhler repeated Ørsted's experiments but did not identify any aluminium. (The reason for this inconsistency was only discovered in 1921.) He conducted a similar experiment in the same year by mixing anhydrous aluminium chloride with potassium and produced a powder of aluminium. In 1845, he was able to produce small pieces of the metal and described some physical properties of this metal. For many years thereafter, Wöhler was credited as the discoverer of aluminium.
As Wöhler's method could not yield great quantities of aluminium, the metal remained rare; its cost exceeded that of gold. The first industrial production of aluminium was established in 1856 by French chemist Henri Etienne Sainte-Claire Deville and companions. Deville had discovered that aluminium trichloride could be reduced by sodium, which was more convenient and less expensive than potassium, which Wöhler had used. Even then, aluminium was still not of great purity, and produced aluminium differed in properties by sample. Because of its electricity-conducting capacity, aluminium was used as the cap of the Washington Monument, completed in 1885. The monument was the tallest structure in the world at the time, and its non-corroding metal cap was intended to serve as a lightning rod peak.
The first industrial large-scale production method was independently developed in 1886 by French engineer Paul Héroult and American engineer Charles Martin Hall; it is now known as the Hall–Héroult process. The Hall–Héroult process converts alumina into metal. Austrian chemist Carl Joseph Bayer discovered a way of purifying bauxite to yield alumina, now known as the Bayer process, in 1889. Modern production of the aluminium metal is based on the Bayer and Hall–Héroult processes.
Prices of aluminium dropped and aluminium became widely used in jewelry, everyday items, eyeglass frames, optical instruments, tableware, and foil in the 1890s and early 20th century. Aluminium's ability to form hard yet light alloys with other metals provided the metal with many uses at the time. During World War I, major governments demanded large shipments of aluminium for light strong airframes; during World War II, demand by major governments for aviation was even higher.
By the mid-20th century, aluminium had become a part of everyday life and an essential component of housewares. In 1954, production of aluminium surpassed that of copper, historically second in production only to iron, making it the most produced non-ferrous metal. During the mid-20th century, aluminium emerged as a civil engineering material, with building applications in both basic construction and interior finish work, and increasingly being used in military engineering, for both airplanes and land armor vehicle engines. Earth's first artificial satellite, launched in 1957, consisted of two separate aluminium semi-spheres joined together, and all subsequent space vehicles have used aluminium to some extent. The aluminium can was invented in 1956 and first used as a container for drinks in 1958.
Throughout the 20th century, the production of aluminium rose rapidly: while the world production of aluminium in 1900 was 6,800 metric tons, the annual production first exceeded 100,000 metric tons in 1916; 1,000,000 tons in 1941; 10,000,000 tons in 1971. In the 1970s, the increased demand for aluminium made it an exchange commodity; it entered the London Metal Exchange, the oldest industrial metal exchange in the world, in 1978. The output continued to grow: the annual production of aluminium exceeded 50,000,000 metric tons in 2013.
The real price for aluminium declined from $14,000 per metric ton in 1900 to $2,340 in 1948 (in 1998 United States dollars). Extraction and processing costs were lowered by technological progress and economies of scale. However, the need to exploit lower-grade, poorer quality deposits and fast-increasing input costs (above all, energy) increased the net cost of aluminium; the real price began to grow in the 1970s with the rise of energy costs. Production moved from the industrialized countries to countries where production was cheaper. Production costs in the late 20th century changed because of advances in technology, lower energy prices, exchange rates of the United States dollar, and alumina prices. The BRIC countries' combined share in primary production and primary consumption grew substantially in the first decade of the 21st century. China is accumulating an especially large share of the world's production thanks to an abundance of resources, cheap energy, and governmental stimuli; it also increased its consumption share from 2% in 1972 to 40% in 2010. In the United States, Western Europe, and Japan, most aluminium was consumed in transportation, engineering, construction, and packaging. In 2021, prices for industrial metals such as aluminium soared to near-record levels as energy shortages in China drove up costs for electricity.
Etymology
The names aluminium and aluminum are derived from the word alumine, an obsolete term for alumina, a naturally occurring oxide of aluminium. Alumine was borrowed from French, which in turn derived it from alumen, the classical Latin name for alum, the mineral from which it was collected. The Latin word alumen stems from the Proto-Indo-European root *alu- meaning "bitter" or "beer".
Origins
British chemist Humphry Davy, who performed a number of experiments aimed at isolating the metal, is credited as the person who named the element. The first name proposed for the metal to be isolated from alum was alumium, which Davy suggested in an 1808 article on his electrochemical research, published in Philosophical Transactions of the Royal Society. It appeared that the name was created from the English word alum and the Latin suffix -ium; but it was customary then to give elements names originating in Latin, so this name was not adopted universally. This name was criticized by contemporary chemists from France, Germany, and Sweden, who insisted the metal should be named for the oxide, alumina, from which it would be isolated. The English name alum does not come directly from Latin, whereas alumine/alumina obviously comes from the Latin word alumen (upon declension, alumen changes to alumin-).
One example was Essai sur la Nomenclature chimique (July 1811), written in French by a Swedish chemist, Jöns Jacob Berzelius, in which the name aluminium is given to the element that would be synthesized from alum. (Another article in the same journal issue also gives the name aluminium to the metal whose oxide is the basis of sapphire.) A January 1811 summary of one of Davy's lectures at the Royal Society mentioned the name aluminium as a possibility. The next year, Davy published a chemistry textbook in which he used the spelling aluminum. Both spellings have coexisted since. Their usage is regional: aluminum dominates in the United States and Canada; aluminium, in the rest of the English-speaking world.
Spelling
In 1812, a British scientist, Thomas Young, wrote an anonymous review of Davy's book, in which he proposed the name aluminium instead of aluminum, which he thought had a "less classical sound". This name did catch on: although the aluminum spelling was occasionally used in Britain, American scientific language used aluminium from the start. Most scientists throughout the world used aluminium in the 19th century; and it was entrenched in several other European languages, such as French, German, and Dutch. In 1828, an American lexicographer, Noah Webster, entered only the aluminum spelling in his American Dictionary of the English Language. In the 1830s, the aluminum spelling gained usage in the United States; by the 1860s, it had become the more common spelling there outside science. In 1892, Hall used the aluminum spelling in his advertising handbill for his new electrolytic method of producing the metal, despite his constant use of the aluminium spelling in all the patents he filed between 1886 and 1903: it is unknown whether this spelling was introduced by mistake or intentionally; but Hall preferred aluminum since its introduction because it resembled platinum, the name of a prestigious metal. By 1890, both spellings had been common in the United States, the aluminium spelling being slightly more common; by 1895, the situation had reversed; by 1900, aluminum had become twice as common as aluminium; in the next decade, the aluminum spelling dominated American usage. In 1925, the American Chemical Society adopted this spelling.
The International Union of Pure and Applied Chemistry (IUPAC) adopted aluminium as the standard international name for the element in 1990. In 1993, they recognized aluminum as an acceptable variant; the most recent 2005 edition of the IUPAC nomenclature of inorganic chemistry also acknowledges this spelling. IUPAC official publications use the spelling as primary, and they list both where it is appropriate.
Production and refinement
The production of aluminium starts with the extraction of bauxite rock from the ground. The bauxite is processed and transformed using the Bayer process into alumina, which is then processed using the Hall–Héroult process, resulting in the final aluminium metal.
Aluminium production is highly energy-consuming, and so the producers tend to locate smelters in places where electric power is both plentiful and inexpensive. Production of one kilogram of aluminium requires 7 kilograms of oil energy equivalent, as compared to 1.5 kilograms for steel and 2 kilograms for plastic. As of 2019, the world's largest smelters of aluminium are located in China, India, Russia, Canada, and the United Arab Emirates, while China is by far the top producer of aluminium with a world share of fifty-five percent.
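The quoted energy intensities make the comparison concrete; a small sketch (Python), assuming the figures are kilograms of oil-equivalent per kilogram of product:

```python
# kg of oil-equivalent energy per kg of product (figures from the text)
energy_intensity = {"aluminium": 7.0, "plastic": 2.0, "steel": 1.5}

for material, kgoe in energy_intensity.items():
    print(f"{material:>9}: {kgoe / energy_intensity['steel']:.1f}x steel")
# aluminium is ~4.7x as energy-intensive as steel, per kilogram
```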
According to the International Resource Panel's Metal Stocks in Society report, the per capita stock of aluminium in use in society (i.e. in cars, buildings, electronics, etc.) is much higher in more-developed countries than in less-developed countries.
Bayer process
Bauxite is converted to alumina by the Bayer process. Bauxite is blended for uniform composition and then is ground. The resulting slurry is mixed with a hot solution of sodium hydroxide; the mixture is then treated in a digester vessel at a pressure well above atmospheric, dissolving the aluminium hydroxide in bauxite while converting impurities into relatively insoluble compounds:
Al(OH)3 + NaOH → Na[Al(OH)4]
After this reaction, the slurry is at a temperature above its atmospheric boiling point. It is cooled by removing steam as pressure is reduced. The bauxite residue is separated from the solution and discarded. The solution, free of solids, is seeded with small crystals of aluminium hydroxide; this causes decomposition of the [Al(OH)4]− ions to aluminium hydroxide. After about half of aluminium has precipitated, the mixture is sent to classifiers. Small crystals of aluminium hydroxide are collected to serve as seeding agents; coarse particles are converted to alumina by heating; the excess solution is removed by evaporation, (if needed) purified, and recycled.
Hall–Héroult process
The conversion of alumina to aluminium metal is achieved by the Hall–Héroult process. In this energy-intensive process, a solution of alumina in a molten (around 950–980 °C) mixture of cryolite (Na3AlF6) with calcium fluoride is electrolyzed to produce metallic aluminium. The liquid aluminium metal sinks to the bottom of the solution and is tapped off, and usually cast into large blocks called aluminium billets for further processing.
Anodes of the electrolysis cell are made of carbon, the most resistant material against fluoride corrosion, and are either baked in place during the process or prebaked. The former, also called Söderberg anodes, are less power-efficient, and fumes released during baking are costly to collect, which is why they are being replaced by prebaked anodes even though they save the power, energy, and labor needed to prebake the anodes. Carbon for anodes should preferably be pure so that neither aluminium nor the electrolyte is contaminated with ash. Despite carbon's resistance to corrosion, it is still consumed at a rate of 0.4–0.5 kg per kilogram of produced aluminium. Cathodes are made of anthracite; high purity for them is not required because impurities leach only very slowly. The cathode is consumed at a rate of 0.02–0.04 kg per kilogram of produced aluminium. A cell is usually terminated after 2–6 years following a failure of the cathode.
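Because each aluminium ion takes three electrons, Faraday's law fixes the minimum charge per kilogram of metal, and the overall cell reaction (2 Al2O3 + 3 C → 4 Al + 3 CO2, a standard result not stated above) fixes the minimum carbon demand. A minimal sketch (Python):

```python
FARADAY = 96485.0        # coulombs per mole of electrons
MOLAR_MASS_KG = 0.02698  # kg/mol of aluminium
ELECTRONS = 3            # Al3+ + 3 e- -> Al

charge_ah = ELECTRONS * FARADAY / MOLAR_MASS_KG / 3600
print(f"{charge_ah:.0f} Ah per kg of aluminium")   # ~2980 Ah minimum

# Stoichiometric carbon demand of 2 Al2O3 + 3 C -> 4 Al + 3 CO2:
carbon_per_kg_al = (3 * 12.011) / (4 * 26.98)
print(f"{carbon_per_kg_al:.2f} kg C per kg Al")    # ~0.33, vs 0.4-0.5 observed
```

The gap between the 0.33 kg stoichiometric figure and the 0.4–0.5 kg quoted above reflects side reactions and anode inefficiencies.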
The Hall–Héroult process produces aluminium with a purity of above 99%. Further purification can be done by the Hoopes process. This process involves the electrolysis of molten aluminium with a sodium, barium, and aluminium fluoride electrolyte. The resulting aluminium has a purity of 99.99%.
Electric power represents about 20 to 40% of the cost of producing aluminium, depending on the location of the smelter. Aluminium production consumes roughly 5% of electricity generated in the United States. Because of this, alternatives to the Hall–Héroult process have been researched, but none has turned out to be economically feasible.
Recycling
Recovery of the metal through recycling has become an important task of the aluminium industry. Recycling was a low-profile activity until the late 1960s, when the growing use of aluminium beverage cans brought it to public awareness. Recycling involves melting the scrap, a process that requires only 5% of the energy used to produce aluminium from ore, though a significant part (up to 15% of the input material) is lost as dross (ash-like oxide). An aluminium stack melter produces significantly less dross, with values reported below 1%.
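Combining the quoted figures gives the effective energy cost per kilogram of metal actually recovered; a minimal sketch (Python):

```python
ENERGY_FRACTION = 0.05   # recycling energy relative to primary production
DROSS_LOSS = 0.15        # worst-case share of input lost as dross

effective = ENERGY_FRACTION / (1 - DROSS_LOSS)
print(f"~{effective:.1%} of primary energy per kg recovered")  # ~5.9%
```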
White dross from primary aluminium production and from secondary recycling operations still contains useful quantities of aluminium that can be extracted industrially. The process produces aluminium billets, together with a highly complex waste material. This waste is difficult to manage. It reacts with water, releasing a mixture of gases (including, among others, hydrogen, acetylene, and ammonia), which spontaneously ignites on contact with air; contact with damp air results in the release of copious quantities of ammonia gas. Despite these difficulties, the waste is used as a filler in asphalt and concrete.
Applications
Metal
The global production of aluminium in 2016 was 58.8 million metric tons. It exceeded that of any other metal except iron (1,231 million metric tons).
Aluminium is almost always alloyed, which markedly improves its mechanical properties, especially when tempered. For example, the common aluminium foils and beverage cans are alloys of 92% to 99% aluminium. The main alloying agents are copper, zinc, magnesium, manganese, and silicon (e.g., duralumin), with the levels of other metals at a few percent by weight. Aluminium, both wrought and cast, has been alloyed with manganese, silicon, magnesium, copper, and zinc, among others. For example, the Kynal family of alloys was developed by the British chemical manufacturer Imperial Chemical Industries.
The major uses for aluminium metal are in:
Transportation (automobiles, aircraft, trucks, railway cars, marine vessels, bicycles, spacecraft, etc.). Aluminium is used because of its low density;
Packaging (cans, foil, frame, etc.). Aluminium is used because it is non-toxic (see below), non-adsorptive, and splinter-proof;
Building and construction (windows, doors, siding, building wire, sheathing, roofing, etc.). Since steel is cheaper, aluminium is used when lightness, corrosion resistance, or engineering features are important;
Electricity-related uses (conductor alloys, motors, and generators, transformers, capacitors, etc.). Aluminium is used because it is relatively cheap, highly conductive, has adequate mechanical strength and low density, and resists corrosion;
A wide range of household items, from cooking utensils to furniture. Low density, good appearance, ease of fabrication, and durability are the key factors of aluminium usage;
Machinery and equipment (processing equipment, pipes, tools). Aluminium is used because of its corrosion resistance, non-pyrophoricity, and mechanical strength.
Compounds
The great majority (about 90%) of aluminium oxide is converted to metallic aluminium. Being a very hard material (Mohs hardness 9), alumina is widely used as an abrasive; being extraordinarily chemically inert, it is useful in highly reactive environments such as high pressure sodium lamps. Aluminium oxide is commonly used as a catalyst for industrial processes; e.g. the Claus process to convert hydrogen sulfide to sulfur in refineries and to alkylate amines. Many industrial catalysts are supported by alumina, meaning that the expensive catalyst material is dispersed over a surface of the inert alumina. Another principal use is as a drying agent or absorbent.
Several sulfates of aluminium have industrial and commercial application. Aluminium sulfate (in its hydrate form) is produced on the annual scale of several millions of metric tons. About two-thirds is consumed in water treatment. The next major application is in the manufacture of paper. It is also used as a mordant in dyeing, in pickling seeds, deodorizing of mineral oils, in leather tanning, and in production of other aluminium compounds. Two kinds of alum, ammonium alum and potassium alum, were formerly used as mordants and in leather tanning, but their use has significantly declined following availability of high-purity aluminium sulfate. Anhydrous aluminium chloride is used as a catalyst in chemical and petrochemical industries, the dyeing industry, and in synthesis of various inorganic and organic compounds. Aluminium hydroxychlorides are used in purifying water, in the paper industry, and as antiperspirants. Sodium aluminate is used in treating water and as an accelerator of solidification of cement.
Many aluminium compounds have niche applications, for example:
Aluminium acetate in solution is used as an astringent.
Aluminium phosphate is used in the manufacture of glass, ceramic, pulp and paper products, cosmetics, paints, varnishes, and in dental cement.
Aluminium hydroxide is used as an antacid, and mordant; it is used also in water purification, the manufacture of glass and ceramics, and in the waterproofing of fabrics.
Lithium aluminium hydride is a powerful reducing agent used in organic chemistry.
Organoaluminiums are used as Lewis acids and co-catalysts.
Methylaluminoxane is a co-catalyst for Ziegler–Natta olefin polymerization to produce vinyl polymers such as polyethene.
Aqueous aluminium ions (such as aqueous aluminium sulfate) are used to treat against fish parasites such as Gyrodactylus salaris.
In many vaccines, certain aluminium salts serve as an immune adjuvant (immune response booster) to allow the protein in the vaccine to achieve sufficient potency as an immune stimulant. Until 2004, most adjuvants used in vaccines were aluminium salts.
Biology
Despite its widespread occurrence in the Earth's crust, aluminium has no known function in biology. At pH 6–9 (relevant for most natural waters), aluminium precipitates out of water as the hydroxide and is hence not available; most elements behaving this way have no biological role or are toxic. Aluminium sulfate has an LD50 of 6207 mg/kg (oral, mouse), which corresponds to 435 grams (about one pound) for a person.
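The quoted human-scale figure is a straight mass scaling of the mouse LD50; a minimal sketch (Python), assuming a 70 kg body mass, which the text does not state:

```python
LD50_MG_PER_KG = 6207   # oral LD50 of aluminium sulfate in mice (from the text)
BODY_MASS_KG = 70       # assumed human body mass

dose_g = LD50_MG_PER_KG * BODY_MASS_KG / 1000
print(f"{dose_g:.0f} g")   # ~434 g, consistent with the quoted ~435 grams
```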
Toxicity
Aluminium is classified as a non-carcinogen by the United States Department of Health and Human Services. A review published in 1988 said that there was little evidence that normal exposure to aluminium presents a risk to healthy adults, and a 2014 multi-element toxicology review was unable to find deleterious effects of aluminium consumed in amounts not greater than 40 mg/day per kg of body mass. Most aluminium consumed will leave the body in feces; most of the small part of it that enters the bloodstream will be excreted via urine; nevertheless some aluminium does pass the blood-brain barrier and is lodged preferentially in the brains of Alzheimer's patients. Evidence published in 1989 indicates that, for Alzheimer's patients, aluminium may act by electrostatically crosslinking proteins, thus down-regulating genes in the superior temporal gyrus.
Effects
Aluminium, although rarely, can cause vitamin D-resistant osteomalacia, erythropoietin-resistant microcytic anemia, and central nervous system alterations. People with kidney insufficiency are especially at risk. Chronic ingestion of hydrated aluminium silicates (for excess gastric acidity control) may result in aluminium binding to intestinal contents and increased elimination of other metals, such as iron or zinc; sufficiently high doses (>50 g/day) can cause anemia.
During the 1988 Camelford water pollution incident people in Camelford had their drinking water contaminated with aluminium sulfate for several weeks. A final report into the incident in 2013 concluded it was unlikely that this had caused long-term health problems.
Aluminium has been suspected of being a possible cause of Alzheimer's disease, but more than 40 years of research into this possibility has found no good evidence of a causal effect.
Aluminium increases estrogen-related gene expression in human breast cancer cells cultured in the laboratory. In very high doses, aluminium is associated with altered function of the blood–brain barrier. A small percentage of people have contact allergies to aluminium and experience itchy red rashes, headache, muscle pain, joint pain, poor memory, insomnia, depression, asthma, irritable bowel syndrome, or other symptoms upon contact with products containing aluminium.
Exposure to powdered aluminium or aluminium welding fumes can cause pulmonary fibrosis. Fine aluminium powder can ignite or explode, posing another workplace hazard.
Exposure routes
Food is the main source of aluminium. Drinking water contains more aluminium than solid food; however, aluminium in food may be absorbed more readily than aluminium from water. Major sources of human oral exposure to aluminium include food (due to its use in food additives, food and beverage packaging, and cooking utensils), drinking water (due to its use in municipal water treatment), and aluminium-containing medications (particularly antacid/antiulcer and buffered aspirin formulations). Dietary exposure in Europeans averages 0.2–1.5 mg/kg/week but can be as high as 2.3 mg/kg/week. Higher exposure levels of aluminium are mostly limited to miners, aluminium production workers, and dialysis patients.
Consumption of antacids, antiperspirants, vaccines, and cosmetics provides possible routes of exposure. Consumption of acidic foods or liquids with aluminium enhances aluminium absorption, and maltol has been shown to increase the accumulation of aluminium in nerve and bone tissues.
Treatment
In case of suspected sudden intake of a large amount of aluminium, the only treatment is deferoxamine mesylate, which may be given to help eliminate aluminium from the body by chelation. However, this should be applied with caution as it reduces not only aluminium body levels but also those of other metals such as copper or iron.
Environmental effects
High levels of aluminium occur near mining sites; small amounts of aluminium are released to the environment from coal-fired power plants and incinerators. Aluminium in the air is washed out by rain or normally settles, but small particles of aluminium remain in the air for a long time.
Acidic precipitation is the main natural factor mobilizing aluminium from natural sources and the main reason for the environmental effects of aluminium; however, the main source of aluminium in salt water and freshwater is the industrial processes that also release aluminium into the air.
In water, aluminium acts as a toxic agent on gill-breathing animals such as fish when the water is acidic; aluminium may then precipitate on the gills, causing loss of plasma and hemolymph ions and leading to osmoregulatory failure. Organic complexes of aluminium may be easily absorbed and interfere with metabolism in mammals and birds, even though this rarely happens in practice.
Aluminium is primary among the factors that reduce plant growth on acidic soils. Although it is generally harmless to plant growth in pH-neutral soils, in acid soils the concentration of toxic Al3+ cations increases and disturbs root growth and function. Wheat has developed a tolerance to aluminium, releasing organic compounds that bind to harmful aluminium cations. Sorghum is believed to have the same tolerance mechanism.
Aluminium production poses its own challenges to the environment at each step of the production process. The major challenge is greenhouse gas emissions. These gases result from the electrical consumption of the smelters and the byproducts of processing. The most potent of these gases are perfluorocarbons from the smelting process. Released sulfur dioxide is one of the primary precursors of acid rain.
Biodegradation of metallic aluminium is extremely rare; most aluminium-corroding organisms do not directly attack or consume the aluminium, but instead produce corrosive wastes. The fungus Geotrichum candidum can consume the aluminium in compact discs. The bacterium Pseudomonas aeruginosa and the fungus Cladosporium resinae are commonly detected in aircraft fuel tanks that use kerosene-based fuels (not avgas), and laboratory cultures can degrade aluminium.
See also
Aluminium granules
Aluminium joining
Aluminium–air battery
Aluminized steel, for corrosion resistance and other properties
Aluminized screen, for display devices
Aluminized cloth, to reflect heat
Aluminized mylar, to reflect heat
Panel edge staining
Quantum clock
Notes
References
Bibliography
Further reading
Mimi Sheller, Aluminum Dreams: The Making of Light Modernity. Cambridge, Mass.: Massachusetts Institute of Technology Press, 2014.
External links
Aluminium at The Periodic Table of Videos (University of Nottingham)
Toxic Substances Portal – Aluminum – from the Agency for Toxic Substances and Disease Registry, United States Department of Health and Human Services
CDC – NIOSH Pocket Guide to Chemical Hazards – Aluminum
World production of primary aluminium, by country
Price history of aluminum, according to the IMF
History of Aluminium – from the website of the International Aluminium Institute
Emedicine – Aluminium
Chemical elements
Post-transition metals
Aluminium
Electrical conductors
Pyrotechnic fuels
Airship technology
Reducing agents
E-number additives
Native element minerals
Chemical elements with face-centered cubic structure
|
https://en.wikipedia.org/wiki/Archipelago
|
An archipelago ( ), sometimes called an island group or island chain, is a chain, cluster, or collection of islands, or sometimes a sea containing a small number of scattered islands.
Examples of archipelagos include: the Indonesian Archipelago, the Andaman and Nicobar Islands, the Lakshadweep Islands, the Galápagos Islands, the Japanese archipelago, the Philippine Archipelago, the Maldives, the Balearic Islands, the Åland Islands, The Bahamas, the Aegean Islands, the Hawaiian Islands, the Canary Islands, Malta, the Azores, the Canadian Arctic Archipelago, the British Isles, the islands of the Archipelago Sea, and Shetland. Archipelagos are sometimes defined by political boundaries. For example, while they are geopolitically divided, the San Juan Islands and Gulf Islands geologically form part of a larger Gulf Archipelago.
Etymology
The word archipelago is derived from the Ancient Greek ἄρχι- (arkhi-, "chief") and πέλαγος (pélagos, "sea") through the Italian arcipelago. In antiquity, "Archipelago" (from Medieval Greek *ἀρχιπέλαγος and Latin archipelagus) was the proper name for the Aegean Sea. Later, usage shifted to refer to the Aegean Islands, since that sea contains a large number of islands.
Geographic types
Archipelagos may be found isolated in large bodies of water or neighbouring a large land mass. For example, Scotland has more than 700 islands surrounding its mainland, which form an archipelago.
Archipelagos are often volcanic, forming along island arcs generated by subduction zones or hotspots, but may also be the result of erosion, deposition, and land elevation. Depending on their geological origin, islands forming archipelagos can be referred to as oceanic islands, continental fragments, or continental islands.
Oceanic islands
Oceanic islands are mainly of volcanic origin, and widely separated from any adjacent continent. The Hawaiian Islands and Galapagos Islands in the Pacific, and Mascarene Islands in the south Indian Ocean are examples.
Continental fragments
Continental fragments correspond to land masses that have separated from a continental mass due to tectonic displacement. The Farallon Islands off the coast of California are an example.
Continental archipelagos
Sets of islands formed close to the coast of a continent are considered continental archipelagos when they form part of the same continental shelf, that is, when those islands are above-water extensions of the shelf. The islands of the Inside Passage off the coast of British Columbia and the Canadian Arctic Archipelago are examples.
Artificial archipelagos
Artificial archipelagos have been created in various countries for different purposes. Palm Islands and The World Islands off Dubai were or are being created for leisure and tourism purposes. Marker Wadden in the Netherlands is being built as a conservation area for birds and other wildlife.
Superlatives
The largest archipelago in the world is the Archipelago Sea, which is part of Finland. It contains approximately 40,000 islands, most of them uninhabited.
The largest archipelagic state in the world by area, and by population, is Indonesia.
See also
Island arc
List of landforms
List of archipelagos by number of islands
List of archipelagos
Archipelagic state
List of islands
Aquapelago
External links
30 Most Incredible Island Archipelagos
|
https://en.wikipedia.org/wiki/Axiom
|
An axiom, postulate, or assumption is a statement that is taken to be true, to serve as a premise or starting point for further reasoning and arguments. The word comes from the Ancient Greek word ἀξίωμα (axíōma), meaning 'that which is thought worthy or fit' or 'that which commends itself as evident'.
The precise definition varies across fields of study. In classic philosophy, an axiom is a statement that is so evident or well-established, that it is accepted without controversy or question. In modern logic, an axiom is a premise or starting point for reasoning.
In mathematics, an axiom may be a "logical axiom" or a "non-logical axiom". Logical axioms are taken to be true within the system of logic they define and are often shown in symbolic form (e.g., $(A \land B) \to A$), while non-logical axioms (e.g., $a + b = b + a$) are substantive assertions about the elements of the domain of a specific mathematical theory, such as arithmetic.
Non-logical axioms may also be called "postulates" or "assumptions". In most cases, a non-logical axiom is simply a formal logical expression used in deduction to build a mathematical theory, and might or might not be self-evident in nature (e.g., the parallel postulate in Euclidean geometry). To axiomatize a system of knowledge is to show that its claims can be derived from a small, well-understood set of sentences (the axioms), and there are typically many ways to axiomatize a given mathematical domain.
Any axiom is a statement that serves as a starting point from which other statements are logically derived. Whether it is meaningful (and, if so, what it means) for an axiom to be "true" is a subject of debate in the philosophy of mathematics.
Etymology
The word axiom comes from the Greek word ἀξίωμα (axíōma), a verbal noun from the verb ἀξιόειν (axioein), meaning "to deem worthy", but also "to require", which in turn comes from ἄξιος (áxios), meaning "being in balance", and hence "having (the same) value (as)", "worthy", "proper". Among the ancient Greek philosophers an axiom was a claim which could be seen to be self-evidently true without any need for proof.
The root meaning of the word postulate is to "demand"; for instance, Euclid demands that one agree that some things can be done (e.g., any two points can be joined by a straight line).
Ancient geometers maintained some distinction between axioms and postulates. While commenting on Euclid's books, Proclus remarks that "Geminus held that this [4th] Postulate should not be classed as a postulate but as an axiom, since it does not, like the first three Postulates, assert the possibility of some construction but expresses an essential property." Boethius translated 'postulate' as petitio and called the axioms notiones communes but in later manuscripts this usage was not always strictly kept.
Historical development
Early Greeks
The logico-deductive method whereby conclusions (new knowledge) follow from premises (old knowledge) through the application of sound arguments (syllogisms, rules of inference) was developed by the ancient Greeks, and has become the core principle of modern mathematics. Tautologies excluded, nothing can be deduced if nothing is assumed. Axioms and postulates are thus the basic assumptions underlying a given body of deductive knowledge. They are accepted without demonstration. All other assertions (theorems, in the case of mathematics) must be proven with the aid of these basic assumptions. However, the interpretation of mathematical knowledge has changed from ancient times to the modern, and consequently the terms axiom and postulate hold a slightly different meaning for the present day mathematician, than they did for Aristotle and Euclid.
The ancient Greeks considered geometry as just one of several sciences, and held the theorems of geometry on par with scientific facts. As such, they developed and used the logico-deductive method as a means of avoiding error, and for structuring and communicating knowledge. Aristotle's Posterior Analytics is a definitive exposition of the classical view.
An "axiom", in classical terminology, referred to a self-evident assumption common to many branches of science. A good example would be the assertion that:
When an equal amount is taken from equals, an equal amount results.
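In modern algebraic notation, this common notion can be paraphrased as a simple implication; the following rendering is illustrative and not Euclid's own formulation:

\[
a = b \;\Longrightarrow\; a - c = b - c .
\]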
At the foundation of the various sciences lay certain additional hypotheses that were accepted without proof. Such a hypothesis was termed a postulate. While the axioms were common to many sciences, the postulates of each particular science were different. Their validity had to be established by means of real-world experience. Aristotle warns that the content of a science cannot be successfully communicated if the learner is in doubt about the truth of the postulates.
The classical approach is well-illustrated by Euclid's Elements, where a list of postulates is given (common-sensical geometric facts drawn from our experience), followed by a list of "common notions" (very basic, self-evident assertions).
Postulates
It is possible to draw a straight line from any point to any other point.
It is possible to extend a line segment continuously in both directions.
It is possible to describe a circle with any center and any radius.
It is true that all right angles are equal to one another.
("Parallel postulate") It is true that, if a straight line falling on two straight lines make the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, intersect on that side on which are the angles less than the two right angles.
Common notions
Things which are equal to the same thing are also equal to one another.
If equals are added to equals, the wholes are equal.
If equals are subtracted from equals, the remainders are equal.
Things which coincide with one another are equal to one another.
The whole is greater than the part.
Modern development
A lesson learned by mathematics in the last 150 years is that it is useful to strip the meaning away from the mathematical assertions (axioms, postulates, propositions, theorems) and definitions. One must concede the need for primitive notions, or undefined terms or concepts, in any study. Such abstraction or formalization makes mathematical knowledge more general, capable of multiple different meanings, and therefore useful in multiple contexts. Alessandro Padoa, Mario Pieri, and Giuseppe Peano were pioneers in this movement.
Structuralist mathematics goes further, and develops theories and axioms (e.g. field theory, group theory, topology, vector spaces) without any particular application in mind. The distinction between an "axiom" and a "postulate" disappears. The postulates of Euclid are profitably motivated by saying that they lead to a great wealth of geometric facts. The truth of these complicated facts rests on the acceptance of the basic hypotheses. However, by throwing out Euclid's fifth postulate, one can get theories that have meaning in wider contexts (e.g., hyperbolic geometry). As such, one must simply be prepared to use labels such as "line" and "parallel" with greater flexibility. The development of hyperbolic geometry taught mathematicians that it is useful to regard postulates as purely formal statements, and not as facts based on experience.
When mathematicians employ the field axioms, the intentions are even more abstract. The propositions of field theory do not concern any one particular application; the mathematician now works in complete abstraction. There are many examples of fields; field theory gives correct knowledge about them all.
It is not correct to say that the axioms of field theory are "propositions that are regarded as true without proof." Rather, the field axioms are a set of constraints. If any given system of addition and multiplication satisfies these constraints, then one is in a position to instantly know a great deal of extra information about this system.
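To make "axioms as constraints" concrete, the field axioms can be written out in full; the following is the standard list for a field $(F, +, \cdot, 0, 1)$, included here for reference:

\[
\begin{aligned}
&a + (b + c) = (a + b) + c, \quad a(bc) = (ab)c && \text{(associativity)}\\
&a + b = b + a, \quad ab = ba && \text{(commutativity)}\\
&a + 0 = a, \quad a \cdot 1 = a, \quad 0 \neq 1 && \text{(identities)}\\
&\forall a \; \exists\,(-a):\; a + (-a) = 0, \quad \forall a \neq 0 \; \exists\, a^{-1}:\; a\,a^{-1} = 1 && \text{(inverses)}\\
&a(b + c) = ab + ac && \text{(distributivity)}
\end{aligned}
\]

Any structure satisfying all of these constraints, from the rationals to finite fields, is a field, and every theorem of field theory holds in it.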
Modern mathematics formalizes its foundations to such an extent that mathematical theories can be regarded as mathematical objects, and mathematics itself can be regarded as a branch of logic. Frege, Russell, Poincaré, Hilbert, and Gödel are some of the key figures in this development.
Another lesson learned in modern mathematics is to examine purported proofs carefully for hidden assumptions.
In the modern understanding, a set of axioms is any collection of formally stated assertions from which other formally stated assertions follow – by the application of certain well-defined rules. In this view, logic becomes just another formal system. A set of axioms should be consistent; it should be impossible to derive a contradiction from the axioms. A set of axioms should also be non-redundant; an assertion that can be deduced from other axioms need not be regarded as an axiom.
It was the early hope of modern logicians that various branches of mathematics, perhaps all of mathematics, could be derived from a consistent collection of basic axioms. An early success of the formalist program was Hilbert's formalization of Euclidean geometry, and the related demonstration of the consistency of those axioms.
In a wider context, there was an attempt to base all of mathematics on Cantor's set theory. Here, the emergence of Russell's paradox and similar antinomies of naïve set theory raised the possibility that any such system could turn out to be inconsistent.
The formalist project suffered a decisive setback when, in 1931, Gödel showed that it is possible, for any sufficiently large set of axioms (Peano's axioms, for example), to construct a statement whose truth is independent of that set of axioms. As a corollary, Gödel proved that the consistency of a theory like Peano arithmetic is an unprovable assertion within the scope of that theory.
It is reasonable to believe in the consistency of Peano arithmetic because it is satisfied by the system of natural numbers, an infinite but intuitively accessible formal system. However, at present, there is no known way of demonstrating the consistency of the modern Zermelo–Fraenkel axioms for set theory. Furthermore, using techniques of forcing (Cohen) one can show that the continuum hypothesis (Cantor) is independent of the Zermelo–Fraenkel axioms. Thus, even this very general set of axioms cannot be regarded as the definitive foundation for mathematics.
Other sciences
Experimental sciences - as opposed to mathematics and logic - also have general founding assertions from which a deductive reasoning can be built so as to express propositions that predict properties - either still general or much more specialized to a specific experimental context. For instance, Newton's laws in classical mechanics, Maxwell's equations in classical electromagnetism, Einstein's equation in general relativity, Mendel's laws of genetics, Darwin's Natural selection law, etc. These founding assertions are usually called principles or postulates so as to distinguish from mathematical axioms.
As a matter of fact, the role of axioms in mathematics and of postulates in experimental sciences is different. In mathematics one neither "proves" nor "disproves" an axiom; a set of mathematical axioms gives a set of rules that fix a conceptual realm in which the theorems logically follow. In contrast, in experimental sciences a set of postulates must allow the deduction of results that can be matched against experimental outcomes. If the postulates do not allow the deduction of experimental predictions, they do not set a scientific conceptual framework and have to be completed or made more accurate. If the postulates do allow the deduction of predictions, then comparison with experiment makes it possible to falsify the theory that the postulates install. A theory is considered valid as long as it has not been falsified.
The transition between mathematical axioms and scientific postulates is, however, always slightly blurred, especially in physics, owing to the heavy use of mathematical tools in support of physical theories. For instance, the introduction of Newton's laws rarely establishes as a prerequisite either the Euclidean geometry or the differential calculus that they imply. This became more apparent when Albert Einstein first introduced special relativity, where the invariant quantity is no longer the Euclidean length $l$ (defined as $l^2 = x^2 + y^2 + z^2$) but the Minkowski spacetime interval $s$ (defined as $s^2 = c^2 t^2 - x^2 - y^2 - z^2$), and then general relativity, where flat Minkowskian geometry is replaced with pseudo-Riemannian geometry on curved manifolds.
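As an illustrative check, not part of the historical account, one can verify that this interval is invariant under a standard Lorentz boost with velocity $v$ along $x$, restricting to the $t$–$x$ plane and writing $\gamma = 1/\sqrt{1 - v^2/c^2}$:

\[
\begin{aligned}
t' &= \gamma\left(t - \tfrac{vx}{c^2}\right), \qquad x' = \gamma\,(x - vt),\\
c^2 t'^2 - x'^2 &= \gamma^2\left[c^2 t^2 - 2vxt + \tfrac{v^2 x^2}{c^2} - x^2 + 2vxt - v^2 t^2\right]\\
&= \gamma^2\left(1 - \tfrac{v^2}{c^2}\right)\left(c^2 t^2 - x^2\right) = c^2 t^2 - x^2 ,
\end{aligned}
\]

so the Minkowski interval plays the role under boosts that the Euclidean length plays under rotations.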
In quantum physics, two sets of postulates coexisted for some time, providing a notable example of falsification. The 'Copenhagen school' (Niels Bohr, Werner Heisenberg, Max Born) developed an operational approach with a complete mathematical formalism that involves the description of a quantum system by vectors ('states') in a separable Hilbert space, and physical quantities as linear operators that act on this Hilbert space. This approach is fully falsifiable and has so far produced the most accurate predictions in physics. But it has the unsatisfactory aspect of not allowing answers to questions one would naturally ask. For this reason, another 'hidden variables' approach was developed for some time by Albert Einstein, Erwin Schrödinger and David Bohm. It was created so as to try to give a deterministic explanation to phenomena such as entanglement. This approach assumed that the Copenhagen school description was not complete, and postulated that some yet unknown variable was to be added to the theory so as to allow answering some of the questions it does not answer (the founding elements of which were discussed as the EPR paradox in 1935). Taking these ideas seriously, John Bell derived in 1964 a prediction that would lead to different experimental results (Bell's inequalities) in the Copenhagen and the hidden-variable cases. The experiment was first conducted by Alain Aspect in the early 1980s, and the result excluded the simple hidden-variable approach (sophisticated hidden variables could still exist, but their properties would be more disturbing than the problems they try to solve). This does not mean that the conceptual framework of quantum physics can be considered complete now, since some open questions still exist (the limit between the quantum and classical realms, what happens during a quantum measurement, what happens in a completely closed quantum system such as the universe itself, etc.).
Mathematical logic
In the field of mathematical logic, a clear distinction is made between two notions of axioms: logical and non-logical (somewhat similar to the ancient distinction between "axioms" and "postulates" respectively).
Logical axioms
These are certain formulas in a formal language that are universally valid, that is, formulas that are satisfied by every assignment of values. Usually one takes as logical axioms at least some minimal set of tautologies that is sufficient for proving all tautologies in the language; in the case of predicate logic more logical axioms than that are required, in order to prove logical truths that are not tautologies in the strict sense.
Examples
Propositional logic
In propositional logic it is common to take as logical axioms all formulae of the following forms, where $\phi$, $\psi$, and $\chi$ can be any formulae of the language and where the included primitive connectives are only "$\lnot$" for negation of the immediately following proposition and "$\to$" for implication from antecedent to consequent propositions:

1. $\phi \to (\psi \to \phi)$
2. $(\phi \to (\psi \to \chi)) \to ((\phi \to \psi) \to (\phi \to \chi))$
3. $(\lnot\phi \to \lnot\psi) \to (\psi \to \phi)$
Each of these patterns is an axiom schema, a rule for generating an infinite number of axioms. For example, if $A$, $B$, and $C$ are propositional variables, then $A \to (B \to A)$ and $(A \to \lnot B) \to (C \to (A \to \lnot B))$ are both instances of axiom schema 1, and hence are axioms. It can be shown that with only these three axiom schemata and modus ponens, one can prove all tautologies of the propositional calculus. It can also be shown that no pair of these schemata is sufficient for proving all tautologies with modus ponens.
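For a concrete illustration of how the schemata combine with modus ponens, here is the standard five-line derivation of $\phi \to \phi$, included purely as an illustration:

\[
\begin{aligned}
1.\;& \phi \to ((\phi \to \phi) \to \phi) && \text{schema 1}\\
2.\;& (\phi \to ((\phi \to \phi) \to \phi)) \to ((\phi \to (\phi \to \phi)) \to (\phi \to \phi)) && \text{schema 2}\\
3.\;& (\phi \to (\phi \to \phi)) \to (\phi \to \phi) && \text{modus ponens, 1, 2}\\
4.\;& \phi \to (\phi \to \phi) && \text{schema 1}\\
5.\;& \phi \to \phi && \text{modus ponens, 4, 3}
\end{aligned}
\]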
Other axiom schemata involving the same or different sets of primitive connectives can be alternatively constructed.
These axiom schemata are also used in the predicate calculus, but additional logical axioms are needed to include a quantifier in the calculus.
First-order logic
Axiom of Equality. Let $\mathfrak{L}$ be a first-order language. For each variable $x$, the formula $x = x$ is universally valid.
This means that, for any variable symbol $x$, the formula $x = x$ can be regarded as an axiom. Also, in this example, for this not to fall into vagueness and a never-ending series of "primitive notions", either a precise notion of what we mean by $x = x$ (or, for that matter, "to be equal") has to be well established first, or a purely formal and syntactical usage of the symbol $=$ has to be enforced, only regarding it as a string and only a string of symbols, and mathematical logic does indeed do that.
Another, more interesting example axiom scheme, is that which provides us with what is known as Universal Instantiation:
Axiom scheme for Universal Instantiation. Given a formula $\phi$ in a first-order language $\mathfrak{L}$, a variable $x$ and a term $t$ that is substitutable for $x$ in $\phi$, the formula $\forall x \, \phi \to \phi^x_t$ is universally valid.
Here the symbol $\phi^x_t$ stands for the formula $\phi$ with the term $t$ substituted for $x$. (See Substitution of variables.) In informal terms, this example allows us to state that, if we know that a certain property $P$ holds for every $x$ and that $t$ stands for a particular object in our structure, then we should be able to claim $P(t)$. Again, we are claiming that the formula $\forall x \, \phi \to \phi^x_t$ is valid, that is, we must be able to give a "proof" of this fact, or more properly speaking, a metaproof. These examples are metatheorems of our theory of mathematical logic since we are dealing with the very concept of proof itself. Aside from this, we can also have Existential Generalization:
Axiom scheme for Existential Generalization. Given a formula $\phi$ in a first-order language $\mathfrak{L}$, a variable $x$ and a term $t$ that is substitutable for $x$ in $\phi$, the formula $\phi^x_t \to \exists x \, \phi$ is universally valid.
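For illustration, with a concrete formula of our own choosing, take $\phi$ to be $x \geq 0$ in the language of arithmetic and let $t$ be the term $5$; the two schemes then yield the instances

\[
\forall x\,(x \geq 0) \;\to\; 5 \geq 0, \qquad 5 \geq 0 \;\to\; \exists x\,(x \geq 0).
\]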
Non-logical axioms
Non-logical axioms are formulas that play the role of theory-specific assumptions. Reasoning about two different structures, for example, the natural numbers and the integers, may involve the same logical axioms; the non-logical axioms aim to capture what is special about a particular structure (or set of structures, such as groups). Thus non-logical axioms, unlike logical axioms, are not tautologies. Another name for a non-logical axiom is postulate.
Almost every modern mathematical theory starts from a given set of non-logical axioms, and it was thought that, in principle, every theory could be axiomatized in this way and formalized down to the bare language of logical formulas.
Non-logical axioms are often simply referred to as axioms in mathematical discourse. This does not mean that it is claimed that they are true in some absolute sense. For example, in some groups, the group operation is commutative, and this can be asserted with the introduction of an additional axiom, but without this axiom, we can do quite well developing (the more general) group theory, and we can even take its negation as an axiom for the study of non-commutative groups.
Thus, an axiom is an elementary basis for a formal logic system that together with the rules of inference define a deductive system.
Examples
This section gives examples of mathematical theories that are developed entirely from a set of non-logical axioms (axioms, henceforth). A rigorous treatment of any of these topics begins with a specification of these axioms.
Basic theories, such as arithmetic, real analysis and complex analysis, are often introduced non-axiomatically, but implicitly or explicitly there is generally an assumption that the axioms being used are the axioms of Zermelo–Fraenkel set theory with choice, abbreviated ZFC, or some very similar system of axiomatic set theory like Von Neumann–Bernays–Gödel set theory, a conservative extension of ZFC. Sometimes slightly stronger theories are used, such as Morse–Kelley set theory or set theory with a strongly inaccessible cardinal allowing the use of a Grothendieck universe, but in fact most mathematicians can prove all they need in systems weaker than ZFC, such as second-order arithmetic.
The study of topology in mathematics extends across point-set topology, algebraic topology, differential topology, and related apparatus such as homology theory and homotopy theory. The development of abstract algebra brought with it group theory, rings, fields, and Galois theory.
This list could be expanded to include most fields of mathematics, including measure theory, ergodic theory, probability, representation theory, and differential geometry.
Arithmetic
The Peano axioms are the most widely used axiomatization of first-order arithmetic. They are a set of axioms strong enough to prove many important facts about number theory and they allowed Gödel to establish his famous second incompleteness theorem.
We have a language with a constant symbol $0$ and a unary function symbol $S$ (the successor function), together with the following axioms:

1. $\forall x.\; \lnot (Sx = 0)$
2. $\forall x.\, \forall y.\; (Sx = Sy \to x = y)$
3. $(\phi(0) \land \forall x\,(\phi(x) \to \phi(Sx))) \to \forall x\,\phi(x)$, for any formula $\phi$ with one free variable.
The standard structure is $\langle \mathbb{N}, 0, S \rangle$, where $\mathbb{N}$ is the set of natural numbers, $S$ is the successor function and $0$ is naturally interpreted as the number 0.
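As a worked illustration, an example of our own, of how the induction schema interacts with the other two axioms, one can derive $\forall x\, \lnot(Sx = x)$ by taking $\phi(x)$ to be $\lnot(Sx = x)$:

\[
\begin{aligned}
&\text{Base: } \lnot(S0 = 0) && \text{axiom 1}\\
&\text{Step: } \lnot(Sx = x) \to \lnot(SSx = Sx) && \text{contrapositive of axiom 2 with } y := x\\
&\text{Hence } \forall x\, \lnot(Sx = x) && \text{induction schema with } \phi(x) := \lnot(Sx = x)
\end{aligned}
\]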
Euclidean geometry
Probably the oldest and most famous list of axioms is the 4 + 1 postulates of Euclid's plane geometry. The axioms are referred to as "4 + 1" because for nearly two millennia the fifth (parallel) postulate ("through a point outside a line there is exactly one parallel") was suspected of being derivable from the first four. Ultimately, the fifth postulate was found to be independent of the first four. One can assume that exactly one parallel through a point outside a line exists, or that infinitely many exist. This choice gives two alternative forms of geometry in which the interior angles of a triangle add up to exactly 180 degrees or less, respectively, known as Euclidean and hyperbolic geometry. If one also removes the second postulate ("a line can be extended indefinitely") then elliptic geometry arises, where there is no parallel through a point outside a line, and in which the interior angles of a triangle add up to more than 180 degrees.
Real analysis
The object of study is the domain of real numbers. The real numbers are uniquely picked out (up to isomorphism) by the properties of a Dedekind complete ordered field, meaning that any nonempty set of real numbers with an upper bound has a least upper bound. However, expressing these properties as axioms requires the use of second-order logic. The Löwenheim–Skolem theorems tell us that if we restrict ourselves to first-order logic, any axiom system for the reals admits other models, including both models that are smaller than the reals and models that are larger. Some of the latter are studied in non-standard analysis.
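The Dedekind-completeness property mentioned above can be written out explicitly; this rendering is standard and included here for concreteness:

\[
\forall S \subseteq \mathbb{R}\;\Bigl[\bigl(S \neq \varnothing \,\wedge\, \exists b\;\forall s \in S\,(s \le b)\bigr) \to \exists u\,\bigl(\forall s \in S\,(s \le u) \,\wedge\, \forall v\,(\forall s \in S\,(s \le v) \to u \le v)\bigr)\Bigr].
\]

The quantification over arbitrary subsets $S$ is precisely what forces the statement into second-order logic, as noted above.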
Role in mathematical logic
Deductive systems and completeness
A deductive system consists of a set of logical axioms, a set of non-logical axioms, and a set of rules of inference. A desirable property of a deductive system is that it be complete. A system is said to be complete if, for all formulas $\phi$,

\[
\text{if } \Sigma \models \phi \text{ then } \Sigma \vdash \phi,
\]

that is, for any statement that is a logical consequence of the axioms $\Sigma$ there actually exists a deduction of the statement from $\Sigma$. This is sometimes expressed as "everything that is true is provable", but it must be understood that "true" here means "made true by the set of axioms", and not, for example, "true in the intended interpretation". Gödel's completeness theorem establishes the completeness of a certain commonly used type of deductive system.
Note that "completeness" has a different meaning here than it does in the context of Gödel's first incompleteness theorem, which states that no recursive, consistent set of non-logical axioms of the Theory of Arithmetic is complete, in the sense that there will always exist an arithmetic statement such that neither nor can be proved from the given set of axioms.
There is thus, on the one hand, the notion of completeness of a deductive system and on the other hand that of completeness of a set of non-logical axioms. The completeness theorem and the incompleteness theorem, despite their names, do not contradict one another.
Further discussion
Early mathematicians regarded axiomatic geometry as a model of physical space, and obviously, there could only be one such model. The idea that alternative mathematical systems might exist was very troubling to mathematicians of the 19th century and the developers of systems such as Boolean algebra made elaborate efforts to derive them from traditional arithmetic. Galois showed just before his untimely death that these efforts were largely wasted. Ultimately, the abstract parallels between algebraic systems were seen to be more important than the details, and modern algebra was born. In the modern view, axioms may be any set of formulas, as long as they are not known to be inconsistent.
See also
Axiomatic system
Dogma
First principle, axiom in science and philosophy
List of axioms
Model theory
Regulæ Juris
Theorem
Presupposition
Physical law
Principle
Further reading
Mendelson, Elliot (1987). Introduction to mathematical logic. Belmont, California: Wadsworth & Brooks.
External links
Metamath axioms page
|
https://en.wikipedia.org/wiki/Anarcho-capitalism
|
Anarcho-capitalism (colloquially: ancap or an-cap) is an anti-statist, libertarian political philosophy and economic theory that seeks to abolish centralized states in favor of stateless societies with systems of private property enforced by private agencies, the non-aggression principle, free markets and self-ownership, the last of which extends the concept to include control of private property as part of the self. In the absence of statute, anarcho-capitalists hold that society tends to contractually self-regulate and civilize through participation in the free market, which they describe as a voluntary society involving the voluntary exchange of goods and services. In a theoretical anarcho-capitalist society, the system of private property would still exist and be enforced by private defense agencies and/or insurance companies selected by customers, which would operate competitively in a market and fulfill the roles of courts and the police.
According to its proponents, various historical theorists have espoused philosophies similar to anarcho-capitalism. While the earliest extant attestation of "anarchocapitalism [sic]" is in Karl Hess's essay "The Death of Politics" published by Playboy in March 1969, the person credited with coining the terms anarcho-capitalism and anarcho-capitalist is Murray Rothbard. Rothbard, a leading figure in the 20th-century American libertarian movement, synthesized elements from the Austrian School, classical liberalism and 19th-century American individualist anarchists and mutualists Lysander Spooner and Benjamin Tucker, while rejecting the labor theory of value. Rothbard's anarcho-capitalist society would operate under a mutually agreed-upon "legal code which would be generally accepted, and which the courts would pledge themselves to follow". This legal code would recognize contracts between individuals, private property, self-ownership and tort law in keeping with the non-aggression principle. Rothbard views the power of the state as unjustified, arguing that it restricts individual rights and prosperity, and creates social and economic problems.
Anarcho-capitalists and right-libertarians cite several historical precedents of what they believe to be examples of quasi-anarcho-capitalism, including the Republic of Cospaia, Acadia, Anglo-Saxon England, Medieval Iceland, the American Old West, Gaelic Ireland, and merchant law, admiralty law, and early common law.
Anarcho-capitalism is distinguished from minarchism, which advocates a night-watchman state limited to protecting individuals from aggression and enforcing private property. Unlike most anarchists, anarcho-capitalists support private property and private institutions.
Classification
Anarcho-capitalism developed from radical American anti-state libertarianism and individualist anarchism. A strong current within anarchism does not consider anarcho-capitalism to be part of the anarchist movement, arguing that anarchism has historically been an anti-capitalist movement, and for definitional reasons, seeing anarchism as incompatible with capitalist forms. According to several scholars, anarcho-capitalism lies outside the tradition of the vast majority of anarchist schools of thought and is more closely affiliated with capitalism, right-libertarianism and neoliberalism. Social anarchists oppose and reject capitalism, and consider "anarcho-capitalism" to be a contradiction in terms, although some, including anarcho-capitalists and right-libertarians, consider anarcho-capitalism to be a form of anarchism.
Anarcho-capitalism is occasionally seen as part of the New Right.
Philosophy
Author J. Michael Oliver says that during the 1960s, a philosophical movement arose in the United States that championed "reason, ethical egoism, and free-market capitalism". According to Oliver, anarcho-capitalism is a political theory which logically follows the philosophical conclusions of Objectivism, a philosophical system developed by Russian-American writer Ayn Rand, but Oliver acknowledges that his advocacy of anarcho-capitalism is "quite at odds with Rand's ardent defense of 'limited government'". Professor Lisa Duggan also says that Rand's anti-statist, pro–"free market" stances went on to shape the politics of anarcho-capitalism.
According to Patrik Schumacher, the political ideology and programme of anarcho-capitalism envisage the radicalization of the neoliberal "rollback of the state", calling for the extension of "entrepreneurial freedom" and "competitive market rationality" to the point where the scope for private enterprise is all-encompassing and "leaves no space for state action whatsoever".
On the state
Anarcho-capitalists oppose the state and seek to privatize any useful service the government presently provides, such as education, infrastructure, or the enforcement of law. They see capitalism and the "free market" as the basis for a free and prosperous society. Murray Rothbard stated that the difference between free-market capitalism and state capitalism is the difference between "peaceful, voluntary exchange" and a "collusive partnership" between business and government that "uses coercion to subvert the free market". Rothbard argued that all government services, including defense, are inefficient because they lack a market-based pricing mechanism regulated by "the voluntary decisions of consumers purchasing services that fulfill their highest-priority needs" and by investors seeking the most profitable enterprises to invest in.
Maverick Edwards of Liberty University describes anarcho-capitalism as a political, social, and economic theory that places markets as the central "governing body" and under which government no longer "grants" rights to its citizenry.
Non-aggression principle
Writer Stanisław Wójtowicz says that although anarcho-capitalists are against centralized states, they hold that all people would naturally share and agree to a specific moral theory based on the non-aggression principle. While the Friedmanian formulation of anarcho-capitalism is robust to the presence of violence and in fact assumes some degree of violence will occur, anarcho-capitalism as formulated by Rothbard and others holds strongly to the central libertarian non-aggression axiom, sometimes called the non-aggression principle.
Rothbard's defense of the self-ownership principle stems from what he believed to be his falsification of all other alternatives, namely that either a group of people can own another group of people, or that no single person has full ownership over one's self. Rothbard dismisses these two cases on the basis that they cannot result in a universal ethic, i.e. a just natural law that can govern all people, independent of place and time. The only alternative that remains to Rothbard is self-ownership which he believes is both axiomatic and universal.
In general, the non-aggression axiom is described by Rothbard as a prohibition against the initiation of force, or the threat of force, against persons (in which he includes direct violence, assault and murder) or property (in which he includes fraud, burglary, theft and taxation). The initiation of force is usually referred to as aggression or coercion. The difference between anarcho-capitalists and other libertarians is largely one of the degree to which they take this axiom. Minarchist libertarians such as libertarian political parties would retain the state in some smaller and less invasive form, retaining at the very least public police, courts, and military. However, others might give further allowance for other government programs. In contrast, Rothbard rejects any level of "state intervention", defining the state as a coercive monopoly and as the only entity in human society, excluding acknowledged criminals, that derives its income entirely from coercion, in the form of taxation, which Rothbard describes as "compulsory seizure of the property of the State's inhabitants, or subjects."
Some anarcho-capitalists such as Rothbard accept the non-aggression axiom on an intrinsic moral or natural law basis. It is in terms of the non-aggression principle that Rothbard defined his interpretation of anarchism, "a system which provides no legal sanction for such aggression ['against person and property']"; and wrote that "what anarchism proposes to do, then, is to abolish the State, i.e. to abolish the regularized institution of aggressive coercion". In an interview published in the American libertarian journal The New Banner, Rothbard stated that "capitalism is the fullest expression of anarchism, and anarchism is the fullest expression of capitalism".
Property
Private property
Anarcho-capitalists postulate the privatization of everything, including cities with all their infrastructures, public spaces, streets and urban management systems.
Central to Rothbardian anarcho-capitalism are the concepts of self-ownership and original appropriation, which combine personal and private property; both concepts have been elaborated at length by Hans-Hermann Hoppe.
Rothbard, however, rejected the Lockean proviso, following the rule of "first come, first served" without any consideration of how much of a resource is left for other individuals.
Anarcho-capitalists advocate private ownership of the means of production and the allocation of the product of labor created by workers within the context of wage labour and the free market, that is, through decisions made by property and capital owners, regardless of what an individual needs or does not need. Original appropriation allows an individual to claim any never-before-used resources, including land; by improving or otherwise using them, the individual owns them with the same "absolute right" as their own body and retains those rights forever, regardless of whether the resource is still being used. According to Rothbard, property can only come about through labor; therefore, original appropriation of land is not legitimized by merely claiming it or building a fence around it. It is only by using land and by mixing one's labor with it that original appropriation is legitimized: "Any attempt to claim a new resource that someone does not use would have to be considered invasive of the property right of whoever the first user will turn out to be". Rothbard argued that the resource need not continue to be used in order for it to be the person's property, as "for once his labor is mixed with the natural resource, it remains his owned land. His labor has been irretrievably mixed with the land, and the land is therefore his or his assigns' in perpetuity".
Rothbard also developed a theory of justice in property rights.
In Justice and Property Rights, Rothbard wrote that "any identifiable owner (the original victim of theft or his heir) must be accorded his property". In the case of slavery, Rothbard claimed that in many cases "the old plantations and the heirs and descendants of the former slaves can be identified, and the reparations can become highly specific indeed". Rothbard believed slaves rightfully own any land they were forced to work on under the homestead principle. If property is held by the state, Rothbard advocated its confiscation and "return to the private sector", writing that "any property in the hands of the State is in the hands of thieves, and should be liberated as quickly as possible". Rothbard proposed that state universities be seized by the students and faculty under the homestead principle. Rothbard also supported the expropriation of nominally "private property" if it is the result of state-initiated force such as businesses that receive grants and subsidies. Rothbard further proposed that businesses who receive at least 50% of their funding from the state be confiscated by the workers, writing: "What we libertarians object to, then, is not government per se but crime, what we object to is unjust or criminal property titles; what we are for is not 'private' property per se but just, innocent, non-criminal private property".
Similarly, Karl Hess wrote that "libertarianism wants to advance principles of property but that it in no way wishes to defend, willy nilly, all property which now is called private ... Much of that property is stolen. Much is of dubious title. All of it is deeply intertwined with an immoral, coercive state system".
By accepting an axiomatic definition of private property and property rights, anarcho-capitalists deny the legitimacy of a state on principle, a position argued at length by Hans-Hermann Hoppe.
Anarchists view capitalism as an inherently authoritarian and hierarchical system and seek the abolition of private property. There is disagreement between anarchists and anarcho-capitalists, as the former generally reject anarcho-capitalism as a form of anarchism and consider anarcho-capitalism a contradiction in terms, while the latter hold that the abolition of private property would require expropriation, which is "counterproductive to order" and would require a state.
Common property
As opposed to anarchists, most anarcho-capitalists reject the commons. However, some of them propose that non-state public or community property can also exist in an anarcho-capitalist society. For anarcho-capitalists, what is important is that it is "acquired" and transferred without help or hindrance from what they call the "compulsory state". Deontological anarcho-capitalists believe that the only just and most economically beneficial way to acquire property is through voluntary trade, gift, or labor-based original appropriation, rather than through aggression or fraud.
Anarcho-capitalists state that there could be cases where common property may develop in a Lockean natural rights framework. They give the example of a number of private businesses which may arise in an area, each owning the land and buildings that it uses, while the paths between them become cleared and trodden incrementally through customer and commercial movement. These thoroughfares may become valuable to the community, but according to them ownership cannot be attributed to any single person, and original appropriation does not apply because many contributed the labor necessary to create them. In order to prevent such commons from falling to the "tragedy of the commons", anarcho-capitalists suggest transitioning from common to private property, wherein an individual would make a homesteading claim based on disuse, acquire title by the assent of the community consensus, form a corporation with other involved parties, or use other means.
Randall G. Holcombe sees challenges stemming from the idea of common property under anarcho-capitalism, such as whether an individual might claim fishing rights in the area of a major shipping lane and thereby forbid passage through it. In contrast, Hoppe's work on anarcho-capitalist theory is based on the assumption that all property is privately held, "including all streets, rivers, airports, and harbors", which forms the foundation of his views on immigration.
Intellectual property
Some anarcho-capitalists strongly oppose intellectual property (i.e., trademarks, patents, copyrights). Stephan Kinsella argues that ownership only relates to tangible assets.
Contractual society
The society envisioned by anarcho-capitalists has been labelled by them a "contractual society", which Rothbard described as "a society based purely on voluntary action, entirely unhampered by violence or threats of violence". The system relies on contracts between individuals as the legal framework, which would be enforced by private police and security forces as well as private arbitration.
Rothbard argues that limited liability for corporations could also exist through contract, arguing that "[c]orporations are not at all monopolistic privileges; they are free associations of individuals pooling their capital. On the purely free market, those men would simply announce to their creditors that their liability is limited to the capital specifically invested in the corporation".
There are limits to the right to contract under some interpretations of anarcho-capitalism. Rothbard believes that the right to contract is based in inalienable rights, and that because of this any contract that implicitly violates those rights can be voided at will, preventing a person from permanently selling himself or herself into unindentured slavery. That restriction aside, the right to contract under an anarcho-capitalist order would be quite broad. For example, Rothbard went as far as to justify stork markets, arguing that a market in guardianship rights would facilitate the transfer of guardianship from abusive or neglectful parents to those more interested in or suited to raising children. Other anarcho-capitalists have also suggested the legalization of organ markets, as in Iran's renal market. Other interpretations conclude that banning such contracts would in itself be an unacceptably invasive interference in the right to contract.
Included in the right of contract is "the right to contract oneself out for employment by others". While anarchists criticize wage labour describing it as wage slavery, anarcho-capitalists view it as a consensual contract. Some anarcho-capitalists prefer to see self-employment prevail over wage labor. David D. Friedman has expressed a preference for a society where "almost everyone is self-employed" and "instead of corporations there are large groups of entrepreneurs related by trade, not authority. Each sells not his time, but what his time produces".
Law and order and the use of violence
Different anarcho-capitalists propose different forms of anarcho-capitalism, and one area of disagreement is in the area of law. In The Market for Liberty, Morris and Linda Tannehill object to any statutory law whatsoever. They argue that all one has to do is ask whether one is aggressing against another in order to decide if an act is right or wrong. However, while also supporting a ban on force and fraud, Rothbard supports the establishment of a mutually agreed-upon centralized libertarian legal code which private courts would pledge to follow, as he presumes a high degree of convergence among individuals about what constitutes natural justice.
Unlike both the Tannehills and Rothbard who see an ideological commonality of ethics and morality as a requirement, David D. Friedman proposes that "the systems of law will be produced for profit on the open market, just as books and bras are produced today. There could be competition among different brands of law, just as there is competition among different brands of cars". Friedman says whether this would lead to a libertarian society "remains to be proven". He says it is a possibility that very un-libertarian laws may result, such as laws against drugs, but he thinks this would be rare. He reasons that "if the value of a law to its supporters is less than its cost to its victims, that law ... will not survive in an anarcho-capitalist society".
Anarcho-capitalists only accept the collective defense of individual liberty (i.e. courts, military, or police forces) insofar as such groups are formed and paid for on an explicitly voluntary basis. However, their complaint is not just that the state's defensive services are funded by taxation, but that the state assumes it is the only legitimate practitioner of physical force—that is, they believe it forcibly prevents the private sector from providing comprehensive security, such as a police, judicial and prison systems to protect individuals from aggressors. Anarcho-capitalists believe that there is nothing morally superior about the state which would grant it, but not private individuals, a right to use physical force to restrain aggressors. If competition in security provision were allowed to exist, prices would also be lower and services would be better according to anarcho-capitalists. According to Molinari: "Under a regime of liberty, the natural organization of the security industry would not be different from that of other industries". Proponents believe that private systems of justice and defense already exist, naturally forming where the market is allowed to "compensate for the failure of the state", namely private arbitration, security guards, neighborhood watch groups and so on. These private courts and police are sometimes referred to generically as private defense agencies (PDAs). The defense of those unable to pay for such protection might be financed by charitable organizations relying on voluntary donation rather than by state institutions relying on taxation, or by cooperative self-help by groups of individuals. Edward Stringham argues that private adjudication of disputes could enable the market to internalize externalities and provide services that customers desire.
Rothbard stated that the American Revolutionary War and the War of Southern Secession were the only two just wars in American military history. Some anarcho-capitalists such as Rothbard feel that violent revolution is counter-productive and prefer voluntary forms of economic secession to the extent possible. Retributive justice is often a component of the contracts imagined for an anarcho-capitalist society. According to Matthew O'Keeffe, some anarcho-capitalists believe prisons or indentured servitude would be justifiable institutions to deal with those who violate anarcho-capitalist property relations, while others believe exile or forced restitution is sufficient. Rothbard stressed the importance of restitution as the primary focus of a libertarian legal order and advocated corporal punishment for petty vandals and the death penalty for murderers.
Bruce L. Benson argues that legal codes may impose punitive damages for intentional torts in the interest of deterring crime. Benson gives the example of a thief who breaks into a house by picking a lock. Even if caught before taking anything, Benson argues that the thief would still owe the victim for violating the sanctity of his property rights. Benson opines that despite the lack of objectively measurable losses in such cases, "standardized rules that are generally perceived to be fair by members of the community would, in all likelihood, be established through precedent, allowing judgments to specify payments that are reasonably appropriate for most criminal offenses".
Morris and Linda Tannehill raise a similar example, saying that a bank robber who had an attack of conscience and returned the money would still owe reparations for endangering the employees' and customers' lives and safety, in addition to the costs of the defense agency answering the teller's call for help. However, they believe that the robber's loss of reputation would be even more damaging. They suggest that specialized companies would list aggressors so that anyone wishing to do business with a man could first check his record, provided they trust the veracity of the companies' records. They further theorise that the bank robber would find insurance companies listing him as a very poor risk and other firms would be reluctant to enter into contracts with him.
Influences
Murray Rothbard has listed different ideologies of which his interpretations, he said, have influenced anarcho-capitalism. This includes his interpretation of anarchism, and more precisely individualist anarchism; classical liberalism and the Austrian School of economic thought. Scholars additionally associate anarcho-capitalism with neo-classical liberalism, radical neoliberalism and right-libertarianism.
Anarchism
In both its social and individualist forms, anarchism is usually considered an anti-capitalist and radical left-wing or far-left movement that promotes libertarian socialist economic theories such as collectivism, communism, individualism, mutualism and syndicalism. Because anarchism is usually described alongside libertarian Marxism as the libertarian wing of the socialist movement and as having a historical association with anti-capitalism and socialism, anarchists believe that capitalism is incompatible with social and economic equality and therefore do not recognize anarcho-capitalism as an anarchist school of thought. In particular, anarchists argue that capitalist transactions are not voluntary and that maintaining the class structure of a capitalist society requires coercion which is incompatible with an anarchist society. The usage of libertarian is also in dispute. While both anarchists and anarcho-capitalists have used it, libertarian was synonymous with anarchist until the mid-20th century, when anarcho-capitalist theory developed.
Anarcho-capitalists are distinguished from the dominant anarchist tradition by their relation to property and capital. While both anarchism and anarcho-capitalism share general antipathy towards government authority, anarcho-capitalism favors free-market capitalism. Anarchists, including egoists such as Max Stirner, have supported the protection of an individual's freedom from powers of both government and private property owners. In contrast, while condemning governmental encroachment on personal liberties, anarcho-capitalists support freedoms based on private property rights. Anarcho-capitalist theorist Murray Rothbard argued that protesters should rent a street for protest from its owners. The abolition of public amenities is a common theme in some anarcho-capitalist writings.
As anarcho-capitalism puts laissez-faire economics before economic equality, it is commonly viewed as incompatible with the anti-capitalist and egalitarian tradition of anarchism. Although anarcho-capitalist theory implies the abolition of the state in favour of a fully laissez-faire economy, it lies outside the tradition of anarchism. While using the language of anarchism, anarcho-capitalism only shares anarchism's antipathy towards the state and not anarchism's antipathy towards hierarchy as theorists expect from anarcho-capitalist economic power relations. It follows a different paradigm from anarchism and has a fundamentally different approach and goals. In spite of the anarcho- in its title, anarcho-capitalism is more closely affiliated with capitalism, right-libertarianism, and liberalism than with anarchism. Some within this laissez-faire tradition reject the designation of anarcho-capitalism, believing that capitalism may either refer to the laissez-faire market they support or the government-regulated system that they oppose.
Rothbard argued that anarcho-capitalism is the only true form of anarchism—the only form of anarchism that could possibly exist in reality as he maintained that any other form presupposes authoritarian enforcement of a political ideology such as "redistribution of private property", which he attributed to anarchism. According to this argument, the capitalist free market is "the natural situation" that would result from people being free from state authority and entails the establishment of all voluntary associations in society such as cooperatives, non-profit organizations, businesses and so on. Moreover, anarcho-capitalists, as well as classical liberal minarchists, argue that the application of anarchist ideals as advocated by what they term "left-wing anarchists" would require an authoritarian body of some sort to impose it. Based on their understanding and interpretation of anarchism, in order to forcefully prevent people from accumulating capital, which they believe is a goal of anarchists, there would necessarily be a redistributive organization of some sort which would have the authority to in essence exact a tax and re-allocate the resulting resources to a larger group of people. They conclude that this theoretical body would inherently have political power and would be nothing short of a state. The difference between such an arrangement and an anarcho-capitalist system is what anarcho-capitalists see as the voluntary nature of organization within anarcho-capitalism contrasted with a "centralized ideology" and a "paired enforcement mechanism" which they believe would be necessary under what they describe as a "coercively" egalitarian-anarchist system.
Rothbard also argued that the capitalist system of today is not properly anarchistic because it often colludes with the state. According to Rothbard, "what Marx and later writers have done is to lump together two extremely different and even contradictory concepts and actions under the same portmanteau term. These two contradictory concepts are what I would call 'free-market capitalism' on the one hand, and 'state capitalism' on the other". "The difference between free-market capitalism and state capitalism", writes Rothbard, "is precisely the difference between, on the one hand, peaceful, voluntary exchange, and on the other, violent expropriation". He continues: "State capitalism inevitably creates all sorts of problems which become insoluble".
Traditional anarchists reject the notion of capitalism, hierarchies and private property. Albert Meltzer argued that anarcho-capitalism simply cannot be anarchism because capitalism and the state are inextricably interlinked and because capitalism exhibits domineering hierarchical structures such as that between an employer and an employee. Anna Morgenstern approaches this topic from the opposite perspective, arguing that anarcho-capitalists are not really capitalists because "mass concentration of capital is impossible" without the state. According to Jeremy Jennings, "[i]t is hard not to conclude that these ideas," referring to anarcho-capitalism, have "roots deep in classical liberalism" and "are described as anarchist only on the basis of a misunderstanding of what anarchism is." For Jennings, "anarchism does not stand for the untrammelled freedom of the individual (as the 'anarcho-capitalists' appear to believe) but, as we have already seen, for the extension of individuality and community." Similarly, Barbara Goodwin, Emeritus Professor of Politics at the University of East Anglia, Norwich, argues that anarcho-capitalism's "true place is in the group of right-wing libertarians", not in anarchism.
Some right-libertarian scholars like Michael Huemer, who identify with the ideology, describe anarcho-capitalism as a "variety of anarchism". British author Andrew Heywood also believes that "individualist anarchism overlaps with libertarianism and is usually linked to a strong belief in the market as a self-regulating mechanism, most obviously manifest in the form of anarcho-capitalism". Frank H. Brooks, author of The Individualist Anarchists: An Anthology of Liberty (1881–1908), believes that "anarchism has always included a significant strain of radical individualism, from the hyperrationalism of Godwin, to the egoism of Stirner, to the libertarians and anarcho-capitalists of today".
While both anarchism and anarcho-capitalism oppose the state, opposition to the state is a necessary but not a sufficient condition for anarchism, and anarchists and anarcho-capitalists interpret the rejection of the state differently. Austrian school economist David Prychitko, writing in the context of anarcho-capitalism, says that "while society without a state is necessary for full-fledged anarchy, it is nevertheless insufficient". According to Ruth Kinna, anarcho-capitalists are anti-statists who draw more on right-wing liberal theory and the Austrian School than anarchist traditions. Kinna writes that "[i]n order to highlight the clear distinction between the two positions", anarchists describe anarcho-capitalists as "propertarians". Anarcho-capitalism is usually seen as part of the New Right.
Some anarcho-capitalists acknowledge that other anarchists treat the word "anarchy" as the antithesis of hierarchy, and that by this standard "anarcho-capitalism" is sometimes held to differ philosophically from what those anarchists consider true anarchism, since an anarcho-capitalist society would inherently contain hierarchies. Additionally, Rothbard distinguished between "government" and "governance"; on this basis, proponents of anarcho-capitalism hold that the philosophy's common name is indeed consistent, as it promotes private governance while remaining vehemently anti-government.
Classical liberalism
Historian and libertarian Ralph Raico argued that what liberal philosophers "had come up with was a form of individualist anarchism, or, as it would be called today, anarcho-capitalism or market anarchism". He also said that Gustave de Molinari was proposing a doctrine of the private production of security, a position which was later taken up by Murray Rothbard. Some anarcho-capitalists consider Molinari to be the first proponent of anarcho-capitalism. In the preface to the 1977 English translation, Murray Rothbard called The Production of Security the "first presentation anywhere in human history of what is now called anarcho-capitalism", although admitting that "Molinari did not use the terminology, and probably would have balked at the name". Hans-Hermann Hoppe said that "the 1849 article 'The Production of Security' is probably the single most important contribution to the modern theory of anarcho-capitalism". According to Hoppe, the 19th-century precursors of anarcho-capitalism included the philosopher Herbert Spencer, the classical liberal Auberon Herbert and the liberal socialist Franz Oppenheimer.
Ruth Kinna credits Murray Rothbard with coining the term anarcho-capitalism to describe "a commitment to unregulated private property and laissez-faire economics, prioritizing the liberty-rights of individuals, unfettered by government regulation, to accumulate, consume and determine the patterns of their lives as they see fit". According to Kinna, anarcho-capitalists "will sometimes label themselves market anarchists because they recognize the negative connotations of 'capitalism'. But the literature of anarcho-capitalism draws on classical liberal theory, particularly the Austrian School – Friedrich von Hayek and Ludwig von Mises – rather than recognizable anarchist traditions. Ayn Rand's laissez-faire, anti-government, corporate philosophy – Objectivism – is sometimes associated with anarcho-capitalism". Other scholars similarly associate anarcho-capitalism with anti-state classical liberalism, neo-classical liberalism, radical neoliberalism and right-libertarianism.
Paul Dragos Aligica writes that there is a "foundational difference between the classical liberal and the anarcho-capitalist positions". Classical liberalism, while accepting critical arguments against collectivism, acknowledges a certain level of public ownership and collective governance as necessary to provide practical solutions to political problems. In contrast, anarcho-capitalism, according to Aligica, denies any requirement for any form of public administration and allows no meaningful role for the public sphere, which is seen as sub-optimal and illegitimate.
Individualist anarchism
Murray Rothbard, a student of Ludwig von Mises, stated that he was influenced by the work of the 19th-century American individualist anarchists. In the winter of 1949, Rothbard decided to reject minimal state laissez-faire and embrace his interpretation of individualist anarchism. In 1965, Rothbard wrote that "Lysander Spooner and Benjamin R. Tucker were unsurpassed as political philosophers and nothing is more needed today than a revival and development of the largely forgotten legacy they left to political philosophy". However, Rothbard thought that they had a faulty understanding of economics: the 19th-century individualist anarchists held a labor theory of value, as influenced by the classical economists, while Rothbard was a student of Austrian School economics, which rejects the labor theory of value. Rothbard sought to meld the 19th-century American individualist anarchists' advocacy of economic individualism and free markets with the principles of Austrian School economics, arguing that "[t]here is, in the body of thought known as 'Austrian economics', a scientific explanation of the workings of the free market (and of the consequences of government intervention in that market) which individualist anarchists could easily incorporate into their political and social Weltanschauung". Rothbard held that the political system the individualist anarchists advocated would not, in practice, produce an economy in which people are paid in proportion to labor amounts, nor would profit and interest disappear as they expected. Tucker thought that unregulated banking and money issuance would cause increases in the money supply so that interest rates would drop to zero or near to it. Peter Marshall states that "anarcho-capitalism overlooks the egalitarian implications of traditional individualist anarchists like Spooner and Tucker". Stephanie Silberstein states that "While Spooner was no free-market capitalist, nor an anarcho-capitalist, he was not as opposed to capitalism as most socialists were."
In "The Spooner-Tucker Doctrine: An Economist's View", Rothbard explained his disagreements. Rothbard disagreed with Tucker that it would cause the money supply to increase because he believed that the money supply in a free market would be self-regulating. If it were not, then Rothbard argued inflation would occur so it is not necessarily desirable to increase the money supply in the first place. Rothbard claimed that Tucker was wrong to think that interest would disappear regardless because he believed people, in general, do not wish to lend their money to others without compensation, so there is no reason why this would change just because banking was unregulated. Tucker held a labor theory of value and thought that in a free market people would be paid in proportion to how much labor they exerted and that exploitation or usury was taking place if they were not. As Tucker explained in State Socialism and Anarchism, his theory was that unregulated banking would cause more money to be available and that this would allow the proliferation of new businesses which would, in turn, raise demand for labor. This led Tucker to believe that the labor theory of value would be vindicated and equal amounts of labor would receive equal pay. As an Austrian School economist, Rothbard did not agree with the labor theory and believed that prices of goods and services are proportional to marginal utility rather than to labor amounts in the free market. As opposed to Tucker he did not think that there was anything exploitative about people receiving an income according to how much "buyers of their services value their labor" or what that labor produces.
Without the labor theory of value, some argue that the 19th-century individualist anarchists approximate the modern movement of anarcho-capitalism, although this has been contested or rejected. As economic theory changed, the labor theory of classical economics was superseded by the subjective theory of value of neoclassical economics, and Rothbard combined Mises' Austrian School of economics with the absolutist views of human rights and rejection of the state he had absorbed from studying 19th-century individualist American anarchists such as Tucker and Spooner. In the mid-1950s, Rothbard wrote an unpublished article named "Are Libertarians 'Anarchists'?" under the pseudonym "Aubrey Herbert", concerned with differentiating himself from the communist and socialistic economic views of anarchists, including the individualist anarchists of the 19th century, concluding that "we are not anarchists and that those who call us anarchists are not on firm etymological ground and are being completely unhistorical. On the other hand, it is clear that we are not archists either: we do not believe in establishing a tyrannical central authority that will coerce the noninvasive as well as the invasive. Perhaps, then, we could call ourselves by a new name: nonarchist." Joe Peacott, an American individualist anarchist in the mutualist tradition, criticizes anarcho-capitalists for trying to hegemonize the individualist anarchism label and make it appear as if all individualist anarchists are in favor of capitalism. Peacott states that "individualists, both past and present, agree with the communist anarchists that present-day capitalism is based on economic coercion, not on voluntary contract. Rent and interest are the mainstays of modern capitalism and are protected and enforced by the state. Without these two unjust institutions, capitalism could not exist".
Anarchist activists and scholars do not consider anarcho-capitalism a part of the anarchist movement, arguing that anarchism has historically been an anti-capitalist movement, and they see it as incompatible with capitalist forms. Although some regard anarcho-capitalism as a form of individualist anarchism, many others disagree or contest the existence of an individualist–socialist divide. Acknowledging that anarchists mostly identified with socialism, Rothbard wrote that individualist anarchism is different from anarcho-capitalism and other capitalist theories because the individualist anarchists retained the labor theory of value and socialist doctrines. Similarly, many writers deny that anarcho-capitalism is a form of anarchism or that capitalism is compatible with anarchism.
The Palgrave Handbook of Anarchism writes that "[a]s Benjamin Franks rightly points out, individualisms that defend or reinforce hierarchical forms such as the economic-power relations of anarcho-capitalism are incompatible with practices of social anarchism based on developing immanent goods which contest such inequalities". Laurence Davis cautiously asks "[i]s anarcho-capitalism really a form of anarchism or instead a wholly different ideological paradigm whose adherents have attempted to expropriate the language of anarchism for their own anti-anarchist ends?" Davis cites Iain McKay, whom Franks cites as an authority to support his contention that "academic analysis has followed activist currents in rejecting the view that anarcho-capitalism has anything to do with social anarchism", as arguing "quite emphatically on the very pages cited by Franks that anarcho-capitalism is by no means a type of anarchism". McKay writes that "[i]t is important to stress that anarchist opposition to the so-called capitalist 'anarchists' does not reflect some kind of debate within anarchism, as many of these types like to pretend, but a debate between anarchism and its old enemy capitalism. ... Equally, given that anarchists and 'anarcho'-capitalists have fundamentally different analyses and goals it is hardly 'sectarian' to point this out".
Davis writes that "Franks asserts without supporting evidence that most major forms of individualist anarchism have been largely anarcho-capitalist in content, and concludes from this premise that most forms of individualism are incompatible with anarchism". Davis argues that "the conclusion is unsustainable because the premise is false, depending as it does for any validity it might have on the further assumption that anarcho-capitalism is indeed a form of anarchism. If we reject this view, then we must also reject the individual anarchist versus the communal anarchist 'chasm' style of argument that follows from it". Davis maintains that "the ideological core of anarchism is the belief that society can and should be organised without hierarchy and domination. Historically, anarchists have struggled against a wide range of regimes of domination, from capitalism, the state system, patriarchy, heterosexism, and the domination of nature to colonialism, the war system, slavery, fascism, white supremacy, and certain forms of organised religion". According to Davis, "[w]hile these visions range from the predominantly individualistic to the predominantly communitarian, features common to virtually all include an emphasis on self-management and self-regulatory methods of organisation, voluntary association, and a decentralised society, based on the principle of free association, in which people will manage and govern themselves". Finally, Davis includes a footnote stating that "[i]ndividualist anarchism may plausibly be regarded as a form of both socialism and anarchism. Whether the individualist anarchists were consistent anarchists (and socialists) is another question entirely. ... McKay comments as follows: 'any individualist anarchism which supports wage labour is inconsistent anarchism. It can easily be made consistent anarchism by applying its own principles consistently. In contrast 'anarcho'-capitalism rejects so many of the basic, underlying, principles of anarchism ... that it cannot be made consistent with the ideals of anarchism'".
Historical precedents
Several anarcho-capitalists and right-libertarians have discussed historical precedents of what they believe were examples of anarcho-capitalism.
Free cities of medieval Europe
Economist and libertarian scholar Bryan Caplan considers the free cities of medieval Europe to be examples of "anarchist" or "nearly anarchistic" societies.
Medieval Iceland
According to the libertarian theorist David D. Friedman, "[m]edieval Icelandic institutions have several peculiar and interesting characteristics; they might almost have been invented by a mad economist to test the lengths to which market systems could supplant government in its most fundamental functions". While not directly labeling it anarcho-capitalist, Friedman argues that the legal system of the Icelandic Commonwealth comes close to being a real-world anarcho-capitalist legal system. Although noting that there was a single legal system, Friedman argues that enforcement of the law was entirely private and highly capitalist, providing some evidence of how such a society would function. Friedman further wrote that "[e]ven where the Icelandic legal system recognized an essentially 'public' offense, it dealt with it by giving some individual (in some cases chosen by lot from those affected) the right to pursue the case and collect the resulting fine, thus fitting it into an essentially private system".
Friedman and Bruce L. Benson argued that the Icelandic Commonwealth saw significant economic and social progress in the absence of systems of criminal law, an executive, or bureaucracy. The commonwealth was led by chieftains, whose position could be bought and sold like private property. Membership in a chieftainship was also completely voluntary.
American Old West
According to Terry L. Anderson and P. J. Hill, the Old West in the United States in the period of 1830 to 1900 was similar to anarcho-capitalism in that "private agencies provided the necessary basis for an orderly society in which property was protected and conflicts were resolved", and the common popular perception that the Old West was chaotic with little respect for property rights is incorrect. Since squatters had no claim to western lands under federal law, extra-legal organizations, as Benson describes, formed to fill the void.
According to Anderson, "[d]efining anarcho-capitalist to mean minimal government with property rights developed from the bottom up, the western frontier was anarcho-capitalistic. People on the frontier invented institutions that fit the resource constraints they faced".
Gaelic Ireland
In his work For a New Liberty, Murray Rothbard claimed ancient Gaelic Ireland as an example of a nearly anarcho-capitalist society. In his depiction, citing the work of Professor Joseph Peden, the basic political unit of ancient Ireland was the tuath, which is portrayed as "a body of persons voluntarily united for socially beneficial purposes" with its territorial claim being limited to "the sum total of the landed properties of its members". Civil disputes were settled by private arbiters called "brehons", and the compensation owed to the wronged party was insured through voluntary surety relationships. Rothbard also commented on the limited, non-sovereign role of the "kings" of tuaths.
Law merchant, admiralty law, and early common law
Some libertarians have cited law merchant, admiralty law and early common law as examples of anarcho-capitalism.
Rothbard made a similar argument in his work Power and Market.
Somalia from 1991 to 2006
Economist Alex Tabarrok argued that Somalia in its stateless period provided a "unique test of the theory of anarchy", in some respects near that espoused by anarcho-capitalists David D. Friedman and Murray Rothbard. Nonetheless, both anarchists and some anarcho-capitalists argue that Somalia was not an anarchist society.
Analysis and criticism
State, justice and defense
Anarchists such as Brian Morris argue that anarcho-capitalism does not in fact get rid of the state. He says that anarcho-capitalists "simply replaced the state with private security firms, and can hardly be described as anarchists as the term is normally understood". The anarchist Peter Sabatini makes a similar criticism in "Libertarianism: Bogus Anarchy".
Similarly, Bob Black argues that an anarcho-capitalist wants to "abolish the state to his own satisfaction by calling it something else". He states that they do not denounce what the state does, they just "object to who's doing it".
Paul Birch argues that legal disputes involving several jurisdictions and different legal systems will be too complex and costly. He therefore argues that anarcho-capitalism is inherently unstable, and would evolve, entirely through the operation of free market forces, into either a single dominant private court with a natural monopoly of justice over the territory (a de facto state), a society of multiple city states, each with a territorial monopoly, or a 'pure anarchy' that would rapidly descend into chaos.
Randall G. Holcombe argues that anarcho-capitalism turns justice into a commodity, as private defense and court firms would favour those who pay more for their services. He argues that defense agencies could form cartels and oppress people without fear of competition. The anarchist Albert Meltzer argued that since anarcho-capitalism promotes the idea of private armies, it actually supports a "limited State". He contends that it "is only possible to conceive of Anarchism which is free, communistic and offering no economic necessity for repression of countering it".
Libertarian Robert Nozick argues that a competitive legal system would evolve toward a monopoly government—even without violating individuals' rights in the process. In Anarchy, State, and Utopia, Nozick defends minarchism and argues that an anarcho-capitalist society would inevitably transform into a minarchist state through the eventual emergence of a monopolistic private defense and judicial agency that no longer faces competition. He argues that anarcho-capitalism results in an unstable system that would not endure in the real world. While anarcho-capitalists such as Roy Childs and Murray Rothbard have rejected Nozick's arguments, with Rothbard arguing that the process described by Nozick, with the dominant protection agency outlawing its competitors, in fact violates its own clients' rights, John Jefferson endorses Nozick's argument and states that such events would best operate in laissez-faire. Robert Ellickson presented a Hayekian case against anarcho-capitalism, calling it a "pipe-dream" and stating that anarcho-capitalists "by imagining a stable system of competing private associations, ignore both the inevitability of territorial monopolists in governance, and the importance of institutions to constrain those monopolists' abuses".
Some libertarians argue that anarcho-capitalism would result in different standards of justice and law due to relying too much on the market. Friedman responded to this criticism by arguing that it assumes the state is controlled by a majority group that has similar legal ideals. If the populace is diverse, different legal standards would therefore be appropriate.
Rights and freedom
Negative and positive rights are rights that oblige either action (positive rights) or inaction (negative rights). Anarcho-capitalists believe that negative rights should be recognized as legitimate, but positive rights should be rejected as an intrusion. Some critics reject the distinction between positive and negative rights. Peter Marshall also states that the anarcho-capitalist definition of freedom is entirely negative and that it cannot guarantee the positive freedom of individual autonomy and independence.
The anarcho-syndicalist and anti-capitalist intellectual Noam Chomsky has likewise criticized anarcho-capitalism, arguing that its implementation would lead to forms of tyranny and oppression with few counterparts in human history.
Economics and property
Social anarchists argue that anarcho-capitalism allows individuals to accumulate significant power through free markets and private property. Friedman responded by arguing that the Icelandic Commonwealth was able to prevent the wealthy from abusing the poor by requiring individuals who engaged in acts of violence to compensate their victims financially.
Anarchists argue that certain capitalist transactions are not voluntary and that maintaining the class structure of a capitalist society requires coercion, which violates anarchist principles. Anthropologist David Graeber noted his skepticism about anarcho-capitalism along the same lines.
Some critics argue that the anarcho-capitalist concept of voluntary choice ignores constraints due to both human and non-human factors such as the need for food and shelter as well as active restriction of both used and unused resources by those enforcing property claims. If a person requires employment in order to feed and house himself, the employer-employee relationship could be considered involuntary. Another criticism is that employment is involuntary because the economic system that makes it necessary for some individuals to serve others is supported by the enforcement of coercive private property relations. Some philosophies view any ownership claims on land and natural resources as immoral and illegitimate. Objectivist philosopher Harry Binswanger criticizes anarcho-capitalism by arguing that "capitalism requires government", questioning who or what would enforce treaties and contracts.
Some right-libertarian critics of anarcho-capitalism who support the full privatization of capital, such as geolibertarians, argue that land and the raw materials of nature remain a distinct factor of production and cannot be justly converted to private property because they are not products of human labor. Some socialists, including market anarchists and mutualists, adamantly oppose absentee ownership. Anarcho-capitalists hold strong abandonment criteria, namely that one maintains ownership until one agrees to trade or gift it. Anti-state critics of this view posit comparatively weak abandonment criteria, arguing that one loses ownership when one stops personally occupying and using property, and that the idea of perpetually binding original appropriation is anathema to traditional schools of anarchism.
Literature
The following is a partial list of notable nonfiction works discussing anarcho-capitalism.
Bruce L. Benson, The Enterprise of Law: Justice Without The State
To Serve and Protect: Privatization and Community in Criminal Justice
David D. Friedman, The Machinery of Freedom
Edward P. Stringham, Anarchy and the Law: The Political Economy of Choice
George H. Smith, "Justice Entrepreneurship in a Free Market"
Gerard Casey, Libertarian Anarchy: Against the State
Hans-Hermann Hoppe, Anarcho-Capitalism: An Annotated Bibliography
A Theory of Socialism and Capitalism
Democracy: The God That Failed
The Economics and Ethics of Private Property
Linda and Morris Tannehill, The Market for Liberty
Michael Huemer, The Problem of Political Authority
Murray Rothbard, founder of anarcho-capitalism:
For a New Liberty
Man, Economy, and State
Power and Market
The Ethics of Liberty
See also
Agorism
Anarcho-capitalism and minarchism
Consequentialist libertarianism
Counter-economics
Creative disruption
Crypto-anarchism
Definition of anarchism and libertarianism
Issues in anarchism
Left-wing market anarchism
Natural-rights libertarianism
Privatization in criminal justice
Propertarianism
Stateless society
The Libertarian Forum
Voluntaryism
Notes
References
Further reading
Brown, Susan Love (1997). "The Free Market as Salvation from Government: The Anarcho-Capitalist View". In Carrier, James G., ed. Meanings of the Market: The Free Market in Western Culture (illustrated ed.). Oxford: Berg Publishers. p. 99.
External links
Anarcho-capitalist FAQ
LewRockwell.com – website run by Lew Rockwell
Mises Institute – research and educational center of classical liberalism, including anarcho-capitalism, Austrian School of economics and American libertarian political theory
Property and Freedom Society – international anarcho-capitalist society
Strike The Root – an anarcho-capitalist website featuring essays, news, and a forum
|
https://en.wikipedia.org/wiki/Almond
|
The almond (Prunus amygdalus, syn. Prunus dulcis) is a species of small tree from the genus Prunus, cultivated worldwide for its seed, a culinary nut. Along with the peach, it is classified in the subgenus Amygdalus, distinguished from the other subgenera by corrugations on the shell (endocarp) surrounding the seed.
The fruit of the almond is a drupe, consisting of an outer hull and a hard shell with the seed, which is not a true nut. Shelling almonds refers to removing the shell to reveal the seed. Almonds are sold shelled or unshelled. Blanched almonds are shelled almonds that have been treated with hot water to soften the seedcoat, which is then removed to reveal the white embryo. Once almonds are cleaned and processed, they can be stored over time. Almonds are used in many cuisines, often featuring prominently in desserts, such as marzipan.
The almond tree prospers in a moderate Mediterranean climate with cool winter weather. Native to Iran and surrounding countries including the Levant, today it is rarely found wild in its original setting. Almonds were one of the earliest domesticated fruit trees, due to the ability to produce quality offspring entirely from seed, without using suckers and cuttings. Evidence of domesticated almonds in the Early Bronze Age has been found in the archeological sites of the Middle East, and subsequently across the Mediterranean region and similar arid climates with cool winters.
California produces over half of the world's almond supply. Due to high acreage and water demand for almond cultivation, and need for pesticides, California almond production may be unsustainable, especially during the persistent drought and heat from climate change in the 21st century. Droughts in California have caused some producers to leave the industry, leading to lower supply and increased prices.
Description
The almond is a deciduous tree growing to 4–12 metres (13–40 feet) in height, with a trunk of up to 30 centimetres (12 inches) in diameter. The young twigs are green at first, becoming purplish where exposed to sunlight, then gray in their second year. The leaves are 8–13 centimetres (3–5 inches) long, with a serrated margin and a 2.5-centimetre (1 inch) petiole.
The flowers are white to pale pink, 3–5 centimetres (1–2 inches) in diameter with five petals, produced singly or in pairs and appearing before the leaves in early spring. Almond grows best in Mediterranean climates with warm, dry summers and mild, wet winters. The optimal temperature for growth is between 15 and 30 °C (59 and 86 °F), and the tree buds have a chilling requirement of 200 to 700 hours below 7.2 °C (45 °F) to break dormancy.
Almonds begin bearing an economic crop in the third year after planting. Trees reach full bearing five to six years after planting. The fruit matures in the autumn, 7–8 months after flowering.
The almond fruit is 3.5–6 centimetres (1.4–2.4 inches) long. It is not a nut but a drupe. The outer covering, consisting of an outer exocarp, or skin, and mesocarp, or flesh, fleshy in other members of Prunus such as the plum and cherry, is instead a thick, leathery, gray-green coat (with a downy exterior), called the hull. Inside the hull is a woody endocarp which forms a reticulated, hard shell (like the outside of a peach pit) called the pyrena. Inside the shell is the edible seed, commonly called a nut. Generally, one seed is present, but occasionally two occur. After the fruit matures, the hull splits and separates from the shell, and an abscission layer forms between the stem and the fruit so that the fruit can fall from the tree.
Taxonomy
Sweet and bitter almonds
The seeds of Prunus dulcis var. dulcis are predominantly sweet but some individual trees produce seeds that are somewhat more bitter. The genetic basis for bitterness involves a single gene, the bitter flavor furthermore being recessive, both aspects making this trait easier to domesticate. The fruits from Prunus dulcis var. amara are always bitter, as are the kernels from other species of genus Prunus, such as apricot, peach and cherry (although to a lesser extent).
The bitter almond is slightly broader and shorter than the sweet almond and contains about 50% of the fixed oil that occurs in sweet almonds. It also contains the enzyme emulsin which, in the presence of water, acts on the two soluble glucosides amygdalin and prunasin yielding glucose, cyanide and the essential oil of bitter almonds, which is nearly pure benzaldehyde, the chemical causing the bitter flavor. Bitter almonds may yield 4–9 milligrams of hydrogen cyanide per almond and contain 42 times higher amounts of cyanide than the trace levels found in sweet almonds. The origin of cyanide content in bitter almonds is via the enzymatic hydrolysis of amygdalin. P450 monooxygenases are involved in the amygdalin biosynthetic pathway. A point mutation in a bHLH transcription factor prevents transcription of the two cytochrome P450 genes, resulting in the sweet kernel trait.
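For illustration, the overall emulsin-catalysed hydrolysis can be written as a balanced equation (supplied here as a clarifying sketch, assuming the standard molecular formula C20H27NO11 for amygdalin; the equation itself is not taken from the source):

$$\mathrm{C_{20}H_{27}NO_{11} + 2\,H_2O \xrightarrow{\text{emulsin}} 2\,C_6H_{12}O_6 + C_6H_5CHO + HCN}$$

Each molecule of amygdalin thus yields two molecules of glucose, one of benzaldehyde and one of hydrogen cyanide, matching the products named above.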
Etymology
The word almond comes from Old French almande or alemande and Late Latin amandula, derived from the Ancient Greek ἀμυγδάλη, amygdálē (cf. amygdala, an almond-shaped portion of the brain). Late Old English had amygdales, "almonds".
The adjective amygdaloid (literally 'like an almond') is used to describe objects which are roughly almond-shaped, particularly a shape which is part way between a triangle and an ellipse. For example, the amygdala of the brain uses a direct borrowing of the Greek term amygdálē.
Distribution and habitat
Almond is native to Iran and its surrounding regions, including the Levant area. It was spread by humans in ancient times along the shores of the Mediterranean into northern Africa and southern Europe, and more recently transported to other parts of the world, notably California, United States. The wild form of domesticated almond grows in parts of the Levant.
Selection of the sweet type from the many bitter types in the wild marked the beginning of almond domestication. It is unclear as to which wild ancestor of the almond created the domesticated species. The species Prunus fenzliana may be the most likely wild ancestor of the almond, in part because it is native to Armenia and western Azerbaijan, where it was apparently domesticated. Wild almond species were grown by early farmers, "at first unintentionally in the garbage heaps, and later intentionally in their orchards".
Cultivation
Almonds were one of the earliest domesticated fruit trees, due to "the ability of the grower to raise attractive almonds from seed. Thus, in spite of the fact that this plant does not lend itself to propagation from suckers or from cuttings, it could have been domesticated even before the introduction of grafting". Domesticated almonds appear in the Early Bronze Age (3000–2000 BC), such as at the archaeological sites of Numeira (Jordan), or possibly earlier. Another well-known archaeological example of the almond is the fruit found in Tutankhamun's tomb in Egypt (c. 1325 BC), probably imported from the Levant. An article on almond tree cultivation in Spain appears in Ibn al-'Awwam's 12th-century agricultural work, Book on Agriculture.
Of the European countries that the Royal Botanic Garden Edinburgh reported as cultivating almonds, Germany is the northernmost, though the domesticated form can be found as far north as Iceland.
Varieties
Almond trees are small to medium sized but commercial cultivars can be grafted onto a different root-stock to produce smaller trees. Varieties include:
Nonpareil – originates in the 1800s. A large tree that produces large, smooth, thin-shelled almonds with 60–65% edible kernel per nut. Requires pollination from other almond varieties for good nut production.
Tuono – originates in Italy. Has thicker, hairier shells with only 32% of edible kernel per nut. The thicker shell gives some protection from pests such as the navel orangeworm. Does not require pollination by other almond varieties.
Mariana – used as a rootstock to produce smaller trees.
Breeding
Breeding programmes have identified the high shell-seal trait, in which the shell closes tightly around the kernel, helping to protect it from insect pests and fungal contamination.
Pollination
The most widely planted varieties of almond are self-incompatible; hence these trees require pollen from a tree with different genetic characters to produce seeds. Almond orchards therefore must grow mixtures of almond varieties. In addition, the pollen is transferred from flower to flower by insects; therefore commercial growers must ensure there are enough insects to perform this task. The large scale of almond production in the U.S. creates a significant problem of providing enough pollinating insects. Additional pollinating insects are therefore brought to the trees. The pollination of California's almonds is the largest annual managed pollination event in the world, with 1.4 million hives (nearly half of all beehives in the US) being brought to the almond orchards each February.
Much of the supply of bees is managed by pollination brokers, who contract with migratory beekeepers from at least 49 states for the event. This business was heavily affected by colony collapse disorder at the turn of the 21st century, causing a nationwide shortage of honey bees and increasing the price of insect pollination. To partially protect almond growers from these costs, researchers at the Agricultural Research Service, part of the United States Department of Agriculture (USDA), developed self-pollinating almond trees that combine this character with quality characters such as flavor and yield. Self-pollinating almond varieties exist, but they lack some commercial characters. However, through natural hybridisation between different almond varieties, a new variety that was self-pollinating with a high yield of commercial-quality nuts was produced.
Diseases
Almond trees can be attacked by an array of damaging microbes, fungal pathogens, plant viruses, and bacteria.
Pests
Pavement ants (Tetramorium caespitum), southern fire ants (Solenopsis xyloni), and thief ants (Solenopsis molesta) are seed predators. Bryobia rubrioculus mites are best known for their damage to this crop.
Sustainability
Almond production in California is concentrated mainly in the Central Valley, where the mild climate, rich soil, abundant sunshine and water supply make for ideal growing conditions. Due to the persistent droughts in California in the early 21st century, it became more difficult to raise almonds in a sustainable manner. The issue is complex because of the high amount of water needed to produce almonds: a single almond requires roughly 4.2 litres (1.1 US gallons) of water to grow properly. Regulations related to water supplies are changing, so some growers have destroyed their current almond orchards to replace them with either younger trees or a different crop, such as pistachio, that needs less water.
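As a back-of-the-envelope illustration of the scale involved (not from the source; the average kernel mass below is an assumed, purely illustrative value), the per-almond figure can be scaled to a per-kilogram estimate in a few lines of Python:

# Rough water footprint per kilogram of almond kernels.
# water_per_almond_l follows the figure discussed above; kernel_mass_g
# (about 1.2 g per kernel) is an assumption for illustration only.
water_per_almond_l = 4.2
kernel_mass_g = 1.2

almonds_per_kg = 1000 / kernel_mass_g                 # ~833 kernels per kg
water_per_kg_l = almonds_per_kg * water_per_almond_l  # ~3,500 litres
print(f"~{water_per_kg_l:,.0f} litres of water per kg of kernels")

On these assumptions, a kilogram of kernels embodies several thousand litres of irrigation water, which is why water regulation weighs so heavily on growers' planting decisions.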
Sustainability strategies implemented by the Almond Board of California and almond farmers include:
tree and soil health, and other farming practices
minimizing dust production during the harvest
bee health
irrigation guidelines for farmers
food safety
use of waste biomass as coproducts with a goal to achieve zero waste
use of solar energy during processing
job development
support of scientific research to investigate potential health benefits of consuming almonds
international education about sustainability practices
Production
In 2020, world production of almonds was 4.1 million tonnes, led by the United States providing 57% of the world total (table). Other leading producers were Spain, Australia, and Iran.
United States
In the United States, production is concentrated in California, where 1,000,000 acres (400,000 hectares) and six different almond varieties were under cultivation in 2017, with a yield of 2.25 billion pounds (1.02 million tonnes) of shelled almonds. California production is marked by a period of intense pollination during late winter by rented commercial bees transported by truck across the U.S. to almond groves, requiring more than half of the total U.S. commercial honeybee population. The value of total U.S. exports of shelled almonds in 2016 was $3.2 billion.
All commercially grown almonds sold as food in the U.S. are sweet cultivars. The U.S. Food and Drug Administration reported in 2010 that some fractions of imported sweet almonds were contaminated with bitter almonds, which contain cyanide.
Spain
Spain has diverse commercial cultivars of almonds grown in Catalonia, Valencia, Murcia, Andalusia, and Aragón regions, and the Balearic Islands. Production in 2016 declined 2% nationally compared to 2015 production data.
The 'Marcona' almond cultivar is recognizably different from other almonds and is marketed by name. The kernel is short, round, relatively sweet, and delicate in texture. Its origin is unknown, and it has been grown in Spain for a long time; the tree is very productive, and the shell of the nut is very hard.
Australia
Australia is the largest almond production region in the Southern Hemisphere. Most of the almond orchards are located along the Murray River corridor in New South Wales, Victoria, and South Australia.
Toxicity
Bitter almonds contain 42 times higher amounts of cyanide than the trace levels found in sweet almonds. Extract of bitter almond was once used medicinally, but even in small doses the effects are severe or lethal, especially in children; the cyanide must be removed before consumption. The acute oral lethal dose of cyanide for adult humans is reported to be 0.5–3.5 mg per kilogram of body weight (approximately 50 bitter almonds), so that for children, consuming 5–10 bitter almonds may be fatal. Symptoms of eating such almonds include vertigo and other typical cyanide poisoning effects.
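These figures can be sanity-checked with a short calculation (a minimal sketch; the 70 kg adult body mass is an assumption, and the 4–9 mg of hydrogen cyanide per bitter almond is taken from the Taxonomy section above):

# Lethal cyanide range for an assumed 70 kg adult, and the number of
# bitter almonds that range corresponds to at 4-9 mg HCN per almond.
dose_mg_per_kg = (0.5, 3.5)   # reported lethal dose range, mg/kg
body_kg = 70                  # assumed adult body mass
hcn_mg_per_almond = (4, 9)    # from the Taxonomy section

lethal_mg = (dose_mg_per_kg[0] * body_kg, dose_mg_per_kg[1] * body_kg)
almonds = (lethal_mg[0] / hcn_mg_per_almond[1], lethal_mg[1] / hcn_mg_per_almond[0])
print(lethal_mg)  # (35.0, 245.0) mg
print(almonds)    # roughly 4 to 61 almonds

The commonly quoted figure of about 50 bitter almonds for an adult falls within this computed range.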
Almonds may cause allergy or intolerance. Cross-reactivity is common with peach allergens (lipid transfer proteins) and tree nut allergens. Symptoms range from local signs and symptoms (e.g., oral allergy syndrome, contact urticaria) to systemic signs and symptoms including anaphylaxis (e.g., urticaria, angioedema, gastrointestinal and respiratory symptoms).
Almonds are susceptible to aflatoxin-producing molds. Aflatoxins are potent carcinogenic chemicals produced by molds such as Aspergillus flavus and Aspergillus parasiticus. The mold contamination may occur from soil, previously infested almonds, and almond pests such as navel-orange worm. High levels of mold growth typically appear as gray to black filament-like growth. It is unsafe to eat mold-infected tree nuts.
Some countries have strict limits on allowable levels of aflatoxin contamination of almonds and require adequate testing before the nuts can be marketed to their citizens. The European Union, for example, introduced a requirement since 2007 that all almond shipments to the EU be tested for aflatoxin. If aflatoxin does not meet the strict safety regulations, the entire consignment may be reprocessed to eliminate the aflatoxin or it must be destroyed.
Breeding programs have found that the high shell-seal trait provides resistance against these Aspergillus species, and so against the development of their toxins.
Mandatory pasteurization in California
After tracing cases of salmonellosis to almonds, the USDA approved a proposal by the Almond Board of California to pasteurize almonds sold to the public. After publishing the rule in March 2007, the almond pasteurization program became mandatory for California companies effective 1 September 2007. Raw, untreated California almonds have not been available in the U.S. since then.
California almonds labeled "raw" must be steam-pasteurized or chemically treated with propylene oxide (PPO). This does not apply to imported almonds or almonds sold from the grower directly to the consumer in small quantities. The treatment also is not required for raw almonds sold for export outside of North America.
The Almond Board of California states: "PPO residue dissipates after treatment". The U.S. Environmental Protection Agency has reported: "Propylene oxide has been detected in fumigated food products; consumption of contaminated food is another possible route of exposure". PPO is classified as Group 2B ("possibly carcinogenic to humans").
The USDA-approved marketing order was challenged in court by organic farmers organized by the Cornucopia Institute, a Wisconsin-based farm policy research group which filed a lawsuit in September 2008. According to the institute, this almond marketing order has imposed significant financial burdens on small-scale and organic growers and damaged domestic almond markets. A federal judge dismissed the lawsuit in early 2009 on procedural grounds. In August 2010, a federal appeals court ruled that the farmers have a right to appeal the USDA regulation. In March 2013, the court vacated the suit on the basis that the objections should have been raised in 2007 when the regulation was first proposed.
Uses
Nutrition
Almonds are 4% water, 22% carbohydrates, 21% protein, and 50% fat (table). In a reference amount of 100 grams, almonds supply 2,420 kilojoules (579 kilocalories) of food energy. The almond is a nutritionally dense food (table), providing a rich source (20% or more of the Daily Value, DV) of the B vitamins riboflavin and niacin, vitamin E, and the essential minerals calcium, copper, iron, magnesium, manganese, phosphorus, and zinc. Almonds are a moderate source (10–19% DV) of the B vitamins thiamine, vitamin B6, and folate, choline, and the essential mineral potassium. They also contain substantial dietary fiber, the monounsaturated fat, oleic acid, and the polyunsaturated fat, linoleic acid. Typical of nuts and seeds, almonds are a source of phytosterols such as beta-sitosterol, stigmasterol, campesterol, sitostanol, and campestanol.
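The "rich source" and "moderate source" labels above follow fixed Daily Value thresholds, which can be expressed as a small classifier (a sketch; the nutrient percentages in the example loop are illustrative placeholders, not values from the source's nutrition table):

# Classify a nutrient by its percent of Daily Value (DV), using the
# thresholds stated above: >= 20% DV is "rich", 10-19% DV is "moderate".
def source_label(pct_dv: float) -> str:
    if pct_dv >= 20:
        return "rich source"
    if pct_dv >= 10:
        return "moderate source"
    return "not a significant source"

# Illustrative inputs only:
for nutrient, pct in {"vitamin E": 170, "thiamine": 17, "vitamin C": 0}.items():
    print(f"{nutrient}: {source_label(pct)}")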
Health
Almonds are included as a good source of protein among recommended healthy foods by the U.S. Department of Agriculture (USDA). A 2016 review of clinical research indicated that regular consumption of almonds may reduce the risk of heart disease by lowering blood levels of LDL cholesterol.
Culinary
While the almond is often eaten on its own, raw or toasted, it is also a component of various dishes. Almonds are available in many forms, such as whole, slivered, and ground into flour. Almond pieces around 2–3 millimetres (0.1 inch) in size, called "nibs", are used for special purposes such as decoration.
Almonds are a common addition to breakfast muesli or oatmeal.
Desserts
A wide range of classic sweets feature almonds as a central ingredient. Marzipan was developed in the Middle Ages. Since the 19th century almonds have been used to make bread, almond butter, cakes and puddings, candied confections, almond cream-filled pastries, nougat, cookies (macaroons, biscotti and qurabiya), and cakes (financiers, Esterházy torte), and other sweets and desserts.
The young, developing fruit of the almond tree can be eaten whole (green almonds) when they are still green and fleshy on the outside and the inner shell has not yet hardened. The fruit is somewhat sour, but is a popular snack in parts of the Middle East, eaten dipped in salt to balance the sour taste. Also in the Middle East they are often eaten with dates. They are available only from mid-April to mid-June in the Northern Hemisphere; pickling or brining extends the fruit's shelf life.
Marzipan
Marzipan, a smooth, sweetened almond paste, is used in a number of elegant cakes and desserts. Princess cake is covered by marzipan (similar to fondant), as is Battenberg cake. In Sicily, sponge cake is covered with marzipan to make cassatella di sant'Agata and cassata siciliana, and marzipan is dyed and crafted into realistic fruit shapes to make frutta martorana. The Andalusian Christmas pastry pan de Cádiz is filled with marzipan and candied fruit.
World cuisines
In French cuisine, alternating layers of almond and hazelnut meringue are used to make the dessert dacquoise. Pithivier is one of many almond cream-filled pastries.
In Germany, Easter bread called Deutsches Osterbrot is baked with raisins and almonds.
In Greece, almond flour is used to make amygdalopita, a glyka tapsiou dessert cake baked in a tray. Almonds are used for kourabiedes, a Greek version of the traditional quarabiya almond biscuits. A soft drink known as soumada is made from almonds in various regions.
In Saudi Arabia, almonds are a typical embellishment for the rice dish kabsa.
In Iran, green almonds are dipped in sea salt and eaten as snacks on street markets; they are called chaqale bâdam. Candied almonds called noghl are served alongside tea and coffee. Also, sweet almonds are used to prepare special food for babies, named harire badam. Almonds are added to some foods, cookies, and desserts, or are used to decorate foods. People in Iran consume roasted nuts for special events, for example, during New Year (Nowruz) parties.
In Italy, colomba di Pasqua is a traditional Easter cake made with almonds. Bitter almonds are the base for amaretti cookies, a common dessert. Almonds are also a common choice as the nuts to include in torrone.
In Morocco, almonds in the form of sweet almond paste are the main ingredient in pastry fillings, and several other desserts. Fried blanched whole almonds are also used to decorate sweet tajines such as lamb with prunes. Southwestern Berber regions of Essaouira and Souss are also known for amlou, a spread made of almond paste, argan oil, and honey. Almond paste is also mixed with toasted flour and among others, honey, olive oil or butter, anise, fennel, sesame seeds, and cinnamon to make sellou (also called zamita in Meknes or slilou in Marrakech), a sweet snack known for its long shelf life and high nutritive value.
In Indian cuisine, almonds are the base ingredients of pasanda-style and Mughlai curries. Badam halva is a sweet made from almonds with added coloring. Almond flakes are added to many sweets (such as sohan barfi), and are usually visible sticking to the outer surface. Almonds form the base of various drinks which are supposed to have cooling properties. Almond sherbet or sherbet-e-badaam, is a popular summer drink. Almonds are also sold as a snack with added salt.
In Israel almonds are used as a topping for tahini cookies or eaten as a snack.
In Spain Marcona almonds are usually toasted in oil and lightly salted. They are used by Spanish confectioners to prepare a sweet called turrón.
In Arabian cuisine, almonds are commonly used as garnishing for Mansaf.
Certain natural food stores sell "bitter almonds" or "apricot kernels" labeled as such, requiring significant caution by consumers for how to prepare and eat these products.
Milk
Almonds can be processed into a milk substitute called almond milk; the nut's soft texture, mild flavor, and light coloring (when skinned) make for an efficient analog to dairy, and a soy-free choice for lactose intolerant people and vegans. Raw, blanched, and lightly toasted almonds work well for different production techniques, some of which are similar to that of soy milk and some of which use no heat, resulting in raw milk.
Almond milk, almond butter, and almond oil are versatile products used in both sweet and savoury dishes.
In Moroccan cuisine, sharbat billooz, a common beverage, is made by blending blanched almonds with milk, sugar and other flavorings.
Flour and skins
Almond flour or ground almond meal combined with sugar or honey as marzipan is often used as a gluten-free alternative to wheat flour in cooking and baking.
Almonds contain polyphenols in their skins consisting of flavonols, flavan-3-ols, hydroxybenzoic acids and flavanones analogous to those of certain fruits and vegetables. These phenolic compounds and almond skin prebiotic dietary fiber have commercial interest as food additives or dietary supplements.
Syrup
Historically, almond syrup was an emulsion of sweet and bitter almonds, usually made with barley syrup (orgeat syrup) or in a syrup of orange flower water and sugar, often flavored with a synthetic aroma of almonds. Orgeat syrup is an important ingredient in the Mai Tai and many other Tiki drinks.
Due to the cyanide found in bitter almonds, modern syrups generally are produced only from sweet almonds. Such syrup products do not contain significant levels of hydrocyanic acid, so are generally considered safe for human consumption.
Oils
Almonds are a rich source of oil, with 50% of kernel dry mass as fat (whole almond nutrition table). In relation to total dry mass of the kernel, almond oil contains 32% monounsaturated oleic acid (an omega-9 fatty acid), 13% linoleic acid (a polyunsaturated omega-6 essential fatty acid), and 10% saturated fatty acid (mainly as palmitic acid). Linolenic acid, a polyunsaturated omega-3 fat, is not present (table). Almond oil is a rich source of vitamin E, providing 261% of the Daily Value per 100 millilitres.
When almond oil is analyzed separately and expressed per 100 grams as a reference mass, the oil provides 3,700 kilojoules (884 kilocalories) of food energy, 8 grams of saturated fat (81% of which is palmitic acid), 70 grams of oleic acid, and 17 grams of linoleic acid (oil table).
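As a simple worked conversion (illustrative only, combining the per-100-g oil figures above with the roughly 50% fat content stated in the nutrition section), the fatty-acid amounts can be re-expressed per 100 g of whole almonds:

# Convert per-100-g-of-oil fatty-acid amounts to per-100-g of whole
# almonds, assuming almonds are about 50% fat by mass as stated above.
fat_g_per_100g_almonds = 50
per_100g_oil = {"oleic": 70, "linoleic": 17, "saturated": 8}  # grams

per_100g_almonds = {k: v * fat_g_per_100g_almonds / 100
                    for k, v in per_100g_oil.items()}
print(per_100g_almonds)  # {'oleic': 35.0, 'linoleic': 8.5, 'saturated': 4.0}

This gives a rough sense of how much of each fatty acid a given amount of whole almonds supplies.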
Oleum amygdalae, the fixed oil, is prepared from either sweet or bitter almonds, and is a glyceryl oleate with a slight odour and a nutty taste. It is almost insoluble in alcohol but readily soluble in chloroform or ether. Almond oil is obtained from the dried kernel of almonds. Sweet almond oil is used as a carrier oil in aromatherapy and cosmetics while bitter almond oil, containing benzaldehyde, is used as a food flavouring and in perfume.
In culture
The almond is highly revered in some cultures. The tree originated in the Middle East. In the Bible, the almond is mentioned ten times, beginning with Genesis 43:11, where it is described as "among the best of fruits". In Numbers 17, Levi is chosen from the other tribes of Israel by Aaron's rod, which brought forth almond flowers. The almond blossom supplied a model for the menorah which stood in the Holy Temple, "Three cups, shaped like almond blossoms, were on one branch, with a knob and a flower; and three cups, shaped like almond blossoms, were on the other … on the candlestick itself were four cups, shaped like almond blossoms, with its knobs and flowers" (Exodus 25:33–34; 37:19–20). Many Sephardic Jews give five almonds to each guest before special occasions like weddings.
Similarly, Christian symbolism often uses almond branches as a symbol of the virgin birth of Jesus; paintings and icons often include almond-shaped haloes encircling the Christ Child and as a symbol of Mary. The word "luz", which appears in Genesis 30:37, sometimes translated as "hazel", may actually be derived from the Aramaic name for almond (Luz), and is translated as such in the New International Version and other versions of the Bible. The Arabic name for almond is لوز "lauz" or "lūz". In some parts of the Levant and North Africa, it is pronounced "loz", which is very close to its Aramaic origin.
The Entrance of the flower (La entrada de la flor) is an event celebrated on 1 February in Torrent, Spain, in which the clavarios and members of the Confrerie of the Mother of God deliver a branch of the first-blooming almond-tree to the Virgin.
See also
Fruit tree forms
Fruit tree propagation
Fruit tree pruning
List of almond dishes
List of edible seeds
References
External links
University of California Fruit and Nut Research and Information Center
Benefits of Soaked Almonds
|
https://en.wikipedia.org/wiki/Antisemitism
|
Antisemitism (also spelled anti-semitism or anti-Semitism) is hostility to, prejudice towards, or discrimination against Jews. This sentiment is a form of racism, and a person who harbours it is called an antisemite. Though antisemitism is overwhelmingly perpetrated by non-Jews, it may occasionally be perpetrated by Jews in a phenomenon known as auto-antisemitism (i.e., self-hating Jews). Primarily, antisemitic tendencies may be motivated by negative sentiment towards Jews as a people or by negative sentiment towards Jews with regard to Judaism. In the former case, usually presented as racial antisemitism, a person's hostility is driven by the belief that Jews constitute a distinct race with inherent traits or characteristics that are repulsive or inferior to the preferred traits or characteristics within that person's society. In the latter case, known as religious antisemitism, a person's hostility is driven by their religion's perception of Jews and Judaism, typically encompassing doctrines of supersession that expect or demand Jews to turn away from Judaism and submit to the religion presenting itself as Judaism's successor faith — this is a common theme within the other Abrahamic religions. The development of racial and religious antisemitism has historically been encouraged by anti-Judaism, though the concept itself is distinct from antisemitism.
There are various ways in which antisemitism is manifested, ranging in the level of severity of Jewish persecution. On the more subtle end, it consists of expressions of hatred or discrimination against individual Jews, and may or may not be accompanied by violence. On the most extreme end, it consists of pogroms or genocide, which may or may not be state-sponsored. Although the term "antisemitism" did not come into common usage until the 19th century, it is also applied to previous and later anti-Jewish incidents. Notable instances of antisemitic persecution include the Rhineland massacres in 1096; the Edict of Expulsion in 1290; the European persecution of Jews during the Black Death, between 1348 and 1351; the massacre of Spanish Jews in 1391, the crackdown of the Spanish Inquisition, and the expulsion of Jews from Spain in 1492; the Cossack massacres in Ukraine, between 1648 and 1657; various anti-Jewish pogroms in the Russian Empire, between 1821 and 1906; the Dreyfus affair, between 1894 and 1906; the Holocaust by the Axis powers during World War II; and various Soviet anti-Jewish policies. Historically, most of the world's violent antisemitic events have taken place in Christian Europe. However, since the early 20th century, there has been a sharp rise in antisemitic incidents across the Arab world, largely due to the surge in Arab antisemitic conspiracy theories, which have been cultivated to an extent under the aegis of European antisemitic conspiracy theories.
In the contemporary era, a manifestation known as "new antisemitism" was identified. This concept addresses the exploitation of the Arab–Israeli conflict by a large number of concealed antisemites, who may attempt to gain traction or legitimacy for their antisemitic hoaxes by portraying themselves as criticizing the Israeli government's actions; this is distinct from people who view Israeli government policies negatively, which is not inherently antisemitic. Likewise, as the State of Israel has a Jewish-majority population, it is common for antisemitic rhetoric to be manifested in expressions of sentiments against the form or existence of Israel as a state, or sentiments against the need or right to a state for Jewish people (Anti-Zionism), particularly around Jerusalem, though this is not always the case and such expressions may sometimes be part of wider anti–Middle Eastern sentiment without an exclusively antisemitic motive.
Due to the root word Semite, the term is prone to being invoked as a misnomer by those who interpret it as referring to racist hatred directed at all "Semitic people" (i.e., those who speak Semitic languages, such as Arabs, Assyrians, and Arameans). This usage is erroneous; the compound word Antisemitismus was first used in print in Germany in 1879 as a "scientific-sounding term" for Judenhass ("Jew-hatred"), and it has since been used to refer to anti-Jewish sentiment alone.
Origin and usage
Etymology
The origin of "antisemitic" terminologies is found in the responses of Moritz Steinschneider to the views of Ernest Renan. As Alex Bein writes: "The compound anti-Semitism appears to have been used first by Steinschneider, who challenged Renan on account of his 'anti-Semitic prejudices' [i.e., his derogation of the "Semites" as a race]." Avner Falk similarly writes: "The German word was first used in 1860 by the Austrian Jewish scholar Moritz Steinschneider (1816–1907) in the phrase antisemitische Vorurteile (antisemitic prejudices). Steinschneider used this phrase to characterise the French philosopher Ernest Renan's false ideas about how 'Semitic races' were inferior to 'Aryan races'."
Pseudoscientific theories concerning race, civilization, and "progress" had become quite widespread in Europe in the second half of the 19th century, especially as the Prussian nationalist historian Heinrich von Treitschke did much to promote this form of racism. He coined the phrase "the Jews are our misfortune", which would later be widely used by the Nazis. According to Avner Falk, Treitschke used the term "Semitic" almost synonymously with "Jewish", in contrast to Renan's use of it to refer to a whole range of peoples, based generally on linguistic criteria.
According to Jonathan M. Hess, the term was originally used by its authors to "stress the radical difference between their own 'antisemitism' and earlier forms of antagonism toward Jews and Judaism."
In 1879, German journalist Wilhelm Marr published a pamphlet, Der Sieg des Judenthums über das Germanenthum. Vom nicht confessionellen Standpunkt aus betrachtet (The Victory of the Jewish Spirit over the Germanic Spirit. Observed from a non-religious perspective), in which he used the word Semitismus interchangeably with the word Judentum to denote both "Jewry" (the Jews as a collective) and "Jewishness" (the quality of being Jewish, or the Jewish spirit).
This use of Semitismus was followed by his coining of Antisemitismus, which was used to indicate opposition to the Jews as a people and opposition to the Jewish spirit, which Marr interpreted as infiltrating German culture. His next pamphlet, Der Weg zum Siege des Germanenthums über das Judenthum (The Way to Victory of the Germanic Spirit over the Jewish Spirit, 1880), developed Marr's ideas further and may contain the first published use of the German word Antisemitismus, "antisemitism".
The pamphlet became very popular, and in the same year Marr founded the Antisemiten-Liga (League of Antisemites), apparently named to follow the "Anti-Kanzler-Liga" (Anti-Chancellor League). The league was the first German organization committed specifically to combating the alleged threat to Germany and German culture posed by the Jews and their influence and advocating their forced removal from the country.
So far as can be ascertained, the word was first widely printed in 1881, when Marr published Zwanglose Antisemitische Hefte, and Wilhelm Scherer used the term Antisemiten in the January issue of Neue Freie Presse.
The Jewish Encyclopedia reports, "In February 1881, a correspondent of the Allgemeine Zeitung des Judentums speaks of 'Anti-Semitism' as a designation which recently came into use ("Allg. Zeit. d. Jud." 1881, p. 138). On 19 July 1882, the editor says, 'This quite recent Anti-Semitism is hardly three years old.'"
The word "antisemitism" was borrowed into English from German in 1881. Oxford English Dictionary editor James Murray wrote that it was not included in the first edition because "Anti-Semite and its family were then probably very new in English use, and not thought likely to be more than passing nonce-words... Would that anti-Semitism had had no more than a fleeting interest!" The related term "philosemitism" was used by 1881.
Usage
From the outset the term "anti-Semitism" bore special racial connotations and meant specifically prejudice against Jews. The term is confusing, for in modern usage 'Semitic' designates a language group, not a race. In this sense, the term is a misnomer, since there are many speakers of Semitic languages (e.g., Arabs, Ethiopians, and Arameans) who are not the objects of antisemitic prejudices, while there are many Jews who do not speak Hebrew, a Semitic language. Though 'antisemitism' could be construed as prejudice against people who speak other Semitic languages, this is not how the term is commonly used.
The term may be spelled with or without a hyphen (antisemitism or anti-Semitism). Many scholars and institutions favor the unhyphenated form. Shmuel Almog argued, "If you use the hyphenated form, you consider the words 'Semitism', 'Semite', 'Semitic' as meaningful ... [I]n antisemitic parlance, 'Semites' really stands for Jews, just that." Emil Fackenheim supported the unhyphenated spelling, in order to "[dispel] the notion that there is an entity 'Semitism' which 'anti-Semitism' opposes."
Others endorsing an unhyphenated term for the same reason include the International Holocaust Remembrance Alliance; historian Deborah Lipstadt; Padraic O'Hare, professor of Religious and Theological Studies and Director of the Center for the Study of Jewish-Christian-Muslim Relations at Merrimack College; and historians Yehuda Bauer and James Carroll. According to Carroll, who first cites O'Hare and Bauer on "the existence of something called 'Semitism'", "the hyphenated word thus reflects the bipolarity that is at the heart of the problem of antisemitism".
The Associated Press and its accompanying AP Stylebook adopted the unhyphenated spelling in 2021. Style guides for other news organizations such as the New York Times and Wall Street Journal later adopted this spelling as well. It has also been adopted by many Holocaust museums, such as the United States Holocaust Memorial Museum and Yad Vashem.
Definition
Though the general definition of antisemitism is hostility or prejudice against Jews (a definition that, according to Olaf Blaschke, has become an "umbrella term for negative stereotypes about Jews"), a number of authorities have developed more formal definitions.
Holocaust scholar and City University of New York professor Helen Fein defines it as "a persisting latent structure of hostile beliefs towards Jews as a collective manifested in individuals as attitudes, and in culture as myth, ideology, folklore and imagery, and in actions—social or legal discrimination, political mobilization against the Jews, and collective or state violence—which results in and/or is designed to distance, displace, or destroy Jews as Jews."
Elaborating on Fein's definition, Dietz Bering of the University of Cologne writes that, to antisemites, "Jews are not only partially but totally bad by nature, that is, their bad traits are incorrigible. Because of this bad nature: (1) Jews have to be seen not as individuals but as a collective. (2) Jews remain essentially alien in the surrounding societies. (3) Jews bring disaster on their 'host societies' or on the whole world, they are doing it secretly, therefore the anti-Semites feel obliged to unmask the conspiratorial, bad Jewish character."
For Sonja Weinberg, as distinct from economic and religious anti-Judaism, antisemitism in its modern form shows conceptual innovation, a resort to 'science' to defend itself, new functional forms, and organisational differences. It was anti-liberal, racialist and nationalist. It promoted the myth that Jews conspired to 'judaise' the world; it served to consolidate social identity; it channeled dissatisfactions among victims of the capitalist system; and it was used as a conservative cultural code to fight emancipation and liberalism.
Bernard Lewis defined antisemitism as a special case of prejudice, hatred, or persecution directed against people who are in some way different from the rest. According to Lewis, antisemitism is marked by two distinct features: Jews are judged according to a standard different from that applied to others, and they are accused of "cosmic evil." Thus, "it is perfectly possible to hate and even to persecute Jews without necessarily being anti-Semitic" unless this hatred or persecution displays one of the two features specific to antisemitism.
There have been a number of efforts by international and governmental bodies to define antisemitism formally. The United States Department of State states that "while there is no universally accepted definition, there is a generally clear understanding of what the term encompasses." For the purposes of its 2005 Report on Global Anti-Semitism, the term was considered to mean "hatred toward Jews—individually and as a group—that can be attributed to the Jewish religion and/or ethnicity."
In 2005, the European Monitoring Centre on Racism and Xenophobia (now the Fundamental Rights Agency), then an agency of the European Union, developed a more detailed working definition, which states: "Antisemitism is a certain perception of Jews, which may be expressed as hatred toward Jews. Rhetorical and physical manifestations of antisemitism are directed toward Jewish or non-Jewish individuals and/or their property, toward Jewish community institutions and religious facilities." It also adds that "such manifestations could also target the state of Israel, conceived as a Jewish collectivity," but that "criticism of Israel similar to that leveled against any other country cannot be regarded as antisemitic."
It provides contemporary examples of ways in which antisemitism may manifest itself, including promoting the harming of Jews in the name of an ideology or religion; promoting negative stereotypes of Jews; holding Jews collectively responsible for the actions of an individual Jewish person or group; denying the Holocaust or accusing Jews or Israel of exaggerating it; and accusing Jews of dual loyalty or a greater allegiance to Israel than their own country. It also lists ways in which attacking Israel could be antisemitic, and states that denying the Jewish people their right to self-determination, e.g. by claiming that the existence of a state of Israel is a racist endeavor, can be a manifestation of antisemitism—as can applying double standards by requiring of Israel a behavior not expected or demanded of any other democratic nation, or holding Jews collectively responsible for the actions of the State of Israel.
The working definition has also been adopted by the European Parliament Working Group on Antisemitism; in 2010, a similar definition was adopted by the United States Department of State; and in 2014, that definition was adopted in the Operational Hate Crime Guidance of the UK College of Policing and by the Campaign Against Antisemitism. In 2016, the U.S. State Department definition was adopted by the International Holocaust Remembrance Alliance. The Working Definition of Antisemitism is among the most controversial documents related to opposition to antisemitism, and critics argue that it has been used to censor criticism of Israel.
Evolution of usage
In 1879, Wilhelm Marr founded the Antisemiten-Liga (Anti-Semitic League). Identification with antisemitism and as an antisemite was politically advantageous in Europe during the late 19th century. For example, Karl Lueger, the popular mayor of fin de siècle Vienna, skillfully exploited antisemitism as a way of channeling public discontent to his political advantage. In its 1910 obituary of Lueger, The New York Times notes that Lueger was "Chairman of the Christian Social Union of the Parliament and of the Anti-Semitic Union of the Diet of Lower Austria." In 1895, A. C. Cuza organized the Alliance Anti-semitique Universelle in Bucharest. In the period before World War II, when animosity towards Jews was far more commonplace, it was not uncommon for a person, an organization, or a political party to self-identify as an antisemite or antisemitic.
The early Zionist pioneer Leon Pinsker, a professional physician, preferred the clinical-sounding term Judeophobia to antisemitism, which he regarded as a misnomer. The word Judeophobia first appeared in his pamphlet "Auto-Emancipation", published anonymously in German in September 1882, where it was described as an irrational fear or hatred of Jews. According to Pinsker, this irrational fear was an inherited predisposition.
In the aftermath of the Kristallnacht pogrom in 1938, German propaganda minister Goebbels announced: "The German people is anti-Semitic. It has no desire to have its rights restricted or to be provoked in the future by parasites of the Jewish race."
After the Allied victory over Nazi Germany in 1945, and particularly after the full extent of the Nazi genocide against the Jews became known, the term antisemitism acquired pejorative connotations. This marked a full-circle shift in usage, from an era just decades earlier when "Jew" was used as a pejorative term. Yehuda Bauer wrote in 1984: "There are no anti-Semites in the world ... Nobody says, 'I am anti-Semitic.' You cannot, after Hitler. The word has gone out of fashion."
Eternalism–contextualism debate
The study of antisemitism has become politically controversial because of differing interpretations of the Holocaust and the Israeli–Palestinian conflict. There are two competing views of antisemitism: eternalism and contextualism. The eternalist view sees antisemitism as separate from other forms of racism and prejudice and as an exceptionalist, transhistorical force teleologically culminating in the Holocaust. Hannah Arendt criticized this approach, writing that it provoked "the uncomfortable question: 'Why the Jews of all people?' ... with the question begging reply: Eternal hostility." Zionist thinkers and antisemites draw different conclusions from what they perceive as the eternal hatred of Jews; according to antisemites, it proves the inferiority of Jews, while for Zionists it means that Jews need their own state as a refuge. Most Zionists do not believe that antisemitism can be combatted with education or other means.
The contextual approach treats antisemitism as a type of racism and focuses on the historical context in which hatred of Jews emerges. Some contextualists restrict the use of "antisemitism" to refer exclusively to the era of modern racism, treating anti-Judaism as a separate phenomenon. Historian David Engel has challenged the project to define antisemitism, arguing that it essentializes Jewish history as one of persecution and discrimination. Engel argues that the term "antisemitism" is not useful in historical analysis because it implies that there are links between anti-Jewish prejudices expressed in different contexts, without evidence of such a connection.
Manifestations
Antisemitism manifests itself in a variety of ways. René König mentions social antisemitism, economic antisemitism, religious antisemitism, and political antisemitism as examples. König points out that these different forms demonstrate that the "origins of anti-Semitic prejudices are rooted in different historical periods." König asserts that differences in the chronology of different antisemitic prejudices and the irregular distribution of such prejudices over different segments of the population create "serious difficulties in the definition of the different kinds of anti-Semitism."
These difficulties may contribute to the existence of different taxonomies that have been developed to categorize the forms of antisemitism. The forms identified are substantially the same; it is primarily the number of forms and their definitions that differ. Bernard Lazare identifies three forms of antisemitism: Christian antisemitism, economic antisemitism, and ethnologic antisemitism. William Brustein names four categories: religious, racial, economic, and political. The Roman Catholic historian Edward Flannery distinguished four varieties of antisemitism:
Political and economic antisemitism, giving as examples Cicero and Charles Lindbergh;
Theological or religious antisemitism, also called "traditional antisemitism" and sometimes known as anti-Judaism;
Nationalistic antisemitism, citing Voltaire and other Enlightenment thinkers, who attacked Jews for supposedly having certain characteristics, such as greed and arrogance, and for observing customs such as kashrut and Shabbat;
Racial antisemitism, with its extreme form resulting in the Holocaust by the Nazis.
Louis Harap separates "economic antisemitism" and merges "political" and "nationalistic" antisemitism into "ideological antisemitism". Harap also adds a category of "social antisemitism", listing his categories as:
Religious (Jew as Christ-killer),
Economic (Jew as banker, usurer, money-obsessed),
Social (Jew as social inferior, "pushy", vulgar, therefore excluded from personal contact),
Racist (Jews as an inferior "race"),
Ideological (Jews regarded as subversive or revolutionary),
Cultural (Jews regarded as undermining the moral and structural fiber of civilization).
Cultural antisemitism
Louis Harap defines cultural antisemitism as "that species of anti-Semitism that charges the Jews with corrupting a given culture and attempting to supplant or succeeding in supplanting the preferred culture with a uniform, crude, 'Jewish' culture." Similarly, Eric Kandel characterizes cultural antisemitism as being based on the idea of "Jewishness" as a "religious or cultural tradition that is acquired through learning, through distinctive traditions and education." According to Kandel, this form of antisemitism views Jews as possessing "unattractive psychological and social characteristics that are acquired through acculturation." Niewyk and Nicosia characterize cultural antisemitism as focusing on and condemning "the Jews' aloofness from the societies in which they live."
An important feature of cultural antisemitism is that it considers the negative attributes of Judaism to be redeemable by education or by religious conversion.
Religious antisemitism
Religious antisemitism, also known as anti-Judaism, is antipathy towards Jews because of their perceived religious beliefs. In theory, antisemitism and attacks against individual Jews would stop if Jews stopped practicing Judaism or changed their public faith, especially by conversion to the official or "right" religion. However, in some cases, discrimination continues after conversion, as in the case of Marranos (Christianized Jews in Spain and Portugal) in the late 15th century and 16th century, who were suspected of secretly practising Judaism or Jewish customs.
Although the origins of antisemitism are rooted in the Judeo-Christian conflict, other forms of antisemitism have developed in modern times. Frederick Schweitzer asserts that "most scholars ignore the Christian foundation on which the modern antisemitic edifice rests and invoke political antisemitism, cultural antisemitism, racism or racial antisemitism, economic antisemitism, and the like." William Nichols draws a distinction between religious antisemitism and modern antisemitism based on racial or ethnic grounds: "The dividing line was the possibility of effective conversion [...] a Jew ceased to be a Jew upon baptism." From the perspective of racial antisemitism, however, "the assimilated Jew was still a Jew, even after baptism.[...] From the Enlightenment onward, it is no longer possible to draw clear lines of distinction between religious and racial forms of hostility towards Jews[...] Once Jews have been emancipated and secular thinking makes its appearance, without leaving behind the old Christian hostility towards Jews, the new term antisemitism becomes almost unavoidable, even before explicitly racist doctrines appear."
Some Christians such as the Catholic priest Ernest Jouin, who published the first French translation of the Protocols, combined religious and racial antisemitism, as in his statement that "From the triple viewpoint of race, of nationality, and of religion, the Jew has become the enemy of humanity." The virulent antisemitism of Édouard Drumont, one of the most widely read Catholic writers in France during the Dreyfus Affair, likewise combined religious and racial antisemitism. Drumont founded the Antisemitic League of France.
Economic antisemitism
The underlying premise of economic antisemitism is that Jews perform harmful economic activities or that economic activities become harmful when they are performed by Jews.
Linking Jews and money underpins the most damaging and lasting antisemitic canards. Antisemites claim that Jews control world finances, a theory promoted in the fraudulent Protocols of the Elders of Zion and later repeated by Henry Ford and his Dearborn Independent. In the modern era, such myths continue to be spread in books such as The Secret Relationship Between Blacks and Jews published by the Nation of Islam, and on the internet.
Derek Penslar writes that there are two components to the financial canards:
a) Jews are savages that "are temperamentally incapable of performing honest labor"
b) Jews are "leaders of a financial cabal seeking world domination"
Abraham Foxman describes six facets of the financial canards:
All Jews are wealthy
Jews are stingy and greedy
Powerful Jews control the business world
Jewish religion emphasizes profit and materialism
It is okay for Jews to cheat non-Jews
Jews use their power to benefit "their own kind"
Gerald Krefetz summarizes the myth as "[Jews] control the banks, the money supply, the economy, and businesses—of the community, of the country, of the world". Krefetz gives, as illustrations, many slurs and proverbs (in several different languages) which suggest that Jews are stingy, or greedy, or miserly, or aggressive bargainers. During the nineteenth century, Jews were described as "scurrilous, stupid, and tight-fisted", but after the Jewish Emancipation and the rise of Jews to the middle or upper class in Europe, they were portrayed as "clever, devious, and manipulative financiers out to dominate [world finances]".
Léon Poliakov asserts that economic antisemitism is not a distinct form of antisemitism, but merely a manifestation of theological antisemitism (because, without the theological causes of economic antisemitism, there would be no economic antisemitism). In opposition to this view, Derek Penslar contends that in the modern era, economic antisemitism is "distinct and nearly constant" but theological antisemitism is "often subdued".
An academic study by Francesco D'Acunto, Marcel Prokopczuk, and Michael Weber showed that people who live in areas of Germany with the most brutal histories of antisemitic persecution are more likely to be distrustful of finance in general; they therefore tended to invest less money in the stock market and to make poor financial decisions. The study concluded "that the persecution of minorities reduces not only the long-term wealth of the persecuted but of the persecutors as well."
Racial antisemitism
Racial antisemitism is prejudice against Jews as a racial/ethnic group, rather than Judaism as a religion.
Racial antisemitism is the idea that the Jews are a distinct and inferior race compared to their host nations. In the late 19th century and early 20th century, it gained mainstream acceptance as part of the eugenics movement, which categorized non-Europeans as inferior. It more specifically claimed that Northern Europeans, or "Aryans", were superior. Racial antisemites saw the Jews as part of a Semitic race and emphasized their non-European origins and culture. They saw Jews as beyond redemption even if they converted to the majority religion.
Racial antisemitism replaced the hatred of Judaism with the hatred of Jews as a group. In the context of the Industrial Revolution, following the Jewish Emancipation, Jews rapidly urbanized and experienced a period of greater social mobility. With the decreasing role of religion in public life tempering religious antisemitism, a combination of growing nationalism, the rise of eugenics, and resentment at the socio-economic success of the Jews led to the newer, and more virulent, racist antisemitism.
According to William Nichols, religious antisemitism may be distinguished from modern antisemitism based on racial or ethnic grounds. As quoted above, the dividing line was the possibility of effective conversion: a Jew ceased to be a Jew upon baptism, whereas to the racial antisemite "the assimilated Jew was still a Jew, even after baptism." From the Enlightenment onward, Nichols argues, religious and racial forms of hostility towards Jews can no longer be clearly distinguished, making the new term antisemitism "almost unavoidable, even before explicitly racist doctrines appear."
In the early 19th century, a number of laws enabling the emancipation of the Jews were enacted in Western European countries. The old laws restricting them to ghettos, as well as the many laws that limited their property rights, rights of worship and occupation, were rescinded. Despite this, traditional discrimination and hostility to Jews on religious grounds persisted and was supplemented by racial antisemitism, encouraged by the work of racial theorists such as Joseph Arthur de Gobineau and particularly his Essay on the Inequality of the Human Races of 1853–1855. Nationalist agendas based on ethnicity, known as ethnonationalism, usually excluded the Jews from the national community as an alien race. Allied to this were theories of Social Darwinism, which stressed a putative conflict between higher and lower races of human beings. Such theories, usually posited by northern Europeans, advocated the superiority of white Aryans to Semitic Jews.
Political antisemitism
William Brustein defines political antisemitism as hostility toward Jews based on the belief that Jews seek national or world power. Yisrael Gutman characterizes political antisemitism as tending to "lay responsibility on the Jews for defeats and political economic crises" while seeking to "exploit opposition and resistance to Jewish influence as elements in political party platforms." Derek J. Penslar wrote, "Political antisemitism identified the Jews as responsible for all the anxiety-provoking social forces that characterized modernity."
According to Viktor Karády, political antisemitism became widespread after the legal emancipation of the Jews and sought to reverse some of the consequences of that emancipation.
Conspiracy theories
Holocaust denial and Jewish conspiracy theories are also considered forms of antisemitism. Zoological conspiracy theories have been propagated by Arab media and Arabic language websites, alleging a "Zionist plot" behind the use of animals to attack civilians or to conduct espionage.
New antisemitism
Starting in the 1990s, some scholars have advanced the concept of new antisemitism, coming simultaneously from the left, the right, and radical Islam, which tends to focus on opposition to the creation of a Jewish homeland in the State of Israel, and they argue that the language of anti-Zionism and criticism of Israel are used to attack Jews more broadly. In this view, the proponents of the new concept believe that criticisms of Israel and Zionism are often disproportionate in degree and unique in kind, and they attribute this to antisemitism.
Jewish scholar Gustavo Perednik posited in 2004 that anti-Zionism in itself represents a form of discrimination against Jews, in that it singles out Jewish national aspirations as an illegitimate and racist endeavor, and "proposes actions that would result in the death of millions of Jews". It is asserted that the new antisemitism deploys traditional antisemitic motifs, including older motifs such as the blood libel.
Critics of the concept view it as trivializing the meaning of antisemitism, and as exploiting antisemitism in order to silence debate and to deflect attention from legitimate criticism of the State of Israel, and, by associating anti-Zionism with antisemitism, misusing it to taint anyone opposed to Israeli actions and policies.
History
Many authors see the roots of modern antisemitism in both pagan antiquity and early Christianity. Jerome Chanes identifies six stages in the historical development of antisemitism:
Pre-Christian anti-Judaism in ancient Greece and Rome which was primarily ethnic in nature
Christian antisemitism in antiquity and the Middle Ages which was religious in nature and has extended into modern times
Traditional Muslim antisemitism which was—at least, in its classical form—nuanced in that Jews were a protected class
Political, social and economic antisemitism of Enlightenment and post-Enlightenment Europe which laid the groundwork for racial antisemitism
Racial antisemitism that arose in the 19th century and culminated in Nazism in the 20th century
Contemporary antisemitism which has been labeled by some as the New Antisemitism
Chanes suggests that these six stages could be merged into three categories: "ancient antisemitism, which was primarily ethnic in nature; Christian antisemitism, which was religious; and the racial antisemitism of the nineteenth and twentieth centuries."
Ancient world
The first clear examples of anti-Jewish sentiment can be traced to Alexandria in the 3rd century BCE, then home to the largest Jewish diaspora community in the world and the city where the Septuagint, a Greek translation of the Hebrew Bible, was produced. Manetho, an Egyptian priest and historian of that era, wrote scathingly of the Jews. His themes are repeated in the works of Chaeremon, Lysimachus, Poseidonius, Apollonius Molon, and in Apion and Tacitus. Agatharchides of Cnidus ridiculed the practices of the Jews and the "absurdity of their Law", making a mocking reference to how Ptolemy Lagus was able to invade Jerusalem in 320 BCE because its inhabitants were observing the Shabbat. One of the earliest anti-Jewish edicts, promulgated by Antiochus IV Epiphanes in about 170–167 BCE, sparked a revolt of the Maccabees in Judea.
In view of Manetho's anti-Jewish writings, antisemitism may have originated in Egypt and been spread by "the Greek retelling of Ancient Egyptian prejudices". The ancient Jewish philosopher Philo of Alexandria describes an attack on Jews in Alexandria in 38 CE in which thousands of Jews died. The violence in Alexandria may have been caused by the Jews being portrayed as misanthropes. Tcherikover argues that the reason for hatred of Jews in the Hellenistic period was their separateness in the Greek cities, the poleis. Bohak has argued, however, that early animosity against the Jews cannot be regarded as being anti-Judaic or antisemitic unless it arose from attitudes that were held against the Jews alone, and that many Greeks showed animosity toward any group they regarded as barbarians.
Statements exhibiting prejudice against Jews and their religion can be found in the works of many pagan Greek and Roman writers. Edward Flannery writes that it was the Jews' refusal to accept Greek religious and social standards that marked them out. Hecataeus of Abdera, a Greek historian of the early third century BCE, wrote that Moses "in remembrance of the exile of his people, instituted for them a misanthropic and inhospitable way of life." Manetho, an Egyptian historian, wrote that the Jews were expelled Egyptian lepers who had been taught by Moses "not to adore the gods." Edward Flannery describes antisemitism in ancient times as essentially "cultural, taking the shape of a national xenophobia played out in political settings."
There are examples of Hellenistic rulers desecrating the Temple and banning Jewish religious practices, such as circumcision, Shabbat observance, the study of Jewish religious books, etc. Examples may also be found in anti-Jewish riots in Alexandria in the 3rd century BCE.
The Jewish diaspora community on the Nile island of Elephantine, which was founded by mercenaries, experienced the destruction of its temple in 410 BCE.
Relationships between the Jewish people and the occupying Roman Empire were at times antagonistic and resulted in several rebellions. According to Suetonius, the emperor Tiberius expelled from Rome Jews who had gone to live there. The 18th-century English historian Edward Gibbon identified a more tolerant period in Roman-Jewish relations beginning in about 160 CE. However, when Christianity became the state religion of the Roman Empire, the state's attitude towards the Jews gradually worsened.
James Carroll asserted: "Jews accounted for 10% of the total population of the Roman Empire. By that ratio, if other factors such as pogroms and conversions had not intervened, there would be 200 million Jews in the world today, instead of something like 13 million."
Persecutions during the Middle Ages
In the late 6th century CE, the newly Catholicised Visigothic kingdom in Hispania issued a series of anti-Jewish edicts which forbade Jews from marrying Christians, practicing circumcision, and observing Jewish holy days. Continuing throughout the 7th century, both Visigothic kings and the Church were active in creating social aggression towards Jews with "civic and ecclesiastic punishments", ranging between forced conversion, slavery, exile and death.
From the 9th century, the medieval Islamic world classified Jews and Christians as dhimmis and allowed Jews to practice their religion more freely than they could do in medieval Christian Europe. Under Islamic rule, there was a Golden age of Jewish culture in Spain that lasted until at least the 11th century. It ended when several Muslim pogroms against Jews took place on the Iberian Peninsula, including those that occurred in Córdoba in 1011 and in Granada in 1066. Several decrees ordering the destruction of synagogues were also enacted in Egypt, Syria, Iraq and Yemen from the 11th century. In addition, Jews were forced to convert to Islam or face death in some parts of Yemen, Morocco and Baghdad several times between the 12th and 18th centuries.
The Almohads, who had taken control of the Almoravids' Maghribi and Andalusian territories by 1147, were far more fundamentalist in outlook compared to their predecessors, and they treated the dhimmis harshly. Faced with the choice of either death or conversion, many Jews and Christians emigrated. Some, such as the family of Maimonides, fled east to more tolerant Muslim lands, while some others went northward to settle in the growing Christian kingdoms.
In medieval Europe, Jews were persecuted with blood libels, expulsions, forced conversions and massacres. These persecutions were often justified on religious grounds and reached a first peak during the Crusades. In 1096, hundreds or thousands of Jews were killed during the First Crusade. This was the first major outbreak of anti-Jewish violence in Christian Europe outside Spain and was cited by Zionists in the 19th century as indicating the need for a state of Israel.
In 1147, there were several massacres of Jews during the Second Crusade. The Shepherds' Crusades of 1251 and 1320 both involved attacks, as did the Rintfleisch massacres in 1298. Expulsions followed, such as the 1290 banishment of Jews from England, the expulsion of 100,000 Jews from France in 1394, and the 1421 expulsion of thousands of Jews from Austria. Many of the expelled Jews fled to Poland.
In medieval and Renaissance Europe, a major contributor to the deepening of antisemitic sentiment and legal action among the Christian populations was the popular preaching of the zealous reform religious orders, the Franciscans (especially Bernardino of Feltre) and Dominicans (especially Vincent Ferrer), who combed Europe and promoted antisemitism through their often fiery, emotional appeals.
As the Black Death epidemics devastated Europe in the mid-14th century, causing the death of a large part of the population, Jews were used as scapegoats. Rumors spread that they caused the disease by deliberately poisoning wells. Hundreds of Jewish communities were destroyed in numerous persecutions. Although Pope Clement VI tried to protect them by issuing two papal bulls in 1348, the first on 6 July and an additional one several months later, 900 Jews were burned alive in Strasbourg, a city the plague had not yet reached.
Reformation
Martin Luther, an ecclesiastical reformer whose teachings inspired the Reformation, wrote antagonistically about Jews in his pamphlet On the Jews and their Lies, written in 1543. He portrays the Jews in extremely harsh terms, excoriates them and provides detailed recommendations for a pogrom against them, calling for their permanent oppression and expulsion. At one point he writes: "...we are at fault in not slaying them...", a passage that, according to historian Paul Johnson, "may be termed the first work of modern antisemitism, and a giant step forward on the road to the Holocaust."
17th century
During the mid-to-late 17th century the Polish–Lithuanian Commonwealth was devastated by several conflicts, in which the Commonwealth lost over a third of its population (over 3 million people), and Jewish losses were counted in the hundreds of thousands. The first of these conflicts was the Khmelnytsky Uprising, when Bohdan Khmelnytsky's supporters massacred tens of thousands of Jews in the eastern and southern areas he controlled (today's Ukraine). The precise number of dead may never be known, but the decrease of the Jewish population during that period is estimated at 100,000 to 200,000, which also includes emigration, deaths from diseases, and captivity in the Ottoman Empire, called jasyr.
European immigrants to the United States brought antisemitism to the country as early as the 17th century. Peter Stuyvesant, the Dutch governor of New Amsterdam, implemented plans to prevent Jews from settling in the city. During the Colonial Era, the American government limited the political and economic rights of Jews. It was not until the American Revolutionary War that Jews gained legal rights, including the right to vote. However, even at their peak, the restrictions on Jews in the United States were never as stringent as they had been in Europe.
In the Zaydi imamate of Yemen, Jews were also singled out for discrimination in the 17th century, which culminated in the general expulsion of all Jews from places in Yemen to the arid coastal plain of Tihamah, an event that became known as the Mawza Exile.
Enlightenment
In 1744, Archduchess of Austria Maria Theresa ordered Jews out of Bohemia but soon reversed her position, on the condition that Jews pay for their readmission every ten years. This extortion was known among the Jews as malke-geld ("queen's money" in Yiddish). In 1752, she introduced the law limiting each Jewish family to one son.
In 1782, Joseph II abolished most of these persecution practices in his Toleranzpatent, on the condition that Yiddish and Hebrew were eliminated from public records and that judicial autonomy was annulled. Moses Mendelssohn wrote that "Such a tolerance... is even more dangerous play in tolerance than open persecution."
Voltaire
According to Arnold Ages, Voltaire's "Lettres philosophiques, Dictionnaire philosophique, and Candide, to name but a few of his better known works, are saturated with comments on Jews and Judaism and the vast majority are negative". Paul H. Meyer adds: "There is no question but that Voltaire, particularly in his latter years, nursed a violent hatred of the Jews and it is equally certain that his animosity...did have a considerable impact on public opinion in France." Thirty of the 118 articles in Voltaire's Dictionnaire Philosophique concerned Jews and described them in consistently negative ways.
Louis de Bonald and the Catholic Counter-Revolution
The counter-revolutionary Catholic royalist Louis de Bonald stands out among the earliest figures to explicitly call for the reversal of Jewish emancipation in the wake of the French Revolution. Bonald's attacks on the Jews are likely to have influenced Napoleon's decision to limit the civil rights of Alsatian Jews. Bonald's article Sur les juifs (1806) was one of the most venomous screeds of its era and furnished a paradigm which combined anti-liberalism, a defense of a rural society, traditional Christian antisemitism, and the identification of Jews with bankers and finance capital, which would in turn influence many subsequent right-wing reactionaries such as Roger Gougenot des Mousseaux, Charles Maurras, and Édouard Drumont, nationalists such as Maurice Barrès and Paolo Orano, and antisemitic socialists such as Alphonse Toussenel. Bonald furthermore declared that the Jews were an "alien" people, a "state within a state", and should be forced to wear a distinctive mark to more easily identify and discriminate against them.
Under the French Second Empire, the popular counter-revolutionary Catholic journalist Louis Veuillot propagated Bonald's arguments against the Jewish "financial aristocracy" along with vicious attacks against the Talmud and the Jews as a "deicidal people" driven by hatred to "enslave" Christians. Between 1882 and 1886 alone, French priests published twenty antisemitic books blaming France's ills on the Jews and urging the government to consign them back to the ghettos, expel them, or hang them from the gallows. Gougenot des Mousseaux's Le Juif, le judaïsme et la judaïsation des peuples chrétiens (1869) has been called a "Bible of modern antisemitism" and was translated into German by Nazi ideologue Alfred Rosenberg.
Imperial Russia
Thousands of Jews were slaughtered by Cossack Haidamaks in the 1768 massacre of Uman in the Kingdom of Poland. In 1772, the empress of Russia Catherine II forced the Jews into the Pale of Settlement – which was located primarily in present-day Poland, Ukraine, and Belarus – and to stay in their shtetls and forbade them from returning to the towns that they occupied before the partition of Poland. From 1804, Jews were banned from their villages and began to stream into the towns. A decree by emperor Nicholas I of Russia in 1827 conscripted Jews under 18 years of age into the cantonist schools for a 25-year military service in order to promote baptism.
Policy towards Jews was liberalised somewhat under Czar Alexander II (reigned 1855–1881). However, his assassination in 1881 served as a pretext for further repression such as the May Laws of 1882. Konstantin Pobedonostsev, nicknamed the "black czar" and tutor to the czarevitch, later crowned Czar Nicholas II, declared that "One-third of the Jews must die, one-third must emigrate, and one-third be converted to Christianity".
Islamic antisemitism in the 19th century
Historian Martin Gilbert writes that it was in the 19th century that the position of Jews worsened in Muslim countries. Benny Morris writes that one symbol of Jewish degradation was the phenomenon of stone-throwing at Jews by Muslim children. Morris quotes a 19th-century traveler: "I have seen a little fellow of six years old, with a troop of fat toddlers of only three and four, teaching [them] to throw stones at a Jew, and one little urchin would, with the greatest coolness, waddle up to the man and literally spit upon his Jewish gaberdine. To all this the Jew is obliged to submit; it would be more than his life was worth to offer to strike a Mahommedan."
In the middle of the 19th century, J. J. Benjamin wrote about the life of Persian Jews, describing conditions and beliefs that went back to the 16th century: "…they are obliged to live in a separate part of town… Under the pretext of their being unclean, they are treated with the greatest severity and should they enter a street, inhabited by Mussulmans, they are pelted by the boys and mobs with stones and dirt…."
In Jerusalem at least, conditions for some Jews improved. Moses Montefiore, on his seventh visit in 1875, noted that fine new buildings had sprung up and that "surely we're approaching the time to witness God's hallowed promise unto Zion." Muslim and Christian Arabs participated in Purim and Passover; Arabs called the Sephardis 'Jews, sons of Arabs'; the Ulema and the Rabbis offered joint prayers for rain in time of drought.
At the time of the Dreyfus trial in France, "Muslim comments usually favoured the persecuted Jew against his Christian persecutors".
Secular or racial antisemitism
In 1850, the German composer Richard Wagner – who has been called "the inventor of modern antisemitism" – published Das Judenthum in der Musik (roughly "Jewishness in Music") under a pseudonym in the Neue Zeitschrift für Musik. The essay began as an attack on Jewish composers, particularly Wagner's contemporaries and rivals Felix Mendelssohn and Giacomo Meyerbeer, but expanded to accuse Jews of being a harmful and alien element in German culture, who corrupted morals and were, in fact, parasites incapable of creating truly "German" art. The crux of his argument was the Jews' alleged manipulation and control of the money economy.
Although originally published anonymously, when the essay was republished 19 years later, in 1869, the concept of the corrupting Jew had become so widely held that Wagner's name was affixed to it.
Antisemitism can also be found in many of the Grimms' Fairy Tales by Jacob and Wilhelm Grimm, published from 1812 to 1857. It is mainly characterized by Jews being the villain of a story, such as in "The Good Bargain" ("Der gute Handel") and "The Jew Among Thorns" ("Der Jude im Dorn").
The middle 19th century saw continued official harassment of the Jews, especially in Eastern Europe under Czarist influence. For example, in 1846, 80 Jews approached the governor in Warsaw to retain the right to wear their traditional dress but were immediately rebuffed by having their hair and beards forcefully cut, at their own expense.
Even such influential figures as Walt Whitman tolerated bigotry toward the Jews in America. During his time as editor of the Brooklyn Eagle (1846–1848), the newspaper published historical sketches casting Jews in a bad light.
The Dreyfus Affair was an infamous antisemitic event of the late 19th century and early 20th century. Alfred Dreyfus, a Jewish artillery captain in the French Army, was accused in 1894 of passing secrets to the Germans. As a result of these charges, Dreyfus was convicted and sentenced to life imprisonment on Devil's Island. The actual spy, Marie Charles Esterhazy, was acquitted. The event caused great uproar among the French, with the public choosing sides on the issue of whether Dreyfus was actually guilty or not. Émile Zola accused the army of corrupting the French justice system. However, general consensus held that Dreyfus was guilty: 80% of the press in France condemned him. This attitude among the majority of the French population reveals the underlying antisemitism of the time period.
Adolf Stoecker (1835–1909), the Lutheran court chaplain to Kaiser Wilhelm I, founded in 1878 an antisemitic, anti-liberal political party called the Christian Social Party. This party always remained small, and its support dwindled after Stoecker's death, with most of its members eventually joining larger conservative groups such as the German National People's Party.
Some scholars view Karl Marx's essay "On The Jewish Question" as antisemitic, and argue that he often used antisemitic epithets in his published and private writings. These scholars argue that Marx equated Judaism with capitalism in his essay, helping to spread that idea. Some further argue that the essay influenced National Socialist, as well as Soviet and Arab antisemites. Marx himself had Jewish ancestry, and Albert Lindemann and Hyam Maccoby have suggested that he was embarrassed by it.
Others argue that Marx consistently supported Prussian Jewish communities' struggles to achieve equal political rights. These scholars argue that "On the Jewish Question" is a critique of Bruno Bauer's arguments that Jews must convert to Christianity before being emancipated, and is more generally a critique of liberal rights discourses and capitalism. Iain Hampsher-Monk wrote that "This work [On The Jewish Question] has been cited as evidence for Marx's supposed anti-semitism, but only the most superficial reading of it could sustain such an interpretation."
David McLellan and Francis Wheen argue that readers should interpret On the Jewish Question in the deeper context of Marx's debates with Bruno Bauer, author of The Jewish Question, about Jewish emancipation in Germany. Wheen says that "Those critics, who see this as a foretaste of 'Mein Kampf', overlook one, essential point: in spite of the clumsy phraseology and crude stereotyping, the essay was actually written as a defense of the Jews. It was a retort to Bruno Bauer, who had argued that Jews should not be granted full civic rights and freedoms unless they were baptised as Christians". According to McLellan, Marx used the word Judentum colloquially, as meaning commerce, arguing that Germans must be emancipated from the capitalist mode of production not Judaism or Jews in particular. McLellan concludes that readers should interpret the essay's second half as "an extended pun at Bauer's expense".
20th century
Between 1900 and 1924, approximately 1.75 million Jews migrated to America, the bulk from Eastern Europe escaping the pogroms. Before 1900 American Jews had always amounted to less than 1% of America's total population, but by 1930 Jews formed about 3.5%. This increase, combined with the upward social mobility of some Jews, contributed to a resurgence of antisemitism. In the first half of the 20th century, in the US, Jews were discriminated against in employment, access to residential and resort areas, membership in clubs and organizations, and in tightened quotas on Jewish enrolment and teaching positions in colleges and universities. The lynching of Leo Frank by a mob of prominent citizens in Marietta, Georgia, in 1915 turned the spotlight on antisemitism in the United States. The case was also used to build support for the renewal of the Ku Klux Klan, which had been inactive since 1870.
At the beginning of the 20th century, the Beilis Trial in Russia represented a modern incident of blood libel in Europe. During the Russian Civil War, close to 50,000 Jews were killed in pogroms.
Antisemitism in America reached its peak during the interwar period. The pioneer automobile manufacturer Henry Ford propagated antisemitic ideas in his newspaper The Dearborn Independent (published by Ford from 1919 to 1927). The radio speeches of Father Coughlin in the late 1930s attacked Franklin D. Roosevelt's New Deal and promoted the notion of a Jewish financial conspiracy. Some prominent politicians shared such views: Louis T. McFadden, Chairman of the United States House Committee on Banking and Currency, blamed Jews for Roosevelt's decision to abandon the gold standard, and claimed that "in the United States today, the Gentiles have the slips of paper while the Jews have the lawful money".
In Germany, shortly after Adolf Hitler and the Nazi Party came to power in 1933, the government instituted repressive legislation which denied Jews basic civil rights.
In September 1935, the Nuremberg Laws prohibited sexual relations and marriages between "Aryans" and Jews as Rassenschande ("race disgrace") and stripped all German Jews, even quarter- and half-Jews, of their citizenship (their official title became "subjects of the state"). The regime instituted a pogrom on the night of 9–10 November 1938, dubbed Kristallnacht, in which Jews were killed, their property destroyed and their synagogues torched. Antisemitic laws, agitation and propaganda were extended to German-occupied Europe in the wake of conquest, often building on local antisemitic traditions.
In 1940, the famous aviator Charles Lindbergh and many prominent Americans led the America First Committee in opposing any involvement in a European war. Lindbergh alleged that Jews were pushing America to go to war against Germany. Lindbergh adamantly denied being antisemitic, and yet he refers numerous times in his private writings – his letters and diary – to Jewish control of the media being used to pressure the U.S. to get involved in the European war. In one diary entry in November 1938, he responded to Kristallnacht by writing "I do not understand these riots on the part of the Germans. ... They have undoubtedly had a difficult Jewish problem, but why is it necessary to handle it so unreasonably?", an acknowledgement on Lindbergh's part that he agreed with the Nazis that Germany had a "Jewish problem". An article by Jonathan Marwil in Antisemitism, A Historical Encyclopedia of Prejudice and Persecution claims that "no one who ever knew Lindbergh thought him antisemitic" and that claims of his antisemitism were solely tied to the remarks he made in that one speech.
In the east, the Third Reich forced Jews into ghettos in Warsaw, Kraków, Lvov, Lublin, and Radom.
After the beginning of the war between Nazi Germany and the Soviet Union in 1941, a campaign of mass murder, conducted by the Einsatzgruppen, culminated from 1942 to 1945 in systematic genocide: the Holocaust. Eleven million Jews were targeted for extermination by the Nazis, and some six million were eventually killed.
Contemporary antisemitism
Post-WWII antisemitism
There have continued to be antisemitic incidents since WWII, some of which have been state-sponsored. In the Soviet Union, antisemitism was used as an instrument for settling personal conflicts, starting with the conflict between Joseph Stalin and Leon Trotsky and continuing through numerous conspiracy theories spread by official propaganda. Antisemitism in the USSR reached new heights after 1948 during the campaign against the "rootless cosmopolitan" (a euphemism for "Jew"), in which numerous Yiddish-language poets, writers, painters, and sculptors were killed or arrested. This culminated in the so-called Doctors' Plot in 1952.
Similar antisemitic propaganda in Poland resulted in the flight of Polish Jewish survivors from the country. After the war, the Kielce pogrom and the "March 1968 events" in communist Poland represented further incidents of antisemitism in Europe. The anti-Jewish violence in postwar Poland had a common theme of blood libel rumours.
21st-century European antisemitism
Physical assaults against Jews in Europe have included beatings, stabbings, and other violence, which have increased markedly, sometimes resulting in serious injury or death. A 2015 report by the US State Department on religious freedom declared that "European anti-Israel sentiment crossed the line into anti-Semitism."
This rise in antisemitic attacks is associated with both Muslim antisemitism and the rise of far-right political parties in the wake of the economic crisis of 2008. The rise in support for far-right ideas in Western and Eastern Europe has resulted in an increase in antisemitic acts, mostly attacks on Jewish memorials, synagogues and cemeteries, but also a number of physical attacks against Jews.
In Eastern Europe, the dissolution of the Soviet Union and the instability of the new states brought the rise of nationalist movements and accusations that Jews had caused the economic crisis, taken over the local economy, and bribed the government, alongside traditional and religious motives for antisemitism such as blood libels. Writing on the rhetoric surrounding the 2022 Russian invasion of Ukraine, Jason Stanley relates these perceptions to broader historical narratives: "the dominant version of antisemitism alive in parts of eastern Europe today is that Jews employ the Holocaust to seize the victimhood narrative from the 'real' victims of the Nazis, who are Russian Christians (or other non-Jewish eastern Europeans)". He calls out the "myths of contemporary eastern European antisemitism – that a global cabal of Jews were (and are) the real agents of violence against Russian Christians and the real victims of the Nazis were not the Jews, but rather this group."
Most antisemitic incidents in Eastern Europe target Jewish cemeteries and buildings (community centers and synagogues). Nevertheless, there have been several violent attacks against Jews: in 2006, a neo-Nazi stabbed nine people at the Bolshaya Bronnaya Synagogue in Moscow; in 1999, a bomb attack on the same synagogue failed; Jewish pilgrims have been threatened in Uman, Ukraine; and in 2009 an extremist Christian organization attacked a menorah in Moldova.
According to Paul Johnson, antisemitic policies are a sign of a poorly governed state. While no European state currently has such policies, the Economist Intelligence Unit notes the rise of political uncertainty, notably populism and nationalism, as something particularly alarming for Jews.
21st-century Arab antisemitism
Robert Bernstein, founder of Human Rights Watch, says that antisemitism is "deeply ingrained and institutionalized" in "Arab nations in modern times".
In a 2011 survey by the Pew Research Center, all of the Muslim-majority Middle Eastern countries polled held significantly negative opinions of Jews. In the questionnaire, only 2% of Egyptians, 3% of Lebanese Muslims, and 2% of Jordanians reported having a positive view of Jews. Muslim-majority countries outside the Middle East similarly held markedly negative views of Jews, with 4% of Turks and 9% of Indonesians viewing Jews favorably.
According to a 2011 exhibition at the United States Holocaust Memorial Museum in Washington, some of the dialogue from Middle East media and commentators about Jews bears a striking resemblance to Nazi propaganda. According to Josef Joffe of Newsweek, "anti-Semitism—the real stuff, not just bad-mouthing particular Israeli policies—is as much part of Arab life today as the hijab or the hookah. Whereas this darkest of creeds is no longer tolerated in polite society in the West, in the Arab world, Jew hatred remains culturally endemic."
Muslim clerics in the Middle East have frequently referred to Jews as descendants of apes and pigs, which are conventional epithets for Jews and Christians.
According to professor Robert Wistrich, director of the Vidal Sassoon International Center for the Study of Antisemitism (SICSA), the calls for the destruction of Israel by Iran or by Hamas, Hezbollah, Islamic Jihad, or the Muslim Brotherhood, represent a contemporary mode of genocidal antisemitism.
Black Hebrew Israelite antisemitism
In 2022, the American Jewish Committee stated that the Black Hebrew Israelite claim that "we are the real Jews" is a "troubling anti-Semitic trope with dangerous potential". Black Hebrew Israelite followers have sought out and attacked Jewish people in the United States on more than one occasion. Between 2019 and 2022, individuals motivated by Black Hebrew Israelitism committed five religiously motivated murders.
Black Hebrew Israelites believe that Jewish people are "imposters", who have "stolen" Black Americans' true racial and religious identity. Black Hebrew Israelites promote the Khazar theory about Ashkenazi Jewish origins. In 2019, 4% of African-Americans self-identified as being Black Hebrew Israelites.
Causes
Antisemitism has been explained in terms of racism, xenophobia, projected guilt, displaced aggression and the search for a scapegoat. Some explanations assign partial blame to the perception of Jewish people as unsociable, a perception that may have arisen from many Jews having strictly kept to their own communities, with their own practices and laws.
It has also been suggested that part of antisemitism arose from a perception of Jewish people as greedy (a frequent element of anti-Jewish stereotypes), a perception that probably evolved in medieval Europe, where a large portion of moneylending was conducted by Jews. Contributing factors included the restriction of Jews from most other professions, while the Christian Church declared to its followers that moneylending constituted immoral "usury".
Prevention through education
Education plays an important role in addressing and overcoming prejudice and countering social discrimination. However, education is not only about challenging the conditions of intolerance and ignorance in which antisemitism manifests itself; it is also about building a sense of global citizenship and solidarity, respect for, and enjoyment of diversity and the ability to live peacefully together as active, democratic citizens. Education equips learners with the knowledge to identify antisemitism and biased or prejudiced messages and raises awareness about the forms, manifestations, and impact of antisemitism faced by Jews and Jewish communities.
Geographical variation
A March 2008 report by the U.S. State Department found that there was an increase in antisemitism across the world, and that both old and new expressions of antisemitism persist. A 2012 report by the U.S. Bureau of Democracy, Human Rights and Labor also noted a continued global increase in antisemitism, and found that Holocaust denial and opposition to Israeli policy at times was used to promote or justify blatant antisemitism. In 2014, the Anti-Defamation League conducted a study titled Global 100: An Index of Anti-Semitism, which also reported high antisemitism figures around the world and, among other findings, that as many as "27% of people who have never met a Jew nevertheless harbor strong prejudices against him".
See also
1968 Polish political crisis
Anti-antisemitism
Anti-Jewish violence in Eastern Europe, 1944–1946
Anti-Middle Eastern sentiment
Anti-Semite and Jew
Antisemitism around the world
Antisemitism in the anti-globalization movement
Antisemitism in the Arab world
Antisemitism in Japan
Antisemitism in the United States
History of antisemitism in the United States
Criticism of Judaism
Farhud, 1941 Baghdad pogrom
Host desecration
Jacob Barnet affair
Jewish deicide
Judeo-Masonic conspiracy theory
Martyrdom in Judaism
Universities and Antisemitism
Secondary antisemitism
Stab-in-the-back myth
Timeline of antisemitism
Xenophobia
Notes
References
Citations
Sources
Attribution
Further reading
Brustein, William I., and Ryan D. King. "Anti-semitism in Europe before the Holocaust." International Political Science Review 25.1 (2004): 35–53. online
Carr, Steven Alan. Hollywood and anti-Semitism: A cultural history up to World War II, Cambridge University Press 2001.
Cohn, Norman. Warrant for Genocide, Eyre & Spottiswoode 1967; Serif, 1996.
Fischer, Klaus P. The History of an Obsession: German Judeophobia and the Holocaust, The Continuum Publishing Company, 1998.
Freudmann, Lillian C. Antisemitism in the New Testament, University Press of America, 1994.
Gerber, Jane S. (1986). "Anti-Semitism and the Muslim World". In History and Hate: The Dimensions of Anti-Semitism, ed. David Berger. Jewish Publications Society.
Goldberg, Sol; Ury, Scott; Weiser, Kalman (eds.). Key Concepts in the Study of Antisemitism (Palgrave Macmillan, 2021) online review
Hanebrink, Paul. A Specter Haunting Europe: The Myth of Judeo-Bolshevism, Harvard University Press, 2018.
Hilberg, Raul. The Destruction of the European Jews. Holmes & Meier, 1985. 3 volumes.
Isser, Natalie. Antisemitism during the French Second Empire (1991)
McKain, Mark. Anti-Semitism: At Issue, Greenhaven Press, 2005.
Marcus, Kenneth L. The Definition of Anti-Semitism, 2015, Oxford University Press
Michael, Robert and Philip Rosen. Dictionary of Antisemitism, The Scarecrow Press, Inc., 2007
Michael, Robert. Holy Hatred: Christianity, Antisemitism, and the Holocaust
Nirenberg, David. Anti-Judaism: The Western Tradition (New York: W. W. Norton & Company, 2013) 610 pp.
Roth, Philip. The Plot Against America, 2004
Selzer, Michael (ed.). "Kike!" : A Documentary History of Anti-Semitism in America, New York 1972.
Small, Charles Asher (ed.). The Yale Papers: Antisemitism in Comparative Perspective (Institute for the Study of Global Antisemitism and Policy, 2015). Online; scholarly studies.
Stav, Arieh (1999). Peace: The Arabian Caricature – A Study of Anti-semitic Imagery. Gefen Publishing House.
Steinweis, Alan E. Studying the Jew: Scholarly Antisemitism in Nazi Germany. Harvard University Press, 2006.
Stillman, Norman. The Jews of Arab Lands: A History and Source Book. (Philadelphia: Jewish Publication Society of America. 1979).
Stillman, N.A. (2006). "Yahud". Encyclopaedia of Islam. Eds.: P.J. Bearman, Th. Bianquis, C.E. Bosworth, E. van Donzel and W.P. Heinrichs. Brill. Brill Online
United States Department of State, 2008. Retrieved 25 November 2010.
Vital, David. People Apart: The Jews in Europe, 1789–1939 (1999); 930 pp., highly detailed
Bibliographies, calendars, etc.
Jewish Journal of Greater Los Angeles, "Experts explore effects of Ahmadinejad anti-Semitism", 9 March 2007
Lazare, Bernard, Antisemitism: Its History and Causes
Anti-Defamation League Arab Antisemitism
Why the Jews? A perspective on causes of anti-Semitism
Coordination Forum for Countering Antisemitism (with up to date calendar of antisemitism today)
Annotated bibliography of anti-Semitism hosted by the Hebrew University of Jerusalem's Center for the Study of Antisemitism (SICSA)
Council of Europe, ECRI Country-by-Country Reports
Porat, Dina. "What makes an anti-Semite?", Haaretz, 27 January 2007. Retrieved 24 November 2010.
Yehoshua, A.B., An Attempt to Identify the Root Cause of Antisemitism, Azure, Spring 2008.
Antisemitism in modern Ukraine
Antisemitism and Special Relativity
https://en.wikipedia.org/wiki/Avicenna
Ibn Sina (980 – June 1037 CE), commonly known in the West as Avicenna, was the preeminent philosopher and physician of the Muslim world, flourishing during the Islamic Golden Age and serving in the courts of various Iranian rulers. He is often described as the father of early modern medicine. His philosophy was of the Muslim Peripatetic school derived from Aristotelianism.
His most famous works are The Book of Healing, a philosophical and scientific encyclopedia, and The Canon of Medicine, a medical encyclopedia which became a standard medical text at many medieval universities and remained in use as late as 1650. Besides philosophy and medicine, Avicenna's corpus includes writings on astronomy, alchemy, geography and geology, psychology, Islamic theology, logic, mathematics, physics, and works of poetry.
Avicenna wrote most of his philosophical and scientific works in Arabic, but also wrote several key works in Persian, while his poetic works were written in both languages. Of the 450 works he is believed to have written, around 240 have survived, including 150 on philosophy and 40 on medicine.
Name
Avicenna is a Latin corruption of the Arabic patronym Ibn Sīnā, meaning "Son of Sina". However, Avicenna was not the son but the great-great-grandson of a man named Sina. His formal Arabic name was Abū ʿAlī al-Ḥusayn bin ʿAbdullāh ibn al-Ḥasan bin ʿAlī bin Sīnā al-Balkhi al-Bukhari.
Circumstances
Avicenna created an extensive corpus of works during what is commonly known as the Islamic Golden Age, in which the translations of Byzantine Greco-Roman, Persian and Indian texts were studied extensively. Greco-Roman (Mid- and Neo-Platonic, and Aristotelian) texts translated by the Kindi school were commented upon, redacted and developed substantially by Islamic intellectuals, who also built upon Persian and Indian mathematical systems, astronomy, algebra, trigonometry and medicine.
The Samanid dynasty in the eastern part of Persia, Greater Khorasan and Central Asia as well as the Buyid dynasty in the western part of Persia and Iraq provided a thriving atmosphere for scholarly and cultural development. Under the Samanids, Bukhara rivaled Baghdad as a cultural capital of the Islamic world. There, Avicenna had access to the great libraries of Balkh, Khwarezm, Gorgan, Rey, Isfahan and Hamadan.
Various texts (such as the 'Ahd with Bahmanyar) show that Avicenna debated philosophical points with the greatest scholars of the time. Aruzi Samarqandi describes how before Avicenna left Khwarezm he had met Al-Biruni (a famous scientist and astronomer), Abu Nasr Iraqi (a renowned mathematician), Abu Sahl Masihi (a respected philosopher) and Abu al-Khayr Khammar (a great physician). The study of the Quran and the Hadith also thrived, and Islamic philosophy, fiqh and theology (kalaam) were all further developed by Avicenna and his opponents at this time.
Biography
Early life and education
Avicenna was born c. 980 in the village of Afshana in Transoxiana to a family of Persian stock. The village was near the Samanid capital of Bukhara, which was his mother's hometown. His father, Abd Allah, was a native of the city of Balkh in Tukharistan. An official of the Samanid bureaucracy, he had served as the governor of a village of the royal estate of Harmaytan (near Bukhara) during the reign of Nuh II. Avicenna also had a younger brother. A few years later, the family settled in Bukhara, a centre of learning which attracted many scholars. It was there that Avicenna was educated, early on seemingly under his father's direction. Although both Avicenna's father and brother had converted to Ismailism, he himself did not follow the faith. He was instead an adherent of the Sunni Hanafi school, which was also followed by the Samanids.
Avicenna was first schooled in the Quran and literature, and by the age of 10 he had memorized the entire Quran. He was later sent by his father to an Indian greengrocer, who taught him arithmetic. Afterwards, he was schooled in jurisprudence by the Hanafi jurist Ismail al-Zahid. Some time later, Avicenna's father invited the physician and philosopher Abu Abdallah al-Natili to their house to educate Avicenna. Together, they studied the Isagoge of Porphyry (died 305) and possibly the Categories of Aristotle (died 322 BC) as well. After Avicenna had read the Almagest of Ptolemy (died 170) and Euclid's Elements, Natili told him to continue his research independently. By the time Avicenna was eighteen, he was well educated in the Greek sciences. Although Avicenna mentions only Natili as his teacher in his autobiography, he most likely had other teachers as well, such as the physicians Abu Mansur Qumri and Abu Sahl al-Masihi.
Career
In Bukhara and Gurganj
At the age of seventeen, Avicenna was made a physician to Nuh II. His father died when Avicenna was at least 21 years old; he was subsequently given an administrative post, possibly succeeding his father as governor of Harmaytan. Avicenna later moved to Gurganj, the capital of Khwarazm, a move he attributed to "necessity". The date of the move is uncertain: he reports serving the Khwarazmshah (ruler) of the region, the Ma'munid Abu al-Hasan Ali, who ruled from 997 to 1009, which indicates that Avicenna moved sometime during that period. He may have moved in 999, the year in which the Samanid state fell after the Turkic Qarakhanids captured Bukhara and imprisoned the Samanid ruler Abd al-Malik II. Due to his high position and strong connection with the Samanids, Avicenna may have found himself in an unfavorable position after the fall of his suzerain. It was through the minister of Gurganj, Abu'l-Husayn as-Sahi, a patron of Greek sciences, that Avicenna entered the service of Abu al-Hasan Ali. Under the Ma'munids, Gurganj became a centre of learning, attracting many prominent figures, including Avicenna and his former teacher Abu Sahl al-Masihi, the mathematician Abu Nasr Mansur, the physician Ibn al-Khammar, and the philologist al-Tha'alibi.
In Gurgan
Avicenna moved once more due to "necessity" in 1012, this time to the west. He travelled through the Khurasani cities of Nasa, Abivard, Tus, Samangan and Jajarm, planning to visit the ruler of the city of Gurgan, the Ziyarid Qabus, a cultivated patron of writing whose court attracted many distinguished poets and scholars. However, when Avicenna eventually arrived, he discovered that the ruler had been dead since the winter of 1013. Avicenna then left Gurgan for Dihistan, but returned after becoming ill. There he met Abu 'Ubayd al-Juzjani (died 1070), who became his pupil and companion. Avicenna stayed briefly in Gurgan, reportedly serving Qabus's son and successor Manuchihr and residing in the house of a patron.
In Ray and Hamadan
In 1014, Avicenna went to the city of Ray, where he entered the service of the Buyid amir (ruler) Majd al-Dawla and his mother Sayyida Shirin, the de facto ruler of the realm. There he served as physician at the court, treating Majd al-Dawla, who was suffering from melancholia. Avicenna reportedly later served as the "business manager" of Sayyida Shirin in Qazvin and Hamadan, though details regarding this tenure are unclear. During this period, Avicenna finished his Canon of Medicine and started writing his Book of Healing.
In 1015, during Avicenna's stay in Hamadan, he participated in a public debate, as was customary for newly arrived scholars in western Iran at that time; the purpose was to test the newcomer's reputation against a prominent local resident. Avicenna's opponent was Abu'l-Qasim al-Kirmani, a member of the school of philosophers of Baghdad. The debate became heated, with Avicenna accusing Abu'l-Qasim of a lack of basic knowledge in logic, while Abu'l-Qasim accused Avicenna of impoliteness. After the debate, Avicenna sent a letter to the Baghdad Peripatetics, asking whether Abu'l-Qasim's claim to share their opinion was true. Abu'l-Qasim later retaliated by writing a letter to an unknown person, making accusations so serious that Avicenna wrote to a deputy of Majd al-Dawla, named Abu Sa'd, to investigate the matter. The accusation may have been the same one Avicenna had faced earlier, when the people of Hamadan accused him of copying the stylistic structures of the Quran in his Sermons on Divine Unity. The seriousness of this charge, in the words of the historian Peter Adamson, "cannot be underestimated in the larger Muslim culture."
Not long afterwards, Avicenna shifted his allegiance to the rising Buyid amir Shams al-Dawla (the younger brother of Majd al-Dawla), a move Adamson suggests was prompted by Abu'l-Qasim's also working under Sayyida Shirin. Shams al-Dawla had called upon Avicenna to treat him, but after the amir's campaign the same year against his former ally, the Annazid ruler Abu Shawk, he forced Avicenna to become his vizier. Although Avicenna sometimes clashed with Shams al-Dawla's troops, he remained vizier until the latter died of colic in 1021. Shams al-Dawla's son and successor Sama' al-Dawla asked Avicenna to stay on as vizier, but Avicenna instead went into hiding with his patron Abu Ghalib al-Attar to await better opportunities. It was during this period that Avicenna was secretly in contact with Ala al-Dawla Muhammad, the Kakuyid ruler of Isfahan and uncle of Sayyida Shirin.
It was during his stay at Attar's home that Avicenna completed his Book of Healing, writing 50 pages a day. The Buyid court in Hamadan, particularly the Kurdish vizier Taj al-Mulk, suspected Avicenna of corresponding with Ala al-Dawla, and as a result had Attar's house ransacked and Avicenna imprisoned in the fortress of Fardajan, outside Hamadan. Juzjani blames one of Avicenna's informers for his capture. Avicenna remained imprisoned for four months, until Ala al-Dawla captured Hamadan, putting an end to Sama' al-Dawla's reign.
In Isfahan
Avicenna was subsequently released, and went to Isfahan, where he was well received by Ala al-Dawla. In the words of Juzjani, the Kakuyid ruler gave Avicenna "the respect and esteem which someone like him deserved." Adamson also says that Avicenna's service under Ala al-Dawla "proved to be the most stable period of his life." Avicenna served as the advisor, if not vizier of Ala al-Dawla, accompanying him in many of his military expeditions and travels. Avicenna dedicated two Persian works to him, a philosophical treatise named Danish-nama-yi Ala'i ("Book of Science for Ala"), and a medical treatise about the pulse.
During the brief occupation of Isfahan by the Ghaznavids in January 1030, Avicenna and Ala al-Dawla relocated to the southwestern Iranian region of Khuzistan, where they stayed until the death of the Ghaznavid ruler Mahmud two months later. It was seemingly after Avicenna returned to Isfahan that he started writing his Pointers and Reminders. In 1037, while accompanying Ala al-Dawla to a battle near Isfahan, Avicenna suffered a severe attack of colic, an ailment that had afflicted him throughout his life. He died shortly afterwards in Hamadan, where he was buried.
Philosophy
Avicenna wrote extensively on early Islamic philosophy, especially logic, ethics and metaphysics, including treatises named Logic and Metaphysics. Most of his works were written in Arabic, then the language of science in the Middle East, and some in Persian. Of linguistic significance even to this day are a few books that he wrote in nearly pure Persian (particularly the Danishnamah-yi 'Ala', Philosophy for Ala' ad-Dawla). Avicenna's commentaries on Aristotle often criticized the philosopher, encouraging a lively debate in the spirit of ijtihad.
Avicenna's Neoplatonic scheme of "emanations" became fundamental in the Kalam (school of theological discourse) in the 12th century.
His Book of Healing became available in Europe in partial Latin translation some fifty years after its composition, under the title Sufficientia, and some authors have identified a "Latin Avicennism" as flourishing for some time, paralleling the more influential Latin Averroism, but suppressed by the Parisian decrees of 1210 and 1215.
Avicenna's psychology and theory of knowledge influenced William of Auvergne, Bishop of Paris and Albertus Magnus, while his metaphysics influenced the thought of Thomas Aquinas.
Metaphysical doctrine
Early Islamic philosophy and Islamic metaphysics, imbued as it is with Islamic theology, distinguishes between essence and existence more clearly than Aristotelianism. Whereas existence is the domain of the contingent and the accidental, essence endures within a being beyond the accidental. The philosophy of Avicenna, particularly that part relating to metaphysics, owes much to al-Farabi. The search for a definitive Islamic philosophy separate from Occasionalism can be seen in what is left of his work.
Following al-Farabi's lead, Avicenna initiated a full-fledged inquiry into the question of being, in which he distinguished between essence (Mahiat) and existence (Wujud). He argued that the fact of existence cannot be inferred from or accounted for by the essence of existing things, and that form and matter by themselves cannot interact and originate the movement of the universe or the progressive actualization of existing things. Existence must, therefore, be due to an agent-cause that necessitates, imparts, gives, or adds existence to an essence. To do so, the cause must be an existing thing and coexist with its effect.
Avicenna's consideration of the essence-attributes question may be elucidated in terms of his ontological analysis of the modalities of being; namely impossibility, contingency and necessity. Avicenna argued that the impossible being is that which cannot exist, while the contingent in itself (mumkin bi-dhatihi) has the potentiality to be or not to be without entailing a contradiction. When actualized, the contingent becomes a 'necessary existent due to what is other than itself' (wajib al-wujud bi-ghayrihi). Thus, contingency-in-itself is potential beingness that could eventually be actualized by an external cause other than itself. The metaphysical structures of necessity and contingency are different. Necessary being due to itself (wajib al-wujud bi-dhatihi) is true in itself, while the contingent being is 'false in itself' and 'true due to something else other than itself'. The necessary is the source of its own being without borrowed existence. It is what always exists.
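In the notation of modern modal logic – a present-day gloss rather than Avicenna's own symbolism – these modalities are often summarized as follows: the impossible-in-itself is $\neg\Diamond p$ (its existence entails a contradiction), the contingent-in-itself is $\Diamond p \land \Diamond\neg p$ (both its existence and its non-existence are conceivable), and the necessary is $\Box p$ (its non-existence entails a contradiction). On this reading, the actualized contingent is "necessary through another": $\Box p$ holds only conditionally on the existence of its external cause.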
The Necessary exists 'due-to-Its-Self', and has no quiddity/essence (mahiyya) other than existence (wujud). Furthermore, It is 'One' (wahid ahad) since there cannot be more than one 'Necessary-Existent-due-to-Itself' without differentia (fasl) to distinguish them from each other. Yet, to require differentia entails that they exist 'due-to-themselves' as well as 'due to what is other than themselves'; and this is contradictory. However, if no differentia distinguishes them from each other, then there is no sense in which these 'Existents' are not one and the same. Avicenna adds that the 'Necessary-Existent-due-to-Itself' has no genus (jins), nor a definition (hadd), nor a counterpart (nadd), nor an opposite (did), and is detached (bari) from matter (madda), quality (kayf), quantity (kam), place (ayn), situation (wad) and time (waqt).
Avicenna's theology on metaphysical issues (ilāhiyyāt) has been criticized by some Islamic scholars, among them al-Ghazali, Ibn Taymiyya and Ibn al-Qayyim. While discussing the views of the theists among the Greek philosophers, namely Socrates, Plato and Aristotle in Al-Munqidh min ad-Dalal ("Deliverance from Error"), al-Ghazali noted that the Greek philosophers "must be taxed with unbelief, as must their partisans among the Muslim philosophers, such as Avicenna and al-Farabi and their likes." He added that "None, however, of the Muslim philosophers engaged so much in transmitting Aristotle's lore as did the two men just mentioned. [...] The sum of what we regard as the authentic philosophy of Aristotle, as transmitted by al-Farabi and Avicenna, can be reduced to three parts: a part which must be branded as unbelief; a part which must be stigmatized as innovation; and a part which need not be repudiated at all."
Argument for God's existence
Avicenna made an argument for the existence of God which would be known as the "Proof of the Truthful" (Arabic: burhan al-siddiqin). Avicenna argued that there must be a "necessary existent" (Arabic: wajib al-wujud), an entity that cannot not exist and through a series of arguments, he identified it with the Islamic conception of God. Present-day historian of philosophy Peter Adamson called this argument one of the most influential medieval arguments for God's existence, and Avicenna's biggest contribution to the history of philosophy.
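One schematic reconstruction commonly given in modern commentary – a summary gloss, not Avicenna's own wording – runs: (1) something exists; (2) every existent is either contingent in itself or necessary in itself; (3) every contingent existent, including the totality of contingent things taken together, requires a cause outside itself in order to exist; (4) such a cause cannot itself be contingent, since it would then belong to that totality; therefore (5) there is an existent that is necessary in itself, which Avicenna identifies with God.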
Al-Biruni correspondence
Correspondence between Avicenna (with his student Ahmad ibn 'Ali al-Ma'sumi) and Al-Biruni has survived in which they debated Aristotelian natural philosophy and the Peripatetic school. Al-Biruni began by asking Avicenna eighteen questions, ten of which were criticisms of Aristotle's On the Heavens.
Theology
Avicenna was a devout Muslim and sought to reconcile rational philosophy with Islamic theology. His aim was to prove the existence of God and His creation of the world scientifically and through reason and logic. Avicenna's views on Islamic theology (and philosophy) were enormously influential, forming part of the core of the curriculum at Islamic religious schools until the 19th century. Avicenna wrote a number of short treatises dealing with Islamic theology. These included treatises on the prophets (whom he viewed as "inspired philosophers"), and also on various scientific and philosophical interpretations of the Quran, such as how Quranic cosmology corresponds to his own philosophical system. In general these treatises linked his philosophical writings to Islamic religious ideas; for example, the body's afterlife.
There are occasional brief hints and allusions in his longer works, however, that Avicenna considered philosophy the only sensible way to distinguish real prophecy from illusion. He did not state this more clearly, both because of the political implications of a theory under which prophecy could be questioned, and because most of the time he was writing shorter works concentrating on explaining his theories on philosophy and theology clearly, without digressing into epistemological matters that could only be properly considered by other philosophers.
Later interpretations of Avicenna's philosophy split into three different schools; those (such as al-Tusi) who continued to apply his philosophy as a system to interpret later political events and scientific advances; those (such as al-Razi) who considered Avicenna's theological works in isolation from his wider philosophical concerns; and those (such as al-Ghazali) who selectively used parts of his philosophy to support their own attempts to gain greater spiritual insights through a variety of mystical means. It was the theological interpretation championed by those such as al-Razi which eventually came to predominate in the madrasahs.
Avicenna memorized the Quran by the age of ten, and as an adult, he wrote five treatises commenting on suras from the Quran. One of these texts included the Proof of Prophecies, in which he comments on several Quranic verses and holds the Quran in high esteem. Avicenna argued that the Islamic prophets should be considered higher than philosophers.
Avicenna is generally understood to have been aligned with the Sunni Hanafi school of thought. Avicenna studied Hanafi law, many of his notable teachers were Hanafi jurists, and he served under the Hanafi court of Ali ibn Mamun. Avicenna said at an early age that he remained "unconvinced" by Ismaili missionary attempts to convert him. Medieval historian Ẓahīr al-dīn al-Bayhaqī (d. 1169) also believed Avicenna to be a follower of the Brethren of Purity.
Thought experiments
While he was imprisoned in the castle of Fardajan near Hamadhan, Avicenna wrote his famous "floating man" (literally "falling man") thought experiment to demonstrate human self-awareness and the substantiality and immateriality of the soul. Avicenna believed his "Floating Man" thought experiment demonstrated that the soul is a substance, and claimed humans cannot doubt their own consciousness, even in a situation that prevents all sensory data input. The thought experiment told its readers to imagine themselves created all at once while suspended in the air, isolated from all sensations, including any sensory contact with their own bodies. He argued that, in this scenario, one would still have self-consciousness. Because it is conceivable that a person, suspended in air while cut off from sense experience, would still be capable of determining his own existence, the thought experiment points to the conclusions that the soul is a perfection, independent of the body, and an immaterial substance. The conceivability of this "Floating Man" indicates that the soul is perceived intellectually, which entails the soul's separateness from the body. Avicenna referred to the living human intelligence, particularly the active intellect, which he believed to be the hypostasis by which God communicates truth to the human mind and imparts order and intelligibility to nature.
However, Avicenna posited the brain as the place where reason interacts with sensation. Sensation prepares the soul to receive rational concepts from the universal Agent Intellect. The first knowledge of the flying person would be "I am," affirming his or her essence. That essence could not be the body, obviously, as the flying person has no sensation. Thus, the knowledge that "I am" is the core of a human being: the soul exists and is self-aware. Avicenna thus concluded that the idea of the self is not logically dependent on any physical thing, and that the soul should not be seen in relative terms, but as a primary given, a substance. The body is unnecessary; in relation to it, the soul is its perfection. In itself, the soul is an immaterial substance.
Principal works
The Canon of Medicine
Avicenna authored a five-volume medical encyclopedia: The Canon of Medicine (Al-Qanun fi't-Tibb). It was used as the standard medical textbook in the Islamic world and Europe up to the 18th century. The Canon still plays an important role in Unani medicine.
Liber Primus Naturalium
Avicenna considered whether events like rare diseases or disorders have natural causes. He used the example of polydactyly to explain his perception that causal reasons exist for all medical events. This view of medical phenomena anticipated developments in the Enlightenment by seven centuries.
The Book of Healing
Earth sciences
Avicenna wrote on Earth sciences such as geology in The Book of Healing, where he also discussed the formation of mountains.
Philosophy of science
In the Al-Burhan (On Demonstration) section of The Book of Healing, Avicenna discussed the philosophy of science and described an early scientific method of inquiry. He discussed Aristotle's Posterior Analytics and significantly diverged from it on several points. Avicenna discussed the issue of a proper methodology for scientific inquiry and the question of "How does one acquire the first principles of a science?" He asked how a scientist would arrive at "the initial axioms or hypotheses of a deductive science without inferring them from some more basic premises?" He explained that the ideal situation is when one grasps that a "relation holds between the terms, which would allow for absolute, universal certainty". Avicenna then added two further methods for arriving at the first principles: the ancient Aristotelian method of induction (istiqra), and the method of examination and experimentation (tajriba). Avicenna criticized Aristotelian induction, arguing that "it does not lead to the absolute, universal, and certain premises that it purports to provide." In its place, he developed a "method of experimentation as a means for scientific inquiry."
Logic
An early formal system of temporal logic was studied by Avicenna. Although he did not develop a real theory of temporal propositions, he did study the relationship between temporalis and the implication. Avicenna's work was further developed by Najm al-Dīn al-Qazwīnī al-Kātibī and became the dominant system of Islamic logic until modern times. Avicennian logic also influenced several early European logicians such as Albertus Magnus and William of Ockham. Avicenna endorsed the law of non-contradiction proposed by Aristotle, that a fact could not be both true and false at the same time and in the same sense of the terminology used. He stated, "Anyone who denies the law of non-contradiction should be beaten and burned until he admits that to be beaten is not the same as not to be beaten, and to be burned is not the same as not to be burned."
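In modern notation – a present-day rendering rather than Avicenna's own – the law he endorsed is written $\neg(p \land \neg p)$: for any proposition $p$, it cannot be the case that $p$ and its negation both hold at the same time and in the same respect.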
Physics
In mechanics, Avicenna, in The Book of Healing, developed a theory of motion, in which he made a distinction between the inclination (tendency to motion) and force of a projectile, and concluded that motion was a result of an inclination (mayl) transferred to the projectile by the thrower, and that projectile motion in a vacuum would not cease. He viewed inclination as a permanent force whose effect is dissipated by external forces such as air resistance.
The theory of motion presented by Avicenna was probably influenced by the 6th-century Alexandrian scholar John Philoponus. Avicenna's is a less sophisticated variant of the theory of impetus developed by Buridan in the 14th century. It is unclear if Buridan was influenced by Avicenna, or by Philoponus directly.
In optics, Avicenna was among those who argued that light had a speed, observing that "if the perception of light is due to the emission of some sort of particles by a luminous source, the speed of light must be finite." He also provided an incorrect explanation of the rainbow phenomenon, a theory that Carl Benjamin Boyer later discussed in his history of the rainbow.
In 1253, a Latin text entitled Speculum Tripartitum discussed Avicenna's theory of heat.
Psychology
Avicenna's legacy in classical psychology is primarily embodied in the Kitab al-nafs parts of his Kitab al-shifa (The Book of Healing) and Kitab al-najat (The Book of Deliverance). These were known in Latin under the title De Anima (treatises "on the soul"). Notably, Avicenna develops what is called the Flying Man argument in the Psychology of The Cure I.1.7 in defence of the claim that the soul is without quantitative extension, which has an affinity with Descartes's cogito argument (or what phenomenology designates as a form of an "epoche").
Avicenna's psychology requires that the connection between the body and soul be strong enough to ensure the soul's individuation, but weak enough to allow for its immortality. Avicenna grounds his psychology in physiology, which means his account of the soul deals almost entirely with the natural science of the body and its abilities of perception. Thus, the philosopher's connection between the soul and body is explained almost entirely by his understanding of perception; in this way, bodily perception interrelates with the immaterial human intellect. In sense perception, the perceiver senses the form of the object: first, the external senses perceive the object's features; this sensory information is then supplied to the internal senses, which merge all the pieces into a whole, unified conscious experience. This process of perception and abstraction is the nexus of the soul and body, for the material body may only perceive material objects, while the immaterial soul may only receive the immaterial, universal forms. The way the soul and body interact in the final abstraction of the universal from the concrete particular is the key to their relationship and interaction, which takes place in the physical body.
The soul completes the action of intellection by accepting forms that have been abstracted from matter. This process requires a concrete particular (material) to be abstracted into the universal intelligible (immaterial). The material and immaterial interact through the Active Intellect, which is a "divine light" containing the intelligible forms. The Active Intellect reveals the universals concealed in material objects much like the sun makes colour available to our eyes.
Other contributions
Astronomy and astrology
Avicenna wrote an attack on astrology titled Resāla fī ebṭāl aḥkām al-nojūm, in which he cited passages from the Quran to dispute the power of astrology to foretell the future. He believed that each planet had some influence on the earth, but argued against astrologers being able to determine the exact effects.
Avicenna's astronomical writings had some influence on later writers, although in general his work could be considered less developed than that of Alhazen or Al-Biruni. One important feature of his writing is that he considers mathematical astronomy a discipline separate from astrology. He criticized Aristotle's view that the stars receive their light from the Sun, stating that the stars are self-luminous, and believed the planets to be self-luminous as well. He claimed to have observed Venus as a spot on the Sun. This is possible, as there was a transit on 24 May 1032, but Avicenna did not give the date of his observation, and modern scholars have questioned whether he could have observed the transit from his location at that time; he may have mistaken a sunspot for Venus. He used his transit observation to help establish that Venus was, at least sometimes, below the Sun in Ptolemaic cosmology, i.e. the sphere of Venus comes before the sphere of the Sun when moving out from the Earth in the prevailing geocentric model.
He also wrote the Summary of the Almagest (based on Ptolemy's Almagest), with an appended treatise "to bring that which is stated in the Almagest and what is understood from Natural Science into conformity". For example, Avicenna considers the motion of the solar apogee, which Ptolemy had taken to be fixed.
Chemistry
Avicenna was the first to derive the attar of flowers by distillation, and he used steam distillation to produce essential oils such as rose essence, which he employed as aromatherapeutic treatments for heart conditions.
Unlike al-Razi, Avicenna explicitly disputed the theory of the transmutation of substances commonly believed by alchemists.
Four works on alchemy attributed to Avicenna were translated into Latin. Of these, the treatise known as the de Anima was the most influential, having influenced later medieval chemists and alchemists such as Vincent of Beauvais. However, Anawati argues (following Ruska) that the de Anima is a fake by a Spanish author. Similarly, the Declaratio is believed not to be actually by Avicenna. The third work (The Book of Minerals) is agreed to be Avicenna's writing, adapted from the Kitab al-Shifa (Book of the Remedy). Avicenna classified minerals into stones, fusible substances, sulfurs and salts, building on the ideas of Aristotle and Jabir. The epistola de Re recta is somewhat less sceptical of alchemy; Anawati argues that it is by Avicenna, but written earlier in his career, when he had not yet firmly decided that transmutation was impossible.
Poetry
Almost half of Avicenna's works are versified, and his poems appear in both Arabic and Persian. Edward Granville Browne, for example, claimed that certain Persian verses commonly attributed to Omar Khayyám were originally written by Ibn Sīnā.
Legacy
Classical Islamic civilization
Robert Wisnovsky, a scholar of Avicenna attached to McGill University, says that "Avicenna was the central figure in the long history of the rational sciences in Islam, particularly in the fields of metaphysics, logic and medicine", but that his influence was not confined to these "secular" fields of knowledge alone, as "these works, or portions of them, were read, taught, copied, commented upon, quoted, paraphrased and cited by thousands of post-Avicennian scholars—not only philosophers, logicians, physicians and specialists in the mathematical or exact sciences, but also by those who specialized in the disciplines of ʿilm al-kalām (rational theology, but understood to include natural philosophy, epistemology and philosophy of mind) and usūl al-fiqh (jurisprudence, but understood to include philosophy of law, dialectic, and philosophy of language)."
Middle Ages and Renaissance
Avicenna has been recognized by both East and West as one of the great figures in intellectual history. As early as the 14th century, Dante Alighieri depicted him in Limbo in his Divine Comedy, alongside virtuous non-Christian thinkers such as Virgil, Averroes, Homer, Horace, Ovid, Lucan, Socrates, Plato and Saladin. Johannes Kepler cites Avicenna's opinion when discussing the causes of planetary motions in Chapter 2 of Astronomia Nova.
George Sarton, the author of The History of Science, described Avicenna as "one of the greatest thinkers and medical scholars in history" and called him "the most famous scientist of Islam and one of the most famous of all races, places, and times". He was one of the Islamic world's leading writers in the field of medicine.
Along with Rhazes, Abulcasis, Ibn al-Nafis and al-Ibadi, Avicenna is considered an important compiler of early Muslim medicine. He is remembered in the Western history of medicine as a major historical figure who made important contributions to medicine and the European Renaissance. His medical texts were unusual in that, where controversy existed between Galen's and Aristotle's views on medical matters (such as anatomy), he preferred to side with Aristotle, where necessary updating Aristotle's position to take into account post-Aristotelian advances in anatomical knowledge. Aristotle's dominant intellectual influence among medieval European scholars meant that Avicenna's linking of Galen's medical writings with Aristotle's philosophical writings in the Canon of Medicine (along with its comprehensive and logical organisation of knowledge) significantly increased Avicenna's importance in medieval Europe in comparison to other Islamic writers on medicine. His influence following translation of the Canon was such that from the early fourteenth to the mid-sixteenth centuries he was ranked with Hippocrates and Galen as one of the acknowledged authorities ("prince of physicians").
Modern reception
Institutions in a variety of countries have been named after Avicenna in honour of his scientific accomplishments, including the Avicenna Mausoleum and Museum, Bu-Ali Sina University, the Avicenna Research Institute and the Ibn Sina Academy of Medieval Medicine and Sciences. There is also a crater on the Moon named Avicenna.
The Avicenna Prize, established in 2003, is awarded every two years by UNESCO and rewards individuals and groups for their achievements in the field of ethics in science.
The Avicenna Directories (2008–15; now the World Directory of Medical Schools) list universities and schools where doctors, public health practitioners, pharmacists and others are educated.
In June 2009, Iran donated a "Persian Scholars Pavilion" to the United Nations Office in Vienna. It now sits in the Vienna International Center.
In popular culture
The 1982 Soviet film Youth of Genius recounts Avicenna's younger years. The film is set in Bukhara at the turn of the millennium.
In Louis L'Amour's 1985 historical novel The Walking Drum, Kerbouchard studies and discusses Avicenna's The Canon of Medicine.
In his book The Physician (1988) Noah Gordon tells the story of a young English medical apprentice who disguises himself as a Jew to travel from England to Persia and learn from Avicenna, the great master of his time. The novel was adapted into a feature film, The Physician, in 2013. Avicenna was played by Ben Kingsley.
List of works
The treatises of Avicenna influenced later Muslim thinkers in many areas including theology, philology, mathematics, astronomy, physics and music. His works numbered almost 450 volumes on a wide range of subjects, of which around 240 have survived. In particular, 150 volumes of his surviving works concentrate on philosophy and 40 of them concentrate on medicine. His most famous works are The Book of Healing, and The Canon of Medicine.
Avicenna wrote at least one treatise on alchemy, but several others have been falsely attributed to him. His Logic, Metaphysics, Physics, and De Caelo, are treatises giving a synoptic view of Aristotelian doctrine, though Metaphysics demonstrates a significant departure from the brand of Neoplatonism known as Aristotelianism in Avicenna's world; Arabic philosophers have hinted at the idea that Avicenna was attempting to "re-Aristotelianise" Muslim philosophy in its entirety, unlike his predecessors, who accepted the conflation of Platonic, Aristotelian, Neo- and Middle-Platonic works transmitted into the Muslim world.
The Logic and Metaphysics have been extensively reprinted, the latter, for example, at Venice in 1493, 1495 and 1546. Some of his shorter essays on medicine, logic, etc., take a poetical form (the poem on logic was published by Schmoelders in 1836). Two encyclopedic treatises dealing with philosophy are often mentioned. The larger, Al-Shifa' (Sanatio), exists nearly complete in manuscript in the Bodleian Library and elsewhere; part of it on the De Anima appeared at Pavia (1490) as the Liber Sextus Naturalium, and the long account of Avicenna's philosophy given by Muhammad al-Shahrastani seems to be mainly an analysis, and in many places a reproduction, of the Al-Shifa'. A shorter form of the work is known as the An-najat (Liberatio). The Latin editions of part of these works have been modified by the corrections which the monastic editors confess that they applied. There is also the Hikmat al-mashriqiyya (in Latin, Philosophia Orientalis), mentioned by Roger Bacon; the majority of it is lost, and according to Averroes it was pantheistic in tone.
Avicenna's works further include:
Sirat al-shaykh al-ra'is (The Life of Avicenna), ed. and trans. W.E. Gohlman, Albany, NY: State University of New York Press, 1974. (The only critical edition of Avicenna's autobiography, supplemented with material from a biography by his student Abu 'Ubayd al-Juzjani. A more recent translation of the Autobiography appears in D. Gutas, Avicenna and the Aristotelian Tradition: Introduction to Reading Avicenna's Philosophical Works, Leiden: Brill, 1988; second edition 2014.)
Al-isharat wa al-tanbihat (Remarks and Admonitions), ed. S. Dunya, Cairo, 1960; parts translated by S.C. Inati, Remarks and Admonitions, Part One: Logic, Toronto, Ont.: Pontifical Institute for Mediaeval Studies, 1984, and Ibn Sina and Mysticism, Remarks and Admonitions: Part 4, London: Kegan Paul International, 1996.
Al-Qanun fi'l-tibb (The Canon of Medicine), ed. I. a-Qashsh, Cairo, 1987. (Encyclopedia of medicine.) manuscript, Latin translation, Flores Avicenne, Michael de Capella, 1508, Modern text. Ahmed Shawkat Al-Shatti, Jibran Jabbur.
Risalah fi sirr al-qadar (Essay on the Secret of Destiny), trans. G. Hourani in Reason and Tradition in Islamic Ethics, Cambridge: Cambridge University Press, 1985.
Danishnama-i 'ala'i (The Book of Scientific Knowledge), ed. and trans. P. Morewedge, The Metaphysics of Avicenna, London: Routledge and Kegan Paul, 1973.
Kitab al-Shifa' (The Book of Healing). (Avicenna's major work on philosophy. He probably began to compose al-Shifa' in 1014 and completed it in 1020.) Critical editions of the Arabic text have been published in Cairo, 1952–83, originally under the supervision of I. Madkour.
Kitab al-Najat (The Book of Salvation), trans. F. Rahman, Avicenna's Psychology: An English Translation of Kitab al-Najat, Book II, Chapter VI with Historical-philosophical Notes and Textual Improvements on the Cairo Edition, Oxford: Oxford University Press, 1952. (The psychology of al-Shifa'.) (Digital version of the Arabic text)
Risala fi'l-Ishq (A Treatise on Love). Translated by Emil L. Fackenheim.
Persian works
Avicenna's most important Persian work is the Danishnama-i 'Alai ("The Book of Knowledge for [Prince] 'Ala ad-Daulah"). Avicenna created new scientific vocabulary that had not previously existed in Persian. The Danishnama covers such topics as logic, metaphysics, music theory and other sciences of his time. It was translated into English by Parwiz Morewedge in 1977. The book is also important in respect to Persian scientific works. Andar Danesh-e Rag ("On the Science of the Pulse") contains nine chapters on the science of the pulse and is a condensed synopsis.
Persian poetry from Avicenna is recorded in various manuscripts and later anthologies such as Nozhat al-Majales.
See also
Al-Qumri (possibly Avicenna's teacher)
Abdol Hamid Khosro Shahi (Iranian theologian)
Mummia (Persian medicine)
Eastern philosophy
Iranian philosophy
Islamic philosophy
Contemporary Islamic philosophy
Science in the medieval Islamic world
List of scientists in medieval Islamic world
Sufi philosophy
Science and technology in Iran
Ancient Iranian medicine
List of pre-modern Iranian scientists and scholars
Namesakes of Ibn Sina
Ibn Sina Academy of Medieval Medicine and Sciences in Aligarh
Avicenna Bay in Antarctica
Avicenna (crater) on the far side of the Moon
Avicenna Cultural and Scientific Foundation
Avicenne Hospital in Paris, France
Avicenna International College in Budapest, Hungary
Avicenna Mausoleum (complex dedicated to Avicenna) in Hamadan, Iran
Avicenna Research Institute in Tehran, Iran
Avicenna Tajik State Medical University in Dushanbe, Tajikistan
Bu-Ali Sina University in Hamedan, Iran
Ibn Sina Peak – named after the Scientist, on the Kyrgyzstan–Tajikistan border
Ibn Sina Foundation in Houston, Texas
Ibn Sina Hospital, Baghdad, Iraq
Ibn Sina Hospital, Istanbul, Turkey
Ibn Sina Medical College Hospital, Dhaka, Bangladesh
Ibn Sina University Hospital of Rabat-Salé at Mohammed V University in Rabat, Morocco
Ibne Sina Hospital, Multan, Punjab, Pakistan
International Ibn Sina Clinic, Dushanbe, Tajikistan
References
Citations
Sources
Further reading
Encyclopedic articles
Avicenna entry by Sajjad H. Rizvi in the Internet Encyclopedia of Philosophy
Primary literature
For an old list of other extant works, see C. Brockelmann's Geschichte der arabischen Litteratur (Weimar, 1898), vol. i, pp. 452–458.
For a current list of his works see A. Bertolacci (2006) and D. Gutas (2014) in the section "Philosophy".
Avicenne: Réfutation de l'astrologie. Edition et traduction du texte arabe, introduction, notes et lexique par Yahya Michot. Préface d'Elizabeth Teissier (Beirut-Paris: Albouraq, 2006).
William E. Gohlman (ed.), The Life of Ibn Sina. A Critical Edition and Annotated Translation, Albany: State University of New York Press, 1974.
For Ibn Sina's life, see Ibn Khallikan's Biographical Dictionary, translated by de Slane (1842); F. Wüstenfeld's Geschichte der arabischen Aerzte und Naturforscher (Göttingen, 1840).
Madelung, Wilferd and Toby Mayer (ed. and tr.), Struggling with the Philosopher: A Refutation of Avicenna's Metaphysics. A New Arabic Edition and English Translation of Shahrastani's Kitab al-Musara'a.
Secondary literature
This is, on the whole, an informed and good account of the life and accomplishments of one of the greatest influences on the development of thought both Eastern and Western. ... It is not as philosophically thorough as the works of D. Saliba, A.M. Goichon, or L. Gardet, but it is probably the best essay in English on this important thinker of the Middle Ages. (Julius R. Weinberg, The Philosophical Review, Vol. 69, No. 2, Apr. 1960, pp. 255–259)
This is a distinguished work which stands out from, and above, many of the books and articles which have been written in this century on Avicenna (Ibn Sīnā) (980–1037). It has two main features on which its distinction as a major contribution to Avicennan studies may be said to rest: the first is its clarity and readability; the second is the comparative approach adopted by the author. ... (Ian Richard Netton, Journal of the Royal Asiatic Society, Third Series, Vol. 4, No. 2, July 1994, pp. 263–264)
Y.T. Langermann (ed.), Avicenna and his Legacy. A Golden Age of Science and Philosophy, Brepols Publishers, 2010.
For a new understanding of his early career, based on a newly discovered text, see also: Michot, Yahya, Ibn Sînâ: Lettre au vizir Abû Sa'd. Editio princeps d'après le manuscrit de Bursa, traduction de l'arabe, introduction, notes et lexique (Beirut-Paris: Albouraq, 2000).
This German publication is both one of the most comprehensive general introductions to the life and works of the philosopher and physician Avicenna (Ibn Sīnā, d. 1037) and an extensive and careful survey of his contribution to the history of science. Its author is a renowned expert in Greek and Arabic medicine who has paid considerable attention to Avicenna in his recent studies. ... (Amos Bertolacci, Isis, Vol. 96, No. 4, December 2005, p. 649)
Shaikh al Rais Ibn Sina (Special number) 1958–59, Ed. Hakim Syed Zillur Rahman, Tibbia College Magazine, Aligarh Muslim University, Aligarh, India.
Medicine
Browne, Edward G. Islamic Medicine. Fitzpatrick Lectures Delivered at the Royal College of Physicians in 1919–1920, reprint: New Delhi: Goodword Books, 2001.
Pormann, Peter & Savage-Smith, Emilie. Medieval Islamic Medicine, Washington: Georgetown University Press, 2007.
Prioreschi, Plinio. Byzantine and Islamic Medicine, A History of Medicine, Vol. 4, Omaha: Horatius Press, 2001.
Syed Ziaur Rahman. Pharmacology of Avicennian Cardiac Drugs (Metaanalysis of researches and studies in Avicennian Cardiac Drugs along with English translation of Risalah al Adwiya al Qalbiyah), Ibn Sina Academy of Medieval Medicine and Sciences, Aligarh, India, 2020
Philosophy
Amos Bertolacci, The Reception of Aristotle's Metaphysics in Avicenna's Kitab al-Sifa'. A Milestone of Western Metaphysical Thought, Leiden: Brill 2006, (Appendix C contains an Overview of the Main Works by Avicenna on Metaphysics in Chronological Order).
Dimitri Gutas, Avicenna and the Aristotelian Tradition: Introduction to Reading Avicenna's Philosophical Works, Leiden, Brill 2014, second revised and expanded edition (first edition: 1988), including an inventory of Avicenna's authentic works.
Andreas Lammer: The Elements of Avicenna's Physics. Greek Sources and Arabic Innovations. Scientia graeco-arabica 20. Berlin / Boston: Walter de Gruyter, 2018.
Jon McGinnis and David C. Reisman (eds.) Interpreting Avicenna: Science and Philosophy in Medieval Islam: Proceedings of the Second Conference of the Avicenna Study Group, Leiden: Brill, 2004.
Michot, Jean R., La destinée de l'homme selon Avicenne, Louvain: Aedibus Peeters, 1986.
Nader El-Bizri, The Phenomenological Quest between Avicenna and Heidegger, Binghamton, N.Y.: Global Publications SUNY, 2000 (reprinted by SUNY Press in 2014 with a new Preface).
Nader El-Bizri, "Avicenna and Essentialism," Review of Metaphysics, Vol. 54 (June 2001), pp. 753–778.
Nader El-Bizri, "Avicenna's De Anima between Aristotle and Husserl," in The Passions of the Soul in the Metamorphosis of Becoming, ed. Anna-Teresa Tymieniecka, Dordrecht: Kluwer, 2003, pp. 67–89.
Nader El-Bizri, "Being and Necessity: A Phenomenological Investigation of Avicenna's Metaphysics and Cosmology," in Islamic Philosophy and Occidental Phenomenology on the Perennial Issue of Microcosm and Macrocosm, ed. Anna-Teresa Tymieniecka, Dordrecht: Kluwer, 2006, pp. 243–261.
Nader El-Bizri, 'Ibn Sīnā's Ontology and the Question of Being', Ishrāq: Islamic Philosophy Yearbook 2 (2011), 222–237
Nader El-Bizri, "Philosophising at the Margins of 'Shi'i Studies': Reflections on Ibn Sīnā's Ontology", in The Study of Shi'i Islam: History, Theology and Law, eds. F. Daftary and G. Miskinzoda (London: I.B. Tauris, 2014), pp. 585–597.
Reisman, David C. (ed.), Before and After Avicenna: Proceedings of the First Conference of the Avicenna Study Group, Leiden: Brill, 2003.
External links
Avicenna (Ibn-Sina) on the Subject and the Object of Metaphysics with a list of translations of the logical and philosophical works and an annotated bibliography
980s births
Year of birth unknown
1037 deaths
11th-century astronomers
11th-century Persian-language poets
11th-century philosophers
11th-century Iranian physicians
Alchemists of the medieval Islamic world
Aristotelian philosophers
Burials in Iran
Buyid viziers
Classical humanists
Critics of atheism
Epistemologists
Iranian music theorists
Islamic philosophers
Transoxanian Islamic scholars
Logicians
People from Bukhara Region
Pharmacologists of medieval Iran
Musical theorists of the medieval Islamic world
Ontologists
People from Khorasan
Persian physicists
Philosophers of logic
Philosophers of mind
Philosophers of psychology
Philosophers of religion
Philosophers of science
Samanid scholars
Unani medicine
Iranian logicians
Iranian ethicists
Samanid officials
Philosophers of mathematics
|
https://en.wikipedia.org/wiki/Analysis
|
Analysis (plural: analyses) is the process of breaking a complex topic or substance into smaller parts in order to gain a better understanding of it. The technique has been applied in the study of mathematics and logic since before Aristotle (384–322 BC), though analysis as a formal concept is a relatively recent development.
The word comes from the Ancient Greek analysis ("a breaking-up" or "an untying", from ana- "up, throughout" and lysis "a loosening"). From it also comes the word's plural, analyses.
As a formal concept, the method has variously been ascribed to Alhazen, René Descartes (Discourse on the Method), and Galileo Galilei. It has also been ascribed to Isaac Newton, in the form of a practical method of physical discovery (which he did not name).
The converse of analysis is synthesis: putting the pieces back together again in a new or different whole.
Applications
Science
The field of chemistry uses analysis in three ways: to identify the components of a particular chemical compound (qualitative analysis), to identify the proportions of components in a mixture (quantitative analysis), and to break down chemical processes and examine chemical reactions between elements of matter. As an example, the analysis of elemental concentrations is important in managing a nuclear reactor, so nuclear scientists use neutron activation analysis to obtain discrete measurements within vast samples. A matrix can have a considerable effect on the way a chemical analysis is conducted and on the quality of its results. Analysis can be done manually or with a device.
Types of analysis:
A) Qualitative analysis: determines which components are present in a given sample or compound.
Example: precipitation reactions
B) Quantitative analysis: determines the quantity of each component present in a given sample or compound.
Example: finding a concentration with a UV spectrophotometer (see the sketch below)
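The quantitative case can be made concrete with the Beer–Lambert law, A = εlc, the standard relation behind UV spectrophotometric concentration measurements. The following is a minimal sketch, not from the source; the analyte and its molar absorptivity are hypothetical.

```python
def concentration(absorbance, molar_absorptivity, path_length_cm=1.0):
    """Beer-Lambert law: A = epsilon * l * c, so c = A / (epsilon * l)."""
    return absorbance / (molar_absorptivity * path_length_cm)

# Hypothetical reading: A = 0.42 for an analyte with epsilon = 6220 L/(mol*cm),
# measured in a standard 1 cm cuvette.
print(concentration(0.42, 6220))  # ~6.8e-05 mol/L
```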
Isotopes
Chemists can use isotope analysis to assist analysts with issues in anthropology, archeology, food chemistry, forensics, geology, and a host of other questions of physical science. Analysts can discern the origins of natural and man-made isotopes in the study of environmental radioactivity.
Business
Financial statement analysis – the analysis of the accounts and the economic prospects of a firm
Financial analysis – refers to an assessment of the viability, stability, and profitability of a business, sub-business or project
Gap analysis – involves the comparison of actual performance with potential or desired performance of an organization
Business analysis – involves identifying the needs and determining the solutions to business problems
Price analysis – involves the breakdown of a price to a unit figure
Market analysis – the study of a market's suppliers and customers, and of the price determined by the interaction of supply and demand
Sum-of-the-parts analysis – method of valuation of a multi-divisional company
Opportunity analysis – the study of customer trends within an industry; customer demand and experience determine purchasing behavior
Computer science
Requirements analysis – encompasses those tasks that go into determining the needs or conditions to meet for a new or altered product, taking account of the possibly conflicting requirements of the various stakeholders, such as beneficiaries or users.
Competitive analysis (online algorithm) – shows how online algorithms perform and demonstrates the power of randomization in algorithms
Lexical analysis – the process of converting an input sequence of characters into an output sequence of tokens (symbols); see the sketch after this list
Object-oriented analysis and design – à la Booch
Program analysis (computer science) – the process of automatically analysing the behavior of computer programs
Semantic analysis (computer science) – a pass by a compiler that adds semantical information to the parse tree and performs certain checks
Static code analysis – the analysis of computer software performed without actually executing the programs built from that software
Structured systems analysis and design methodology – à la Yourdon
Syntax analysis – a process in compilers that recognizes the structure of programming languages, also known as parsing
Worst-case execution time – determines the longest time that a piece of software can take to run
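As a minimal sketch of the lexical-analysis step described above (not from the source; the token set and regular expressions are illustrative assumptions), here is a tokenizer for simple arithmetic expressions:

```python
import re

# Each token kind is a named regular expression; SKIP swallows whitespace.
TOKEN_SPEC = [
    ("NUMBER", r"\d+(?:\.\d+)?"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=()]"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

def tokenize(text):
    """Turn an input sequence of characters into (kind, text) symbols."""
    for match in MASTER.finditer(text):
        if match.lastgroup != "SKIP":
            yield (match.lastgroup, match.group())

print(list(tokenize("x = 3.14 * (y + 2)")))
# [('IDENT', 'x'), ('OP', '='), ('NUMBER', '3.14'), ('OP', '*'), ...]
```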
Economics
Agroecosystem analysis
Input–output model – when applied to a region, called a Regional Impact Multiplier System
Engineering
Analysts in the field of engineering look at requirements, structures, mechanisms, systems and dimensions. Electrical engineers analyse systems in electronics. Life cycles and system failures are broken down and studied by engineers. Engineers also examine the different factors incorporated within a design.
Intelligence
The field of intelligence employs analysts to break down and understand a wide array of questions. Intelligence agencies may use heuristics, inductive and deductive reasoning, social network analysis, dynamic network analysis, link analysis, and brainstorming to sort through problems they face. Military intelligence may explore issues through the use of game theory, Red Teaming, and wargaming. Signals intelligence applies cryptanalysis and frequency analysis to break codes and ciphers. Business intelligence applies theories of competitive intelligence analysis and competitor analysis to resolve questions in the marketplace. Law enforcement intelligence applies a number of theories in crime analysis.
Linguistics
Linguistics explores individual languages and language in general. It breaks language down and analyses its component parts: theory, sounds and their meaning, utterance usage, word origins, the history of words, the meaning of words and word combinations, sentence construction, basic construction beyond the sentence level, stylistics, and conversation. It examines the above using statistics and modeling, and semantics. It analyses language in context of anthropology, biology, evolution, geography, history, neurology, psychology, and sociology. It also takes the applied approach, looking at individual language development and clinical issues.
Literature
Literary criticism is the analysis of literature. The focus can be as diverse as the analysis of Homer or Freud. While not all literary-critical methods are primarily analytical in nature, the main approach to the teaching of literature in the west since the mid-twentieth century, literary formal analysis or close reading, is. This method, rooted in the academic movement labelled The New Criticism, approaches texts – chiefly short poems such as sonnets, which by virtue of their small size and significant complexity lend themselves well to this type of analysis – as units of discourse that can be understood in themselves, without reference to biographical or historical frameworks. This method of analysis breaks up the text linguistically in a study of prosody (the formal analysis of meter) and phonic effects such as alliteration and rhyme, and cognitively in examination of the interplay of syntactic structures, figurative language, and other elements of the poem that work to produce its larger effects.
Mathematics
Modern mathematical analysis is the study of infinite processes. It is the branch of mathematics that includes calculus. It can be applied in the study of classical concepts of mathematics, such as real numbers, complex variables, trigonometric functions, and algorithms, or of non-classical concepts like constructivism, harmonics, infinity, and vectors.
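A standard textbook illustration of such an infinite process (not from the source) is the geometric series, whose value is defined as a limit of finite partial sums:

\[
\sum_{k=0}^{\infty} r^{k}
  \;=\; \lim_{n \to \infty} \sum_{k=0}^{n} r^{k}
  \;=\; \lim_{n \to \infty} \frac{1 - r^{n+1}}{1 - r}
  \;=\; \frac{1}{1 - r},
  \qquad |r| < 1,
\]

where the limit exists because \(r^{n+1} \to 0\) whenever \(|r| < 1\).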
Florian Cajori explains in A History of Mathematics (1893) the difference between modern and ancient mathematical analysis, as distinct from logical analysis, as follows:
The terms synthesis and analysis are used in mathematics in a more special sense than in logic. In ancient mathematics they had a different meaning from what they now have. The oldest definition of mathematical analysis as opposed to synthesis is that given in [appended to] Euclid, XIII. 5, which in all probability was framed by Eudoxus: "Analysis is the obtaining of the thing sought by assuming it and so reasoning up to an admitted truth; synthesis is the obtaining of the thing sought by reasoning up to the inference and proof of it."
The analytic method is not conclusive, unless all operations involved in it are known to be reversible. To remove all doubt, the Greeks, as a rule, added to the analytic process a synthetic one, consisting of a reversion of all operations occurring in the analysis. Thus the aim of analysis was to aid in the discovery of synthetic proofs or solutions.
James Gow uses a similar argument as Cajori, with the following clarification, in his A Short History of Greek Mathematics (1884):
The synthetic proof proceeds by shewing that the proposed new truth involves certain admitted truths. An analytic proof begins by an assumption, upon which a synthetic reasoning is founded. The Greeks distinguished theoretic from problematic analysis. A theoretic analysis is of the following kind. To prove that A is B, assume first that A is B. If so, then, since B is C and C is D and D is E, therefore A is E. If this be known a falsity, A is not B. But if this be a known truth and all the intermediate propositions be convertible, then the reverse process, A is E, E is D, D is C, C is B, therefore A is B, constitutes a synthetic proof of the original theorem. Problematic analysis is applied in all cases where it is proposed to construct a figure which is assumed to satisfy a given condition. The problem is then converted into some theorem which is involved in the condition and which is proved synthetically, and the steps of this synthetic proof taken backwards are a synthetic solution of the problem.
Music
Musical analysis – a process attempting to answer the question "How does this music work?"
Musical analysis is the study of how composers combine notes to make music. Analyses differ from composer to composer and depend on the culture and history of the music studied. An analysis is meant to make the music easier to understand.
Schenkerian analysis
Schenkerian analysis is a method of musical analysis that focuses on producing a graphic representation of a piece. It comprises both an analytical procedure and a notational style. Simply put, it analyzes tonal music, including all the chords and tones within a composition.
Philosophy
Philosophical analysis – a general term for the techniques used by philosophers
Philosophical analysis refers to the clarification of words and composite expressions and of the meaning they entail. It digs deeper into the meaning of words and seeks to clarify that meaning by contrasting various definitions. It is the study of reality, the justification of claims, and the analysis of various concepts. Branches of philosophy include logic, justification, metaphysics, values and ethics. If a question can be answered empirically, that is, by using the senses, it is not considered philosophical. Non-philosophical questions also include events that happened in the past, or questions that science or mathematics can answer.
Analysis is the name of a prominent journal in philosophy.
Psychotherapy
Psychoanalysis – seeks to elucidate connections among unconscious components of patients' mental processes
Transactional analysis
Transactional analysis is used by therapists to try to gain a better understanding of the unconscious. It focuses on understanding and intervening in human behavior.
Policy
Policy analysis – The use of statistical data to predict the effects of policy decisions made by governments and agencies
Policy analysis includes a systematic process to find the most efficient and effective option to address the current situation.
Qualitative analysis – The use of anecdotal evidence to predict the effects of policy decisions or, more generally, influence policy decisions
Signal processing
Finite element analysis – a computer simulation technique used in engineering analysis
Independent component analysis
Link quality analysis – the analysis of signal quality
Path quality analysis
Fourier analysis – the decomposition of a signal into sinusoidal frequency components (see the sketch below)
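A minimal sketch of Fourier analysis as listed above (not from the source; the test signal and sampling rate are made up), decomposing a sampled signal into its frequency components with NumPy:

```python
import numpy as np

fs = 100.0                           # sampling rate in Hz (hypothetical)
t = np.arange(0, 1, 1 / fs)          # one second of samples
# Test signal: a 5 Hz sinusoid plus a weaker 12 Hz sinusoid.
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

spectrum = np.fft.rfft(x)                  # Fourier analysis of the real signal
freqs = np.fft.rfftfreq(len(x), 1 / fs)    # frequency (Hz) of each bin

# The two strongest components should sit at 5 Hz and 12 Hz.
strongest = freqs[np.argsort(np.abs(spectrum))[-2:]]
print(sorted(strongest))  # [5.0, 12.0]
```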
Statistics
In statistics, the term analysis may refer to any method used for data analysis. Among the many such methods, some are:
Analysis of variance (ANOVA) – a collection of statistical models and their associated procedures which compare means by splitting the overall observed variance into different parts (see the sketch after this list)
Boolean analysis – a method to find deterministic dependencies between variables in a sample, mostly used in exploratory data analysis
Cluster analysis – techniques for finding groups (called clusters), based on some measure of proximity or similarity
Factor analysis – a method to construct models describing a data set of observed variables in terms of a smaller set of unobserved variables (called factors)
Meta-analysis – combines the results of several studies that address a set of related research hypotheses
Multivariate analysis – analysis of data involving several variables, such as by factor analysis, regression analysis, or principal component analysis
Principal component analysis – transformation of a sample of correlated variables into uncorrelated variables (called principal components), mostly used in exploratory data analysis
Regression analysis – techniques for analysing the relationships between several predictive variables and one or more outcomes in the data
Scale analysis (statistics) – methods to analyse survey data by scoring responses on a numeric scale
Sensitivity analysis – the study of how the variation in the output of a model depends on variations in the inputs
Sequential analysis – evaluation of sampled data as it is collected, until the criterion of a stopping rule is met
Spatial analysis – the study of entities using geometric or geographic properties
Time-series analysis – methods that attempt to understand a sequence of data points spaced apart at uniform time intervals
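As a minimal sketch of one of these methods (the measurements below are made up for illustration), a one-way analysis of variance with SciPy:

```python
from scipy import stats

# Three hypothetical treatment groups of measurements.
group_a = [4.1, 3.9, 4.3, 4.0]
group_b = [4.8, 5.1, 4.9, 5.0]
group_c = [4.2, 4.4, 4.1, 4.3]

# One-way ANOVA: splits the overall observed variance into between-group
# and within-group parts and asks whether the group means plausibly differ.
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.1f}, p = {p_value:.4g}")
```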
Other
Aura analysis – a technique in which supporters of the method claim that the body's aura, or energy field, is analysed
Bowling analysis – analysis of the performance of cricket bowlers
Lithic analysis – the analysis of stone tools using basic scientific techniques
Lithic analysis is most often used by archeologists to determine which types of tools were used in a given time period, based on the artifacts discovered.
Protocol analysis – a means for extracting persons' thoughts while they are performing a task
See also
Formal analysis
Metabolism in biology
Methodology
Scientific method
References
External links
Abstraction
Critical thinking skills
Emergence
Empiricism
Epistemological theories
Intelligence
Mathematical modeling
Metaphysics of mind
Methodology
Ontology
Philosophy of logic
Rationalism
Reasoning
Research methods
Scientific method
Theory of mind
|
https://en.wikipedia.org/wiki/Automorphism
|
In mathematics, an automorphism is an isomorphism from a mathematical object to itself. It is, in some sense, a symmetry of the object, and a way of mapping the object to itself while preserving all of its structure. The set of all automorphisms of an object forms a group, called the automorphism group. It is, loosely speaking, the symmetry group of the object.
Definition
In the context of abstract algebra, a mathematical object is an algebraic structure such as a group, ring, or vector space. An automorphism is simply a bijective homomorphism of an object with itself. (The definition of a homomorphism depends on the type of algebraic structure; see, for example, group homomorphism, ring homomorphism, and linear operator.)
The identity morphism (identity mapping) is called the trivial automorphism in some contexts. Correspondingly, other (non-identity) automorphisms are called nontrivial automorphisms.
The exact definition of an automorphism depends on the type of "mathematical object" in question and what, precisely, constitutes an "isomorphism" of that object. The most general setting in which these words have meaning is an abstract branch of mathematics called category theory. Category theory deals with abstract objects and morphisms between those objects.
In category theory, an automorphism is an endomorphism (i.e., a morphism from an object to itself) which is also an isomorphism (in the categorical sense of the word, meaning there exists a right and left inverse endomorphism).
This is a very abstract definition since, in category theory, morphisms are not necessarily functions and objects are not necessarily sets. In most concrete settings, however, the objects will be sets with some additional structure and the morphisms will be functions preserving that structure.
Automorphism group
If the automorphisms of an object X form a set (instead of a proper class), then they form a group under composition of morphisms. This group is called the automorphism group of X.
Closure – the composition of two automorphisms is another automorphism.
Associativity – it is part of the definition of a category that composition of morphisms is associative.
Identity – the identity is the identity morphism from an object to itself, which is an automorphism.
Inverses – by definition every isomorphism has an inverse that is also an isomorphism, and since the inverse is also an endomorphism of the same object, it is an automorphism.
The automorphism group of an object X in a category C is denoted Aut_C(X), or simply Aut(X) if the category is clear from context.
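As a concrete sketch (an illustration, not from the article), the automorphism group of the cyclic group Z_6 under addition mod 6 can be enumerated, and the group properties above checked by brute force; representing each map by its tuple of values is an implementation choice:

```python
from math import gcd

n = 6  # the cyclic group Z_6 under addition mod 6

# Every automorphism of Z_n sends 1 to a unit u (gcd(u, n) = 1) and is the
# map x -> u*x mod n; represent each map by the tuple of its values.
units = [u for u in range(1, n) if gcd(u, n) == 1]
autos = {tuple(u * x % n for x in range(n)) for u in units}

identity = tuple(range(n))
compose = lambda f, g: tuple(f[g[x]] for x in range(n))

assert identity in autos                                                  # identity
assert all(compose(f, g) in autos for f in autos for g in autos)          # closure
assert all(any(compose(f, g) == identity for g in autos) for f in autos)  # inverses

print(len(autos))  # 2: the identity and negation (x -> 5x = -x mod 6)
```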
Examples
In set theory, an arbitrary permutation of the elements of a set X is an automorphism. The automorphism group of X is also called the symmetric group on X.
In elementary arithmetic, the set of integers, Z, considered as a group under addition, has a unique nontrivial automorphism: negation. Considered as a ring, however, it has only the trivial automorphism. Generally speaking, negation is an automorphism of any abelian group, but not of a ring or field.
A group automorphism is a group isomorphism from a group to itself. Informally, it is a permutation of the group elements such that the structure remains unchanged. For every group G there is a natural group homomorphism G → Aut(G) whose image is the group Inn(G) of inner automorphisms and whose kernel is the center of G. Thus, if G has trivial center it can be embedded into its own automorphism group.
In linear algebra, an endomorphism of a vector space V is a linear operator V → V. An automorphism is an invertible linear operator on V. When the vector space is finite-dimensional, the automorphism group of V is the same as the general linear group, GL(V). (The set of all endomorphisms of V is itself an algebra over the same base field as V, whose invertible elements are precisely those of GL(V).)
A field automorphism is a bijective ring homomorphism from a field to itself. In the cases of the rational numbers (Q) and the real numbers (R) there are no nontrivial field automorphisms. Some subfields of R have nontrivial field automorphisms, which however do not extend to all of R (because they cannot preserve the property of a number having a square root in R). In the case of the complex numbers, C, there is a unique nontrivial automorphism that sends R into R: complex conjugation, but there are infinitely (uncountably) many "wild" automorphisms (assuming the axiom of choice). Field automorphisms are important to the theory of field extensions, in particular Galois extensions. In the case of a Galois extension L/K the subgroup of all automorphisms of L fixing K pointwise is called the Galois group of the extension.
The automorphism group of the quaternions (H) as a ring consists of the inner automorphisms, by the Skolem–Noether theorem: maps of the form a ↦ bab⁻¹ for a nonzero quaternion b. This group is isomorphic to SO(3), the group of rotations in 3-dimensional space.
The automorphism group of the octonions (O) is the exceptional Lie group G2.
In graph theory an automorphism of a graph is a permutation of the nodes that preserves edges and non-edges. In particular, if two nodes are joined by an edge, so are their images under the permutation.
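A minimal sketch of the graph case (an illustration, not from the article; the example graph, a 4-cycle, is made up): a permutation of the nodes is an automorphism exactly when it maps the edge set onto itself.

```python
# A 4-cycle on nodes 0..3, edges stored as sorted pairs.
edges = {(0, 1), (1, 2), (2, 3), (0, 3)}

def is_automorphism(perm, edges):
    """True if the node permutation maps edges to edges (and hence, being a
    bijection, non-edges to non-edges)."""
    norm = lambda u, v: (min(u, v), max(u, v))
    return {norm(perm[u], perm[v]) for (u, v) in edges} == edges

print(is_automorphism([1, 2, 3, 0], edges))  # True: rotating the cycle
print(is_automorphism([1, 0, 2, 3], edges))  # False: swapping only 0 and 1
```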
In geometry, an automorphism may be called a motion of the space. Specialized terminology is also used:
In metric geometry an automorphism is a self-isometry. The automorphism group is also called the isometry group.
In the category of Riemann surfaces, an automorphism is a biholomorphic map (also called a conformal map), from a surface to itself. For example, the automorphisms of the Riemann sphere are Möbius transformations.
An automorphism of a differentiable manifold M is a diffeomorphism from M to itself. The automorphism group is sometimes denoted Diff(M).
In topology, morphisms between topological spaces are called continuous maps, and an automorphism of a topological space is a homeomorphism of the space to itself, or self-homeomorphism (see homeomorphism group). In this example it is not sufficient for a morphism to be bijective to be an isomorphism.
History
One of the earliest group automorphisms (automorphism of a group, not simply a group of automorphisms of points) was given by the Irish mathematician William Rowan Hamilton in 1856, in his icosian calculus, where he discovered an order two automorphism, writing:
so that is a new fifth root of unity, connected with the former fifth root by relations of perfect reciprocity.
Inner and outer automorphisms
In some categories—notably groups, rings, and Lie algebras—it is possible to separate automorphisms into two types, called "inner" and "outer" automorphisms.
In the case of groups, the inner automorphisms are the conjugations by the elements of the group itself. For each element a of a group G, conjugation by a is the operation given by g ↦ aga⁻¹ (or a⁻¹ga; usage varies). One can easily check that conjugation by a is a group automorphism. The inner automorphisms form a normal subgroup of Aut(G), denoted by Inn(G).
The other automorphisms are called outer automorphisms. The quotient group Aut(G) / Inn(G) is usually denoted by Out(G); the non-trivial elements are the cosets that contain the outer automorphisms.
The same definition holds in any unital ring or algebra where a is any invertible element. For Lie algebras the definition is slightly different.
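A minimal sketch of an inner automorphism (an illustration, not from the article): conjugation by a fixed element of the symmetric group S3, with permutations represented as tuples of images:

```python
from itertools import permutations

# S3: all permutations of {0, 1, 2}; composition is (f o g)(x) = f(g(x)).
S3 = list(permutations(range(3)))
compose = lambda f, g: tuple(f[g[x]] for x in range(3))
inverse = lambda f: tuple(sorted(range(3), key=lambda x: f[x]))

def conjugation_by(a):
    """The inner automorphism g -> a g a^(-1)."""
    a_inv = inverse(a)
    return lambda g: compose(a, compose(g, a_inv))

phi = conjugation_by((1, 0, 2))  # conjugate by the transposition swapping 0 and 1

# phi respects the group operation ...
assert all(phi(compose(g, h)) == compose(phi(g), phi(h)) for g in S3 for h in S3)
# ... and permutes S3, so it is an automorphism of S3.
assert sorted(phi(g) for g in S3) == sorted(S3)
```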
See also
Antiautomorphism
Automorphism (in Sudoku puzzles)
Characteristic subgroup
Endomorphism ring
Frobenius automorphism
Morphism
Order automorphism (in order theory).
Relation-preserving automorphism
Fractional Fourier transform
References
External links
Automorphism at Encyclopaedia of Mathematics
Morphisms
Abstract algebra
Symmetry
|
https://en.wikipedia.org/wiki/Anaximander
|
Anaximander (Greek: Anaximandros) was a pre-Socratic Greek philosopher who lived in Miletus, a city of Ionia (in modern-day Turkey). He belonged to the Milesian school and learned the teachings of his master Thales. He succeeded Thales and became the second master of that school, where he counted Anaximenes and, arguably, Pythagoras amongst his pupils.
Little of his life and work is known today. According to available historical documents, he is the first philosopher known to have written down his studies, although only one fragment of his work remains. Fragmentary testimonies found in documents after his death provide a portrait of the man.
Anaximander was an early proponent of science and tried to observe and explain different aspects of the universe, with a particular interest in its origins, claiming that nature is ruled by laws, just like human societies, and anything that disturbs the balance of nature does not last long. Like many thinkers of his time, Anaximander's philosophy included contributions to many disciplines. In astronomy, he attempted to describe the mechanics of celestial bodies in relation to the Earth. In physics, his postulation that the indefinite (or apeiron) was the source of all things led Greek philosophy to a new level of conceptual abstraction. His knowledge of geometry allowed him to introduce the gnomon in Greece. He created a map of the world that contributed greatly to the advancement of geography. He was also involved in the politics of Miletus and was sent as a leader to one of its colonies.
Biography
Anaximander, son of Praxiades, was born in the third year of the 42nd Olympiad (610 BC). According to Apollodorus of Athens, Greek grammarian of the 2nd century BC, he was sixty-four years old during the second year of the 58th Olympiad (547–546 BC), and died shortly afterwards.
Establishing a timeline of his work is now impossible, since no document provides chronological references. Themistius, a 4th-century Byzantine rhetorician, mentions that he was the "first of the known Greeks to publish a written document on nature." Therefore, his texts would be amongst the earliest written in prose, at least in the Western world. By the time of Plato, his philosophy was almost forgotten, and Aristotle, his successor Theophrastus and a few doxographers provide us with the little information that remains. However, we know from Aristotle that Thales, also from Miletus, precedes Anaximander. It is debatable whether Thales actually was the teacher of Anaximander, but there is no doubt that Anaximander was influenced by Thales' theory that everything is derived from water. One thing that is not debatable is that even the ancient Greeks considered Anaximander to be from the Monist school which began in Miletus, with Thales followed by Anaximander and which ended with Anaximenes. 3rd-century Roman rhetorician Aelian depicts Anaximander as leader of the Milesian colony to Apollonia on the Black Sea coast, and hence some have inferred that he was a prominent citizen. Indeed, Various History (III, 17) explains that philosophers sometimes also dealt with political matters. It is very likely that leaders of Miletus sent him there as a legislator to create a constitution or simply to maintain the colony's allegiance.
Anaximander lived the final few years of his life as a subject of the Persian Achaemenid Empire.
Theories
Anaximander's theories were influenced by the Greek mythical tradition, and by some ideas of Thales – the father of Western philosophy – as well as by observations made by older civilizations in the Near East, especially Babylon. All these were developed rationally. In his desire to find some universal principle, he assumed, like traditional religion, the existence of a cosmic order; and his ideas on this used the old language of myths which ascribed divine control to various spheres of reality. This was a common practice for the Greek philosophers in a society which saw gods everywhere, and therefore could fit their ideas into a tolerably elastic system.
Some scholars see a gap between the existing mythical and the new rational way of thought which is the main characteristic of the archaic period (8th to 6th century BC) in the Greek city-states. This has given rise to the phrase "Greek miracle". But there may not have been such an abrupt break as initially appears. The basic elements of nature (water, air, fire, earth) which the first Greek philosophers believed made up the universe in fact represent the primordial forces imagined in earlier ways of thinking. Their collision produced what the mythical tradition had called cosmic harmony. In the old cosmogonies – Hesiod (8th – 7th century BC) and Pherecydes (6th century BC) – Zeus establishes his order in the world by destroying the powers which were threatening this harmony (the Titans). Anaximander claimed that the cosmic order is not monarchic but geometric, and that this causes the equilibrium of the earth, which lies in the centre of the universe. This is the projection on nature of a new political order and a new space organized around a centre which is the static point of the system, in society as in nature. In this space there is isonomy (equal rights) and all the forces are symmetrical and transferable. Decisions are now taken by the assembly of the demos in the agora, which lies in the middle of the city.
The same rational way of thought led him to introduce the abstract apeiron (indefinite, infinite, boundless, unlimited) as an origin of the universe, a concept that is probably influenced by the original Chaos (gaping void, abyss, formless state) from which everything else appeared in the mythical Greek cosmogony. It also takes notice of the mutual changes between the four elements. The origin, then, must be something else, unlimited at its source, that could create without experiencing decay, so that genesis would never stop.
Apeiron
The Refutation attributed to Hippolytus of Rome (I, 5), and the later 6th-century Byzantine philosopher Simplicius of Cilicia, attribute to Anaximander the earliest use of the word apeiron ("infinite" or "limitless") to designate the original principle. He was the first philosopher to employ, in a philosophical context, the term archē, which until then had meant beginning or origin.
"That Anaximander called this something by the name of is the natural interpretation of what Theophrastos says; the current statement that the term was introduced by him appears to be due to a misunderstanding."
And "Hippolytos, however, is not an independent authority, and the only question is what Theophrastos wrote."
For him, it became no longer a mere point in time, but a source that could perpetually give birth to whatever will be. The indefiniteness is spatial in early usages as in Homer (indefinite sea) and as in Xenophanes (6th century BC) who said that the earth went down indefinitely (to apeiron) i.e. beyond the imagination or concept of men.
Burnet (1930) in Early Greek Philosophy says:
"Nearly all we know of Anaximander's system is derived in the last resort from Theophrastos, who certainly knew his book. He seems once at least to have quoted Anaximander's own words, and he criticised his style. Here are the remains of what he said of him in the First Book:
"Anaximander of Miletos, son of Praxiades, a fellow-citizen and associate of Thales, said that the material cause and first element of things was the Infinite, he being the first to introduce this name of the material cause. He says it is neither water nor any other of the so-called elements, but a substance different from them which is infinite" [apeiron, or ] "from which arise all the heavens and the worlds within them.—Phys, Op. fr. 2 (Dox. p. 476; R. P. 16)."
Burnet's quote from the "First Book" is his translation of Theophrastos' Physic Opinion fragment 2 as it appears in p. 476 of Historia Philosophiae Graecae (1898) by Ritter and Preller and section 16 of Doxographi Graeci (1879) by Diels.
By ascribing the "Infinite" with a "material cause", Theophrastos is following the Aristotelian tradition of "nearly always discussing the facts from the point of view of his own system".
Aristotle writes (Metaphysics, I.III 3–4) that the Pre-Socratics were searching for the element that constitutes all things. While each pre-Socratic philosopher gave a different answer as to the identity of this element (water for Thales and air for Anaximenes), Anaximander understood the beginning or first principle to be an endless, unlimited primordial mass (apeiron), subject to neither old age nor decay, that perpetually yielded fresh materials from which everything we perceive is derived. He proposed the theory of the apeiron in direct response to the earlier theory of his teacher, Thales, who had claimed that the primary substance was water. The notion of temporal infinity was familiar to the Greek mind from remote antiquity in the religious concept of immortality, and Anaximander's description was in terms appropriate to this conception. This archē is called "eternal and ageless". (Hippolytus (?), Refutation, I, 6, 1; DK B2)
"Aristotle puts things in his own way regardless of historical considerations, and it is difficult to see that it is more of an anachronism to call the Boundless " intermediate between the elements " than to say that it is " distinct from the elements." Indeed, if once we introduce the elements at all, the former description is the more adequate of the two. At any rate, if we refuse to understand these passages as referring to Anaximander, we shall have to say that Aristotle paid a great deal of attention to some one whose very name has been lost, and who not only agreed with some of Anaximander's views, but also used some of his most characteristic expressions. We may add that in one or two places Aristotle certainly seems to identify the " intermediate " with the something " distinct from " the elements."
"It is certain that he [Anaximander] cannot have said anything about elements, which no one thought of before Empedokles, and no one could think of before Parmenides. The question has only been mentioned because it has given rise to a lengthy controversy, and because it throws light on the historical value of Aristotle's statements. From the point of view of his own system, these may be justified; but we shall have to remember in other cases that, when he seems to attribute an idea to some earlier thinker, we are not bound to take what he says in an historical sense."
For Anaximander, the principle of things, the constituent of all substances, is nothing determined and not an element such as water in Thales' view. Neither is it something halfway between air and water, or between air and fire, thicker than air and fire, or more subtle than water and earth. Anaximander argues that water cannot embrace all of the opposites found in nature — for example, water can only be wet, never dry — and therefore cannot be the one primary substance; nor could any of the other candidates. He postulated the apeiron as a substance that, although not directly perceptible to us, could explain the opposites he saw around him.
"If Thales had been right in saying that water was the fundamental reality, it would not be easy to see how anything else could ever have existed. One side of the opposition, the cold and moist, would have had its way unchecked, and the warm and dry would have been driven from the field long ago. We must, then, have something not itself one of the warring opposites, something more primitive, out of which they arise, and into which they once more pass away."
Anaximander explains how the four elements of ancient physics (air, earth, water and fire) are formed, and how Earth and terrestrial beings are formed through their interactions. Unlike other Pre-Socratics, he never defines this principle precisely, and it has generally been understood (e.g., by Aristotle and by Saint Augustine) as a sort of primal chaos. According to him, the Universe originates in the separation of opposites in the primordial matter. It embraces the opposites of hot and cold, wet and dry, and directs the movement of things; an entire host of shapes and differences then grow that are found in "all the worlds" (for he believed there were many).
"Anaximander taught, then, that there was an eternal. The indestructible something out of which everything arises, and into which everything returns; a boundless stock from which the waste of existence is continually made good, "elements.". That is only the natural development of the thought we have ascribed to Thales, and there can be no doubt that Anaximander at least formulated it distinctly. Indeed, we can still follow to some extent the reasoning which led him to do so. Thales had regarded water as the most likely thing to be that of which all others are forms; Anaximander appears to have asked how the primary substance could be one of these particular things. His argument seems to be preserved by Aristotle, who has the following passage in his discussion of the Infinite: "Further, there cannot be a single, simple body which is infinite, either, as some hold, one distinct from the elements, which they then derive from it, or without this qualification. For there are some who make this. (i.e. a body distinct from the elements). the infinite, and not air or water, in order that the other things may not be destroyed by their infinity. They are in opposition one to another. air is cold, water moist, and fire hot. and therefore, if any one of them were infinite, the rest would have ceased to be by this time. Accordingly they say that what is infinite is something other than the elements, and from it the elements arise.'—Aristotle Physics. F, 5 204 b 22 (Ritter and Preller (1898) Historia Philosophiae Graecae, section 16 b)."
Anaximander maintains that all dying things are returning to the element from which they came (apeiron). The one surviving fragment of Anaximander's writing deals with this matter. Simplicius transmitted it as a quotation, which describes the balanced and mutual changes of the elements:
Whence things have their origin,
Thence also their destruction happens,
According to necessity;
For they give to each other justice and recompense
For their injustice
In conformity with the ordinance of Time.
Simplicius mentions that Anaximander said all these "in poetic terms", meaning that he used the old mythical language. The goddess Justice (Dike) keeps the cosmic order. This concept of returning to the element of origin was often revisited afterwards, notably by Aristotle, and by the Greek tragedian Euripides: "what comes from earth must return to earth." Friedrich Nietzsche, in his Philosophy in the Tragic Age of the Greeks, stated that Anaximander viewed "... all coming-to-be as though it were an illegitimate emancipation from eternal being, a wrong for which destruction is the only penance." Physicist Max Born, in commenting upon Werner Heisenberg's arriving at the idea that the elementary particles of quantum mechanics are to be seen as different manifestations, different quantum states, of one and the same "primordial substance," proposed that this primordial substance be called apeiron.
A free-floating Earth
Anaximander was the first to conceive a mechanical model of the world. In his model, the Earth floats very still in the centre of the infinite, not supported by anything. It remains "in the same place because of its indifference", a point of view that Aristotle considered ingenious, in On the Heavens. Its curious shape is that of a cylinder with a height one-third of its diameter. The flat top forms the inhabited world, which is surrounded by a circular oceanic mass.
Carlo Rovelli suggests that Anaximander took the idea of the Earth's shape as a floating disk from Thales, who had imagined the Earth floating in water, the "immense ocean from which everything is born and upon which the Earth floats." Anaximander was then able to envisage the Earth at the centre of an infinite space, in which case it required no support as there was nowhere "down" to fall. In Rovelli's view, the shape – a cylinder or a sphere – is unimportant compared to the appreciation of a "finite body that floats free in space."
Anaximander's realization that the Earth floats free without falling and does not need to be resting on something has been indicated by many as the first cosmological revolution and the starting point of scientific thinking. Karl Popper calls this idea "one of the boldest, most revolutionary, and most portentous ideas in the whole history of human thinking." Such a model allowed the concept that celestial bodies could pass under the Earth, opening the way to Greek astronomy. Rovelli suggests that seeing the stars circling the Pole star, and both vanishing below the horizon on one side and reappearing above it on the other, would suggest to the astronomer that there was a void both above and below the Earth.
Cosmology
Anaximander's bold use of non-mythological explanatory hypotheses considerably distinguishes him from previous cosmology writers such as Hesiod. It indicates a pre-Socratic effort to demystify physical processes. His major contribution to history was writing the oldest prose document about the Universe and the origins of life; for this he is often called the "Father of Cosmology" and founder of astronomy. However, pseudo-Plutarch states that he still viewed celestial bodies as deities. He placed the celestial bodies in the wrong order: he thought that the stars were nearest to the earth, then the moon, and the sun farthest away. His scheme is compatible with the Indo-Iranian philosophical traditions contained in the Iranian Avesta and the Indian Upanishads.
At the origin, after the separation of hot and cold, a ball of flame appeared that surrounded Earth like bark on a tree. This ball broke apart to form the rest of the Universe. It resembled a system of hollow concentric wheels, filled with fire, with the rims pierced by holes like those of a flute. Consequently, the Sun was the fire that one could see through a hole the same size as the Earth on the farthest wheel, and an eclipse corresponded with the occlusion of that hole. The diameter of the solar wheel was twenty-seven times that of the Earth (or twenty-eight, depending on the sources) and the lunar wheel, whose fire was less intense, eighteen (or nineteen) times. Its hole could change shape, thus explaining lunar phases. The stars and the planets, located closer, followed the same model.
Anaximander was the first astronomer to consider the Sun as a huge mass, and consequently, to realize how far from Earth it might be, and the first to present a system where the celestial bodies turned at different distances. Furthermore, according to Diogenes Laertius (II, 2), he built a celestial sphere. This invention undoubtedly made him the first to realize the obliquity of the Zodiac as the Roman philosopher Pliny the Elder reports in Natural History (II, 8). It is a little early to use the term ecliptic, but his knowledge and work on astronomy confirm that he must have observed the inclination of the celestial sphere in relation to the plane of the Earth to explain the seasons. The doxographer and theologian Aetius attributes to Pythagoras the exact measurement of the obliquity.
Multiple worlds
According to Simplicius, Anaximander already speculated on the plurality of worlds, similar to atomists Leucippus and Democritus, and later philosopher Epicurus. These thinkers supposed that worlds appeared and disappeared for a while, and that some were born when others perished. They claimed that this movement was eternal, "for without movement, there can be no generation, no destruction".
In addition to Simplicius, Hippolytus reports Anaximander's claim that from the infinite comes the principle of beings, which themselves come from the heavens and the worlds (several doxographers use the plural when this philosopher is referring to the worlds within, which are often infinite in quantity). Cicero writes that he attributes different gods to the countless worlds.
This theory places Anaximander close to the Atomists and the Epicureans who, more than a century later, also claimed that an infinity of worlds appeared and disappeared. In the timeline of the Greek history of thought, some thinkers conceptualized a single world (Plato, Aristotle, Anaxagoras and Archelaus), while others instead speculated on the existence of a series of worlds, continuous or non-continuous (Anaximenes, Heraclitus, Empedocles and Diogenes).
Meteorological phenomena
Anaximander attributed some phenomena, such as thunder and lightning, to the intervention of elements, rather than to divine causes. In his system, thunder results from the shock of clouds hitting each other; the loudness of the sound is proportionate with that of the shock. Thunder without lightning is the result of the wind being too weak to emit any flame, but strong enough to produce a sound. A flash of lightning without thunder is a jolt of the air that disperses and falls, allowing a less active fire to break free. Thunderbolts are the result of a thicker and more violent air flow.
He saw the sea as a remnant of the mass of humidity that once surrounded Earth. A part of that mass evaporated under the sun's action, thus causing the winds and even the rotation of the celestial bodies, which he believed were attracted to places where water is more abundant. He explained rain as a product of the humidity pumped up from Earth by the sun. For him, the Earth was slowly drying up and water only remained in the deepest regions, which someday would go dry as well. According to Aristotle's Meteorology (II, 3), Democritus also shared this opinion.
Origin of mankind
Anaximander speculated about the beginnings and origin of animal life, and suggested that humans came from other animals that lived in the waters. According to his evolutionary theory, animals sprang out of the sea long ago, born trapped in a spiny bark; as they got older, the bark would dry up and the animals would be able to break out of it. As the early humidity evaporated, dry land emerged and, in time, humankind had to adapt. The 3rd-century Roman writer Censorinus reports a similar account.
Anaximander put forward the idea that humans had to spend part of this transition inside the mouths of big fish to protect themselves from the Earth's climate until they could come out in open air and lose their scales. He thought that, considering humans' extended infancy, we could not have survived in the primeval world in the same manner we do presently.
Other accomplishments
Cartography
Both Strabo and Agathemerus (later Greek geographers) claim that, according to the geographer Eratosthenes, Anaximander was the first to publish a map of the world. The map probably inspired the Greek historian Hecataeus of Miletus to draw a more accurate version. Strabo viewed both as the first geographers after Homer.
Maps were produced in ancient times, notably in Egypt, Lydia, the Middle East, and Babylon, though only a few small examples survive. The unique surviving example of a world map, the late Babylonian Map of the World, dates from after the 9th century BC but is probably based on a much older map. These maps indicated directions, roads, towns, borders, and geological features. Anaximander's innovation was to represent the entire inhabited land known to the ancient Greeks.
Such an accomplishment is more significant than it at first appears. Anaximander most likely drew this map for three reasons. First, it could be used to improve navigation and trade between Miletus's colonies and other colonies around the Mediterranean Sea and Black Sea. Second, Thales would probably have found it easier to convince the Ionian city-states to join in a federation in order to push the Median threat away if he possessed such a tool. Finally, the philosophical idea of a global representation of the world simply for the sake of knowledge was reason enough to design one.
Surely aware of the sea's convexity, he may have designed his map on a slightly rounded metal surface. The centre or “navel” of the world (omphalós gẽs) could have been Delphi, but is more likely in Anaximander's time to have been located near Miletus. The Aegean Sea was near the map's centre and enclosed by three continents, themselves located in the middle of the ocean and isolated like islands by sea and rivers. Europe was bordered on the south by the Mediterranean Sea and was separated from Asia by the Black Sea, the Lake Maeotis, and, further east, either by the Phasis River (now called the Rioni in Georgia) or the Tanais. The Nile flowed south into the ocean, separating Libya (which was the name for the part of the then-known African continent) from Asia.
Gnomon
The Suda relates that Anaximander explained some basic notions of geometry. It also mentions his interest in the measurement of time and associates him with the introduction of the gnomon in Greece. In Lacedaemon, he participated in the construction, or at least in the adjustment, of sundials to indicate solstices and equinoxes. Indeed, a gnomon required adjustment from one place to another because of the difference in latitude.
In his time, the gnomon was simply a vertical pillar or rod mounted on a horizontal plane. The position of its shadow on the plane indicated the time of day. As it moves through its apparent course, the Sun draws a curve with the tip of the projected shadow, which is shortest at noon, when pointing due south. The variation in the tip's position at noon indicates the solar time and the seasons; the shadow is longest on the winter solstice and shortest on the summer solstice.
The invention of the gnomon itself cannot be attributed to Anaximander because its use, as well as the division of days into twelve parts, came from the Babylonians. It is they, according to Herodotus' Histories (II, 109), who gave the Greeks the art of time measurement. It is likely that he was not the first to determine the solstices, because no calculation is necessary. On the other hand, equinoxes do not correspond to the middle point between the positions during solstices, as the Babylonians thought. As the Suda seems to suggest, it is very likely that with his knowledge of geometry, he became the first Greek to determine accurately the equinoxes.
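The seasonal variation of the noon shadow can be made concrete with the standard noon-altitude approximation (altitude = 90° − latitude + solar declination), an assumption not stated in the source; the gnomon height and latitude below are hypothetical:

```python
import math

def noon_shadow(gnomon_height, latitude_deg, declination_deg):
    """Noon shadow length of a vertical gnomon on flat ground (northern
    mid-latitudes), using altitude = 90 - latitude + solar declination."""
    altitude = 90.0 - latitude_deg + declination_deg
    return gnomon_height / math.tan(math.radians(altitude))

h, lat = 1.0, 37.0  # a 1 m gnomon near the latitude of Miletus (~37.5 N)
print(noon_shadow(h, lat, +23.44))  # summer solstice: shortest shadow, ~0.24 m
print(noon_shadow(h, lat, 0.0))     # equinoxes:                       ~0.75 m
print(noon_shadow(h, lat, -23.44))  # winter solstice: longest shadow,  ~1.76 m
```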
Prediction of an earthquake
In his philosophical work De Divinatione (I, 50, 112), Cicero states that Anaximander convinced the inhabitants of Lacedaemon to abandon their city and spend the night in the country with their weapons because an earthquake was near. The city collapsed when the top of the Taygetus split like the stern of a ship. Pliny the Elder also mentions this anecdote (II, 81), suggesting that it came from an "admirable inspiration", as opposed to Cicero, who did not associate the prediction with divination.
Scientific method
Rovelli credits Anaximander with pioneering the "first great scientific revolution in history" by introducing the naturalistic approach to understanding the universe, according to which the universe operates by inviolable laws, without recourse to supernatural explanations. According to Rovelli, Anaximander not only paved the way for modern science, but revolutionized the process for how we form our worldview, by constantly questioning and rejecting certainty. Rovelli further states that Anaximander has not been given his due credit, largely because his naturalistic approach was strongly opposed in antiquity (among others by Aristotle) and had yet to yield the tangible benefits it has today.
Legacy
In the 2017 essay collection Anaximander in Context: New Studies on the Origins of Greek Philosophy, Dirk Couprie, Robert Hahn and Gerald Naddaf describe Anaximander as "one of the greatest minds in history", but one that has not been given his due. Couprie goes on to state that he considers him on a par with Newton.
Bertrand Russell in the History of Western Philosophy interprets Anaximander's theories as an assertion of the necessity of an appropriate balance between earth, fire, and water, all of which may be independently seeking to aggrandize their proportions relative to the others. Anaximander seems to express his belief that a natural order ensures balance among these elements, that where there was fire, ashes (earth) now exist. His Greek peers echoed this sentiment with their belief in natural boundaries beyond which not even the gods could operate.
Friedrich Nietzsche, in Philosophy in the Tragic Age of the Greeks, claimed that Anaximander was a pessimist who asserted that the primal being of the world was a state of indefiniteness. In accordance with this, anything definite has to eventually pass back into indefiniteness. In other words, Anaximander viewed "...all coming-to-be as though it were an illegitimate emancipation from eternal being, a wrong for which destruction is the only penance". (Ibid., § 4) The world of individual objects, in this way of thinking, has no worth and should perish.
Martin Heidegger lectured extensively on Anaximander, and delivered a lecture entitled "Anaximander's Saying" which was subsequently included in Off the Beaten Track. The lecture examines the ontological difference and the oblivion of Being or Dasein in the context of the Anaximander fragment. Heidegger's lecture is, in turn, an important influence on the French philosopher Jacques Derrida.
The Anaximander (31st) High School of Thessaloniki, Greece is named after Anaximander.
Works
According to the Suda:
On Nature (Perì phúseôs)
Rotation of the Earth (Gễs períodos)
On Fixed Stars (Perì tỗn aplanỗn)
The [Celestial] Sphere (Sphaĩra)
See also
Indefinite monism
References
Sources
Primary
Aelian: Various History (III, 17)
Aëtius: De Fide (I-III; V)
Agathemerus: A Sketch of Geography in Epitome (I, 1)
Aristotle: Meteorology (II, 3) Translated by E. W. Webster
Aristotle: On Generation and Corruption (II, 5) Translated by H. H. Joachim
Aristotle: On the Heavens (II, 13) Translated by J. L. Stocks
Aristotle: Physics (III, 5, 204 b 33–34)
Censorinus: De Die Natali (IV, 7) See original text at LacusCurtius
Cicero: De Divinatione (I, 50, 112)
Cicero: On the Nature of the Gods (I, 10, 25)
Euripides: The Suppliants (532) Translated by E. P. Coleridge
Eusebius of Caesarea: Preparation for the Gospel (X, 14, 11) Translated by E.H. Gifford
Heidel, W.A. Anaximander's Book: PAAAS, vol. 56, n.7, 1921, pp. 239–288.
Herodotus: Histories (II, 109) See original text in Perseus project
Hippolytus (?): Refutation of All Heresies (I, 5) Translated by Roberts and Donaldson
Pliny the Elder: Natural History (II, 8) See original text in Perseus project
Pseudo-Plutarch: The Doctrines of the Philosophers (I, 3; I, 7; II, 20–28; III, 2–16; V, 19)
Seneca the Younger: Natural Questions (II, 18)
Simplicius: Comments on Aristotle's Physics (24, 13–25; 1121, 5–9)
Strabo: Geography (I, 1) Books 1‑7, 15‑17 translated by H. L. Jones
Themistius: Oratio (36, 317)
The Suda (Suda On Line)
Secondary
Conche, Marcel (the default source: anything not otherwise attributed may be found in Conche).
External links
Philoctete – Anaximandre: Fragments (in Greek)
The Internet Encyclopedia of Philosophy – Anaximander
Extensive bibliography by Dirk Couprie
Anaximander of Miletus Life and Work - Fragments and Testimonies by Giannis Stamatellos
|
https://en.wikipedia.org/wiki/Architect
|
An architect is a person who plans, designs and oversees the construction of buildings. To practice architecture means to provide services in connection with the design of buildings and the space within the site surrounding the buildings that have human occupancy or use as their principal purpose. Etymologically, the term architect derives from the Latin architectus, which derives from the Greek arkhitéktōn (arkhi-, chief + téktōn, builder), i.e., chief builder.
The professional requirements for architects vary from location to location. An architect's decisions affect public safety and thus the architect must undergo specialized training consisting of advanced education and a practicum (or internship) for practical experience to earn a license to practice architecture. Practical, technical, and academic requirements for becoming an architect vary by jurisdiction though the formal study of architecture in academic institutions has played a pivotal role in the development of the profession.
Origins
Throughout ancient and medieval history, most architectural design and construction was carried out by artisans, such as stone masons and carpenters, who rose to the role of master builder. Until modern times, there was no clear distinction between architect and engineer. In Europe, the titles architect and engineer were primarily geographical variations that referred to the same person, and were often used interchangeably.
"Architect" derives from Greek (, "master builder", "chief ).
It is suggested that various developments in technology and mathematics allowed the development of the professional 'gentleman' architect, separate from the hands-on craftsman. Paper was not used in Europe for drawing until the 15th century, but became increasingly available after 1500. Pencils were used for drawing by 1600. The availability of both paper and pencils allowed pre-construction drawings to be made by professionals. Concurrently, the introduction of linear perspective and innovations such as the use of different projections to describe a three-dimensional building in two dimensions, together with an increased understanding of dimensional accuracy, helped building designers communicate their ideas. However, the development was gradual. Until the 18th century, buildings continued to be designed and set out by craftsmen, with the exception of high-status projects.
Architecture
In most developed countries, only those qualified with an appropriate license, certification, or registration with a relevant body (often governmental) may legally practice architecture. Such licensure usually requires a university degree, successful completion of exams, and a training period. Representation of oneself as an architect through the use of terms and titles is restricted to licensed individuals by law, although in general, derivatives such as architectural designer are not legally protected.
To practice architecture implies the ability to practice independently of supervision. The term building design professional (or design professional), by contrast, is a much broader term that includes professionals who practice independently under an alternate profession, such as engineering professionals, or those who assist in the practice of architecture under the supervision of a licensed architect, such as intern architects. In many places, independent, non-licensed individuals may perform design services outside the professional restrictions, such as the design of houses or other smaller structures.
Practice
In the architectural profession, technical and environmental knowledge, design, and construction management require an understanding of business as well as design. However, design is the driving force throughout the project and beyond. An architect accepts a commission from a client. The commission might involve preparing feasibility reports, conducting building audits, and designing a building or several buildings, structures, and the spaces among them. The architect participates in developing the requirements the client wants in the building. Throughout the project (planning to occupancy), the architect coordinates a design team. Structural, mechanical, and electrical engineers are hired by the client or the architect, who must ensure that the work is coordinated to construct the design.
Design role
The architect, once hired by a client, is responsible for creating a design concept that meets the requirements of that client and provides a facility suitable to the required use. The architect must meet with and put questions to the client, in order to ascertain all the requirements (and nuances) of the planned project.
Often the full brief is not clear at the beginning, which introduces a degree of risk into the design undertaking. The architect may make early proposals to the client, which may rework the terms of the brief. The "program" (or brief) is essential to producing a project that meets all the needs of the owner, and it becomes a guide for the architect in creating the design concept.
Design proposal(s) are generally expected to be both imaginative and pragmatic. Much depends upon the time, place, finance, culture, and available crafts and technology in which the design takes place. The extent and nature of these expectations will vary. Foresight is a prerequisite when designing buildings as it is a very complex and demanding undertaking.
Any design concept during the early stage of its generation must take into account a great number of issues and variables including qualities of space(s), the end-use and life-cycle of these proposed spaces, connections, relations, and aspects between spaces including how they are put together and the impact of proposals on the immediate and wider locality. Selection of appropriate materials and technology must be considered, tested, and reviewed at an early stage in the design to ensure there are no setbacks (such as higher-than-expected costs) which could occur later in the project.
The site and its surrounding environment, as well as the culture and history of the place, will also influence the design. The design must also balance increasing concerns with environmental sustainability. The architect may introduce (intentionally or not) aspects of mathematics and architecture, new or current architectural theory, or references to architectural history.
A key part of the design is that the architect often must consult with engineers, surveyors and other specialists throughout the design, ensuring that aspects such as structural supports and air conditioning elements are coordinated. The control and planning of construction costs are also a part of these consultations. Coordination of the different aspects involves a high degree of specialized communication including advanced computer technology such as building information modeling (BIM), computer-aided design (CAD), and cloud-based technologies. Finally, at all times, the architect must report back to the client who may have reservations or recommendations which might introduce further variables into the design.
Architects also deal with local and federal jurisdictions regarding regulations and building codes. The architect might need to comply with local planning and zoning laws such as required setbacks, height limitations, parking requirements, transparency requirements (windows), and land use. Some jurisdictions require adherence to design and historic preservation guidelines. Health and safety risks form a vital part of the current design, and in some jurisdictions, design reports and records are required to include ongoing considerations of materials and contaminants, waste management and recycling, traffic control, and fire safety.
Means of design
Previously, architects employed drawings to illustrate and generate design proposals. While conceptual sketches are still widely used by architects, computer technology has now become the industry standard. Furthermore, design may include the use of photos, collages, prints, linocuts, 3D scanning technology, and other media in design production.
Increasingly, computer software is shaping how architects work. BIM technology allows for the creation of a virtual building that serves as an information database for the sharing of design and building information throughout the life-cycle of the building's design, construction, and maintenance. Virtual reality (VR) presentations are becoming more common for visualizing structural designs and interior spaces from the point-of-view perspective.
Environmental role
Since modern buildings are known to emit carbon into the atmosphere, increasing controls are being placed on buildings and associated technology to reduce emissions, increase energy efficiency, and make use of renewable energy sources. Renewable energy sources may be designed into the proposed building by local or national renewable energy providers. As a result, the architect is required to remain abreast of current regulations that are continually being updated. Some new developments exhibit extremely low energy use or passive solar building design.
However, the architect is also increasingly required to provide initiatives in a wider environmental sense. Examples of this include making provisions for low-energy transport, natural daylighting instead of artificial lighting, natural ventilation instead of air conditioning, pollution and waste management, the use of recycled materials, and the employment of materials which can be easily recycled.
Construction role
As the design becomes more advanced and detailed, specifications and detail designs are made of all the elements and components of the building. Techniques in the production of a building are continually advancing which places a demand on the architect to ensure that he or she remains up to date with these advances.
Depending on the client's needs and the jurisdiction's requirements, the spectrum of the architect's services during each construction stage may be extensive (detailed document preparation and construction review) or less involved (such as allowing a contractor to exercise considerable design-build functions).
Architects typically put projects to tender on behalf of their clients, advise them on the award of the project to a general contractor, facilitate and administer a contract of agreement which is often between the client and the contractor. This contract is legally binding and covers a wide range of aspects including the insurance and commitments of all stakeholders, the status of the design documents, provisions for the architect's access, and procedures for the control of the works as they proceed. Depending on the type of contract utilized, provisions for further sub-contract tenders may be required. The architect may require that some elements are covered by a warranty which specifies the expected life and other aspects of the material, product, or work.
In most jurisdictions, prior notification to the relevant authority must be given before commencement of the project, giving the local authority notice to carry out independent inspections. The architect will then review and inspect the progress of the work in coordination with the local authority.
The architect will typically review contractor shop drawings and other submittals, prepare and issue site instructions, and provide Certificates for Payment to the contractor (see also Design-bid-build), which are based on the work done as well as any materials and other goods purchased or hired. In the United Kingdom and other countries, a quantity surveyor is often part of the team to provide cost consulting. With large, complex projects, an independent construction manager is sometimes hired to assist in the design and management of the construction.
In many jurisdictions, mandatory certification or assurance of the completed work, or part of the works, is required. This demand for certification entails a high degree of risk; therefore, regular inspections of the work as it progresses on site are required to ensure that the work complies with the design itself as well as with all relevant statutes and permissions.
Alternate practice and specializations
Recent decades have seen the rise of specializations within the profession. Many architects and architectural firms focus on certain project types (e.g. healthcare, retail, public housing, and event management), technological expertise, or project delivery methods. Some architects specialize in building code, building envelope, sustainable design, technical writing, historic preservation (US) or conservation (UK), and accessibility.
Many architects elect to move into real estate (property) development, corporate facilities planning, project management, construction management, chief sustainability officer roles, interior design, city planning, user experience design, and design research.
Professional requirements
Although there are variations in each location, most of the world's architects are required to register with the appropriate jurisdiction. Architects are typically required to meet three common requirements: education, experience, and examination.
Basic educational requirements generally consist of a university degree in architecture. The experience requirement for degree candidates is usually satisfied by a practicum or internship (usually two to three years). Finally, a Registration Examination or a series of exams is required prior to licensure.
Professionals who engaged in the design and supervision of construction projects prior to the late 19th century were not necessarily trained in a separate architecture program in an academic setting. Instead, they often trained under established architects. Prior to modern times, there was no distinction between architects and engineers, and the title used varied depending on geographical location. They often carried the title of master builder or surveyor after serving a number of years as an apprentice (such as Sir Christopher Wren). The formal study of architecture in academic institutions played a pivotal role in the development of the profession as a whole, serving as a focal point for advances in architectural technology and theory. The use of "Architect" or abbreviations such as "Ar." as a title attached to a person's name is regulated by law in some countries.
Fees
Architects' fee structures are typically based on a percentage of construction value, a rate per unit area of the proposed construction, hourly rates, or a fixed lump-sum fee. Combinations of these structures are also common. Fixed fees are usually based on a project's allocated construction cost and can range between 4 and 12% of new construction cost for commercial and institutional projects, depending on a project's size and complexity. Residential projects range from 12 to 20%, and renovation projects typically command higher percentages, such as 15 to 20%.
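For illustration, the following is a minimal sketch of how such a percentage-based fee scales with project cost; the dollar amounts and rates below are hypothetical examples drawn from the ranges cited above, not actual fee guidance:

```python
def architect_fee(construction_cost: float, rate_percent: float) -> float:
    """Fee under a percentage-of-construction-cost structure."""
    return construction_cost * rate_percent / 100.0

# Hypothetical commercial project: $2,000,000 at an 8% rate
# (within the 4-12% range cited above) yields a $160,000 fee.
print(architect_fee(2_000_000, 8))   # 160000.0

# Hypothetical residential project: $400,000 at a 15% rate
# (within the 12-20% range cited above) yields a $60,000 fee.
print(architect_fee(400_000, 15))    # 60000.0
```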
Overall billings for architectural firms range widely, depending on their location and economic climate. Billings have traditionally been dependent on the local economic conditions but, with rapid globalization, this is becoming less of a factor for large international firms. Salaries also vary depending on experience, position within the firm (i.e. staff architect, partner, or shareholder), and the size and location of the firm.
Professional organizations
A number of national professional organizations exist to promote career and business development in architecture.
The International Union of Architects (UIA)
The American Institute of Architects (AIA) US
Royal Institute of British Architects (RIBA) UK
Architects Registration Board (ARB) UK
The Australian Institute of Architects (AIA) Australia
The South African Institute of Architects (SAIA) South Africa
Association of Consultant Architects (ACA) UK
Association of Licensed Architects (ALA) US
The Consejo Profesional de Arquitectura y Urbanismo (CPAU) Argentina
Indian Institute of Architects (IIA) & Council of Architecture (COA) India
The National Organization of Minority Architects (NOMA) US
Prizes and awards
A wide variety of prizes is awarded by national professional associations and other bodies, recognizing accomplished architects, their buildings, structures, and professional careers.
The most lucrative award an architect can receive is the Pritzker Prize, sometimes termed the "Nobel Prize for architecture." The inaugural Pritzker Prize winner was Philip Johnson, who was cited "for 50 years of imagination and vitality embodied in a myriad of museums, theatres, libraries, houses, gardens and corporate structures". The Pritzker Prize has been awarded for forty-two consecutive editions, and there are now 22 countries with at least one winning architect. Other prestigious architectural awards are the Royal Gold Medal, the AIA Gold Medal (US), the AIA Gold Medal (Australia), and the Praemium Imperiale.
Architects in the UK who have made contributions to the profession through design excellence or architectural education, or who have in some other way advanced the profession, could until 1971 be elected Fellows of the Royal Institute of British Architects and may write FRIBA after their name if they feel so inclined. Those elected to chartered membership of the RIBA after 1971 may use the initials RIBA but cannot use the old ARIBA and FRIBA. An Honorary Fellow may use the initials Hon. FRIBA and an International Fellow may use the initials Int. FRIBA. Architects in the US who have made contributions to the profession through design excellence or architectural education, or who have in some other way advanced the profession, are elected Fellows of the American Institute of Architects and can write FAIA after their name. Architects in Canada who have made outstanding contributions to the profession through contributions to research, scholarship, public service, or professional standing to the good of architecture in Canada, or elsewhere, may be recognized as Fellows of the Royal Architectural Institute of Canada and can write FRAIC after their name. In Hong Kong, those elected to chartered membership may use the initials HKIA, and those who have made a special contribution after nomination and election by The Hong Kong Institute of Architects (HKIA) may be elected as fellow members of HKIA and may use FHKIA after their name.
See also
References
|
https://en.wikipedia.org/wiki/Aphrodite
|
Aphrodite is an ancient Greek goddess associated with love, lust, beauty, pleasure, passion, procreation, and, as her syncretized Roman goddess counterpart Venus, desire, sex, fertility, prosperity, and victory. Aphrodite's major symbols include seashells, myrtles, roses, doves, sparrows, and swans. The cult of Aphrodite was largely derived from that of the Phoenician goddess Astarte, a cognate of the East Semitic goddess Ishtar, whose cult was based on the Sumerian cult of Inanna. Aphrodite's main cult centers were Cythera, Cyprus, Corinth, and Athens. Her main festival was the Aphrodisia, which was celebrated annually in midsummer. In Laconia, Aphrodite was worshipped as a warrior goddess. She was also the patron goddess of prostitutes, an association which led early scholars to propose the concept of "sacred prostitution" in Greco-Roman culture, an idea which is now generally seen as erroneous.
In Hesiod's Theogony, Aphrodite is born off the coast of Cythera from the foam (ἀφρός, aphrós) produced by Uranus's genitals, which his son Cronus had severed and thrown into the sea. In Homer's Iliad, however, she is the daughter of Zeus and Dione. Plato, in his Symposium, asserts that these two origins actually belong to separate entities: Aphrodite Urania (a transcendent, "Heavenly" Aphrodite) and Aphrodite Pandemos (Aphrodite common to "all the people"). Aphrodite had many other epithets, each emphasizing a different aspect of the same goddess, or used by a different local cult. Thus she was also known as Cytherea (Lady of Cythera) and Cypris (Lady of Cyprus), because both locations claimed to be the place of her birth.
In Greek mythology, Aphrodite was married to Hephaestus, the god of fire, blacksmiths and metalworking. Aphrodite was frequently unfaithful to him and had many lovers; in the Odyssey, she is caught in the act of adultery with Ares, the god of war. In the First Homeric Hymn to Aphrodite, she seduces the mortal shepherd Anchises. Aphrodite was also the surrogate mother and lover of the mortal shepherd Adonis, who was killed by a wild boar. Along with Athena and Hera, Aphrodite was one of the three goddesses whose feud resulted in the beginning of the Trojan War and she plays a major role throughout the Iliad. Aphrodite has been featured in Western art as a symbol of female beauty and has appeared in numerous works of Western literature. She is a major deity in modern Neopagan religions, including the Church of Aphrodite, Wicca, and Hellenismos.
Etymology
Hesiod derives Aphrodite from ἀφρός (aphrós), "sea-foam", interpreting the name as "risen from the foam", but most modern scholars regard this as a spurious folk etymology. Early modern scholars of classical mythology attempted to argue that Aphrodite's name was of Greek or Indo-European origin, but these efforts have now been mostly abandoned. Aphrodite's name is generally accepted to be of non-Greek (probably Semitic) origin, but its exact derivation cannot be determined.
Scholars in the late nineteenth and early twentieth centuries, accepting Hesiod's "foam" etymology as genuine, analyzed the second part of Aphrodite's name as *-odítē "wanderer" or *-dítē "bright". More recently, Michael Janda, also accepting Hesiod's etymology, has argued in favor of the latter of these interpretations and claims the story of a birth from the foam as an Indo-European mytheme. Similarly, Krzysztof Tomasz Witczak proposes an Indo-European compound of elements meaning "very" and "to shine", also referring to Eos, and Daniel Kölligan has interpreted her name as "shining up from the mist/foam". Other scholars have argued that these hypotheses are unlikely since Aphrodite's attributes are entirely different from those of both Eos and the Vedic deity Ushas.
A number of improbable non-Greek etymologies have also been suggested. One Semitic etymology compares Aphrodite to the Assyrian barīrītu, the name of a female demon that appears in Middle Babylonian and Late Babylonian texts. Hammarström looks to Etruscan, comparing (e)prθni "lord", an Etruscan honorific loaned into Greek as πρύτανις. This would make the theonym in origin an honorific, "the lady". Most scholars reject this etymology as implausible, especially since Aphrodite actually appears in Etruscan in the borrowed form Apru (from Greek Aphrō, a clipped form of Aphrodite). The medieval Etymologicum Magnum (c. 1150) offers a highly contrived etymology, deriving Aphrodite from the compound habrodíaitos ("she who lives delicately"), from habrós and díaita. The alteration from b to ph is explained as a "familiar" characteristic of Greek, "obvious from the Macedonians".
In the Cypriot syllabary, a syllabic script used on the island of Cyprus from the eleventh until the fourth century BC, her name is attested in the forms a-po-ro-ta-o-i (read right-to-left), a-po-ro-ti-ta-i (likewise), and finally a-po-ro-ti-si-jo ("Aphrodisian", "related to Aphrodite", in the context of a month).
Origins
Near Eastern love goddess
The cult of Aphrodite in Greece was imported from, or at least influenced by, the cult of Astarte in Phoenicia, which, in turn, was influenced by the cult of the Mesopotamian goddess known as "Ishtar" to the East Semitic peoples and as "Inanna" to the Sumerians. Pausanias states that the first to establish a cult of Aphrodite were the Assyrians, followed by the Paphians of Cyprus and then the Phoenicians at Ascalon. The Phoenicians, in turn, taught her worship to the people of Cythera.
Aphrodite took on Inanna-Ishtar's associations with sexuality and procreation. Furthermore, she was known as Ourania (Οὐρανία), which means "heavenly", a title corresponding to Inanna's role as the Queen of Heaven. Early artistic and literary portrayals of Aphrodite are extremely similar to those of Inanna-Ishtar. Like Inanna-Ishtar, Aphrodite was also a warrior goddess; the second-century AD Greek geographer Pausanias records that, in Sparta, Aphrodite was worshipped as Aphrodite Areia, which means "warlike". He also mentions that Aphrodite's most ancient cult statues in Sparta and on Cythera showed her bearing arms. Modern scholars note that Aphrodite's warrior-goddess aspects appear in the oldest strata of her worship and see it as an indication of her Near Eastern origins.
Nineteenth-century classical scholars had a general aversion to the idea that ancient Greek religion was at all influenced by the cultures of the Near East, but even Friedrich Gottlieb Welcker, who argued that Near Eastern influence on Greek culture was largely confined to material culture, admitted that Aphrodite was clearly of Phoenician origin. The significant influence of Near Eastern culture on early Greek religion in general, and on the cult of Aphrodite in particular, is now widely recognized as dating to a period of orientalization during the eighth century BC, when archaic Greece was on the fringes of the Neo-Assyrian Empire.
Indo-European dawn goddess
Some early comparative mythologists opposed to the idea of a Near Eastern origin argued that Aphrodite originated as an aspect of the Greek dawn goddess Eos and that she was therefore ultimately derived from the Proto-Indo-European dawn goddess *Haéusōs (properly Greek Eos, Latin Aurora, Sanskrit Ushas). Most modern scholars have now rejected the notion of a purely Indo-European Aphrodite, but it is possible that Aphrodite, originally a Semitic deity, may have been influenced by the Indo-European dawn goddess. Both Aphrodite and Eos were known for their erotic beauty and aggressive sexuality and both had relationships with mortal lovers. Both goddesses were associated with the colors red, white, and gold. Michael Janda etymologizes Aphrodite's name as an epithet of Eos meaning "she who rises from the foam [of the ocean]" and points to Hesiod's Theogony account of Aphrodite's birth as an archaic reflex of Indo-European myth. Aphrodite rising out of the waters after Cronus defeats Uranus as a mytheme would then be directly cognate to the Rigvedic myth of Indra defeating Vrtra, liberating Ushas. Another key similarity between Aphrodite and the Indo-European dawn goddess is her close kinship to the Greek sky deity, since both of the main claimants to her paternity (Zeus and Uranus) are sky deities.
Forms and epithets
Aphrodite's most common cultic epithet was Ourania, meaning "heavenly", but this epithet almost never occurs in literary texts, indicating a purely cultic significance. Another common name for Aphrodite was Pandemos ("For All the Folk"). In her role as Aphrodite Pandemos, Aphrodite was associated with Peithō (Πειθώ), meaning "persuasion", and could be prayed to for aid in seduction. The character of Pausanias in Plato's Symposium takes differing cult-practices associated with different epithets of the goddess to claim that Ourania and Pandemos are, in fact, separate goddesses. He asserts that Aphrodite Ourania is the celestial Aphrodite, born from the sea foam after Cronus castrated Uranus, and the older of the two goddesses. According to the Symposium, Aphrodite Ourania is the inspiration of male homosexual desire, specifically the ephebic eros, and pederasty. Aphrodite Pandemos, by contrast, is the younger of the two goddesses: the common Aphrodite, born from the union of Zeus and Dione, and the inspiration of heterosexual desire and sexual promiscuity, the "lesser" of the two loves. Paphian (Παφία) was one of her epithets, after Paphos in Cyprus, where she had emerged from the sea at her birth.
Among the Neoplatonists and, later, their Christian interpreters, Ourania is associated with spiritual love, and Pandemos with physical love (desire). A representation of Ourania with her foot resting on a tortoise came to be seen as emblematic of discretion in conjugal love; it was the subject of a chryselephantine sculpture by Phidias for Elis, known only from a parenthetical comment by the geographer Pausanias.
One of Aphrodite's most common literary epithets is Philommeidḗs (φιλομμειδής), which means "smile-loving", but is sometimes mistranslated as "laughter-loving". This epithet occurs throughout both of the Homeric epics and the First Homeric Hymn to Aphrodite. Hesiod references it once in his Theogony in the context of Aphrodite's birth, but interprets it as "genital-loving" rather than "smile-loving". Monica Cyrino notes that the epithet may relate to the fact that, in many artistic depictions of Aphrodite, she is shown smiling. Other common literary epithets are Cypris and Cythereia, which derive from her associations with the islands of Cyprus and Cythera respectively.
On Cyprus, Aphrodite was sometimes called Eleemon ("the merciful"). In Athens, she was known as Aphrodite en kopois ("Aphrodite of the Gardens"). At Cape Colias, a town along the Attic coast, she was venerated as Genetyllis "Mother". The Spartans worshipped her as Potnia "Mistress", Enoplios "Armed", Morpho "Shapely", Ambologera "She who Postpones Old Age". Across the Greek world, she was known under epithets such as Melainis "Black One", Skotia "Dark One", Androphonos "Killer of Men", Anosia "Unholy", and Tymborychos "Gravedigger", all of which indicate her darker, more violent nature.
She had the epithet Automata because, according to Servius, she was the source of spontaneous love.
A male version of Aphrodite known as Aphroditus was worshipped in the city of Amathus on Cyprus. Aphroditus was depicted with the figure and dress of a woman, but had a beard, and was shown lifting his dress to reveal an erect phallus. This gesture was believed to be an apotropaic symbol, and was thought to convey good fortune upon the viewer. Eventually, the popularity of Aphroditus waned as the mainstream, fully feminine version of Aphrodite became more popular, but traces of his cult are preserved in the later legends of Hermaphroditus.
Worship
Classical period
Aphrodite's main festival, the Aphrodisia, was celebrated across Greece, but particularly in Athens and Corinth. In Athens, the Aphrodisia was celebrated on the fourth day of the month of Hekatombaion in honor of Aphrodite's role in the unification of Attica. During this festival, the priests of Aphrodite would purify the temple of Aphrodite Pandemos on the southwestern slope of the Acropolis with the blood of a sacrificed dove. Next, the altars would be anointed and the cult statues of Aphrodite Pandemos and Peitho would be escorted in a majestic procession to a place where they would be ritually bathed. Aphrodite was also honored in Athens as part of the Arrhephoria festival. The fourth day of every month was sacred to Aphrodite.
Pausanias records that, in Sparta, Aphrodite was worshipped as Aphrodite Areia, which means "warlike". This epithet stresses Aphrodite's connections to Ares, with whom she had extramarital relations. Pausanias also records that, in Sparta and on Cythera, a number of extremely ancient cult statues of Aphrodite portrayed her bearing arms. Other cult statues showed her bound in chains.
Aphrodite was the patron goddess of prostitutes of all varieties, ranging from pornai (cheap street prostitutes typically owned as slaves by wealthy pimps) to hetairai (expensive, well-educated hired companions, who were usually self-employed and sometimes provided sex to their customers). The city of Corinth was renowned throughout the ancient world for its many hetairai, who had a widespread reputation for being among the most skilled, but also the most expensive, prostitutes in the Greek world. Corinth also had a major temple to Aphrodite located on the Acrocorinth and was one of the main centers of her cult. Records of numerous dedications to Aphrodite made by successful courtesans have survived in poems and in pottery inscriptions. References to Aphrodite in association with prostitution are found in Corinth as well as on the islands of Cyprus, Cythera, and Sicily. Aphrodite's Mesopotamian precursor Inanna-Ishtar was also closely associated with prostitution.
Scholars in the nineteenth and twentieth centuries believed that the cult of Aphrodite may have involved ritual prostitution, an assumption based on ambiguous passages in certain ancient texts, particularly a fragment of a skolion by the Boeotian poet Pindar, which mentions prostitutes in Corinth in association with Aphrodite. Modern scholars now dismiss the notion of ritual prostitution in Greece as a "historiographic myth" with no factual basis.
Hellenistic and Roman periods
During the Hellenistic period, the Greeks identified Aphrodite with the ancient Egyptian goddesses Hathor and Isis. Aphrodite was the patron goddess of the Lagid queens and Queen Arsinoe II was identified as her mortal incarnation. Aphrodite was worshipped in Alexandria and had numerous temples in and around the city. Arsinoe II introduced the cult of Adonis to Alexandria and many of the women there partook in it. The Tessarakonteres, a gigantic catamaran galley designed by Archimedes for Ptolemy IV Philopator, had a circular temple to Aphrodite on it with a marble statue of the goddess herself. In the second century BC, Ptolemy VIII Physcon and his wives Cleopatra II and Cleopatra III dedicated a temple to Aphrodite Hathor at Philae. Statuettes of Aphrodite for personal devotion became common in Egypt starting in the early Ptolemaic times and extending until long after Egypt became a Roman province.
The ancient Romans identified Aphrodite with their goddess Venus, who was originally a goddess of agricultural fertility, vegetation, and springtime. According to the Roman historian Livy, Aphrodite and Venus were officially identified in the third century BC when the cult of Venus Erycina was introduced to Rome from the Greek sanctuary of Aphrodite on Mount Eryx in Sicily. After this point, Romans adopted Aphrodite's iconography and myths and applied them to Venus. Because Aphrodite was the mother of the Trojan hero Aeneas in Greek mythology and Roman tradition claimed Aeneas as the founder of Rome, Venus became venerated as Venus Genetrix, the mother of the entire Roman nation. Julius Caesar claimed to be directly descended from Aeneas's son Iulus and became a strong proponent of the cult of Venus. This precedent was later followed by his nephew Augustus and the later emperors claiming succession from him.
This syncretism greatly impacted Greek worship of Aphrodite. During the Roman era, the cults of Aphrodite in many Greek cities began to emphasize her relationship with Troy and Aeneas. They also began to adopt distinctively Roman elements, portraying Aphrodite as more maternal, more militaristic, and more concerned with administrative bureaucracy. She was claimed as a divine guardian by many political magistrates. Appearances of Aphrodite in Greek literature also vastly proliferated, usually showing Aphrodite in a characteristically Roman manner.
Mythology
Birth
Aphrodite is usually said to have been born near her chief center of worship, Paphos, on the island of Cyprus, which is why she is sometimes called "Cyprian", especially in the poetic works of Sappho. The Sanctuary of Aphrodite Paphia, marking her birthplace, was a place of pilgrimage in the ancient world for centuries. Other versions of her myth have her born near the island of Cythera, hence another of her names, "Cytherea". Cythera was a stopping place for trade and culture between Crete and the Peloponnesus, so these stories may preserve traces of the migration of Aphrodite's cult from the Middle East to mainland Greece.
According to the version of her birth recounted by Hesiod in his Theogony, Cronus severed Uranus' genitals and threw them behind him into the sea. The foam from his genitals gave rise to Aphrodite (hence her name, which Hesiod interprets as "foam-arisen"), while the Giants, the Erinyes (furies), and the Meliae emerged from the drops of his blood. Hesiod states that the genitals "were carried over the sea a long time, and white foam arose from the immortal flesh; with it a girl grew." After Aphrodite was born from the sea-foam, she washed up to shore in the presence of the other gods. Hesiod's account of Aphrodite's birth following Uranus's castration is probably derived from The Song of Kumarbi, an ancient Hittite epic poem in which the god Kumarbi overthrows his father Anu, the god of the sky, and bites off his genitals, causing him to become pregnant and give birth to Anu's children, which include Ishtar and her brother Teshub, the Hittite storm god.
In the Iliad, Aphrodite is described as the daughter of Zeus and Dione. Dione's name appears to be a feminine cognate to Dios and Dion, which are oblique forms of the name Zeus. Zeus and Dione shared a cult at Dodona in northwestern Greece. In the Theogony, Hesiod describes Dione as an Oceanid, but Apollodorus makes her the thirteenth Titan, child of Gaia and Uranus.
Marriage
Aphrodite is consistently portrayed as a nubile, infinitely desirable adult, having had no childhood. She is often depicted nude. In the Iliad, Aphrodite is the apparently unmarried consort of Ares, the god of war, and the wife of Hephaestus is a different goddess named Charis. Likewise, in Hesiod's Theogony, Aphrodite is unmarried and the wife of Hephaestus is Aglaea, the youngest of the three Charites.
In Book Eight of the Odyssey, however, the blind singer Demodocus describes Aphrodite as the wife of Hephaestus and tells how she committed adultery with Ares during the Trojan War. The sun-god Helios saw Aphrodite and Ares having sex in Hephaestus's bed and warned Hephaestus, who fashioned a fine, near-invisible net. The next time Ares and Aphrodite had sex together, the net trapped them both. Hephaestus brought all the gods into the bedchamber to laugh at the captured adulterers, but Apollo, Hermes, and Poseidon had sympathy for Ares, and Poseidon agreed to pay Hephaestus for Ares's release. Aphrodite returned to her temple in Cyprus, where she was attended by the Charites. This narrative probably originated as a Greek folk tale, originally independent of the Odyssey. In a much later interpolated detail, Ares put the young soldier Alectryon by the door to warn of Helios's arrival, but Alectryon fell asleep on guard duty. Helios discovered the two and alerted Hephaestus; Ares in rage turned Alectryon into a rooster, which unfailingly crows to announce the sunrise.
After exposing them, Hephaestus asks Zeus for his wedding gifts and dowry to be returned to him; by the time of the Trojan War, he is married to Charis/Aglaea, one of the Graces, apparently divorced from Aphrodite. Afterwards, it was generally Ares who was regarded as the husband or official consort of the goddess; on the François Vase, the two arrive at the wedding of Peleus and Thetis on the same chariot, as do Zeus with Hera and Poseidon with Amphitrite. The poets Pindar and Aeschylus refer to Ares as Aphrodite's husband.
Later stories were invented to explain Aphrodite's marriage to Hephaestus. In the most famous story, Zeus hastily married Aphrodite to Hephaestus in order to prevent the other gods from fighting over her. In another version of the myth, Hephaestus gave his mother Hera a golden throne, but when she sat on it, she became trapped and he refused to let her go until she agreed to give him Aphrodite's hand in marriage. Hephaestus was overjoyed to be married to the goddess of beauty, and forged her beautiful jewelry, including a strophion (στρόφιον) known as the kestos himas (κεστὸς ἱμάς), a saltire-shaped undergarment (usually translated as "girdle"), which accentuated her breasts and made her even more irresistible to men. Such strophia were commonly used in depictions of the Near Eastern goddesses Ishtar and Atargatis.
Attendants
Aphrodite is almost always accompanied by Eros, the god of lust and sexual desire. In his Theogony, Hesiod describes Eros as one of the four original primeval forces born at the beginning of time, but, after the birth of Aphrodite from the sea foam, he is joined by Himeros and, together, they become Aphrodite's constant companions. In early Greek art, Eros and Himeros are both shown as idealized handsome youths with wings. The Greek lyric poets regarded the power of Eros and Himeros as dangerous, compulsive, and impossible for anyone to resist. In modern times, Eros is often seen as Aphrodite's son, but this is actually a comparatively late innovation. A scholion on Theocritus's Idylls remarks that the sixth-century BC poet Sappho had described Eros as the son of Aphrodite and Uranus, but the first surviving reference to Eros as Aphrodite's son comes from Apollonius of Rhodes's Argonautica, written in the third century BC, which makes him the son of Aphrodite and Ares. Later, the Romans, who saw Venus as a mother goddess, seized on this idea of Eros as Aphrodite's son and popularized it, making it the predominant portrayal in works on mythology until the present day.
Aphrodite's main attendants were the three Charites, whom Hesiod identifies as the daughters of Zeus and Eurynome and names as Aglaea ("Splendor"), Euphrosyne ("Good Cheer"), and Thalia ("Abundance"). The Charites had been worshipped as goddesses in Greece since the beginning of Greek history, long before Aphrodite was introduced to the pantheon. Aphrodite's other set of attendants was the three Horae (the "Hours"), whom Hesiod identifies as the daughters of Zeus and Themis and names as Eunomia ("Good Order"), Dike ("Justice"), and Eirene ("Peace"). Aphrodite was also sometimes accompanied by Harmonia, her daughter by Ares, and Hebe, the daughter of Zeus and Hera.
The fertility god Priapus was usually considered to be Aphrodite's son by Dionysus, but he was sometimes also described as her son by Hermes, Adonis, or even Zeus. A scholion on Apollonius of Rhodes's Argonautica states that, while Aphrodite was pregnant with Priapus, Hera envied her and applied an evil potion to her belly while she was sleeping to ensure that the child would be hideous. In another version, Hera cursed Aphrodite's unborn son because he had been fathered by Zeus. When Aphrodite gave birth, she was horrified to see that the child had a massive, permanently erect penis, a potbelly, and a huge tongue. Aphrodite abandoned the infant to die in the wilderness, but a herdsman found him and raised him, later discovering that Priapus could use his massive penis to aid in the growth of plants.
Anchises
The First Homeric Hymn to Aphrodite (Hymn 5), which was probably composed sometime in the mid-seventh century BC, describes how Zeus once became annoyed with Aphrodite for causing deities to fall in love with mortals, so he caused her to fall in love with Anchises, a handsome mortal shepherd who lived in the foothills beneath Mount Ida near the city of Troy. Aphrodite appears to Anchises in the form of a tall, beautiful, mortal virgin while he is alone in his home. Anchises sees her dressed in bright clothing and gleaming jewelry, with her breasts shining with divine radiance. He asks her if she is Aphrodite and promises to build her an altar on top of the mountain if she will bless him and his family.
Aphrodite lies and tells him that she is not a goddess, but the daughter of one of the noble families of Phrygia. She claims to be able to understand the Trojan language because she had a Trojan nurse as a child and says that she found herself on the mountainside after she was snatched up by Hermes while dancing in a celebration in honor of Artemis, the goddess of virginity. Aphrodite tells Anchises that she is still a virgin and begs him to take her to his parents. Anchises immediately becomes overcome with mad lust for Aphrodite and swears that he will have sex with her. Anchises takes Aphrodite, with her eyes cast downwards, to his bed, which is covered in the furs of lions and bears. He then strips her naked and makes love to her.
After the lovemaking is complete, Aphrodite reveals her true divine form. Anchises is terrified, but Aphrodite consoles him and promises that she will bear him a son. She prophesies that their son will be the demigod Aeneas, who will be raised by the nymphs of the wilderness for five years before going to Troy to become a nobleman like his father. The story of Aeneas's conception is also mentioned in Hesiod's Theogony and in Book II of Homer's Iliad.
Adonis
The myth of Aphrodite and Adonis is probably derived from the ancient Sumerian legend of Inanna and Dumuzid. The Greek name Adōnis (Ἄδωνις) is derived from the Canaanite word ʼadōn, meaning "lord". The earliest known Greek reference to Adonis comes from a fragment of a poem by the Lesbian poet Sappho (c. 630 – c. 570 BC), in which a chorus of young girls asks Aphrodite what they can do to mourn Adonis's death. Aphrodite replies that they must beat their breasts and tear their tunics. Later references flesh out the story with more details. According to the retelling of the story found in the poem Metamorphoses by the Roman poet Ovid (43 BC – 17/18 AD), Adonis was the son of Myrrha, who was cursed by Aphrodite with insatiable lust for her own father, King Cinyras of Cyprus, after Myrrha's mother bragged that her daughter was more beautiful than the goddess. Driven out after becoming pregnant, Myrrha was changed into a myrrh tree, but still gave birth to Adonis.
Aphrodite found the baby and took him to the underworld to be fostered by Persephone. She returned for him once he was grown and discovered him to be strikingly handsome. Persephone wanted to keep Adonis, resulting in a custody battle between the two goddesses over who should rightly possess Adonis. Zeus settled the dispute by decreeing that Adonis would spend one third of the year with Aphrodite, one third with Persephone, and one third with whomever he chose. Adonis chose to spend that time with Aphrodite. Then, one day, while Adonis was hunting, he was wounded by a wild boar and bled to death in Aphrodite's arms. In a semi-mocking work, the Dialogues of the Gods, the satirical author Lucian comedically relates how a frustrated Aphrodite complains to the moon goddess Selene about her son Eros making Persephone fall in love with Adonis, so that now she has to share him with her.
In different versions of the story, the boar was either sent by Ares, who was jealous that Aphrodite was spending so much time with Adonis, or by Artemis, who wanted revenge against Aphrodite for having killed her devoted follower Hippolytus. In another version, Apollo in fury changed himself into a boar and killed Adonis because Aphrodite had blinded his son Erymanthus when he stumbled upon Aphrodite naked as she was bathing after intercourse with Adonis. The story also provides an etiology for Aphrodite's associations with certain flowers. Reportedly, as she mourned Adonis's death, she caused anemones to grow wherever his blood fell and declared a festival on the anniversary of his death. In one version of the story, Aphrodite injured herself on a thorn from a rose bush and the rose, which had previously been white, was stained red by her blood. According to Lucian's On the Syrian Goddess, each year during the festival of Adonis, the Adonis River in Lebanon (now known as the Abraham River) ran red with blood.
The myth of Adonis is associated with the festival of the Adonia, which was celebrated by Greek women every year in midsummer. The festival, which was evidently already celebrated in Lesbos by Sappho's time, seems to have first become popular in Athens in the mid-fifth century BC. At the start of the festival, the women would plant a "garden of Adonis", a small garden planted inside a small basket or a shallow piece of broken pottery containing a variety of quick-growing plants, such as lettuce and fennel, or even quick-sprouting grains such as wheat and barley. The women would then climb ladders to the roofs of their houses, where they would place the gardens out under the heat of the summer sun. The plants would sprout in the sunlight but wither quickly in the heat. Then the women would mourn and lament loudly over the death of Adonis, tearing their clothes and beating their breasts in a public display of grief.
Divine favoritism
In Hesiod's Works and Days, Zeus orders Aphrodite to make Pandora, the first woman, physically beautiful and sexually attractive, so that she may become "an evil men will love to embrace". Aphrodite "spills grace" over Pandora's head and equips her with "painful desire and knee-weakening anguish", thus making her the perfect vessel for evil to enter the world. Aphrodite's attendants, Peitho, the Charites, and the Horae, adorn Pandora with gold and jewelry.
According to one myth, Aphrodite aided Hippomenes, a noble youth who wished to marry Atalanta, a maiden who was renowned throughout the land for her beauty, but who refused to marry any man unless he could outrun her in a footrace. Atalanta was an exceedingly swift runner and she beheaded all of the men who lost to her. Aphrodite gave Hippomenes three golden apples from the Garden of the Hesperides and instructed him to toss them in front of Atalanta as he raced her. Hippomenes obeyed Aphrodite's order and Atalanta, seeing the beautiful, golden fruits, bent down to pick up each one, allowing Hippomenes to outrun her. In the version of the story from Ovid's Metamorphoses, Hippomenes forgets to repay Aphrodite for her aid, so she causes the couple to become inflamed with lust while they are staying at the temple of Cybele. The couple desecrate the temple by having sex in it, leading Cybele to turn them into lions as punishment.
The myth of Pygmalion is first mentioned by the third-century BC Greek writer Philostephanus of Cyrene, but is first recounted in detail in Ovid's Metamorphoses. According to Ovid, Pygmalion was an exceedingly handsome sculptor from the island of Cyprus, who was so sickened by the immorality of women that he refused to marry. He fell madly and passionately in love with the ivory cult statue he was carving of Aphrodite and longed to marry it. Because Pygmalion was extremely pious and devoted to Aphrodite, the goddess brought the statue to life. Pygmalion married the girl the statue became and they had a son named Paphos, after whom the capital of Cyprus received its name. Pseudo-Apollodorus later mentions "Metharme, daughter of Pygmalion, king of Cyprus".
Anger myths
Aphrodite generously rewarded those who honored her, but also punished those who disrespected her, often quite brutally. A myth described in Apollonius of Rhodes's Argonautica and later summarized in the Bibliotheca of Pseudo-Apollodorus tells how, when the women of the island of Lemnos refused to sacrifice to Aphrodite, the goddess cursed them to stink horribly so that their husbands would never have sex with them. Instead, their husbands started having sex with their Thracian slave-girls. In anger, the women of Lemnos murdered the entire male population of the island, as well as all the Thracian slaves. When Jason and his crew of Argonauts arrived on Lemnos, they mated with the sex-starved women under Aphrodite's approval and repopulated the island. From then on, the women of Lemnos never disrespected Aphrodite again.
In Euripides's tragedy Hippolytus, which was first performed at the City Dionysia in 428 BC, Theseus's son Hippolytus worships only Artemis, the goddess of virginity, and refuses to engage in any form of sexual contact. Aphrodite is infuriated by his prideful behavior and, in the prologue to the play, she declares that, by honoring only Artemis and refusing to venerate her, Hippolytus has directly challenged her authority. Aphrodite therefore causes Hippolytus's stepmother, Phaedra, to fall in love with him, knowing Hippolytus will reject her. After being rejected, Phaedra commits suicide and leaves a suicide note to Theseus telling him that she killed herself because Hippolytus attempted to rape her. Theseus prays to Poseidon to kill Hippolytus for his transgression. Poseidon sends a wild bull to scare Hippolytus's horses as he is riding by the sea in his chariot, causing the horses to bolt and smash the chariot against the cliffs, dragging Hippolytus to a bloody death across the rocky shoreline. The play concludes with Artemis vowing to kill Aphrodite's own mortal beloved (presumably Adonis) in revenge.
Glaucus of Corinth angered Aphrodite by refusing to let his horses for chariot racing mate, since doing so would hinder their speed. During the chariot race at the funeral games of King Pelias, Aphrodite drove his horses mad and they tore him apart. Polyphonte was a young woman who chose a virginal life with Artemis instead of marriage and children, as favoured by Aphrodite. Aphrodite cursed her, causing her to have children by a bear. The resulting offspring, Agrius and Oreius, were wild cannibals who incurred the hatred of Zeus. Ultimately, he transformed all the members of the family into birds of ill omen.
According to Apollodorus, a jealous Aphrodite cursed Eos, the goddess of dawn, to be perpetually in love and have insatiable sexual desire because Eos once had lain with Aphrodite's sweetheart Ares, the god of war.
According to Ovid in his Metamorphoses (book 10.238 ff.), the Propoetides, daughters of Propoetus from the city of Amathus on the island of Cyprus, denied Aphrodite's divinity and failed to worship her properly. Therefore, Aphrodite turned them into the world's first prostitutes. According to Diodorus Siculus, when the Rhodian sea nymph Halia's six sons by Poseidon arrogantly refused to let Aphrodite land upon their shore, the goddess cursed them with insanity. In their madness, they raped Halia. As punishment, Poseidon buried them in the island's sea-caverns.
Xanthius, a descendant of Bellerophon, had two children: Leucippus and an unnamed daughter. Through the wrath of Aphrodite (for reasons unknown), Leucippus fell in love with his own sister. They began a secret relationship, but the girl was already betrothed to another man, who went on to inform her father Xanthius without telling him the name of the seducer. Xanthius went straight to his daughter's chamber, where she was with Leucippus at that very moment. On hearing him enter, she tried to escape, but Xanthius struck her with a dagger, thinking that he was slaying the seducer, and killed her. Leucippus, failing to recognize his father at first, slew him. When the truth was revealed, he had to leave the country and took part in the colonization of Crete and of lands in Asia Minor.
Queen Cenchreis of Cyprus, wife of King Cinyras, bragged that her daughter Myrrha was more beautiful than Aphrodite. Myrrha was therefore cursed by Aphrodite with insatiable lust for her own father, King Cinyras of Cyprus, and he slept with her unknowingly in the dark. She was eventually transformed into the myrrh tree and gave birth to Adonis in this form. Cinyras also had three other daughters: Braesia, Laogora, and Orsedice. Through the wrath of Aphrodite (for reasons unknown), these girls cohabited with foreigners and ended their lives in Egypt.
The Muse Clio derided the goddess's own love for Adonis. Aphrodite therefore caused Clio to fall in love with Pierus, son of Magnes, and she bore him Hyacinth.
Aegiale was a daughter of Adrastus and Amphithea and was married to Diomedes. Because of the anger of Aphrodite, whom Diomedes had wounded in the war against Troy, Aegiale took multiple lovers, including a certain Hippolytus. When Aegiale went so far as to threaten his life, Diomedes fled to Italy. According to Stesichorus and Hesiod, when Tyndareus was sacrificing to the gods he forgot Aphrodite, so the goddess made his daughters twice and thrice wed and deserters of their husbands. Timandra deserted Echemus and went to Phyleus; Clytaemnestra deserted Agamemnon and lay with Aegisthus, who was a worse mate for her, and with her lover she eventually killed her husband; and finally, Helen of Troy deserted Menelaus for Paris under the influence of Aphrodite, her unfaithfulness ultimately helping to cause the Trojan War. By these actions, Aphrodite brought about the Trojan War in order to take Priam's kingdom and pass it down to her own descendants.
In one version of the legend, Pasiphae did not make offerings to the goddess Venus [Aphrodite], and for this the goddess inspired in her an unnatural love for a bull; in another, Aphrodite cursed her because she was the daughter of Helios, who had revealed Aphrodite's adultery to Hephaestus. For Helios's own tale-telling, Aphrodite cursed him with an uncontrollable lust for the mortal princess Leucothoe, which led him to abandon his then-lover Clytie, leaving her heartbroken.
Lysippe was the mother of Tanais by Berossos. Her son venerated only Ares and was fully devoted to war, neglecting love and marriage. Aphrodite cursed him to fall in love with his own mother. Preferring to die rather than give up his chastity, he threw himself into the river Amazonius, which was subsequently renamed the Tanais.
According to Hyginus, at the behest of Zeus, Orpheus's mother, the Muse Calliope, judged the dispute between the goddesses Aphrodite and Persephone over Adonis and decided that each should possess him for half of the year. This enraged Venus [Aphrodite], because she had not been granted what she thought was her right. She therefore inspired love for Orpheus in the women of Thrace, who tore him apart as each of them sought Orpheus for herself.
Aphrodite personally witnessed the young huntress Rhodopis swear eternal devotion and chastity to Artemis when she joined her group. Aphrodite then summoned her son Eros and convinced him that such a lifestyle was an insult to them both. Under her command, Eros made Rhodopis and Euthynicus, another young hunter who had shunned love and romance just as she had, fall in love with each other. Abandoning their chaste lives, Rhodopis and Euthynicus withdrew to a cavern, where they violated their vows. Artemis was not slow to take notice after seeing Aphrodite laugh, and she changed Rhodopis into a fountain as punishment.
Judgement of Paris and Trojan War
The myth of the Judgement of Paris is mentioned briefly in the Iliad, but is described in depth in an epitome of the Cypria, a lost poem of the Epic Cycle, which records that all the gods and goddesses as well as various mortals were invited to the marriage of Peleus and Thetis (the eventual parents of Achilles). Only Eris, goddess of discord, was not invited. She was annoyed at this, so she arrived with a golden apple inscribed with the word καλλίστῃ (kallistēi, "for the fairest"), which she threw among the goddesses. Aphrodite, Hera, and Athena all claimed to be the fairest, and thus the rightful owner of the apple.
The goddesses chose to place the matter before Zeus, who, not wanting to favor one of the goddesses, put the choice into the hands of Paris, a Trojan prince. After bathing in the spring of Mount Ida where Troy was situated, the goddesses appeared before Paris for his decision. In the extant ancient depictions of the Judgement of Paris, Aphrodite is only occasionally represented nude, and Athena and Hera are always fully clothed. Since the Renaissance, however, Western paintings have typically portrayed all three goddesses as completely naked.
All three goddesses were ideally beautiful and Paris could not decide between them, so they resorted to bribes. Hera tried to bribe Paris with power over all Asia and Europe, and Athena offered wisdom, fame and glory in battle, but Aphrodite promised Paris that, if he were to choose her as the fairest, she would let him marry the most beautiful woman on earth. This woman was Helen, who was already married to King Menelaus of Sparta. Paris selected Aphrodite and awarded her the apple. The other two goddesses were enraged and, as a direct result, sided with the Greeks in the Trojan War.
Aphrodite plays an important and active role throughout the entirety of Homer's Iliad. In Book III, she rescues Paris from Menelaus after he foolishly challenges him to a one-on-one duel. She then appears to Helen in the form of an old woman and attempts to persuade her to have sex with Paris, reminding her of his physical beauty and athletic prowess. Helen immediately recognizes Aphrodite by her beautiful neck, perfect breasts, and flashing eyes and chides the goddess, addressing her as her equal. Aphrodite sharply rebukes Helen, reminding her that, if she vexes her, she will punish her just as much as she has favored her already. Helen demurely obeys Aphrodite's command.
In Book V, Aphrodite charges into battle to rescue her son Aeneas from the Greek hero Diomedes. Diomedes recognizes Aphrodite as a "weakling" goddess and, thrusting his spear, nicks her wrist through her "ambrosial robe". Aphrodite borrows Ares's chariot to ride back to Mount Olympus. Zeus chides her for putting herself in danger, reminding her that "her specialty is love, not war." According to Walter Burkert, this scene directly parallels a scene from Tablet VI of the Epic of Gilgamesh in which Ishtar, Aphrodite's Akkadian precursor, cries to her mother Antu after the hero Gilgamesh rejects her sexual advances, but is mildly rebuked by her father Anu. In Book XIV of the Iliad, during the Dios Apate episode, Aphrodite lends her kestos himas to Hera for the purpose of seducing Zeus and distracting him from the combat while Poseidon aids the Greek forces on the beach. In the Theomachia in Book XXI, Aphrodite again enters the battlefield to carry Ares away after he is wounded.
Offspring
Sometimes poets and dramatists recounted ancient traditions, which varied, and sometimes they invented new details; later scholiasts might draw on either or simply guess. Thus while Aeneas and Phobos were regularly described as offspring of Aphrodite, others listed here such as Priapus and Eros were sometimes said to be children of Aphrodite but with varying fathers and sometimes given other mothers or none at all.
Iconography
Symbols
Aphrodite's most prominent avian symbol was the dove, which was originally an important symbol of her Near Eastern precursor Inanna-Ishtar. (In fact, the ancient Greek word for "dove", peristerá, may be derived from a Semitic phrase peraḥ Ištar, meaning "bird of Ishtar".) Aphrodite frequently appears with doves in ancient Greek pottery and the temple of Aphrodite Pandemos on the southwest slope of the Athenian Acropolis was decorated with relief sculptures of doves with knotted fillets in their beaks. Votive offerings of small, white, marble doves were also discovered in the temple of Aphrodite at Daphni. In addition to her associations with doves, Aphrodite was also closely linked with sparrows and she is described riding in a chariot pulled by sparrows in Sappho's "Ode to Aphrodite". According to myth, the dove was originally a nymph named Peristera who helped Aphrodite win in a flower-picking contest over her son Eros; for this Eros turned her into a dove, but Aphrodite took the dove under her wing and made it her sacred bird.
Because of her connections to the sea, Aphrodite was associated with a number of different types of water fowl, including swans, geese, and ducks. Aphrodite's other symbols included the sea, conch shells, and roses. The rose and myrtle flowers were both sacred to Aphrodite. According to one myth explaining her connection to myrtle, the myrtle was originally a maiden, Myrina, a dedicated priestess of Aphrodite. When her former betrothed carried her away from the temple to marry her, Myrina killed him, and Aphrodite turned her into a myrtle, forever under her protection. Her most important fruit emblem was the apple, and in myth, she turned Melus, childhood friend and kin-in-law to Adonis, into an apple after he killed himself, mourning over Adonis' death. Likewise, Melus's wife Pelia was turned into a dove. She was also associated with pomegranates, possibly because the red seeds suggested sexuality or because Greek women sometimes used pomegranates as a method of birth control. In Greek art, Aphrodite is often also accompanied by dolphins and Nereids.
In classical art
A scene of Aphrodite rising from the sea appears on the back of the Ludovisi Throne (c. 460 BC), which was probably originally part of a massive altar that was constructed as part of the Ionic temple to Aphrodite in the Greek polis of Locri Epizephyrii in Magna Graecia in southern Italy. The throne shows Aphrodite rising from the sea, clad in a diaphanous garment, which is drenched with seawater and clinging to her body, revealing her upturned breasts and the outline of her navel. Her hair hangs dripping as she reaches to two attendants standing barefoot on the rocky shore on either side of her, lifting her out of the water. Scenes with Aphrodite appear in works of classical Greek pottery, including a famous white-ground kylix by the Pistoxenos Painter dating to between 470 and 460 BC, showing her riding on a swan or goose. Aphrodite was often described as golden-haired and portrayed with hair of this color in art.
In the fourth century BC, the Athenian sculptor Praxiteles carved the marble statue Aphrodite of Knidos, which Pliny the Elder later praised as the greatest sculpture ever made. The statue showed a nude Aphrodite modestly covering her pubic region while resting against a water pot with her robe draped over it for support. The Aphrodite of Knidos was the first full-sized statue to depict Aphrodite completely naked and one of the first sculptures intended to be viewed from all sides. The statue was purchased by the people of Knidos around 350 BC and proved tremendously influential on later depictions of Aphrodite. The original sculpture has been lost, but written descriptions of it, as well as several depictions of it on coins, are still extant, and over sixty copies, small-scale models, and fragments of it have been identified.
The Greek painter Apelles of Kos, a contemporary of Praxiteles, produced the panel painting Aphrodite Anadyomene (Aphrodite Rising from the Sea). According to Athenaeus, Apelles was inspired to paint the painting after watching the courtesan Phryne take off her clothes, untie her hair, and bathe naked in the sea at Eleusis. The painting was displayed in the Asclepeion on the island of Kos. The Aphrodite Anadyomene went unnoticed for centuries, but Pliny the Elder records that, in his own time, it was regarded as Apelles's most famous work.
During the Hellenistic and Roman periods, statues depicting Aphrodite proliferated; many of these statues were modeled at least to some extent on Praxiteles's Aphrodite of Knidos. Some statues show Aphrodite crouching naked; others show her wringing water out of her hair as she rises from the sea. Another common type of statue is known as Aphrodite Kallipygos, the name of which is Greek for "Aphrodite of the Beautiful Buttocks"; this type of sculpture shows Aphrodite lifting her peplos to display her buttocks to the viewer while looking back at them from over her shoulder. The ancient Romans produced massive numbers of copies of Greek sculptures of Aphrodite and more sculptures of Aphrodite have survived from antiquity than of any other deity.
Post-classical culture
Middle Ages
Early Christians frequently adapted pagan iconography to suit Christian purposes. In the Early Middle Ages, Christians adapted elements of Aphrodite/Venus's iconography and applied them to Eve and prostitutes, but also female saints and even the Virgin Mary. Christians in the east reinterpreted the story of Aphrodite's birth as a metaphor for baptism; in a Coptic stele from the sixth century AD, a female orant is shown wearing Aphrodite's conch shell as a sign that she is newly baptized. Throughout the Middle Ages, villages and communities across Europe still maintained folk tales and traditions about Aphrodite/Venus and travelers reported a wide variety of stories. Numerous Roman mosaics of Venus survived in Britain, preserving memory of the pagan past. In North Africa in the late fifth century AD, Fulgentius of Ruspe encountered mosaics of Aphrodite and reinterpreted her as a symbol of the sin of Lust, arguing that she was shown naked because "the sin of lust is never cloaked" and that she was often shown "swimming" because "all lust suffers shipwreck of its affairs." He also argued that she was associated with doves and conchs because these are symbols of copulation, and that she was associated with roses because "as the rose gives pleasure, but is swept away by the swift movement of the seasons, so lust is pleasant for a moment, but is swept away forever."
While Fulgentius had appropriated Aphrodite as a symbol of Lust, Isidore of Seville (c. 560–636) interpreted her as a symbol of marital procreative sex and declared that the moral of the story of Aphrodite's birth is that sex can only be holy in the presence of semen, blood, and heat, which he regarded as all being necessary for procreation. Meanwhile, Isidore denigrated Aphrodite/Venus's son Eros/Cupid as a "demon of fornication" (daemon fornicationis). Aphrodite/Venus was best known to Western European scholars through her appearances in Virgil's Aeneid and Ovid's Metamorphoses. Venus is mentioned in the Latin poem Pervigilium Veneris ("The Eve of Saint Venus"), written in the third or fourth century AD, and in Giovanni Boccaccio's Genealogia Deorum Gentilium.
Since the Late Middle Ages, the myth of the Venusberg (German; French Mont de Vénus, "Mountain of Venus") – a subterranean realm ruled by Venus, hidden underneath Christian Europe – became a motif of European folklore rendered in various legends and epics. In German folklore of the 16th century, the narrative became associated with the minnesinger Tannhäuser, and in that form the myth was taken up in later literature and opera.
Art
Aphrodite is the central figure in Sandro Botticelli's painting Primavera, which has been described as "one of the most written about, and most controversial paintings in the world", and "one of the most popular paintings in Western art". The story of Aphrodite's birth from the foam was a popular subject matter for painters during the Italian Renaissance, who were attempting to consciously reconstruct Apelles of Kos's lost masterpiece Aphrodite Anadyomene based on the literary ekphrasis of it preserved by Cicero and Pliny the Elder. Artists also drew inspiration from Ovid's description of the birth of Venus in his Metamorphoses. Sandro Botticelli's The Birth of Venus (c. 1485) was also partially inspired by a description by Poliziano of a relief on the subject. Later Italian renditions of the same scene include Titian's Venus Anadyomene (c. 1525) and Raphael's painting in the Stufetta del cardinal Bibbiena (1516). Titian's biographer Giorgio Vasari identified all of Titian's paintings of naked women as paintings of "Venus", including an erotic painting from 1534, which he called the Venus of Urbino, even though the painting does not contain any of Aphrodite/Venus's traditional iconography and the woman in it is clearly shown in a contemporary setting, not a classical one.
Jacques-Louis David's final work was his 1824 magnum opus, Mars Being Disarmed by Venus, which combines elements of classical, Renaissance, traditional French art, and contemporary artistic styles. While he was working on the painting, David described it, saying, "This is the last picture I want to paint, but I want to surpass myself in it. I will put the date of my seventy-five years on it and afterwards I will never again pick up my brush." The painting was exhibited first in Brussels and then in Paris, where over 10,000 people came to see it. Jean-Auguste-Dominique Ingres's painting Venus Anadyomene was one of his major works. Louis Geofroy described it as a "dream of youth realized with the power of maturity, a happiness that few obtain, artists or others." Théophile Gautier declared: "Nothing remains of the marvelous painting of the Greeks, but surely if anything could give the idea of antique painting as it was conceived following the statues of Phidias and the poems of Homer, it is M. Ingres's painting: the Venus Anadyomene of Apelles has been found." Other critics dismissed it as a piece of unimaginative, sentimental kitsch, but Ingres himself considered it to be among his greatest works and used the same figure as the model for his later 1856 painting La Source.
Paintings of Venus were favorites of the late nineteenth-century Academic artists in France. In 1863, Alexandre Cabanel won widespread critical acclaim at the Paris Salon for his painting The Birth of Venus, which the French emperor Napoleon III immediately purchased for his own personal art collection. Édouard Manet's 1865 painting Olympia parodied the nude Venuses of the Academic painters, particularly Cabanel's Birth of Venus. In 1867, the English Academic painter Frederic Leighton displayed his Venus Disrobing for the Bath at the academy. The art critic J. B. Atkinson praised it, declaring that "Mr Leighton, instead of adopting corrupt Roman notions regarding Venus such as Rubens embodied, has wisely reverted to the Greek idea of Aphrodite, a goddess worshipped, and by artists painted, as the perfection of female grace and beauty." A year later, the English painter Dante Gabriel Rossetti, a founding member of the Pre-Raphaelite Brotherhood, painted Venus Verticordia (Latin for "Aphrodite, the Changer of Hearts"), showing Aphrodite as a nude red-headed woman in a garden of roses. Though he was reproached for his outré subject matter, Rossetti refused to alter the painting and it was soon purchased by J. Mitchell of Bradford. In 1879, William Adolphe Bouguereau exhibited at the Paris Salon his own Birth of Venus, which imitated the classical tradition of contrapposto and was met with widespread critical acclaim, rivalling the popularity of Cabanel's version from nearly two decades prior.
Literature
William Shakespeare's erotic narrative poem Venus and Adonis (1593), a retelling of the courtship of Aphrodite and Adonis from Ovid's Metamorphoses, was the most popular of all his works published within his own lifetime. Six editions of it were published before Shakespeare's death (more than any of his other works) and it enjoyed particularly strong popularity among young adults. In 1605, Richard Barnfield lauded it, declaring that the poem had placed Shakespeare's name "in fames immortall Booke". Despite this, the poem has received mixed reception from modern critics; Samuel Taylor Coleridge defended it, but Samuel Butler complained that it bored him and C. S. Lewis described an attempted reading of it as "suffocating".
Aphrodite appears in Richard Garnett's short story collection The Twilight of the Gods and Other Tales (1888), in which the gods' temples have been destroyed by Christians. Stories revolving around sculptures of Aphrodite were common in the late nineteenth and early twentieth centuries. Examples of such works of literature include the novel The Tinted Venus: A Farcical Romance (1885) by Thomas Anstey Guthrie and the short story The Venus of Ille (1837) by Prosper Mérimée, both of which are about statues of Aphrodite that come to life. Another noteworthy example is Aphrodite in Aulis by the Anglo-Irish writer George Moore, which revolves around an ancient Greek family who moves to Aulis. The French writer Pierre Louÿs titled his erotic historical novel Aphrodite: mœurs antiques (1896) after the Greek goddess. The novel enjoyed widespread commercial success, but scandalized French audiences due to its sensuality and its decadent portrayal of Greek society.
In the early twentieth century, stories of Aphrodite were used by feminist poets, such as Amy Lowell and Alicia Ostriker. Many of these poems dealt with Aphrodite's legendary birth from the foam of the sea. Other feminist writers, including Claude Cahun, Thit Jensen, and Anaïs Nin also made use of the myth of Aphrodite in their writings. Ever since the publication of Isabel Allende's book Aphrodite: A Memoir of the Senses in 1998, the name "Aphrodite" has been used as a title for dozens of books dealing with all topics even superficially connected to her domain. Frequently these books do not even mention Aphrodite, or mention her only briefly, but make use of her name as a selling point.
Modern worship
In 1938, Gleb Botkin, a Russian immigrant to the United States, founded the Church of Aphrodite, a neopagan religion centered around the worship of a mother goddess, whom its practitioners identified as Aphrodite. The Church of Aphrodite's theology was laid out in the book In Search of Reality, published in 1969, two years before Botkin's death. The book portrayed Aphrodite in a drastically different light than the one in which the Greeks envisioned her, instead casting her as "the sole Goddess of a somewhat Neoplatonic Pagan monotheism". It claimed that the worship of Aphrodite had been brought to Greece by the mystic teacher Orpheus, but that the Greeks had misunderstood Orpheus's teachings and had not realized the importance of worshipping Aphrodite alone.
Aphrodite is a major deity in Wicca, a contemporary nature-based syncretic Neopagan religion. Wiccans regard Aphrodite as one aspect of the Goddess and she is frequently invoked by name during enchantments dealing with love and romance. Wiccans regard Aphrodite as the ruler of human emotions, erotic spirituality, creativity, and art. As one of the twelve Olympians, Aphrodite is a major deity within Hellenismos (Hellenic Polytheistic Reconstructionism), a Neopagan religion which seeks to authentically revive and recreate the religion of ancient Greece in the modern world. Unlike Wiccans, Hellenists are usually strictly polytheistic or pantheistic. Hellenists venerate Aphrodite primarily as the goddess of romantic love, but also as a goddess of sexuality, the sea, and war. Her many epithets include "Sea Born", "Killer of Men", "She upon the Graves", "Fair Sailing", and "Ally in War".
Genealogy
See also
Anchises
Cupid
Girdle of Aphrodite
History of nude art
Lakshmi, a Hindu goddess who, like Aphrodite, rose from the ocean, and who, like Ishtar, is associated with an eight-pointed star
Explanatory notes
Citations
General and cited references
Homer, The Iliad with an English Translation by A.T. Murray, PhD in two volumes. Cambridge, MA, Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library.
Hesiod, Theogony, in The Homeric Hymns and Homerica with an English Translation by Hugh G. Evelyn-White, Cambridge, MA., Harvard University Press; London, William Heinemann Ltd. 1914. Online version at the Perseus Digital Library.
Evelyn-White, Hugh, The Homeric Hymns and Homerica with an English Translation by Hugh G. Evelyn-White. Homeric Hymns. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1914.
Pindar, Odes, translated by Diane Arnson Svarlien. 1990. Online version at the Perseus Digital Library.
Euripides, The Complete Greek Drama, edited by Whitney J. Oates and Eugene O'Neill, Jr. in two volumes. 2. The Phoenissae, translated by E. P. Coleridge. New York. Random House. 1938.
Apollonius Rhodius, Argonautica, translated by Robert Cooper Seaton (1853–1915). Loeb Classical Library Volume 001. London, William Heinemann Ltd, 1912. Online version at the Topos Text Project.
Apollodorus, Apollodorus, The Library, with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. Online version at the Perseus Digital Library.
Pausanias, Pausanias Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1918. Online version at the Perseus Digital Library.
Diodorus Siculus, Bibliotheca Historica. Vols 1–2. Immanuel Bekker. Ludwig Dindorf. Friedrich Vogel. in aedibus B. G. Teubneri. Leipzig. 1888–1890. Greek text available at the Perseus Digital Library.
Ovid, Metamorphoses. Translated by A. D. Melville; introduction and notes by E. J. Kenney. Oxford: Oxford University Press. 2008.
Hyginus, Gaius Julius, The Myths of Hyginus. Edited and translated by Mary A. Grant, Lawrence: University of Kansas Press, 1960.
Gaius Julius Hyginus, Astronomica from The Myths of Hyginus translated and edited by Mary Grant. University of Kansas Publications in Humanistic Studies. Online version at the Topos Text Project.
Hard, Robin, The Routledge Handbook of Greek Mythology: Based on H.J. Rose's "Handbook of Greek Mythology", Psychology Press, 2004, . Google Books.
External links
APHRODITE from The Theoi Project – information from classical literature, Greek and Roman art
The Glory which Was Greece from a Female Perspective
Sappho's Hymn to Aphrodite, with a brief explanation
Warburg Institute Iconographic Database (ca 2450 images of Aphrodite)
|
https://en.wikipedia.org/wiki/Astrometry
|
Astrometry is a branch of astronomy that involves precise measurements of the positions and movements of stars and other celestial bodies. It provides information on the kinematics and physical origin of the Solar System and of our galaxy, the Milky Way.
History
The history of astrometry is linked to the history of star catalogues, which gave astronomers reference points for objects in the sky so they could track their movements. This can be dated back to Hipparchus, who in the 2nd century BC used the catalogue of his predecessors Timocharis and Aristillus to discover Earth's precession. In doing so, he also developed the brightness scale still in use today. Hipparchus compiled a catalogue of at least 850 stars and their positions. Hipparchus's successor, Ptolemy, included a catalogue of 1,022 stars in his work the Almagest, giving their location, coordinates, and brightness.
In the 10th century, Abd al-Rahman al-Sufi carried out observations on the stars and described their positions, magnitudes and colors; furthermore, he provided drawings for each constellation, which are depicted in his Book of Fixed Stars. Ibn Yunus recorded more than 10,000 entries for the Sun's position over many years using a large astrolabe with a diameter of nearly 1.4 metres. His observations on eclipses were still used centuries later in Simon Newcomb's investigations on the motion of the Moon, while his observations of the motions of the planets Jupiter and Saturn informed Laplace's studies of the obliquity of the ecliptic and the inequalities of Jupiter and Saturn. In the 15th century, the Timurid astronomer Ulugh Beg compiled the Zij-i-Sultani, in which he catalogued 1,019 stars. Like the earlier catalogues of Hipparchus and Ptolemy, Ulugh Beg's catalogue is estimated to have been precise to within approximately 20 minutes of arc.
In the 16th century, Tycho Brahe used improved instruments, including large mural instruments, to measure star positions more accurately than previously, with a precision of 15–35 arcsec. Taqi al-Din measured the right ascension of the stars at the Constantinople Observatory of Taqi ad-Din using the "observational clock" he invented. When telescopes became commonplace, setting circles sped up measurements.
James Bradley first tried to measure stellar parallaxes in 1729. The stellar motion proved too small for his telescope to detect, but he instead discovered the aberration of light and the nutation of the Earth's axis. His cataloguing of 3,222 stars was refined in 1807 by Friedrich Bessel, the father of modern astrometry. Bessel made the first measurement of stellar parallax: 0.3 arcsec for the binary star 61 Cygni. In 1872, William Huggins used spectroscopy to measure the radial velocity of several prominent stars, including Sirius.
Because stellar parallaxes are so difficult to measure, only about 60 had been obtained by the end of the 19th century, mostly by use of the filar micrometer. Astrographs using astronomical photographic plates sped up the process in the early 20th century. Automated plate-measuring machines and the more sophisticated computer technology of the 1960s allowed more efficient compilation of star catalogues. The Carte du Ciel project, started in the late 19th century to improve star mapping, was never finished, but it made photography a common technique for astrometry. In the 1980s, charge-coupled devices (CCDs) replaced photographic plates and reduced optical uncertainties to one milliarcsecond. This technology made astrometry less expensive, opening the field to an amateur audience.
In 1989, the European Space Agency's Hipparcos satellite took astrometry into orbit, where it could be less affected by mechanical forces of the Earth and optical distortions from its atmosphere. Operated from 1989 to 1993, Hipparcos measured large and small angles on the sky with much greater precision than any previous optical telescopes. During its 4-year run, the positions, parallaxes, and proper motions of 118,218 stars were determined with an unprecedented degree of accuracy. A new "Tycho catalog" drew together a database of 1,058,332 stars to within 20–30 mas (milliarcseconds). Additional catalogues were compiled for the 23,882 double and multiple stars and 11,597 variable stars also analyzed during the Hipparcos mission.
In 2013, the Gaia satellite was launched, improving on Hipparcos's precision by a factor of 100 and enabling the mapping of a billion stars.
Today, the catalogue most often used is USNO-B1.0, an all-sky catalogue that tracks proper motions, positions, magnitudes and other characteristics for over one billion stellar objects. During the past 50 years, 7,435 Schmidt camera plates were used to complete several sky surveys that make the data in USNO-B1.0 accurate to within 0.2 arcsec.
Applications
Apart from the fundamental function of providing astronomers with a reference frame to report their observations in, astrometry is also fundamental for fields like celestial mechanics, stellar dynamics and galactic astronomy. In observational astronomy, astrometric techniques help identify stellar objects by their unique motions. It is instrumental for keeping time, in that UTC is essentially the atomic time synchronized to Earth's rotation by means of exact astronomical observations. Astrometry is an important step in the cosmic distance ladder because it establishes parallax distance estimates for stars in the Milky Way.
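To make the parallax step of the distance ladder concrete: the distance in parsecs is simply the reciprocal of the annual parallax in arcseconds, d = 1/p. The following is a minimal illustrative sketch (the function name is ours, not from any astronomy library):

```python
def parallax_distance_parsecs(parallax_arcsec: float) -> float:
    """Distance in parsecs from an annual parallax in arcseconds (d = 1/p)."""
    if parallax_arcsec <= 0:
        raise ValueError("parallax must be positive")
    return 1.0 / parallax_arcsec

# Bessel's 0.3 arcsec parallax for 61 Cygni implies roughly 3.3 parsecs
# (about 11 light-years).
print(parallax_distance_parsecs(0.3))
```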
Astrometry has also been used to support claims of extrasolar planet detection by measuring the displacement the proposed planets cause in their parent star's apparent position on the sky, due to their mutual orbit around the center of mass of the system. Astrometry is more accurate in space missions that are not affected by the distorting effects of the Earth's atmosphere. NASA's planned Space Interferometry Mission (SIM PlanetQuest) (now cancelled) was to utilize astrometric techniques to detect terrestrial planets orbiting 200 or so of the nearest solar-type stars. The European Space Agency's Gaia Mission, launched in 2013, applies astrometric techniques in its stellar census. In addition to the detection of exoplanets, it can also be used to determine their mass.
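The size of this astrometric wobble can be estimated from the star-planet mass ratio: a star at distance d parsecs, orbited by a planet of mass m at semi-major axis a AU, is displaced by roughly (m/M)·a/d arcseconds. A hedged sketch of this standard estimate (the constant and function name are illustrative, not from the source):

```python
M_JUP_IN_MSUN = 0.0009546  # approximate mass of Jupiter in solar masses

def astrometric_signature_arcsec(m_planet_mjup: float, m_star_msun: float,
                                 a_au: float, d_pc: float) -> float:
    """Angular semi-amplitude of a star's wobble due to a single planet:
    alpha = (m_planet / m_star) * a / d, with a in AU and d in parsecs."""
    return (m_planet_mjup * M_JUP_IN_MSUN / m_star_msun) * a_au / d_pc

# A Jupiter analogue (5.2 AU) around a Sun-like star at 10 pc produces a
# wobble of only ~0.0005 arcsec (0.5 mas), hence the need for space missions.
print(astrometric_signature_arcsec(1.0, 1.0, 5.2, 10.0))
```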
Astrometric measurements are used by astrophysicists to constrain certain models in celestial mechanics. By measuring the velocities of pulsars, it is possible to put a limit on the asymmetry of supernova explosions. Also, astrometric results are used to determine the distribution of dark matter in the galaxy.
Astronomers use astrometric techniques for the tracking of near-Earth objects. Astrometry is responsible for the detection of many record-breaking Solar System objects. To find such objects astrometrically, astronomers use telescopes to survey the sky and large-area cameras to take pictures at predetermined intervals. By studying these images, they can detect Solar System objects by their movements relative to the background stars, which remain fixed. Once a movement per unit time is observed, astronomers compensate for the parallax caused by Earth's motion during this time and then calculate the heliocentric distance to the object. Using this distance and other photographs, more information about the object, including its orbital elements, can be obtained.
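The comparison step can be sketched as follows: given an object's coordinates in two exposures, its apparent rate of motion across the sky distinguishes it from the fixed background stars. This is a simplified flat-sky illustration of our own devising; a real pipeline would also subtract the parallactic shift from Earth's own motion, as described above:

```python
import math

def sky_motion_arcsec_per_hour(ra1: float, dec1: float,
                               ra2: float, dec2: float,
                               hours: float) -> float:
    """Apparent angular motion between two exposures, coordinates in degrees.
    Uses a small-angle, flat-sky approximation near the object's declination."""
    dra = (ra2 - ra1) * math.cos(math.radians(0.5 * (dec1 + dec2)))
    ddec = dec2 - dec1
    return math.hypot(dra, ddec) * 3600.0 / hours

# An object drifting ~1.5 arcsec/hour against the fixed stars is a candidate
# distant Solar System body; main-belt asteroids move tens of arcsec/hour.
print(sky_motion_arcsec_per_hour(150.0, 10.0, 150.0008, 10.0003, 2.0))
```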
50000 Quaoar and 90377 Sedna are two Solar System objects discovered in this way by Michael E. Brown and others at Caltech using the Palomar Observatory's 48-inch (1.2 m) Samuel Oschin telescope and the Palomar-Quest large-area CCD camera. The ability of astronomers to track the positions and movements of such celestial bodies is crucial to understanding the Solar System and its relationship, past, present, and future, with the other objects in the Universe.
Statistics
A fundamental aspect of astrometry is error correction. Various factors introduce errors into the measurement of stellar positions, including atmospheric conditions, imperfections in the instruments and errors by the observer or the measuring instruments. Many of these errors can be reduced by various techniques, such as through instrument improvements and compensations to the data. The results are then analyzed using statistical methods to compute data estimates and error ranges.
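For instance, repeated measurements of the same coordinate can be combined with an inverse-variance weighted mean, which both reduces the error and quantifies it. A minimal sketch (illustrative only, not a production pipeline):

```python
def weighted_mean_position(positions, sigmas):
    """Inverse-variance weighted mean of repeated measurements of one
    coordinate, plus the standard error of the combined estimate."""
    weights = [1.0 / s ** 2 for s in sigmas]
    total = sum(weights)
    mean = sum(w * x for w, x in zip(weights, positions)) / total
    return mean, (1.0 / total) ** 0.5

# Three measurements of one coordinate (arcsec) with differing errors:
print(weighted_mean_position([10.02, 10.05, 9.98], [0.03, 0.05, 0.04]))
```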
Computer programs
XParallax viu (Free application for Windows)
Astrometrica (Application for Windows)
Astrometry.net (Online blind astrometry)
See also
References
Further reading
External links
MPC Guide to Minor Body Astrometry
Astrometry Department of the U.S. Naval Observatory
USNO Astrometric Catalog and related Products
Planet-Like Body Discovered at Fringes of Our Solar System (2004-03-15)
Mike Brown's Caltech Home Page
Scientific Paper describing Sedna's discovery
The Hipparcos Space Astrometry Mission — on ESA
|
https://en.wikipedia.org/wiki/Alloy
|
An alloy is a mixture of chemical elements of which at least one is a metal. Unlike chemical compounds with metallic bases, an alloy will retain all the properties of a metal in the resulting material, such as electrical conductivity, ductility, opacity, and luster, but may have properties that differ from those of the pure metals, such as increased strength or hardness. In some cases, an alloy may reduce the overall cost of the material while preserving important properties. In other cases, the mixture imparts synergistic properties to the constituent metal elements such as corrosion resistance or mechanical strength.
In an alloy, the atoms are joined by metallic bonding rather than by covalent bonds typically found in chemical compounds. The alloy constituents are usually measured by mass percentage for practical applications, and in atomic fraction for basic science studies. Alloys are usually classified as substitutional or interstitial alloys, depending on the atomic arrangement that forms the alloy. They can be further classified as homogeneous (consisting of a single phase), or heterogeneous (consisting of two or more phases) or intermetallic. An alloy may be a solid solution of metal elements (a single phase, where all metallic grains (crystals) are of the same composition) or a mixture of metallic phases (two or more solutions, forming a microstructure of different crystals within the metal).
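The two conventions are related through the molar masses of the constituents; converting mass percent to atomic percent simply means counting moles. A small illustrative helper (the function name and example values are ours):

```python
def mass_to_atomic_percent(mass_pct: dict, molar_mass: dict) -> dict:
    """Convert mass percentages {element: wt%} to atomic percentages,
    given molar masses in g/mol."""
    moles = {el: mass_pct[el] / molar_mass[el] for el in mass_pct}
    total = sum(moles.values())
    return {el: 100.0 * n / total for el, n in moles.items()}

# A 0.76 wt% carbon steel is about 3.4 at% carbon, because carbon atoms
# are much lighter than iron atoms.
print(mass_to_atomic_percent({"Fe": 99.24, "C": 0.76},
                             {"Fe": 55.845, "C": 12.011}))
```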
Examples of alloys include red gold (gold and copper), white gold (gold and silver), sterling silver (silver and copper), steel or silicon steel (iron with non-metallic carbon or silicon respectively), solder, brass, pewter, duralumin, bronze, and amalgams.
Alloys are used in a wide variety of applications, from the steel alloys, used in everything from buildings to automobiles to surgical tools, to exotic titanium alloys used in the aerospace industry, to beryllium-copper alloys for non-sparking tools.
Characteristics
An alloy is a mixture of chemical elements, which forms an impure substance (admixture) that retains the characteristics of a metal. An alloy is distinct from an impure metal in that, with an alloy, the added elements are well controlled to produce desirable properties, while impure metals such as wrought iron are less controlled, but are often considered useful. Alloys are made by mixing two or more elements, at least one of which is a metal. This is usually called the primary metal or the base metal, and the name of this metal may also be the name of the alloy. The other constituents may or may not be metals but, when mixed with the molten base, they will be soluble and dissolve into the mixture.
The mechanical properties of alloys will often be quite different from those of its individual constituents. A metal that is normally very soft (malleable), such as aluminium, can be altered by alloying it with another soft metal, such as copper. Although both metals are very soft and ductile, the resulting aluminium alloy will have much greater strength. Adding a small amount of non-metallic carbon to iron trades its great ductility for the greater strength of an alloy called steel. Due to its very-high strength, but still substantial toughness, and its ability to be greatly altered by heat treatment, steel is one of the most useful and common alloys in modern use. By adding chromium to steel, its resistance to corrosion can be enhanced, creating stainless steel, while adding silicon will alter its electrical characteristics, producing silicon steel.
Like oil and water, a molten metal may not always mix with another element. For example, pure iron is almost completely insoluble with copper. Even when the constituents are soluble, each will usually have a saturation point, beyond which no more of the constituent can be added. Iron, for example, can hold a maximum of 6.67% carbon. Although the elements of an alloy usually must be soluble in the liquid state, they may not always be soluble in the solid state. If the metals remain soluble when solid, the alloy forms a solid solution, becoming a homogeneous structure consisting of identical crystals, called a phase. If as the mixture cools the constituents become insoluble, they may separate to form two or more different types of crystals, creating a heterogeneous microstructure of different phases, some with more of one constituent than the other. However, in other alloys, the insoluble elements may not separate until after crystallization occurs. If cooled very quickly, they first crystallize as a homogeneous phase, but they are supersaturated with the secondary constituents. As time passes, the atoms of these supersaturated alloys can separate from the crystal lattice, becoming more stable, and forming a second phase that serves to reinforce the crystals internally.
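The 6.67% figure follows from stoichiometry: at saturation the excess carbon is bound in cementite, Fe3C, whose carbon mass fraction can be checked directly (a quick illustrative calculation):

```python
# Mass fraction of carbon in cementite (Fe3C): one C atom per three Fe atoms.
m_fe, m_c = 55.845, 12.011           # molar masses in g/mol
fraction = m_c / (3 * m_fe + m_c)
print(f"{100 * fraction:.2f} wt% C")  # ~6.69 wt%, the usual 6.67% figure
```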
Some alloys, such as electrum—an alloy of silver and gold—occur naturally. Meteorites are sometimes made of naturally occurring alloys of iron and nickel, but are not native to the Earth. One of the first alloys made by humans was bronze, which is a mixture of the metals tin and copper. Bronze was an extremely useful alloy to the ancients, because it is much stronger and harder than either of its components. Steel was another common alloy. However, in ancient times, it could only be created as an accidental byproduct from the heating of iron ore in fires (smelting) during the manufacture of iron. Other ancient alloys include pewter, brass and pig iron. In the modern age, steel can be created in many forms. Carbon steel can be made by varying only the carbon content, producing soft alloys like mild steel or hard alloys like spring steel. Alloy steels can be made by adding other elements, such as chromium, molybdenum, vanadium or nickel, resulting in alloys such as high-speed steel or tool steel. Small amounts of manganese are usually alloyed with most modern steels because of its ability to remove unwanted impurities, like phosphorus, sulfur and oxygen, which can have detrimental effects on the alloy. However, most alloys were not created until the 1900s, such as various aluminium, titanium, nickel, and magnesium alloys. Some modern superalloys, such as incoloy, inconel, and hastelloy, may consist of a multitude of different elements.
An alloy is technically an impure metal, but when referring to alloys, the term impurities usually denotes undesirable elements. Such impurities are introduced from the base metals and alloying elements, but are removed during processing. For instance, sulfur is a common impurity in steel. Sulfur combines readily with iron to form iron sulfide, which is very brittle, creating weak spots in the steel. Lithium, sodium and calcium are common impurities in aluminium alloys, which can have adverse effects on the structural integrity of castings. Conversely, otherwise pure metals that contain unwanted impurities are often called "impure metals" and are not usually referred to as alloys. Oxygen, present in the air, readily combines with most metals to form metal oxides, especially at the higher temperatures encountered during alloying. Great care is often taken during the alloying process to remove excess impurities, using fluxes, chemical additives, or other methods of extractive metallurgy.
Theory
Alloying a metal is done by combining it with one or more other elements. The most common and oldest alloying process is performed by heating the base metal beyond its melting point and then dissolving the solutes into the molten liquid, which may be possible even if the melting point of the solute is far greater than that of the base. For example, in its liquid state, titanium is a very strong solvent capable of dissolving most metals and elements. In addition, it readily absorbs gases like oxygen and burns in the presence of nitrogen. This increases the chance of contamination from any contacting surface, and so must be melted in vacuum induction-heating and special, water-cooled, copper crucibles. However, some metals and solutes, such as iron and carbon, have very high melting-points and were impossible for ancient people to melt. Thus, alloying (in particular, interstitial alloying) may also be performed with one or more constituents in a gaseous state, such as found in a blast furnace to make pig iron (liquid-gas), nitriding, carbonitriding or other forms of case hardening (solid-gas), or the cementation process used to make blister steel (solid-gas). It may also be done with one, more, or all of the constituents in the solid state, such as found in ancient methods of pattern welding (solid-solid), shear steel (solid-solid), or crucible steel production (solid-liquid), mixing the elements via solid-state diffusion.
Adding another element to a metal introduces differences in the size of the atoms, which create internal stresses in the lattice of the metallic crystals; stresses that often enhance its properties. For example, the combination of carbon with iron produces steel, which is stronger than iron, its primary element. The electrical and thermal conductivity of alloys is usually lower than that of the pure metals. The physical properties of an alloy, such as density, reactivity, and Young's modulus, may not differ greatly from those of its base element, but engineering properties such as tensile strength, ductility, and shear strength may be substantially different from those of the constituent materials. This is sometimes a result of the sizes of the atoms in the alloy, because larger atoms exert a compressive force on neighboring atoms, and smaller atoms exert a tensile force on their neighbors, helping the alloy resist deformation. Sometimes alloys may exhibit marked differences in behavior even when small amounts of one element are present. For example, impurities in semiconducting ferromagnetic alloys lead to different properties, as first predicted by White, Hogan, Suhl, Tian Abrie and Nakamura.
Unlike pure metals, most alloys do not have a single melting point, but a melting range during which the material is a mixture of solid and liquid phases (a slush). The temperature at which melting begins is called the solidus, and the temperature when melting is just complete is called the liquidus. For many alloys there is a particular alloy proportion (in some cases more than one), called either a eutectic mixture or a peritectic composition, which gives the alloy a unique and low melting point, and no liquid/solid slush transition.
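Within the melting range, the proportions of solid and liquid at a given temperature can be estimated with the lever rule, a standard phase-diagram construction not described above; the sketch assumes the solidus and liquidus compositions have already been read off the diagram:

```python
def lever_rule(c_overall: float, c_solid: float, c_liquid: float):
    """Weight fractions of solid and liquid in the two-phase (slush) region,
    given the overall alloy composition and the compositions of the solid
    and liquid phases at the temperature of interest (all in wt% solute)."""
    f_liquid = (c_overall - c_solid) / (c_liquid - c_solid)
    return 1.0 - f_liquid, f_liquid

# Overall 30 wt% solute; at this temperature the solid holds 20 wt% and the
# liquid 45 wt% -> 60% solid, 40% liquid by weight.
print(lever_rule(30.0, 20.0, 45.0))
```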
Heat treatment
Alloying elements are added to a base metal, to induce hardness, toughness, ductility, or other desired properties. Most metals and alloys can be work hardened by creating defects in their crystal structure. These defects are created during plastic deformation by hammering, bending, extruding, et cetera, and are permanent unless the metal is recrystallized. Otherwise, some alloys can also have their properties altered by heat treatment. Nearly all metals can be softened by annealing, which recrystallizes the alloy and repairs the defects, but not as many can be hardened by controlled heating and cooling. Many alloys of aluminium, copper, magnesium, titanium, and nickel can be strengthened to some degree by some method of heat treatment, but few respond to this to the same degree as does steel.
The base metal iron of the iron-carbon alloy known as steel undergoes a change in the arrangement (allotropy) of the atoms of its crystal matrix at a certain temperature (usually between about 727 °C and 912 °C, depending on carbon content). This allows the smaller carbon atoms to enter the interstices of the iron crystal. When this diffusion happens, the carbon atoms are said to be in solution in the iron, forming a particular single, homogeneous, crystalline phase called austenite. If the steel is cooled slowly, the carbon can diffuse out of the iron and it will gradually revert to its low temperature allotrope. During slow cooling, the carbon atoms will no longer be as soluble with the iron, and will be forced to precipitate out of solution, nucleating into a more concentrated form of iron carbide (Fe3C) in the spaces between the pure iron crystals. The steel then becomes heterogeneous, as it is formed of two phases: the iron-carbon phase called cementite (or carbide) and the pure-iron phase called ferrite. Such a heat treatment produces a steel that is rather soft. If the steel is cooled quickly, however, the carbon atoms will not have time to diffuse and precipitate out as carbide, but will be trapped within the iron crystals. When rapidly cooled, a diffusionless (martensite) transformation occurs, in which the carbon atoms become trapped in solution. This causes the iron crystals to deform as the crystal structure tries to change to its low temperature state, leaving those crystals very hard but much less ductile (more brittle).
While the high strength of steel results when diffusion and precipitation are prevented (forming martensite), most heat-treatable alloys are precipitation-hardening alloys that depend on the diffusion of alloying elements to achieve their strength. When heated to form a solution and then cooled quickly, these alloys become much softer than normal, during the diffusionless transformation, but then harden as they age. The solutes in these alloys will precipitate over time, forming intermetallic phases, which are difficult to discern from the base metal. Unlike steel, in which the solid solution separates into different crystal phases (carbide and ferrite), precipitation-hardening alloys form different phases within the same crystal. These intermetallic alloys appear homogeneous in crystal structure, but tend to behave heterogeneously, becoming hard and somewhat brittle.
In 1906, precipitation hardening alloys were discovered by Alfred Wilm. Precipitation hardening alloys, such as certain alloys of aluminium, titanium, and copper, are heat-treatable alloys that soften when quenched (cooled quickly), and then harden over time. Wilm had been searching for a way to harden aluminium alloys for use in machine-gun cartridge cases. Knowing that aluminium-copper alloys were heat-treatable to some degree, Wilm tried quenching a ternary alloy of aluminium, copper, and the addition of magnesium, but was initially disappointed with the results. However, when Wilm retested it the next day he discovered that the alloy increased in hardness when left to age at room temperature, and far exceeded his expectations. Although an explanation for the phenomenon was not provided until 1919, duralumin was one of the first "age hardening" alloys used, becoming the primary building material for the first Zeppelins, and was soon followed by many others. Because they often exhibit a combination of high strength and low weight, these alloys became widely used in many forms of industry, including the construction of modern aircraft.
Mechanisms
When a molten metal is mixed with another substance, there are two mechanisms that can cause an alloy to form, called atom exchange and the interstitial mechanism. The relative size of each element in the mix plays a primary role in determining which mechanism will occur. When the atoms are relatively similar in size, the atom exchange method usually happens, where some of the atoms composing the metallic crystals are substituted with atoms of the other constituent. This is called a substitutional alloy. Examples of substitutional alloys include bronze and brass, in which some of the copper atoms are substituted with either tin or zinc atoms respectively.
In the case of the interstitial mechanism, one atom is usually much smaller than the other and can not successfully substitute for the other type of atom in the crystals of the base metal. Instead, the smaller atoms become trapped in the interstitial sites between the atoms of the crystal matrix. This is referred to as an interstitial alloy. Steel is an example of an interstitial alloy, because the very small carbon atoms fit into interstices of the iron matrix.
Stainless steel is an example of a combination of interstitial and substitutional alloys, because the carbon atoms fit into the interstices, but some of the iron atoms are substituted by nickel and chromium atoms.
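The size criterion can be made semi-quantitative with the empirical Hume-Rothery rules, an addition not spelled out above: atomic radii within roughly 15% of each other favour substitution, while a solute radius below roughly 59% of the solvent's allows interstitial sites to be occupied. A rough illustrative screen, using approximate radii in picometres:

```python
def likely_alloy_type(r_solute_pm: float, r_solvent_pm: float) -> str:
    """Rough size-based screen after the empirical Hume-Rothery rules:
    radii within ~15% favour substitution; a solute below ~59% of the
    solvent radius can occupy interstitial sites."""
    ratio = r_solute_pm / r_solvent_pm
    if abs(1.0 - ratio) <= 0.15:
        return "substitutional"
    if ratio <= 0.59:
        return "interstitial"
    return "limited solid solubility likely"

print(likely_alloy_type(133, 128))  # Zn in Cu (brass): substitutional
print(likely_alloy_type(71, 124))   # C in Fe (steel): interstitial
```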
History and examples
Meteoric iron
The use of alloys by humans started with the use of meteoric iron, a naturally occurring alloy of nickel and iron. It is the main constituent of iron meteorites. As no metallurgic processes were used to separate iron from nickel, the alloy was used as it was. Meteoric iron could be forged from a red heat to make objects such as tools, weapons, and nails. In many cultures it was shaped by cold hammering into knives and arrowheads, and meteorites themselves were sometimes used as anvils. Meteoric iron was very rare and valuable, and difficult for ancient people to work.
Bronze and brass
Iron is usually found as iron ore on Earth, except for one deposit of native iron in Greenland, which was used by the Inuit. Native copper, however, was found worldwide, along with silver, gold, and platinum, which were also used to make tools, jewelry, and other objects since Neolithic times. Copper was the hardest of these metals, and the most widely distributed. It became one of the most important metals to the ancients. Around 10,000 years ago in the highlands of Anatolia (Turkey), humans learned to smelt metals such as copper and tin from ore. Around 2500 BC, people began alloying the two metals to form bronze, which was much harder than its ingredients. Tin was rare, however, being found mostly in Great Britain. In the Middle East, people began alloying copper with zinc to form brass. Ancient civilizations took into account the mixture and the various properties it produced, such as hardness, toughness and melting point, under various conditions of temperature and work hardening, developing much of the information contained in modern alloy phase diagrams. For example, arrowheads from the Chinese Qin dynasty (around 200 BC) were often constructed with a hard bronze-head, but a softer bronze-tang, combining the alloys to prevent both dulling and breaking during use.
Amalgams
Mercury has been smelted from cinnabar for thousands of years. Mercury dissolves many metals, such as gold, silver, and tin, to form amalgams (an alloy in a soft paste or liquid form at ambient temperature). Amalgams have been used since 200 BC in China for gilding objects such as armor and mirrors with precious metals. The ancient Romans often used mercury-tin amalgams for gilding their armor. The amalgam was applied as a paste and then heated until the mercury vaporized, leaving the gold, silver, or tin behind. Mercury was often used in mining, to extract precious metals like gold and silver from their ores.
Precious metals
Many ancient civilizations alloyed metals for purely aesthetic purposes. In ancient Egypt and Mycenae, gold was often alloyed with copper to produce red-gold, or iron to produce a bright burgundy-gold. Gold was often found alloyed with silver or other metals to produce various types of colored gold. These metals were also used to strengthen each other, for more practical purposes. Copper was often added to silver to make sterling silver, increasing its strength for use in dishes, silverware, and other practical items. Quite often, precious metals were alloyed with less valuable substances as a means to deceive buyers. Around 250 BC, Archimedes was commissioned by the King of Syracuse to find a way to check the purity of the gold in a crown, leading to the famous bath-house shouting of "Eureka!" upon the discovery of Archimedes' principle.
Pewter
The term pewter covers a variety of alloys consisting primarily of tin. As a pure metal, tin is much too soft to use for most practical purposes. However, during the Bronze Age, tin was a rare metal in many parts of Europe and the Mediterranean, so it was often valued higher than gold. To make jewellery, cutlery, or other objects from tin, workers usually alloyed it with other metals to increase strength and hardness. These metals were typically lead, antimony, bismuth or copper. These solutes were sometimes added individually in varying amounts, or added together, making a wide variety of objects, ranging from practical items such as dishes, surgical tools, candlesticks or funnels, to decorative items like earrings and hair clips.
The earliest examples of pewter come from ancient Egypt, around 1450 BC. The use of pewter was widespread across Europe, from France to Norway and Britain (where most of the ancient tin was mined) to the Near East. The alloy was also used in China and the Far East, arriving in Japan around 800 AD, where it was used for making objects like ceremonial vessels, tea canisters, or chalices used in shinto shrines.
Iron
The first known smelting of iron began in Anatolia, around 1800 BC. Called the bloomery process, it produced very soft but ductile wrought iron. By 800 BC, iron-making technology had spread to Europe, arriving in Japan around 700 AD. Pig iron, a very hard but brittle alloy of iron and carbon, was being produced in China as early as 1200 BC, but did not arrive in Europe until the Middle Ages. Pig iron has a lower melting point than iron, and was used for making cast-iron. However, these metals found little practical use until the introduction of crucible steel around 300 BC. These steels were of poor quality, and the introduction of pattern welding, around the 1st century AD, sought to balance the extreme properties of the alloys by laminating them, to create a tougher metal. Around 700 AD, the Japanese began folding bloomery-steel and cast-iron in alternating layers to increase the strength of their swords, using clay fluxes to remove slag and impurities. This method of Japanese swordsmithing produced one of the purest steel alloys of the ancient world.
While the use of iron started to become more widespread around 1200 BC, mainly because of interruptions in the trade routes for tin, the metal was much softer than bronze. However, very small amounts of steel (an alloy of iron and around 1% carbon) were always a byproduct of the bloomery process. The ability to modify the hardness of steel by heat treatment had been known since 1100 BC, and the rare material was valued for the manufacture of tools and weapons. Because the ancients could not produce temperatures high enough to melt iron fully, the production of steel in decent quantities did not occur until the introduction of blister steel during the Middle Ages. This method introduced carbon by heating wrought iron in charcoal for long periods of time, but the absorption of carbon in this manner is extremely slow; the penetration was therefore not very deep, so the alloy was not homogeneous. In 1740, Benjamin Huntsman began melting blister steel in a crucible to even out the carbon content, creating the first process for the mass production of tool steel. Huntsman's process was used for manufacturing tool steel until the early 1900s.
The introduction of the blast furnace to Europe in the Middle Ages meant that people could produce pig iron in much higher volumes than wrought iron. Because pig iron could be melted, people began to develop processes to reduce the carbon in liquid pig iron to create steel. Puddling had been used in China since the first century and was introduced in Europe during the 1700s; in this process, molten pig iron was stirred while exposed to the air to remove carbon by oxidation. In 1858, Henry Bessemer developed a process of steel-making by blowing hot air through liquid pig iron to reduce the carbon content. The Bessemer process led to the first large-scale manufacture of steel.
Steel is an alloy of iron and carbon, but the term alloy steel usually only refers to steels that contain other elements—like vanadium, molybdenum, or cobalt—in amounts sufficient to alter the properties of the base steel. Since ancient times, when steel was used primarily for tools and weapons, the methods of producing and working the metal were often closely guarded secrets. Even long after the Age of Reason, the steel industry was very competitive, and manufacturers went to great lengths to keep their processes confidential, resisting any attempts to scientifically analyze the material for fear it would reveal their methods. For example, the people of Sheffield, a center of steel production in England, were known to routinely bar visitors and tourists from entering town to deter industrial espionage. Thus, almost no metallurgical information existed about steel until 1860. Because of this lack of understanding, steel was not generally considered an alloy until the decades between 1930 and 1970 (primarily due to the work of scientists like William Chandler Roberts-Austen, Adolf Martens, and Edgar Bain), so "alloy steel" became the popular term for ternary and quaternary steel alloys.
After Benjamin Huntsman developed his crucible steel in 1740, he began experimenting with the addition of elements like manganese (in the form of a high-manganese pig iron called spiegeleisen), which helped remove impurities such as phosphorus and oxygen; a process adopted by Bessemer and still used in modern steels (albeit in concentrations low enough to still be considered carbon steel). Afterward, many people began experimenting with various alloys of steel without much success. However, in 1882, Robert Hadfield, a pioneer in steel metallurgy, took an interest and produced a steel alloy containing around 12% manganese. Called mangalloy, it exhibited extreme hardness and toughness, becoming the first commercially viable alloy steel. Afterward, he created silicon steel, launching the search for other possible alloys of steel.
Robert Forester Mushet found that adding tungsten to steel produced a very hard edge that resisted losing its hardness at high temperatures. "R. Mushet's special steel" (RMS) became the first high-speed steel. Mushet's steel was quickly replaced by tungsten carbide steel, developed by Taylor and White in 1900, in which they doubled the tungsten content and added small amounts of chromium and vanadium, producing a superior steel for use in lathes and machining tools. In 1903, the Wright brothers used a chromium-nickel steel to make the crankshaft for their airplane engine, while in 1908 Henry Ford began using vanadium steels for parts like crankshafts and valves in his Model T Ford, due to their higher strength and resistance to high temperatures. In 1912, the Krupp Ironworks in Germany developed a rust-resistant steel by adding 21% chromium and 7% nickel, producing the first stainless steel.
Others
Due to their high reactivity, most metals were not discovered until the 19th century. A method for extracting aluminium from bauxite was proposed by Humphry Davy in 1807, using an electric arc. Although his attempts were unsuccessful, by 1855 the first sales of pure aluminium reached the market. However, as extractive metallurgy was still in its infancy, most aluminium-extraction processes produced unintended alloys contaminated with other elements found in the ore, the most abundant of which was copper. These aluminium-copper alloys (at the time termed "aluminum bronze") preceded pure aluminium, offering greater strength and hardness over the soft, pure metal, and were found to be heat treatable to a slight degree. However, because they remained soft and of limited hardenability, these alloys found little practical use and were more of a novelty, until the Wright brothers used an aluminium alloy to construct the first airplane engine in 1903. Between 1865 and 1910, processes for extracting many other metals were discovered, such as chromium, vanadium, tungsten, iridium, cobalt, and molybdenum, and various alloys were developed.
Prior to 1910, research mainly consisted of private individuals tinkering in their own laboratories. As the aircraft and automotive industries began growing, however, research into alloys became an industrial effort in the years following 1910: new magnesium alloys were developed for pistons and wheels in cars, pot metal was developed for levers and knobs, and aluminium alloys developed for airframes and aircraft skins were put into use.
See also
Alloy broadening
CALPHAD
Ideal mixture
List of alloys
|
https://en.wikipedia.org/wiki/Acoustics
|
Acoustics is a branch of physics that deals with the study of mechanical waves in gases, liquids, and solids, including topics such as vibration, sound, ultrasound and infrasound. A scientist who works in the field of acoustics is an acoustician, while someone working in the field of acoustics technology may be called an acoustical engineer. The application of acoustics is present in almost all aspects of modern society, the most obvious being the audio and noise control industries.
Hearing is one of the most crucial means of survival in the animal world and speech is one of the most distinctive characteristics of human development and culture. Accordingly, the science of acoustics spreads across many facets of human society—music, medicine, architecture, industrial production, warfare and more. Likewise, animal species such as songbirds and frogs use sound and hearing as a key element of mating rituals or for marking territories. Art, craft, science and technology have provoked one another to advance the whole, as in many other fields of knowledge. Robert Bruce Lindsay's "Wheel of Acoustics" is a well-accepted overview of the various fields in acoustics.
History
Etymology
The word "acoustic" is derived from the Greek word ἀκουστικός (akoustikos), meaning "of or for hearing, ready to hear" and that from ἀκουστός (akoustos), "heard, audible", which in turn derives from the verb ἀκούω (akouo), "I hear".
The Latin synonym is "sonic", after which the term sonics used to be a synonym for acoustics and later a branch of acoustics. Frequencies above and below the audible range are called "ultrasonic" and "infrasonic", respectively.
Early research in acoustics
In the 6th century BC, the ancient Greek philosopher Pythagoras wanted to know why some combinations of musical sounds seemed more beautiful than others, and he found answers in terms of numerical ratios representing the harmonic overtone series on a string. He is reputed to have observed that when the lengths of vibrating strings are expressible as ratios of integers (e.g. 2 to 3, 3 to 4), the tones produced will be harmonious, and the smaller the integers the more harmonious the sounds. For example, a string of a certain length would sound particularly harmonious with a string of twice the length (other factors being equal). In modern parlance, if a string sounds the note C when plucked, a string twice as long will sound a C an octave lower. In one system of musical tuning, the tones in between are then given by 16:9 for D, 8:5 for E, 3:2 for F, 4:3 for G, 6:5 for A, and 16:15 for B, in ascending order.
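Since frequency varies inversely with string length (other factors being equal), these string-length ratios convert directly into the familiar just-intonation frequency ratios. A minimal Python sketch (my own illustration, not from the source), taking the C string of length 2, sounding an octave below the unit string, as the reference:

```python
from fractions import Fraction

# String lengths relative to a unit-length C string; the tones of the
# scale below it lie between length 1 (C) and length 2 (C an octave lower).
string_lengths = {
    "D": Fraction(16, 9), "E": Fraction(8, 5), "F": Fraction(3, 2),
    "G": Fraction(4, 3),  "A": Fraction(6, 5), "B": Fraction(16, 15),
}

low_c_length = Fraction(2)  # a string twice as long sounds an octave lower
for note, length in string_lengths.items():
    # Frequency is inversely proportional to length, so the frequency ratio
    # relative to the low C is length(C) / length(note).
    print(note, low_c_length / length)
# D 9/8, E 5/4, F 4/3, G 3/2, A 5/3, B 15/8 -- the just-intonation major scale
```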
Aristotle (384–322 BC) understood that sound consisted of compressions and rarefactions of air which "falls upon and strikes the air which is next to it...", a very good expression of the nature of wave motion. On Things Heard, generally ascribed to Strato of Lampsacus, states that the pitch is related to the frequency of vibrations of the air and to the speed of sound.
In about 20 BC, the Roman architect and engineer Vitruvius wrote a treatise on the acoustic properties of theaters including discussion of interference, echoes, and reverberation—the beginnings of architectural acoustics. In Book V of his De architectura (The Ten Books of Architecture) Vitruvius describes sound as a wave comparable to a water wave extended to three dimensions, which, when interrupted by obstructions, would flow back and break up following waves. He described the ascending seats in ancient theaters as designed to prevent this deterioration of sound and also recommended bronze vessels of appropriate sizes be placed in theaters to resonate with the fourth, fifth and so on, up to the double octave, in order to resonate with the more desirable, harmonious notes.
During the Islamic Golden Age, Abū Rayhān al-Bīrūnī (973–1048) is believed to have postulated that the speed of sound was much slower than the speed of light.
The physical understanding of acoustical processes advanced rapidly during and after the Scientific Revolution. Mainly Galileo Galilei (1564–1642) but also Marin Mersenne (1588–1648), independently, discovered the complete laws of vibrating strings (completing what Pythagoras and Pythagoreans had started 2000 years earlier). Galileo wrote "Waves are produced by the vibrations of a sonorous body, which spread through the air, bringing to the tympanum of the ear a stimulus which the mind interprets as sound", a remarkable statement that points to the beginnings of physiological and psychological acoustics. Experimental measurements of the speed of sound in air were carried out successfully between 1630 and 1680 by a number of investigators, prominently Mersenne. Meanwhile, Newton (1642–1727) derived the relationship for wave velocity in solids, a cornerstone of physical acoustics (Principia, 1687).
Age of Enlightenment and onward
Substantial progress in acoustics, resting on firmer mathematical and physical concepts, was made during the eighteenth century by Euler (1707–1783), Lagrange (1736–1813), and d'Alembert (1717–1783). During this era, continuum physics, or field theory, began to receive a definite mathematical structure. The wave equation emerged in a number of contexts, including the propagation of sound in air.
In the nineteenth century the major figures of mathematical acoustics were Helmholtz in Germany, who consolidated the field of physiological acoustics, and Lord Rayleigh in England, who combined the previous knowledge with his own copious contributions to the field in his monumental work The Theory of Sound (1877). Also in the 19th century, Wheatstone, Ohm, and Henry developed the analogy between electricity and acoustics.
The twentieth century saw a burgeoning of technological applications of the large body of scientific knowledge that was by then in place. The first such application was Sabine's groundbreaking work in architectural acoustics, and many others followed. Underwater acoustics was used for detecting submarines in the first World War. Sound recording and the telephone played important roles in a global transformation of society. Sound measurement and analysis reached new levels of accuracy and sophistication through the use of electronics and computing. The ultrasonic frequency range enabled wholly new kinds of application in medicine and industry. New kinds of transducers (generators and receivers of acoustic energy) were invented and put to use.
Definition
Acoustics is defined by ANSI/ASA S1.1-2013 as "(a) Science of sound, including its production, transmission, and effects, including biological and psychological effects. (b) Those qualities of a room that, together, determine its character with respect to auditory effects."
The study of acoustics revolves around the generation, propagation and reception of mechanical waves and vibrations.
A chain of basic steps can be identified in any acoustical event or process. There are many kinds of cause, both natural and volitional. There are many kinds of transduction process that convert energy from some other form into sonic energy, producing a sound wave. There is one fundamental equation that describes sound wave propagation, the acoustic wave equation, but the phenomena that emerge from it are varied and often complex. The wave carries energy throughout the propagating medium. Eventually this energy is transduced again into other forms, in ways that again may be natural and/or volitionally contrived. The final effect may be purely physical or it may reach far into the biological or volitional domains. The five basic steps are found equally well whether we are talking about an earthquake, a submarine using sonar to locate its foe, or a band playing in a rock concert.
The central stage in the acoustical process is wave propagation. This falls within the domain of physical acoustics. In fluids, sound propagates primarily as a pressure wave. In solids, mechanical waves can take many forms including longitudinal waves, transverse waves and surface waves.
Acoustics looks first at the pressure levels and frequencies in the sound wave and how the wave interacts with the environment. This interaction can be described as either a diffraction, interference or a reflection or a mix of the three. If several media are present, a refraction can also occur. Transduction processes are also of special importance to acoustics.
Fundamental concepts
Wave propagation: pressure levels
In fluids such as air and water, sound waves propagate as disturbances in the ambient pressure level. While this disturbance is usually small, it is still noticeable to the human ear. The smallest sound that a person can hear, known as the threshold of hearing, is nine orders of magnitude smaller than the ambient pressure. The loudness of these disturbances is related to the sound pressure level (SPL) which is measured on a logarithmic scale in decibels.
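To make the logarithmic decibel scale concrete, the following sketch (my own illustration; the conventional reference pressure of 20 µPa in air is assumed) converts an RMS pressure to a sound pressure level:

```python
import math

P_REF = 20e-6  # conventional reference pressure in air, in pascals (20 uPa)

def sound_pressure_level(p_rms: float) -> float:
    """Sound pressure level in decibels for an RMS pressure p_rms in pascals."""
    return 20.0 * math.log10(p_rms / P_REF)

print(sound_pressure_level(20e-6))   # 0 dB: the threshold of hearing
print(sound_pressure_level(101325))  # ambient atmospheric pressure: ~194 dB
```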
Wave propagation: frequency
Physicists and acoustic engineers tend to discuss sound pressure levels in terms of frequencies, partly because this is how our ears interpret sound. What we experience as "higher pitched" or "lower pitched" sounds are pressure vibrations having a higher or lower number of cycles per second. In a common technique of acoustic measurement, acoustic signals are sampled in time and then presented in more meaningful forms such as octave bands or time-frequency plots. Both of these popular methods are used to analyze sound and better understand acoustic phenomena.
The entire spectrum can be divided into three sections: audio, ultrasonic, and infrasonic. The audio range falls between 20 Hz and 20,000 Hz. This range is important because its frequencies can be detected by the human ear. This range has a number of applications, including speech communication and music. The ultrasonic range refers to the very high frequencies: 20,000 Hz and higher. This range has shorter wavelengths which allow better resolution in imaging technologies. Medical applications such as ultrasonography and elastography rely on the ultrasonic frequency range. On the other end of the spectrum, the lowest frequencies are known as the infrasonic range. These frequencies can be used to study geological phenomena such as earthquakes.
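A trivial sketch of this three-way classification (boundary values taken from the ranges above):

```python
def classify_frequency(hz: float) -> str:
    """Label a frequency as infrasonic, audio, or ultrasonic."""
    if hz < 20:
        return "infrasonic"
    if hz <= 20_000:
        return "audio"
    return "ultrasonic"

print(classify_frequency(440))  # audio (concert pitch A)
print(classify_frequency(5e6))  # ultrasonic (medical imaging range)
print(classify_frequency(0.5))  # infrasonic (e.g. seismic signals)
```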
Analytic instruments such as the spectrum analyzer facilitate visualization and measurement of acoustic signals and their properties. The spectrogram produced by such an instrument is a graphical display of the time varying pressure level and frequency profiles which give a specific acoustic signal its defining character.
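For instance, the data behind such a time-frequency display can be computed with standard numerical tools. A hedged sketch for a synthetic rising-pitch test signal (the choice of NumPy/SciPy is mine; the text does not prescribe any instrument or library):

```python
import numpy as np
from scipy.signal import spectrogram

fs = 8000                      # sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)  # one second of sample times
x = np.sin(2 * np.pi * (200 + 400 * t) * t)  # a chirp whose pitch rises

# Split the signal into short windows and estimate the power in each
# frequency band of each window -- the raw data of a spectrogram plot.
f, seg_times, Sxx = spectrogram(x, fs=fs, nperseg=256)
print(Sxx.shape)  # (number of frequency bins, number of time segments)
```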
Transduction in acoustics
A transducer is a device for converting one form of energy into another. In an electroacoustic context, this means converting sound energy into electrical energy (or vice versa). Electroacoustic transducers include loudspeakers, microphones, particle velocity sensors, hydrophones and sonar projectors. These devices convert a sound wave to or from an electric signal. The most widely used transduction principles are electromagnetism, electrostatics and piezoelectricity.
The transducers in most common loudspeakers (e.g. woofers and tweeters), are electromagnetic devices that generate waves using a suspended diaphragm driven by an electromagnetic voice coil, sending off pressure waves. Electret microphones and condenser microphones employ electrostatics—as the sound wave strikes the microphone's diaphragm, it moves and induces a voltage change. The ultrasonic systems used in medical ultrasonography employ piezoelectric transducers. These are made from special ceramics in which mechanical vibrations and electrical fields are interlinked through a property of the material itself.
Acoustician
An acoustician is an expert in the science of sound.
Education
There are many types of acoustician, but they usually have a bachelor's degree or higher qualification. Some possess a degree in acoustics, while others enter the discipline via studies in fields such as physics or engineering. Much work in acoustics requires a good grounding in mathematics and science. Many acoustic scientists work in research and development. Some conduct basic research to advance our knowledge of the perception (e.g. hearing, psychoacoustics or neurophysiology) of speech, music and noise. Other acoustic scientists advance understanding of how sound is affected as it moves through environments, e.g. underwater acoustics, architectural acoustics or structural acoustics. Other areas of work are listed under subdisciplines below. Acoustic scientists work in government, university and private industry laboratories. Many go on to work in acoustical engineering. Some positions, such as faculty (academic staff) posts, require a Doctor of Philosophy.
Subdisciplines
Archaeoacoustics
Archaeoacoustics, also known as the archaeology of sound, is one of the few ways to experience the past with senses other than sight. Archaeoacoustics is studied by testing the acoustic properties of prehistoric sites, including caves. Iegor Reznikoff, a sound archaeologist, studies the acoustic properties of caves through natural sounds like humming and whistling. Archaeological theories of acoustics are focused on ritualistic purposes as well as echolocation in the caves. In archaeology, acoustic sounds and rituals directly correlate, as specific sounds were meant to bring ritual participants closer to a spiritual awakening. Parallels can also be drawn between cave wall paintings and the acoustic properties of the cave; they are both dynamic. Because archaeoacoustics is a fairly new archaeological subject, acoustic sound is still being tested in these prehistoric sites today.
Aeroacoustics
Aeroacoustics is the study of noise generated by air movement, for instance via turbulence, and the movement of sound through the fluid air. This knowledge is applied in acoustical engineering to study how to quieten aircraft. Aeroacoustics is important for understanding how wind musical instruments work.
Acoustic signal processing
Acoustic signal processing is the electronic manipulation of acoustic signals. Applications include: active noise control; design for hearing aids or cochlear implants; echo cancellation; music information retrieval, and perceptual coding (e.g. MP3 or Opus).
Architectural acoustics
Architectural acoustics (also known as building acoustics) involves the scientific understanding of how to achieve good sound within a building. It typically involves the study of speech intelligibility, speech privacy, music quality, and vibration reduction in the built environment. Commonly studied environments are hospitals, classrooms, dwellings, performance venues, and recording and broadcasting studios. Focus considerations include room acoustics, airborne and impact transmission in building structures, airborne and structure-borne noise control, noise control of building systems, and electroacoustic systems.
Bioacoustics
Bioacoustics is the scientific study of the hearing and calls of animals, as well as how animals are affected by the acoustics and sounds of their habitat.
Electroacoustics
This subdiscipline is concerned with the recording, manipulation and reproduction of audio using electronics. This might include products such as mobile phones, large scale public address systems or virtual reality systems in research laboratories.
Environmental noise and soundscapes
Environmental acoustics is concerned with noise and vibration caused by railways, road traffic, aircraft, industrial equipment and recreational activities. The main aim of these studies is to reduce levels of environmental noise and vibration. Research work now also has a focus on the positive use of sound in urban environments: soundscapes and tranquility.
Musical acoustics
Musical acoustics is the study of the physics of acoustic instruments; the audio signal processing used in electronic music; the computer analysis of music and composition, and the perception and cognitive neuroscience of music.
Noise
The goal of this acoustics sub-discipline is to reduce the impact of unwanted sound. The scope of noise studies includes the generation and propagation of noise and its impact on structures, objects, and people, along with:
Innovative model development
Measurement techniques
Mitigation strategies
Input to the establishment of standards and regulations
Noise research investigates the impact of noise on humans and animals, including work on definitions, abatement, transportation noise, hearing protection, jet and rocket noise, building system noise and vibration, atmospheric sound propagation, soundscapes, and low-frequency sound.
Psychoacoustics
Many studies have been conducted to identify the relationship between acoustics and cognition, more commonly known as psychoacoustics, in which what one hears is a combination of perceptual and biological factors. The information carried by sound waves through the ear is understood and interpreted by the brain, emphasizing the connection between the mind and acoustics. Psychological changes have been observed as brain waves slow down or speed up as a result of varying auditory stimuli, which can in turn affect the way one thinks, feels, or even behaves. This correlation can be seen in normal, everyday situations: listening to an upbeat or uptempo song can cause one's foot to start tapping, while a slower song can leave one feeling calm and serene. In a deeper biological look at the phenomenon of psychoacoustics, it has been found that the central nervous system is activated by basic acoustical characteristics of music. By observing how the central nervous system, which includes the brain and spine, is influenced by acoustics, the pathway by which acoustics affects the mind, and essentially the body, becomes evident.
Speech
Acousticians study the production, processing and perception of speech. Speech recognition and speech synthesis are two important areas of speech processing using computers. The subject also overlaps with the disciplines of physics, physiology, psychology, and linguistics.
Structural vibration and dynamics
Structural acoustics is the study of motions and interactions of mechanical systems with their environments and the methods of their measurement, analysis, and control. There are several sub-disciplines found within this regime:
Modal analysis
Material characterization
Structural health monitoring
Acoustic metamaterials
Friction acoustics
Applications might include: ground vibrations from railways; vibration isolation to reduce vibration in operating theatres; studying how vibration can damage health (vibration white finger); vibration control to protect a building from earthquakes, or measuring how structure-borne sound moves through buildings.
Ultrasonics
Ultrasonics deals with sounds at frequencies too high to be heard by humans. Specialisms include medical ultrasonics (including medical ultrasonography), sonochemistry, ultrasonic testing, material characterisation and underwater acoustics (sonar).
Underwater acoustics
Underwater acoustics is the scientific study of natural and man-made sounds underwater. Applications include sonar to locate submarines, underwater communication by whales, climate change monitoring by measuring sea temperatures acoustically, sonic weapons, and marine bioacoustics.
Acoustic conferences
InterNoise
NoiseCon
Forum Acousticum
SAE Noise and Vibration Conference and Exhibition
Professional societies
The Acoustical Society of America (ASA)
Australian Acoustical Society (AAS)
The European Acoustics Association (EAA)
Institute of Electrical and Electronics Engineers (IEEE)
Institute of Acoustics (IoA UK)
The Audio Engineering Society (AES)
American Society of Mechanical Engineers, Noise Control and Acoustics Division (ASME-NCAD)
International Commission for Acoustics (ICA)
American Institute of Aeronautics and Astronautics, Aeroacoustics (AIAA)
International Computer Music Association (ICMA)
Academic journals
Acoustics Today
Acta Acustica united with Acustica
Advances in Acoustics and Vibration
Applied Acoustics
Building Acoustics
IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control
Journal of the Acoustical Society of America (JASA)
Journal of the Acoustical Society of America, Express Letters (JASA-EL)
Journal of the Audio Engineering Society
Journal of Sound and Vibration (JSV)
Journal of Vibration and Acoustics American Society of Mechanical Engineers
MDPI Acoustics
Noise Control Engineering Journal
SAE International Journal of Vehicle Dynamics, Stability and NVH
Ultrasonics (journal)
Ultrasonics Sonochemistry
Wave Motion
See also
Outline of acoustics
Acoustic attenuation
Acoustic emission
Acoustic engineering
Acoustic impedance
Acoustic levitation
Acoustic location
Acoustic phonetics
Acoustic streaming
Acoustic tags
Acoustic thermometry
Acoustic wave
Audiology
Auditory illusion
Diffraction
Doppler effect
Fisheries acoustics
Friction acoustics
Helioseismology
Lamb wave
Linear elasticity
The Little Red Book of Acoustics (in the UK)
Longitudinal wave
Musicology
Music therapy
Noise pollution
One-Way Wave Equation
Phonon
Picosecond ultrasonics
Rayleigh wave
Shock wave
Seismology
Sonification
Sonochemistry
Soundproofing
Soundscape
Sonic boom
Sonoluminescence
Surface acoustic wave
Thermoacoustics
Transverse wave
Wave equation
External links
International Commission for Acoustics
European Acoustics Association
Acoustical Society of America
Institute of Noise Control Engineers
National Council of Acoustical Consultants
Institute of Acoustic in UK
Australian Acoustical Society (AAS)
|
https://en.wikipedia.org/wiki/Applet
|
In computing, an applet is any small application that performs one specific task, running within the scope of a dedicated widget engine or a larger program, often as a plug-in. The term is frequently used to refer to a Java applet, a program written in the Java programming language that is designed to be placed on a web page. Applets are typical examples of transient and auxiliary applications that do not monopolize the user's attention. Applets are not full-featured application programs, and are intended to be easily accessible.
History
The word applet was first used in 1990 in PC Magazine. However, the concept of an applet, or more broadly a small interpreted program downloaded and executed by the user, dates at least to RFC 5 (1969) by Jeff Rulifson, which described the Decode-Encode Language, which was designed to allow remote use of the oN-Line System over ARPANET, by downloading small programs to enhance the interaction. This has been specifically credited as a forerunner of Java's downloadable programs in RFC 2555.
Applet as an extension of other software
In some cases, an applet does not run independently. Such applets must run in a container provided by a host program, through a plugin, or within a variety of other environments, including mobile devices that support the applet programming model.
Web-based applets
Applets were used to provide interactive features to web applications that historically could not be provided by HTML alone. They could capture mouse input and also had controls like buttons or check boxes. In response to the user action, an applet could change the provided graphic content. This made applets well suited for demonstration, visualization, and teaching. There were online applet collections for studying various subjects, from physics to heart physiology. Applets were also used to create online game collections that allowed players to compete against live opponents in real-time.
An applet could also be a text area only, providing, for instance, a cross-platform command-line interface to some remote system. If needed, an applet could leave the dedicated area and run as a separate window. However, applets had very little control over web page content outside the applet's dedicated area, so they were less useful for improving the site appearance in general (although applets such as news tickers and WYSIWYG editors are also known). Applets could also play media in formats that are not natively supported by the browser.
HTML pages could embed parameters that were passed to the applet. Hence, the same applet could appear differently depending on the parameters that were passed.
Examples of Web-based applets include:
QuickTime movies
Flash movies
Windows Media Player applets, used to display embedded video files in Internet Explorer (and other browsers that supported the plugin)
3D modeling display applets, used to rotate and zoom a model
Browser games that were applet-based, though some developed into fully functional applications that required installation.
Applet vs. subroutine
A larger application distinguishes its applets through several features:
Applets execute only on the "client" platform environment of a system, as contrasted with a servlet. As such, an applet provides functionality or performance beyond the default capabilities of its container (the browser).
The container restricts applets' capabilities.
Applets are written in a language different from the scripting or HTML language that invokes them. The applet is written in a compiled language, whereas the scripting language of the container is an interpreted language, hence the greater performance or functionality of the applet. Unlike a subroutine, a complete web component can be implemented as an applet.
Java applets
A Java applet is a Java program that is launched from HTML and run in a web browser. It takes its code from a server and runs in a web browser. It can provide web applications with interactive features that cannot be provided by HTML. Since Java's bytecode is platform-independent, Java applets can be executed by browsers running under many platforms, including Windows, Unix, macOS, and Linux. When a Java technology-enabled web browser processes a page that contains an applet, the applet's code is transferred to the client's system and executed by the browser's Java virtual machine. An HTML page references an applet either via the deprecated applet tag or via its replacement, the object tag.
Security
Recent developments in the coding of applications, including mobile and embedded systems, have raised awareness of the security of applets.
Open platform applets
Applets in an open platform environment should provide secure interactions between different applications. A compositional approach can be used to provide security for open platform applets. Advanced compositional verification methods have been developed for secure applet interactions.
Java applets
A Java applet contains different security models: unsigned Java applet security, signed Java applet security, and self-signed Java applet security.
Web-based applets
In an applet-enabled web browser, many methods can be used to provide security against malicious applets. A malicious applet can infect a computer system in many ways, including denial of service, invasion of privacy, and annoyance. A typical solution for malicious applets is to have the web browser monitor applets' activities, enabling the manual or automatic stopping of malicious applets.
See also
Application posture
Bookmarklet
Java applet
Widget engine
Abstract Window Toolkit
|
https://en.wikipedia.org/wiki/Area
|
Area is the measure of a region's size on a surface. The area of a plane region or plane area refers to the area of a shape or planar lamina, while surface area refers to the area of an open surface or the boundary of a three-dimensional object. Area can be understood as the amount of material with a given thickness that would be necessary to fashion a model of the shape, or the amount of paint necessary to cover the surface with a single coat. It is the two-dimensional analogue of the length of a curve (a one-dimensional concept) or the volume of a solid (a three-dimensional concept).
Two different regions may have the same area (as in squaring the circle); by synecdoche, "area" sometimes is used to refer to the region, as in a "polygonal area".
The area of a shape can be measured by comparing the shape to squares of a fixed size. In the International System of Units (SI), the standard unit of area is the square metre (written as m2), which is the area of a square whose sides are one metre long. A shape with an area of three square metres would have the same area as three such squares. In mathematics, the unit square is defined to have area one, and the area of any other shape or surface is a dimensionless real number.
There are several well-known formulas for the areas of simple shapes such as triangles, rectangles, and circles. Using these formulas, the area of any polygon can be found by dividing the polygon into triangles. For shapes with curved boundary, calculus is usually required to compute the area. Indeed, the problem of determining the area of plane figures was a major motivation for the historical development of calculus.
For a solid shape such as a sphere, cone, or cylinder, the area of its boundary surface is called the surface area. Formulas for the surface areas of simple shapes were computed by the ancient Greeks, but computing the surface area of a more complicated shape usually requires multivariable calculus.
Area plays an important role in modern mathematics. In addition to its obvious importance in geometry and calculus, area is related to the definition of determinants in linear algebra, and is a basic property of surfaces in differential geometry. In analysis, the area of a subset of the plane is defined using Lebesgue measure, though not every subset is measurable. In general, area in higher mathematics is seen as a special case of volume for two-dimensional regions.
Area can be defined through the use of axioms, defining it as a function of a collection of certain plane figures to the set of real numbers. It can be proved that such a function exists.
Formal definition
An approach to defining what is meant by "area" is through axioms. "Area" can be defined as a function from a collection M of special kinds of plane figures (termed measurable sets) to the set of real numbers, which satisfies the following properties:
For all S in M, a(S) ≥ 0.
If S and T are in M then so are S ∪ T and S ∩ T, and also a(S ∪ T) = a(S) + a(T) − a(S ∩ T).
If S and T are in M with S ⊆ T then T − S is in M and a(T − S) = a(T) − a(S).
If a set S is in M and S is congruent to T then T is also in M and a(T) = a(S).
Every rectangle R is in M. If the rectangle has length h and breadth k then a(R) = hk.
Let Q be a set enclosed between two step regions S and T. A step region is formed from a finite union of adjacent rectangles resting on a common base, i.e. S ⊆ Q ⊆ T. If there is a unique number c such that a(S) ≤ c ≤ a(T) for all such step regions S and T, then a(Q) = c.
It can be proved that such an area function actually exists.
Units
Every unit of length has a corresponding unit of area, namely the area of a square with the given side length. Thus areas can be measured in square metres (m2), square centimetres (cm2), square millimetres (mm2), square kilometres (km2), square feet (ft2), square yards (yd2), square miles (mi2), and so forth. Algebraically, these units can be thought of as the squares of the corresponding length units.
The SI unit of area is the square metre, which is considered an SI derived unit.
Conversions
Calculation of the area of a square whose length and width are 1 metre would be:
1 metre × 1 metre = 1 m2
and so, a rectangle with different sides (say length of 3 metres and width of 2 metres) would have an area in square units that can be calculated as:
3 metres × 2 metres = 6 m2. This is equivalent to 6 million square millimetres. Other useful conversions are:
1 square kilometre = 1,000,000 square metres
1 square metre = 10,000 square centimetres = 1,000,000 square millimetres
1 square centimetre = 100 square millimetres.
Non-metric units
In non-metric units, the conversion between two square units is the square of the conversion between the corresponding length units (a short numerical sketch of this rule follows the lists below). For example, since
1 foot = 12 inches,
the relationship between square feet and square inches is
1 square foot = 144 square inches,
where 144 = 12² = 12 × 12. Similarly:
1 square yard = 9 square feet
1 square mile = 3,097,600 square yards = 27,878,400 square feet
In addition, conversion factors include:
1 square inch = 6.4516 square centimetres
1 square foot = 0.09290304 square metres
1 square yard = 0.83612736 square metres
1 square mile = 2.589988110336 square kilometres
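Because square-unit factors are just squared length factors, tables like the ones above can be derived from the length definitions alone. A small sketch (the unit dictionary below is mine, using the standard metric definitions of the inch, foot, and yard):

```python
# Metres per unit of length, by definition of the international units.
length_in_metres = {"inch": 0.0254, "foot": 0.3048, "yard": 0.9144}

def square_metres(unit: str) -> float:
    """Square metres per square <unit>: the square of the length factor."""
    return length_in_metres[unit] ** 2

print(square_metres("foot") / square_metres("inch"))  # ~144 sq in per sq ft
print(square_metres("yard") / square_metres("foot"))  # ~9 sq ft per sq yd
```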
Other units including historical
There are several other common units for area. The are was the original unit of area in the metric system, with:
1 are = 100 square metres
Though the are has fallen out of use, the hectare is still commonly used to measure land:
1 hectare = 100 ares = 10,000 square metres = 0.01 square kilometres
Other uncommon metric units of area include the tetrad, the hectad, and the myriad.
The acre is also commonly used to measure land areas, where
1 acre = 4,840 square yards = 43,560 square feet.
An acre is approximately 40% of a hectare.
On the atomic scale, area is measured in units of barns, such that:
1 barn = 10⁻²⁸ square metres.
The barn is commonly used in describing the cross-sectional area of interaction in nuclear physics.
In South Asia (mainly India), although the countries use SI units officially, many traditional units are still in use. Each administrative division has its own area units, some of which share names but have different values. There is no official consensus about the values of the traditional units, so conversions between SI units and traditional units may give different results depending on which reference is used.
Some traditional South Asian units that have fixed value:
1 Killa = 1 acre
1 Ghumaon = 1 acre
1 Kanal = 0.125 acre (1 acre = 8 kanal)
1 Decimal = 48.4 square yards
1 Chatak = 180 square feet
History
Circle area
In the 5th century BCE, Hippocrates of Chios was the first to show that the area of a disk (the region enclosed by a circle) is proportional to the square of its diameter, as part of his quadrature of the lune of Hippocrates, but did not identify the constant of proportionality. Eudoxus of Cnidus, in the 4th century BCE, also found that the area of a disk is proportional to its radius squared.
Subsequently, Book I of Euclid's Elements dealt with equality of areas between two-dimensional figures. The mathematician Archimedes used the tools of Euclidean geometry to show that the area inside a circle is equal to that of a right triangle whose base has the length of the circle's circumference and whose height equals the circle's radius, in his book Measurement of a Circle. (The circumference is 2πr, and the area of a triangle is half the base times the height, yielding the area πr² for the disk.) Archimedes approximated the value of π (and hence the area of a unit-radius circle) with his doubling method, in which he inscribed a regular triangle in a circle and noted its area, then doubled the number of sides to give a regular hexagon, then repeatedly doubled the number of sides as the polygon's area got closer and closer to that of the circle (and did the same with circumscribed polygons).
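The doubling idea translates directly into a short computation. A sketch of the same scheme (starting, for simplicity, from the inscribed regular hexagon rather than the triangle), in which the areas of the inscribed polygons increase toward π:

```python
import math

# Inscribe a regular hexagon in a unit circle: 6 chords of length 1.
n, s = 6, 1.0
for _ in range(12):
    # The n-gon consists of n isosceles triangles with base s (a chord)
    # and height sqrt(1 - (s/2)^2), the distance from centre to chord.
    area = n * 0.5 * s * math.sqrt(1 - (s / 2) ** 2)
    print(n, area)
    s = math.sqrt(2 - math.sqrt(4 - s * s))  # chord length after doubling sides
    n *= 2
# 6 2.598..., 12 3.0, 24 3.105..., ... approaching pi from below
```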
Triangle area
Quadrilateral area
In the 7th century CE, Brahmagupta developed a formula, now known as Brahmagupta's formula, for the area of a cyclic quadrilateral (a quadrilateral inscribed in a circle) in terms of its sides. In 1842, the German mathematicians Carl Anton Bretschneider and Karl Georg Christian von Staudt independently found a formula, known as Bretschneider's formula, for the area of any quadrilateral.
General polygon area
The development of Cartesian coordinates by René Descartes in the 17th century allowed the development of the surveyor's formula for the area of any polygon with known vertex locations by Gauss in the 19th century.
Areas determined using calculus
The development of integral calculus in the late 17th century provided tools that could subsequently be used for computing more complicated areas, such as the area of an ellipse and the surface areas of various curved three-dimensional objects.
Area formulas
Polygon formulas
For a non-self-intersecting (simple) polygon, the Cartesian coordinates (xi, yi) (i = 0, 1, ..., n−1) of whose n vertices are known, the area is given by the surveyor's formula:
A = ½ |Σ (xi yi+1 − xi+1 yi)|, summing over i = 0, ..., n−1,
where when i = n−1, the index i + 1 is taken modulo n and so refers to 0.
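A direct transcription of the surveyor's formula (my own sketch; the rectangle test case is illustrative):

```python
def polygon_area(vertices):
    """Surveyor's (shoelace) formula for a simple polygon.

    `vertices` is a list of (x, y) pairs in order around the polygon;
    index i + 1 wraps around to 0 after the last vertex, as in the text.
    """
    n = len(vertices)
    twice_area = 0.0
    for i in range(n):
        x_i, y_i = vertices[i]
        x_next, y_next = vertices[(i + 1) % n]
        twice_area += x_i * y_next - x_next * y_i
    return abs(twice_area) / 2.0

print(polygon_area([(0, 0), (4, 0), (4, 3), (0, 3)]))  # 12.0 for a 4x3 rectangle
```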
Rectangles
The most basic area formula is the formula for the area of a rectangle. Given a rectangle with length l and width w, the formula for the area is:
A = lw (rectangle).
That is, the area of the rectangle is the length multiplied by the width. As a special case, as l = w in the case of a square, the area of a square with side length s is given by the formula:
A = s² (square).
The formula for the area of a rectangle follows directly from the basic properties of area, and is sometimes taken as a definition or axiom. On the other hand, if geometry is developed before arithmetic, this formula can be used to define multiplication of real numbers.
Dissection, parallelograms, and triangles
Most other simple formulas for area follow from the method of dissection.
This involves cutting a shape into pieces, whose areas must sum to the area of the original shape.
For example, any parallelogram can be subdivided into a trapezoid and a right triangle; if the triangle is moved to the other side of the trapezoid, the resulting figure is a rectangle. It follows that the area of a parallelogram with base b and height h is the same as the area of the corresponding rectangle:
A = bh (parallelogram).
However, the same parallelogram can also be cut along a diagonal into two congruent triangles. It follows that the area of each triangle is half the area of the parallelogram:
A = ½bh (triangle).
Similar arguments can be used to find area formulas for the trapezoid as well as more complicated polygons.
Area of curved shapes
Circles
The formula for the area of a circle (more properly called the area enclosed by a circle or the area of a disk) is based on a similar method. Given a circle of radius r, it is possible to partition the circle into sectors. Each sector is approximately triangular in shape, and the sectors can be rearranged to form an approximate parallelogram. The height of this parallelogram is r, and the width is half the circumference of the circle, or πr. Thus, the total area of the circle is πr²:
A = πr² (circle).
Though the dissection used in this formula is only approximate, the error becomes smaller and smaller as the circle is partitioned into more and more sectors. The limit of the areas of the approximate parallelograms is exactly πr², which is the area of the circle.
This argument is actually a simple application of the ideas of calculus. In ancient times, the method of exhaustion was used in a similar way to find the area of the circle, and this method is now recognized as a precursor to integral calculus. Using modern methods, the area of a circle can be computed using a definite integral:
A = 2 ∫ from −r to r of √(r² − x²) dx = πr².
Ellipses
The formula for the area enclosed by an ellipse is related to the formula of a circle; for an ellipse with semi-major and semi-minor axes a and b the formula is:
A = πab.
Non-planar surface area
Most basic formulas for surface area can be obtained by cutting surfaces and flattening them out (see: developable surfaces). For example, if the side surface of a cylinder (or any prism) is cut lengthwise, the surface can be flattened out into a rectangle. Similarly, if a cut is made along the side of a cone, the side surface can be flattened out into a sector of a circle, and the resulting area computed.
The formula for the surface area of a sphere is more difficult to derive: because a sphere has nonzero Gaussian curvature, it cannot be flattened out. The formula for the surface area of a sphere was first obtained by Archimedes in his work On the Sphere and Cylinder. The formula is:
A = 4πr² (sphere),
where r is the radius of the sphere. As with the formula for the area of a circle, any derivation of this formula inherently uses methods similar to calculus.
General formulas
Areas of 2-dimensional figures
A triangle: ½Bh, where B is any side and h is the distance from the line on which B lies to the other vertex of the triangle. This formula can be used if the height h is known. If the lengths of the three sides are known, then Heron's formula can be used: √(s(s − a)(s − b)(s − c)), where a, b, c are the sides of the triangle and s = ½(a + b + c) is half of its perimeter. If an angle and its two included sides are given, the area is ½ab sin(C), where C is the given angle and a and b are its included sides. If the triangle is graphed on a coordinate plane, a matrix can be used, simplifying to the absolute value of ½(x1(y2 − y3) + x2(y3 − y1) + x3(y1 − y2)). This formula is also known as the shoelace formula and is an easy way to solve for the area of a coordinate triangle by substituting the 3 points (x1, y1), (x2, y2), and (x3, y3). The shoelace formula can also be used to find the areas of other polygons when their vertices are known. Another approach for a coordinate triangle is to use calculus to find the area. (A short numerical check of these formulas follows this list.)
A simple polygon constructed on a grid of equal-distanced points (i.e., points with integer coordinates) such that all the polygon's vertices are grid points: i + b/2 − 1, where i is the number of grid points inside the polygon and b is the number of boundary points. This result is known as Pick's theorem.
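A minimal numerical check of two of the formulas above (the 3-4-5 triangle and a 4×3 lattice rectangle are my own test cases):

```python
import math
from math import gcd

def heron(a: float, b: float, c: float) -> float:
    """Triangle area from its three side lengths via Heron's formula."""
    s = (a + b + c) / 2  # half the perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

print(heron(3, 4, 5))  # 6.0 for the 3-4-5 right triangle

# Pick's theorem, A = i + b/2 - 1, checked on a 4x3 lattice rectangle.
vertices = [(0, 0), (4, 0), (4, 3), (0, 3)]
# b: an edge from (x0, y0) to (x1, y1) contributes gcd(|dx|, |dy|) boundary steps.
b = sum(gcd(abs(x1 - x0), abs(y1 - y0))
        for (x0, y0), (x1, y1) in zip(vertices, vertices[1:] + vertices[:1]))
# i: brute-force count of strictly interior lattice points.
i = sum(1 for x in range(1, 4) for y in range(1, 3))
print(i + b / 2 - 1)  # 12.0, the rectangle's area
```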
Area in calculus
The area between a positive-valued curve and the horizontal axis, measured between two values a and b (b is defined as the larger of the two values) on the horizontal axis, is given by the integral from a to b of the function that represents the curve:
A = ∫ from a to b of f(x) dx.
The area between the graphs of two functions is equal to the integral of one function, f(x), minus the integral of the other function, g(x):
A = ∫ from a to b of (f(x) − g(x)) dx,
where f(x) is the curve with the greater y-value.
An area bounded by a function r = r(θ) expressed in polar coordinates is:
A = ½ ∫ r² dθ.
The area enclosed by a parametric curve u(t) = (x(t), y(t)) with matching endpoints is given by the line integrals:
A = ∮ x dy = −∮ y dx = ½ ∮ (x dy − y dx),
or the z-component of ½ ∮ (u × du).
(For details, see Green's theorem.) This is the principle of the planimeter mechanical device.
Bounded area between two quadratic functions
To find the bounded area between two quadratic functions, we subtract one from the other to write the difference as
f(x) − g(x) = ax² + bx + c,
where f(x) is the quadratic upper bound and g(x) is the quadratic lower bound. Define the discriminant of f(x) − g(x) as
Δ = b² − 4ac.
By simplifying the integral formula between the graphs of two functions (as given in the section above) and using Vieta's formula, we can obtain
A = Δ√Δ / (6a²) = Δ^(3/2) / (6a²).
The above remains valid if one of the bounding functions is linear instead of quadratic.
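A quick numerical cross-check of the closed form against a midpoint Riemann sum (the particular coefficients are my own example):

```python
import math

a, b, c = -1.0, 3.0, -2.0   # f - g = -(x - 1)(x - 2), a sample difference
disc = b * b - 4 * a * c    # discriminant of f - g

closed_form = disc ** 1.5 / (6 * a * a)

# Integrate |f - g| between the two roots with a midpoint Riemann sum.
x1 = (-b + math.sqrt(disc)) / (2 * a)
x2 = (-b - math.sqrt(disc)) / (2 * a)
x1, x2 = min(x1, x2), max(x1, x2)
n = 10_000
dx = (x2 - x1) / n
riemann = sum(abs(a * x * x + b * x + c) * dx
              for x in (x1 + (k + 0.5) * dx for k in range(n)))

print(closed_form, riemann)  # both approximately 1/6
```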
Surface area of 3-dimensional figures
Cone: πr(r + √(r² + h²)), where r is the radius of the circular base and h is the height. This can also be rewritten as πr² + πrl or πr(r + l), where r is the radius and l is the slant height of the cone. πr² is the base area while πrl is the lateral surface area of the cone.
Cube: 6s², where s is the length of an edge.
Cylinder: 2πr(r + h), where r is the radius of a base and h is the height. The 2πr can also be rewritten as πd, where d is the diameter.
Prism: 2B + Ph, where B is the area of a base, P is the perimeter of a base, and h is the height of the prism.
Pyramid: B + PL/2, where B is the area of the base, P is the perimeter of the base, and L is the length of the slant.
Rectangular prism: 2(lw + lh + wh), where l is the length, w is the width, and h is the height.
General formula for surface area
The general formula for the surface area of the graph of a continuously differentiable function z = f(x, y), where (x, y) ∈ D and D is a region in the xy-plane with smooth boundary, is:
A = ∬ over D of √(1 + (∂f/∂x)² + (∂f/∂y)²) dx dy.
An even more general formula for the area of the graph of a parametric surface in the vector form r = r(u, v), where r is a continuously differentiable vector function of (u, v) ∈ D, is:
A = ∬ over D of |∂r/∂u × ∂r/∂v| du dv.
List of formulas
The above calculations show how to find the areas of many common shapes.
The areas of irregular (and thus arbitrary) polygons can be calculated using the "Surveyor's formula" (shoelace formula).
Relation of area to perimeter
The isoperimetric inequality states that, for a closed curve of length L (so the region it encloses has perimeter L) and for the area A of the region that it encloses, 4πA ≤ L²,
and equality holds if and only if the curve is a circle. Thus a circle has the largest area of any closed figure with a given perimeter.
At the other extreme, a figure with given perimeter L could have an arbitrarily small area, as illustrated by a rhombus that is "tipped over" arbitrarily far so that two of its angles are arbitrarily close to 0° and the other two are arbitrarily close to 180°.
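The inequality is easy to check numerically: the isoperimetric ratio 4πA/L² equals 1 for a circle and is strictly smaller for everything else. A short sketch (the sample shapes are my own choices):

```python
import math

# Isoperimetric ratio 4*pi*A / L^2: exactly 1 only for the circle.
shapes = {
    "circle (r = 1)": (math.pi, 2 * math.pi),  # (area, perimeter)
    "unit square":    (1.0, 4.0),
    "3-4-5 triangle": (6.0, 12.0),
}
for name, (area, perimeter) in shapes.items():
    print(name, 4 * math.pi * area / perimeter ** 2)
# circle 1.0, square ~0.785, triangle ~0.524 -- all at most 1, as required
```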
For a circle, the ratio of the area to the circumference (the term for the perimeter of a circle) equals half the radius r. This can be seen from the area formula πr² and the circumference formula 2πr.
The area of a regular polygon is half its perimeter times the apothem (where the apothem is the distance from the center to the nearest point on any side).
Fractals
Doubling the edge lengths of a polygon multiplies its area by four, which is two (the ratio of the new to the old side length) raised to the power of two (the dimension of the space the polygon resides in). But if the one-dimensional lengths of a fractal drawn in two dimensions are all doubled, the spatial content of the fractal scales by a power of two that is not necessarily an integer. This power is called the fractal dimension of the fractal.
Area bisectors
There are infinitely many lines that bisect the area of a triangle. Three of them are the medians of the triangle (which connect the sides' midpoints with the opposite vertices), and these are concurrent at the triangle's centroid; indeed, they are the only area bisectors that go through the centroid. Any line through a triangle that splits both the triangle's area and its perimeter in half goes through the triangle's incenter (the center of its incircle). There are either one, two, or three of these for any given triangle.
Any line through the midpoint of a parallelogram bisects the area.
All area bisectors of a circle or other ellipse go through the center, and any chords through the center bisect the area. In the case of a circle they are the diameters of the circle.
Optimization
Given a wire contour, the surface of least area spanning ("filling") it is a minimal surface. Familiar examples include soap bubbles.
The question of the filling area of the Riemannian circle remains open.
The circle has the largest area of any two-dimensional object having the same perimeter.
A cyclic polygon (one inscribed in a circle) has the largest area of any polygon with a given number of sides of the same lengths.
A version of the isoperimetric inequality for triangles states that the triangle of greatest area among all those with a given perimeter is equilateral.
The triangle of largest area of all those inscribed in a given circle is equilateral; and the triangle of smallest area of all those circumscribed around a given circle is equilateral.
The ratio of the area of the incircle to the area of an equilateral triangle, π/(3√3), is larger than that of any non-equilateral triangle.
The ratio of the area to the square of the perimeter of an equilateral triangle, 1/(12√3), is larger than that for any other triangle.
See also
Brahmagupta quadrilateral, a cyclic quadrilateral with integer sides, integer diagonals, and integer area.
Equiareal map
Heronian triangle, a triangle with integer sides and integer area.
List of triangle inequalities
One-seventh area triangle, an inner triangle with one-seventh the area of the reference triangle.
Routh's theorem, a generalization of the one-seventh area triangle.
Orders of magnitude—A list of areas by size.
Derivation of the formula of a pentagon
Planimeter, an instrument for measuring small areas, e.g. on maps.
Area of a convex quadrilateral
Robbins pentagon, a cyclic pentagon whose side lengths and area are all rational numbers.
|
https://en.wikipedia.org/wiki/Anisotropy
|
Anisotropy is the structural property of non-uniformity in different directions, as opposed to isotropy. An anisotropic object or pattern has properties that differ according to the direction of measurement. For example, many materials exhibit very different properties when measured along different axes: physical or mechanical properties (absorbance, refractive index, conductivity, tensile strength, etc.).
An example of anisotropy is light coming through a polarizer. Another is wood, which is easier to split along its grain than across it because of the directional non-uniformity of the grain (the grain is the same in one direction, not all directions).
Fields of interest
Computer graphics
In the field of computer graphics, an anisotropic surface changes in appearance as it rotates about its geometric normal, as is the case with velvet.
Anisotropic filtering (AF) is a method of enhancing the image quality of textures on surfaces that are far away and steeply angled with respect to the point of view. Older techniques, such as bilinear and trilinear filtering, do not take into account the angle a surface is viewed from, which can result in aliasing or blurring of textures. Reducing detail in one direction more than another can easily reduce these effects.
Chemistry
A chemical anisotropic filter, as used to filter particles, is a filter with increasingly smaller interstitial spaces in the direction of filtration so that the proximal regions filter out larger particles and distal regions increasingly remove smaller particles, resulting in greater flow-through and more efficient filtration.
In fluorescence spectroscopy, the fluorescence anisotropy, calculated from the polarization properties of fluorescence from samples excited with plane-polarized light, is used, e.g., to determine the shape of a macromolecule. Anisotropy measurements reveal the average angular displacement of the fluorophore that occurs between absorption and subsequent emission of a photon.
In NMR spectroscopy, the orientation of nuclei with respect to the applied magnetic field determines their chemical shift. In this context, anisotropic systems refer to the electron distribution of molecules with abnormally high electron density, like the pi system of benzene. This abnormal electron density affects the applied magnetic field and causes the observed chemical shift to change.
Real-world imagery
Images of a gravity-bound or man-made environment are particularly anisotropic in the orientation domain, with more image structure located at orientations parallel with or orthogonal to the direction of gravity (vertical and horizontal).
Physics
Physicists from the University of California, Berkeley reported their detection of the cosmic anisotropy in the cosmic microwave background radiation in 1977. Their experiment demonstrated the Doppler shift caused by the movement of the Earth with respect to the early-Universe matter, the source of the radiation. Cosmic anisotropy has also been seen in the alignment of galaxies' rotation axes and the polarization angles of quasars.
Physicists use the term anisotropy to describe direction-dependent properties of materials. Magnetic anisotropy, for example, may occur in a plasma, so that its magnetic field is oriented in a preferred direction. Plasmas may also show "filamentation" (such as that seen in lightning or a plasma globe) that is directional.
An anisotropic liquid has the fluidity of a normal liquid, but its molecules have an average structural order relative to one another along the molecular axis, unlike water or chloroform, which contain no structural ordering of the molecules. Liquid crystals are examples of anisotropic liquids.
Some materials conduct heat isotropically, that is, independently of spatial orientation around the heat source. More commonly, heat conduction is anisotropic, which implies that detailed geometric modeling of the (typically diverse) materials being thermally managed is required. The materials used to transfer and reject heat from the heat source in electronics are often anisotropic.
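To make the direction dependence concrete, Fourier's law with a tensor conductivity, q = −K ∇T, gives a heat flux that is generally not antiparallel to the temperature gradient. The conductivity values below are invented for illustration:

```python
import numpy as np

# Hypothetical conductivity tensor (W/(m·K)) for a layered material:
# conduction within the x-y plane is far easier than across the layers.
K = np.diag([200.0, 200.0, 2.0])

grad_T = np.array([1.0, 0.0, 1.0])  # K/m, gradient tilted out of the plane
q = -K @ grad_T                     # anisotropic Fourier's law
print(q)                            # [-200. -0. -2.]: the flux hugs the layers
```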
Many crystals are anisotropic to light ("optical anisotropy"), and exhibit properties such as birefringence. Crystal optics describes light propagation in these media. An "axis of anisotropy" is defined as the axis along which isotropy is broken (or an axis of symmetry, such as normal to crystalline layers). Some materials can have multiple such optical axes.
Geophysics and geology
Seismic anisotropy is the variation of seismic wavespeed with direction. Seismic anisotropy is an indicator of long range order in a material, where features smaller than the seismic wavelength (e.g., crystals, cracks, pores, layers, or inclusions) have a dominant alignment. This alignment leads to a directional variation of elasticity wavespeed. Measuring the effects of anisotropy in seismic data can provide important information about processes and mineralogy in the Earth; significant seismic anisotropy has been detected in the Earth's crust, mantle, and inner core.
Geological formations with distinct layers of sedimentary material can exhibit electrical anisotropy: electrical conductivity in one direction (e.g. parallel to a layer) differs from that in another (e.g. perpendicular to a layer). This property is used in the gas and oil exploration industry to identify hydrocarbon-bearing sands in sequences of sand and shale. Hydrocarbon-bearing sands have high resistivity (low conductivity), whereas shales have lower resistivity. Formation evaluation instruments measure this conductivity or resistivity, and the results are used to help find oil and gas in wells. The mechanical anisotropy measured for sedimentary rocks such as coal and shale can change with corresponding changes in their surface properties, such as sorption, when gases are produced from coal and shale reservoirs.
The hydraulic conductivity of aquifers is often anisotropic for the same reason. When calculating groundwater flow to drains or to wells, the difference between horizontal and vertical permeability must be taken into account; otherwise the results may be subject to error.
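For the common case of horizontal layering, the standard result is that the equivalent horizontal conductivity is the thickness-weighted arithmetic mean of the layer values, while the equivalent vertical conductivity is the thickness-weighted harmonic mean, so a thin low-permeability layer dominates vertical flow. A short sketch with invented layer values:

```python
def equivalent_conductivities(thicknesses, conductivities):
    """Equivalent horizontal (arithmetic mean) and vertical (harmonic mean)
    hydraulic conductivity of a stack of horizontal layers."""
    total = sum(thicknesses)
    k_h = sum(b * k for b, k in zip(thicknesses, conductivities)) / total
    k_v = total / sum(b / k for b, k in zip(thicknesses, conductivities))
    return k_h, k_v

# Made-up example: 10 m of sand (5 m/day) over 2 m of clay (0.001 m/day).
k_h, k_v = equivalent_conductivities([10.0, 2.0], [5.0, 0.001])
print(round(k_h, 2), round(k_v, 4))  # 4.17 m/day vs. 0.006 m/day
```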
Most common rock-forming minerals are anisotropic, including quartz and feldspar. Anisotropy in minerals is most reliably seen in their optical properties. An example of an isotropic mineral is garnet.
Medical acoustics
Anisotropy is also a well-known property in medical ultrasound imaging describing a different resulting echogenicity of soft tissues, such as tendons, when the angle of the transducer is changed. Tendon fibers appear hyperechoic (bright) when the transducer is perpendicular to the tendon, but can appear hypoechoic (darker) when the transducer is angled obliquely. This can be a source of interpretation error for inexperienced practitioners.
Materials science and engineering
Anisotropy, in materials science, is a material's directional dependence of a physical property. This is a critical consideration for materials selection in engineering applications. A material with physical properties that are symmetric about an axis normal to a plane of isotropy is called a transversely isotropic material. Tensor descriptions of material properties can be used to determine the directional dependence of that property. For a monocrystalline material, anisotropy is associated with the crystal symmetry in the sense that more symmetric crystal types have fewer independent coefficients in the tensor description of a given property. When a material is polycrystalline, the directional dependence of properties is often related to the processing techniques it has undergone. A material with randomly oriented grains will be isotropic, whereas materials with texture will often be anisotropic. Textured materials are often the result of processing techniques like cold rolling, wire drawing, and heat treatment.
Mechanical properties of materials, such as Young's modulus, ductility, yield strength, and high-temperature creep rate, are often dependent on the direction of measurement. Fourth-rank tensor properties, like the elastic constants, are anisotropic even for materials with cubic symmetry. The Young's modulus relates stress and strain when an isotropic material is elastically deformed; to describe elasticity in an anisotropic material, stiffness (or compliance) tensors are used instead.
In metals, anisotropic elasticity behavior is present in all single crystals with three independent coefficients for cubic crystals, for example. For face-centered cubic materials such as nickel and copper, the stiffness is highest along the <111> direction, normal to the close-packed planes, and smallest parallel to <100>. Tungsten is so nearly isotropic at room temperature that it can be considered to have only two stiffness coefficients; aluminium is another metal that is nearly isotropic.
For an isotropic material, G = E / (2(1 + ν)), where G is the shear modulus, E is the Young's modulus, and ν is the material's Poisson's ratio. Therefore, for cubic materials, we can think of anisotropy, a_r, as the ratio between the empirically determined shear modulus for the cubic material and its (isotropic) equivalent:
a_r = 2C44 / (C11 − C12).
The latter expression is known as the Zener ratio, where the C_IJ refer to elastic constants in Voigt (vector-matrix) notation. For an isotropic material, the ratio is one.
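As a worked example, the ratio follows directly from three elastic constants; the copper values below are typical handbook figures quoted for illustration and should be treated as approximate:

```python
def zener_ratio(c11: float, c12: float, c44: float) -> float:
    """Zener anisotropy ratio a_r = 2·C44 / (C11 - C12) of a cubic crystal.
    It equals 1 for an elastically isotropic material."""
    return 2.0 * c44 / (c11 - c12)

# Approximate room-temperature constants for copper, in GPa.
print(round(zener_ratio(c11=168.4, c12=121.4, c44=75.4), 2))  # ≈ 3.21
# Tungsten's constants give a ratio very close to 1, matching the text.
```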
Limitation of the Zener ratio to cubic materials is waived in the tensorial anisotropy index A^T, which takes into consideration all the 27 components of the fully anisotropic stiffness tensor. It is composed of two major parts, A^I and A^A, the former referring to components existing in the cubic tensor and the latter in the anisotropic tensor, so that A^T = A^I + A^A. The first component includes the modified Zener ratio and additionally accounts for directional differences in the material, which exist in orthotropic material, for instance. The second component of this index covers the influence of stiffness coefficients that are nonzero only for non-cubic materials and remains zero otherwise.
Fiber-reinforced or layered composite materials exhibit anisotropic mechanical properties, due to orientation of the reinforcement material. In many fiber-reinforced composites like carbon fiber or glass fiber based composites, the weave of the material (e.g. unidirectional or plain weave) can determine the extent of the anisotropy of the bulk material. The tunability of orientation of the fibers allows for application-based designs of composite materials, depending on the direction of stresses applied onto the material.
Amorphous materials such as glass and polymers are typically isotropic. Due to the highly randomized orientation of macromolecules in polymeric materials, polymers are in general described as isotropic. However, mechanically gradient polymers can be engineered to have directionally dependent properties through processing techniques or the introduction of anisotropy-inducing elements. Researchers have built composite materials with aligned fibers and voids to generate anisotropic hydrogels, in order to mimic hierarchically ordered biological soft matter. 3D printing, especially fused deposition modeling (FDM), can introduce anisotropy into printed parts, because FDM extrudes and prints thermoplastic materials layer by layer. This creates materials that are strong when tensile stress is applied parallel to the layers and weak when it is applied perpendicular to them.
Microfabrication
Anisotropic etching techniques (such as deep reactive-ion etching) are used in microfabrication processes to create well-defined microscopic features with a high aspect ratio. These features are commonly used in MEMS (microelectromechanical systems) and microfluidic devices, where the anisotropy of the features is needed to impart desired optical, electrical, or physical properties to the device. Anisotropic etching can also refer to certain chemical etchants used to etch a given material preferentially over certain crystallographic planes (e.g., KOH etching of silicon [100] produces pyramid-like structures).
Neuroscience
Diffusion tensor imaging is an MRI technique that involves measuring the fractional anisotropy of the random motion (Brownian motion) of water molecules in the brain. Water molecules located in fiber tracts are more likely to move anisotropically, since they are restricted in their movement (they move more in the dimension parallel to the fiber tract than in the two dimensions orthogonal to it), whereas water molecules dispersed in the rest of the brain have less restricted movement and therefore display more isotropy. This difference in fractional anisotropy is exploited to create a map of the fiber tracts in the brain of the individual.
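The anisotropy value itself is a scalar computed from the three eigenvalues λ1, λ2, λ3 of the diffusion tensor, commonly as FA = sqrt(1/2) · sqrt((λ1−λ2)² + (λ2−λ3)² + (λ3−λ1)²) / sqrt(λ1² + λ2² + λ3²). A minimal sketch with made-up eigenvalues:

```python
import math

def fractional_anisotropy(l1: float, l2: float, l3: float) -> float:
    """Fractional anisotropy of a diffusion tensor from its eigenvalues:
    0 for perfectly isotropic diffusion, 1 for diffusion along one axis."""
    num = math.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return math.sqrt(0.5) * num / den

print(fractional_anisotropy(1.0, 1.0, 1.0))  # 0.0: unrestricted, isotropic
print(round(fractional_anisotropy(1.7, 0.3, 0.2), 2))  # ≈ 0.84: fiber tract
```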
Remote sensing and radiative transfer modeling
Radiance fields (see Bidirectional reflectance distribution function (BRDF)) from a reflective surface are often not isotropic in nature. This makes the total energy reflected from any scene a difficult quantity to calculate. In remote sensing applications, anisotropy functions can be derived for specific scenes, immensely simplifying the calculation of the net reflectance or (thereby) the net irradiance of a scene.
For example, let the BRDF be γ(Ω_i, Ω_v), where Ω_i denotes the incident direction and Ω_v the viewing direction (as if from a satellite or other instrument), and let P(Ω_i) be the planar albedo, which represents the total reflectance from the scene for that incident geometry. The anisotropy function A can then be defined as the ratio A(Ω_i, Ω_v) = γ(Ω_i, Ω_v) / P(Ω_i).
It is of interest because, with knowledge of the anisotropy function as defined, a measurement of the BRDF from a single viewing direction (say, Ω_v) yields a measure of the total scene reflectance (planar albedo) for that specific incident geometry (say, Ω_i).
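A sketch of that workflow: divide one directional reflectance measurement by the anisotropy function's value for the same geometry to recover the planar albedo. Both the fitted function and the measured value below are invented for illustration:

```python
import math

def anisotropy_function(view_zenith_deg: float) -> float:
    """Hypothetical fitted anisotropy factor A(Ω_i, Ω_v) for one scene type
    and one fixed incident geometry; A = 1 everywhere would mean an
    isotropic (Lambertian) scene."""
    return 1.0 + 0.3 * math.cos(math.radians(view_zenith_deg))

measured_reflectance = 0.26  # single off-nadir BRDF measurement (invented)
view_zenith = 30.0           # degrees

planar_albedo = measured_reflectance / anisotropy_function(view_zenith)
print(round(planar_albedo, 3))  # ≈ 0.206, the estimated total reflectance
```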
See also
Circular symmetry
References
External links
"Overview of Anisotropy"
DoITPoMS Teaching and Learning Package: "Introduction to Anisotropy"
"Gauge, and knitted fabric generally, is an anisotropic phenomenon"
|
https://en.wikipedia.org/wiki/Antimatter
|
In modern physics, antimatter is defined as matter composed of the antiparticles (or "partners") of the corresponding particles in "ordinary" matter, and can be thought of as matter with reversed charge, parity, and time, known as CPT reversal. Antimatter occurs in natural processes like cosmic ray collisions and some types of radioactive decay, but only a tiny fraction of these have successfully been bound together in experiments to form antiatoms. Minuscule numbers of antiparticles can be generated at particle accelerators; however, total artificial production has been only a few nanograms. No macroscopic amount of antimatter has ever been assembled due to the extreme cost and difficulty of production and handling. Nonetheless, antimatter is an essential component of widely available applications related to beta decay, such as positron emission tomography, radiation therapy, and industrial imaging.
In theory, a particle and its antiparticle (for example, a proton and an antiproton) have the same mass, but opposite electric charge, and other differences in quantum numbers.
A collision between any particle and its antiparticle partner leads to their mutual annihilation, giving rise to various proportions of intense photons (gamma rays), neutrinos, and sometimes less-massive particle–antiparticle pairs. The majority of the total energy of annihilation emerges in the form of ionizing radiation. If surrounding matter is present, the energy content of this radiation will be absorbed and converted into other forms of energy, such as heat or light. The amount of energy released is usually proportional to the total mass of the collided matter and antimatter, in accordance with the notable mass–energy equivalence equation, E = mc^2.
Antiparticles bind with each other to form antimatter, just as ordinary particles bind to form normal matter. For example, a positron (the antiparticle of the electron) and an antiproton (the antiparticle of the proton) can form an antihydrogen atom. The nuclei of antihelium have been artificially produced, albeit with difficulty, and are the most complex anti-nuclei so far observed. Physical principles indicate that complex antimatter atomic nuclei are possible, as well as anti-atoms corresponding to the known chemical elements.
There is strong evidence that the observable universe is composed almost entirely of ordinary matter, as opposed to an equal mixture of matter and antimatter. This asymmetry of matter and antimatter in the visible universe is one of the great unsolved problems in physics. The process by which this inequality between matter and antimatter particles developed is called baryogenesis.
Definitions
Antimatter particles carry the same charge as matter particles, but of opposite sign. That is, an antiproton is negatively charged and an antielectron (positron) is positively charged. Neutrons do not carry a net charge, but their constituent quarks do. Protons and neutrons have a baryon number of +1, while antiprotons and antineutrons have a baryon number of –1. Similarly, electrons have a lepton number of +1, while that of positrons is –1. When a particle and its corresponding antiparticle collide, they are both converted into energy.
The term contra-terrene led to the initialism "C.T." and the science fiction term "seetee", as used in such novels as Seetee Ship.
Conceptual history
The idea of negative matter appears in past theories of matter that have now been abandoned. Using the once popular vortex theory of gravity, the possibility of matter with negative gravity was discussed by William Hicks in the 1880s. Between the 1880s and the 1890s, Karl Pearson proposed the existence of "squirts" and sinks of the flow of aether. The squirts represented normal matter and the sinks represented negative matter. Pearson's theory required a fourth dimension for the aether to flow from and into.
The term antimatter was first used by Arthur Schuster in two rather whimsical letters to Nature in 1898, in which he coined the term. He hypothesized antiatoms, as well as whole antimatter solar systems, and discussed the possibility of matter and antimatter annihilating each other. Schuster's ideas were not a serious theoretical proposal, merely speculation, and like the previous ideas, differed from the modern concept of antimatter in that it possessed negative gravity.
The modern theory of antimatter began in 1928, with a paper by Paul Dirac. Dirac realised that his relativistic version of the Schrödinger wave equation for electrons predicted the possibility of antielectrons. Although Dirac had laid the groundwork for the existence of these “antielectrons” he initially failed to pick up on the implications contained within his own equation. He freely gave the credit for that insight to J. Robert Oppenheimer, whose seminal paper “On the Theory of Electrons and Protons” (Feb 14th 1930) drew on Dirac’s equation and argued for the existence of a positively charged electron (a positron), which as a counterpart to the electron should have the same mass as the electron itself. This meant that it could not be, as Dirac had in fact suggested, a proton. Dirac further postulated the existence of antimatter in a 1931 paper which referred to the positron as an "anti-electron". These were discovered by Carl D. Anderson in 1932 and named positrons from "positive electron". Although Dirac did not himself use the term antimatter, its use follows on naturally enough from antielectrons, antiprotons, etc. A complete periodic table of antimatter was envisaged by Charles Janet in 1929.
The Feynman–Stueckelberg interpretation states that antimatter and antiparticles are regular particles traveling backward in time.
Notation
One way to denote an antiparticle is by adding a bar over the particle's symbol. For example, the proton and antiproton are denoted as p and p̄, respectively. The same rule applies if one were to address a particle by its constituent components. A proton is made up of quarks, so an antiproton must therefore be formed from antiquarks. Another convention is to distinguish particles by positive and negative electric charge. Thus, the electron and positron are denoted simply as e− and e+, respectively. To prevent confusion, however, the two conventions are never mixed.
Properties
Theorized anti-gravitational properties of antimatter are currently being tested at the AEGIS and ALPHA-g experiments at CERN. Research is needed to study the possible gravitational effects between matter and antimatter, and between antimatter and antimatter. However, such research is difficult, given that the two annihilate when they meet, along with the current difficulties of capturing and containing antimatter.
There are compelling theoretical reasons to believe that, aside from the fact that antiparticles have different signs on all charges (such as electric and baryon charges), matter and antimatter have exactly the same properties. This means a particle and its corresponding antiparticle must have identical masses and decay lifetimes (if unstable). It also implies that, for example, a star made up of antimatter (an "antistar") will shine just like an ordinary star. This idea was tested experimentally in 2016 by the ALPHA experiment, which measured the transition between the two lowest energy states of antihydrogen. The results, which are identical to that of hydrogen, confirmed the validity of quantum mechanics for antimatter.
On 27 September 2023, physicists reported studies which support the notion that antimatter particles behave in a similar way as normal matter in a gravitational field.
Origin and asymmetry
Most matter observable from the Earth seems to be made of matter rather than antimatter. If antimatter-dominated regions of space existed, the gamma rays produced in annihilation reactions along the boundary between matter and antimatter regions would be detectable.
Antiparticles are created everywhere in the universe where high-energy particle collisions take place. High-energy cosmic rays striking Earth's atmosphere (or any other matter in the Solar System) produce minute quantities of antiparticles in the resulting particle jets, which are immediately annihilated by contact with nearby matter. They may similarly be produced in regions like the center of the Milky Way and other galaxies, where very energetic celestial events occur (principally the interaction of relativistic jets with the interstellar medium). The presence of the resulting antimatter is detectable by the two gamma rays produced every time positrons annihilate with nearby matter. The frequency and wavelength of the gamma rays indicate that each carries 511 keV of energy (that is, the rest mass of an electron multiplied by c^2).
Observations by the European Space Agency's INTEGRAL satellite may explain the origin of a giant antimatter cloud surrounding the Galactic Center. The observations show that the cloud is asymmetrical and matches the pattern of X-ray binaries (binary star systems containing black holes or neutron stars), mostly on one side of the Galactic Center. While the mechanism is not fully understood, it is likely to involve the production of electron–positron pairs, as ordinary matter gains kinetic energy while falling into a stellar remnant.
Antimatter may exist in relatively large amounts in far-away galaxies due to cosmic inflation in the primordial time of the universe. Antimatter galaxies, if they exist, are expected to have the same chemistry and absorption and emission spectra as normal-matter galaxies, and their astronomical objects would be observationally identical, making them difficult to distinguish. NASA is trying to determine if such galaxies exist by looking for X-ray and gamma ray signatures of annihilation events in colliding superclusters.
In October 2017, scientists working on the BASE experiment at CERN reported a measurement of the antiproton magnetic moment to a precision of 1.5 parts per billion. It is consistent with the most precise measurement of the proton magnetic moment (also made by BASE in 2014), which supports the hypothesis of CPT symmetry. This measurement represents the first time that a property of antimatter is known more precisely than the equivalent property in matter.
Antimatter quantum interferometry has been first demonstrated in 2018 in the Positron Laboratory (L-NESS) of Rafael Ferragut in Como (Italy), by a group led by Marco Giammarchi.
Natural production
Positrons are produced naturally in β+ decays of naturally occurring radioactive isotopes (for example, potassium-40) and in interactions of gamma quanta (emitted by radioactive nuclei) with matter. Antineutrinos are another kind of antiparticle created by natural radioactivity (β− decay). Many different kinds of antiparticles are also produced by (and contained in) cosmic rays. In January 2011, research by the American Astronomical Society discovered antimatter (positrons) originating above thunderstorm clouds; positrons are produced in terrestrial gamma ray flashes created by electrons accelerated by strong electric fields in the clouds. Antiprotons have also been found to exist in the Van Allen Belts around the Earth by the PAMELA module.
Antiparticles are also produced in any environment with a sufficiently high temperature (mean particle energy greater than the pair production threshold). It is hypothesized that during the period of baryogenesis, when the universe was extremely hot and dense, matter and antimatter were continually produced and annihilated. The presence of remaining matter, and absence of detectable remaining antimatter, is called baryon asymmetry. The exact mechanism that produced this asymmetry during baryogenesis remains an unsolved problem. One of the necessary conditions for this asymmetry is the violation of CP symmetry, which has been experimentally observed in the weak interaction.
Recent observations indicate that black holes and neutron stars produce vast amounts of positron–electron plasma via their jets.
Observation in cosmic rays
Satellite experiments have found evidence of positrons and a few antiprotons in primary cosmic rays, amounting to less than 1% of the particles in primary cosmic rays. This antimatter cannot all have been created in the Big Bang, but is instead attributed to production by cyclic processes at high energies. For instance, electron–positron pairs may be formed in pulsars, as a magnetized neutron star's rotation cycle shears electron–positron pairs from the star's surface. Therein the antimatter forms a wind that crashes upon the ejecta of the progenitor supernovae. This weathering takes place as "the cold, magnetized relativistic wind launched by the star hits the non-relativistically expanding ejecta, a shock wave system forms in the impact: the outer one propagates in the ejecta, while a reverse shock propagates back towards the star." The former ejection of matter in the outer shock wave and the latter production of antimatter in the reverse shock wave are steps in a space weather cycle.
Preliminary results from the presently operating Alpha Magnetic Spectrometer (AMS-02) on board the International Space Station show that positrons in the cosmic rays arrive with no directionality, and with energies that range from 10 GeV to 250 GeV. In September 2014, new results with almost twice as much data were presented in a talk at CERN and published in Physical Review Letters. A new measurement of positron fraction up to 500 GeV was reported, showing that the positron fraction peaks at a maximum of about 16% of total electron+positron events, around an energy of 275 ± 32 GeV. At higher energies, up to 500 GeV, the ratio of positrons to electrons begins to fall again. The absolute flux of positrons also begins to fall before 500 GeV, but peaks at energies far higher than electron energies, which peak about 10 GeV. These results have been suggested to be due to positron production in annihilation events of massive dark matter particles.
Cosmic ray antiprotons also have a much higher energy than their normal-matter counterparts (protons). They arrive at Earth with a characteristic energy maximum of 2 GeV, indicating their production in a fundamentally different process from cosmic ray protons, which on average have only one-sixth of the energy.
There is an ongoing search for larger antimatter nuclei, such as antihelium nuclei (that is, anti-alpha particles), in cosmic rays. The detection of natural antihelium could imply the existence of large antimatter structures such as an antistar. A prototype of the AMS-02, designated AMS-01, was flown into space aboard the Space Shuttle Discovery on STS-91 in June 1998. By not detecting any antihelium at all, the AMS-01 established an upper limit of 1.1×10−6 for the antihelium to helium flux ratio. AMS-02 revealed in December 2016 that it had discovered a few signals consistent with antihelium nuclei amidst several billion helium nuclei. The result remains to be verified, and the team is currently trying to rule out contamination.
Artificial production
Positrons
Positrons were reported in November 2008 to have been generated by Lawrence Livermore National Laboratory in larger numbers than by any previous synthetic process. A laser drove electrons through a gold target's nuclei, which caused the incoming electrons to emit energy quanta that decayed into both matter and antimatter. Positrons were detected at a higher rate and in greater density than ever previously detected in a laboratory. Previous experiments made smaller quantities of positrons using lasers and paper-thin targets; newer simulations showed that short bursts of ultra-intense lasers and millimeter-thick gold are a far more effective source.
Antiprotons, antineutrons, and antinuclei
The existence of the antiproton was experimentally confirmed in 1955 by University of California, Berkeley physicists Emilio Segrè and Owen Chamberlain, for which they were awarded the 1959 Nobel Prize in Physics. An antiproton consists of two up antiquarks and one down antiquark. The properties of the antiproton that have been measured all match the corresponding properties of the proton, with the exception of the antiproton having opposite electric charge and magnetic moment from the proton. Shortly afterwards, in 1956, the antineutron was discovered in proton–proton collisions at the Bevatron (Lawrence Berkeley National Laboratory) by Bruce Cork and colleagues.
In addition to antibaryons, anti-nuclei consisting of multiple bound antiprotons and antineutrons have been created. These are typically produced at energies far too high to form antimatter atoms (with bound positrons in place of electrons). In 1965, a group of researchers led by Antonino Zichichi reported production of nuclei of antideuterium at the Proton Synchrotron at CERN. At roughly the same time, observations of antideuterium nuclei were reported by a group of American physicists at the Alternating Gradient Synchrotron at Brookhaven National Laboratory.
Antihydrogen atoms
In 1995, CERN announced that it had successfully brought into existence nine hot antihydrogen atoms by implementing the SLAC/Fermilab concept during the PS210 experiment. The experiment was performed using the Low Energy Antiproton Ring (LEAR), and was led by Walter Oelert and Mario Macri. Fermilab soon confirmed the CERN findings by producing approximately 100 antihydrogen atoms at their facilities. The antihydrogen atoms created during PS210 and subsequent experiments (at both CERN and Fermilab) were extremely energetic and were not well suited to study. To resolve this hurdle, and to gain a better understanding of antihydrogen, two collaborations were formed in the late 1990s, namely, ATHENA and ATRAP.
In 1999, CERN activated the Antiproton Decelerator, a device capable of decelerating antiprotons from a few GeV down to a few MeV – still too "hot" to produce study-effective antihydrogen, but a huge leap forward. In late 2002 the ATHENA project announced that they had created the world's first "cold" antihydrogen. The ATRAP project released similar results very shortly thereafter. The antiprotons used in these experiments were cooled by decelerating them with the Antiproton Decelerator, passing them through a thin sheet of foil, and finally capturing them in a Penning–Malmberg trap. The overall cooling process is workable, but highly inefficient; approximately 25 million antiprotons leave the Antiproton Decelerator and roughly 25,000 make it to the Penning–Malmberg trap, about 0.1% of the original amount.
The antiprotons are still hot when initially trapped. To cool them further, they are mixed into an electron plasma. The electrons in this plasma cool via cyclotron radiation, and then sympathetically cool the antiprotons via Coulomb collisions. Eventually, the electrons are removed by the application of short-duration electric fields, leaving the antiprotons at much lower energies. While the antiprotons are being cooled in the first trap, a small cloud of positrons is captured from radioactive sodium in a Surko-style positron accumulator. This cloud is then recaptured in a second trap near the antiprotons. Manipulations of the trap electrodes then tip the antiprotons into the positron plasma, where some positrons combine with antiprotons to form antihydrogen. This neutral antihydrogen is unaffected by the electric and magnetic fields used to trap the charged positrons and antiprotons, and within a few microseconds the antihydrogen hits the trap walls, where it annihilates. Some hundreds of millions of antihydrogen atoms have been made in this fashion.
In 2005, ATHENA disbanded and some of the former members (along with others) formed the ALPHA Collaboration, which is also based at CERN. The ultimate goal of this endeavour is to test CPT symmetry through comparison of the atomic spectra of hydrogen and antihydrogen (see hydrogen spectral series).
Most of the sought-after high-precision tests of the properties of antihydrogen could only be performed if the antihydrogen were trapped, that is, held in place for a relatively long time. While antihydrogen atoms are electrically neutral, the spins of their component particles produce a magnetic moment. These magnetic moments can interact with an inhomogeneous magnetic field; some of the antihydrogen atoms can be attracted to a magnetic minimum. Such a minimum can be created by a combination of mirror and multipole fields. Antihydrogen can be trapped in such a magnetic minimum (minimum-B) trap; in November 2010, the ALPHA collaboration announced that they had so trapped 38 antihydrogen atoms for about a sixth of a second. This was the first time that neutral antimatter had been trapped.
On 26 April 2011, ALPHA announced that they had trapped 309 antihydrogen atoms, some for as long as 1,000 seconds (about 17 minutes). This was longer than neutral antimatter had ever been trapped before. ALPHA has used these trapped atoms to initiate research into the spectral properties of the antihydrogen.
In 2016, a new antiproton decelerator and cooler called ELENA (Extra Low ENergy Antiproton decelerator) was built. It takes the antiprotons from the Antiproton Decelerator and cools them further, to 90 keV, which is "cold" enough to study. More than one hundred antiprotons can be captured per second, a huge improvement, but it would still take several thousand years to make a nanogram of antimatter.
The biggest limiting factor in the large-scale production of antimatter is the availability of antiprotons. Recent data released by CERN states that, when fully operational, their facilities are capable of producing ten million antiprotons per minute. Assuming a 100% conversion of antiprotons to antihydrogen, it would take 100 billion years to produce 1 gram or 1 mole of antihydrogen (approximately 6.02×10^23 atoms of antihydrogen). However, CERN produces only 1% of the antimatter that Fermilab does, and neither is designed to produce antimatter. According to Gerald Jackson, using technology already in use today we are capable of producing and capturing 20 grams of antimatter particles per year at a yearly cost of 670 million dollars per facility.
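The 100-billion-year figure follows directly from the quoted rate, as this back-of-the-envelope check shows (idealized: 100% conversion and uninterrupted running):

```python
rate_per_minute = 10_000_000   # antiprotons per minute, as quoted by CERN
atoms_per_gram = 6.02e23       # one mole of antihydrogen is roughly one gram

minutes = atoms_per_gram / rate_per_minute
years = minutes / (60 * 24 * 365.25)
print(f"{years:.1e} years")    # ≈ 1.1e11, i.e. about 100 billion years
```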
Antihelium
Antihelium-3 nuclei () were first observed in the 1970s in proton–nucleus collision experiments at the Institute for High Energy Physics by Y. Prockoshkin's group (Protvino near Moscow, USSR) and later created in nucleus–nucleus collision experiments. Nucleus–nucleus collisions produce antinuclei through the coalescence of antiprotons and antineutrons created in these reactions. In 2011, the STAR detector reported the observation of artificially created antihelium-4 nuclei (anti-alpha particles) () from such collisions.
The Alpha Magnetic Spectrometer on the International Space Station has, as of 2021, recorded eight events that seem to indicate the detection of antihelium-3.
Preservation
Antimatter cannot be stored in a container made of ordinary matter because antimatter reacts with any matter it touches, annihilating itself and an equal amount of the container. Antimatter in the form of charged particles can be contained by a combination of electric and magnetic fields, in a device called a Penning trap. This device cannot, however, contain antimatter that consists of uncharged particles, for which atomic traps are used. In particular, such a trap may use the dipole moment (electric or magnetic) of the trapped particles. At high vacuum, the matter or antimatter particles can be trapped and cooled with slightly off-resonant laser radiation using a magneto-optical trap or magnetic trap. Small particles can also be suspended with optical tweezers, using a highly focused laser beam.
In 2011, CERN scientists were able to preserve antihydrogen for approximately 17 minutes. The record for storing antiparticles is currently held by the TRAP experiment at CERN: antiprotons were kept in a Penning trap for 405 days. A proposal was made in 2018 to develop containment technology advanced enough to contain a billion anti-protons in a portable device to be driven to another lab for further experimentation.
Cost
Scientists claim that antimatter is the costliest material to make. In 2006, Gerald Smith estimated that $250 million could produce 10 milligrams of positrons (equivalent to $25 billion per gram); in 1999, NASA gave a figure of $62.5 trillion per gram of antihydrogen. This is because production is difficult (only very few antiprotons are produced in reactions in particle accelerators) and because there is higher demand for other uses of particle accelerators. According to CERN, it has cost a few hundred million Swiss francs to produce about 1 billionth of a gram (the amount used so far for particle/antiparticle collisions). In comparison, the cost of the Manhattan Project, which produced the first atomic weapon, was estimated at $23 billion in 2007 dollars.
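The per-gram figures scale directly from the quoted estimates:

```python
# Gerald Smith's 2006 estimate: $250 million for 10 milligrams of positrons.
cost_usd = 250e6
mass_g = 10e-3                                # 10 mg expressed in grams
print(f"${cost_usd / mass_g:,.0f} per gram")  # $25,000,000,000 per gram
```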
Several studies funded by the NASA Institute for Advanced Concepts are exploring whether it might be possible to use magnetic scoops to collect the antimatter that occurs naturally in the Van Allen belt of the Earth, and ultimately, the belts of gas giants, like Jupiter, hopefully at a lower cost per gram.
Uses
Medical
Matter–antimatter reactions have practical applications in medical imaging, such as positron emission tomography (PET). In positive beta decay, a nuclide loses surplus positive charge by emitting a positron (in the same event, a proton becomes a neutron, and a neutrino is also emitted). Nuclides with surplus positive charge are easily made in a cyclotron and are widely generated for medical use. Antiprotons have also been shown within laboratory experiments to have the potential to treat certain cancers, in a similar method currently used for ion (proton) therapy.
Fuel
Isolated and stored antimatter could be used as a fuel for interplanetary or interstellar travel as part of an antimatter-catalyzed nuclear pulse propulsion or another antimatter rocket. Since the energy density of antimatter is higher than that of conventional fuels, an antimatter-fueled spacecraft would have a higher thrust-to-weight ratio than a conventional spacecraft.
If matter–antimatter collisions resulted only in photon emission, the entire rest mass of the particles would be converted to kinetic energy. The energy per unit mass (about 9×10^16 J/kg) is about 10 orders of magnitude greater than chemical energies, about 3 orders of magnitude greater than the nuclear potential energy that can be liberated, today, using nuclear fission (about 200 MeV per fission reaction, or about 8×10^13 J/kg), and about 2 orders of magnitude greater than the best possible results expected from fusion (about 6.3×10^14 J/kg for the proton–proton chain). The reaction of 1 kg of antimatter with 1 kg of matter would produce 1.8×10^17 J (180 petajoules) of energy (by the mass–energy equivalence formula, E = mc^2), or the rough equivalent of 43 megatons of TNT – slightly less than the yield of the 27,000 kg Tsar Bomba, the largest thermonuclear weapon ever detonated.
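These numbers are fixed by the mass–energy relation alone; a quick check, taking 1 megaton of TNT as 4.184×10^15 J:

```python
c = 2.998e8                      # speed of light, m/s
mass_kg = 2.0                    # 1 kg of antimatter + 1 kg of matter

energy_j = mass_kg * c ** 2      # E = m·c², all rest mass converted
megatons = energy_j / 4.184e15   # TNT equivalent

print(f"{energy_j:.2e} J ≈ {megatons:.0f} Mt")  # 1.80e17 J ≈ 43 Mt
```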
Not all of that energy can be utilized by any realistic propulsion technology because of the nature of the annihilation products. While electron–positron reactions result in gamma ray photons, these are difficult to direct and use for thrust. In reactions between protons and antiprotons, their energy is converted largely into relativistic neutral and charged pions. The neutral pions decay almost immediately (with a lifetime of 85 attoseconds) into high-energy photons, but the charged pions decay more slowly (with a lifetime of 26 nanoseconds) and can be deflected magnetically to produce thrust.
Charged pions ultimately decay into a combination of neutrinos (carrying about 22% of the energy of the charged pions) and unstable charged muons (carrying about 78% of the charged pion energy), with the muons then decaying into a combination of electrons, positrons and neutrinos (cf. muon decay; the neutrinos from this decay carry about 2/3 of the energy of the muons), meaning that from the original charged pions, the total fraction of their energy converted to neutrinos by one route or another would be about 0.22 + (2/3)(0.78) ≈ 0.74.
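The quoted fraction is simple bookkeeping on the stated branchings:

```python
direct_to_neutrinos = 0.22     # charged-pion energy carried off by neutrinos
to_muons = 0.78                # remainder carried by the unstable muons
muon_to_neutrinos = 2.0 / 3.0  # fraction of muon energy going to neutrinos

total = direct_to_neutrinos + to_muons * muon_to_neutrinos
print(round(total, 2))         # 0.74: ~3/4 of the energy escapes as neutrinos
```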
Weapons
Antimatter has been considered as a trigger mechanism for nuclear weapons. A major obstacle is the difficulty of producing antimatter in large enough quantities, and there is no evidence that it will ever be feasible. Nonetheless, the U.S. Air Force funded studies of the physics of antimatter in the Cold War, and began considering its possible use in weapons, not just as a trigger, but as the explosive itself.
See also
References
Further reading
External links
Freeview Video 'Antimatter' by the Vega Science Trust and the BBC/OU
CERN Webcasts (RealPlayer required)
What is Antimatter? (from the Frequently Asked Questions at the Center for Antimatter–Matter Studies)
FAQ from CERN with information about antimatter aimed at the general reader, posted in response to antimatter's fictional portrayal in Angels & Demons
Antimatter at Angels and Demons, CERN
What is direct CP-violation?
Animated illustration of antihydrogen production at CERN from the Exploratorium.
|
https://en.wikipedia.org/wiki/Antiparticle
|
In particle physics, every type of particle is associated with an antiparticle with the same mass but with opposite physical charges (such as electric charge). For example, the antiparticle of the electron is the positron (also known as an antielectron). While the electron has a negative electric charge, the positron has a positive electric charge, and is produced naturally in certain types of radioactive decay. The opposite is also true: the antiparticle of the positron is the electron.
Some particles, such as the photon, are their own antiparticle. Otherwise, for each pair of antiparticle partners, one is designated as the normal particle (the one that occurs in matter usually interacted with in daily life). The other (usually given the prefix "anti-") is designated the antiparticle.
Particle–antiparticle pairs can annihilate each other, producing photons; since the charges of the particle and antiparticle are opposite, total charge is conserved. For example, the positrons produced in natural radioactive decay quickly annihilate themselves with electrons, producing pairs of gamma rays, a process exploited in positron emission tomography.
The laws of nature are very nearly symmetrical with respect to particles and antiparticles. For example, an antiproton and a positron can form an antihydrogen atom, which is believed to have the same properties as a hydrogen atom. This leads to the question of why the formation of matter after the Big Bang resulted in a universe consisting almost entirely of matter, rather than being a half-and-half mixture of matter and antimatter. The discovery of charge parity violation helped to shed light on this problem by showing that this symmetry, originally thought to be perfect, was only approximate.
Because charge is conserved, it is not possible to create an antiparticle without either destroying another particle of the same charge (as is for instance the case when antiparticles are produced naturally via beta decay or the collision of cosmic rays with Earth's atmosphere), or by the simultaneous creation of both a particle and its antiparticle, which can occur in particle accelerators such as the Large Hadron Collider at CERN.
Particles and their antiparticles have equal and opposite charges, so that an uncharged particle also gives rise to an uncharged antiparticle. In many cases, the antiparticle and the particle coincide: pairs of photons, Z0 bosons, π0 mesons, and hypothetical gravitons and some hypothetical WIMPs all self-annihilate. However, electrically neutral particles need not be identical to their antiparticles: for example, the neutron and antineutron are distinct.
History
Experiment
In 1932, soon after the prediction of positrons by Paul Dirac, Carl D. Anderson found that cosmic-ray collisions produced these particles in a cloud chamber – a particle detector in which moving electrons (or positrons) leave behind trails as they move through the gas. The electric charge-to-mass ratio of a particle can be measured by observing the radius of curling of its cloud-chamber track in a magnetic field. Positrons, because of the direction that their paths curled, were at first mistaken for electrons travelling in the opposite direction. Positron paths in a cloud-chamber trace the same helical path as an electron but rotate in the opposite direction with respect to the magnetic field direction due to their having the same magnitude of charge-to-mass ratio but with opposite charge and, therefore, opposite signed charge-to-mass ratios.
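The relation behind that measurement is that a charge moving perpendicular to a magnetic field follows a circle of radius r = p / (qB), so, non-relativistically, q/m = v / (Br). A sketch with invented track values:

```python
def charge_to_mass_ratio(speed: float, b_field: float, radius: float) -> float:
    """Non-relativistic charge-to-mass ratio from a circular track:
    q·v·B = m·v²/r  =>  q/m = v / (B·r)."""
    return speed / (b_field * radius)

# Invented cloud-chamber numbers: v = 1e7 m/s, B = 1 mT, r = 5.7 cm.
print(f"{charge_to_mass_ratio(1e7, 1e-3, 0.057):.2e} C/kg")
# ≈ 1.75e11 C/kg, the magnitude expected for an electron or positron.
```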
The antiproton and antineutron were found by Emilio Segrè and Owen Chamberlain in 1955 at the University of California, Berkeley. Since then, the antiparticles of many other subatomic particles have been created in particle accelerator experiments. In recent years, complete atoms of antimatter have been assembled out of antiprotons and positrons, collected in electromagnetic traps.
Dirac hole theory
Solutions of the Dirac equation contain negative energy quantum states. As a result, an electron could always radiate energy and fall into a negative energy state. Even worse, it could keep radiating infinite amounts of energy because there were infinitely many negative energy states available. To prevent this unphysical situation from happening, Dirac proposed that a "sea" of negative-energy electrons fills the universe, already occupying all of the lower-energy states so that, due to the Pauli exclusion principle, no other electron could fall into them. Sometimes, however, one of these negative-energy particles could be lifted out of this Dirac sea to become a positive-energy particle. But, when lifted out, it would leave behind a hole in the sea that would act exactly like a positive-energy electron with a reversed charge. These holes were interpreted as "negative-energy electrons" by Paul Dirac and mistakenly identified with protons in his 1930 paper A Theory of Electrons and Protons. However, these "negative-energy electrons" turned out to be positrons, and not protons.
This picture implied an infinite negative charge for the universe, a problem of which Dirac was aware. Dirac tried to argue that we would perceive this as the normal state of zero charge. Another difficulty was the difference in masses of the electron and the proton. Dirac tried to argue that this was due to the electromagnetic interactions with the sea, until Hermann Weyl proved that hole theory was completely symmetric between negative and positive charges. Dirac also predicted a reaction e− + p+ → γ + γ, in which an electron and a proton annihilate to give two photons. Robert Oppenheimer and Igor Tamm, however, proved that this would cause ordinary matter to disappear too fast. A year later, in 1931, Dirac modified his theory and postulated the positron, a new particle of the same mass as the electron. The discovery of this particle the next year removed the last two objections to his theory.
Within Dirac's theory, the problem of infinite charge of the universe remains. Some bosons also have antiparticles, but since bosons do not obey the Pauli exclusion principle (only fermions do), hole theory does not work for them. A unified interpretation of antiparticles is now available in quantum field theory, which solves both these problems by describing antimatter as negative energy states of the same underlying matter field, i.e. particles moving backwards in time.
Particle–antiparticle annihilation
If a particle and antiparticle are in the appropriate quantum states, then they can annihilate each other and produce other particles. Reactions such as e− + e+ → γ + γ (the two-photon annihilation of an electron–positron pair) are an example. The single-photon annihilation of an electron–positron pair, e− + e+ → γ, cannot occur in free space because it is impossible to conserve energy and momentum together in this process. However, in the Coulomb field of a nucleus the translational invariance is broken and single-photon annihilation may occur. The reverse reaction (in free space, without an atomic nucleus) is also impossible for this reason. In quantum field theory, this process is allowed only as an intermediate quantum state for times short enough that the violation of energy conservation can be accommodated by the uncertainty principle. This opens the way for virtual pair production or annihilation in which a one particle quantum state may fluctuate into a two particle state and back. These processes are important in the vacuum state and renormalization of a quantum field theory. It also opens the way for neutral particle mixing, a complicated example of mass renormalization.
Properties
Quantum states of a particle and an antiparticle are interchanged by the combined application of charge conjugation C, parity P and time reversal T.
C and P are linear, unitary operators; T is antilinear and antiunitary.
If |p, σ, n⟩ denotes the quantum state of a particle n with momentum p and spin J whose component in the z-direction is σ, then one has
CPT |p, σ, n⟩ = (−1)^(J−σ) |p, −σ, n̄⟩,
where n̄ denotes the charge conjugate state, that is, the antiparticle. In particular a massive particle and its antiparticle transform under the same irreducible representation of the Poincaré group, which means the antiparticle has the same mass and the same spin.
If the operators C, P and T can be defined separately on the particles and antiparticles, then
C |p, σ, n⟩ ∝ |p, σ, n̄⟩,  P |p, σ, n⟩ ∝ |−p, σ, n⟩,  T |p, σ, n⟩ ∝ |−p, −σ, n⟩,
where the proportionality sign indicates that there might be a phase on the right-hand side.
As C anticommutes with the charges, C Q = −Q C, particle and antiparticle have opposite electric charges q and −q.
Quantum field theory
This section draws upon the ideas, language and notation of canonical quantization of a quantum field theory.
One may try to quantize an electron field without mixing the annihilation and creation operators by writing
ψ(x) = Σ_k u_k(x) a_k e^(−iE(k)t),
where we use the symbol k to denote the quantum numbers p and σ of the previous section and the sign of the energy, E(k), and a_k denotes the corresponding annihilation operators. Of course, since we are dealing with fermions, we have to have the operators satisfy canonical anti-commutation relations. However, if one now writes down the Hamiltonian
H = Σ_k E(k) a_k† a_k,
then one sees immediately that the expectation value of H need not be positive. This is because E(k) can have any sign whatsoever, and the combination of creation and annihilation operators has expectation value 1 or 0.
So one has to introduce the charge conjugate antiparticle field, with its own creation and annihilation operators satisfying the relations
b_k† = a_k′ and b_k = a_k′†,
where k′ has the same p, and opposite σ and sign of the energy. Then one can rewrite the field in the form
ψ(x) = Σ_(E>0) u_k(x) a_k e^(−iE(k)t) + Σ_(E<0) u_k(x) b_k† e^(−iE(k)t),
where the first sum is over positive energy states and the second over those of negative energy. The energy becomes
H = Σ_k |E(k)| (a_k† a_k + b_k† b_k) + E_0,
where E_0 is an infinite negative constant. The vacuum state is defined as the state with no particle or antiparticle, i.e., a_k |0⟩ = 0 and b_k |0⟩ = 0. Then the energy of the vacuum is exactly E_0. Since all energies are measured relative to the vacuum, H is positive definite. Analysis of the properties of a_k and b_k shows that one is the annihilation operator for particles and the other for antiparticles. This is the case of a fermion.
This approach is due to Vladimir Fock, Wendell Furry and Robert Oppenheimer. If one quantizes a real scalar field, then one finds that there is only one kind of annihilation operator; therefore, real scalar fields describe neutral bosons. Since complex scalar fields admit two different kinds of annihilation operators, which are related by conjugation, such fields describe charged bosons.
Feynman–Stueckelberg interpretation
By considering the propagation of the negative energy modes of the electron field backward in time, Ernst Stueckelberg reached a pictorial understanding of the fact that the particle and antiparticle have equal mass m and spin J but opposite charges q. This allowed him to rewrite perturbation theory precisely in the form of diagrams. Richard Feynman later gave an independent systematic derivation of these diagrams from a particle formalism, and they are now called Feynman diagrams. Each line of a diagram represents a particle propagating either backward or forward in time. In Feynman diagrams, anti-particles are shown traveling backwards in time relative to normal matter, and vice versa. This technique is the most widespread method of computing amplitudes in quantum field theory today.
Since this picture was first developed by Stueckelberg, and acquired its modern form in Feynman's work, it is called the Feynman–Stueckelberg interpretation of antiparticles to honor both scientists.
See also
List of particles
Antimatter
Gravitational interaction of antimatter
Parity, charge conjugation, and time reversal symmetry
CP violations
Quantum field theory
Baryogenesis, baryon asymmetry, and Leptogenesis
One-electron universe
Paul Dirac
Notes
References
External links
Antimatter at CERN
|
https://en.wikipedia.org/wiki/Anchor
|
An anchor is a device, normally made of metal, used to secure a vessel to the bed of a body of water to prevent the craft from drifting due to wind or current. The word derives from Latin ancora, which itself comes from the Greek ἄγκυρα (ankyra).
Anchors can either be temporary or permanent. Permanent anchors are used in the creation of a mooring, and are rarely moved; a specialist service is normally needed to move or maintain them. Vessels carry one or more temporary anchors, which may be of different designs and weights.
A sea anchor is a drag device, not in contact with the seabed, used to minimise drift of a vessel relative to the water. A drogue is a drag device used to slow or help steer a vessel running before a storm in a following or overtaking sea, or when crossing a bar in a breaking sea.
Overview
Anchors achieve holding power either by "hooking" into the seabed, or weight, or a combination of the two. Permanent moorings use large masses (commonly a block or slab of concrete) resting on the seabed. Semi-permanent mooring anchors (such as mushroom anchors) and large ship's anchors derive a significant portion of their holding power from their weight, while also hooking or embedding in the bottom. Modern anchors for smaller vessels have metal flukes that hook on to rocks on the bottom or bury themselves in soft seabed.
The vessel is attached to the anchor by the rode (also called a cable or a warp). It can be made of rope, chain or a combination of rope and chain. The ratio of the length of rode to the water depth is known as the scope (see below).
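Because scope is just rode length divided by water depth, the rode to pay out for a chosen scope is a one-line calculation; the numbers below are illustrative only:

```python
def rode_for_scope(depth_m: float, scope: float) -> float:
    """Rode length needed for a desired scope (rode length / water depth)."""
    return depth_m * scope

# Illustrative: 6 m of water and a target scope of 5:1.
print(rode_for_scope(6.0, 5.0))  # 30.0 m of rode
```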
Holding ground
Holding ground is the area of sea floor that holds an anchor, and thus the attached ship or boat. Different types of anchor are designed to hold in different types of holding ground. Some bottom materials hold better than others; for instance, hard sand holds well, shell holds poorly. Holding ground may be fouled with obstacles. An anchorage location may be chosen for its holding ground. In poor holding ground, only the weight of an anchor matters; in good holding ground, it is able to dig in, and the holding power can be significantly higher.
History
Evolution of the anchor
The earliest anchors were probably rocks, and many rock anchors have been found dating from at least the Bronze Age. Pre-European Maori waka (canoes) used one or more hollowed stones, tied with flax ropes, as anchors. Many modern moorings still rely on a large rock as the primary element of their design. However, using pure weight to resist the forces of a storm works well only as a permanent mooring; a large enough rock would be nearly impossible to move to a new location.
The ancient Greeks used baskets of stones, large sacks filled with sand, and wooden logs filled with lead. According to Apollonius Rhodius and Stephen of Byzantium, anchors were formed of stone, and Athenaeus states that they were also sometimes made of wood. Such anchors held the vessel merely by their weight and by their friction along the bottom.
Fluked anchors
Iron was afterwards introduced for the construction of anchors, and an improvement was made by forming them with teeth, or "flukes", to fasten themselves into the bottom. This is the iconic anchor shape most familiar to non-sailors.
This form has been used since antiquity. The Roman Nemi ships of the 1st century AD used this form. The Viking Ladby ship (probably 10th century) used a fluked anchor of this type, made of iron, which would have had a wooden stock mounted perpendicular to the shank and flukes to make the flukes contact the bottom at a suitable angle to hook or penetrate.
Admiralty anchor
The Admiralty Pattern anchor, or simply "Admiralty", also known as a "Fisherman", consists of a central shank with a ring or shackle for attaching the rode (the rope, chain, or cable connecting the ship and the anchor). At the other end of the shank there are two arms, carrying the flukes, while the stock is mounted to the shackle end, at ninety degrees to the arms. When the anchor lands on the bottom, it generally falls over with the arms parallel to the seabed. As a strain comes onto the rope, the stock digs into the bottom, canting the anchor until one of the flukes catches and digs into the bottom.
The Admiralty Anchor is an entirely independent reinvention of a classical design, as seen in one of the Nemi ship anchors. This basic design remained unchanged for centuries, with the most significant changes being to the overall proportions, and a move from stocks made of wood to iron stocks in the late 1830s and early 1840s.
Since one fluke always protrudes up from the set anchor, there is a great tendency of the rode to foul the anchor as the vessel swings due to wind or current shifts. When this happens, the anchor may be pulled out of the bottom, and in some cases may need to be hauled up to be re-set. In the mid-19th century, numerous modifications were attempted to alleviate these problems, as well as improve holding power, including one-armed mooring anchors. The most successful of these patent anchors, the Trotman Anchor, introduced a pivot at the centre of the crown, where the arms join the shank, allowing the "idle" upper arm to fold against the shank. When deployed, the lower arm may also fold against the shank, tilting the tip of the fluke upwards; each fluke therefore has a tripping palm at its base that hooks the bottom as the folded arm drags along the seabed, unfolding the downward-oriented arm until the tip of the fluke can engage the bottom.
Handling and storage of these anchors requires special equipment and procedures. Once the anchor is hauled up to the hawsepipe, the ring end is hoisted up to the end of a timber projecting from the bow known as the cathead. The crown of the anchor is then hauled up with a heavy tackle until one fluke can be hooked over the rail. This is known as "catting and fishing" the anchor. Before dropping the anchor, the fishing process is reversed, and the anchor is dropped from the end of the cathead.
Stockless anchor
The stockless anchor, patented in England in 1821, represented the first significant departure in anchor design in centuries. Although their holding-power-to-weight ratio is significantly lower than admiralty pattern anchors, their ease of handling and stowage aboard large ships led to almost universal adoption. In contrast to the elaborate stowage procedures for earlier anchors, stockless anchors are simply hauled up until they rest with the shank inside the hawsepipes, and the flukes against the hull (or inside a recess in the hull).
While there are numerous variations, stockless anchors consist of a set of heavy flukes connected by a pivot or ball and socket joint to a shank. Cast into the crown of the anchor is a set of tripping palms, projections that drag on the bottom, forcing the main flukes to dig in.
Small boat anchors
Until the mid-20th century, anchors for smaller vessels were either scaled-down versions of admiralty anchors, or simple grapnels. As new designs with greater holding-power-to-weight ratios were sought, a great variety of anchor designs has emerged. Many of these designs are still under patent, and other types are best known by their original trademarked names.
Grapnel anchor / drag
A traditional design, the grapnel is merely a shank (no stock) with four or more tines; it is also known as a drag. It has the benefit that, no matter how it reaches the bottom, one or more tines will be aimed to set. In coral or rock it is often able to set quickly by hooking into the structure, but may be more difficult to retrieve. A grapnel is often quite light, and may have additional uses as a tool to recover gear lost overboard. Its weight also makes it relatively easy to move and carry; however, its shape is generally not compact, and it may be awkward to stow unless a collapsing model is used.
Grapnels rarely have enough fluke area to develop much hold in sand, clay, or mud. It is not unknown for the anchor to foul on its own rode, or to foul the tines with refuse from the bottom, preventing it from digging in. On the other hand, it is quite possible for this anchor to find such a good hook that, without a trip line from the crown, it is impossible to retrieve.
Herreshoff anchor
Designed by yacht designer L. Francis Herreshoff, this is essentially the same pattern as an admiralty anchor, albeit with small diamond-shaped flukes or palms. The novelty of the design lay in the means by which it could be broken down into three pieces for stowage. In use, it still presents all the issues of the admiralty pattern anchor.
Northill anchor
Originally designed as a lightweight anchor for seaplanes, this design consists of two plough-like blades mounted to a shank, with a folding stock crossing through the crown of the anchor.
CQR plough anchor
Many manufacturers produce a plough-type anchor, so-named after its resemblance to an agricultural plough. All such anchors are copied from the original CQR (Coastal Quick Release, or Clyde Quick Release, later rebranded as 'secure' by Lewmar), a 1933 design patented in the UK by mathematician Geoffrey Ingram Taylor.
Plough anchors stow conveniently in a roller at the bow, and have been popular with cruising sailors and private boaters. Ploughs can be moderately good in all types of seafloor, though not exceptional in any. Contrary to popular belief, the CQR's hinged shank is not intended to allow the anchor to turn with direction changes without breaking out; rather, it prevents the shank's weight from disrupting the fluke's orientation while setting. The hinge can wear out and may trap a sailor's fingers. Some later plough anchors have a rigid shank, such as Lewmar's "Delta".
A plough anchor has a fundamental flaw: like its namesake, the agricultural plough, it digs in but then tends to break out back to the surface. Plough anchors sometimes have difficulty setting at all, and instead skip across the seafloor. By contrast, modern efficient anchors tend to be "scoop" types that dig ever deeper.
Delta anchor
The Delta anchor was derived from the CQR. It was patented by Philip McCarron, James Stewart, and Gordon Lyall of British marine manufacturer Simpson-Lawrence Ltd in 1992. It was designed as an advance over the anchors used for floating systems such as oil rigs. It retains the weighted tip of the CQR but has a much higher fluke-area-to-weight ratio than its predecessor. The designers also eliminated the sometimes troublesome hinge. It is a plough anchor with a rigid, arched shank. It is described as self-launching because it can be dropped from a bow roller simply by paying out the rode, without manual assistance. It is an oft-copied design, the European Brake and the Australian Sarca Excel being two of the more notable derivatives. Although it is a plough type anchor, it sets and holds reasonably well in hard bottoms.
Danforth anchor
American Richard Danforth invented the Danforth Anchor in the 1940s for use aboard landing craft. It uses a stock at the crown to which two large, flat, triangular flukes are attached. The stock is hinged so the flukes can orient toward the bottom (and on some designs may be adjusted for an optimal angle depending on the bottom type). Tripping palms at the crown act to tip the flukes into the seabed. The design is a burying variety, and once well set can develop high resistance. Its light weight and compact, flat design make it easy to retrieve and relatively easy to store; some anchor rollers and hawsepipes can accommodate a fluke-style anchor.
A Danforth does not usually penetrate or hold in gravel or weeds. In boulders and coral it may hold by acting as a hook. If there is much current, or if the vessel is moving while dropping the anchor, it may "kite" or "skate" over the bottom due to the large fluke area acting as a sail or wing.
The FOB HP anchor designed in Brittany in the 1970s is a Danforth variant designed to give increased holding through its use of rounded flukes setting at a 30° angle.
The Fortress is an American aluminum alloy Danforth variant that can be disassembled for storage and it features an adjustable 32° and 45° shank/fluke angle to improve holding capability in common sea bottoms such as hard sand and soft mud. This anchor performed well in a 1989 US Naval Sea Systems Command (NAVSEA) test and in an August 2014 holding power test that was conducted in the soft mud bottoms of the Chesapeake Bay.
Bruce or claw anchor
This claw-shaped anchor was designed by Peter Bruce from Scotland in the 1970s. Bruce gained his early reputation from the production of large-scale commercial anchors for ships and fixed installations such as oil rigs. The design was later scaled down for small boats, and copies of this popular design abound. The Bruce and its copies, known generically as "claw type anchors", have been adopted on smaller boats (partly because they stow easily on a bow roller) but they are most effective in larger sizes. Claw anchors are quite popular on charter fleets, as they have a high chance of setting on the first try in many bottoms. They have the reputation of not breaking out with tide or wind changes, instead slowly turning in the bottom to align with the force.
Bruce anchors can have difficulty penetrating weedy bottoms and grass. They offer a fairly low holding-power-to-weight ratio and generally have to be oversized to compete with newer types.
Scoop type anchors
The German sailor Rolf Kaczirek, a three-time circumnavigator, invented the Bügel Anker in the 1980s. Kaczirek wanted an anchor that was self-righting without needing a ballasted tip. Instead, he added a roll bar and swapped the ploughshare for a flat blade design. As none of the innovations of this anchor were patented, copies of it abound.
Alain Poiraud of France introduced the scoop type anchor in 1996. Similar in design to the Bügel anchor, Poiraud's design features a concave fluke shaped like the blade of a shovel, with a shank attached parallel to the fluke, and the load applied toward the digging end. It is designed to dig into the bottom like a shovel, and to dig deeper as more pressure is applied. The common challenge with all the scoop type anchors is that they set so well that they can be difficult to weigh.
Bügelanker, or Wasi: This German-designed bow anchor has a sharp tip for penetrating weed, and features a roll-bar that allows the correct setting attitude to be achieved without the need for extra weight to be inserted into the tip.
Spade: This is a French design that has proven successful since 1996. It features a demountable shank (hollow in some instances) and the choice of galvanized steel, stainless steel, or aluminium construction, which means a lighter and more easily stowable anchor. The geometry also makes this anchor self stowing on a single roller.
Rocna: This New Zealand spade design, available in galvanised or stainless steel, has been produced since 2004. It has a roll-bar (similar to that of the Bügel), a large spade-like fluke area, and a sharp toe for penetrating weed and grass. The Rocna sets quickly and holds well.
Mantus: This is claimed to be a fast-setting anchor with high holding power. It is designed as an all-round anchor capable of setting even in challenging bottoms, such as hard sand/clay and grass. The shank is made of high-tensile steel capable of withstanding high loads. It is similar in design to the Rocna, but has a larger and wider roll-bar that reduces the risk of fouling, and a greater fluke angle that improves penetration in some bottoms.
Ultra: This is an innovative spade design that dispenses with a roll-bar. Made primarily of stainless steel, its main arm is hollow, while the fluke tip has lead within it. It is similar in appearance to the Spade anchor.
Vulcan: A recent sibling to the Rocna, this anchor performs similarly but does not have a roll-bar. Instead the Vulcan has patented design features such as the "V-bulb" and the "Roll Palm" that allow it to dig in deeply. The Vulcan was designed primarily for sailors who had difficulties accommodating the roll-bar Rocna on their bow. Peter Smith (originator of the Rocna) designed it specifically for larger powerboats. Both Vulcans and Rocnas are available in galvanised steel, or in stainless steel. The Vulcan is similar in appearance to the Spade anchor.
Knox Anchor: This is produced in Scotland and was invented by Professor John Knox. It has a divided concave large area fluke arrangement and a shank in high tensile steel. A roll bar similar to the Rocna gives fast setting and a holding power of about 40 times anchor weight.
Other temporary anchors
Mud weight: Consists of a blunt, heavy weight, usually cast iron or cast lead, that sinks into the mud and resists lateral movement. It is suitable only for soft silt bottoms and in mild conditions. Sizes range between 5 and 20 kg for small craft. Various designs exist, and many are home-produced from lead or improvised with heavy objects. This is a commonly used method on the Norfolk Broads in England.
Bulwagga: This is a unique design featuring three flukes instead of the usual two. It has performed well in tests by independent sources such as American boating magazine Practical Sailor.
Permanent anchors
These are used where the vessel is permanently or semi-permanently sited, for example in the case of lightvessels or channel marker buoys. The anchor needs to hold the vessel in all weathers, including the most severe storm, but needs to be lifted only rarely, if at all – for example, only if the vessel is to be towed into port for maintenance. An alternative to using an anchor under these circumstances, especially if the anchor need never be lifted at all, may be to use a pile driven into the seabed.
Permanent anchors come in a wide range of types and have no standard form. A slab of rock with an iron staple in it to attach a chain to would serve the purpose, as would any dense object of appropriate weight (for instance, an engine block). Modern moorings may be anchored by augers, which look and act like oversized screws drilled into the seabed, or by barbed metal beams pounded in (or even driven in with explosives) like pilings, or by a variety of other non-mass means of getting a grip on the bottom. One method of building a mooring is to use three or more conventional anchors laid out with short lengths of chain attached to a swivel, so no matter which direction the vessel moves, one or more anchors are aligned to resist the force.
Mushroom
The mushroom anchor is suitable where the seabed is composed of silt or fine sand. It was invented by Robert Stevenson, for use by an 82-ton converted fishing boat, Pharos, which was used as a lightvessel between 1807 and 1810 near to Bell Rock whilst the lighthouse was being constructed. It was equipped with a 1.5-ton example.
It is shaped like an inverted mushroom, the head becoming buried in the silt. A counterweight is often provided at the other end of the shank to lay it down before it becomes buried.
A mushroom anchor normally sinks in the silt to the point where it has displaced its own weight in bottom material, thus greatly increasing its holding power. These anchors are suitable only for a silt or mud bottom, since they rely upon suction and cohesion of the bottom material, which rocky or coarse sand bottoms lack. The holding power of this anchor is at best about twice its weight until it becomes buried, when it can be as much as ten times its weight. They are available in sizes from about 5 kg up to several tons.
Deadweight
A deadweight is an anchor that relies solely on being a heavy weight. It is usually just a large block of concrete or stone at the end of the chain. Its holding power is defined by its weight underwater (i.e., taking its buoyancy into account) regardless of the type of seabed, although suction can increase this if it becomes buried. Consequently, deadweight anchors are used where mushroom anchors are unsuitable, for example in rock, gravel or coarse sand. An advantage of a deadweight anchor over a mushroom is that if it does drag, it continues to provide its original holding force. The disadvantage of using deadweight anchors in conditions where a mushroom anchor could be used is that it needs to be around ten times the weight of the equivalent mushroom anchor.
Auger
Auger anchors can be used to anchor permanent moorings, floating docks, fish farms, etc. These anchors, which have one or more slightly pitched self-drilling threads, must be screwed into the seabed with the use of a tool, so require access to the bottom, either at low tide or by use of a diver. Hence they can be difficult to install in deep water without special equipment.
Weight for weight, augers have higher holding power than other permanent designs, and so can be cheap and relatively easy to install, although they are difficult to set in extremely soft mud.
High-holding types
There is a need in the oil-and-gas industry to resist large anchoring forces when laying pipelines and for drilling vessels. These anchors are installed and removed using a support tug and pennant/pendant wire. Some examples are the Stevin range supplied by Vrijhof Ankers. Large plate anchors such as the Stevmanta are used for permanent moorings.
Anchoring gear
The elements of anchoring gear include the anchor, the cable (also called a rode), the method of attaching the two together, the method of attaching the cable to the ship, charts, and a method of learning the depth of the water.
Vessels may carry a number of anchors: bower anchors are the main anchors used by a vessel and are normally carried at the bow. A kedge anchor is a light anchor used for warping a vessel (also known as kedging), or, more commonly on yachts, for mooring quickly or in benign conditions. A stream anchor, which is usually heavier than a kedge anchor, can be used for kedging or warping in addition to temporary mooring and restraining stern movement in tidal conditions or in waters where vessel movement needs to be restricted, such as rivers and channels.
Charts are vital to good anchoring. Knowing the location of potential dangers, as well as being useful in estimating the effects of weather and tide in the anchorage, is essential in choosing a good place to drop the hook. One can get by without referring to charts, but they are an important tool and a part of good anchoring gear, and a skilled mariner would not choose to anchor without them.
Anchor rode
The anchor rode (or "cable" or "warp") that connects the anchor to the vessel is usually made up of chain, rope, or a combination of the two. Large ships use only chain rode. Smaller craft might use a rope/chain combination or an all-chain rode. All rodes should have some chain; chain is heavy but it resists abrasion from coral, sharp rocks, or shellfish beds, whereas a rope warp is susceptible to abrasion and can fail in a short time when stretched against an abrasive surface. The weight of the chain also helps keep the direction of pull on the anchor closer to horizontal, which improves holding, and absorbs part of the snubbing loads. Where weight is not an issue, a heavier chain provides better holding by forming a catenary curve through the water and resting as much of its length on the bottom as is not lifted by the tension of the mooring load. Any changes to the tension are accommodated by additional chain being lifted off or settling onto the bottom, and this absorbs shock loads until the chain is straight, at which point the full load is taken by the anchor. Additional dissipation of shock loads can be achieved by fitting a snubber between the chain and a bollard or cleat on deck. This also reduces shock loads on the deck fittings, and the vessel usually lies more comfortably and quietly.
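The catenary effect lends itself to a quick estimate. The sketch below is a minimal illustration (not from any sailing reference; the load and chain-weight figures are assumed) of the standard catenary relation s = sqrt(h(h + 2T/w)), the length of chain s lifted off the bottom when the bow is a height h above the seabed, the horizontal load is T, and the submerged chain weight is w per metre; any additional chain simply lies on the seabed in reserve.

import math

def min_suspended_chain(height_m, load_n, chain_n_per_m):
    # Catenary result: chain length hanging off the bottom before the
    # pull at the anchor stops being horizontal.
    return math.sqrt(height_m * (height_m + 2 * load_n / chain_n_per_m))

# Assumed figures: 10 m from seabed to bow roller, 2 kN wind load,
# 10 mm chain weighing roughly 19 N/m when submerged.
print(f"{min_suspended_chain(10, 2000, 19):.0f} m")  # about 47 m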
Being strong and elastic, nylon rope is the most suitable as an anchor rode. Polyester (terylene) is stronger but less elastic than nylon. Both materials sink, so they avoid fouling other craft in crowded anchorages and do not absorb much water. Neither breaks down quickly in sunlight. Elasticity helps absorb shock loading, but causes faster abrasive wear when the rope stretches over an abrasive surface, like a coral bottom or a poorly designed chock. Polypropylene ("polyprop") is not suited to rodes because it floats and is much weaker than nylon, being barely stronger than natural fibres. Some grades of polypropylene break down in sunlight and become hard, weak, and unpleasant to handle. Natural fibres such as manila or hemp are still used in developing nations but absorb a lot of water, are relatively weak, and rot, although they do give good handling grip and are often relatively cheap. Ropes that have little or no elasticity are not suitable as anchor rodes. Elasticity is partly a function of the fibre material and partly of the rope structure.
Every anchor rode should include a length of chain at least equal to the boat's length. Some skippers prefer an all-chain warp for greater security on coral or sharp-edged rock bottoms. The chain should be shackled to the warp through a steel eye, or spliced to the warp using a chain splice. The shackle pin should be securely wired or moused. Either galvanised or stainless steel is suitable for eyes and shackles, galvanised steel being the stronger of the two. Some skippers prefer to add a swivel to the rode. There is a school of thought that says a swivel should not be connected to the anchor itself but somewhere in the chain; however, most skippers connect the swivel directly to the anchor.
Scope
Scope is the ratio of length of the rode to the depth of the water measured from the highest point (usually the anchor roller or bow chock) to the seabed, making allowance for the highest expected tide. The function of this ratio is to ensure that the pull on the anchor is unlikely to break it out of the bottom if it is embedded, or lift it off a hard bottom, either of which is likely to result in the anchor dragging. A large scope induces a load that is nearly horizontal.
In moderate conditions the ratio of rode to water depth should be 4:1 – where there is sufficient swing-room, a greater scope is always better. In rougher conditions it should be up to twice this, with the extra length giving more stretch and a smaller angle to the bottom to resist the anchor breaking out. For example, if the water is 8 metres (26 ft) deep and the anchor roller is 1 metre (3 ft) above the water, the 'depth' is 9 metres (about 30 feet). The amount of rode to let out in moderate conditions is thus 36 metres (about 120 feet). (For this reason it is important to have a reliable and accurate method of measuring the depth of water.)
When using a rope rode, there is a simple way to estimate the scope: while lying back hard on the anchor, the ratio of the bow height (where the rode leaves the boat) to the length of rode above the water is the same as, or less than, the overall scope ratio. The basis for this is simple geometry (the intercept theorem): the ratio between two sides of a triangle stays the same regardless of the size of the triangle, as long as the angles do not change.
Generally, the rode should be between 5 and 10 times the depth to the seabed, giving a scope of 5:1 or 10:1; the larger the number, the shallower the angle between the cable and the seafloor, and the less upward force acting on the anchor. A 10:1 scope gives the greatest holding power, but also allows much more drifting due to the longer amount of cable paid out. Anchoring with sufficient scope and/or heavy chain rode brings the direction of strain close to parallel with the seabed. This is particularly important for light, modern anchors designed to bury in the bottom, where scopes of 5:1 to 7:1 are common, whereas heavy anchors and moorings can use a scope of 3:1 or less. Some modern anchors, such as the Ultra, can hold with a scope of 3:1; but, unless the anchorage is crowded, a longer scope always reduces shock stresses.
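A minimal sketch of the scope arithmetic above (the function and its inputs are illustrative only); the 'depth' is measured from the bow roller to the seabed at the highest expected tide, as in the worked example:

def rode_length(water_depth_m, tide_allowance_m, bow_height_m, scope):
    # Length of rode to pay out for a given scope ratio.
    return scope * (water_depth_m + tide_allowance_m + bow_height_m)

# The worked example above: 8 m of water, 1 m bow roller, 4:1 scope.
print(rode_length(8, 0, 1, 4))  # 36 m
# Rougher conditions at 8:1 scope:
print(rode_length(8, 0, 1, 8))  # 72 m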
Anchoring techniques
Basic anchoring consists of determining the location, dropping the anchor, laying out the scope, setting the hook, and assessing where the vessel ends up. The crew seeks a location that is sufficiently protected, has suitable holding ground and enough depth at low tide, and leaves enough room for the boat to swing.
The location to drop the anchor should be approached from down wind or down current, whichever is stronger. As the chosen spot is approached, the vessel should be stopped or even beginning to drift back. The anchor should initially be lowered quickly but under control until it is on the bottom (see anchor windlass). The vessel should continue to drift back, and the cable should be veered out under control (slowly) so it is relatively straight.
Once the desired scope is laid out, the vessel should be gently forced astern, usually using the auxiliary motor but possibly by backing a sail. A hand on the anchor line may telegraph a series of jerks and jolts, indicating the anchor is dragging, or a smooth tension indicative of digging in. As the anchor begins to dig in and resist backward force, the engine may be throttled up to get a thorough set. If the anchor continues to drag, or sets after having dragged too far, it should be retrieved and moved back to the desired position (or another location chosen.)
There are techniques of anchoring to limit the swing of a vessel if the anchorage has limited room:
Using an anchor weight, kellet or sentinel
Lowering a concentrated, heavy weight down the anchor line – rope or chain – directly in front of the bow to the seabed behaves like a heavy chain rode and lowers the angle of pull on the anchor. If the weight is suspended off the seabed it acts as a spring or shock absorber to dampen the sudden actions that are normally transmitted to the anchor and can cause it to dislodge and drag. In light conditions, a kellet reduces the swing of the vessel considerably. In heavier conditions these effects disappear as the rode becomes straightened and the weight ineffective. In the UK this weight is known as an "anchor chum weight" or "angel".
Forked moor
Using two anchors set approximately 45° apart, or wider angles up to 90°, from the bow is a strong mooring for facing into strong winds. To set anchors in this way, first one anchor is set in the normal fashion. Then, taking in on the first cable as the boat is motored into the wind and letting slack while drifting back, a second anchor is set approximately a half-scope away from the first on a line perpendicular to the wind. After this second anchor is set, the scope on the first is taken up until the vessel is lying between the two anchors and the load is taken equally on each cable.
This moor also to some degree limits the range of a vessel's swing to a narrower oval. Care should be taken that other vessels do not swing down on the boat due to the limited swing range.
Bow and stern
(Not to be confused with the Bahamian moor, below.) In the bow and stern technique, an anchor is set off each of the bow and the stern, which can severely limit a vessel's swing range and also align it to steady wind, current or wave conditions. One method of accomplishing this moor is to set a bow anchor normally, then drop back to the limit of the bow cable (or to double the desired scope, e.g. 8:1 if the eventual scope should be 4:1, 10:1 if the eventual scope should be 5:1, etc.) to lower a stern anchor. By taking up on the bow cable the stern anchor can be set. After both anchors are set, tension is taken up on both cables to limit the swing or to align the vessel.
Bahamian moor
Similar to the above, a Bahamian moor is used to sharply limit the swing range of a vessel, but allows it to swing to a current. One of the primary characteristics of this technique is the use of a swivel as follows: the first anchor is set normally, and the vessel drops back to the limit of anchor cable. A second anchor is attached to the end of the anchor cable, and is dropped and set. A swivel is attached to the middle of the anchor cable, and the vessel connected to that.
The vessel now swings between two anchors, which is acceptable in strong reversing currents, but a wind perpendicular to the current may break out the anchors, as they are not aligned for this load.
Backing an anchor
Also known as tandem anchoring, in this technique two anchors are deployed in line with each other, on the same rode. With the foremost anchor reducing the load on the aft-most, this technique can develop great holding power and may be appropriate in "ultimate storm" circumstances. It does not limit swinging range, and might not be suitable in some circumstances. There are complications, and the technique requires careful preparation and a level of skill and experience above that required for a single anchor.
Kedging
Kedging or warping is a technique for moving or turning a ship by using a relatively light anchor.
In yachts, a kedge anchor is an anchor carried in addition to the main, or bower anchors, and usually stowed aft. Every yacht should carry at least two anchors – the main or bower anchor and a second lighter kedge anchor. It is used occasionally when it is necessary to limit the turning circle as the yacht swings when it is anchored, such as in a narrow river or a deep pool in an otherwise shallow area. Kedge anchors are sometimes used to recover vessels that have run aground.
For ships, a kedge may be dropped while a ship is underway, or carried out in a suitable direction by a tender or ship's boat to enable the ship to be winched off if aground or swung into a particular heading, or even to be held steady against a tidal or other stream.
Historically, kedging was of particular relevance to sailing warships, which used it to outmaneuver opponents when the wind had dropped, but it might be used by any vessel in confined, shoal water to place her in a more desirable position, provided she had enough manpower.
Club hauling
Club hauling is an archaic technique. When a vessel is in a narrow channel or on a lee shore so that there is no room to tack the vessel in a conventional manner, an anchor attached to the lee quarter may be dropped from the lee bow. This is deployed when the vessel is head to wind and has lost headway. As the vessel gathers sternway the strain on the cable pivots the vessel around what is now the weather quarter turning the vessel onto the other tack. The anchor is then normally cut away (the ship's momentum prevents recovery without aborting the maneuver).
Weighing anchor
Since all anchors that embed themselves in the bottom require the strain to be along the seabed, anchors can be broken out of the bottom by shortening the rope until the vessel is directly above the anchor; at this point the anchor chain is "up and down", in naval parlance. If necessary, motoring slowly around the location of the anchor also helps dislodge it. Anchors are sometimes fitted with a trip line attached to the crown, by which they can be unhooked from rocks, coral, chain, or other underwater hazards.
The term aweigh describes an anchor when it is hanging on the rope and is not resting on the bottom. This is linked to the term to weigh anchor, meaning to lift the anchor from the sea bed, allowing the ship or boat to move. An anchor is described as aweigh when it has been broken out of the bottom and is being hauled up to be stowed. Aweigh should not be confused with under way, which describes a vessel that is not moored to a dock or anchored, whether or not the vessel is moving through the water. Aweigh is also often confused with away, which is incorrect.
Anchor as symbol
An anchor frequently appears on the flags and coats of arms of institutions involved with the sea, both naval and commercial, as well as of port cities and seacoast regions and provinces in various countries. There also exists in heraldry the "Anchored Cross", or Mariner's Cross, a stylized cross in the shape of an anchor. The symbol can be used to signify 'fresh start' or 'hope'. The New Testament refers to the Christian's hope as "an anchor of the soul". The Mariner's Cross is also referred to as St. Clement's Cross, in reference to the way this saint was killed (being tied to an anchor and thrown from a boat into the Black Sea in 102). Anchored crosses are occasionally a feature of coats of arms in which context they are referred to by the heraldic terms anchry or ancre.
In 1887, the Delta Gamma Fraternity adopted the anchor as its badge to signify hope.
The anchor is encoded in Unicode (Miscellaneous Symbols block) at U+2693: ⚓.
See also
"Anchors Aweigh", United States Navy marching song
Further reading
William N. Brady (1864). The Kedge-anchor; Or, Young Sailors' Assistant.
First published as The Naval Apprentice's Kedge Anchor (New York: Taylor and Clement, 1841); 3rd ed., New York, 1848; 6th ed., New York, 1852; 9th ed., New York, 1857.
External links
Anchor Tests: Soft Sand Over Hard Sand—Practical-Sailor
The Big Anchor Project
Anchor comparison
Heraldic charges
Nautical terminology
Sailboat components
Sailing ship components
Ship anchors
Watercraft components
Weights
|
https://en.wikipedia.org/wiki/Ammonia
|
Ammonia is an inorganic chemical compound of nitrogen and hydrogen with the formula NH3. A stable binary hydride and the simplest pnictogen hydride, ammonia is a colourless gas with a distinct pungent smell. Biologically, it is a common nitrogenous waste, and it contributes significantly to the nutritional needs of terrestrial organisms by serving as a precursor to fertilisers. Around 70% of ammonia produced industrially is used to make fertilisers in various forms and compositions, such as urea and diammonium phosphate. Ammonia in pure form is also applied directly into the soil.
Ammonia, either directly or indirectly, is also a building block for the synthesis of many pharmaceutical products and is used in many commercial cleaning products. In the laboratory, gaseous ammonia is collected by the downward displacement of air; being lighter than air and highly soluble, it cannot be collected over water.
Although common in nature—both terrestrially and in the outer planets of the Solar System—and in wide use, ammonia is both caustic and hazardous in its concentrated form. In many countries it is classified as an extremely hazardous substance, and is subject to strict reporting requirements by facilities that produce, store, or use it in significant quantities.
The global industrial production of ammonia in 2021 was 235 million tonnes. Industrial ammonia is sold either as ammonia liquor (usually 28% ammonia in water) or as pressurised or refrigerated anhydrous liquid ammonia transported in tank cars or cylinders.
For fundamental reasons, the production of ammonia from the elements hydrogen and nitrogen is difficult, requiring high pressures and high temperatures. The Haber process that enabled industrial production was invented at the beginning of the 20th century, revolutionizing agriculture.
Ammonia boils at −33.3 °C (−27.9 °F) at a pressure of one atmosphere, so the liquid must be stored under pressure or at low temperature. Household ammonia or ammonium hydroxide is a solution of NH3 in water. The concentration of such solutions is measured in units of the Baumé scale (density), with 26 degrees Baumé (about 30% ammonia by weight at 15.5 °C or 59.9 °F) being the typical high-concentration commercial product.
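As a rough check of the Baumé figure, assuming the American Baumé convention for liquids lighter than water (specific gravity = 140/(130 + °Bé)):

def baume_to_sg(degrees_baume):
    # US Baumé scale for liquids less dense than water (assumed convention).
    return 140.0 / (130.0 + degrees_baume)

print(f"{baume_to_sg(26):.3f}")  # about 0.897, consistent with a ~30 wt% ammonia solution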
Etymology
Pliny, in Book XXXI of his Natural History, refers to a salt named hammoniacum, so called because of its proximity to the Temple of Jupiter Amun (Greek Ἄμμων Ammon) in the Roman province of Cyrenaica. However, the description Pliny gives of the salt does not conform to the properties of ammonium chloride. According to Herbert Hoover's commentary in his English translation of Georgius Agricola's De re metallica, it is likely to have been common sea salt. In any case, that salt ultimately gave ammonia and ammonium compounds their name.
Natural occurrence
Ammonia is a chemical found in trace quantities on Earth, being produced from nitrogenous animal and vegetable matter. Ammonia and ammonium salts are also found in small quantities in rainwater, whereas ammonium chloride (sal ammoniac) and ammonium sulfate are found in volcanic districts. Crystals of ammonium bicarbonate have been found in Patagonia guano.
Ammonia is also found throughout the Solar System on Mars, Jupiter, Saturn, Uranus, Neptune, and Pluto, among other places: on smaller, icy bodies such as Pluto, ammonia can act as a geologically important antifreeze, as a mixture of water and ammonia can have a melting point as low as about −100 °C (173 K) if the ammonia concentration is high enough, and thus allow such bodies to retain internal oceans and active geology at a far lower temperature than would be possible with water alone. Substances containing ammonia, or those that are similar to it, are called ammoniacal.
Properties
Ammonia is a colourless gas with a characteristically pungent smell. It is lighter than air, its density being 0.589 times that of air. It is easily liquefied due to the strong hydrogen bonding between molecules. Gaseous ammonia turns to a colourless liquid, which boils at −33.3 °C (−27.9 °F), and freezes to colourless crystals at −77.7 °C (−107.9 °F). Little data is available at very high temperatures and pressures, such as supercritical conditions.
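The quoted relative density follows directly from the molar masses; the mean molar mass of dry air used below (about 28.96 g/mol) is a standard assumed value:

M_NH3 = 17.031  # g/mol, ammonia
M_AIR = 28.96   # g/mol, mean molar mass of dry air (assumed)
print(f"{M_NH3 / M_AIR:.3f}")  # about 0.588, matching the figure of 0.589 above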
Solid
The crystal symmetry is cubic, Pearson symbol cP16, space group P2₁3 (No. 198), lattice constant 0.5125 nm.
Liquid
Liquid ammonia possesses strong ionising powers, reflecting its high dielectric constant (ε ≈ 22). Liquid ammonia has a very high standard enthalpy change of vaporisation (23.35 kJ/mol; for comparison, water's is 40.65 kJ/mol, methane's 8.19 kJ/mol and phosphine's 14.6 kJ/mol) and can therefore be used in laboratories in uninsulated vessels without additional refrigeration. See liquid ammonia as a solvent.
Solvent properties
Ammonia readily dissolves in water. In an aqueous solution, it can be expelled by boiling. The aqueous solution of ammonia is basic. The maximum concentration of ammonia in water (a saturated solution) has a density of 0.880 g/cm3 and is often known as '.880 ammonia'.
Table of thermal and physical properties of saturated liquid ammonia
Table of thermal and physical properties of gaseous ammonia at atmospheric pressure
Decomposition
At high temperature, and in the presence of a suitable catalyst or in a pressurised vessel with constant volume and high temperature, ammonia decomposes into its constituent elements. Decomposition of ammonia is a slightly endothermic process, requiring 23 kJ/mol (5.5 kcal/mol) of ammonia, and yields hydrogen and nitrogen gas. Ammonia can also be used as a source of hydrogen for acid fuel cells if the unreacted ammonia can be removed. Ruthenium and platinum catalysts were found to be the most active, whereas supported Ni catalysts were less active.
Structure
The ammonia molecule has a trigonal pyramidal shape, as predicted by the valence shell electron pair repulsion theory (VSEPR theory), with an experimentally determined bond angle of 106.7°. The central nitrogen atom has five outer electrons, with an additional electron from each hydrogen atom. This gives a total of eight electrons, or four electron pairs that are arranged tetrahedrally. Three of these electron pairs are used as bond pairs, which leaves one lone pair of electrons. The lone pair repels more strongly than bond pairs; therefore, the bond angle is not 109.5°, as expected for a regular tetrahedral arrangement, but 106.7°. This shape gives the molecule a dipole moment and makes it polar. The molecule's polarity, and especially its ability to form hydrogen bonds, makes ammonia highly miscible with water. The lone pair makes ammonia a base, a proton acceptor. Ammonia is moderately basic; a 1.0 M aqueous solution has a pH of 11.6, and if a strong acid is added to such a solution until the solution is neutral (pH 7), 99.4% of the ammonia molecules are protonated. Temperature and salinity also affect the proportion of ammonium, NH4+. The latter has the shape of a regular tetrahedron and is isoelectronic with methane.
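Both basicity figures can be checked with a back-of-the-envelope equilibrium calculation, assuming the textbook values Kb(NH3) ≈ 1.8 × 10−5 and pKa(NH4+) ≈ 9.25 at 25 °C (not stated in the text):

import math

Kb, C = 1.8e-5, 1.0  # base constant (assumed) and concentration in mol/L

# [OH-] from the weak-base equilibrium x^2 / (C - x) = Kb,
# ignoring water autoionisation:
x = (-Kb + math.sqrt(Kb**2 + 4 * Kb * C)) / 2
print(f"pH of 1.0 M ammonia = {14 + math.log10(x):.1f}")  # 11.6

# Fraction protonated at pH 7, via Henderson-Hasselbalch:
ratio = 10 ** (9.25 - 7.0)                         # [NH4+]/[NH3]
print(f"protonated at pH 7 = {ratio / (1 + ratio):.1%}")  # 99.4%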
The ammonia molecule readily undergoes nitrogen inversion at room temperature; a useful analogy is an umbrella turning itself inside out in a strong wind. The energy barrier to this inversion is 24.7 kJ/mol, and the resonance frequency is 23.79 GHz, corresponding to microwave radiation of a wavelength of 1.260 cm. The absorption at this frequency was the first microwave spectrum to be observed and was used in the first maser.
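The quoted wavelength is simply the speed of light divided by the resonance frequency, as a one-line check shows:

c, f = 2.998e8, 23.79e9  # speed of light (m/s), inversion frequency (Hz)
print(f"{100 * c / f:.3f} cm")  # 1.260 cm, the wavelength given above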
Amphotericity
One of the most characteristic properties of ammonia is its basicity. Ammonia is considered to be a weak base. It combines with acids to form ammonium salts; thus, with hydrochloric acid it forms ammonium chloride (sal ammoniac); with nitric acid, ammonium nitrate, etc. Perfectly dry ammonia gas will not combine with perfectly dry hydrogen chloride gas; moisture is necessary to bring about the reaction.
As a demonstration experiment under air with ambient moisture, opened bottles of concentrated ammonia and hydrochloric acid solutions produce a cloud of ammonium chloride, which seems to appear 'out of nothing' as the salt aerosol forms where the two diffusing clouds of reagents meet between the two bottles.
The salts produced by the action of ammonia on acids are known as ammonium salts and all contain the ammonium ion (NH4+).
Although ammonia is well known as a weak base, it can also act as an extremely weak acid. It is a protic substance and is capable of forming amides (which contain the NH2− ion). For example, lithium dissolves in liquid ammonia to give a blue solution (solvated electron) of lithium amide:
2 Li + 2 NH3 → 2 LiNH2 + H2
Self-dissociation
Like water, liquid ammonia undergoes molecular autoionisation to form its acid and base conjugates:
2 NH3 ⇌ NH4+ + NH2−
Ammonia often functions as a weak base, so it has some buffering ability. Shifts in pH will cause more or fewer ammonium cations (NH4+) and amide anions (NH2−) to be present in solution. At standard pressure and temperature,
K = [NH4+][NH2−] = 10−30.
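That ion product implies vanishingly little self-ionisation: with equal ion concentrations, each is only the square root of K, as the one-liner below illustrates.

import math
print(f"{math.sqrt(1e-30):.0e} mol/L")  # 1e-15, versus about 1e-07 for water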
Combustion
Ammonia does not burn readily or sustain combustion, except in narrow fuel–air mixtures of 15–25% ammonia in air by volume. When mixed with oxygen, it burns with a pale yellowish-green flame. Ignition occurs when chlorine is passed into ammonia, forming nitrogen and hydrogen chloride; if chlorine is present in excess, then the highly explosive nitrogen trichloride (NCl3) is also formed.
The combustion of ammonia to form nitrogen and water is exothermic:
4 NH3 + 3 O2 → 2 N2 + 6 H2O(g), ΔH°r = −1267.20 kJ (or −316.8 kJ/mol if expressed per mole of NH3)
The standard enthalpy change of combustion, ΔH°c, expressed per mole of ammonia and with condensation of the water formed, is −382.81 kJ/mol. Dinitrogen is the thermodynamic product of combustion: all nitrogen oxides are unstable with respect to N2 and O2, which is the principle behind the catalytic converter. Nitrogen oxides can be formed as kinetic products in the presence of appropriate catalysts, a reaction of great industrial importance in the production of nitric acid:
4 NH3 + 5 O2 → 4 NO + 6 H2O
A subsequent reaction leads to NO2:
2 NO + O2 → 2 NO2
The combustion of ammonia in air is very difficult in the absence of a catalyst (such as platinum gauze or warm chromium(III) oxide), due to the relatively low heat of combustion, a lower laminar burning velocity, high auto-ignition temperature, high heat of vaporisation, and a narrow flammability range. However, recent studies have shown that efficient and stable combustion of ammonia can be achieved using swirl combustors, thereby rekindling research interest in ammonia as a fuel for thermal power production. The flammable range of ammonia in dry air is 15.15–27.35%, and in air of 100% relative humidity it is 15.95–26.55%. For studying the kinetics of ammonia combustion, a detailed, reliable reaction mechanism is required, but this has been challenging to obtain.
Formation of other compounds
Ammonia is a direct or indirect precursor to most manufactured nitrogen-containing compounds.
In organic chemistry, ammonia can act as a nucleophile in substitution reactions. Amines can be formed by the reaction of ammonia with alkyl halides or with alcohols. The resulting −NH2 group is also nucleophilic, so secondary and tertiary amines are often formed. When such multiple substitution is not desired, an excess of ammonia helps minimise it. For example, methylamine is prepared by the reaction of ammonia with chloromethane or with methanol. In both cases, dimethylamine and trimethylamine are co-produced. Ethanolamine is prepared by a ring-opening reaction with ethylene oxide, and when the reaction is allowed to go further it produces diethanolamine and triethanolamine. The reaction of ammonia with 2-bromopropanoic acid has been used to prepare racemic alanine in 70% yield.
Amides can be prepared by the reaction of ammonia with carboxylic acid derivatives. For example, ammonia reacts with formic acid (HCOOH) to yield formamide (HCONH2) when heated. Acyl chlorides are the most reactive, but the ammonia must be present in at least a twofold excess to neutralise the hydrogen chloride formed. Esters and anhydrides also react with ammonia to form amides. Ammonium salts of carboxylic acids can be dehydrated to amides by heating to 150–200 °C, as long as no thermally sensitive groups are present.
The hydrogen in ammonia is susceptible to replacement by a myriad of substituents. When dry ammonia gas is heated with metallic sodium, it converts to sodamide, NaNH2. With chlorine, monochloramine is formed.
Pentavalent ammonia is known as λ5-amine, nitrogen pentahydride or, more commonly, ammonium hydride (NH5). This crystalline solid is only stable under high pressure and decomposes back into trivalent ammonia (λ3-amine) and hydrogen gas at normal conditions. This substance was investigated as a possible solid rocket fuel in 1966.
Ammonia as a ligand
Ammonia can act as a ligand in transition metal complexes. It is a pure σ-donor, in the middle of the spectrochemical series, and shows intermediate hard–soft behaviour (see also ECW model). For historical reasons, ammonia is named ammine in the nomenclature of coordination compounds. Some notable ammine complexes include tetraamminediaquacopper(II) ([Cu(NH3)4(H2O)2]2+), a dark blue complex formed by adding ammonia to a solution of copper(II) salts. Tetraamminediaquacopper(II) hydroxide is known as Schweizer's reagent, and has the remarkable ability to dissolve cellulose. Diamminesilver(I) ([Ag(NH3)2]+) is the active species in Tollens' reagent. Formation of this complex can also help to distinguish between precipitates of the different silver halides: silver chloride (AgCl) is soluble in dilute (2 M) ammonia solution, silver bromide (AgBr) is only soluble in concentrated ammonia solution, whereas silver iodide (AgI) is insoluble in aqueous ammonia.
Ammine complexes of chromium(III) were known in the late 19th century, and formed the basis of Alfred Werner's revolutionary theory on the structure of coordination compounds. Werner noted that only two isomers (fac- and mer-) of the complex [CrCl3(NH3)3] could be formed, and concluded that the ligands must be arranged around the metal ion at the vertices of an octahedron. This proposal has since been confirmed by X-ray crystallography.
An ammine ligand bound to a metal ion is markedly more acidic than a free ammonia molecule, although deprotonation in aqueous solution is still rare. One example is the reaction of mercury(II) chloride with ammonia (Calomel reaction) where the resulting mercuric amidochloride is highly insoluble.
Ammonia forms 1:1 adducts with a variety of Lewis acids such as I2, phenol, and Al(CH3)3. Ammonia is a hard base (HSAB theory) and its E & C parameters are EB = 2.31 and CB = 2.04. Its relative donor strength toward a series of acids, versus other Lewis bases, can be illustrated by C-B plots.
Detection and determination
Ammonia in solution
Ammonia and ammonium salts can be readily detected, in very minute traces, by the addition of Nessler's solution, which gives a distinct yellow colouration in the presence of the slightest trace of ammonia or ammonium salts. The amount of ammonia in ammonium salts can be estimated quantitatively by distillation of the salts with sodium hydroxide (NaOH) or potassium hydroxide (KOH), the ammonia evolved being absorbed in a known volume of standard sulfuric acid and the excess of acid then determined volumetrically; or the ammonia may be absorbed in hydrochloric acid and the ammonium chloride so formed precipitated as ammonium hexachloroplatinate, (NH4)2PtCl6.
Gaseous ammonia
Sulfur sticks are burnt to detect small leaks in industrial ammonia refrigeration systems. Larger quantities can be detected by warming the salts with a caustic alkali or with quicklime, when the characteristic smell of ammonia will be at once apparent. Ammonia is an irritant, and irritation increases with concentration; the permissible exposure limit is 25 ppm, and exposure is potentially lethal above 500 ppm. Higher concentrations are hardly detected by conventional detectors; the type of detector is chosen according to the sensitivity required (e.g. semiconductor, catalytic, electrochemical). Holographic sensors have been proposed for detecting concentrations up to 12.5% in volume.
Ammoniacal nitrogen (NH3–N)
Ammoniacal nitrogen (NH3–N) is a measure commonly used for testing the quantity of ammonium ions, derived naturally from ammonia, and returned to ammonia via organic processes, in water or waste liquids. It is a measure used mainly for quantifying values in waste treatment and water purification systems, as well as a measure of the health of natural and man-made water reserves. It is measured in units of mg/L (milligram per litre).
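Converting between an ammonia concentration and NH3–N is a matter of molar-mass ratios, since NH3–N counts only the nitrogen; a minimal sketch (the function name is illustrative):

M_N, M_NH3 = 14.007, 17.031  # g/mol

def nh3_to_nh3n(mg_per_l_nh3):
    # Nitrogen fraction of the ammonia mass.
    return mg_per_l_nh3 * M_N / M_NH3

print(f"{nh3_to_nh3n(1.0):.3f}")  # about 0.822 mg/L NH3-N per mg/L NH3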
History
The ancient Greek historian Herodotus mentioned that there were outcrops of salt in an area of Libya that was inhabited by a people called the 'Ammonians' (now the Siwa oasis in northwestern Egypt, where salt lakes still exist). The Greek geographer Strabo also mentioned the salt from this region. However, the ancient authors Dioscorides, Apicius, Arrian, Synesius, and Aëtius of Amida described this salt as forming clear crystals that could be used for cooking and that were essentially rock salt. Hammoniacus sal appears in the writings of Pliny, although it is not known whether the term is identical with the more modern sal ammoniac (ammonium chloride).
The fermentation of urine by bacteria produces a solution of ammonia; hence fermented urine was used in Classical Antiquity to wash cloth and clothing, to remove hair from hides in preparation for tanning, to serve as a mordant in dyeing cloth, and to remove rust from iron. It was also used by ancient dentists to wash teeth.
In the form of sal ammoniac (نشادر, nushadir), ammonia was important to the Muslim alchemists. It was mentioned in the Book of Stones, likely written in the 9th century and attributed to Jābir ibn Hayyān. It was also important to the European alchemists of the 13th century, being mentioned by Albertus Magnus. It was also used by dyers in the Middle Ages in the form of fermented urine to alter the colour of vegetable dyes. In the 15th century, Basilius Valentinus showed that ammonia could be obtained by the action of alkalis on sal ammoniac. At a later period, when sal ammoniac was obtained by distilling the hooves and horns of oxen and neutralizing the resulting carbonate with hydrochloric acid, the name 'spirit of hartshorn' was applied to ammonia.
Gaseous ammonia was first isolated by Joseph Black in 1756 by reacting sal ammoniac (ammonium chloride) with calcined magnesia (magnesium oxide). It was isolated again by Peter Woulfe in 1767, by Carl Wilhelm Scheele in 1770 and by Joseph Priestley in 1773 and was termed by him 'alkaline air'. Eleven years later in 1785, Claude Louis Berthollet ascertained its composition.
The Haber–Bosch process to produce ammonia from the nitrogen in the air was developed by Fritz Haber and Carl Bosch in 1909 and patented in 1910. It was first used on an industrial scale in Germany during World War I, following the allied blockade that cut off the supply of nitrates from Chile. The ammonia was used to produce explosives to sustain war efforts.
Before the availability of natural gas, hydrogen as a precursor to ammonia production was produced via the electrolysis of water or using the chloralkali process.
With the advent of the steel industry in the 20th century, ammonia became a byproduct of the production of coking coal.
Applications
Solvent
Liquid ammonia is the best-known and most widely studied nonaqueous ionising solvent. Its most conspicuous property is its ability to dissolve alkali metals to form highly coloured, electrically conductive solutions containing solvated electrons. Apart from these remarkable solutions, much of the chemistry in liquid ammonia can be classified by analogy with related reactions in aqueous solutions. Comparison of the physical properties of NH3 with those of water shows NH3 has the lower melting point, boiling point, density, viscosity, dielectric constant and electrical conductivity; this is due at least in part to the weaker hydrogen bonding in NH3, and because such bonding cannot form cross-linked networks, since each NH3 molecule has only one lone pair of electrons, compared with two for each H2O molecule. The ionic self-dissociation constant of liquid NH3 at −50 °C is about 10−33.
Solubility of salts
Liquid ammonia is an ionising solvent, although less so than water, and dissolves a range of ionic compounds, including many nitrates, nitrites, cyanides, thiocyanates, metal cyclopentadienyl complexes and metal bis(trimethylsilyl)amides. Most ammonium salts are soluble and act as acids in liquid ammonia solutions. The solubility of halide salts increases from fluoride to iodide. A saturated solution of ammonium nitrate (Divers' solution, named after Edward Divers) contains 0.83 mol solute per mole of ammonia and has a vapour pressure of less than 1 bar even at 25 °C (77 °F).
Solutions of metals
Liquid ammonia will dissolve all of the alkali metals and other electropositive metals such as Ca, Sr, Ba, Eu and Yb (also Mg using an electrolytic process). At low concentrations (<0.06 mol/L), deep blue solutions are formed: these contain metal cations and solvated electrons, free electrons that are surrounded by a cage of ammonia molecules.
These solutions are very useful as strong reducing agents. At higher concentrations, the solutions are metallic in appearance and in electrical conductivity. At low temperatures, the two types of solution can coexist as immiscible phases.
Redox properties of liquid ammonia
The range of thermodynamic stability of liquid ammonia solutions is very narrow, as the potential for oxidation to dinitrogen, E° (N2 + 6 NH4+ + 6 e− → 8 NH3), is only +0.04 V. In practice, both oxidation to dinitrogen and reduction to dihydrogen are slow. This is particularly true of reducing solutions: the solutions of the alkali metals mentioned above are stable for several days, slowly decomposing to the metal amide and dihydrogen. Most studies involving liquid ammonia solutions are done in reducing conditions; although oxidation of liquid ammonia is usually slow, there is still a risk of explosion, particularly if transition metal ions are present as possible catalysts.
Fertiliser
In the US, approximately 88% of ammonia is used as fertiliser, either as its salts, as solutions, or anhydrously. When applied to soil, it helps provide increased yields of crops such as maize and wheat. 30% of agricultural nitrogen applied in the US is in the form of anhydrous ammonia, and worldwide, 110 million tonnes are applied each year.
Precursor to nitrogenous compounds
Ammonia is directly or indirectly the precursor to most nitrogen-containing compounds. Virtually all synthetic nitrogen compounds are derived from ammonia. An important derivative is nitric acid. This key material is generated via the Ostwald process by oxidation of ammonia with air over a platinum catalyst at about 850 °C and ≈9 atm. Nitric oxide is an intermediate in this conversion:
4 NH3 + 5 O2 → 4 NO + 6 H2O
Nitric acid is used for the production of fertilisers, explosives and many organonitrogen compounds.
Ammonia is also used to make the following compounds:
Hydrazine, in the Olin Raschig process and the peroxide process
Hydrogen cyanide, in the BMA process and the Andrussow process
Hydroxylamine and ammonium carbonate, in the Raschig process
Phenol, in the Raschig–Hooker process
Urea, in the Bosch–Meiser urea process and in Wöhler synthesis
Amino acids, using Strecker amino-acid synthesis
Acrylonitrile, in the Sohio process
Ammonia can also be used to make compounds in reactions that are not specifically named. Examples of such compounds include ammonium perchlorate, ammonium nitrate, formamide, dinitrogen tetroxide, alprazolam, ethanolamine, ethyl carbamate, hexamethylenetetramine and ammonium bicarbonate.
Cleansing agent
Household 'ammonia' is a solution of in water, and is used as a general purpose cleaner for many surfaces. Because ammonia results in a relatively streak-free shine, one of its most common uses is to clean glass, porcelain, and stainless steel. It is also frequently used for cleaning ovens and for soaking items to loosen baked-on grime. Household ammonia ranges in concentration by weight from 5% to 10% ammonia. US manufacturers of cleaning products are required to provide the product's material safety data sheet that lists the concentration used.
Solutions of ammonia (5–10% by weight) are used as household cleaners, particularly for glass. These solutions are irritating to the eyes and mucous membranes (respiratory and digestive tracts), and to a lesser extent the skin. Experts advise that the chemical never be mixed with chlorine-containing products or strong oxidants, such as household bleach, because of the danger of generating toxic chloramine fumes.
Experts also warn not to use ammonia-based cleaners (such as glass or window cleaners) on car touchscreens, due to the risk of damage to the screen's anti-glare and anti-fingerprint coatings.
Fermentation
Solutions of ammonia ranging from 16% to 25% are used in the fermentation industry as a source of nitrogen for microorganisms and to adjust pH during fermentation.
Antimicrobial agent for food products
As early as 1895, it was known that ammonia was 'strongly antiseptic ... it requires 1.4 grams per litre to preserve beef tea (broth).' In one study, anhydrous ammonia destroyed 99.999% of zoonotic bacteria in three types of animal feed, but not silage. Anhydrous ammonia is currently used commercially to reduce or eliminate microbial contamination of beef.
Lean finely textured beef (popularly known as 'pink slime') in the beef industry is made from fatty beef trimmings (c. 50–70% fat) by removing the fat using heat and centrifugation, then treating it with ammonia to kill E. coli. The process was deemed effective and safe by the US Department of Agriculture based on a study that found that the treatment reduces E. coli to undetectable levels. There have been safety concerns about the process as well as consumer complaints about the taste and smell of ammonia-treated beef.
Fuel
Ammonia has been used as a fuel and is a proposed future alternative to fossil fuels and hydrogen.
Compared to hydrogen, ammonia is easier to store: it is much more energy-efficient to handle as a fuel and can be produced, stored and delivered at much lower cost than hydrogen, which must be kept compressed or as a cryogenic liquid. The raw energy density of liquid ammonia is 11.5 MJ/L, about a third that of diesel.
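For a rough sense of scale, the volumetric comparison can be worked out directly; the diesel and liquid-hydrogen figures below are typical literature values assumed here for illustration, not taken from this article:

```python
# Approximate volumetric energy densities (lower heating value), MJ/L
energy_density = {
    "liquid ammonia": 11.5,   # figure quoted above
    "diesel": 36.0,           # typical literature value (assumed)
    "liquid hydrogen": 8.5,   # typical literature value (assumed)
}

ammonia = energy_density["liquid ammonia"]
for fuel, mj_per_litre in energy_density.items():
    print(f"{fuel}: {mj_per_litre:.1f} MJ/L, "
          f"{mj_per_litre / ammonia:.2f}x liquid ammonia")
# diesel holds ~3x the energy of liquid ammonia per litre;
# liquid ammonia holds ~1.4x that of liquid hydrogen
```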
Ammonia can be converted back to hydrogen to power hydrogen fuel cells, or it may be used directly within high-temperature solid oxide direct ammonia fuel cells to provide efficient power sources that do not emit greenhouse gases. Ammonia-to-hydrogen conversion can be achieved through the sodium amide process or the catalytic decomposition of ammonia over solid catalysts.
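A minimal sketch of the cracking step, using standard thermochemistry (the reverse of ammonia synthesis, so mildly endothermic):

$$2\,\mathrm{NH_3(g)} \longrightarrow \mathrm{N_2(g)} + 3\,\mathrm{H_2(g)},\qquad \Delta H^\circ \approx +92\ \mathrm{kJ}\ \text{per}\ 2\ \mathrm{mol\ NH_3}$$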
Ammonia engines or ammonia motors, using ammonia as a working fluid, have been proposed and occasionally used. The principle is similar to that used in a fireless locomotive, but with ammonia as the working fluid, instead of steam or compressed air. Ammonia engines were used experimentally in the 19th century by Goldsworthy Gurney in the UK and the St. Charles Avenue Streetcar line in New Orleans in the 1870s and 1880s, and during World War II ammonia was used to power buses in Belgium.
Ammonia is sometimes proposed as a practical alternative to fossil fuel for internal combustion engines. However, ammonia cannot easily be used in existing Otto-cycle engines because of its very narrow flammability range. Despite this, several tests have been run. Its high octane rating of 120 and low flame temperature allow the use of high compression ratios without a penalty of high NOx production. Since ammonia contains no carbon, its combustion cannot produce carbon dioxide, carbon monoxide, hydrocarbons, or soot.
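The carbon-free combustion claim follows from the overall (idealised) reaction; in practice some nitrogen oxides also form, as noted elsewhere in this article:

$$4\,\mathrm{NH_3} + 3\,\mathrm{O_2} \longrightarrow 2\,\mathrm{N_2} + 6\,\mathrm{H_2O}$$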
Ammonia production currently creates 1.8% of global CO2 emissions. 'Green ammonia' is ammonia produced using green hydrogen (hydrogen produced by electrolysis), whereas 'blue ammonia' is ammonia produced using blue hydrogen (hydrogen produced by steam methane reforming where the carbon dioxide has been captured and stored).
Rocket engines have also been fueled by ammonia. The Reaction Motors XLR99 rocket engine that powered the X-15 hypersonic research aircraft used liquid ammonia. Although not as powerful as other fuels, it left no soot in the reusable rocket engine, and its density approximately matches that of the oxidiser, liquid oxygen, which simplified the aircraft's design.
In early August 2018, scientists from Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) announced the success of a membrane-based process to release hydrogen from ammonia and harvest it at ultra-high purity as a fuel for cars. Two demonstration fuel-cell vehicles, a Hyundai Nexo and a Toyota Mirai, have used the technology.
In 2020, Saudi Arabia shipped 40 metric tons of liquid 'blue ammonia' to Japan for use as a fuel. It was produced as a by-product of the petrochemical industry, and can be burned without giving off greenhouse gases. Its energy density by volume is nearly double that of liquid hydrogen. If the process of creating it can be scaled up via purely renewable resources, producing green ammonia, it could make a major difference in avoiding climate change. In 2020, the company ACWA Power and the city of Neom announced the construction of a green hydrogen and ammonia plant.
Green ammonia is considered a potential fuel for future container ships. In 2020, the companies DSME and MAN Energy Solutions announced the construction of an ammonia-fuelled ship, which DSME plans to commercialise by 2025. The use of ammonia as a potential alternative fuel for aircraft jet engines is also being explored.
Japan intends to implement a plan to develop ammonia co-firing technology that can increase the use of ammonia in power generation, as part of efforts to assist domestic and other Asian utilities to accelerate their transition to carbon neutrality.
In October 2021, the first International Conference on Fuel Ammonia (ICFA2021) was held.
In June 2022, IHI Corporation succeeded in reducing greenhouse gases by over 99% during combustion of liquid ammonia in a 2,000-kilowatt-class gas turbine, achieving truly CO2-free power generation.
In July 2022, the Quad nations of Japan, the U.S., Australia and India agreed to promote technological development for clean-burning hydrogen and ammonia as fuels at the security grouping's first energy meeting. When ammonia is burned, however, significant amounts of NOx are produced, and nitrous oxide may also be a problem.
Other
Remediation of gaseous emissions
Ammonia is used to scrub SO2 from the flue gases of burning fossil fuels, and the resulting product is converted to ammonium sulfate for use as fertiliser. Ammonia also neutralises the nitrogen oxide (NOx) pollutants emitted by diesel engines. This technology, called SCR (selective catalytic reduction), relies on a vanadia-based catalyst.
Ammonia may be used to mitigate gaseous spills of phosgene.
As a hydrogen carrier
Because it is liquid at ambient temperature under its own vapour pressure and has high volumetric and gravimetric energy density, ammonia is considered a suitable carrier for hydrogen, and may be cheaper than direct transport of liquid hydrogen.
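The carrier argument is easy to quantify from molar masses alone; the liquid densities below are typical handbook values assumed for illustration:

```python
M_H, M_N = 1.008, 14.007           # molar masses, g/mol
M_NH3 = M_N + 3 * M_H              # 17.031 g/mol

h_fraction = 3 * M_H / M_NH3
print(f"hydrogen mass fraction of NH3: {h_fraction:.1%}")   # ~17.8%

# Hydrogen stored per litre of liquid (densities assumed: NH3 ~0.682 kg/L
# at its boiling point, liquid H2 ~0.071 kg/L)
print(f"kg H2 per litre of liquid NH3: {0.682 * h_fraction:.3f}")  # ~0.121
print("kg H2 per litre of liquid H2:  0.071")
# liquid ammonia carries more hydrogen per litre than liquid hydrogen itself
```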
Refrigeration – R717
Because of its favourable vapourisation properties, ammonia is a useful refrigerant. It was commonly used before the popularisation of chlorofluorocarbons (Freons). Anhydrous ammonia is widely used in industrial refrigeration applications and hockey rinks because of its high energy efficiency and low cost. It suffers from the disadvantages of toxicity and of requiring corrosion-resistant components, which restrict its domestic and small-scale use. Along with its use in modern vapour-compression refrigeration, it is used in a mixture with hydrogen and water in absorption refrigerators. The Kalina cycle, of growing importance to geothermal power plants, depends on the wide boiling range of the ammonia–water mixture. Ammonia coolant is also used in the S1 radiator aboard the International Space Station, in two loops that regulate the internal temperature and enable temperature-dependent experiments.
The potential importance of ammonia as a refrigerant has increased with the discovery that vented CFCs and HFCs are extremely potent and stable greenhouse gases.
Stimulant
Ammonia, as the vapour released by smelling salts, has found significant use as a respiratory stimulant. Ammonia is commonly used in the illegal manufacture of methamphetamine through a Birch reduction. The Birch method of making methamphetamine is dangerous because the alkali metal and liquid ammonia are both extremely reactive, and the temperature of liquid ammonia makes it susceptible to explosive boiling when reactants are added.
Textile
Liquid ammonia is used to treat cotton materials, imparting properties similar to those obtained by mercerisation with alkalis. In particular, it is used for prewashing of wool.
Lifting gas
At standard temperature and pressure, ammonia is less dense than air and has approximately 45–48% of the lifting power of hydrogen or helium. It has sometimes been used to fill balloons as a lifting gas. Because of its relatively high boiling point (compared to helium and hydrogen), ammonia could potentially be refrigerated and liquefied aboard an airship to reduce lift and add ballast (and returned to a gas to add lift and reduce ballast).
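The quoted 45–48% figure follows from molar masses: under the ideal-gas approximation, lift per unit volume is proportional to the difference between the molar mass of air and that of the lifting gas. A minimal check:

```python
# Molar masses, g/mol (ideal-gas approximation, same temperature and pressure)
M_AIR, M_NH3, M_H2, M_HE = 28.97, 17.03, 2.016, 4.003

def relative_lift(m_gas: float, m_reference: float) -> float:
    """Lift per unit volume of m_gas relative to a reference lifting gas."""
    return (M_AIR - m_gas) / (M_AIR - m_reference)

print(f"vs hydrogen: {relative_lift(M_NH3, M_H2):.0%}")  # ~44%
print(f"vs helium:   {relative_lift(M_NH3, M_HE):.0%}")  # ~48%
```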
Fuming
Ammonia has been used to darken quartersawn white oak in Arts & Crafts and Mission-style furniture. Ammonia fumes react with the natural tannins in the wood and cause it to change colour.
Safety
The US Occupational Safety and Health Administration (OSHA) has set a 15-minute exposure limit for gaseous ammonia of 35 ppm by volume in environmental air and an 8-hour exposure limit of 25 ppm by volume. The National Institute for Occupational Safety and Health (NIOSH) reduced the IDLH (Immediately Dangerous to Life and Health, the level to which a healthy worker can be exposed for 30 minutes without suffering irreversible health effects) from 500 to 300 ppm, based on more conservative interpretations of the original 1943 research. Other organisations set varying exposure levels. US Navy standards [U.S. Bureau of Ships 1962] give maximum allowable concentrations (MACs) of 25 ppm for continuous exposure (60 days) and 400 ppm for a 1-hour exposure.
Ammonia vapour has a sharp, irritating, pungent odor that acts as a warning of potentially dangerous exposure. The average odor threshold is 5 ppm, well below any danger or damage. Exposure to very high concentrations of gaseous ammonia can result in lung damage and death. Ammonia is regulated in the US as a non-flammable gas, but it meets the definition of a material that is toxic by inhalation and requires a hazardous safety permit when transported in bulk quantities.
Liquid ammonia is dangerous because it is hygroscopic and because it can cause caustic burns.
Toxicity
The toxicity of ammonia solutions does not usually cause problems for humans and other mammals, as a specific mechanism exists to prevent its build-up in the bloodstream. Ammonia is converted to carbamoyl phosphate by the enzyme carbamoyl phosphate synthetase, and then enters the urea cycle to be either incorporated into amino acids or excreted in the urine. Fish and amphibians lack this mechanism, as they can usually eliminate ammonia from their bodies by direct excretion. Ammonia even at dilute concentrations is highly toxic to aquatic animals, and for this reason it is classified as dangerous for the environment. Atmospheric ammonia plays a key role in the formation of fine particulate matter.
Ammonia is a constituent of tobacco smoke.
Coking wastewater
Ammonia is present in coking wastewater streams, as a liquid by-product of the production of coke from coal. In some cases, the ammonia is discharged to the marine environment where it acts as a pollutant. The Whyalla steelworks in South Australia is one example of a coke-producing facility that discharges ammonia into marine waters.
Aquaculture
Ammonia toxicity is believed to be a cause of otherwise unexplained losses in fish hatcheries. Excess ammonia may accumulate and cause alteration of metabolism or increases in the body pH of the exposed organism. Tolerance varies among fish species. Even at low concentrations, around 0.05 mg/L, un-ionised ammonia is harmful to fish and can result in poor growth and feed conversion rates, reduced fecundity and fertility, and increased stress and susceptibility to bacterial infections and diseases. Exposed to excess ammonia, fish may suffer loss of equilibrium, hyper-excitability, increased respiratory activity and oxygen uptake, and increased heart rate. At concentrations exceeding 2.0 mg/L, ammonia causes gill and tissue damage, extreme lethargy, convulsions, coma, and death. Experiments have shown that the lethal concentration for a variety of fish species ranges from 0.2 to 2.0 mg/L.
During winter, when reduced feeds are administered to aquaculture stock, ammonia levels can be higher. Lower ambient temperatures reduce the rate of algal photosynthesis, so less ammonia is removed by any algae present. Within an aquaculture environment, especially at large scale, there is no fast-acting remedy to elevated ammonia levels. Prevention rather than correction is recommended to reduce harm to farmed fish and, in open water systems, to the surrounding environment.
Storage information
Similar to propane, anhydrous ammonia boils below room temperature at atmospheric pressure, so a pressure-rated storage vessel is needed to contain the liquid. Ammonia is used in numerous industrial applications requiring carbon or stainless steel storage vessels. Ammonia with at least 0.2% water content by weight is not corrosive to carbon steel; carbon-steel storage tanks holding such ammonia can last more than 50 years in service. Experts warn that ammonium compounds should not be allowed to come into contact with bases (except in an intended and contained reaction), as dangerous quantities of ammonia gas could be released.
Laboratory
The hazards of ammonia solutions depend on the concentration: 'dilute' ammonia solutions are usually 5–10% by weight (< 5.62 mol/L); 'concentrated' solutions are usually prepared at >25% by weight. A 25% (by weight) solution has a density of 0.907 g/cm3, and a solution that has a lower density will be more concentrated. The European Union classification of ammonia solutions is given in the table.
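The weight percentages and the 5.62 mol/L bound above are linked through the solution density. A minimal sketch of the conversion, c = 10·w·ρ/M (the density used for the 10 wt% case is an assumed handbook value):

```python
M_NH3 = 17.031  # g/mol

def molarity(weight_percent: float, density_g_per_ml: float) -> float:
    """Molar concentration (mol/L) of an aqueous ammonia solution."""
    grams_nh3_per_litre = 10 * weight_percent * density_g_per_ml
    return grams_nh3_per_litre / M_NH3

print(f"{molarity(25, 0.907):.1f} mol/L")  # ~13.3 mol/L at 25 wt% (density given above)
print(f"{molarity(10, 0.957):.2f} mol/L")  # ~5.62 mol/L at 10 wt% (assumed density)
```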
The ammonia vapour from concentrated ammonia solutions is severely irritating to the eyes and the respiratory tract, and experts warn that these solutions be handled only in a fume hood. Saturated ('0.880') solutions can develop significant pressure inside a closed bottle in warm weather, and experts also warn that such bottles be opened with care. This is not usually a problem for 25% ('0.900') solutions.
Experts warn that ammonia solutions not be mixed with halogens, as toxic and/or explosive products are formed. Prolonged contact of ammonia solutions with silver, mercury or iodide salts can also lead to explosive products; such mixtures are often formed in qualitative inorganic analysis, and experts advise that they be lightly acidified but not concentrated (<6% w/v) before disposal once a test is completed.
Laboratory use of anhydrous ammonia (gas or liquid)
Anhydrous ammonia is classified as toxic (T) and dangerous for the environment (N). The gas is flammable (autoignition temperature: 651 °C) and can form explosive mixtures with air (16–25% by volume). The permissible exposure limit (PEL) in the United States is 50 ppm (35 mg/m3), while the IDLH concentration is estimated at 300 ppm. Repeated exposure to ammonia lowers sensitivity to the smell of the gas: normally the odour is detectable at concentrations of less than 50 ppm, but desensitised individuals may not detect it even at concentrations of 100 ppm. Anhydrous ammonia corrodes copper- and zinc-containing alloys, making brass fittings unsuitable for handling the gas. Liquid ammonia can also attack rubber and certain plastics.
Ammonia reacts violently with the halogens. Nitrogen triiodide, a primary high explosive, is formed when ammonia comes in contact with iodine. Ammonia causes the explosive polymerisation of ethylene oxide. It also forms explosive fulminating compounds with compounds of gold, silver, mercury, germanium or tellurium, and with stibine. Violent reactions have also been reported with acetaldehyde, hypochlorite solutions, potassium ferricyanide and peroxides.
Production
Ammonia has one of the highest production rates of any inorganic chemical. Production is sometimes expressed in terms of 'fixed nitrogen'. Global production was estimated at 160 million tonnes in 2020 (equivalent to roughly 132 million tonnes of fixed nitrogen). China accounted for 26.5% of that, followed by Russia at 11.0%, the United States at 9.5%, and India at 8.3%.
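Simple arithmetic on the figures above gives the absolute tonnages, and the nitrogen content follows from the mass fraction N/NH3 = 14.007/17.031:

```python
total_mt = 160.0  # global NH3 production, million tonnes (2020 estimate above)
shares = {"China": 0.265, "Russia": 0.110, "United States": 0.095, "India": 0.083}

for country, share in shares.items():
    print(f"{country}: ~{total_mt * share:.0f} Mt NH3")
# China ~42 Mt, Russia ~18 Mt, United States ~15 Mt, India ~13 Mt

fixed_nitrogen = total_mt * 14.007 / 17.031
print(f"fixed nitrogen equivalent: ~{fixed_nitrogen:.0f} Mt")  # ~132 Mt
```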
Before the start of World War I, most ammonia was obtained by the dry distillation of nitrogenous vegetable and animal waste products, including camel dung, where it was distilled by the reduction of nitrous acid and nitrites with hydrogen; in addition, it was produced by the distillation of coal, and also by the decomposition of ammonium salts by alkaline hydroxides such as quicklime:
2 NH4Cl + 2 CaO → CaCl2 + Ca(OH)2 + 2 NH3
For small-scale laboratory synthesis, one can heat urea with calcium hydroxide:
CO(NH2)2 + Ca(OH)2 → CaCO3 + 2 NH3
Haber–Bosch
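The bulk of world production comes from the Haber–Bosch process, the direct combination of the elements over an iron-based catalyst, typically at roughly 400–500 °C and 150–300 bar (representative textbook conditions):

$$\mathrm{N_2(g)} + 3\,\mathrm{H_2(g)} \rightleftharpoons 2\,\mathrm{NH_3(g)},\qquad \Delta H^\circ = -92.4\ \mathrm{kJ\,mol^{-1}}$$

Because the reaction is exothermic and reduces the number of gas molecules, equilibrium favours ammonia at low temperature and high pressure, while the catalyst and elevated temperature are needed for an acceptable rate.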
Electrochemical
Ammonia can be synthesized electrochemically. The only required inputs are sources of nitrogen (potentially atmospheric) and hydrogen (water), allowing generation at the point of use. The availability of renewable energy creates the possibility of zero emission production.
'Green Ammonia' is a name for ammonia produced from hydrogen that is in turn produced from carbon-free sources such as electrolysis of water. Ammonia from this source can be used as a liquid fuel with zero contribution to global climate change.
Another electrochemical synthesis route involves the reductive formation of lithium nitride, which can be protonated to ammonia given a proton source. Ethanol has been used as such a source, although it may degrade. The first use of this chemistry was reported in 1930, when lithium solutions in ethanol were used to produce ammonia at pressures of up to 1000 bar. In 1994, Tsuneto et al. used lithium electrodeposition in tetrahydrofuran to synthesise ammonia at more moderate pressures with reasonable Faradaic efficiency. Other studies have since used the ethanol–tetrahydrofuran system for electrochemical ammonia synthesis. In 2019, Lazouski et al. proposed a mechanism to explain the observed ammonia formation kinetics.
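A commonly drawn summary of the lithium-mediated route is a three-step cycle (a sketch; the detailed mechanism is still under study, as the Lazouski et al. work indicates): lithium is electrodeposited, reacts chemically with dinitrogen, and the resulting nitride is protonolysed by the proton source (ethanol in the early work):

$$\mathrm{Li^+} + e^- \rightarrow \mathrm{Li};\qquad 6\,\mathrm{Li} + \mathrm{N_2} \rightarrow 2\,\mathrm{Li_3N};\qquad \mathrm{Li_3N} + 3\,\mathrm{EtOH} \rightarrow \mathrm{NH_3} + 3\,\mathrm{LiOEt}$$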
In 2020, Lazouski et al. developed a solvent-agnostic gas diffusion electrode to improve nitrogen transport to the reactive lithium. The study observed production rates of up to 30 ± 5 nmol/s/cm2 and Faradaic efficiencies of up to 47.5 ± 4% at ambient temperature and 1 bar pressure.
In 2021, Suryanto et al. replaced ethanol with a tetraalkyl phosphonium salt. This cation can stably undergo deprotonation–reprotonation cycles while enhancing the medium's ionic conductivity. The study observed production rates of 53 ± 1 nmol/s/cm2 at 69 ± 1% Faradaic efficiency in experiments under 0.5 bar hydrogen and 19.5 bar nitrogen partial pressure at ambient temperature.
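As a rough consistency check (a sketch, not a figure from the paper): three electrons are consumed per NH3 molecule, so a measured production rate r and Faradaic efficiency FE imply a total current density j = 3·F·r/FE:

```python
F = 96485.0  # Faraday constant, C/mol

def implied_current_density(rate_nmol_s_cm2: float, faradaic_eff: float) -> float:
    """Total current density (A/cm^2) implied by an NH3 production rate,
    assuming 3 electrons transferred per NH3 molecule."""
    rate_mol = rate_nmol_s_cm2 * 1e-9      # mol NH3 / (s cm^2)
    partial_j = 3 * F * rate_mol           # A/cm^2 carried by NH3 formation
    return partial_j / faradaic_eff

# Suryanto et al. (2021): 53 nmol/s/cm^2 at 69% Faradaic efficiency
print(f"{implied_current_density(53, 0.69) * 1000:.0f} mA/cm^2")  # ~22 mA/cm^2
```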
Role in biological systems and human disease
Ammonia is both a metabolic waste and a metabolic input throughout the biosphere. It is an important source of nitrogen for living systems. Although atmospheric nitrogen abounds (more than 75%), few living creatures are capable of using atmospheric nitrogen in its diatomic form, gas. Therefore, nitrogen fixation is required for the synthesis of amino acids, which are the building blocks of protein. Some plants rely on ammonia and other nitrogenous wastes incorporated into the soil by decaying matter. Others, such as nitrogen-fixing legumes, benefit from symbiotic relationships with rhizobia bacteria that create ammonia from atmospheric nitrogen.
In humans, inhaling ammonia in high concentrations can be fatal. Exposure to ammonia can cause headaches, edema, impaired memory, seizures and coma as it is neurotoxic in nature.
Biosynthesis
In certain organisms, ammonia is produced from atmospheric nitrogen by enzymes called nitrogenases. The overall process is called nitrogen fixation. Intense effort has been directed toward understanding the mechanism of biological nitrogen fixation. The scientific interest in this problem is motivated by the unusual structure of the enzyme's active site, which consists of an Fe7MoS9C ensemble (the iron–molybdenum cofactor).
Ammonia is also a metabolic product of amino acid deamination catalyzed by enzymes such as glutamate dehydrogenase 1. Ammonia excretion is common in aquatic animals. In humans, it is quickly converted to urea by the liver; urea is much less toxic, particularly because it is less basic. This urea is a major component of the dry weight of urine. Most reptiles, birds, insects, and snails excrete uric acid as their sole nitrogenous waste.
Physiology
Ammonia plays a role in both normal and abnormal animal physiology. It is biosynthesised through normal amino acid metabolism and is toxic in high concentrations. The liver converts ammonia to urea through a series of reactions known as the urea cycle. Liver dysfunction, such as that seen in cirrhosis, may lead to elevated amounts of ammonia in the blood (hyperammonemia). Likewise, defects in the enzymes responsible for the urea cycle, such as ornithine transcarbamylase, lead to hyperammonemia. Hyperammonemia contributes to the confusion and coma of hepatic encephalopathy, as well as the neurologic disease common in people with urea cycle defects and organic acidurias.
Ammonia is important for normal animal acid/base balance. After formation of ammonium from glutamine, α-ketoglutarate may be degraded to produce two bicarbonate ions, which are then available as buffers for dietary acids. Ammonium is excreted in the urine, resulting in net acid loss. Ammonia may itself diffuse across the renal tubules, combine with a hydrogen ion, and thus allow for further acid excretion.
Excretion
Ammonium ions are a toxic waste product of metabolism in animals. In fish and aquatic invertebrates, it is excreted directly into the water. In mammals, sharks, and amphibians, it is converted in the urea cycle to urea, which is less toxic and can be stored more efficiently. In birds, reptiles, and terrestrial snails, metabolic ammonium is converted into uric acid, which is solid and can therefore be excreted with minimal water loss.
Beyond Earth
Ammonia has been detected in the atmospheres of the giant planets Jupiter, Saturn, Uranus and Neptune, along with other gases such as methane, hydrogen, and helium. The interior of Saturn may include frozen ammonia crystals. Ammonia is also found on Deimos and Phobos, the two moons of Mars.
Interstellar space
Ammonia was first detected in interstellar space in 1968, based on microwave emissions from the direction of the galactic core. This was the first polyatomic molecule to be so detected.
The sensitivity of the molecule to a broad range of excitations and the ease with which it can be observed in a number of regions has made ammonia one of the most important molecules for studies of molecular clouds. The relative intensity of the ammonia lines can be used to measure the temperature of the emitting medium.
The following isotopic species of ammonia have been detected: NH3, 15NH3, NH2D, NHD2, and ND3. The detection of triply deuterated ammonia was considered a surprise, as deuterium is relatively scarce. It is thought that low-temperature conditions allow this molecule to survive and accumulate.
Since its interstellar discovery, NH3 has proved to be an invaluable spectroscopic tool in the study of the interstellar medium. With a large number of transitions sensitive to a wide range of excitation conditions, NH3 has been widely detected astronomically – its detection has been reported in hundreds of journal articles. Listed below is a sample of journal articles that highlights the range of detectors that have been used to identify ammonia.
The study of interstellar ammonia has been important to a number of areas of research in the last few decades. Some of these are delineated below and primarily involve using ammonia as an interstellar thermometer.
Interstellar formation mechanisms
The interstellar abundance of ammonia has been measured for a variety of environments. The [NH3]/[H2] ratio has been estimated to range from 10−7 in small dark clouds up to 10−5 in the dense core of the Orion molecular cloud complex. Although a total of 18 production routes have been proposed, the principal formation mechanism for interstellar NH3 is the reaction:
NH4+ + e− → NH3 + H
The rate constant, k, of this reaction depends on the temperature of the environment and was calculated from the formula k = a(T/300)^B, where a and B are constants specific to the reaction. Assuming an NH4+ abundance and an electron abundance of 10−7 typical of molecular clouds, the formation proceeds at a rate set by these densities in a molecular cloud of total density 105 cm−3.
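A sketch of how such a rate is evaluated; the constants a and B below are placeholders for illustration (the tabulated reaction-specific values are not reproduced here), while the density and fractional abundances are the representative molecular-cloud figures used in the text:

```python
def rate_coefficient(T: float, a: float, B: float) -> float:
    """Ion-molecule rate coefficient of the form k = a * (T/300)**B, cm^3 s^-1."""
    return a * (T / 300.0) ** B

a, B = 1.0e-6, -0.5           # placeholder constants, for illustration only
n_total = 1.0e5               # total gas density, cm^-3
n_electron = 1e-7 * n_total   # electron fractional abundance 1e-7 (given above)
n_NH4plus = 1e-7 * n_total    # assumed NH4+ fractional abundance (illustrative)

k = rate_coefficient(10.0, a, B)             # evaluated at 10 K
formation_rate = k * n_NH4plus * n_electron  # cm^-3 s^-1
print(f"k(10 K) = {k:.2e} cm^3/s, rate = {formation_rate:.2e} cm^-3 s^-1")
```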
All other proposed formation reactions have rate constants between two and 13 orders of magnitude smaller, making their contribution to the abundance of ammonia relatively insignificant. As an example of the minor role other formation routes play, one alternative reaction, assuming a total density of 105 cm−3 and a fractional abundance ratio of 10−7, proceeds more than three orders of magnitude more slowly than the primary reaction above.
Some of the other possible formation reactions are:
Interstellar destruction mechanisms
There are 113 total proposed reactions leading to the destruction of NH3. Of these, 39 were tabulated in extensive tables of the chemistry among C, N and O compounds. A review of interstellar ammonia cites proton transfer to H3+ and HCO+ as the principal dissociation mechanisms:
NH3 + H3+ → NH4+ + H2 (1)
NH3 + HCO+ → NH4+ + CO (2)
with rate constants of 4.39×10−9 and 2.2×10−9 cm3 s−1, respectively. Equations (1) and (2) then run at per-molecule rates of 8.8×10−9 and 4.4×10−13 s−1, respectively. These calculations assumed the given rate constants, abundances of [NH3]/[H2] = 10−5, [H3+]/[H2] = 2×10−5 and [HCO+]/[H2] = 2×10−9, and a total density of n = 105 cm−3, typical of cold, dense molecular clouds. Between these two primary reactions, equation (1) is clearly the dominant destruction route, with a rate ≈10,000 times faster than equation (2). This is due to the relatively high abundance of H3+.
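The comparison can be reproduced from the numbers given, since the per-molecule destruction rate is just the rate constant times the density of the reaction partner:

```python
n_H2 = 1e5  # total density, cm^-3 (typical cold dense cloud, as above)

# rate constant (cm^3 s^-1) and fractional abundance of the ion partner
reactions = {
    "NH3 + H3+ ": (4.39e-9, 2e-5),
    "NH3 + HCO+": (2.2e-9, 2e-9),
}

for name, (k, abundance) in reactions.items():
    partner_density = abundance * n_H2       # cm^-3
    per_molecule_rate = k * partner_density  # s^-1 per NH3 molecule
    print(f"{name}: {per_molecule_rate:.1e} s^-1")
# 8.8e-9 vs 4.4e-13 s^-1: the H3+ route dominates by roughly four orders of magnitude
```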
Single antenna detections
Radio observations of NH3 with the Effelsberg 100-m Radio Telescope reveal that the ammonia line is separated into two components – a background ridge and an unresolved core. The background corresponds well with the locations of previously detected CO. The 25-m Chilbolton telescope in England detected radio signatures of ammonia in H II regions, H2O masers, Herbig–Haro objects, and other objects associated with star formation. A comparison of emission line widths indicates that turbulent or systematic velocities do not increase in the central cores of molecular clouds.
Microwave radiation from ammonia was observed in several galactic objects including W3(OH), Orion A, W43, W51, and five sources in the galactic centre. The high detection rate indicates that this is a common molecule in the interstellar medium and that high-density regions are common in the galaxy.
Interferometric studies
VLA observations of NH3 in seven regions with high-velocity gaseous outflows revealed condensations of less than 0.1 pc in L1551, S140, and Cepheus A. Three individual condensations were detected in Cepheus A, one of them with a highly elongated shape. They may play an important role in creating the bipolar outflow in the region.
Extragalactic ammonia was imaged using the VLA in IC 342. The hot gas has temperatures above 70 K, inferred from ammonia line ratios, and appears to be closely associated with the innermost portions of the nuclear bar seen in CO. NH3 was also monitored by the VLA toward a sample of four galactic ultracompact H II regions: G9.62+0.19, G10.47+0.03, G29.96−0.02, and G31.41+0.31. Based on temperature and density diagnostics, it is concluded that such clumps are in general probably the sites of massive star formation at an early evolutionary phase, prior to the development of an ultracompact H II region.
Infrared detections
Absorption at 2.97 micrometres due to solid ammonia was recorded from interstellar grains in the Becklin–Neugebauer Object and probably in NGC 2264-IR as well. This detection helped explain the physical shape of previously poorly understood and related ice absorption lines.
A spectrum of the disk of Jupiter was obtained from the Kuiper Airborne Observatory, covering the 100 to 300 cm−1 spectral range. Analysis of the spectrum provides information on global mean properties of ammonia gas and an ammonia ice haze.
A total of 149 dark cloud positions were surveyed for evidence of 'dense cores' using the (J,K) = (1,1) rotation–inversion line of NH3. In general, the cores are not spherical, with aspect ratios ranging from 1.1 to 4.4. It is also found that cores with stars have broader lines than cores without stars.
Ammonia has been detected in the Draco Nebula and in one or possibly two molecular clouds associated with the high-latitude galactic infrared cirrus. The finding is significant because these clouds may represent the birthplaces of the Population I metallicity B-type stars in the galactic halo that could have been born in the galactic disk.
Observations of nearby dark clouds
By balancing collisional excitation and stimulated emission against spontaneous emission, it is possible to construct a relation between excitation temperature and density. Moreover, since the transitional levels of ammonia can be approximated by a two-level system at low temperatures, this calculation is fairly simple. This premise can be applied to dark clouds, regions suspected of having extremely low temperatures and possible sites for future star formation. Detections of ammonia in dark clouds show very narrow lines, indicative not only of low temperatures but also of a low level of inner-cloud turbulence. Line ratio calculations provide a measurement of cloud temperature that is independent of previous CO observations. The ammonia observations were consistent with CO measurements of rotation temperatures of ≈10 K. With this, densities can be determined, and have been calculated to range between 104 and 105 cm−3 in dark clouds. Mapping of NH3 gives typical cloud sizes of 0.1 pc and masses near 1 solar mass. These cold, dense cores are the sites of future star formation.
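In the two-level approximation mentioned above, the ratio of upper- to lower-level populations inferred from the line strengths defines an excitation temperature $T_\mathrm{ex}$ through the standard Boltzmann relation:

$$\frac{n_u}{n_l} = \frac{g_u}{g_l}\,\exp\!\left(-\frac{\Delta E}{k_B T_\mathrm{ex}}\right)$$

Because collisions drive $T_\mathrm{ex}$ toward the gas kinetic temperature only at sufficiently high density, a measured $T_\mathrm{ex}$ constrains the density once the kinetic temperature is known, which is the relation exploited here.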
UC HII regions
Ultra-compact HII regions are among the best tracers of high-mass star formation. The dense material surrounding UCHII regions is likely primarily molecular. Since a complete study of massive star formation necessarily involves the cloud from which the star formed, ammonia is an invaluable tool in understanding this surrounding molecular material. Since this molecular material can be spatially resolved, it is possible to constrain the heating/ionising sources, temperatures, masses, and sizes of the regions. Doppler-shifted velocity components allow for the separation of distinct regions of molecular gas that can trace outflows and hot cores originating from forming stars.
Extragalactic detection
Ammonia has been detected in external galaxies, and by simultaneously measuring several lines it is possible to directly measure the gas temperature in these galaxies. Line ratios imply that gas temperatures are warm (≈50 K), originating from dense clouds with sizes of tens of parsecs. This picture is consistent with the picture within our Milky Way galaxy: hot, dense molecular cores form around newly forming stars embedded in larger clouds of molecular material on a scale of several hundred parsecs (giant molecular clouds; GMCs).
See also
Notes
References
Works cited
Further reading
External links
International Chemical Safety Card 0414 (anhydrous ammonia), ilo.org.
International Chemical Safety Card 0215 (aqueous solutions), ilo.org.
Emergency Response to Ammonia Fertiliser Releases (Spills) for the Minnesota Department of Agriculture, ammoniaspills.org
National Institute for Occupational Safety and Health – Ammonia Page, cdc.gov
NIOSH Pocket Guide to Chemical Hazards – Ammonia, cdc.gov
Ammonia, video
Bases (chemistry)
Foul-smelling chemicals
Gaseous signaling molecules
Household chemicals
Industrial gases
Inorganic solvents
Nitrogen cycle
Nitrogen hydrides
Nitrogen(−III) compounds
Refrigerants
Toxicology
|
https://en.wikipedia.org/wiki/Alabaster
|
Alabaster is a mineral and a soft rock used for carvings and as a source of plaster powder. Archaeologists, geologists, and the stone industry have different definitions and usages for the word alabaster. In archaeology, the term alabaster is a category of objects and artefacts made from the varieties of two different minerals: (i) the fine-grained, massive type of gypsum, and (ii) the fine-grained, banded type of calcite.
In geology, the gypsum type of alabaster is chemically a hydrous sulfate of calcium, whereas the calcite type is a carbonate of calcium. As types of alabaster, gypsum and calcite have similar properties, such as light color, translucence, and softness, which allow them to be carved and sculpted; hence the historical use of alabaster for the production of carved, decorative artefacts and objets d’art. Calcite alabaster is also known as onyx-marble, Egyptian alabaster, and Oriental alabaster, terms which usually describe either a compact, banded travertine stone or a stalagmitic limestone colored with swirling bands of cream and brown.
In general, ancient alabaster is calcite in the wider Middle East, including Egypt and Mesopotamia, while it is gypsum in medieval Europe. Modern alabaster is most likely calcite but may be either. Both are easy to work and slightly soluble in water. They have been used for making a variety of indoor artwork and carving, as they will not survive long outdoors.
The two kinds are readily distinguished by their different hardnesses: gypsum alabaster (Mohs hardness 1.5 to 2) is so soft that a fingernail scratches it, while calcite (Mohs hardness 3) cannot be scratched in this way but yields to a knife. Moreover, calcite alabaster, being a carbonate, effervesces when treated with hydrochloric acid, while gypsum alabaster remains almost unaffected.
Etymology
The English word "alabaster" was borrowed from Old French alabastre, in turn derived from Latin alabaster, and that from Greek ἀλάβαστρος (alabastros) or ἀλάβαστος (alabastos). The Greek words denoted a vase of alabaster.
The name may derive further from an ancient Egyptian word for vessels of the Egyptian goddess Bast. She was represented as a lioness and frequently depicted as such in figures placed atop these alabaster vessels. The ancient Roman authors Pliny the Elder and Ptolemy wrote that the stone used for ointment jars called alabastra came from a region of Egypt known as Alabastron or Alabastrites.
Properties and usability
The purest alabaster is a snow-white material of fine uniform grain, but it often is associated with an oxide of iron, which produces brown clouding and veining in the stone. The coarser varieties of gypsum alabaster are converted by calcination into plaster of Paris, and are sometimes known as "plaster stone".
The softness of alabaster enables it to be carved readily into elaborate forms, but its solubility in water renders it unsuitable for outdoor work. If alabaster with a smooth, polished surface is washed with dishwashing liquid, it will become rough, dull and whiter, losing most of its translucency and lustre. The finer kinds of alabaster are employed largely as an ornamental stone, especially for ecclesiastical decoration and for the rails of staircases and halls.
Modern processing
Working techniques
Alabaster is mined and then sold in blocks to alabaster workshops. There they are cut to the needed size ("squaring"), and then are processed in different techniques: turned on a lathe for round shapes, carved into three-dimensional sculptures, chiselled to produce low relief figures or decoration; and then given an elaborate finish that reveals its transparency, colour, and texture.
Marble imitation
In order to diminish the translucency of the alabaster and to produce an opacity suggestive of true marble, the statues are immersed in a bath of water and heated gradually—nearly to the boiling point—an operation requiring great care, because if the temperature is not regulated carefully, the stone acquires a dead-white, chalky appearance. The effect of heating appears to be a partial dehydration of the gypsum. If properly treated, it very closely resembles true marble and is known as "marmo di Castellina".
Dyeing
Alabaster is a porous stone and can be "dyed" into any colour or shade, a technique used for centuries. For this the stone needs to be fully immersed in various pigmentary solutions and heated to a specific temperature. The technique can be used to disguise alabaster. In this way a very misleading imitation of coral that is called "alabaster coral" is produced.
Types, occurrence, history
Typically only one type is sculpted in any particular cultural environment, but sometimes both have been worked to make similar pieces in the same place and time. This was the case with small flasks of the alabastron type made in Cyprus from the Bronze Age into the Classical period.
Window panels
When cut in thin sheets, alabaster is translucent enough to be used for small windows. It was used for this purpose in Byzantine churches and later in medieval ones, especially in Italy. Large sheets of Aragonese gypsum alabaster are used extensively in the contemporary Cathedral of Our Lady of the Angels, which was dedicated in 2002 by the Los Angeles, California, Archdiocese. The cathedral incorporates special cooling to prevent the panes from overheating and turning opaque. The ancients used the calcite type, while the modern Los Angeles cathedral is using gypsum alabaster. There are also multiple examples of alabaster windows in ordinary village churches and monasteries in northern Spain.
Calcite alabaster
Calcite alabaster, harder than the gypsum variety, was the kind primarily used in ancient Egypt and the wider Middle East (but not Assyrian palace reliefs), and is also used in modern times. It is found as either a stalagmitic deposit from the floor and walls of limestone caverns, or as a kind of travertine, similarly deposited in springs of calcareous water. Its deposition in successive layers gives rise to the banded appearance that the marble often shows on cross-section, from which its name is derived: onyx-marble or alabaster-onyx, or sometimes simply (and wrongly) as onyx.
Egypt and the Middle East
Egyptian alabaster has been worked extensively near Suez and Assiut.
This stone variety is the "alabaster" of the ancient Egyptians and the Bible, and is often termed Oriental alabaster, since the early examples came from the Far East. The Greek name alabastrites is said to derive from the town of Alabastron in Egypt, where the stone was quarried. The locality probably owed its name to the mineral; the origin of the mineral name itself is obscure (though see above).
The "Oriental" alabaster was highly esteemed for making small perfume bottles or ointment vases called alabastra; the vessel name has been suggested as a possible source of the mineral name. In Egypt, craftsmen used alabaster for canopic jars and various other sacred and sepulchral objects. The sarcophagus of Seti I, found in his tomb near Thebes, is on display in Sir John Soane's Museum, London; it is carved in a single block of translucent calcite alabaster from Alabastron.
Algerian onyx-marble has been quarried largely in the province of Oran.
Calcite alabaster was quarried in ancient Israel in the cave known today as the Twins Cave near Beit Shemesh. Herod used this alabaster for baths in his palaces.
North America
In Mexico, there are famous deposits of a delicate green variety at La Pedrara, in the district of Tecali, near Puebla. Onyx-marble occurs also in the district of Tehuacán and at several localities in the US including California, Arizona, Utah, Colorado and Virginia.
Gypsum alabaster
Gypsum alabaster is the softer of the two varieties, the other being calcite alabaster. It was used primarily in medieval Europe, and is also used in modern times.
Ancient and Classical Near East
"Mosul marble" is a kind of gypsum alabaster found in the north of modern Iraq, which was used for the Assyrian palace reliefs of the 9th to 7th centuries BC; these are the largest type of alabaster sculptures to have been regularly made. The relief is very low and the carving detailed, but large rooms were lined with continuous compositions on slabs around 2 metres high. The Lion Hunt of Ashurbanipal and the military Lachish reliefs, both 7th century BC and in the British Museum, are among the best known.
Gypsum alabaster was widely used for small sculpture for indoor use in the ancient world, especially in ancient Egypt and Mesopotamia. Fine detail could be obtained in a material with an attractive finish without iron or steel tools. Alabaster was used for vessels dedicated for use in the cult of the deity Bast in the culture of the ancient Egyptians, and thousands of gypsum alabaster artifacts dating to the late 4th millennium BC also have been found in Tell Brak (present day Nagar), in Syria.
In Mesopotamia, gypsum alabaster was the material of choice for figures of deities and devotees in temples, as in a figure believed to represent the deity Abu dating to the first half of the 3rd millennium BC and currently kept in New York.
Aragon, Spain
Much of the world's alabaster extraction is performed in the centre of the Ebro Valley in Aragon, Spain, which has the world's largest known exploitable deposits. According to a brochure published by the Aragon government, alabaster elsewhere has either been depleted or its extraction is so difficult that it has almost been abandoned or is carried out at very high cost. There are two separate sites in Aragon, both located in Tertiary basins. The most important site is the Fuentes–Azaila area, in the Tertiary Ebro Basin. The other is the Calatayud–Teruel Basin, which divides the Iberian Range into two main sectors (NW and SE).
The abundance of Aragonese alabaster was crucial for its use in architecture, sculpture and decoration. There is no record of likely use by pre-Roman cultures, so perhaps the first ones to use alabaster in Aragon were the Romans, who produced vessels from alabaster following the Greek and Egyptian models. It seems that since the reconstruction of the Roman Wall in Zaragoza in the 3rd century AD with alabaster, the use of this material became common in building for centuries. Muslim Saraqusta (today, Zaragoza) was also called "Medina Albaida", the White City, due to the appearance of its alabaster walls and palaces, which stood out among gardens, groves and orchards by the Ebro and Huerva Rivers.
The oldest remains in the Aljafería Palace, together with other interesting elements like capitals, reliefs and inscriptions, were made using alabaster, but it was during the artistic and economic blossoming of the Renaissance that Aragonese alabaster reached its golden age. In the 16th century sculptors in Aragon chose alabaster for their best works. They were adept at exploiting its lighting qualities and generally speaking the finished art pieces retained their natural color.
Volterra (Tuscany)
In Europe, the centre of the alabaster trade today is Florence, Italy. Tuscan alabaster occurs in nodular masses embedded in limestone, interstratified with marls of Miocene and Pliocene age. The mineral is worked largely by means of underground galleries, in the district of Volterra. Several varieties are recognized—veined, spotted, clouded, agatiform, and others. The finest kind, obtained principally from Castellina, is sent to Florence for figure-sculpture, while the common kinds are carved locally, into vases, lights, and various ornamental objects. These items are objects of extensive trade, especially in Florence, Pisa, and Livorno.
In the 3rd century BC the Etruscans used the alabaster of Tuscany from the area of modern-day Volterra to produce funeral urns, possibly taught by Greek artists. During the Middle Ages the craft of alabaster was almost completely forgotten. A revival started in the mid-16th century, and until the beginning of the 17th century alabaster work was strictly artistic and did not expand to form a large industry.
In the 17th and 18th centuries, production of artistic, high-quality Renaissance-style artifacts stopped altogether, being replaced by less sophisticated, cheaper items better suited to large-scale production and commerce. The new industry prospered, but the reduced need for skilled craftsmen left only a few still working. The 19th century brought a boom to the industry, largely due to the "traveling artisans" who offered their wares to the palaces of Europe, as well as to America and the East.
In the 19th century new processing technology was also introduced, allowing the production of custom-made, unique pieces as well as the combination of alabaster with other materials. Apart from the newly developed craft, artistic work became possible again, chiefly through the Volterran sculptor Albino Funaioli. After a short slump, the industry was revived by the sale of mass-produced mannerist Expressionist sculptures, and was further enhanced in the 1920s by a new branch creating ceiling and wall lamps in the Art Deco style, culminating in participation at the 1925 International Exposition of Modern Industrial and Decorative Arts in Paris. Important names in the evolution of alabaster use after World War II are the Volterran Umberto Borgna, the "first alabaster designer", and later the architect and industrial designer Angelo Mangiarotti.
England and Wales
Gypsum alabaster is a common mineral, which occurs in England in the Keuper marls of the Midlands, especially at Chellaston in Derbyshire, at Fauld in Staffordshire, and near Newark in Nottinghamshire. Deposits at all of these localities have been worked extensively.
In the 14th and 15th centuries its carving into small statues and sets of relief panels for altarpieces was a valuable local industry in Nottingham, as well as a major English export. These were usually painted, or partly painted. It was also used for the effigies, often life size, on tomb monuments, as the typical recumbent position suited the material's lack of strength, and it was cheaper and easier to work than good marble. After the English Reformation the making of altarpiece sets was discontinued, but funerary monument work in reliefs and statues continued.
Besides examples of these carvings still in Britain (especially at the Nottingham Castle Museum, British Museum, and Victoria and Albert Museum), trade in mineral alabaster (rather than just the antiques trade) has scattered examples in the material that may be found as far afield as the Musée de Cluny, Spain, and Scandinavia.
Alabaster also is found, although in smaller quantity, at Watchet in Somerset, near Penarth in Glamorganshire, and elsewhere. In Cumbria it occurs largely in the New Red rocks, but at a lower geological horizon. The alabaster of Nottinghamshire and Derbyshire is found in thick nodular beds or "floors" in spheroidal masses known as "balls" or "bowls" and in smaller lenticular masses termed "cakes". At Chellaston, where the local alabaster is known as "Patrick", it has been worked into ornaments under the name of "Derbyshire spar"―a term more properly applied to fluorspar.
Black alabaster
Black alabaster is a rare anhydrite form of the gypsum-based mineral. This black form is found in only three veins in the world, one each in United States, Italy, and China.
Alabaster Caverns State Park, near Freedom, Oklahoma, is home to a natural gypsum cave in which much of the gypsum is in the form of alabaster. There are several types of alabaster found at the site, including pink, white, and the rare black alabaster.
Gallery
Ancient and Classical Near East
European Middle Ages
Modern
See also
Mineralogy
Calcite – mineral consisting of calcium carbonate (CaCO3); archaeologists and stone trade professionals, unlike mineralogists, call one variety of calcite "alabaster"
Gypsum – mineral composed of calcium sulfate dihydrate (CaSO4·2H2O); alabaster is one of its varieties
Anhydrite – a mineral closely related to gypsum
Calcium sulfate – the main inorganic compound (CaSO4) of gypsum
Fengite – translucent sheets of marble or alabaster used during the Early Middle Ages for windows instead of glass
Window and roof panels
Chronological list of examples:
Mausoleum of Galla Placidia – 5th century, Ravenna
Basilica of San Vitale – 6th century, Ravenna
Valencia Cathedral – mainly 13th–14th century, Valencia, Spain; the lantern of the octagonal crossing tower
Orvieto Cathedral – 14th century, Orvieto, Umbria, central Italy
St. Peter's Basilica – 17th century, Rome; alabaster window by Bernini (1598–1680) used to create a "spotlight"
Church of All Nations – 1924, Jerusalem, architect: Antonio Barluzzi; windows fitted with dyed alabaster panels
Church of the Transfiguration – 1924, Mount Tabor, architect: Antonio Barluzzi; alabaster roofing was attempted
References
Further reading
Harrell J.A. (1990), "Misuse of the term 'alabaster' in Egyptology", Göttinger Miszellen, 119, pp. 37–42.
Mackintosh-Smith T. (1999), "Moonglow from Underground". Aramco World May–June 1999.
External links
More about alabaster and travertine, brief guide explaining the confusing, different use of the same terms by geologists, archaeologists and the stone trade. Oxford University Museum of Natural History, 2012
Alabaster Craftmanship in Volterra
Calcium minerals
Carbonate minerals
Sulfate minerals
Minerals
Stone (material)
Sculpture materials
Bastet
|
https://en.wikipedia.org/wiki/AOL
|
AOL (stylized as Aol., formerly a company known as AOL Inc. and originally known as America Online) is an American web portal and online service provider based in New York City. It is a brand marketed by the current incarnation of Yahoo! Inc.
The service traces its history to an online service known as PlayNet. PlayNet licensed its software to Quantum Link (Q-Link), which went online in November 1985. A new IBM PC client was launched in 1988 and eventually renamed America Online in 1989. AOL grew to become the largest online service, displacing established players like CompuServe and The Source. By 1995, AOL had about three million active users.
AOL was one of the early pioneers of the Internet in the early-1990s, and the most recognized brand on the web in the United States. It originally provided a dial-up service to millions of Americans, pioneered instant messaging, and in 1993 began adding internet access. In 1998, AOL purchased Netscape for US$4.2 billion. In 2001, at the height of its popularity, it purchased the media conglomerate Time Warner in the largest merger in U.S. history. AOL rapidly shrank thereafter, partly due to the decline of dial-up and rise of broadband. AOL was eventually spun off from Time Warner in 2009, with Tim Armstrong appointed the new CEO. Under his leadership, the company invested in media brands and advertising technologies.
On June 23, 2015, AOL was acquired by Verizon Communications for $4.4 billion. On May 3, 2021, Verizon announced it would sell Yahoo and AOL to private equity firm Apollo Global Management for $5 billion. On September 1, 2021, AOL became part of the new Yahoo! Inc.
History
1983–1991: early years
AOL began in 1983 as a short-lived venture called Control Video Corporation (CVC), founded by William von Meister. Its sole product was an online service called GameLine for the Atari 2600 video game console, created after von Meister's idea of buying music on demand was rejected by Warner Bros. Subscribers bought a modem from the company for $49.95 and paid a one-time $15 setup fee. GameLine permitted subscribers to temporarily download games and keep track of high scores, at a cost of $1 per game. Once the telephone connection was dropped, the downloaded game remained in GameLine's Master Module and was playable until the user turned off the console or downloaded another game.
In January 1983, Steve Case was hired as a marketing consultant for Control Video on the recommendation of his brother, investment banker Dan Case. In May 1983, Jim Kimsey became a manufacturing consultant for Control Video, which was near bankruptcy. Kimsey was brought in by his West Point friend Frank Caufield, an investor in the company. In early 1985, von Meister left the company.
On May 24, 1985, Quantum Computer Services, an online services company, was founded by Kimsey from the remnants of Control Video, with Kimsey as chief executive officer, and Marc Seriff as chief technology officer. The technical team consisted of Seriff, Tom Ralston, Ray Heinrich, Steve Trus, Ken Huntsman, Janet Hunter, Dave Brown, Craig Dykstra, Doug Coward, and Mike Ficco. In 1987, Case was promoted again to executive vice-president. Kimsey soon began to groom Case to take over the role of CEO, which he did when Kimsey retired in 1991.
Kimsey changed the company's strategy, and in 1985, launched a dedicated online service for Commodore 64 and 128 computers, originally called Quantum Link ("Q-Link" for short). The Quantum Link software was based on software licensed from PlayNet, Inc, (founded in 1983 by Howard Goldberg and Dave Panzl). The service was different from other online services as it used the computing power of the Commodore 64 and the Apple II rather than just a "dumb" terminal. It passed tokens back and forth and provided a fixed price service tailored for home users. In May 1988, Quantum and Apple launched AppleLink Personal Edition for Apple II and Macintosh computers. In August 1988, Quantum launched PC Link, a service for IBM-compatible PCs developed in a joint venture with the Tandy Corporation. After the company parted ways with Apple in October 1989, Quantum changed the service's name to America Online. Case promoted and sold AOL as the online service for people unfamiliar with computers, in contrast to CompuServe, which was well established in the technical community.
From the beginning, AOL included online games in its mix of products; many classic and casual games were included in the original PlayNet software system. Over the years, the company introduced many innovative online interactive titles and games, including:
Graphical chat environments Habitat (1986–1988) and Club Caribe (1989) from LucasArts.
The first online interactive fiction series QuantumLink Serial by Tracy Reed (1988).
Quantum Space, the first fully automated play-by-mail game (1989–1991).
1991–2006: Internet age, Time Warner merger
In February 1991, AOL for DOS was launched using a GeoWorks interface; it was followed a year later by AOL for Windows. This coincided with growth in pay-based online services, like Prodigy, CompuServe, and GEnie. 1991 also saw the introduction of an original Dungeons & Dragons title called Neverwinter Nights from Stormfront Studios; which was one of the first Multiplayer Online Role Playing Games to depict the adventure with graphics instead of text.
During the early 1990s, the average subscription lasted for about 25 months and accounted for $350 in total revenue. Advertisements invited modem owners to "Try America Online FREE", promising free software and trial membership. AOL discontinued Q-Link and PC Link in late 1994. In September 1993, AOL added Usenet access to its features. This is commonly referred to as the "Eternal September", as Usenet's cycle of new users had previously been dominated by the smaller numbers of college and university freshmen gaining access each September and taking a few weeks to acclimate. It also coincided with a new "carpet bombing" marketing campaign by CMO Jan Brandt to distribute as many free AOL trial disks as possible through nonconventional distribution partners. At one point, 50% of the CDs produced worldwide had an AOL logo. AOL quickly surpassed GEnie, and by the mid-1990s it passed Prodigy (which for several years allowed AOL advertising) and CompuServe. In November 1994, AOL purchased Booklink for its web browser, to give its users web access. In 1996, AOL replaced Booklink with a browser based on Internet Explorer, allegedly in exchange for inclusion of AOL in Windows.
AOL launched services with the National Education Association, the American Federation of Teachers, National Geographic, the Smithsonian Institution, the Library of Congress, Pearson, Scholastic, ASCD, NSBA, NCTE, Discovery Networks, Turner Education Services (CNN Newsroom), NPR, The Princeton Review, Stanley Kaplan, Barron's, Highlights for Kids, the U.S. Department of Education, and many other education providers. AOL offered the first real-time homework help service (the Teacher Pager—1990; prior to this, AOL provided homework help bulletin boards), the first service by children, for children (Kids Only Online, 1991), the first online service for parents (the Parents Information Network, 1991), the first online courses (1988), the first omnibus service for teachers (the Teachers' Information Network, 1990), the first online exhibit (Library of Congress, 1991), the first parental controls, and many other online education firsts.
AOL purchased search engine WebCrawler in 1995, but sold it to Excite the following year; the deal made Excite the sole search and directory service on AOL. After the deal closed in March 1997, AOL launched its own branded search engine, based on Excite, called NetFind. This was renamed to AOL Search in 1999.
AOL charged its users an hourly fee until December 1996, when the company changed to a flat monthly rate of $19.95. During this time, AOL connections were flooded with users trying to connect, and many canceled their accounts due to constant busy signals. A commercial was made featuring Steve Case telling people AOL was working day and night to fix the problem. Within three years, AOL's user base grew to 10 million people. In 1995, AOL was headquartered at 8619 Westwood Center Drive in the Tysons Corner CDP in unincorporated Fairfax County, Virginia, near the Town of Vienna.
By October 1996, AOL was quickly running out of room for its network at the Fairfax County campus. In mid-1996, AOL had moved to 22000 AOL Way in Dulles, unincorporated Loudoun County, Virginia, to provide room for future growth. In a landmark five-year agreement with Microsoft, AOL was bundled with Windows, the most popular operating system.
On March 31, 1996, the short-lived eWorld was purchased by AOL. In 1997, about half of all U.S. homes with Internet access had it through AOL. During this time, AOL's content channels, under Jason Seiken, including News, Sports, and Entertainment, experienced their greatest growth as AOL became the dominant online service internationally, with more than 34 million subscribers. In November 1998, AOL announced it would acquire Netscape, best known for its web browser, in a major $4.2 billion deal. The deal closed on March 17, 1999. Another large acquisition in December 1999 was that of MapQuest, for $1.1 billion.
In January 2000, as new broadband technologies were being rolled out around the New York City metropolitan area and elsewhere across the U.S., AOL and Time Warner announced plans to merge, forming AOL Time Warner, Inc. The terms of the deal called for AOL shareholders to own 55% of the new, combined company. The deal closed on January 11, 2001. The new company was led by executives from AOL, SBI, and Time Warner. Gerald Levin, who had served as CEO of Time Warner, was CEO of the new company. Steve Case served as chairman, J. Michael Kelly (from AOL) was the chief financial officer, and Robert W. Pittman (from AOL) and Dick Parsons (from Time Warner) served as co-chief operating officers. In 2002, Jonathan Miller became CEO of AOL. The following year, AOL Time Warner dropped the "AOL" from its name. It was the largest merger in history when completed, with the combined value of the companies at $360 billion. This value fell sharply, to as low as $120 billion, as markets repriced AOL's valuation as a pure internet firm more modestly when combined with the traditional media and cable business. The depressed valuation did not last long, and the company's value rose again within three months. By the end of that year, however, the tide had turned against "pure" internet companies, with many collapsing under falling stock prices, and even the strongest companies in the field losing up to 75% of their market value. The decline continued through 2001, but even with the losses, AOL was among the internet giants that continued to outperform brick-and-mortar companies.
In 2004, along with the launch of AOL 9.0 Optimized, AOL also made available the option of personalized greetings which would enable the user to hear his or her name while accessing basic functions and mail alerts, or while logging in or out. In 2005, AOL broadcast the Live 8 concert live over the Internet, and thousands of users downloaded clips of the concert over the following months. In late 2005, AOL released AOL Safety & Security Center, a bundle of McAfee Antivirus, CA anti-spyware, and proprietary firewall and phishing protection software. News reports in late 2005 identified companies such as Yahoo!, Microsoft, and Google as candidates for turning AOL into a joint venture. Those plans were abandoned when it was revealed on December 20, 2005, that Google would purchase a 5% share of AOL for $1 billion.
2006–2009: rebranding and decline
On April 3, 2006, AOL announced that it would retire the full name America Online; the official name of the service became AOL, and the full name of the Time Warner subdivision became AOL LLC. On June 8, 2006, AOL offered a new program called AOL Active Security Monitor, a diagnostic tool to monitor and rate PC security status and recommend additional security software from AOL or Download.com. Two months later, AOL released AOL Active Virus Shield, a free product developed by Kaspersky Lab that did not require an AOL account, only an internet email address. The ISP side of AOL UK was bought by Carphone Warehouse in October 2006 to take advantage of its 100,000 local loop unbundling (LLU) customers, making Carphone Warehouse the largest LLU provider in the UK.
In August 2006, AOL announced that it would offer email accounts and software previously available only to its paying customers, provided that users accessed AOL or AOL.com through an access method not owned by AOL (otherwise known as "third party transit", "bring your own access" or "BYOA"). The move was designed to reduce costs associated with the "walled garden" business model by reducing usage of AOL-owned access points and shifting members with high-speed internet access from client-based usage to the more lucrative, advertising-supported AOL.com. The change from paid to free access was also designed to slow the rate at which members canceled their accounts and defected to Microsoft Hotmail, Yahoo! or other free email providers. The other free services included:
AIM (AOL Instant Messenger)
AOL Video, which featured professional content and allowed users to upload videos.
AOL Local, comprising its CityGuide, Yellow Pages and Local Search services to help users find local information like restaurants, local events, and directory listings.
AOL News
AOL My eAddress, a custom domain name for email addresses. These email accounts could be accessed in a manner similar to those of other AOL and AIM email accounts.
Xdrive, which allowed users to back up files over the Internet. It was acquired by AOL on August 4, 2005, and closed on December 31, 2008. It offered a free 5 GB account (free online file storage) to anyone with an AOL screenname. Xdrive also provided remote backup services and 50 GB of storage for $9.95 per month.
Also in August, AOL informed its U.S. customers of an increase in the price of its dial-up access to $25.90. The increase was part of an effort to migrate the service's remaining dial-up users to broadband, as the increased price was the same as that of its monthly DSL access. However, AOL subsequently began offering unlimited dial-up access for $9.95 a month.
On November 16, 2006, Randy Falco succeeded Jonathan Miller as CEO. In December 2006, AOL closed its last remaining call center in the United States, "taking the America out of America Online," according to industry pundits. Service centers based in India and the Philippines continue to provide customer support and technical assistance to subscribers.
On September 17, 2007, AOL announced the relocation of one of its corporate headquarters from Dulles, Virginia to New York City and the combination of its advertising units into a new subsidiary called Platform A. This action followed several advertising acquisitions, most notably Advertising.com, and highlighted the company's new focus on advertising-driven business models. AOL management stressed that "significant operations" would remain in Dulles, which included the company's access services and modem banks.
In October 2007, AOL announced the relocation of its other headquarters from Loudoun County, Virginia to New York City, while continuing to operate its Virginia offices. As part of the move to New York and the restructuring of responsibilities at the Dulles headquarters complex after the Reston move, Falco announced on October 15, 2007, plans to lay off 2,000 employees worldwide by the end of 2007, beginning "immediately." The result was a layoff of approximately 40% of AOL's employees. Most compensation packages associated with the October 2007 layoffs included a minimum of 120 days of severance pay, 60 of which were offered in lieu of the 60-day advance notice required by provisions of the 1988 federal WARN Act.
By November 2007, AOL's customer base had been reduced to 10.1 million subscribers, slightly more than the number of subscribers of Comcast and AT&T Yahoo!. According to Falco, as of December 2007, the conversion rate of accounts from paid access to free access was more than 80%.
On January 3, 2008, AOL announced the closing of its Reston, Virginia data center, which was sold to CRG West. On February 6, Time Warner CEO Jeff Bewkes announced that Time Warner would divide AOL's internet-access and advertising businesses, with the possibility of later selling the internet-access division.
On March 13, 2008, AOL purchased the social networking site Bebo for $850 million (£417 million). On July 25, AOL announced that it was shuttering Xdrive, AOL Pictures and BlueString to save on costs and focus on its core advertising business. AOL Pictures was closed on December 31. On October 31, AOL Hometown (a web-hosting service for the websites of AOL customers) and the AOL Journal blog hosting service were eliminated.
2009–2015: As a digital media company
On March 12, 2009, Tim Armstrong, formerly with Google, was named chairman and CEO of AOL. On May 28, Time Warner announced that it would position AOL as an independent company once Google's 5% stake ceased at the end of the fiscal year. On November 23, AOL unveiled a new brand identity, the wordmark "Aol." superimposed onto canvases created by commissioned artists. The new identity, designed by Wolff Olins, was integrated with all of AOL's services on December 10, the date on which AOL began trading independently on the New York Stock Exchange under the symbol AOL, for the first time since the Time Warner merger.
On April 6, 2010, AOL announced plans to shutter or sell Bebo. On June 16, the property was sold to Criterion Capital Partners for an undisclosed amount, believed to be approximately $10 million. In December, AIM eliminated access to AOL chat rooms, noting a marked decline in usage in recent months.
Under Armstrong's leadership, AOL followed a new business direction marked by a series of acquisitions. It announced the acquisition of Patch Media, a network of community-specific news and information sites focused on towns and communities. On September 28, 2010, at the San Francisco TechCrunch Disrupt Conference, AOL signed an agreement to acquire TechCrunch. On December 12, 2010, AOL acquired about.me, a personal profile and identity platform, four days after the platform's public launch.
On January 31, 2011, AOL announced the acquisition of European video distribution network goviral. In March 2011, AOL acquired HuffPost for $315 million. Shortly after the acquisition was announced, Huffington Post co-founder Arianna Huffington replaced AOL content chief David Eun, assuming the role of president and editor-in-chief of the AOL Huffington Post Media Group. On March 10, AOL announced that it would cut approximately 900 workers following the HuffPost acquisition.
On September 14, 2011, AOL formed a strategic ad-selling partnership with two of its largest competitors, Yahoo and Microsoft. The three companies would begin selling inventory on each other's sites. The strategy was designed to help the companies compete with Google and advertising networks.
On February 28, 2012, AOL partnered with PBS to launch MAKERS, a digital documentary series focusing on high-achieving women in industries perceived as male-dominated such as war, comedy, space, business, Hollywood and politics. Subjects for MAKERS episodes have included Oprah Winfrey, Hillary Clinton, Sheryl Sandberg, Martha Stewart, Indra Nooyi, Lena Dunham and Ellen DeGeneres.
On March 15, 2012, AOL announced the acquisition of Hipster, a mobile photo-sharing app, for an undisclosed amount. On April 9, 2012, AOL announced a deal to sell 800 patents to Microsoft for $1.056 billion. The deal included a perpetual license for AOL to use the patents.
In April, AOL took several steps to expand its ability to generate revenue through online video advertising. The company announced that it would offer a gross rating point (GRP) guarantee for online video, mirroring the television-ratings system and guaranteeing audience delivery for online-video advertising campaigns bought across its properties. This announcement came just days before the Digital Content NewFront (DCNF), a two-week event held by AOL, Google, Hulu, Microsoft, Vevo and Yahoo to showcase the participating sites' digital video offerings. The DCNF was conducted in advance of the traditional television upfronts in the hope of diverting more advertising money into the digital space. On April 24, the company launched the AOL On network, a single website for its video output.
In February 2013, AOL reported its fourth quarter revenue of $599.5 million, its first growth in quarterly revenue in eight years.
In August 2013, Armstrong announced that Patch Media would scale back or sell hundreds of its local news sites. Not long afterward, layoffs began, with up to 500 out of 1,100 positions initially impacted. On January 15, 2014, Patch Media was spun off, and majority ownership was held by Hale Global. By the end of 2014, AOL controlled 0.74% of the global advertising market, well behind industry leader Google's 31.4%.
On January 23, 2014, AOL acquired Gravity, a software startup that tracked users' online behavior and tailored ads and content based on their interests, for $83 million. The deal, which included approximately 40 Gravity employees and the company's personalization technology, was Armstrong's fourth-largest deal since taking command in 2009. Later that year, AOL acquired Vidible, a company that developed technology to help websites run video content from other publishers, and help video publishers sell their content to these websites. The deal, which was announced December 1, 2014, was reportedly worth roughly $50 million.
On July 16, 2014, AOL earned an Emmy nomination for the AOL original series The Future Starts Here in the News and Documentary category. This came days after AOL earned its first Primetime Emmy Award nomination and win for Park Bench with Steve Buscemi in the Outstanding Short Form Variety Series. Created and hosted by Tiffany Shlain, the series focused on humans' relationship with technology and featured episodes such as "The Future of Our Species," "Why We Love Robots" and "A Case for Optimism."
2015–2021: division of Verizon
On May 12, 2015, Verizon announced plans to buy AOL for $50 per share in a deal valued at $4.4 billion. The transaction was completed on June 23. Armstrong, who continued to lead the firm following regulatory approval, called the deal the logical next step for AOL. "If you look forward five years, you're going to be in a space where there are going to be massive, global-scale networks, and there's no better partner for us to go forward with than Verizon," he said. "It's really not about selling the company today. It's about setting up for the next five to 10 years."
Analyst David Bank said he thought the deal made sense for Verizon. The deal would broaden Verizon's advertising sales platforms and increase its video production capabilities through websites such as HuffPost, TechCrunch, and Engadget. However, Craig Moffett said it was unlikely the deal would make a big difference to Verizon's bottom line. AOL had about two million dial-up subscribers at the time of the buyout. The announcement caused AOL's stock price to rise 17%, while Verizon's stock price dropped slightly.
Shortly before the Verizon purchase, on April 14, 2015, AOL launched ONE by AOL, a digital marketing programmatic platform that unifies buying channels and audience management platforms to track and optimize campaigns over multiple screens. Later that year, on September 15, AOL expanded the product with ONE by AOL: Creative, which is geared towards creative and media agencies to similarly connect marketing and ad distribution efforts.
On May 8, 2015, AOL reported its first-quarter revenue of $625.1 million, $483.5 million of which came from advertising and related operations, marking a 7% increase from Q1 2014. Over that year, the AOL Platforms division saw a 21% increase in revenue, but a drop in adjusted OIBDA due to increased investments in the company's video and programmatic platforms.
On June 29, 2015, AOL announced a deal with Microsoft to take over the majority of its digital advertising business. Under the pact, as many as 1,200 Microsoft employees involved with the business would be transferred to AOL, and the company would take over the sale of display, video, and mobile ads on various Microsoft platforms in nine countries, including Brazil, Canada, the United States, and the United Kingdom. Additionally, Google Search would be replaced on AOL properties with Bing, which would display advertising sold by Microsoft. Both advertising deals were subject to affiliate marketing revenue sharing.
On July 22, 2015, AOL received two News and Documentary Emmy nominations, one for MAKERS in the Outstanding Historical Programming category, and the other for True Trans With Laura Jane Grace, which documented the story of Laura Jane Grace, a transgender musician best known as the founder, lead singer, songwriter and guitarist of the punk rock band Against Me!, and her decision to come out publicly and overall transition experience.
On September 3, 2015, AOL agreed to buy Millennial Media for $238 million. On October 23, 2015, AOL completed the acquisition.
On October 1, 2015, Go90, a free, ad-supported mobile video service aimed at young adult and teen viewers, owned by Verizon and overseen and operated by AOL, publicly launched its content after months of beta testing. The initial launch line-up included content from Comedy Central, HuffPost, Nerdist News, Univision News, Vice, ESPN and MTV.
On April 20, 2016, AOL acquired the virtual reality studio RYOT to bring immersive 360-degree video and VR content to HuffPost's global audience across desktop, mobile, and apps.
In July 2016, Verizon Communications announced its intent to purchase the core internet business of Yahoo!. Verizon merged AOL with Yahoo into a new company called "Oath Inc.", which in January 2019 rebranded itself as Verizon Media.
In April 2018, Oath Inc. sold Moviefone to MoviePass Parent Helios and Matheson Analytics.
In November 2020, the Huffington Post was sold to BuzzFeed in a stock deal.
2021–present: Apollo Global Management
On May 3, 2021, Verizon announced it would sell 90 percent of its Verizon Media division to Apollo Global Management for $5 billion. The division became the second incarnation of Yahoo! Inc.
Products and services
Content
On September 1, 2021, the following media brands became subsidiaries of AOL's parent, Yahoo Inc.:
Engadget
Autoblog
TechCrunch
Built by Girls
AOL's content contributors consist of over 20,000 bloggers, including politicians, celebrities, academics, and policy experts, who contribute on a wide range of topics in the news.
In addition to mobile-optimized web experiences, AOL produces mobile applications for existing AOL properties like Autoblog, Engadget, The Huffington Post, TechCrunch, and products such as Alto, Pip, and Vivv.
Advertising
AOL has a global portfolio of media brands and advertising services across mobile, desktop, and TV. Services include brand integration and sponsorships through its in-house branded content arm, Partner Studio by AOL, as well as data and programmatic offerings through its ad technology stack, ONE by AOL.
AOL acquired a number of businesses and technologies that helped to form ONE by AOL. These acquisitions included AdapTV in 2013 and Convertro, Precision Demand, and Vidible in 2014. ONE by AOL is further broken down into ONE by AOL for Publishers (formerly Vidible, AOL On Network and Be On for Publishers) and ONE by AOL for Advertisers, each of which has several sub-platforms.
On September 10, 2018, AOL's parent company Oath consolidated BrightRoll, ONE by AOL and Yahoo Gemini into a single advertising proposition dubbed Oath Ad Platforms (now Yahoo! Ad Tech) in order to "simplify" its ad-tech services.
Membership
AOL offers a range of integrated products and properties including communication tools, mobile apps and services and subscription packages.
In 2017, before the discontinuation of AIM, "billions of messages" were sent "daily" on it and AOL's other chat services.
Dial-up Internet access – While 2.1 million people still used AOL's dial-up service as recently as 2015, only a few thousand were still subscribed as of 2021.
AOL Mail – AOL Mail is AOL's proprietary email client. It is fully integrated with AIM and links to news headlines on AOL content sites.
AOL Instant Messenger (AIM) – AIM was AOL's proprietary instant-messaging tool, released in 1997. It lost market share to competitors in the instant messaging market such as Google Chat, Facebook Messenger, and Skype. It also included a video-chat service, AV by AIM. On December 15, 2017, AOL discontinued AIM.
AOL Plans – AOL Plans offers three online safety and assistance tools: ID protection, data security and a general online technical assistance service.
AOL Desktop
AOL Desktop is an internet suite produced by AOL from 2007 that integrates a web browser, a media player, and an instant messenger client. Version 10.X was based on AOL OpenRide and served as an upgrade to it. The macOS version is based on WebKit.
AOL Desktop version 10.X differed from previous AOL browsers and AOL Desktop versions. Its features were focused on web browsing as well as email: users did not have to sign in to AOL in order to use it as a regular browser, and non-AOL email accounts could be accessed through it. Primary buttons included "MAIL", "IM", and several shortcuts to various web pages. The first two required users to sign in, but the shortcuts to web pages could be used without authentication. AOL Desktop version 10.X was later marked as unsupported in favor of the AOL Desktop 9.X versions.
Version 9.8 was released, replacing the Internet Explorer components of the internet browser with CEF (Chromium Embedded Framework) to give users an improved web browsing experience closer to that of Chrome.
Version 11 of AOL Desktop was a total rewrite but maintained a similar user interface to the previous 9.8.X series of releases.
In 2017, a new paid version called AOL Desktop Gold was released, available for $4.99 per month after trial. It replaced the previous free version. After the shutdown of AIM in 2017, AOL's original chat rooms continued to be accessible through AOL Desktop Gold, and some rooms remained active during peak hours. That chat system was shut down on December 15, 2020.
In addition to AOL Desktop, the company also offered AOL Toolbar, a browser toolbar plug-in for several web browsers that provided quick access to AOL services. The toolbar was available from 2007 until 2018.
Criticism
In its earlier incarnation as a "walled garden" community and service provider, AOL received criticism for its community policies, terms of service, and customer service. Prior to 2006, AOL was known for its direct mailing of CD-ROMs and 3.5-inch floppy disks containing its software. The disks were distributed in large numbers; at one point, half of the CDs manufactured worldwide had AOL logos on them. The marketing tactic was criticized for its environmental cost, and AOL CDs were recognized as PC World's most annoying tech product.
Community leaders
AOL used a system of volunteers to moderate its chat rooms, forums and user communities. The program dated back to AOL's early days, when it charged by the hour for access and one of its highest-billing services was chat. AOL provided free access to community leaders in exchange for moderating the chat rooms; this effectively made chat very cheap to operate, and more lucrative than AOL's other services of the era. There were 33,000 community leaders in 1996. All community leaders received hours of training and underwent a probationary period. While most community leaders moderated chat rooms, some ran AOL communities and controlled their layout and design, with as much as 90% of AOL's content being created or overseen by community leaders until 1996.
By 1996, ISPs were beginning to charge flat rates for unlimited access, which they could do at a profit because they only provided internet access. Even though AOL would lose money with such a pricing scheme, it was forced by market conditions to offer unlimited access in October 1996. In order to return to profitability, AOL rapidly shifted its focus from content creation to advertising, resulting in less of a need to carefully moderate every forum and chat room to keep users willing to pay by the minute to remain connected.
After the shift to unlimited access, AOL considered scrapping the program entirely, but continued it with a reduced number of community leaders, whose roles in creating content were scaled back. Although community leaders continued to receive free access, after 1996 they were motivated more by the prestige of the position and the access to moderator tools and restricted areas within AOL. By 1999, there were over 15,000 volunteers in the program.
In May 1999, two former volunteers filed a class-action lawsuit alleging AOL violated the Fair Labor Standards Act by treating volunteers like employees. Volunteers had to apply for the position, commit to working for at least three to four hours a week, fill out timecards and sign a non-disclosure agreement. On July 22, AOL ended its youth corps, which consisted of 350 underage community leaders. At this time, the United States Department of Labor began an investigation into the program, but it came to no conclusions about AOL's practices.
AOL ended its community leader program on June 8, 2005. The class action lawsuit dragged on for years, even after AOL ended the program and declined as a major internet company. In 2010, AOL finally agreed to settle the lawsuit for $15 million. The community leader program was described as an example of co-production in a 2009 article in the International Journal of Cultural Studies.
Billing disputes
AOL has faced a number of lawsuits over claims that it has been slow to stop billing customers after their accounts have been canceled, either by the company or the user. In addition, AOL changed its method of calculating used minutes in response to a class action lawsuit. Previously, AOL would add 15 seconds to the time a user was connected to the service and round up to the next whole minute (thus, a person who used the service for 12 minutes and 46 seconds would be charged for 14 minutes). AOL claimed this was to account for sign-on/sign-off time, but because this practice was not made known to its customers, the plaintiffs won (some also pointed out that signing on and off did not always take 15 seconds, especially when connecting via another ISP). AOL disclosed its connection-time calculation methods to all of its customers and credited them with extra free hours. In addition, the AOL software would notify users of exactly how long they had been connected and how many minutes they were being charged for.
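For concreteness, the disputed rounding rule works out as follows. This is a minimal sketch in Python of the calculation described above; the function name is illustrative, not taken from AOL's own software:

```python
import math

def billed_minutes(connected_seconds: int) -> int:
    """Old AOL billing: add 15 seconds for sign-on/sign-off,
    then round up to the next whole minute."""
    return math.ceil((connected_seconds + 15) / 60)

# 12 minutes 46 seconds of connect time is billed as 14 minutes,
# matching the example cited in the lawsuit.
assert billed_minutes(12 * 60 + 46) == 14
```

The 15-second surcharge pushes a session that is just past a minute boundary over the next one: 12:46 of connect time becomes 781 seconds, which rounds up to 14 billable minutes rather than 13.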
AOL was sued by the Ohio Attorney General in October 2003 for improper billing practices. The case was settled on June 8, 2005, with AOL agreeing to resolve any consumer complaints filed with the Ohio AG's office. In December 2006, AOL agreed to provide restitution to Florida consumers to settle the case filed against it by the Florida Attorney General.
Account cancellation
Many customers complained that AOL personnel ignored their demands to cancel service and stop billing. In response to approximately 300 consumer complaints, the New York Attorney General's office began an inquiry of AOL's customer service policies. The investigation revealed that the company had an elaborate scheme for rewarding employees who purported to retain or "save" subscribers who had called to cancel their Internet service. In many instances, such retention was done against subscribers' wishes, or without their consent. Under the scheme, customer service personnel received bonuses worth tens of thousands of dollars if they could successfully dissuade or "save" half of the people who called to cancel service. For several years, AOL had instituted minimum retention or "save" percentages, which consumer representatives were expected to meet. These bonuses, and the minimum "save" rates accompanying them, had the effect of employees not honoring cancellations, or otherwise making cancellation unduly difficult for consumers.
On August 24, 2005, America Online agreed to pay $1.25 million to the state of New York and reformed its customer service procedures. Under the agreement, AOL would no longer require its customer service representatives to meet a minimum quota for customer retention in order to receive a bonus. However, the agreement only covered people in the state of New York.
On June 13, 2006, Vincent Ferrari documented his account cancellation phone call in a blog post, stating he had switched to broadband years earlier. In the recorded phone call, the AOL representative refused to cancel the account unless the 30-year-old Ferrari explained why AOL hours were still being recorded on it. Ferrari insisted that AOL software was not even installed on the computer. When Ferrari demanded that the account be canceled regardless, the AOL representative asked to speak with Ferrari's father, for whom the account had been set up. The conversation was aired on CNBC. When CNBC reporters tried to have an account on AOL cancelled, they were hung up on immediately and it ultimately took more than 45 minutes to cancel the account.
On July 19, 2006, AOL's entire retention manual was released on the Internet. On August 3, 2006, Time Warner announced that the company would be dissolving AOL's retention centers, with its profits hinging on $1 billion in cost cuts. The company estimated that it would lose more than six million subscribers over the following year.
Direct marketing of disks
Prior to 2006, AOL often sent unsolicited mass direct mail of 3.5-inch floppy disks and CD-ROMs containing its software. It was the most frequent user of this marketing tactic, and received criticism for the environmental cost of the campaign. According to PC World, in the 1990s "you couldn't open a magazine (PC World included) or your mailbox without an AOL disk falling out of it".
The mass distribution of these disks was seen as wasteful by the public and led to protest groups. One such was No More AOL CDs, a web-based effort by two IT workers to collect one million disks with the intent to return the disks to AOL. The website was started in August 2001, and an estimated 410,176 CDs were collected by August 2007 when the project was shut down.
Software
In 2000, AOL was served with an $8 billion lawsuit alleging that its AOL 5.0 software caused significant difficulties for users attempting to use third-party Internet service providers. The lawsuit sought damages of up to $1,000 for each user who had downloaded the software as of the time of the lawsuit. AOL later agreed to a settlement of $15 million, without admission of wrongdoing. The AOL software was subsequently given a feature called AOL Dialer (also known as AOL Connect), which allowed users to connect to the ISP without running the full interface, so that users could run only the applications they wished to use, particularly if they did not favor the AOL browser.
AOL 9.0 was once identified by Stopbadware as being under investigation for installing additional software without disclosure, and modifying browser preferences, toolbars, and icons. However, as of the release of AOL 9.0 VR (Vista Ready) on January 26, 2007, it was no longer considered badware due to changes AOL made in the software.
Usenet newsgroups
When AOL gave clients access to Usenet in 1993, they hid at least one newsgroup in standard list view: alt.aol-sucks. AOL did list the newsgroup in the alternative description view, but changed the description to "Flames and complaints about America Online". With AOL clients swarming Usenet newsgroups, the old, existing user base started to develop a strong distaste for both AOL and its clients, referring to the new state of affairs as Eternal September.
AOL discontinued access to Usenet on June 25, 2005. No official details were provided as to the cause of decommissioning Usenet access, beyond a suggestion that users access Usenet services from a third party, Google Groups. AOL then provided community-based message boards in lieu of Usenet.
Terms of Service (TOS)
AOL has a detailed set of guidelines and expectations for users on their service, known as the Terms of Service (TOS, also known as Conditions of Service, or COS in the UK). It is separated into three different sections: Member Agreement, Community Guidelines and Privacy Policy. All three agreements are presented to users at time of registration and digital acceptance is achieved when they access the AOL service. During the period when volunteer chat room hosts and board monitors were used, chat room hosts were given a brief online training session and test on Terms of Service violations.
There have been many complaints over the rules that govern an AOL user's conduct. Some users disagree with the TOS, citing guidelines that are too strict to follow, coupled with the fact that the TOS may change without users being made aware. Much of this criticism likely stemmed from alleged censorship of user-generated content during AOL's earlier years of growth.
Certified email
In early 2005, AOL stated its intention to implement a certified email system called Goodmail, which would allow companies to send email to users with whom they had pre-existing business relationships, with a visual indication that the email was from a trusted source and without the risk that the messages might be blocked or stripped by spam filters.
This decision drew fire from MoveOn, which characterized the program as an "email tax", and the Electronic Frontier Foundation (EFF), which characterized it as a shakedown of non-profits. A website called Dearaol.com was launched, with an online petition and a blog that garnered hundreds of signatures from people and organizations expressing their opposition to AOL's use of Goodmail.
Esther Dyson defended the move in an editorial in The New York Times, saying "I hope Goodmail succeeds, and that it has lots of competition. I also think it and its competitors will eventually transform into services that more directly serve the interests of mail recipients. Instead of the fees going to Goodmail and AOL, they will also be shared with the individual recipients."
Tim Lee of the Technology Liberation Front posted an article questioning the Electronic Frontier Foundation's adoption of a confrontational posture when dealing with private companies. Lee's article cited a series of discussions on Declan McCullagh's Politechbot mailing list on this subject between the EFF's Danny O'Brien and antispammer Suresh Ramasubramanian, who has also compared the EFF's tactics in opposing Goodmail to tactics used by Republican political strategist Karl Rove. SpamAssassin developer Justin Mason posted some criticism of the EFF's and MoveOn's "going overboard" in their opposition to the scheme.
The dearaol.com campaign lost momentum and disappeared; the last post to the now-defunct dearaol.com blog, "AOL starts the shakedown", was made on May 9, 2006.
Comcast, which also used the service, announced on its website that Goodmail had ceased operations and that, as of February 4, 2011, it no longer used the service.
Search data
On August 4, 2006, AOL released a compressed text file on one of its websites containing 20 million search keywords from over 650,000 users over a three-month period between March 1 and May 31, 2006, intended for research purposes. AOL pulled the file from public access by August 7, but not before its wide distribution on the Internet by others. Derivative research, titled A Picture of Search, was published by authors Pass, Chowdhury and Torgeson for The First International Conference on Scalable Information Systems.
The data were used by websites such as AOLstalker for entertainment purposes, where users of AOLstalker are encouraged to judge AOL clients based on the humorousness of personal details revealed by search behavior.
User list exposure
In 2003, Jason Smathers, an AOL employee, stole 92 million AOL screen names and sold them to a known spammer. In 2005, Smathers pled guilty to conspiracy charges and to violations of the US CAN-SPAM Act of 2003. He was sentenced in August 2005 to 15 months in prison; the sentencing judge also recommended Smathers be forced to pay $84,000 in restitution, triple the $28,000 for which he sold the addresses.
AOL's Computer Checkup "scareware"
On February 27, 2012, a class action lawsuit was filed against Support.com, Inc. and its partner AOL, Inc. The lawsuit alleged that AOL's Computer Checkup "scareware" (which uses software developed by Support.com) misrepresented that its software programs would identify and resolve a host of technical problems with computers, offering to perform a free "scan" that often found problems with users' computers. The companies then offered to sell software (for which AOL allegedly charged $4.99 a month and Support.com $29) to remedy those problems. Both AOL, Inc. and Support.com, Inc. settled on May 30, 2013, for $8.5 million; the settlement included $25 to each valid class member and $100,000 each to Consumer Watchdog and the Electronic Frontier Foundation. Judge Jacqueline Scott Corley wrote: "Distributing a portion of the [funds] to Consumer Watchdog will meet the interests of the silent class members because the organization will use the funds to help protect consumers across the nation from being subject to the types of fraudulent and misleading conduct that is alleged here," and "EFF's mission includes a strong consumer protection component, especially in regards to online protection."
AOL continues to market Computer Checkup.
NSA PRISM program
Following media reports in June 2013 about PRISM, the NSA's massive electronic surveillance program, several technology companies were identified as participants, including AOL. According to the leaked documents, AOL joined the PRISM program in 2011.
Hosting of user profiles changed, then discontinued
At one time, most AOL users had an online "profile" hosted by the AOL Hometown service. When AOL Hometown was discontinued, users had to create a new profile on Bebo. This was an unsuccessful attempt to create a social network that would compete with Facebook. When the value of Bebo decreased to a tiny fraction of the $850 million AOL paid for it, users were forced to recreate their profiles yet again, on a new service called AOL Lifestream.
AOL took the decision to shut down Lifestream on February 24, 2017, and gave users one month's notice to save photos and videos that had been uploaded to Lifestream. Following the shutdown, AOL no longer provides any option for hosting user profiles.
During the Hometown/Bebo/Lifestream era, another user's profile could be displayed by clicking the "Buddy Info" button in the AOL Desktop software. After the shutdown of Lifestream this was no longer supported; the button instead opened the AIM home page (www.aim.com), which also became defunct and now redirects to AOL's home page.
See also
Adrian Lamo – Inside-AOL.com
AOHell
Comparison of webmail providers
David Shing
Dot-com bubble
List of acquisitions by AOL
List of S&P 400 companies
Live365
Truveo
|
https://en.wikipedia.org/wiki/Alcuin
|
Alcuin of York (735 – 19 May 804) – also called Ealhwine, Alhwin, or Alchoin – was a scholar, clergyman, poet, and teacher from York, Northumbria. He was born around 735 and became the student of Archbishop Ecgbert at York. At the invitation of Charlemagne, he became a leading scholar and teacher at the Carolingian court, where he remained a figure in the 780s and 790s. Before that, he was also a court chancellor in Aachen. "The most learned man anywhere to be found", according to Einhard's Life of Charlemagne (c. 817–833), he is considered among the most important intellectual architects of the Carolingian Renaissance. Among his pupils were many of the dominant intellectuals of the Carolingian era.
During this period, he perfected Carolingian minuscule, an easily read manuscript hand using a mixture of upper- and lower-case letters. Latin paleography in the eighth century leaves little room for a single origin of the script, and sources contradict his importance as no proof has been found of his direct involvement in the creation of the script. Carolingian minuscule was already in use before Alcuin arrived in Francia. Most likely he was responsible for copying and preserving the script while at the same time restoring the purity of the form.
Alcuin wrote many theological and dogmatic treatises, as well as a few grammatical works and a number of poems. In 796, he was made abbot of Marmoutier Abbey, in Tours, where he remained until his death.
Biography
Background
Alcuin was born in Northumbria, presumably sometime in the 730s. Virtually nothing is known of his parents, family background, or origin. In common hagiographical fashion, the Vita Alcuini asserts that Alcuin was "of noble English stock", and this statement has usually been accepted by scholars. Alcuin's own work only mentions such collateral kinsmen as Wilgils, father of the missionary saint Willibrord; and Beornrad (also spelled Beornred), abbot of Echternach and bishop of Sens. Willibrord, Alcuin and Beornrad were all related by blood.
In his Life of St Willibrord, Alcuin writes that Wilgils, called a paterfamilias, had founded an oratory and church at the mouth of the Humber, which had fallen into Alcuin's possession by inheritance. Because in early Anglo-Latin writing paterfamilias ("head of a family, householder") usually referred to a ceorl ("churl"), Donald A. Bullough suggests that Alcuin's family was of ceorlisc ("churlish") status: i.e., free but subordinate to a noble lord, and that Alcuin and other members of his family rose to prominence through beneficial connections with the aristocracy. If so, Alcuin's origins may lie in the southern part of what was formerly known as Deira.
York
The young Alcuin came to the cathedral church of York during the golden age of Archbishop Ecgbert and his brother, the Northumbrian King Eadberht. Ecgbert had been a disciple of the Venerable Bede, who urged him to raise York to an archbishopric. King Eadberht and Archbishop Ecgbert oversaw the re-energising and reorganisation of the English church, with an emphasis on reforming the clergy and on the tradition of learning that Bede had begun. Ecgbert was devoted to Alcuin, who thrived under his tutelage.
The York school was renowned as a centre of learning in the liberal arts, literature, and science, as well as in religious matters. From here, Alcuin drew inspiration for the school he would lead at the Frankish court. He revived the school with the trivium and quadrivium disciplines, writing a codex on the trivium, while his student Hraban wrote one on the quadrivium.
Alcuin graduated to become a teacher during the 750s. His ascendancy to the headship of the York school, the ancestor of St Peter's School, began after Aelbert became Archbishop of York in 767. Around the same time, Alcuin became a deacon in the church. He was never ordained a priest. Though no real evidence shows that he took monastic vows, he lived as if he had.
In 781, King Elfwald sent Alcuin to Rome to petition the pope for official confirmation of York's status as an archbishopric and to confirm the election of the new archbishop, Eanbald I. On his way home, he met Charlemagne (whom he had met once before), this time in the Italian city of Parma.
Charlemagne
Despite some initial reluctance, Alcuin's intellectual curiosity persuaded him to join Charlemagne's court. He joined an illustrious group of scholars whom Charlemagne had gathered around him, the mainsprings of the Carolingian Renaissance: Peter of Pisa, Paulinus of Aquileia, Rado, and Abbot Fulrad. Alcuin would later write, "the Lord was calling me to the service of King Charles".
Alcuin became master of the Palace School of Charlemagne in Aachen in 782. It had been founded by the king's ancestors as a place for the education of the royal children (mostly in manners and the ways of the court). However, Charlemagne wanted to include the liberal arts and, most importantly, the study of religion. From 782 to 790, Alcuin taught Charlemagne himself, his sons Pepin and Louis, as well as young men sent to be educated at court, and the young clerics attached to the palace chapel. Bringing with him from York his assistants Pyttel, Sigewulf, and Joseph, Alcuin revolutionised the educational standards of the Palace School, introducing Charlemagne to the liberal arts and creating a personalised atmosphere of scholarship and learning, to the extent that the institution came to be known as the 'school of Master Albinus'.
In this role as adviser, he took issue with the emperor's policy of forcing pagans to be baptised on pain of death, arguing, "Faith is a free act of the will, not a forced act. We must appeal to the conscience, not compel it by violence. You can force people to be baptised, but you cannot force them to believe." His arguments seem to have prevailed – Charlemagne abolished the death penalty for paganism in 797.
Charlemagne gathered the best men of every land in his court, and became far more than just the king at the centre. It seems that he made many of these men his closest friends and counsellors. They referred to him as 'David', a reference to the Biblical king David. Alcuin soon found himself on intimate terms with Charlemagne and the other men at court, where pupils and masters were known by affectionate and jesting nicknames. Alcuin himself was known as 'Albinus' or 'Flaccus'. While at Aachen, Alcuin bestowed pet names upon his pupils – derived mainly from Virgil's Eclogues. According to the Encyclopædia Britannica, "He loved Charlemagne and enjoyed the king's esteem, but his letters reveal that his fear of him was as great as his love."
After the death of Pope Adrian I, Alcuin was commissioned by Charlemagne to compose an epitaph for Adrian. The epitaph was inscribed on black stone quarried at Aachen and carried to Rome where it was set over Adrian's tomb in the south transept of St Peter's basilica just before Charlemagne's coronation in the basilica on Christmas Day 800.
Return to Northumbria and back to Francia
In 790, Alcuin returned from the court of Charlemagne to England, to which he had remained attached. He dwelt there for some time, but Charlemagne then invited him back to help in the fight against the Adoptionist heresy, which was at that time making great progress in Toledo, the old capital of the Visigoths and still a major city for the Christians under Islamic rule in Spain. He is believed to have had contacts with Beatus of Liébana, from the Kingdom of Asturias, who fought against Adoptionism. At the Council of Frankfurt in 794, Alcuin upheld the orthodox doctrine against the views expressed by Felix of Urgel, a heresiarch according to the Catholic Encyclopaedia. Having failed during his stay in Northumbria to influence King Æthelred in the conduct of his reign, Alcuin never returned home.
He was back at Charlemagne's court by at least mid-792, writing a series of letters to Æthelred, to Hygbald, Bishop of Lindisfarne, and to Æthelhard, Archbishop of Canterbury in the succeeding months, dealing with the Viking attack on Lindisfarne in July 793. These letters and Alcuin's poem on the subject, De clade Lindisfarnensis monasterii, provide the only significant contemporary account of these events. In his description of the Viking attack, he wrote: "Never before has such terror appeared in Britain. Behold the church of St Cuthbert, splattered with the blood of God's priests, robbed of its ornaments."
Tours and death
In 796, Alcuin was in his 60s. He hoped to be free from court duties and upon the death of Abbot Itherius of Saint Martin at Tours, Charlemagne put Marmoutier Abbey into Alcuin's care, with the understanding that he should be available if the king ever needed his counsel. There, he encouraged the work of the monks on the beautiful Carolingian minuscule script, ancestor of modern Roman typefaces.
Alcuin died on 19 May 804, some 10 years before the emperor, and was buried at St. Martin's Church under an epitaph.
The majority of details on Alcuin's life come from his letters and poems. Also, autobiographical sections are in Alcuin's poem on York and in the Vita Alcuini, a hagiography written for him at Ferrières in the 820s, possibly based in part on the memories of Sigwulf, one of Alcuin's pupils.
Carolingian Renaissance figure and legacy
Mathematician
The collection of mathematical and logical word problems entitled Propositiones ad acuendos juvenes ("Problems to Sharpen Youths") is sometimes attributed to Alcuin. In a 799 letter to Charlemagne, the scholar claimed to have sent "certain figures of arithmetic for the joy of cleverness", which some scholars have identified with the Propositiones.
The text contains about 53 mathematical word problems (with solutions), in no particular pedagogical order. Among the most famous of these problems are: four that involve river crossings, including the problem of three anxious brothers, each of whom has an unmarried sister whom he cannot leave alone with either of the other men lest she be defiled (Problem 17); the problem of the wolf, goat, and cabbage (Problem 18); and the problem of "the two adults and two children where the children weigh half as much as the adults" (Problem 19). Alcuin's sequence is the solution to one of the problems of that book.
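The river-crossing puzzles in particular survive today as staple exercises in state-space search. As an illustration (not part of the historical text), a minimal breadth-first search in Python solves Problem 18, the wolf, goat, and cabbage puzzle; all names here are illustrative:

```python
from collections import deque

ITEMS = frozenset({"wolf", "goat", "cabbage"})

def safe(bank):
    # A bank left without the ferryman is unsafe if the wolf is with
    # the goat, or the goat is with the cabbage.
    return not ({"wolf", "goat"} <= bank or {"goat", "cabbage"} <= bank)

def solve():
    # State: (items on the near bank, ferryman's bank: 0 = near, 1 = far).
    start, goal = (ITEMS, 0), (frozenset(), 1)
    queue, seen = deque([(start, [])]), {start}
    while queue:
        (near, boat), path = queue.popleft()
        if (near, boat) == goal:
            return path
        here = near if boat == 0 else ITEMS - near
        for cargo in (None, *here):  # cross empty-handed, or carry one item
            moved = frozenset() if cargo is None else frozenset({cargo})
            new_near = near - moved if boat == 0 else near | moved
            left_behind = new_near if boat == 0 else ITEMS - new_near
            state = (new_near, 1 - boat)
            if safe(left_behind) and state not in seen:
                seen.add(state)
                queue.append((state, path + [cargo or "(nothing)"]))

print(solve())  # a shortest plan of seven crossings, ferrying the goat first
```

Breadth-first search guarantees a minimal plan; for this puzzle the minimum is seven crossings, and the goat must be carried on the first trip, since any other first move leaves an unsafe pair behind.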
Literary influence
Alcuin made the abbey school into a model of excellence and many students flocked to it. He had many manuscripts copied using outstandingly beautiful calligraphy, the Carolingian minuscule based on round and legible uncial letters. He wrote many letters to his English friends, to Arno, bishop of Salzburg and above all to Charlemagne. These letters (of which 311 are extant) are filled mainly with pious meditations, but they form an important source of information as to the literary and social conditions of the time and are the most reliable authority for the history of humanism during the Carolingian age. Alcuin trained the numerous monks of the abbey in piety, and in the midst of these pursuits, he died.
Alcuin is the most prominent figure of the Carolingian Renaissance, in which three main periods have been distinguished: in the first of these, up to the arrival of Alcuin at the court, the Italians occupy a central place; in the second, Alcuin and the English are dominant; in the third (from 804), the influence of Theodulf the Visigoth is preponderant.
Alcuin also developed manuals used in his educational work – a grammar and works on rhetoric and dialectics. These are written in the form of a dialogue, and in two of them the interlocutors are Charlemagne and Alcuin. He wrote several theological treatises: a De fide Trinitatis, and commentaries on the Bible. Alcuin is credited with inventing the first known question mark, though it did not resemble the modern symbol.
Alcuin transmitted to the Franks the knowledge of Latin culture which had existed in Anglo-Saxon England. A number of his works still exist. Besides some graceful epistles in the style of Venantius Fortunatus, he wrote some long poems, and notably he is the author of a history (in verse) of the church at York, Versus de patribus, regibus et sanctis Eboracensis ecclesiae. At the same time, he is noted for making one of the only explicit comments on Old English poetry surviving from the early Middle Ages, in a letter to one Speratus, the bishop of an unnamed English see (possibly Unwona of Leicester): "Let God's words be read at the episcopal dinner-table. It is right that a reader should be heard, not a harpist, patristic discourse, not pagan song. What has Ingeld to do with Christ?"
Use of homoerotic language in writings
Historian John Boswell cited Alcuin's writings as demonstrating a personal outpouring of his internalized homosexual feelings. Others agree that Alcuin at times "comes perilously close to communicating openly his same-sex desires." According to David Clark, passages in some of Alcuin's writings can be seen to display homosocial desire, even possibly homoerotic imagery. However, he argues that it is not possible to necessarily determine whether they were the result of an outward expression of erotic feelings on the part of Alcuin.
The interpretation of homosexual desire has been disputed by Allen Frantzen, who identifies Alcuin's language with that of medieval Christian amicitia or friendship. Douglas Dales and Rowan Williams say "the use of language drawn [by Alcuin] from the Song of Songs transforms apparently erotic language into something within Christian friendship – 'an ordained affection'".
Alcuin was also a close friend of Charlemagne's sister Gisela, Abbess of Chelles, and he hailed her as "a noble sister in the bond of sweet love". He wrote to Charlemagne's daughters Rotrude and Bertha, "the devotion of my heart specially tends towards you both because of the familiarity and dedication you have shown me". He dedicated the last two books of his commentary on John's gospel to them both.
Despite inconclusive evidence of Alcuin's personal passions, he was clear in his own writings that the men of Sodom had been punished with fire for "sinning against nature with men" – a view consistent with Church teaching. Such sins, argued Alcuin, were therefore more serious than lustful acts with women, for which the earth was cleansed and revivified by the water of the Flood, and merit to be "withered by flames unto eternal barrenness".
Legacy
Alcuin is honored in the Church of England and in the Episcopal Church on 20 May, the first available day after the day of his death (as Dunstan is celebrated on 19 May).
Alcuin College, one of the colleges of the University of York, is named after him. In January 2020, Alcuin was the subject of the BBC Radio 4 programme In Our Time.
Selected works
For a complete census of Alcuin's works, see Marie-Hélène Jullien and Françoise Perelman, eds., Clavis scriptorum latinorum medii aevi: Auctores Galliae 735–987. Tomus II: Alcuinus. Turnhout: Brepols, 1999.
Poetry
Carmina, ed. Ernst Dümmler, MGH Poetae Latini aevi Carolini I. Berlin: Weidmann, 1881. 160–351.
Godman, Peter, tr., Poetry of the Carolingian Renaissance. Norman, University of Oklahoma Press, 1985. 118–149.
Stella, Francesco, tr., comm., La poesia carolingia, Firenze: Le Lettere, 1995, pp. 94–96, 152–61, 266–67, 302–307, 364–371, 399–404, 455–457, 474–477, 503–507.
Isbell, Harold, tr. The Last Poets of Imperial Rome. Baltimore: Penguin, 1971.
Poem on York, Versus de patribus, regibus et sanctis Euboricensis ecclesiae, ed. and tr. Peter Godman, The Bishops, Kings, and Saints of York. Oxford: Clarendon Press, 1982.
De clade Lindisfarnensis monasterii, "On the destruction of the monastery of Lindisfarne" (Carmen 9, ed. Dümmler, pp. 229–235).
Letters
Of Alcuin's letters, over 310 have survived.
Epistolae, ed. Ernst Dümmler, MGH Epistolae IV.2. Berlin: Weidmann, 1895. 1–493.
Jaffé, Philipp, Ernst Dümmler, and W. Wattenbach, eds. Monumenta Alcuiniana. Berlin: Weidmann, 1873. 132–897.
Chase, Colin, ed. Two Alcuin Letter-books. Toronto: Pontifical Institute of Mediaeval Studies, 1975.
Allott, Stephen, tr. Alcuin of York, c. AD 732 to 804. His life and letters. York: William Sessions, 1974.
Sturgeon, Thomas G., tr. The Letters of Alcuin: Part One, the Aachen Period (762–796). Harvard University PhD thesis, 1953.
Didactic works
Ars grammatica. PL 101: 854–902.
De orthographia, ed. H. Keil, Grammatici Latini VII, 1880. 295–312; ed. Sandra Bruni, Alcuino de orthographia. Florence: SISMEL, 1997.
De dialectica. PL 101: 950–976.
Disputatio regalis et nobilissimi juvenis Pippini cum Albino scholastico "Dialogue of Pepin, the Most Noble and Royal Youth, with the Teacher Albinus", ed. L. W. Daly and W. Suchier, Altercatio Hadriani Augusti et Epicteti Philosophi. Urbana, IL: University of Illinois Press, 1939. 134–146; ed. Wilhelm Wilmanns, "Disputatio regalis et nobilissimi juvenis Pippini cum Albino scholastico". Zeitschrift für deutsches Altertum 14 (1869): 530–555, 562.
Disputatio de rhetorica et de virtutibus sapientissimi regis Carli et Albini magistri, ed. and tr. Wilbur Samuel Howell, The Rhetoric of Alcuin and Charlemagne. New York: Russell and Russell, 1965 (1941); ed. C. Halm, Rhetorici Latini Minores. Leipzig: Teubner, 1863. 523–550.
De virtutibus et vitiis (moral treatise dedicated to Count Wido of Brittany, 799–800). PL 101: 613–638 (transcript available online). A new critical edition is being prepared for the Corpus Christianorum, Continuatio Medievalis.
De animae ratione (ad Eulaliam virginem) (written for Gundrada, Charlemagne's cousin). PL 101: 639–650.
De Cursu et Saltu Lunae ac Bissexto, astronomical treatise. PL 101: 979–1002.
(?) Propositiones ad acuendos iuvenes, ed. Menso Folkerts, "Die älteste mathematische Aufgabensammlung in lateinischer Sprache: Die Alkuin zugeschriebenen Propositiones ad acuendos iuvenes; Überlieferung, Inhalt, Kritische Edition", in idem, Essays on Early Medieval Mathematics: The Latin Tradition. Aldershot: Ashgate, 2003.
Theology
Compendium in Canticum Canticorum: Alcuino, Commento al Cantico dei cantici – con i commenti anonimi Vox ecclesie e Vox antique ecclesie, ed. Rossana Guglielmetti, Firenze, SISMEL 2004
Quaestiones in Genesim. PL 100: 515–566.
De Fide Sanctae Trinitatis et de Incarnatione Christi; Quaestiones de Sancta Trinitate, ed. E. Knibbs and E. Ann Matter (Corpus Christianorum – Continuatio Mediaevalis 249: Brepols, 2012)
Hagiography
Vita II Vedastis episcopi Atrebatensis. Revision of the earlier Vita Vedastis by Jonas of Bobbio. Patrologia Latina 101: 663–682.
Vita Richarii confessoris Centulensis. Revision of an earlier anonymous life. MGH Scriptores Rerum Merovingicarum 4: 381–401.
Vita Willibrordi archiepiscopi Traiectensis, ed. W. Levison, Passiones vitaeque sanctorum aevi Merovingici. MGH Scriptores Rerum Merovingicarum 7: 81–141.
See also
Propositiones ad Acuendos Juvenes
Carolingian art
Carolingian Empire
Category: Carolingian period
Correctory
Codex Vindobonensis 795
References
Sources
Allott, Stephen. Alcuin of York, his life and letters
Dales, Douglas J. Accessing Alcuin: A Master Bibliography (James Clarke & Co., Cambridge, 2013).
Diem, Albrecht, 'The Emergence of Monastic Schools. The Role of Alcuin', in: Luuk A. J. R. Houwen and Alasdair A. McDonald (eds.), Alcuin of York. Scholar at the Carolingian Court, Groningen 1998 (Germania Latina, vol. 3), pp. 27–44.
Duckett, Eleanor Shipley. Carolingian Portraits, (1962)
Ganshof, F.L. The Carolingians and the Frankish Monarchy
Godman, Peter. Poetry of the Carolingian Renaissance
Lorenz, Frederick. The life of Alcuin (Thomas Hurst, 1837).
McGuire, Brian P. Friendship and Community: The Monastic Experience
Murphy, Richard E. Alcuin of York: De Virtutibus et Vitiis, Virtues and Vices.
Stehling, Thomas. Medieval Latin Love Poems of Male Love and Friendship.
Stella, Francesco, "Alkuins Dichtung" in Alkuin von York und die geistige Grundlegung Europas , Sankt Gallen, Verlag am Klosterhof, 2010, pp. 107–28.
Throop, Priscilla, trans. Alcuin: His Life; On Virtues and Vices; Dialogue with Pepin (Charlotte, VT: MedievalMS, 2011)
West, Andrew Fleming. Alcuin and the Rise of the Christian Schools (C. Scribner's Sons, 1912)
External links
Alcuin's book, Problems for the Quickening of the Minds of the Young
Introduction to Alcuin's writings by Robert Levine and Whitney Bolton
The Alcuin Society
Anglo-Saxon York on History of York site
Corpus Christianorum, Continuatio Mediaevalis: new critical editions in preparation
Corpus Grammaticorum Latinorum: complete texts and full bibliography
The Life of Alcuin by Frederick Lorenz
Vidas de São Martinho de Tours e de São Brício, 1175, at the National Library of Portugal
Glossarium latinum De arte grammatica, Textos didáticos, 1176–1225, at the National Library of Portugal
|
https://en.wikipedia.org/wiki/Amine
|
In chemistry, amines are compounds and functional groups that contain a basic nitrogen atom with a lone pair. Amines are formally derivatives of ammonia (NH3), wherein one or more hydrogen atoms have been replaced by a substituent such as an alkyl or aryl group (these may respectively be called alkylamines and arylamines; amines in which both types of substituent are attached to one nitrogen atom may be called alkylarylamines). Important amines include amino acids, biogenic amines, trimethylamine, and aniline. Inorganic derivatives of ammonia are also called amines, such as monochloramine (NH2Cl).
The substituent −NH2 is called an amino group.
Compounds with a nitrogen atom attached to a carbonyl group, thus having the structure R−C(=O)−NR′2, are called amides and have different chemical properties from amines.
Classification of amines
Amines can be classified according to the nature and number of substituents on nitrogen. Aliphatic amines contain only H and alkyl substituents. Aromatic amines have the nitrogen atom connected to an aromatic ring.
Amines, alkyl and aryl alike, are organized into three subcategories (see table) based on the number of carbon atoms adjacent to the nitrogen, i.e. how many hydrogen atoms of the ammonia molecule are replaced by hydrocarbon groups:
Primary (1°) amines—Primary amines arise when one of the three hydrogen atoms in ammonia is replaced by an alkyl or aromatic group. Important primary alkyl amines include methylamine, most amino acids, and the buffering agent tris, while primary aromatic amines include aniline.
Secondary (2°) amines—Secondary amines have two organic substituents (alkyl, aryl or both) bound to the nitrogen together with one hydrogen. Important representatives include dimethylamine, while an example of an aromatic amine would be diphenylamine.
Tertiary (3°) amines—In tertiary amines, nitrogen has three organic substituents. Examples include trimethylamine, which has a distinctively fishy smell, and EDTA.
A fourth subcategory is determined by the connectivity of the substituents attached to the nitrogen:
Cyclic amines—Cyclic amines are either secondary or tertiary amines. Examples of cyclic amines include the 3-membered ring aziridine and the six-membered ring piperidine. N-methylpiperidine and N-phenylpiperidine are examples of cyclic tertiary amines.
It is also possible to have four organic substituents on the nitrogen. These species are not amines but are quaternary ammonium cations and have a charged nitrogen center. Quaternary ammonium salts exist with many kinds of anions.
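The classification above is mechanical enough to express in a few lines of code. The following is a minimal sketch, not a cheminformatics API: the representation of a nitrogen centre as a simple list of its organic substituents is a simplifying assumption for illustration.

```python
def classify_nitrogen(substituents):
    """Classify a nitrogen centre by its organic (non-hydrogen) substituents.

    `substituents` is a list of organic groups bound to N, e.g. ["CH3"];
    hydrogen atoms are assumed to fill the remaining positions.
    """
    n = len(substituents)
    if n == 0:
        return "ammonia"
    if n == 1:
        return "primary (1°) amine"
    if n == 2:
        return "secondary (2°) amine"
    if n == 3:
        return "tertiary (3°) amine"
    if n == 4:
        return "quaternary ammonium cation (not an amine)"
    raise ValueError("nitrogen cannot bear more than four substituents")

print(classify_nitrogen(["CH3"]))                       # methylamine -> primary
print(classify_nitrogen(["CH3", "CH3"]))                # dimethylamine -> secondary
print(classify_nitrogen(["CH3", "CH3", "CH3"]))         # trimethylamine -> tertiary
print(classify_nitrogen(["CH3", "CH3", "CH3", "CH3"]))  # tetramethylammonium
```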
Naming conventions
Amines are named in several ways. Typically, the compound is given the prefix "amino-" or the suffix "-amine". The prefix "N-" shows substitution on the nitrogen atom. An organic compound with multiple amino groups is called a diamine, triamine, tetraamine and so forth.
Systematic (IUPAC) names for common amines append -amine to the name of the parent hydride; methylamine, for example, is systematically named methanamine.
Physical properties
Hydrogen bonding significantly influences the properties of primary and secondary amines. For example, methyl and ethyl amines are gases under standard conditions, whereas the corresponding methyl and ethyl alcohols are liquids. Amines possess a characteristic ammonia smell, and liquid amines have a distinctive "fishy" and foul smell.
The nitrogen atom features a lone electron pair that can bind H+ to form an ammonium ion R3NH+. The lone electron pair is represented in this article by two dots above or next to the N. The water solubility of simple amines is enhanced by hydrogen bonding involving these lone electron pairs. Typically salts of ammonium compounds exhibit the following order of solubility in water: primary ammonium (RNH3+) > secondary ammonium (R2NH2+) > tertiary ammonium (R3NH+). Small aliphatic amines display significant solubility in many solvents, whereas those with large substituents are lipophilic. Aromatic amines, such as aniline, have their lone pair electrons conjugated into the benzene ring, so their tendency to engage in hydrogen bonding is diminished. Their boiling points are high and their solubility in water is low.
Spectroscopic identification
Typically the presence of an amine functional group is deduced by a combination of techniques, including mass spectrometry as well as NMR and IR spectroscopies. 1H NMR signals for amines disappear upon treatment of the sample with D2O. In their infrared spectrum primary amines exhibit two N-H bands, whereas secondary amines exhibit only one.
Structure
Alkyl amines
Alkyl amines characteristically feature tetrahedral nitrogen centers. C-N-C and C-N-H angles approach the idealized angle of 109°. C-N distances are slightly shorter than C-C distances. The energy barrier for the nitrogen inversion of the stereocenter is about 7 kcal/mol for a trialkylamine. The interconversion has been compared to the inversion of an open umbrella in a strong wind.
Amines of the type NHRR' and NRR′R″ are chiral: the nitrogen center bears four substituents counting the lone pair. Because of the low barrier to inversion, amines of the type NHRR' cannot be obtained in optical purity. For chiral tertiary amines, NRR′R″ can only be resolved when the R, R', and R″ groups are constrained in cyclic structures such as N-substituted aziridines (quaternary ammonium salts are resolvable).
Aromatic amines
In aromatic amines ("anilines"), nitrogen is often nearly planar owing to conjugation of the lone pair with the aryl substituent. The C-N distance is correspondingly shorter. In aniline, the C-N distance is the same as the C-C distances.
Basicity
Like ammonia, amines are bases. Compared to alkali metal hydroxides, amines are weaker (see table for examples of conjugate acid Ka values).
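Conjugate-acid Ka values convert directly to basicity constants through the ion product of water. A short worked example follows; the pKa of about 10.6 for methylammonium is a standard literature value quoted from general knowledge, not taken from the table referenced above.

```latex
% For a base B with conjugate acid BH+ (in water at 25 degrees Celsius):
K_\mathrm{w} = K_\mathrm{a}(\mathrm{BH^+})\,K_\mathrm{b}(\mathrm{B})
\quad\Longrightarrow\quad
\mathrm{p}K_\mathrm{a} + \mathrm{p}K_\mathrm{b} = \mathrm{p}K_\mathrm{w} = 14.0
% Example: methylamine, with pKa(CH3NH3+) of about 10.6:
\mathrm{p}K_\mathrm{b} = 14.0 - 10.6 = 3.4
```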
The basicity of amines depends on:
The electronic properties of the substituents (alkyl groups enhance the basicity, aryl groups diminish it).
The degree of solvation of the protonated amine, which includes steric hindrance by the groups on nitrogen.
Electronic effects
Owing to inductive effects, the basicity of an amine might be expected to increase with the number of alkyl groups on the amine. Correlations are complicated owing to the effects of solvation which are opposite the trends for inductive effects. Solvation effects also dominate the basicity of aromatic amines (anilines). For anilines, the lone pair of electrons on nitrogen delocalizes into the ring, resulting in decreased basicity. Substituents on the aromatic ring, and their positions relative to the amino group, also affect basicity as seen in the table.
Solvation effects
Solvation significantly affects the basicity of amines. N-H groups strongly interact with water, especially in ammonium ions. Consequently, the basicity of ammonia is enhanced by a factor of 10^11 by solvation. The intrinsic basicity of amines, i.e. the situation where solvation is unimportant, has been evaluated in the gas phase. In the gas phase, amines exhibit the basicities predicted from the electron-releasing effects of the organic substituents: tertiary amines are more basic than secondary amines, which are more basic than primary amines, and ammonia is least basic. The order of pKb values (basicities in water) does not follow this order. Similarly, aniline is more basic than ammonia in the gas phase, but ten thousand times less so in aqueous solution.
In aprotic polar solvents such as DMSO, DMF, and acetonitrile the energy of solvation is not as high as in protic polar solvents like water and methanol. For this reason, the basicity of amines in these aprotic solvents is almost solely governed by the electronic effects.
Synthesis
From alcohols
Industrially significant alkyl amines are prepared from ammonia by alkylation with alcohols:
ROH + NH3 -> RNH2 + H2O
From alkyl and aryl halides
Unlike the reaction of amines with alcohols the reaction of amines and ammonia with alkyl halides is used for synthesis in the laboratory:
RX + 2 R'NH2 -> RR'NH + [RR'NH2]X
In such reactions, which are more useful for alkyl iodides and bromides, the degree of alkylation is difficult to control such that one obtains mixtures of primary, secondary, and tertiary amines, as well as quaternary ammonium salts.
Selectivity can be improved via the Delépine reaction, although this is rarely employed on an industrial scale. Selectivity is also assured in the Gabriel synthesis, which involves an organohalide reacting with potassium phthalimide.
Aryl halides are much less reactive toward amines and for that reason are more controllable. A popular way to prepare aryl amines is the Buchwald-Hartwig reaction.
From alkenes
Disubstituted alkenes react with HCN in the presence of strong acids to give formamides, which can be decarbonylated. This method, the Ritter reaction, is used industrially to produce tertiary amines such as tert-octylamine.
Hydroamination of alkenes is also widely practiced. The reaction is catalyzed by zeolite-based solid acids.
Reductive routes
Via the process of hydrogenation, unsaturated N-containing functional groups are reduced to amines using hydrogen in the presence of a nickel catalyst. Suitable groups include nitriles, azides, imines including oximes, amides, and nitro groups. In the case of nitriles, reactions are sensitive to acidic or alkaline conditions, which can cause hydrolysis of the group. Lithium aluminium hydride (LiAlH4) is more commonly employed for the reduction of these same groups on the laboratory scale.
Many amines are produced from aldehydes and ketones via reductive amination, which can either proceed catalytically or stoichiometrically.
Aniline () and its derivatives are prepared by reduction of the nitroaromatics. In industry, hydrogen is the preferred reductant, whereas, in the laboratory, tin and iron are often employed.
Specialized methods
Many methods exist for the preparation of amines, many of these methods being rather specialized.
Reactions
Alkylation, acylation, and sulfonation, etc.
Aside from their basicity, the dominant reactivity of amines is their nucleophilicity. Most primary amines are good ligands for metal ions to give coordination complexes. Amines are alkylated by alkyl halides. Acyl chlorides and acid anhydrides react with primary and secondary amines to form amides (the "Schotten–Baumann reaction").
Similarly, with sulfonyl chlorides, one obtains sulfonamides. This transformation, known as the Hinsberg reaction, is a chemical test for the presence of amines.
Because amines are basic, they neutralize acids to form the corresponding ammonium salts. When formed from carboxylic acids and primary and secondary amines, these salts thermally dehydrate to form the corresponding amides.
Amines undergo sulfamation upon treatment with sulfur trioxide or sources thereof:
R2NH + SO3 -> R2NSO3H
Acid-base reactions
Alkyl amines protonate near pH = 7 to give alkylammonium derivatives.
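The degree of protonation at a given pH follows from the Henderson–Hasselbalch equation. A worked sketch follows; the conjugate-acid pKa of 10.6 is an assumed, representative alkylammonium value, not a figure from this article.

```latex
% Fraction of an amine B present as the ammonium ion BH+ at a given pH,
% from pH = pKa + log10([B]/[BH+]):
f_{\mathrm{BH^+}} = \frac{1}{1 + 10^{\,\mathrm{pH}-\mathrm{p}K_\mathrm{a}}}
% At pH 7 with an assumed pKa(BH+) = 10.6 (typical of alkylammonium ions):
f_{\mathrm{BH^+}} = \frac{1}{1 + 10^{\,7-10.6}} \approx 0.9997
```

So at neutral pH a typical alkyl amine exists almost entirely as its ammonium ion, consistent with the statement above.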
Diazotization
Amines react with nitrous acid to give diazonium salts. The alkyl diazonium salts are of little importance because they are too unstable. The most important members are derivatives of aromatic amines such as aniline ("phenylamine") (A = aryl or naphthyl):
ANH2 + HNO2 + HX -> AN2+ + X- + 2 H2O
Anilines and naphthylamines form more stable diazonium salts, which can be isolated in crystalline form. Diazonium salts undergo a variety of useful transformations involving replacement of the N2 group with anions. For example, cuprous cyanide gives the corresponding nitriles:
AN2+ + Y- -> AY + N2
Aryldiazonium salts couple with electron-rich aromatic compounds such as phenols to form azo compounds. Such reactions are widely applied to the production of dyes.
Conversion to imines
Imine formation is an important reaction. Primary amines react with ketones and aldehydes to form imines. In the case of formaldehyde (R′ = H), these products typically exist as cyclic trimers.
RNH2 + R'_2C=O -> R'_2C=NR + H2O
Reduction of these imines gives secondary amines:
R'_2C=NR + H2 -> R'_2CH-NHR
Similarly, secondary amines react with ketones and aldehydes to form enamines:
R2NH + R'(R''CH2)C=O -> R''CH=C(NR2)R' + H2O
Overview
An overview of the reactions of amines is given below:
Biological activity
Amines are ubiquitous in biology. The breakdown of amino acids releases amines, famously in the case of decaying fish which smell of trimethylamine. Many neurotransmitters are amines, including epinephrine, norepinephrine, dopamine, serotonin, and histamine. Protonated amino groups () are the most common positively charged moieties in proteins, specifically in the amino acid lysine. The anionic polymer DNA is typically bound to various amine-rich proteins. Additionally, the terminal charged primary ammonium on lysine forms salt bridges with carboxylate groups of other amino acids in polypeptides, which is one of the primary influences on the three-dimensional structures of proteins.
Amine hormones
Hormones derived from the modification of amino acids are referred to as amine hormones. Typically, the original structure of the amino acid is modified such that the −COOH, or carboxyl, group is removed, whereas the −NH3+, or amine, group remains. Amine hormones are synthesized from the amino acids tryptophan or tyrosine.
Application of amines
Dyes
Primary aromatic amines are used as starting materials for the manufacture of azo dyes. They react with nitrous acid to form diazonium salts, which can undergo coupling reactions to form azo compounds. As azo compounds are highly coloured, they are widely used in the dyeing industry; examples include:
Methyl orange
Direct brown 138
Sunset yellow FCF
Ponceau
Drugs
Most drugs and drug candidates contain amine functional groups:
Chlorpheniramine is an antihistamine that helps to relieve allergic disorders due to cold, hay fever, itchy skin, insect bites and stings.
Chlorpromazine is a tranquilizer that sedates without inducing sleep. It is used to relieve anxiety, excitement, restlessness or even mental disorder.
Ephedrine and phenylephrine, as amine hydrochlorides, are used as decongestants.
Amphetamine, methamphetamine, and methcathinone are psychostimulant amines that are listed as controlled substances by the US DEA.
Thioridazine, an antipsychotic drug, is an amine which is believed to exhibit its antipsychotic effects, in part, through its effects on other amines.
Amitriptyline, imipramine, lofepramine and clomipramine are tricyclic antidepressants and tertiary amines.
Nortriptyline, desipramine, and amoxapine are tricyclic antidepressants and secondary amines. (The tricyclics are grouped by the nature of the final amino group on the side chain.)
Substituted tryptamines and phenethylamines are key basic structures for a large variety of psychedelic drugs.
Opiate analgesics such as morphine, codeine, and heroin are tertiary amines.
Gas treatment
Aqueous monoethanolamine (MEA), diglycolamine (DGA), diethanolamine (DEA), diisopropanolamine (DIPA) and methyldiethanolamine (MDEA) are widely used industrially for removing carbon dioxide (CO2) and hydrogen sulfide (H2S) from natural gas and refinery process streams. They may also be used to remove CO2 from combustion gases and flue gases and may have potential for abatement of greenhouse gases. Related processes are known as sweetening.
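A sketch of the underlying absorption chemistry, drawn from standard textbook equilibria rather than from this article: primary and secondary alkanolamines such as MEA and DEA bind CO2 as a carbamate, while tertiary amines such as MDEA promote hydration of CO2 to bicarbonate.

```latex
% Primary/secondary amines (e.g. MEA): carbamate formation,
% limiting capacity of roughly 0.5 mol CO2 per mol amine
2\,\mathrm{RNH_2} + \mathrm{CO_2} \rightleftharpoons \mathrm{RNHCOO^-} + \mathrm{RNH_3^+}
% Tertiary amines (e.g. MDEA): bicarbonate formation,
% up to roughly 1 mol CO2 per mol amine
\mathrm{R_3N} + \mathrm{CO_2} + \mathrm{H_2O} \rightleftharpoons \mathrm{R_3NH^+} + \mathrm{HCO_3^-}
```

Because both equilibria are reversible, heating the loaded solution releases the CO2 and regenerates the amine, which is what makes cyclic absorber-stripper operation possible.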
Epoxy resin curing agents
Amines are often used as epoxy resin curing agents. These include dimethylethylamine, cyclohexylamine, and a variety of diamines such as 4,4′-diaminodicyclohexylmethane. Multifunctional amines such as tetraethylenepentamine and triethylenetetramine are also widely used in this capacity. The reaction proceeds by the lone pair of electrons on the amine nitrogen attacking the outermost carbon on the oxirane ring of the epoxy resin. This relieves ring strain on the epoxide and is the driving force of the reaction.
Safety
Low molecular weight simple amines, such as ethylamine, are only weakly toxic, with LD50 values between 100 and 1000 mg/kg. They are skin irritants, especially as some are easily absorbed through the skin. Amines are a broad class of compounds, and more complex members of the class can be extremely bioactive, for example strychnine.
See also
Acid-base extraction
Amine value
Amine gas treating
Ammine
Biogenic amine
Ligand isomerism
Official naming rules for amines as determined by the International Union of Pure and Applied Chemistry (IUPAC)
References
Further reading
External links
Synthesis of amines
Factsheet, amines in food
Functional groups
|
https://en.wikipedia.org/wiki/Aspirin
|
Aspirin, also known as acetylsalicylic acid (ASA), is a nonsteroidal anti-inflammatory drug (NSAID) used to reduce pain, fever, and/or inflammation, and as an antithrombotic. Specific inflammatory conditions which aspirin is used to treat include Kawasaki disease, pericarditis, and rheumatic fever.
Aspirin is also used long-term to help prevent further heart attacks, ischaemic strokes, and blood clots in people at high risk. For pain or fever, effects typically begin within 30 minutes. Aspirin works similarly to other NSAIDs but also suppresses the normal functioning of platelets.
One common adverse effect is an upset stomach. More significant side effects include stomach ulcers, stomach bleeding, and worsening asthma. Bleeding risk is greater among those who are older, drink alcohol, take other NSAIDs, or are on other blood thinners. Aspirin is not recommended in the last part of pregnancy. It is not generally recommended in children with infections because of the risk of Reye syndrome. High doses may result in ringing in the ears.
A precursor to aspirin found in the bark of the willow tree (genus Salix) has been used for its health effects for at least 2,400 years. In 1853, chemist Charles Frédéric Gerhardt treated the medicine sodium salicylate with acetyl chloride to produce acetylsalicylic acid for the first time. Over the next 50 years, other chemists established the chemical structure and devised more efficient production methods.
Aspirin is available without medical prescription as a proprietary or generic medication in most jurisdictions. It is one of the most widely used medications globally, with an estimated 50 to 120 billion pills consumed each year, and is on the World Health Organization's List of Essential Medicines. In 2020, it was the 36th most commonly prescribed medication in the United States, with more than 17 million prescriptions.
Brand vs. generic name
In 1897, scientists at the Bayer company began studying acetylsalicylic acid as a less-irritating replacement medication for common salicylate medicines. By 1899, Bayer had named it "Aspirin" and sold it around the world.
Aspirin's popularity grew over the first half of the 20th century, leading to competition between many brands and formulations. The word Aspirin was Bayer's brand name; however, their rights to the trademark were lost or sold in many countries. The name is ultimately a blend of the prefix a- (from acetyl), spir- (from Spiraea, the meadowsweet plant genus from which the acetylsalicylic acid was originally derived at Bayer), and -in, a common chemical suffix.
Chemical properties
Aspirin decomposes rapidly in solutions of ammonium acetate or the acetates, carbonates, citrates, or hydroxides of the alkali metals. It is stable in dry air, but gradually hydrolyses in contact with moisture to acetic and salicylic acids. In solution with alkalis, the hydrolysis proceeds rapidly and the clear solutions formed may consist entirely of acetate and salicylate.
Like flour mills, factories producing aspirin tablets must control the amount of the powder that becomes airborne inside the building, because the powder-air mixture can be explosive. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit in the United States of 5 mg/m3 (time-weighted average). In 1989, the Occupational Safety and Health Administration (OSHA) set a legal permissible exposure limit for aspirin of 5 mg/m3, but this was vacated by the AFL-CIO v. OSHA decision in 1993.
Synthesis
The synthesis of aspirin is classified as an esterification reaction. Salicylic acid is treated with acetic anhydride, an acid derivative, causing a chemical reaction that turns salicylic acid's hydroxyl group into an ester group (R-OH → R-OCOCH3). This process yields aspirin and acetic acid, which is considered a byproduct of this reaction. Small amounts of sulfuric acid (and occasionally phosphoric acid) are almost always used as a catalyst. This method is commonly demonstrated in undergraduate teaching labs.
The reaction between acetic acid and salicylic acid can also form aspirin, but this esterification is reversible and the presence of water can lead to hydrolysis of the aspirin, so an anhydrous reagent is preferred.
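Written out with molecular formulas, the esterification and the competing hydrolysis are as follows; this is standard stoichiometry, with the sulfuric acid catalyst omitted from the balanced equations.

```latex
% Acetylation of salicylic acid by acetic anhydride:
\mathrm{C_7H_6O_3} + \mathrm{(CH_3CO)_2O} \longrightarrow \mathrm{C_9H_8O_4} + \mathrm{CH_3COOH}
% Hydrolysis of the product in the presence of water, which is why
% anhydrous reagents and glassware are preferred:
\mathrm{C_9H_8O_4} + \mathrm{H_2O} \rightleftharpoons \mathrm{C_7H_6O_3} + \mathrm{CH_3COOH}
```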
Reaction mechanism
Formulations containing high concentrations of aspirin often smell like vinegar because aspirin can decompose through hydrolysis in moist conditions, yielding salicylic and acetic acids.
Physical properties
Aspirin, an acetyl derivative of salicylic acid, is a white, crystalline, weakly acidic substance which melts at 136 °C and decomposes around 140 °C. Its acid dissociation constant (pKa) is 3.5 at 25 °C.
Polymorphism
Polymorphism, or the ability of a substance to form more than one crystal structure, is important in the development of pharmaceutical ingredients. Many drugs receive regulatory approval for only a single crystal form or polymorph.
Only one polymorph of aspirin, Form I, had been proven, though the existence of a second polymorph was debated from the 1960s; one report from 1981 noted that, when aspirin is crystallized in the presence of aspirin anhydride, its diffractogram shows weak additional peaks. Though at the time these were dismissed as mere impurity, they were, in retrospect, evidence of Form II aspirin.
Form II was reported in 2005, found after attempted co-crystallization of aspirin and levetiracetam from hot acetonitrile.
In form I, pairs of aspirin molecules form centrosymmetric dimers through the acetyl groups with the (acidic) methyl proton to carbonyl hydrogen bonds. In form II, each aspirin molecule forms the same hydrogen bonds, but with two neighbouring molecules instead of one. With respect to the hydrogen bonds formed by the carboxylic acid groups, both polymorphs form identical dimer structures. The aspirin polymorphs contain identical 2-dimensional sections and are therefore more precisely described as polytypes.
Pure Form II aspirin can be prepared by seeding the batch with aspirin anhydride at 15% by weight.
Mechanism of action
Discovery of the mechanism
In 1971, British pharmacologist John Robert Vane, then employed by the Royal College of Surgeons in London, showed aspirin suppressed the production of prostaglandins and thromboxanes. For this discovery he was awarded the 1982 Nobel Prize in Physiology or Medicine, jointly with Sune Bergström and Bengt Ingemar Samuelsson.
Prostaglandins and thromboxanes
Aspirin's ability to suppress the production of prostaglandins and thromboxanes is due to its irreversible inactivation of the cyclooxygenase (COX; officially known as prostaglandin-endoperoxide synthase, PTGS) enzyme required for prostaglandin and thromboxane synthesis. Aspirin acts as an acetylating agent where an acetyl group is covalently attached to a serine residue in the active site of the PTGS enzyme (Suicide inhibition). This makes aspirin different from other NSAIDs (such as diclofenac and ibuprofen), which are reversible inhibitors.
Low-dose aspirin use irreversibly blocks the formation of thromboxane A2 in platelets, producing an inhibitory effect on platelet aggregation during the lifetime of the affected platelet (8–9 days). This antithrombotic property makes aspirin useful for reducing the incidence of heart attacks in people who have had a heart attack, unstable angina, ischemic stroke or transient ischemic attack. 40 mg of aspirin a day is able to inhibit a large proportion of maximum thromboxane A2 release provoked acutely, with prostaglandin I2 synthesis being little affected; however, higher doses of aspirin are required to attain further inhibition.
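Because platelets cannot resynthesize COX once it is acetylated, inhibition persists until new platelets replace old ones; with an 8–9 day lifespan, roughly 10% of the platelet pool is renewed each day. The following is a minimal sketch of that turnover arithmetic; the flat 10% daily turnover and complete acetylation by each dose are simplifying assumptions for illustration, not pharmacological data.

```python
# Sketch: fraction of platelet COX-1 activity that has recovered after the
# last aspirin dose, assuming each dose acetylates all circulating platelets
# and ~10% of the platelet pool is replaced per day (simplifying assumptions).
TURNOVER_PER_DAY = 0.10

def active_fraction(days_since_last_dose: int) -> float:
    """Active COX-1 fraction as new, unexposed platelets enter circulation."""
    return min(1.0, TURNOVER_PER_DAY * days_since_last_dose)

for day in range(11):
    print(f"day {day:2d}: ~{active_fraction(day):.0%} of platelet COX-1 active")
```

The sketch shows why once-daily dosing keeps platelet thromboxane production suppressed, and why full platelet function takes on the order of a week or more to return after stopping.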
Prostaglandins, local hormones produced in the body, have diverse effects, including the transmission of pain information to the brain, modulation of the hypothalamic thermostat, and inflammation. Thromboxanes are responsible for the aggregation of platelets that form blood clots. Heart attacks are caused primarily by blood clots, and low doses of aspirin are seen as an effective medical intervention to prevent a second acute myocardial infarction.
COX-1 and COX-2 inhibition
At least two different types of cyclooxygenases, COX-1 and COX-2, are acted on by aspirin. Aspirin irreversibly inhibits COX-1 and modifies the enzymatic activity of COX-2. COX-2 normally produces prostanoids, most of which are proinflammatory. Aspirin-modified PTGS2 (prostaglandin-endoperoxide synthase 2) produces lipoxins, most of which are anti-inflammatory. Newer NSAID drugs, COX-2 inhibitors (coxibs), have been developed to inhibit only PTGS2, with the intent to reduce the incidence of gastrointestinal side effects.
Several COX-2 inhibitors, such as rofecoxib (Vioxx), have been withdrawn from the market, after evidence emerged that PTGS2 inhibitors increase the risk of heart attack and stroke. Endothelial cells lining the microvasculature in the body are proposed to express PTGS2, and, by selectively inhibiting PTGS2, prostaglandin production (specifically, PGI2; prostacyclin) is downregulated with respect to thromboxane levels, as PTGS1 in platelets is unaffected. Thus, the protective anticoagulative effect of PGI2 is removed, increasing the risk of thrombus and associated heart attacks and other circulatory problems. Since platelets have no DNA, they are unable to synthesize new PTGS once aspirin has irreversibly inhibited the enzyme, an important difference as compared with reversible inhibitors.
Furthermore, aspirin, while inhibiting the ability of COX-2 to form pro-inflammatory products such as the prostaglandins, converts this enzyme's activity from a prostaglandin-forming cyclooxygenase to a lipoxygenase-like enzyme: aspirin-treated COX-2 metabolizes a variety of polyunsaturated fatty acids to hydroperoxy products which are then further metabolized to specialized proresolving mediators such as the aspirin-triggered lipoxins, aspirin-triggered resolvins, and aspirin-triggered maresins. These mediators possess potent anti-inflammatory activity. It is proposed that this aspirin-triggered transition of COX-2 from cyclooxygenase to lipoxygenase activity and the consequential formation of specialized proresolving mediators contributes to the anti-inflammatory effects of aspirin.
Additional mechanisms
Aspirin has been shown to have at least three additional modes of action. It uncouples oxidative phosphorylation in cartilaginous (and hepatic) mitochondria by diffusing from the intermembrane space as a proton carrier back into the mitochondrial matrix, where it ionizes once again to release protons; aspirin thus buffers and transports the protons. When high doses are given, it may actually cause fever, owing to the heat released from the electron transport chain, as opposed to the antipyretic action of aspirin seen with lower doses. In addition, aspirin induces the formation of NO-radicals in the body, which have been shown in mice to reduce inflammation by an independent mechanism that lowers leukocyte adhesion, an important step in the immune response to infection; however, the evidence is insufficient to show that aspirin helps to fight infection. More recent data also suggest salicylic acid and its derivatives modulate signalling through NF-κB. NF-κB, a transcription factor complex, plays a central role in many biological processes, including inflammation.
Aspirin is readily broken down in the body to salicylic acid, which itself has anti-inflammatory, antipyretic, and analgesic effects. In 2012, salicylic acid was found to activate AMP-activated protein kinase, which has been suggested as a possible explanation for some of the effects of both salicylic acid and aspirin. The acetyl portion of the aspirin molecule has its own targets. Acetylation of cellular proteins is a well-established phenomenon in the regulation of protein function at the post-translational level. Aspirin is able to acetylate several other targets in addition to COX isoenzymes. These acetylation reactions may explain many hitherto unexplained effects of aspirin.
Formulations
Aspirin is produced in many formulations, with some differences in effect. In particular, aspirin can cause gastrointestinal bleeding, and formulations are sought which deliver the benefits of aspirin while mitigating harmful bleeding. Formulations may be combined (e.g., buffered + vitamin C).
Tablets, typically of about 75–100 mg and 300–320 mg of immediate-release aspirin (IR-ASA).
Dispersible tablets.
Enteric-coated tablets.
Buffered formulations containing aspirin with one of many buffering agents.
Formulations of aspirin with vitamin C (ASA-VitC)
A phospholipid-aspirin complex liquid formulation, PL-ASA; the phospholipid coating was being trialled to determine whether it causes less gastrointestinal damage.
Pharmacokinetics
Acetylsalicylic acid is a weak acid, and very little of it is ionized in the stomach after oral administration. Acetylsalicylic acid is quickly absorbed through the cell membrane in the acidic conditions of the stomach. The increased pH and larger surface area of the small intestine causes aspirin to be absorbed more slowly there, as more of it is ionized. Owing to the formation of concretions, aspirin is absorbed much more slowly during overdose, and plasma concentrations can continue to rise for up to 24 hours after ingestion.
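The pH dependence of absorption described above is ordinary weak-acid ionization. A minimal sketch follows; the pKa of 3.5 comes from this section, while the stomach, intestinal, and plasma pH values are assumed, typical figures used only for illustration.

```python
# Fraction of a weak acid HA that remains un-ionized at a given pH, from the
# Henderson-Hasselbalch equation: pH = pKa + log10([A-]/[HA]).
PKA_ASPIRIN = 3.5  # from this section

def unionized_fraction(ph: float, pka: float = PKA_ASPIRIN) -> float:
    return 1.0 / (1.0 + 10 ** (ph - pka))

for label, ph in [("stomach (assumed pH 1.5)", 1.5),
                  ("small intestine (assumed pH 6.0)", 6.0),
                  ("plasma (assumed pH 7.4)", 7.4)]:
    print(f"{label}: {unionized_fraction(ph):.2%} un-ionized")
```

At the assumed stomach pH, about 99% of the drug is un-ionized and membrane-permeable, while at intestinal pH well under 1% is, matching the slower absorption described above.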
About 50–80% of salicylate in the blood is bound to human serum albumin, while the rest remains in the active, ionized state; protein binding is concentration-dependent. Saturation of binding sites leads to more free salicylate and increased toxicity. The volume of distribution is 0.1–0.2 L/kg. Acidosis increases the volume of distribution because of enhancement of tissue penetration of salicylates.
As much as 80% of therapeutic doses of salicylic acid is metabolized in the liver. Conjugation with glycine forms salicyluric acid, and conjugation with glucuronic acid forms two different glucuronide esters. The conjugate with the acetyl group intact is referred to as the acyl glucuronide; the deacetylated conjugate is the phenolic glucuronide. These metabolic pathways have only a limited capacity. Small amounts of salicylic acid are also hydroxylated to gentisic acid. With large salicylate doses, the kinetics switch from first-order to zero-order, as metabolic pathways become saturated and renal excretion becomes increasingly important.
Salicylates are excreted mainly by the kidneys as salicyluric acid (75%), free salicylic acid (10%), salicylic phenolic glucuronide (10%), acyl glucuronide (5%), gentisic acid (<1%), and 2,3-dihydroxybenzoic acid. When small doses (less than 250 mg in an adult) are ingested, all pathways proceed by first-order kinetics, with an elimination half-life of about 2.0 h to 4.5 h. When higher doses of salicylate are ingested (more than 4 g), the half-life becomes much longer (15 h to 30 h), because the biotransformation pathways concerned with the formation of salicyluric acid and salicyl phenolic glucuronide become saturated. Renal excretion of salicylic acid becomes increasingly important as the metabolic pathways become saturated, because it is extremely sensitive to changes in urinary pH: a 10- to 20-fold increase in renal clearance occurs when urine pH is increased from 5 to 8, and urinary alkalinization exploits this aspect of salicylate elimination. Short-term aspirin use in therapeutic doses may precipitate reversible acute kidney injury when the patient is ill with glomerulonephritis or cirrhosis, and aspirin is contraindicated for some patients with chronic kidney disease and some children with congestive heart failure.
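The switch from first-order to zero-order elimination described above is saturation kinetics, commonly modeled in Michaelis–Menten form. The sketch below illustrates the qualitative behaviour only; the Vmax and Km values are invented round numbers, not measured pharmacological parameters.

```python
# Saturable (Michaelis-Menten) elimination: dC/dt = -Vmax * C / (Km + C).
# At C << Km this is effectively first-order (rate proportional to C);
# at C >> Km it approaches zero-order (constant rate Vmax), so the
# apparent half-life stretches as the dose grows.
VMAX = 20.0  # mg/L per hour (illustrative round number, not measured data)
KM = 30.0    # mg/L          (illustrative round number, not measured data)

def time_to_halve(c0: float, dt: float = 0.001) -> float:
    """Numerically integrate until concentration falls to half its start."""
    c, t = c0, 0.0
    while c > c0 / 2:
        c -= VMAX * c / (KM + c) * dt
        t += dt
    return t

for c0 in (10.0, 500.0):  # therapeutic-like level vs. overdose-like level
    print(f"C0 = {c0:5.0f} mg/L -> apparent half-life ~ {time_to_halve(c0):.1f} h")
```

With these illustrative parameters the low concentration halves in about an hour while the high one takes over ten, mirroring the stretch from a 2–4.5 h half-life to 15–30 h noted above.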
History
Medicines made from willow and other salicylate-rich plants appear in clay tablets from ancient Sumer as well as the Ebers Papyrus from ancient Egypt. Hippocrates referred to the use of salicylic tea to reduce fevers around 400 BC, and willow bark preparations were part of the pharmacopoeia of Western medicine in classical antiquity and the Middle Ages. Willow bark extract became recognized for its specific effects on fever, pain, and inflammation in the mid-eighteenth century. By the nineteenth century, pharmacists were experimenting with and prescribing a variety of chemicals related to salicylic acid, the active component of willow extract.
In 1853, chemist Charles Frédéric Gerhardt treated sodium salicylate with acetyl chloride to produce acetylsalicylic acid for the first time; in the second half of the 19th century, other academic chemists established the compound's chemical structure and devised more efficient methods of synthesis. In 1897, scientists at the drug and dye firm Bayer began investigating acetylsalicylic acid as a less-irritating replacement for standard common salicylate medicines, and identified a new way to synthesize it. By 1899, Bayer had dubbed this drug Aspirin and was selling it globally. The word Aspirin was Bayer's brand name, rather than the generic name of the drug; however, Bayer's rights to the trademark were lost or sold in many countries. Aspirin's popularity grew over the first half of the 20th century leading to fierce competition with the proliferation of aspirin brands and products.
Aspirin's popularity declined after the development of acetaminophen/paracetamol in 1956 and ibuprofen in 1962. In the 1960s and 1970s, John Vane and others discovered the basic mechanism of aspirin's effects, while clinical trials and other studies from the 1960s to the 1980s established aspirin's efficacy as an anti-clotting agent that reduces the risk of clotting diseases. The initial large studies on the use of low-dose aspirin to prevent heart attacks that were published in the 1970s and 1980s helped spur reform in clinical research ethics and guidelines for human subject research and US federal law, and are often cited as examples of clinical trials that included only men, but from which people drew general conclusions that did not hold true for women.
Aspirin sales revived considerably in the last decades of the 20th century, and remain strong in the 21st century with widespread use as a preventive treatment for heart attacks and strokes.
Trademark
Bayer lost its trademark for Aspirin in the United States and some other countries in actions taken between 1918 and 1921 because it had failed to use the name for its own product correctly and had for years allowed the use of "Aspirin" by other manufacturers without defending the intellectual property rights. Today, aspirin is a generic trademark in many countries. Aspirin, with a capital "A", remains a registered trademark of Bayer in Germany, Canada, Mexico, and in over 80 other countries, for acetylsalicylic acid in all markets, but using different packaging and physical aspects for each.
Compendial status
United States Pharmacopeia
British Pharmacopoeia
Medical use
Aspirin is used in the treatment of a number of conditions, including fever, pain, rheumatic fever, and inflammatory conditions, such as rheumatoid arthritis, pericarditis, and Kawasaki disease. Lower doses of aspirin have also been shown to reduce the risk of death from a heart attack, or the risk of stroke in people who are at high risk or who have cardiovascular disease, but not in elderly people who are otherwise healthy. There is evidence that aspirin is effective at preventing colorectal cancer, though the mechanisms of this effect are unclear. In the United States, the selective initiation of low-dose aspirin, based on an individualised assessment, has been deemed reasonable for the primary prevention of cardiovascular disease in people aged between 40 and 59 who have a 10% or greater risk of developing cardiovascular disease over the next 10 years and are not at an increased risk of bleeding.
Pain
Aspirin is an effective analgesic for acute pain, although it is generally considered inferior to ibuprofen because aspirin is more likely to cause gastrointestinal bleeding. Aspirin is generally ineffective for those pains caused by muscle cramps, bloating, gastric distension, or acute skin irritation. As with other NSAIDs, combinations of aspirin and caffeine provide slightly greater pain relief than aspirin alone. Effervescent formulations of aspirin relieve pain faster than aspirin in tablets, which makes them useful for the treatment of migraines. Topical aspirin may be effective for treating some types of neuropathic pain.
Aspirin, either by itself or in a combined formulation, effectively treats certain types of a headache, but its efficacy may be questionable for others. Secondary headaches, meaning those caused by another disorder or trauma, should be promptly treated by a medical provider. Among primary headaches, the International Classification of Headache Disorders distinguishes between tension headache (the most common), migraine, and cluster headache. Aspirin or other over-the-counter analgesics are widely recognized as effective for the treatment of tension headaches. Aspirin, especially as a component of an aspirin/paracetamol/caffeine combination, is considered a first-line therapy in the treatment of migraine, and comparable to lower doses of sumatriptan. It is most effective at stopping migraines when they are first beginning.
Fever
Like its ability to control pain, aspirin's ability to control fever is due to its action on the prostaglandin system through its irreversible inhibition of COX. Although aspirin's use as an antipyretic in adults is well established, many medical societies and regulatory agencies, including the American Academy of Family Physicians, the American Academy of Pediatrics, and the Food and Drug Administration, strongly advise against using aspirin for the treatment of fever in children because of the risk of Reye's syndrome, a rare but often fatal illness associated with the use of aspirin or other salicylates in children during episodes of viral or bacterial infection. Because of the risk of Reye's syndrome in children, in 1986, the US Food and Drug Administration (FDA) required labeling on all aspirin-containing medications advising against its use in children and teenagers.
Inflammation
Aspirin is used as an anti-inflammatory agent for both acute and long-term inflammation, as well as for the treatment of inflammatory diseases, such as rheumatoid arthritis.
Heart attacks and strokes
Aspirin is an important part of the treatment of those who have had a heart attack. It is generally not recommended for routine use by people with no other health problems, including those over the age of 70.
The 2009 Antithrombotic Trialists' Collaboration meta-analysis, published in the Lancet, evaluated the efficacy and safety of low-dose aspirin in secondary prevention. In those with prior ischaemic stroke or acute myocardial infarction, daily low-dose aspirin was associated with a 19% relative risk reduction of serious cardiovascular events (non-fatal myocardial infarction, non-fatal stroke, or vascular death). This came at the expense of a 0.19% absolute risk increase in gastrointestinal bleeding; however, the benefits outweigh the bleeding hazard in this setting. Data from earlier trials had suggested that weight-based dosing of aspirin has greater benefits in primary prevention of cardiovascular outcomes. However, more recent trials were not able to replicate similar outcomes using low-dose aspirin in low-body-weight (<70 kg) individuals in the specific subsets studied, i.e. elderly and diabetic populations, and more evidence is required on the effect of high-dose aspirin in high body weight (≥70 kg).
After percutaneous coronary interventions (PCIs), such as the placement of a coronary artery stent, a U.S. Agency for Healthcare Research and Quality guideline recommends that aspirin be taken indefinitely. Frequently, aspirin is combined with an ADP receptor inhibitor, such as clopidogrel, prasugrel, or ticagrelor, to prevent blood clots; this is called dual antiplatelet therapy (DAPT). The duration of DAPT was advised in United States and European Union guidelines after the CURE and PRODIGY studies. In 2020, a systematic review and network meta-analysis by Khan et al. showed promising benefits of short-term (<6 months) DAPT followed by P2Y12 inhibitors in selected patients, as well as benefits of extended-term (>12 months) DAPT in high-risk patients. In conclusion, the optimal duration of DAPT after PCIs should be personalized by weighing each patient's risk of ischemic events against the risk of bleeding events, with consideration of multiple patient-related and procedure-related factors. Moreover, aspirin should be continued indefinitely after DAPT is complete.
The status of aspirin for the primary prevention of cardiovascular disease is conflicting and inconsistent: recent guidance has stepped back from the broad recommendations of earlier decades, as some newer trials referenced in clinical guidelines show less benefit from adding aspirin alongside anti-hypertensive and cholesterol-lowering therapies. The ASCEND study demonstrated that in high-bleeding-risk diabetics with no prior cardiovascular disease, there is no overall clinical benefit (a 12% decrease in risk of ischaemic events versus a 29% increase in GI bleeding) of low-dose aspirin in preventing serious vascular events over a period of 7.4 years. Similarly, the results of the ARRIVE study showed no benefit of the same dose of aspirin in reducing the time to first cardiovascular outcome in patients with moderate risk of cardiovascular disease over a period of five years. Aspirin has also been suggested as a component of a polypill for prevention of cardiovascular disease. Complicating the use of aspirin for prevention is the phenomenon of aspirin resistance: for people who are resistant, aspirin's efficacy is reduced, and some authors have suggested testing regimens to identify people who are resistant to aspirin.
The United States Preventive Services Task Force (USPSTF) has determined that there is a "small net benefit" for patients aged 40–59 with a 10% or greater 10-year cardiovascular disease (CVD) risk, and "no net benefit" for patients aged over 60. Determining the net benefit was based on balancing the risk reduction of taking aspirin for heart attacks and ischaemic strokes with the increased risk of gastrointestinal bleeding, intracranial bleeding, and hemorrhagic strokes. Their recommendations state that age changes the risk of the medicine: the magnitude of the benefit of aspirin comes from starting at a younger age, while the risk of bleeding, though small, increases with age, particularly for adults over 60, and can be compounded by other risk factors such as diabetes and a history of gastrointestinal bleeding. As a result, the USPSTF suggests that "people ages 40 to 59 who are at higher risk for CVD should decide with their clinician whether to start taking aspirin; people 60 or older should not start taking aspirin to prevent a first heart attack or stroke." Primary prevention guidelines from the American College of Cardiology and the American Heart Association state that aspirin might be considered for patients aged 40–69 with a higher risk of atherosclerotic CVD and without an increased bleeding risk, while they would not recommend aspirin for patients aged over 70 or adults of any age with an increased bleeding risk. They state a CVD risk estimation and a risk discussion should be done before starting on aspirin, and that aspirin should be used "infrequently in the routine primary prevention of (atherosclerotic CVD) because of lack of net benefit". The European Society of Cardiology has made similar recommendations: considering aspirin specifically for patients aged less than 70 at high or very high CVD risk, without any clear contraindications, on a case-by-case basis weighing both ischemic risk and bleeding risk.
Cancer prevention
Aspirin may reduce the overall risk of both getting cancer and dying from cancer. There is substantial evidence for lowering the risk of colorectal cancer (CRC), but aspirin must be taken for at least 10–20 years to see this benefit. It may also slightly reduce the risk of endometrial cancer and prostate cancer.
Some conclude the benefits are greater than the risks due to bleeding in those at average risk. Others are unclear if the benefits are greater than the risk. Given this uncertainty, the 2007 United States Preventive Services Task Force (USPSTF) guidelines on this topic recommended against the use of aspirin for prevention of CRC in people with average risk. Nine years later, however, the USPSTF issued a grade B recommendation for the use of low-dose aspirin (75 to 100 mg/day) "for the primary prevention of CVD [cardiovascular disease] and CRC in adults 50 to 59 years of age who have a 10% or greater 10-year CVD risk, are not at increased risk for bleeding, have a life expectancy of at least 10 years, and are willing to take low-dose aspirin daily for at least 10 years".
A meta-analysis through 2019 said that there was an association between taking aspirin and lower risk of cancer of the colorectum, esophagus, and stomach.
In 2021, the U.S. Preventive Services Task Force raised questions about the use of aspirin in cancer prevention. It noted the results of the 2018 ASPREE (Aspirin in Reducing Events in the Elderly) trial, in which the risk of cancer-related death was higher in the aspirin-treated group than in the placebo group.
Psychiatry
Bipolar disorder
Aspirin, along with several other agents with anti-inflammatory properties, has been repurposed as an add-on treatment for depressive episodes in subjects with bipolar disorder in light of the possible role of inflammation in the pathogenesis of severe mental disorders. A 2022 systematic review concluded that aspirin exposure reduced the risk of depression in a pooled cohort of three studies (HR 0.624, 95% CI: 0.0503, 1.198, P=0.033). However, further high-quality, longer-duration, double-blind randomized controlled trials (RCTs) are needed to determine whether aspirin is an effective add-on treatment for bipolar depression. Thus, notwithstanding the biological rationale, the clinical perspectives of aspirin and anti-inflammatory agents in the treatment of bipolar depression remain uncertain.
Dementia
Although cohort and longitudinal studies have suggested that low-dose aspirin reduces the incidence of dementia, numerous randomized controlled trials have not validated this.
Schizophrenia
Some researchers have speculated the anti-inflammatory effects of aspirin may be beneficial for schizophrenia. Small trials have been conducted but evidence remains lacking.
Other uses
Aspirin is a first-line treatment for the fever and joint-pain symptoms of acute rheumatic fever. The therapy often lasts for one to two weeks, and is rarely indicated for longer periods. After fever and pain have subsided, the aspirin is no longer necessary, since it does not decrease the incidence of heart complications and residual rheumatic heart disease. Naproxen has been shown to be as effective as aspirin and less toxic, but due to the limited clinical experience, naproxen is recommended only as a second-line treatment.
Along with rheumatic fever, Kawasaki disease remains one of the few indications for aspirin use in children in spite of a lack of high quality evidence for its effectiveness.
Low-dose aspirin supplementation has moderate benefits when used for prevention of pre-eclampsia. This benefit is greater when started in early pregnancy.
Aspirin has also demonstrated anti-tumoral effects, via inhibition of the PTTG1 gene, which is often overexpressed in tumors.
Resistance
For some people, aspirin does not have as strong an effect on platelets as for others, an effect known as aspirin-resistance or insensitivity. One study has suggested women are more likely to be resistant than men, and a different, aggregate study of 2,930 people found 28% were resistant.
A study in 100 Italian people found, of the apparent 31% aspirin-resistant subjects, only 5% were truly resistant, and the others were noncompliant.
Another study of 400 healthy volunteers found no subjects who were truly resistant, but some had "pseudoresistance, reflecting delayed and reduced drug absorption".
Meta-analyses and systematic reviews have concluded that laboratory-confirmed aspirin resistance confers increased rates of poorer outcomes in cardiovascular and neurovascular diseases. Although the majority of research conducted has surrounded cardiovascular and neurovascular disease, there is emerging research into the risk of aspirin resistance after orthopaedic surgery, where aspirin is used for venous thromboembolism prophylaxis. Aspirin resistance in orthopaedic surgery, specifically after total hip and knee arthroplasties, is of interest because risk factors for aspirin resistance are also risk factors for venous thromboembolism and for osteoarthritis, whose sequelae include the need for a total hip or knee arthroplasty. Some of these risk factors include obesity, advancing age, diabetes mellitus, dyslipidaemia and inflammatory diseases.
Dosages
Adult aspirin tablets are produced in standardised sizes, which vary slightly from country to country, for example 300 mg in Britain and 325 mg in the United States. Smaller doses are based on these standards, e.g., 75 mg and 81 mg tablets. The 81 mg tablets are commonly called "baby aspirin" or "baby-strength", because they were originally, but are no longer, intended to be administered to infants and children. The slight difference in dosage between the 75 mg and the 81 mg tablets has no medical significance. The dose required for benefit appears to depend on a person's weight: for those weighing less than about 70 kg, low-dose aspirin is effective for preventing cardiovascular disease; for patients above this weight, higher doses are required.
In general, for adults, doses are taken four times a day for fever or arthritis, with doses near the maximal daily dose used historically for the treatment of rheumatic fever. For the prevention of myocardial infarction (MI) in someone with documented or suspected coronary artery disease, much lower doses are taken once daily.
March 2009 recommendations from the USPSTF on the use of aspirin for the primary prevention of coronary heart disease encourage men aged 45–79 and women aged 55–79 to use aspirin when the potential benefit of a reduction in MI for men or stroke for women outweighs the potential harm of an increase in gastrointestinal hemorrhage. The WHI study of postmenopausal women found that aspirin resulted in a 25% lower risk of death from cardiovascular disease and a 14% lower risk of death from any cause, though there was no significant difference between 81 mg and 325 mg aspirin doses. The 2021 ADAPTABLE study also showed no significant difference in cardiovascular events or major bleeding between 81 mg and 325 mg doses of aspirin in patients (both men and women) with established cardiovascular disease.
Low-dose aspirin use was also associated with a trend toward lower risk of cardiovascular events, and lower aspirin doses (75 or 81mg/day) may optimize efficacy and safety for people requiring aspirin for long-term prevention.
In children with Kawasaki disease, aspirin is taken at dosages based on body weight, initially four times a day for up to two weeks and then at a lower dose once daily for a further six to eight weeks.
Adverse effects
In October 2020, the US Food and Drug Administration (FDA) required the drug label to be updated for all nonsteroidal anti-inflammatory medications to describe the risk of kidney problems in unborn babies that result in low amniotic fluid. They recommend avoiding NSAIDs in pregnant women at 20 weeks or later in pregnancy. One exception to the recommendation is the use of low-dose 81mg aspirin at any point in pregnancy under the direction of a health care professional.
Contraindications
Aspirin should not be taken by people who are allergic to ibuprofen or naproxen, or who have salicylate intolerance or a more generalized drug intolerance to NSAIDs, and caution should be exercised in those with asthma or NSAID-precipitated bronchospasm. Owing to its effect on the stomach lining, manufacturers recommend that people with peptic ulcers, mild diabetes, or gastritis seek medical advice before using aspirin. Even if none of these conditions is present, the risk of stomach bleeding is still increased when aspirin is taken with alcohol or warfarin. People with hemophilia or other bleeding tendencies should not take aspirin or other salicylates. Aspirin is known to cause hemolytic anemia in people who have the genetic disease glucose-6-phosphate dehydrogenase deficiency, particularly in large doses and depending on the severity of the disease. Use of aspirin during dengue fever is not recommended owing to increased bleeding tendency.

Aspirin taken at doses of ≤325mg and ≤100mg per day for ≥2 days can increase the odds of suffering a gout attack by 81% and 91%, respectively. This effect may be worsened by high-purine diets, diuretics, and kidney disease, but is eliminated by the urate-lowering drug allopurinol. Daily low-dose aspirin does not appear to worsen kidney function, and in people with moderate chronic kidney disease (CKD) who do not have established cardiovascular disease, aspirin may reduce cardiovascular risk without significantly increasing the risk of bleeding.

Aspirin should not be given to children or adolescents under the age of 16 to control cold or influenza symptoms, as this has been linked with Reye's syndrome.
Gastrointestinal
Aspirin use has been shown to increase the risk of gastrointestinal bleeding. Although some enteric-coated formulations of aspirin are advertised as being "gentle to the stomach", in one study enteric coating did not seem to reduce this risk; the Mayo Clinic agrees, and reports that coated aspirin may also be less effective than plain aspirin at reducing blood clot risk. However, currently available results from clinical studies are insufficient to support this claim about blood clot risk, and larger studies are required to provide more accurate conclusions.
Combining aspirin with other NSAIDs has been shown to further increase the risk of gastrointestinal bleeding. Using aspirin in combination with clopidogrel or warfarin also increases the risk of upper gastrointestinal bleeding.
Blockade of COX-1 by aspirin apparently results in the upregulation of COX-2 as part of a gastric defense. Several trials suggest that the simultaneous use of a COX-2 inhibitor with aspirin may increase the risk of gastrointestinal injury. However, currently available evidence has been unable to prove that this effect is consistently repeatable in everyday clinical practice. More dedicated research is required to provide greater clarity on the subject. Therefore, caution should be exercised if combining aspirin with any "natural" supplements with COX-2-inhibiting properties, such as garlic extracts, curcumin, bilberry, pine bark, ginkgo, fish oil, resveratrol, genistein, quercetin, resorcinol, and others.
"Buffering" is an additional method that is used with the intention of mitigating gastrointestinal bleeding. Buffering agents are intended to work by preventing the aspirin from concentrating in the walls of the stomach, although the benefits of buffered aspirin are disputed. Almost any buffering agent used in antacids can be used; Bufferin, for example, uses magnesium oxide. Other preparations use calcium carbonate. Gas-forming agents in effervescent tablet and powder formulations can also double as a buffering agent, one example being sodium bicarbonate, used in Alka-Seltzer.
Taking vitamin C with aspirin has been investigated as a method of protecting the stomach lining. In trials, vitamin C-releasing aspirin (ASA-VitC) and buffered aspirin formulations containing vitamin C were found to cause less stomach damage than aspirin alone.
Retinal vein occlusion
It is a widespread practice among eye specialists (ophthalmologists) to prescribe aspirin as an add-on medication for patients with retinal vein occlusion (RVO), such as central retinal vein occlusion (CRVO) and branch retinal vein occlusion (BRVO). The basis for this widespread use is aspirin's proven effectiveness in major systemic venous thrombotic disorders, and the assumption that it may be similarly beneficial in the various types of retinal vein occlusion.
However, a large-scale investigation based on data from nearly 700 patients showed "that aspirin or other antiplatelet aggregating agents or anticoagulants adversely influence the visual outcome in patients with CRVO and hemi-CRVO, without any evidence of protective or beneficial effect". Several expert groups, including the Royal College of Ophthalmologists, have recommended against the use of antithrombotic drugs (including aspirin) for patients with RVO.
Central effects
Experiments in rats indicate that large doses of salicylate, a metabolite of aspirin, cause temporary tinnitus (ringing in the ears), acting via the arachidonic acid and NMDA receptor cascades.
Reye's syndrome
Reye's syndrome, a rare but severe illness characterized by acute encephalopathy and fatty liver, can occur when children or adolescents are given aspirin for a fever or other illness or infection. From 1981 to 1997, 1207 cases of Reye's syndrome in people younger than 18 were reported to the US Centers for Disease Control and Prevention (CDC). Of these, 93% reported being ill in the three weeks preceding the onset of Reye's syndrome, most commonly with a respiratory infection, chickenpox, or diarrhea. Salicylates were detectable in 81.9% of children for whom test results were reported. After the association between Reye's syndrome and aspirin was reported, and safety measures to prevent it were implemented (including a Surgeon General's warning and changes to the labeling of aspirin-containing drugs), aspirin use by children declined considerably in the United States, as did the number of reported cases of Reye's syndrome; a similar decline was found in the United Kingdom after warnings against pediatric aspirin use were issued. The US Food and Drug Administration recommends that aspirin (or aspirin-containing products) not be given to anyone under the age of 12 who has a fever, and the UK National Health Service recommends that children under 16 years of age not take aspirin unless it is on the advice of a doctor.
Skin
For a small number of people, taking aspirin can result in symptoms including hives, swelling, and headache. Aspirin can exacerbate symptoms among those with chronic hives, or create acute symptoms of hives. These responses can be due to allergic reactions to aspirin, or more often to its inhibition of the COX-1 enzyme. Skin reactions may also accompany the systemic contraindications seen with NSAID-precipitated bronchospasm, or occur in those with atopy.
Aspirin and other NSAIDs, such as ibuprofen, may delay the healing of skin wounds. Earlier findings from two small, low-quality trials suggested a benefit with aspirin (alongside compression therapy) on venous leg ulcer healing time and leg ulcer size; however, larger and more recent studies of higher quality have been unable to corroborate these outcomes. As such, further research is required to clarify the role of aspirin in this context.
Other adverse effects
Aspirin can induce swelling of skin tissues in some people. In one study, angioedema appeared one to six hours after ingesting aspirin in some of the people; however, aspirin taken alone did not cause angioedema in these people, as it appeared only when the aspirin had been taken in combination with another NSAID.
Aspirin causes an increased risk of cerebral microbleeds, which appear on MRI scans as hypointense (dark) patches of 5 to 10mm or smaller.
A study of a group with a mean aspirin dosage of 270mg per day estimated an average absolute risk increase in intracerebral hemorrhage (ICH) of 12 events per 10,000 persons. In comparison, the estimated absolute risk reduction was 137 events per 10,000 persons for myocardial infarction and 39 events per 10,000 persons for ischemic stroke. In cases where ICH has already occurred, aspirin use results in higher mortality, with a dose of about 250mg per day resulting in a relative risk of death within three months after the ICH of around 2.5 (95% confidence interval 1.3 to 4.6).
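To make the trade-off in these absolute risk figures concrete, the following minimal sketch (in Python; the variable names are illustrative) tallies the numbers quoted above per 10,000 persons. A simple tally of this kind ignores that the events differ in severity, which is why ICH mortality is weighed separately in the text:

```python
# Net effect per 10,000 persons, using only the figures quoted above.
ich_increase = 12      # additional intracerebral hemorrhages
mi_reduction = 137     # fewer myocardial infarctions
stroke_reduction = 39  # fewer ischemic strokes

net_events_avoided = mi_reduction + stroke_reduction - ich_increase
print(net_events_avoided)  # 164 fewer events per 10,000 persons
# Caveat: raw event counts weigh a fatal ICH the same as a survivable MI.
```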
Aspirin and other NSAIDs can cause abnormally high blood levels of potassium by inducing a hyporeninemic hypoaldosteronic state via inhibition of prostaglandin synthesis; however, these agents do not typically cause hyperkalemia by themselves in the setting of normal renal function and euvolemic state.
Use of low-dose aspirin before a surgical procedure has been associated with an increased risk of bleeding events in some patients; however, ceasing aspirin prior to surgery has also been associated with an increase in major adverse cardiac events. An analysis of multiple studies found a three-fold increase in adverse events such as myocardial infarction in patients who ceased aspirin prior to surgery. The analysis found that the risk depends on the type of surgery being performed and the patient's indication for aspirin use.
On 9 July 2015, the US Food and Drug Administration (FDA) toughened warnings of increased heart attack and stroke risk associated with nonsteroidal anti-inflammatory drugs (NSAID). Aspirin is an NSAID but is not affected by the new warnings.
Overdose
Aspirin overdose can be acute or chronic. In acute poisoning, a single large dose is taken; in chronic poisoning, higher than normal doses are taken over a period of time. Acute overdose has a mortality rate of 2%. Chronic overdose is more commonly lethal, with a mortality rate of 25%; chronic overdose may be especially severe in children. Toxicity is managed with a number of potential treatments, including activated charcoal, intravenous dextrose and normal saline, sodium bicarbonate, and dialysis. The diagnosis of poisoning usually involves measurement of plasma salicylate, the active metabolite of aspirin, by automated spectrophotometric methods. Plasma salicylate levels in general range from 30 to 100mg/L after usual therapeutic doses, 50–300mg/L in people taking high doses and 700–1400mg/L following acute overdose. Salicylate is also produced as a result of exposure to bismuth subsalicylate, methyl salicylate, and sodium salicylate.
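The plasma salicylate ranges quoted above lend themselves to a simple classification. The sketch below (Python) encodes them directly; since the quoted ranges overlap, the cut-offs and category labels are illustrative assumptions, not a diagnostic standard:

```python
# Classify a plasma salicylate level (mg/L) against the ranges quoted above.
# The source ranges overlap, so the cut-offs below are illustrative only.

def classify_salicylate(level_mg_per_l: float) -> str:
    if level_mg_per_l <= 100:
        return "consistent with usual therapeutic doses (30-100 mg/L)"
    if level_mg_per_l <= 300:
        return "consistent with high-dose therapy (50-300 mg/L)"
    if level_mg_per_l >= 700:
        return "consistent with acute overdose (700-1400 mg/L)"
    return "between quoted ranges; clinical correlation required"

print(classify_salicylate(80))   # usual therapeutic range
print(classify_salicylate(950))  # acute overdose range
```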
Interactions
Aspirin is known to interact with other drugs. For example, acetazolamide and ammonium chloride are known to enhance the intoxicating effect of salicylates, and alcohol increases the gastrointestinal bleeding associated with these types of drugs. Aspirin is known to displace a number of drugs from protein-binding sites in the blood, including the antidiabetic drugs tolbutamide and chlorpropamide, warfarin, methotrexate, phenytoin, probenecid, valproic acid (as well as interfering with beta oxidation, an important part of valproate metabolism), and other NSAIDs. Corticosteroids may also reduce the concentration of aspirin. Other NSAIDs, such as ibuprofen and naproxen, may reduce the antiplatelet effect of aspirin, although limited evidence suggests this may not result in a reduced cardioprotective effect. Analgesic doses of aspirin decrease the urinary sodium loss induced by spironolactone; however, this does not reduce the antihypertensive effects of spironolactone, and antiplatelet doses of aspirin are deemed too small to produce an interaction with spironolactone. Aspirin is known to compete with penicillin G for renal tubular secretion, and it may also inhibit the absorption of vitamin C.
Research
The ISIS-2 trial demonstrated that aspirin, at a dose of 160mg daily for one month, decreased mortality in the first five weeks by 21% among participants with a suspected myocardial infarction. A single daily dose of 324mg of aspirin for 12 weeks has a highly protective effect against acute myocardial infarction and death in men with unstable angina.
Bipolar disorder
Aspirin has been repurposed as an add-on treatment for depressive episodes in subjects with bipolar disorder. However, meta-analytic evidence is based on very few studies and does not suggest any efficacy of aspirin in the treatment of bipolar depression. Thus, notwithstanding the biological rationale, the clinical perspectives of aspirin and anti-inflammatory agents in the treatment of bipolar depression remain uncertain.
Infectious diseases
Several studies have investigated the anti-infective properties of aspirin against bacterial, viral and parasitic infections. Aspirin was demonstrated to limit platelet activation induced by Staphylococcus aureus and Enterococcus faecalis and to reduce streptococcal adhesion to heart valves. In patients with tuberculous meningitis, the addition of aspirin reduced the risk of new cerebral infarction [RR = 0.52 (0.29-0.92)]. Growing evidence also supports a role for aspirin against bacterial and fungal biofilms.
Cancer prevention
Evidence from observational studies on the effect of aspirin in breast cancer prevention is conflicting. A randomized controlled trial showed that aspirin had no significant effect in reducing breast cancer; further studies are therefore needed to clarify the effect of aspirin in cancer prevention.
In gardening
There are many anecdotal reports that aspirin can improve plant growth and resistance, though most research has involved salicylic acid rather than aspirin itself.
Veterinary medicine
Aspirin is sometimes used in veterinary medicine as an anticoagulant or to relieve pain associated with musculoskeletal inflammation or osteoarthritis. Aspirin should only be given to animals under the direct supervision of a veterinarian, as adverse effects—including gastrointestinal issues—are common. An aspirin overdose in any species may result in salicylate poisoning, characterized by hemorrhaging, seizures, coma, and even death.
Dogs are better able to tolerate aspirin than cats are. Cats metabolize aspirin slowly because they have a limited ability to form the glucuronide conjugates that aid in its excretion, making it potentially toxic if dosing is not spaced out properly. No clinical signs of toxicosis occurred when cats were given 25mg/kg of aspirin every 48 hours for 4 weeks, but the recommended dose for relief of pain and fever and for treating blood clotting diseases in cats is 10mg/kg every 48 hours, to allow time for metabolization.
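As a worked example of the feline dosing figure above (10mg/kg every 48 hours), here is a minimal Python sketch; the function name and the example weight are assumptions for illustration, and actual dosing must be directed by a veterinarian:

```python
# Worked example of the feline dosing figure above: 10 mg/kg every 48 hours.
# Illustrative only -- actual dosing must be supervised by a veterinarian.

def feline_aspirin_dose_mg(weight_kg: float, mg_per_kg: float = 10.0) -> float:
    """Single dose in mg; the 48-hour spacing allows slow metabolization."""
    return weight_kg * mg_per_kg

cat_weight_kg = 4.5  # an assumed example weight for a typical adult cat
print(f"{feline_aspirin_dose_mg(cat_weight_kg)} mg every 48 hours")  # 45.0 mg
```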
See also
Fluoroaspirin
https://en.wikipedia.org/wiki/Acropolis
An acropolis was the fortified upper part of an ancient Greek city, especially a citadel, frequently set on a hill with precipitous sides chosen for purposes of defense. The term is typically used to refer to the Acropolis of Athens, yet every Greek city had an acropolis of its own. Acropolises were used as religious centers and places of worship, as forts, and as residences of royalty and the high-status. They became the nuclei of the large cities of classical antiquity and served as important centers of community life. Some well-known acropolises have become centers of tourism in the present day; in particular, the Acropolis of Athens, occupied since the Mycenaean period, has been central to the study of ancient Greece. Many acropolises are now a source of revenue for Greece and showcase the engineering achievements of their period.
Origin
The word acropolis derives from the Greek akropolis, from akron, meaning "highest; edge; extremity", and polis, meaning "city"; the plural is acropoleis or acropolises. The word was first used in the 14th century BCE, in the context of Mycenaean kings and community. The term acropolis is also used to describe the central complex of overlapping structures, such as plazas and pyramids, in many Maya cities, including Tikal and Copán. Acropolis is also the term used by archaeologists and historians for the urban Castro culture settlements located on Northwestern Iberian hilltops.
It is primarily associated with the Greek cities of Athens, Argos (with Larisa), Thebes (with Cadmea), Corinth (with its Acrocorinth), and Rhodes (with its Acropolis of Lindos). It may also be applied generically to all such citadels, including those of Rome, Carthage, Jerusalem, Celtic Bratislava, Asia Minor, or Castle Rock in Edinburgh. An example in Ireland is the Rock of Cashel. In Central Italy, many small rural communes still cluster at the base of a fortified habitation known as La Rocca of the commune. Other parts of the world have developed their own names for such a high citadel, which often reinforced a naturally strong site; many cultures have thus included acropolises in their settlements, though they do not use the same name for them.
Differing acropoleis
The acropolis of a city served many purposes in ancient times. Because an acropolis was built on the highest part of a city, it was a highly functional form of protection and a fortress, as well as a home to a city's royalty and a centre of religion through the worship of different gods. There have been many classical and ancient acropolises, including the best-known, the Acropolis of Athens, as well as the Tepecik Acropolis at Patara, the Ankara Acropolis, the Acropolis of La Blanca, the Maya acropolis site in Guatemala, and the Acropolis at Halieis.
The most famous example is the Athenian Acropolis, a collection of structures featuring a citadel on the highest part of the land in ancient (and modern-day) Athens, Greece. Many notable structures at the site were constructed in the 5th century BCE, including the Propylaea, the Erechtheion, and the Temple of Athena, commonly known as the Parthenon, a name derived from the cult of Athena Parthenos. Dances, music and plays were often held at the acropolis, so it also served as a community centre for the city of Athens. It became a prime tourist destination by the 2nd century AD during the Roman Empire and was called "the Greece of Greece," as coined by an unknown poet. Although originating on the Greek mainland, use of the acropolis model quickly spread to Greek colonies, such as the Dorian Lato on Crete, during the Archaic period.
The Tepecik Acropolis at Patara, combining land and sea, served as a harbor for nearby communities and naval forces such as those of Antigonos I Monopthalmos and Demetrios Poliorketes. Its fortification wall and bastion date back to the Classical period; they were constructed in the fourth century BCE under the Hekatomnids, prior to the city's seizure in 334 BCE by Alexander the Great. The acropolis contributed significantly to the overall development that took place under the Hellenistic empires. It was the earliest place of settlement at Patara, probably dating back to the third millennium BCE. During excavations in 1989, ceramic items, terracotta figurines, coins, and bone and stone objects dating to the fourth century BCE were found. The fortification wall and bastion built at this acropolis use a style of masonry known by a Greek term meaning "woven", which was likely chosen for weight-bearing purposes.
The Acropolis at Halieis dates back to the Neolithic and Classical periods. It included a fortified wall, a sanctuary of Apollo (two temples, an altar, and a race course), and a necropolis (cemetery). This acropolis was the highest point of fortification on the south edge of Halieis. It contained a small open-air cult space, including an altar and monuments.
The Ankara Acropolis, in modern-day Turkey, is a historically prominent space that has changed through the country's urban development since the Phrygian period. It was well known as a place of holy worship and was symbolic of its time. It has also historically witnessed the legislative changes Turkey has undergone.
The Acropolis of La Blanca, a small ancient Maya settlement and archaeological site in Guatemala, is located adjacent to the Salsipuedes River. This acropolis developed as a place of residence for the rulers of the city of La Blanca. Its main period of use was the Classic period, from 600 to 850 AD, when the city developed into a commercial center of trade among a number of nearby settlements.
The Maya acropolis site in Guatemala included a burial site and vaulted tombs of the highest-status royalty. This funerary structure was integrated into a sacred landscape and illustrated the power and prosperity of the royal figures of Piedras Negras in Guatemala.
Modern-day uses
Tourism
Acropolises today have become centers of tourism and major attractions in many modern Greek cities. The Athenian Acropolis, in particular, is the most famous and has the best vantage point in Athens, Greece. Today, tourists can purchase tickets to visit it, with options including walking, sightseeing, and bus tours, as well as a classic Greek dinner.
Cultural ties
Because of its classical Hellenistic and Greco-Roman style, the ruined Great Stone Church of Mission San Juan Capistrano in California, United States, has been called an "American Acropolis". Like an acropolis, the mission developed religious, educational, and cultural roles, and the site is used today as a venue for events such as operas.
The neighborhood of Morningside Heights in New York City is commonly referred to as the "Academic Acropolis" due to its high elevation and the concentration of educational institutions in the area, including Columbia University and its affiliates, Barnard College, Teachers College, Union Theological Seminary and the Jewish Theological Seminary of America; Manhattan School of Music; Bank Street College of Education; and New York Theological Seminary. The analogy is also aided by the neoclassical architecture of the Columbia University campus, which was designed by McKim, Mead & White in the early 20th century.
Excavations
Much of what is known about the historical uses of acropolises has been discovered through excavations carried out over the course of many years. For example, the Athenian Acropolis includes a great temple precinct containing the Parthenon, a specific space for ancient worship. Through modern findings and research, the Parthenon treasury can be recognized in the west part of the structure, alongside the Erechtheion and the Parthenon itself. Most excavations have provided archaeologists with samples of pottery, ceramics, and vessels. The excavation of the Acropolis of Halieis produced remains providing context that dates the Acropolis at Halieis from the Final Neolithic period through the first Early Helladic period.
See also
Acropolis of Rhodes
Acropolis Palaiokastro
Idjang
Tell (archaeology)
Hillfort
External links
Acropolis Museum
Acropolis: description, photo album
The Acropolis of Athens (Greek Government website)
The Acropolis Restoration Project (Greek Government website)
The Acropolis: A Walk Through History
The Parthenon Frieze (Hellenic Ministry of Culture web site)
UNESCO World Heritage Centre — Acropolis, Athens
https://en.wikipedia.org/wiki/Acupuncture
Acupuncture is a form of alternative medicine and a component of traditional Chinese medicine (TCM) in which thin needles are inserted into the body. Acupuncture is a pseudoscience; the theories and practices of TCM are not based on scientific knowledge, and it has been characterized as quackery.
There is a range of acupuncture variants that originated in different philosophies, and techniques vary depending on the country in which it is performed. It can, however, be divided into two main foundational philosophical applications and approaches: the first is the modern standardized form called eight-principles TCM, and the second is an older system based on the ancient Daoist wuxing, better known in the West as the five elements or phases. Acupuncture is most often used to attempt pain relief, though acupuncturists say that it can also be used for a wide range of other conditions. Acupuncture is generally used only in combination with other forms of treatment.
The global acupuncture market was worth US$24.55 billion in 2017. The market was led by Europe with a 32.7% share, followed by Asia-Pacific with a 29.4% share and the Americas with a 25.3% share. It was estimated in 2021 that the industry would reach a market size of $55bn by 2023.
The conclusions of trials and systematic reviews of acupuncture generally provide no good evidence of benefit, which suggests that it is not an effective method of healthcare. Acupuncture is generally safe when done by appropriately trained practitioners using clean needle technique and single-use needles. When properly delivered, it has a low rate of mostly minor adverse effects. When accidents and infections do occur, they are associated with neglect on the part of the practitioner, particularly in the application of sterile techniques. A review conducted in 2013 stated that reports of infection transmission increased significantly in the preceding decade. The most frequently reported adverse events were pneumothorax and infections. Since serious adverse events continue to be reported, it is recommended that acupuncturists be trained sufficiently to reduce the risk.
Scientific investigation has not found any histological or physiological evidence for traditional Chinese concepts such as qi, meridians, and acupuncture points, and many modern practitioners no longer support the existence of life force energy (qi) or meridians, which was a major part of early belief systems. Acupuncture is believed to have originated around 100 BC in China, around the time The Inner Classic of Huang Di (Huangdi Neijing) was published, though some experts suggest it could have been practiced earlier. Over time, conflicting claims and belief systems emerged about the effect of lunar, celestial and earthly cycles, yin and yang energies, and a body's "rhythm" on the effectiveness of treatment. Acupuncture fluctuated in popularity in China due to changes in the country's political leadership and the preferential use of rationalism or scientific medicine. Acupuncture spread first to Korea in the 6th century AD, then to Japan through medical missionaries, and then to Europe, beginning with France. In the 20th century, as it spread to the United States and Western countries, spiritual elements of acupuncture that conflicted with scientific knowledge were sometimes abandoned in favor of simply tapping needles into acupuncture points.
Clinical practice
Acupuncture is a form of alternative medicine. It is used most commonly for pain relief, though it is also used to treat a wide range of conditions. Acupuncture is generally only used in combination with other forms of treatment. For example, the American Society of Anesthesiologists states it may be considered in the treatment for nonspecific, noninflammatory low back pain only in conjunction with conventional therapy.
Acupuncture is the insertion of thin needles into the skin. According to the Mayo Foundation for Medical Education and Research (Mayo Clinic), a typical session entails lying still while approximately five to twenty needles are inserted; for the majority of cases, the needles will be left in place for ten to twenty minutes. It can be associated with the application of heat, pressure, or laser light. Classically, acupuncture is individualized and based on philosophy and intuition, and not on scientific research. There is also a non-invasive therapy developed in early 20th century Japan using an elaborate set of instruments other than needles for the treatment of children (shōnishin or shōnihari).
Clinical practice varies depending on the country. A comparison of the average number of patients treated per hour found significant differences between China (10) and the United States (1.2). Chinese herbs are often used. There is a diverse range of acupuncture approaches, involving different philosophies. Although various different techniques of acupuncture practice have emerged, the method used in traditional Chinese medicine (TCM) seems to be the most widely adopted in the US. Traditional acupuncture involves needle insertion, moxibustion, and cupping therapy, and may be accompanied by other procedures such as feeling the pulse and other parts of the body and examining the tongue. Traditional acupuncture involves the belief that a "life force" (qi) circulates within the body in lines called meridians. The main methods practiced in the UK are TCM and Western medical acupuncture. The term Western medical acupuncture is used to indicate an adaptation of TCM-based acupuncture which focuses less on TCM. The Western medical acupuncture approach involves using acupuncture after a medical diagnosis. Limited research has compared the contrasting acupuncture systems used in various countries for determining different acupuncture points and thus there is no defined standard for acupuncture points.
In traditional acupuncture, the acupuncturist decides which points to treat by observing and questioning the patient to make a diagnosis according to the tradition used. In TCM, the four diagnostic methods are: inspection, auscultation and olfaction, inquiring, and palpation. Inspection focuses on the face and particularly on the tongue, including analysis of the tongue size, shape, tension, color and coating, and the absence or presence of teeth marks around the edge. Auscultation and olfaction involve listening for particular sounds such as wheezing, and observing body odor. Inquiring involves focusing on the "seven inquiries": chills and fever; perspiration; appetite, thirst and taste; defecation and urination; pain; sleep; and menses and leukorrhea. Palpation is focusing on feeling the body for tender "A-shi" points and feeling the pulse.
Needles
The most common mechanism of stimulation of acupuncture points employs penetration of the skin by thin metal needles, which are manipulated manually; the needle may also be further stimulated electrically (electroacupuncture). Acupuncture needles are typically made of stainless steel, making them flexible and preventing them from rusting or breaking. Needles are usually disposed of after each use to prevent contamination; reusable needles should be sterilized between applications. In many areas, only sterile, single-use acupuncture needles are allowed, including in the State of California, USA. Needles vary in length, with shorter needles used near the face and eyes and longer needles in areas with thicker tissues, and in diameter, with thicker needles used on more robust patients. Thinner needles may be flexible and require tubes for insertion. The tip of the needle should not be made too sharp, to prevent breakage, although blunt needles cause more pain.
Apart from the usual filiform needle, other needle types include three-edged needles and the Nine Ancient Needles. Japanese acupuncturists use extremely thin needles that are used superficially, sometimes without penetrating the skin, and surrounded by a guide tube (a 17th-century invention adopted in China and the West). Korean acupuncture uses copper needles and has a greater focus on the hand.
Needling technique
Insertion
The skin is sterilized and needles are inserted, frequently with a plastic guide tube. Needles may be manipulated in various ways, including spinning, flicking, or moving up and down relative to the skin. Since most pain is felt in the superficial layers of the skin, a quick insertion of the needle is recommended. Often the needles are stimulated by hand in order to cause a dull, localized, aching sensation that is called de qi, as well as "needle grasp," a tugging feeling felt by the acupuncturist and generated by a mechanical interaction between the needle and skin. Acupuncture can be painful. The acupuncturist's skill level may influence the painfulness of the needle insertion; a sufficiently skilled practitioner may be able to insert the needles without causing any pain.
De-qi sensation
De-qi ("arrival of qi") refers to a claimed sensation of numbness, distension, or electrical tingling at the needling site. If these sensations are not observed, then inaccurate location of the acupoint, improper depth of needle insertion, or inadequate manual manipulation is blamed. If de-qi is not immediately observed upon needle insertion, various manual manipulation techniques are often applied to promote it (such as "plucking", "shaking" or "trembling").
Once de-qi is observed, techniques might be used which attempt to "influence" the de-qi; for example, by certain manipulation the de-qi can allegedly be conducted from the needling site towards more distant sites of the body. Other techniques aim at "tonifying" () or "sedating" () qi. The former techniques are used in deficiency patterns, the latter in excess patterns. De qi is more important in Chinese acupuncture, while Western and Japanese patients may not consider it a necessary part of the treatment.
Related practices
Acupressure, a non-invasive form of bodywork, uses physical pressure applied to acupressure points by the hand or elbow, or with various devices.
Acupuncture is often accompanied by moxibustion, the burning of cone-shaped preparations of moxa (made from dried mugwort) on or near the skin, often but not always near or on an acupuncture point. Traditionally, acupuncture was used to treat acute conditions while moxibustion was used for chronic diseases. Moxibustion could be direct (the cone was placed directly on the skin and allowed to burn the skin, producing a blister and eventually a scar), or indirect (either a cone of moxa was placed on a slice of garlic, ginger or other vegetable, or a cylinder of moxa was held above the skin, close enough to either warm or burn it).
Cupping therapy is an ancient Chinese form of alternative medicine in which a local suction is created on the skin; practitioners believe this mobilizes blood flow in order to promote healing.
Tui na is a TCM method of attempting to stimulate the flow of qi by various bare-handed techniques that do not involve needles.
Electroacupuncture is a form of acupuncture in which acupuncture needles are attached to a device that generates continuous electric pulses (this has been described as "essentially transdermal electrical nerve stimulation [TENS] masquerading as acupuncture").
Fire needle acupuncture also known as fire needling is a technique which involves quickly inserting a flame-heated needle into areas on the body.
Sonopuncture is a stimulation of the body similar to acupuncture using sound instead of needles. This may be done using purpose-built transducers to direct a narrow ultrasound beam to a depth of 6–8 centimetres at acupuncture meridian points on the body. Alternatively, tuning forks or other sound emitting devices are used.
Acupuncture point injection is the injection of various substances (such as drugs, vitamins or herbal extracts) into acupoints. This technique combines traditional acupuncture with injection of what is often an effective dose of an approved pharmaceutical drug, and proponents claim that it may be more effective than either treatment alone, especially for the treatment of some kinds of chronic pain. However, a 2016 review found that most published trials of the technique were of poor value due to methodology issues and larger trials would be needed to draw useful conclusions.
Auriculotherapy, commonly known as ear acupuncture, auricular acupuncture, or auriculoacupuncture, is considered to date back to ancient China. It involves inserting needles to stimulate points on the outer ear. The modern approach was developed in France during the early 1950s. There is no scientific evidence that it can cure disease; the evidence of effectiveness is negligible.
Scalp acupuncture, developed in Japan, is based on reflexological considerations regarding the scalp.
Koryo hand acupuncture, developed in Korea, centers around assumed reflex zones of the hand. Medical acupuncture attempts to integrate reflexological concepts, the trigger point model, and anatomical insights (such as dermatome distribution) into acupuncture practice, and emphasizes a more formulaic approach to acupuncture point location.
Cosmetic acupuncture is the use of acupuncture in an attempt to reduce wrinkles on the face.
Bee venom acupuncture is a treatment approach of injecting purified, diluted bee venom into acupoints.
Veterinary acupuncture is the use of acupuncture on domesticated animals.
Efficacy
Many thousands of papers have been published on the efficacy of acupuncture for the treatment of various adult health conditions, but there is no robust evidence that it is beneficial for anything except shoulder pain and fibromyalgia. For Science-Based Medicine, Steven Novella wrote that the overall pattern of evidence was reminiscent of that for homeopathy, compatible with the hypothesis that most, if not all, benefits were due to the placebo effect, and strongly suggestive that acupuncture had no beneficial therapeutic effects at all.
Research methodology and challenges
Sham acupuncture and research
It is difficult but not impossible to design rigorous research trials for acupuncture. Due to acupuncture's invasive nature, one of the major challenges in efficacy research is in the design of an appropriate placebo control group. For efficacy studies to determine whether acupuncture has specific effects, "sham" forms of acupuncture where the patient, practitioner, and analyst are blinded seem the most acceptable approach. Sham acupuncture uses non-penetrating needles or needling at non-acupuncture points, e.g. inserting needles on meridians not related to the specific condition being studied, or in places not associated with meridians. The under-performance of acupuncture in such trials may indicate that therapeutic effects are due entirely to non-specific effects, or that the sham treatments are not inert, or that systematic protocols yield less than optimal treatment.
A 2014 review in Nature Reviews Cancer found that "contrary to the claimed mechanism of redirecting the flow of qi through meridians, researchers usually find that it generally does not matter where the needles are inserted, how often (that is, no dose-response effect is observed), or even if needles are actually inserted. In other words, 'sham' or 'placebo' acupuncture generally produces the same effects as 'real' acupuncture and, in some cases, does better." A 2013 meta-analysis found little evidence that the effectiveness of acupuncture on pain (compared to sham) was modified by the location of the needles, the number of needles used, the experience or technique of the practitioner, or by the circumstances of the sessions. The same analysis also suggested that the number of needles and sessions is important, as greater numbers improved the outcomes of acupuncture compared to non-acupuncture controls. There has been little systematic investigation of which components of an acupuncture session may be important for any therapeutic effect, including needle placement and depth, type and intensity of stimulation, and number of needles used. The research seems to suggest that needles do not need to stimulate the traditionally specified acupuncture points or penetrate the skin to attain an anticipated effect (e.g. psychosocial factors).
A response to "sham" acupuncture in osteoarthritis may be used in the elderly, but placebos have usually been regarded as deception and thus unethical. However, some physicians and ethicists have suggested circumstances in which placebos may be applicable, such as when they might offer the theoretical advantage of an inexpensive treatment without adverse reactions or interactions with drugs or other medications. As the evidence for most types of alternative medicine such as acupuncture is far from strong, the use of alternative medicine in regular healthcare can present an ethical question.
Using the principles of evidence-based medicine to research acupuncture is controversial, and has produced different results. Some research suggests acupuncture can alleviate pain but the majority of research suggests that acupuncture's effects are mainly due to placebo. Evidence suggests that any benefits of acupuncture are short-lasting. There is insufficient evidence to support use of acupuncture compared to mainstream medical treatments. Acupuncture is not better than mainstream treatment in the long term.
The use of acupuncture has been criticized owing to there being little scientific evidence for explicit effects, or the mechanisms for its supposed effectiveness, for any condition that is discernible from placebo. Acupuncture has been called 'theatrical placebo', and David Gorski argues that when acupuncture proponents advocate 'harnessing of placebo effects' or work on developing 'meaningful placebos', they essentially concede it is little more than that.
Publication bias
Publication bias is cited as a concern in the reviews of randomized controlled trials of acupuncture. A 1998 review of studies on acupuncture found that trials originating in China, Japan, Hong Kong, and Taiwan were uniformly favourable to acupuncture, as were ten out of eleven studies conducted in Russia. A 2011 assessment of the quality of randomized controlled trials on traditional Chinese medicine, including acupuncture, concluded that the methodological quality of most such trials (including randomization, experimental control, and blinding) was generally poor, particularly for trials published in Chinese journals (though the quality of acupuncture trials was better than the trials testing traditional Chinese medicine remedies). The study also found that trials published in non-Chinese journals tended to be of higher quality. Chinese authors use more Chinese studies, which have been demonstrated to be uniformly positive. A 2012 review of 88 systematic reviews of acupuncture published in Chinese journals found that less than half of these reviews reported testing for publication bias, and that the majority of these reviews were published in journals with impact factors of zero. A 2015 study comparing pre-registered records of acupuncture trials with their published results found that it was uncommon for such trials to be registered before the trial began. This study also found that selective reporting of results and changing outcome measures to obtain statistically significant results was common in this literature.
Scientist and journalist Steven Salzberg identifies acupuncture and Chinese medicine generally as a focus for "fake medical journals" such as the Journal of Acupuncture and Meridian Studies and Acupuncture in Medicine.
Safety
Adverse events
Acupuncture is generally safe when administered by an experienced, appropriately trained practitioner using clean-needle technique and sterile single-use needles. When improperly delivered it can cause adverse effects. Accidents and infections are associated with infractions of sterile technique or neglect on the part of the practitioner. To reduce the risk of serious adverse events after acupuncture, acupuncturists should be trained sufficiently. A 2009 overview of Cochrane reviews found acupuncture is not effective for a wide range of conditions. People with serious spinal disease, such as cancer or infection, are not good candidates for acupuncture. Contraindications to acupuncture (conditions that should not be treated with acupuncture) include coagulopathy disorders (e.g. hemophilia and advanced liver disease), warfarin use, severe psychiatric disorders (e.g. psychosis), and skin infections or skin trauma (e.g. burns). Further, electroacupuncture should be avoided at the spot of implanted electrical devices (such as pacemakers).
A 2011 systematic review of systematic reviews (internationally and without language restrictions) found that serious complications following acupuncture continue to be reported. Between 2000 and 2009, ninety-five cases of serious adverse events, including five deaths, were reported. Many such events are not inherent to acupuncture but are due to malpractice of acupuncturists. This might be why such complications have not been reported in surveys of adequately trained acupuncturists. Most such reports originate from Asia, which may reflect the large number of treatments performed there or a relatively higher number of poorly trained Asian acupuncturists. Many serious adverse events were reported from developed countries. These included Australia, Austria, Canada, Croatia, France, Germany, Ireland, the Netherlands, New Zealand, Spain, Sweden, Switzerland, the UK, and the US. The number of adverse effects reported from the UK appears particularly unusual, which may indicate less under-reporting in the UK than other countries. Reports included 38 cases of infections and 42 cases of organ trauma. The most frequent adverse events included pneumothorax, and bacterial and viral infections.
A 2013 review found (without restrictions regarding publication date, study type or language) 295 cases of infections; mycobacterium was the pathogen in at least 96%. Likely sources of infection include towels, hot packs or boiling tank water, and reusing reprocessed needles. Possible sources of infection include contaminated needles, reusing personal needles, a person's skin containing mycobacterium, and reusing needles at various sites in the same person. Although acupuncture is generally considered a safe procedure, a 2013 review stated that the reports of infection transmission increased significantly in the prior decade, including those of mycobacterium. Although it is recommended that practitioners of acupuncture use disposable needles, the reuse of sterilized needles is still permitted. It is also recommended that thorough control practices for preventing infection be implemented and adapted.
English-language
A 2013 systematic review of the English-language case reports found that serious adverse events associated with acupuncture are rare, but that acupuncture is not without risk. Between 2000 and 2011, the English-language literature from 25 countries and regions reported 294 adverse events. The majority of the reported adverse events were relatively minor, and the incidences were low. For example, a prospective survey of 34,000 acupuncture treatments found no serious adverse events and 43 minor ones, a rate of 1.3 per 1000 interventions. Another survey, of 97,733 acupuncture patients, found a 7.1% rate of minor adverse events along with five serious ones. The most common adverse effect observed was infection (e.g. mycobacterium), and the majority of infections were bacterial in nature, caused by skin contact at the needling site. Infection has also resulted from skin contact with unsterilized equipment or with dirty towels in an unhygienic clinical setting. Other adverse complications included five reported cases of spinal cord injuries (e.g. migrating broken needles or needling too deeply), four brain injuries, four peripheral nerve injuries, five heart injuries, seven other organ and tissue injuries, bilateral hand edema, epithelioid granuloma, pseudolymphoma, argyria, pustules, pancytopenia, and scarring due to hot-needle technique. Adverse reactions from acupuncture, which are unusual and uncommon in typical acupuncture practice, included syncope, galactorrhoea, bilateral nystagmus, pyoderma gangrenosum, hepatotoxicity, eruptive lichen planus, and spontaneous needle migration.
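The incidence figures in these surveys reduce to simple rate arithmetic; the short Python sketch below reproduces the 1.3-per-1000 figure quoted above (the variable names are illustrative):

```python
# Reproduce the adverse-event rate quoted above: 43 minor events
# observed across 34,000 acupuncture treatments.
minor_events = 43
treatments = 34_000

rate_per_1000 = minor_events / treatments * 1000
print(round(rate_per_1000, 1))  # 1.3 minor events per 1000 interventions
```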
A 2013 systematic review found 31 cases of vascular injuries caused by acupuncture, three of them fatal: two deaths were from pericardial tamponade and one from an aortoduodenal fistula. The same review found that vascular injuries overall were rare, with bleeding and pseudoaneurysm the most prevalent. A 2011 systematic review (without restriction in time or language), aiming to summarize all reported cases of cardiac tamponade after acupuncture, found 26 cases resulting in 14 deaths, with little doubt about the cause in most fatal instances. The same review concluded that cardiac tamponade was a serious, usually fatal, though theoretically avoidable complication following acupuncture, and urged training to minimize risk.
A 2012 review found that a number of adverse events were reported after acupuncture in the UK's National Health Service (NHS), 95% of which were not severe, though miscategorization and under-reporting may alter the total figures. From January 2009 to December 2011, 468 safety incidents were recognized within the NHS organizations. The adverse events recorded included retained needles (31%), dizziness (30%), loss of consciousness/unresponsive (19%), falls (4%), bruising or soreness at needle site (2%), pneumothorax (1%) and other adverse side effects (12%). Acupuncture practitioners should know, and be prepared to be responsible for, any substantial harm from treatments. Some acupuncture proponents argue that the long history of acupuncture suggests it is safe. However, there is an increasing literature on adverse events (e.g. spinal-cord injury).
Acupuncture seems to be safe in people getting anticoagulants, assuming needles are used at the correct location and depth, but studies are required to verify these findings.
Chinese, Korean, and Japanese-language
A 2010 systematic review of the Chinese-language literature found numerous acupuncture-related adverse events, including pneumothorax, fainting, subarachnoid hemorrhage, and infection as the most frequent, and cardiovascular injuries, subarachnoid hemorrhage, pneumothorax, and recurrent cerebral hemorrhage as the most serious, most of which were due to improper technique. Between 1980 and 2009, the Chinese-language literature reported 479 adverse events. Prospective surveys show that mild, transient acupuncture-associated adverse events ranged from 6.71% to 15%. In a study of 190,924 patients, the prevalence of serious adverse events was roughly 0.024%. Another study showed a rate of adverse events requiring specific treatment of 2.2% (4,963 incidents among 229,230 patients). Infections, mainly hepatitis, after acupuncture are reported often in English-language research, though are rarely reported in Chinese-language research, making it plausible that acupuncture-associated infections have been underreported in China. Infections were mostly caused by poor sterilization of acupuncture needles. Other adverse events included spinal epidural hematoma (in the cervical, thoracic and lumbar spine), chylothorax, injuries of abdominal organs and tissues, injuries in the neck region, injuries to the eyes, including orbital hemorrhage, traumatic cataract, injury of the oculomotor nerve and retinal puncture, hemorrhage to the cheeks and the hypoglottis, peripheral motor-nerve injuries and subsequent motor dysfunction, local allergic reactions to metal needles, stroke, and cerebral hemorrhage after acupuncture.
A causal link between acupuncture and the adverse events cardiac arrest, pyknolepsy, shock, fever, cough, thirst, aphonia, leg numbness, and sexual dysfunction remains uncertain. The same review concluded that acupuncture can be considered inherently safe when practiced by properly trained practitioners, but the review also stated there is a need to find effective strategies to minimize the health risks. Between 1999 and 2010, the Korean-language literature contained reports of 1104 adverse events. Between the 1980s and 2002, the Japanese-language literature contained reports of 150 adverse events.
Children and pregnancy
Although acupuncture has been practiced for thousands of years in China, its use in pediatrics in the United States did not become common until the early 2000s. In 2007, the National Health Interview Survey (NHIS) conducted by the National Center For Health Statistics (NCHS) estimated that approximately 150,000 children had received acupuncture treatment for a variety of conditions.
In 2008, a study determined that the use of acupuncture-needle treatment on children was "questionable" due to the possibility of adverse side effects and differences in pain manifestation between children and adults. The study also included warnings against practicing acupuncture on infants, as well as on children who are over-fatigued, very weak, or have over-eaten.
When used on children, acupuncture is considered safe when administered by well-trained, licensed practitioners using sterile needles; however, a 2011 review found there was limited research to draw definite conclusions about the overall safety of pediatric acupuncture. The same review found 279 adverse events, 25 of them serious. The adverse events were mostly mild in nature (e.g. bruising or bleeding). The prevalence of mild adverse events ranged from 10.1% to 13.5%, an estimated 168 incidences among 1,422 patients. On rare occasions adverse events were serious (e.g. cardiac rupture or hemoptysis); many might have been a result of substandard practice. The incidence of serious adverse events was 5 per one million, which included children and adults.
When used during pregnancy, the majority of adverse events caused by acupuncture were mild and transient, with few serious adverse events. The most frequent mild adverse event was needling or unspecified pain, followed by bleeding. Although two deaths (one stillbirth and one neonatal death) were reported, there was a lack of acupuncture-associated maternal mortality. When the evidence was limited to cases rated certain, probable or possible in the causality evaluation, the estimated incidence of adverse events following acupuncture in pregnant women was 131 per 10,000.
Although acupuncture is not contraindicated in pregnant women, some specific acupuncture points are particularly sensitive to needle insertion; these spots, as well as the abdominal region, should be avoided during pregnancy.
Moxibustion and cupping
Four adverse events associated with moxibustion were bruising, burns and cellulitis, spinal epidural abscess, and large superficial basal cell carcinoma. Ten adverse events were associated with cupping. The minor ones were keloid scarring, burns, and bullae; the serious ones were acquired hemophilia A, stroke following cupping on the back and neck, factitious panniculitis, reversible cardiac hypertrophy, and iron deficiency anemia.
Risk of forgoing conventional medical care
As with other alternative medicines, unethical or naïve practitioners may induce patients to exhaust financial resources by pursuing ineffective treatment. Professional ethics codes set by accrediting organizations such as the National Certification Commission for Acupuncture and Oriental Medicine require practitioners to make "timely referrals to other health care professionals as may be appropriate." Stephen Barrett states that there is a "risk that an acupuncturist whose approach to diagnosis is not based on scientific concepts will fail to diagnose a dangerous condition".
Conceptual basis
Traditional
Acupuncture is a substantial part of traditional Chinese medicine (TCM). Early acupuncture beliefs relied on concepts that are common in TCM, such as a life force energy called qi. Qi was believed to flow from the body's primary organs (zang-fu organs) to the "superficial" body tissues of the skin, muscles, tendons, bones, and joints, through channels called meridians. Acupuncture points where needles are inserted are mainly (but not always) found at locations along the meridians. Acupuncture points not found along a meridian are called extraordinary points and those with no designated site are called "A-shi" points.
In TCM, disease is generally perceived as a disharmony or imbalance in yin, yang, qi, xuĕ, the zàng-fǔ organs, the meridians, and the interaction between the body and the environment. Therapy is based on which "pattern of disharmony" can be identified. For example, some diseases are believed to be caused by meridians being invaded with an excess of wind, cold, and damp. In order to determine which pattern is at hand, practitioners examine things like the color and shape of the tongue, the relative strength of pulse-points, the smell of the breath, the quality of breathing, or the sound of the voice. TCM and its concept of disease do not strongly differentiate between the cause and effect of symptoms.
Purported scientific basis
Many within the scientific community consider attempts to rationalize acupuncture in science to be quackery and pseudoscience. Academics Massimo Pigliucci and Maarten Boudry describe it as a "borderlands science" lying between science and pseudoscience.
Rationalizations of traditional medicine
It is a generally held belief within the acupuncture community that acupuncture points and meridian structures are special conduits for electrical signals, but no research has established any consistent anatomical structure or function for either acupuncture points or meridians. Human tests to determine whether electrical continuity was significantly different near meridians than other places in the body have been inconclusive. Scientific research has not supported the existence of qi, meridians, or yin and yang. A Nature editorial described TCM as "fraught with pseudoscience", with the majority of its treatments having no logical mechanism of action. Quackwatch states that "TCM theory and practice are not based upon the body of knowledge related to health, disease, and health care that has been widely accepted by the scientific community. TCM practitioners disagree among themselves about how to diagnose patients and which treatments should go with which diagnoses. Even if they could agree, the TCM theories are so nebulous that no amount of scientific study will enable TCM to offer rational care." Academic discussions of acupuncture still make reference to pseudoscientific concepts such as qi and meridians despite the lack of scientific evidence.
Release of endorphins or adenosine
Some modern practitioners support the use of acupuncture to treat pain, but have abandoned the use of qi, meridians, yin, yang and other mystical energies as explanatory frameworks. The use of qi as an explanatory framework has been decreasing in China, even as it becomes more prominent during discussions of acupuncture in the US.
Many acupuncturists attribute pain relief to the release of endorphins when needles penetrate the skin, but no longer support the idea that acupuncture can affect a disease. Some studies suggest acupuncture causes a series of events within the central nervous system, and that it is possible to inhibit acupuncture's analgesic effects with the opioid antagonist naloxone. Mechanical deformation of the skin by acupuncture needles appears to result in the release of adenosine. The anti-nociceptive effect of acupuncture may be mediated by the adenosine A1 receptor. A 2014 review in Nature Reviews Cancer analyzed mouse studies that suggested acupuncture relieves pain via the local release of adenosine, which then triggered nearby A1 receptors. The review concluded that, because acupuncture "caused more tissue damage and inflammation relative to the size of the animal in mice than in humans, such studies unnecessarily muddled a finding that local inflammation can result in the local release of adenosine with analgesic effect."
History
Origins
Acupuncture, along with moxibustion, is one of the oldest practices of traditional Chinese medicine. Most historians believe the practice began in China, though there are some conflicting narratives on when it originated. Academics David Ramey and Paul Buell said the exact date acupuncture was founded depends on the extent to which dating of ancient texts can be trusted and the interpretation of what constitutes acupuncture.
Acupressure therapy was prevalent in India. Once Buddhism spread to China, acupressure therapy was integrated into common medical practice there and came to be known as acupuncture. The major points of Indian acupressure and Chinese acupuncture are similar to each other.
According to an article in Rheumatology, the first documentation of an "organized system of diagnosis and treatment" for acupuncture was in Inner Classic of Huang Di (Huangdi Neijing) from about 100 BC. Gold and silver needles found in the tomb of Liu Sheng from around 100 BC are believed to be the earliest archaeological evidence of acupuncture, though it is unclear if that was their purpose. According to Plinio Prioreschi, the earliest known historical record of acupuncture is the Shiji ("Records of the Grand Historian"), written by a historian around 100 BC. It is believed that this text was documenting what was established practice at that time.
Alternate theories
The 5,000-year-old mummified body of Ötzi the Iceman was found with 15 groups of tattoos, many of which were located at points on the body where acupuncture needles are used for abdominal or lower back problems. Evidence from the body suggests Ötzi had these conditions. This has been cited as evidence that practices similar to acupuncture may have been practised elsewhere in Eurasia during the early Bronze Age; however, The Oxford Handbook of the History of Medicine calls this theory "speculative". It is considered unlikely that acupuncture was practised before 2000 BC.
Acupuncture may have been practised during the Neolithic era, near the end of the Stone Age, using sharpened stones called Bian shi. Many Chinese texts from later eras refer to sharp stones called "plen", which means "stone probe", that may have been used for acupuncture purposes. The ancient Chinese medical text Huangdi Neijing indicates that sharp stones were believed at the time to cure illnesses at or near the body's surface, perhaps because of the short depth a stone could penetrate. However, it is more likely that stones were used for other medical purposes, such as puncturing a growth to drain its pus. The Mawangdui texts, which are believed to be from the 2nd century BC, mention the use of pointed stones to open abscesses, and moxibustion, but not for acupuncture. It is also speculated that these stones may have been used for bloodletting, due to the ancient Chinese belief that illnesses were caused by demons within the body that could be killed or released. It is likely bloodletting was an antecedent to acupuncture.
According to historians Lu Gwei-djen and Joseph Needham, there is substantial evidence that acupuncture may have begun around 600 BC. Some hieroglyphs and pictographs from that era suggest acupuncture and moxibustion were practised. However, Lu and Needham said it was unlikely a needle could be made out of the materials available in China during this time period. It is possible that bronze was used for early acupuncture needles. Tin, copper, gold and silver are also possibilities, though they are considered less likely, or to have been used in fewer cases. If acupuncture was practised during the Shang dynasty (1766 to 1122 BC), organic materials like thorns, sharpened bones, or bamboo may have been used. Once methods for producing steel were discovered, it would replace all other materials, since it could be used to create very fine but sturdy needles. Lu and Needham noted that all the ancient materials that could have been used for acupuncture and which often produce archaeological evidence, such as sharpened bones, bamboo or stones, were also used for other purposes. An article in Rheumatology said that the absence of any mention of acupuncture in documents found in the tomb of Mawangdui from 198 BC suggests that acupuncture was not practised by that time.
Belief systems
Several different and sometimes conflicting belief systems emerged regarding acupuncture. This may have been the result of competing schools of thought. Some ancient texts referred to using acupuncture to cause bleeding, while others mixed the ideas of blood-letting and spiritual ch'i energy. Over time, the focus shifted from blood to the concept of puncturing specific points on the body, and eventually to balancing Yin and Yang energies as well. According to David Ramey, no single "method or theory" was ever predominantly adopted as the standard. At the time, scientific knowledge of medicine was not yet developed, especially because in China dissection of the deceased was forbidden, preventing the development of basic anatomical knowledge.
It is not certain when specific acupuncture points were introduced, but the autobiography of Bian Que from around 400–500 BC references inserting needles at designated areas. Bian Que believed there was a single acupuncture point at the top of one's skull that he called the point "of the hundred meetings." Texts dated to 156–186 BC document early beliefs in channels of life force energy called meridians that would later be an element in early acupuncture beliefs.
Ramey and Buell said the "practice and theoretical underpinnings" of modern acupuncture were introduced in The Yellow Emperor's Classic (Huangdi Neijing) around 100 BC. It introduced the concept of using acupuncture to manipulate the flow of life energy (qi) in a network of meridians (channels) in the body. The network concept was made up of acu-tracts, such as a line down the arms, where it said acupoints were located. Some of the sites acupuncturists use needles at today still have the same names as those given to them by the Yellow Emperor's Classic. Numerous additional documents were published over the centuries introducing new acupoints. By the 4th century AD, most of the acupuncture sites in use today had been named and identified.
Early development in China
Establishment and growth
In the first half of the 1st century AD, acupuncturists began promoting the belief that acupuncture's effectiveness was influenced by the time of day or night, the lunar cycle, and the season. The 'science of the yin-yang cycles' was a set of beliefs that curing diseases relied on the alignment of both heavenly (tian) and earthly (di) forces that were attuned to cycles like that of the sun and moon. There were several different belief systems that relied on a number of celestial and earthly bodies or elements that rotated and only became aligned at certain times. According to Needham and Lu, these "arbitrary predictions" were depicted by acupuncturists in complex charts and through a set of special terminology.
Acupuncture needles during this period were much thicker than most modern ones and often resulted in infection. Infection was caused by a lack of sterilization, but at that time it was believed to be caused by use of the wrong needle, or needling in the wrong place, or at the wrong time. Later, many needles were heated in boiling water, or in a flame. Sometimes needles were used while they were still hot, creating a cauterizing effect at the insertion site. Nine needles were recommended in the Great Compendium of Acupuncture and Moxibustion from 1601, which may have been because of an ancient Chinese belief that nine was a magic number.
Other belief systems were based on the idea that the human body operated on a rhythm and acupuncture had to be applied at the right point in the rhythm to be effective. In some cases a lack of balance between Yin and Yang was believed to be the cause of disease.
In the 1st century AD, many of the first books about acupuncture were published and recognized acupuncturist experts began to emerge. The Zhen Jiu Jia Yi Jing, which was published in the mid-3rd century, became the oldest acupuncture book that is still in existence in the modern era. Other books like the Yu Gui Zhen Jing, written by the Director of Medical Services for China, were also influential during this period, but were not preserved. In the mid 7th century, Sun Simiao published acupuncture-related diagrams and charts that established standardized methods for finding acupuncture sites on people of different sizes and categorized acupuncture sites in a set of modules.
Acupuncture became more established in China as improvements in paper led to the publication of more acupuncture books. The Imperial Medical Service and the Imperial Medical College, which both supported acupuncture, became more established and created medical colleges in every province. The public was also exposed to stories about royal figures being cured of their diseases by prominent acupuncturists. By the time the Great Compendium of Acupuncture and Moxibustion was published during the Ming dynasty (1368–1644 AD), most of the acupuncture practices used in the modern era had been established.
Decline
By the end of the Song dynasty (1279 AD), acupuncture had lost much of its status in China. It became rarer in the following centuries, and was associated with less prestigious professions like alchemy, shamanism, midwifery and moxibustion. Additionally, by the 18th century, scientific rationality was becoming more popular than traditional superstitious beliefs. By 1757 a book documenting the history of Chinese medicine called acupuncture a "lost art". Its decline was attributed in part to the popularity of prescriptions and medications, as well as its association with the lower classes.
In 1822, the Chinese Emperor signed a decree excluding the practice of acupuncture from the Imperial Medical Institute. He said it was unfit for practice by gentlemen-scholars. In China acupuncture was increasingly associated with lower-class, illiterate practitioners. It was restored for a time, but banned again in 1929 in favor of science-based Western medicine. Although acupuncture declined in China during this time period, it was also growing in popularity in other countries.
International expansion
Korea is believed to be the first country in Asia that acupuncture spread to outside of China. Within Korea there is a legend that acupuncture was developed by emperor Dangun, though it is more likely to have been brought into Korea from a Chinese colonial prefecture in 514 AD. Acupuncture use was commonplace in Korea by the 6th century. It spread to Vietnam in the 8th and 9th centuries. As Vietnam began trading with Japan and China around the 9th century, it was influenced by their acupuncture practices as well. China and Korea sent "medical missionaries" that spread traditional Chinese medicine to Japan, starting around 219 AD. In 553, several Korean and Chinese citizens were appointed to re-organize medical education in Japan and they incorporated acupuncture as part of that system. Japan later sent students back to China and established acupuncture as one of five divisions of the Chinese State Medical Administration System.
Acupuncture began to spread to Europe in the second half of the 17th century. Around this time the surgeon-general of the Dutch East India Company met Japanese and Chinese acupuncture practitioners and later encouraged Europeans to further investigate it. He published the first in-depth description of acupuncture for the European audience and created the term "acupuncture" in his 1683 work De Acupunctura. France was an early adopter in the West due to the influence of Jesuit missionaries, who brought the practice to French clinics in the 16th century. The French doctor Louis Berlioz (the father of the composer Hector Berlioz) is usually credited with being the first to experiment with the procedure in Europe in 1810, before publishing his findings in 1816.
By the 19th century, acupuncture had become commonplace in many areas of the world. Americans and Britons began showing interest in acupuncture in the early 19th century, although interest waned by mid-century. Western practitioners abandoned acupuncture's traditional beliefs in spiritual energy, pulse diagnosis, and the cycles of the moon, sun or the body's rhythm. Diagrams of the flow of spiritual energy, for example, conflicted with the West's own anatomical diagrams. Western practitioners instead adopted a new set of ideas for acupuncture based on tapping needles into nerves. In Europe it was speculated that acupuncture may allow or prevent the flow of electricity in the body, as electrical pulses were found to make a frog's leg twitch after death.
The West eventually created a belief system based on Travell trigger points that were believed to inhibit pain. They were in the same locations as China's spiritually identified acupuncture points, but under a different nomenclature. The first elaborate Western treatise on acupuncture was published in 1683 by Willem ten Rhijne.
Modern era
In China, the popularity of acupuncture rebounded in 1949 when Mao Zedong took power and sought to unite China behind traditional cultural values. It was also during this time that many Eastern medical practices were consolidated under the name traditional Chinese medicine (TCM).
New practices were adopted in the 20th century, such as using a cluster of needles, electrified needles, or leaving needles inserted for up to a week. Considerable emphasis developed on using acupuncture on the ear. Acupuncture research organizations such as the International Society of Acupuncture were founded in the 1940s and 1950s and acupuncture services became available in modern hospitals. China, where acupuncture was believed to have originated, was increasingly influenced by Western medicine. Meanwhile, acupuncture grew in popularity in the US. The US Congress created the Office of Alternative Medicine in 1992 and the National Institutes of Health (NIH) declared support for acupuncture for some conditions in November 1997. In 1999, the National Center for Complementary and Alternative Medicine was created within the NIH. Acupuncture became the most popular alternative medicine in the US.
Politicians from the Chinese Communist Party said acupuncture was superstitious and conflicted with the party's commitment to science. Communist Party Chairman Mao Zedong later reversed this position, arguing that the practice was based on scientific principles.
In 1971, New York Times reporter James Reston published an article on his acupuncture experiences in China, which led to more investigation of and support for acupuncture. The US President Richard Nixon visited China in 1972. During one part of the visit, the delegation was shown a patient undergoing major surgery while fully awake, ostensibly receiving acupuncture rather than anesthesia. Later it was found that the patients selected for the surgery had both a high pain tolerance and received heavy indoctrination before the operation; these demonstration cases were also frequently receiving morphine surreptitiously through an intravenous drip that observers were told contained only fluids and nutrients. One patient receiving open heart surgery while awake was ultimately found to have received a combination of three powerful sedatives as well as large injections of a local anesthetic into the wound. After the National Institutes of Health expressed support for acupuncture for a limited number of conditions, adoption in the US grew further. In 1972 the first legal acupuncture center in the US was established in Washington DC and in 1973 the US Internal Revenue Service allowed acupuncture to be deducted as a medical expense.
In 2006, a BBC documentary Alternative Medicine filmed a patient undergoing open heart surgery allegedly under acupuncture-induced anesthesia. It was later revealed that the patient had been given a cocktail of anesthetics.
In 2010, UNESCO inscribed "acupuncture and moxibustion of traditional Chinese medicine" on the UNESCO Intangible Cultural Heritage List following China's nomination.
Adoption
Acupuncture is most heavily practiced in China and is popular in the US, Australia, and Europe. In Switzerland, acupuncture has become the most frequently used alternative medicine since 2004. In the United Kingdom, a total of 4 million acupuncture treatments were administered in 2009. Acupuncture is used in most pain clinics and hospices in the UK. An estimated 1 in 10 adults in Australia used acupuncture in 2004. In Japan, it is estimated that 25 percent of the population will try acupuncture at some point, though in most cases it is not covered by public health insurance. Users of acupuncture in Japan are more likely to be elderly and to have a limited education. Approximately half of users surveyed indicated a likelihood to seek such remedies in the future, while 37% did not. Less than one percent of the US population reported having used acupuncture in the early 1990s. By the early 2010s, more than 14 million Americans reported having used acupuncture as part of their health care.
In the US, acupuncture is increasingly used at academic medical centers, and is usually offered through CAM centers or anesthesia and pain management services. Examples include those at Harvard University, Stanford University, Johns Hopkins University, and UCLA. CDC clinical practice guidelines from 2022 list acupuncture among the types of complementary and alternative medicines physicians should consider in preference to opioid prescription for certain kinds of pain.
The use of acupuncture in Germany increased by 20% in 2007, after the German acupuncture trials supported its efficacy for certain uses. In 2011, there were more than one million users, and insurance companies have estimated that two-thirds of German users are women. As a result of the trials, German public health insurers began to cover acupuncture for chronic low back pain and osteoarthritis of the knee, but not tension headache or migraine. This decision was based in part on socio-political reasons. Some insurers in Germany chose to stop reimbursement of acupuncture because of the trials. For other conditions, insurers in Germany were not convinced that acupuncture had adequate benefits over usual care or sham treatments. Highlighting the results of the placebo group, researchers refused to accept placebo therapy as efficacious.
Regulation
There are various government and trade association regulatory bodies for acupuncture in the United Kingdom, the United States, Saudi Arabia, Australia, New Zealand, Japan, Canada, and in European countries and elsewhere. The World Health Organization recommends that before being licensed or certified, an acupuncturist receive 200 hours of specialized training if they are a physician and 2,500 hours for non-physicians; many governments have adopted similar standards.
In Hong Kong, the practice of acupuncture is regulated by the Chinese Medicine Council that was formed in 1999 by the Legislative Council. It includes a licensing exam and registration, as well as degree courses approved by the board. Canada has acupuncture licensing programs in the provinces of British Columbia, Ontario, Alberta and Quebec; standards set by the Chinese Medicine and Acupuncture Association of Canada are used in provinces without government regulation. Regulation in the US began in the 1970s in California, which was eventually followed by every state but Wyoming and Idaho. Licensing requirements vary greatly from state to state. The needles used in acupuncture are regulated in the US by the Food and Drug Administration. In some states acupuncture is regulated by a board of medical examiners, while in others by the board of licensing, health or education.
In Japan, acupuncturists are licensed by the Minister of Health, Labour and Welfare after passing an examination and graduating from a technical school or university. In Australia, the Chinese Medicine Board of Australia regulates acupuncture, among other Chinese medical traditions, and restricts the use of titles like 'acupuncturist' to registered practitioners only. In New Zealand, acupuncture was included in the governmental Accident Compensation Corporation (ACC) Act in 1990. This inclusion allowed qualified and professionally registered acupuncturists to provide subsidised care and treatment to citizens, residents, and temporary visitors for work- or sports-related injuries that occurred within New Zealand. The two bodies for the regulation of acupuncture and attainment of ACC treatment provider status in New Zealand are Acupuncture NZ and The New Zealand Acupuncture Standards Authority. At least 28 countries in Europe have professional associations for acupuncturists. In France, the Académie Nationale de Médecine (National Academy of Medicine) has regulated acupuncture since 1955.
|
https://en.wikipedia.org/wiki/Aloe
|
Aloe (also written Aloë) is a genus containing over 650 species of flowering succulent plants. The most widely known species is Aloe vera, or "true aloe", so called because it is cultivated as the standard source for assorted pharmaceutical purposes. Other species, such as Aloe ferox, are also cultivated or harvested from the wild for similar applications.
The APG IV system (2016) places the genus in the family Asphodelaceae, subfamily Asphodeloideae. Within the subfamily it may be placed in the tribe Aloeae. In the past, it has been assigned to the family Aloaceae (now included in the Asphodeloideae) or to a broadly circumscribed family Liliaceae (the lily family). The plant Agave americana, which is sometimes called "American aloe", belongs to the Asparagaceae, a different family.
The genus is native to tropical and southern Africa, Madagascar, Jordan, the Arabian Peninsula, and various islands in the Indian Ocean (Mauritius, Réunion, Comoros, etc.). A few species have also become naturalized in other regions (Mediterranean, India, Australia, North and South America, Hawaiian Islands, etc.).
Etymology
The genus name Aloe is derived from the Arabic word alloeh, meaning "bitter and shiny substance" or from Hebrew ahalim, plural of ahal.
Description
Most Aloe species have a rosette of large, thick, fleshy leaves. Aloe flowers are tubular, frequently yellow, orange, pink, or red, and are borne, densely clustered and pendant, at the apex of simple or branched, leafless stems. Many species of Aloe appear to be stemless, with the rosette growing directly at ground level; other varieties may have a branched or unbranched stem from which the fleshy leaves spring. They vary in color from grey to bright-green and are sometimes striped or mottled. Some aloes native to South Africa are tree-like (arborescent).
Systematics
The APG IV system (2016) places the genus in the family Asphodelaceae, subfamily Asphodeloideae. In the past it has also been assigned to the families Liliaceae and Aloeaceae, as well as the family Asphodelaceae sensu stricto, before this was merged into the Asphodelaceae sensu lato.
The circumscription of the genus has varied widely. Many genera, such as Lomatophyllum, have been brought into synonymy. Species at one time placed in Aloe, such as Agave americana, have been moved to other genera. Molecular phylogenetic studies, particularly from 2010 onwards, suggested that as then circumscribed, Aloe was not monophyletic and should be divided into more tightly defined genera. In 2014, John Charles Manning and coworkers produced a phylogeny in which Aloe was divided into six genera: Aloidendron, Kumara, Aloiampelos, Aloe, Aristaloe and Gonialoe.
Species
Over 600 species are accepted in the genus Aloe, plus even more synonyms and unresolved species, subspecies, varieties, and hybrids. Some of the accepted species are:
Aloe aculeata Pole-Evans
Aloe africana Mill.
Aloe albida (Stapf) Reynolds
Aloe albiflora Guillaumin
Aloe arborescens Mill.
Aloe arenicola Reynolds
Aloe argenticauda Merxm. & Giess
Aloe bakeri Scott-Elliot
Aloe ballii Reynolds
Aloe ballyi Reynolds
Aloe brevifolia Mill.
Aloe broomii Schönland
Aloe buettneri A.Berger
Aloe camperi Schweinf.
Aloe capitata Baker
Aloe comosa Marloth & A.Berger
Aloe cooperi Baker
Aloe corallina Verd.
Aloe dewinteri Giess ex Borman & Hardy
Aloe erinacea D.S.Hardy
Aloe excelsa A.Berger
Aloe ferox Mill.
Aloe forbesii Balf.f.
Aloe helenae Danguy
Aloe hereroensis Engl.
Aloe inermis Forssk.
Aloe inyangensis Christian
Aloe jawiyon S.J.Christie, D.P.Hannon & Oakman ex A.G.Mill.
Aloe jucunda Reynolds
Aloe khamiesensis Pillans
Aloe kilifiensis Christian
Aloe maculata All.
Aloe marlothii A.Berger
Aloe mubendiensis Christian
Aloe namibensis Giess
Aloe nyeriensis Christian & I.Verd.
Aloe pearsonii Schönland
Aloe peglerae Schönland
Aloe perfoliata L.
Aloe perryi Baker
Aloe petricola Pole-Evans
Aloe polyphylla Pillans
Aloe rauhii Reynolds
Aloe reynoldsii Letty
Aloe scobinifolia Reynolds & Bally
Aloe sinkatana Reynolds
Aloe squarrosa Baker ex Balf.f.
Aloe striata Haw.
Aloe succotrina Lam.
Aloe suzannae Decary
Aloe thraskii Baker
Aloe vera (L.) Burm.f.
Aloe viridiflora Reynolds
Aloe wildii (Reynolds) Reynolds
In addition to the species and hybrids between species within the genus, several hybrids with other genera have been created in cultivation, such as between Aloe and Gasteria (× Gasteraloe), and between Aloe and Astroloba (×Aloloba).
Uses
Aloe species are frequently cultivated as ornamental plants both in gardens and in pots. Many aloe species are highly decorative and are valued by collectors of succulents. Aloe vera is used both internally and externally on humans as folk or alternative medicine. The genus is known for its medicinal and cosmetic properties; around 75% of Aloe species are used locally for medicinal purposes. The plants can also be made into types of special soaps or used in other skin care products (see natural skin care).
Numerous cultivars with mixed or uncertain parentage are grown. Of these, Aloe ‘Lizard Lips’ has gained the Royal Horticultural Society’s Award of Garden Merit.
Aloe variegata has been planted on graves in the superstitious belief that this ensures eternal life.
Historical uses
Historical use of various aloe species is well documented. Documentation of the clinical effectiveness is available, although relatively limited.
Of the 500+ species, only a few were used traditionally as herbal medicines, Aloe vera again being the most commonly used species. Also included are A. perryi and A. ferox. The Ancient Greeks and Romans used Aloe vera to treat wounds. In the Middle Ages, the yellowish liquid found inside the leaves was favored as a purgative. Unprocessed aloe that contains aloin is generally used as a laxative, whereas processed juice does not usually contain significant aloin.
Some species, particularly Aloe vera, are used in alternative medicine and first aid. Both the translucent inner pulp and the resinous yellow aloin from wounding the aloe plant are used externally for skin discomforts. As an herbal medicine, Aloe vera juice is commonly used internally for digestive discomfort.
According to Cancer Research UK, a potentially deadly product called T-UP is made of concentrated aloe, and promoted as a cancer cure. They say "there is currently no evidence that aloe products can help to prevent or treat cancer in humans".
Aloin in OTC laxative products
On May 9, 2002, the US Food and Drug Administration issued a final rule banning the use of aloin, the yellow sap of the aloe plant, as a laxative ingredient in over-the-counter drug products. Most aloe juices today do not contain significant aloin.
Chemical properties
According to W. A. Shenstone, two classes of aloins are recognized: (1) nataloins, which yield picric and oxalic acids with nitric acid, and do not give a red coloration with nitric acid; and (2) barbaloins, which yield aloetic acid (C7H2N3O5), chrysammic acid (C7H2N2O6), picric and oxalic acids with nitric acid, being reddened by the acid. This second group may be divided into a-barbaloins, obtained from Barbados Aloe, and reddened in the cold, and b-barbaloins, obtained from Aloe Socotrina and Zanzibar Aloe, reddened by ordinary nitric acid only when warmed or by fuming acid in the cold. Nataloin (2C17H13O7·H2O) forms bright-yellow scales, barbaloin (C17H18O7) prismatic crystals. Aloe species are used in essential oils as a diluting agent, a safety measure taken before the oils are applied to the skin.
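The empirical formulas quoted above lend themselves to a simple molar-mass calculation. The sketch below is illustrative only; it takes Shenstone's 19th-century formula assignments at face value rather than modern structural formulas:

    # Illustrative only: molar mass computed from the empirical formula
    # quoted above (Shenstone's assignment, not the modern structure).
    ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}  # g/mol

    def molar_mass(formula):
        """Sum atomic masses weighted by atom counts (g/mol)."""
        return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

    barbaloin = {"C": 17, "H": 18, "O": 7}  # C17H18O7 as quoted
    print(f"barbaloin: {molar_mass(barbaloin):.1f} g/mol")  # ~334.3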
Flavoring
Aloe perryi, A. barbadensis, A. ferox, and hybrids of this species with A. africana and A. spicata are listed as natural flavoring substances in the US government Electronic Code of Federal Regulations. Aloe socotrina is said to be used in yellow Chartreuse.
Heraldic occurrence
Aloe rubrolutea occurs as a charge in heraldry, for example in the Civic Heraldry of Namibia.
See also
List of Aloe species
List of ineffective cancer treatments
List of Southern African indigenous trees
|
https://en.wikipedia.org/wiki/Ambergris
|
Ambergris, also written ambergrease, or grey amber, is a solid, waxy, flammable substance of a dull grey or blackish colour produced in the digestive system of sperm whales. Freshly produced ambergris has a marine, fecal odor. It acquires a sweet, earthy scent as it ages, commonly likened to the fragrance of isopropyl alcohol without the vaporous chemical astringency.
Ambergris has been highly valued by perfume makers as a fixative that allows the scent to last much longer, although it has been mostly replaced by synthetic ambroxide. Dogs are attracted to the smell of ambergris and are sometimes used by ambergris searchers.
Etymology
The English word amber derives from the Arabic word ʿanbar (ultimately from Middle Persian ambar, also ambergris), via Middle Latin ambar and Middle French ambre. The word "amber," in its sense of "ambergris," was adopted in Middle English in the 14th century.
The word "ambergris" comes from the Old French "ambre gris" or "grey amber". The addition of "grey" came about when, in the Romance languages, the sense of the word "amber" was extended to Baltic amber (fossil resin), as white or yellow amber (ambre jaune), from as early as the late 13th century. This fossilized resin became the dominant (and now exclusive) sense of "amber", leaving "ambergris" as the word for the whale secretion.
The archaic alternate spelling "ambergrease" arose as an eggcorn from the phonetic pronunciation of "ambergris," encouraged by the substance's waxy texture.
Formation
Ambergris is formed from a secretion of the bile duct in the intestines of the sperm whale, and can be found floating on the sea or washed up on coastlines. It is sometimes found in the abdomens of dead sperm whales. Because the beaks of giant squids have been discovered within lumps of ambergris, scientists have theorized that the substance is produced by the whale's gastrointestinal tract to ease the passage of hard, sharp objects that it may have eaten.
Ambergris is passed like fecal matter. It is speculated that an ambergris mass too large to be passed through the intestines is expelled via the mouth, but this remains under debate. Another theory states that an ambergris mass is formed when the colon of a whale is enlarged by a blockage from intestinal worms and cephalopod parts resulting in the death of the whale and the mass being excreted into the sea. Ambergris takes years to form. Christopher Kemp, the author of Floating Gold: A Natural (and Unnatural) History of Ambergris, says that it is only produced by sperm whales, and only by an estimated one percent of them. Ambergris is rare; once expelled by a whale, it often floats for years before making landfall. The slim chances of finding ambergris and the legal ambiguity involved led perfume makers away from ambergris, and led chemists on a quest to find viable alternatives.
Ambergris is found primarily in the Atlantic Ocean and on the coasts of South Africa; Brazil; Madagascar; the East Indies; The Maldives; China; Japan; India; Australia; New Zealand; and the Molucca Islands. Most commercially collected ambergris comes from The Bahamas in the Atlantic, particularly New Providence. In 2021, fishermen found a 127 kg (280-pound) piece of ambergris off the coast of Yemen, valued at US$1.5 million. Fossilised ambergris from 1.75 million years ago has also been found.
Physical properties
Ambergris is found in lumps of various shapes, sizes, and weights. When initially expelled by or removed from the whale, the fatty precursor of ambergris is pale white in color (sometimes streaked with black), soft, with a strong fecal smell. Following months to years of photodegradation and oxidation in the ocean, this precursor gradually hardens, developing a dark grey or black color, a crusty and waxy texture, and a peculiar odor that is at once sweet, earthy, marine, and animalic. Its scent has been generally described as a vastly richer and smoother version of isopropanol without its stinging harshness. In this developed condition, ambergris has a specific gravity ranging from 0.780 to 0.926 (meaning it floats in water). On heating it melts to a fatty, yellow resinous liquid, and at higher temperatures it is volatilised into a white vapor. It is soluble in ether, and in volatile and fixed oils.
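The quoted specific gravities are consistent with the statement that ambergris floats: both ends of the range lie below the density of water. A minimal illustrative check follows; the seawater density used is a standard reference value assumed here, not taken from this article:

    # Illustrative only: compare quoted specific gravities with water density.
    FRESH_WATER = 1.000  # g/cm^3
    SEAWATER = 1.025     # g/cm^3, typical surface value (assumed)

    for sg in (0.780, 0.926):  # range quoted above
        print(f"specific gravity {sg}: "
              f"floats in fresh water = {sg < FRESH_WATER}, "
              f"in seawater = {sg < SEAWATER}")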
Chemical properties
Ambergris is relatively nonreactive to acid. White crystals of a terpenoid known as ambrein, discovered by Ružička and Fernand Lardon in 1946, can be separated from ambergris by heating raw ambergris in alcohol, then allowing the resulting solution to cool. Breakdown of the relatively scentless ambrein through oxidation produces ambroxide and ambrinol, the main odor components of ambergris.
Ambroxide is now produced synthetically and used extensively in the perfume industry.
Applications
Ambergris has been mostly known for its use in creating perfume and fragrance much like musk. Perfumes can still be found with ambergris.
Ambergris has historically been used in food and drink. A serving of eggs and ambergris was reportedly King Charles II of England's favorite dish. A recipe for Rum Shrub liqueur from the mid 19th century called for a thread of ambergris to be added to rum, almonds, cloves, cassia, and the peel of oranges in making a cocktail from The English and Australian Cookery Book. It has been used as a flavoring agent in Turkish coffee and in hot chocolate in 18th century Europe. The substance is considered an aphrodisiac in some cultures.
Ancient Egyptians burned ambergris as incense, while in modern Egypt ambergris is used for scenting cigarettes. The ancient Chinese called the substance "dragon's spittle fragrance". During the Black Death in Europe, people believed that carrying a ball of ambergris could help prevent them from contracting plague. This was because the fragrance covered the smell of the air which was believed to be a cause of plague.
During the Middle Ages, Europeans used ambergris as a medication for headaches, colds, epilepsy, and other ailments.
Legality
From the 18th to the mid-19th century, the whaling industry prospered. By some reports, nearly 50,000 whales, including sperm whales, were killed each year. Throughout the 1800s, "millions of whales were killed for their oil, whalebone, and ambergris" to fuel profits, and some species soon became endangered as a result. Due to studies showing that whale populations were being threatened, the International Whaling Commission instituted a moratorium on commercial whaling in 1982. Although ambergris is not harvested from whales, many countries also ban the trade of ambergris as part of the more general ban on the hunting and exploitation of whales.
Urine, faeces, and ambergris (that has been naturally excreted by a sperm whale) are waste products not considered parts or derivatives of a CITES species and are therefore not covered by the provisions of the convention.
Illegal
Australia – Under federal law, the export and import of ambergris for commercial purposes is banned by the Environment Protection and Biodiversity Conservation Act 1999. The various states and territories have additional laws regarding ambergris.
United States – The possession and trade of ambergris are prohibited by the Endangered Species Act of 1973.
India – Sale or possession is illegal under the Wild Life (Protection) Act, 1972.
Legal
United Kingdom
France
Switzerland
Maldives
|
https://en.wikipedia.org/wiki/Anaxagoras
|
Anaxagoras (Anaxagóras, "lord of the assembly"; 500 – 428 BC) was a Pre-Socratic Greek philosopher. Born in Clazomenae at a time when Asia Minor was under the control of the Persian Empire, Anaxagoras came to Athens. According to Diogenes Laërtius and Plutarch, in later life he was charged with impiety and went into exile in Lampsacus; the charges may have been political, owing to his association with Pericles, if they were not fabricated by later ancient biographers.
Responding to the claims of Parmenides on the impossibility of change, Anaxagoras introduced the concept of Nous (Cosmic Mind) as an ordering force. He also gave several novel scientific accounts of natural phenomena, including the notion of panspermia, that life exists throughout the universe and could be distributed everywhere. He deduced a correct explanation for eclipses and described the Sun as a fiery mass larger than the Peloponnese, as well as attempting to explain rainbows and meteors.
Biography
Anaxagoras was born in the town of Clazomenae in the early 5th century BC, possibly into an aristocratic family. He arrived at Athens either shortly after the Persian war (in which he may have fought on the Persian side), or at some point when he was a bit older, around 456 BC. While at Athens, he became close with the Athenian statesman Pericles. According to Diogenes Laërtius and Plutarch, in later life he was charged with impiety and went into exile in Lampsacus; the charges may have been political, owing to his association with Pericles, if they were not fabricated by later ancient biographers. According to Laërtius, Pericles spoke in defense of Anaxagoras at his trial. Even so, Anaxagoras was forced to retire from Athens to Lampsacus in the Troad (433 BC). He died there around the year 428 BC. Citizens of Lampsacus erected an altar to Mind and Truth in his memory and observed the anniversary of his death for many years. They placed over his grave the following inscription: "Here Anaxagoras, who in his quest of truth scaled heaven itself, is laid to rest."
Philosophy
Responding to the claims of Parmenides on the impossibility of change, Anaxagoras described the world as a mixture of primary imperishable ingredients, where material variation was never caused by an absolute presence of a particular ingredient, but rather by its relative preponderance over the other ingredients; in his words, "each one is... most manifestly those things of which there are the most in it". He introduced the concept of Nous (Cosmic Mind) as an ordering force, which moved and separated the original mixture, which was homogeneous or nearly so.
Anaxagoras brought philosophy and the spirit of scientific inquiry from Ionia to Athens. According to Anaxagoras, all things have existed in some way from the beginning, but originally they existed in infinitesimally small fragments of themselves, endless in number and inextricably combined throughout the universe. All things existed in this mass but in a confused and indistinguishable form. There was an infinite number of homogeneous parts as well as heterogeneous ones.
The work of arrangement, the segregation of like from unlike, and the summation of the whole into totals of the same name, was the work of Mind or Reason. Mind is no less unlimited than the chaotic mass, but it stood pure and independent, a thing of finer texture, alike in all its manifestations and everywhere the same. This subtle agent, possessed of all knowledge and power, is especially seen ruling all life forms. Its first appearance, and the only manifestation of it which Anaxagoras describes, is Motion. It gave distinctness and reality to the aggregates of like parts.
Decrease and growth represent a new aggregation and disruption. However, the original intermixture of things is never wholly overcome. Each thing contains parts of other things or heterogeneous elements, and is what it is only on account of the preponderance of certain homogeneous parts which constitute its character. Out of this process arise the things we see in this world.
Astronomy
Plutarch says "Anaxagoras is said to have predicted that if the heavenly bodies should be loosened by some slip or shake, one of them might be torn away, and might plunge and fall to earth."
His observations of the celestial bodies and the fall of meteorites led him to form new theories of the universal order, and to the prediction of the impact of meteorites. According to Pliny, he was credited with predicting the fall of a meteorite in 467 BC. He was the first to give a correct explanation of eclipses, and was both famous and notorious for his scientific theories, including the claims that the Sun is a mass of red-hot metal, that the Moon is earthy, and that the stars are fiery stones. He thought that the Earth was flat and floated supported by 'strong' air under it, and that disturbances in this air sometimes caused earthquakes. He introduced the notion of panspermia, that life exists throughout the universe and could be distributed everywhere.
He attempted to give a scientific account of eclipses, meteors, rainbows, and the Sun, which he described as a mass of blazing metal, larger than the Peloponnese; he also said that the Moon had mountains, and he believed that it was inhabited. The heavenly bodies, he asserted, were masses of stone torn from the Earth and ignited by rapid rotation. His theories about eclipses, the Sun, and Moon may well have been based on observations of the eclipse of 463 BC, which was visible in Greece.
Mathematics
According to Plutarch in his work On exile, Anaxagoras is the first Greek to attempt the problem of squaring the circle, a problem he worked on while in prison.
Legacy
Anaxagoras wrote a book of philosophy, but only fragments of the first part of this have survived, through preservation in the work of Simplicius of Cilicia in the 6th century AD.
Anaxagoras' book was reportedly available for a drachma in the Athenian marketplace. It was certainly known to Sophocles, Euripides, and Aristophanes, based on the contents of their surviving plays, and possibly to Aeschylus as well, based on the testimony of Seneca. However, although Anaxagoras almost certainly lived in Athens during the lifetime of Socrates (born 470 BC), there is no evidence that they ever met. In the Phaedo, Plato portrays Socrates saying that, as a young man, 'I eagerly acquired his books and read them as quickly as I could'. However, Socrates goes on to describe his later disillusionment with Anaxagoras's philosophy. Anaxagoras is also mentioned by Socrates during his trial in Plato's Apology.
He is also mentioned in Seneca's Natural Questions (Book 4B, originally Book 3: On Clouds, Hail, Snow). It reads: "Why should I too allow myself the same liberty as Anaxagoras allowed himself?"
The Roman author Valerius Maximus preserves a different tradition; Anaxagoras, coming home from a long voyage, found his property in ruin, and said: "If this had not perished, I would have"—a sentence described by Valerius as being "possessed of sought-after wisdom".
Dante Alighieri places Anaxagoras in the First Circle of Hell (Limbo) in his Divine Comedy (Inferno, Canto IV, line 137).
Chapter 5 in Book II of De Docta Ignorantia (1440) by Nicholas of Cusa is dedicated to the truth of the sentence "Each thing is in each thing" which he attributes to Anaxagoras.
Anaxagoras appears as a character in the second Act of Faust, Part II by Johann Wolfgang von Goethe.
See also
Anaxagoras (crater) on the Moon
|
https://en.wikipedia.org/wiki/Arthritis
|
Arthritis is a term often used to mean any disorder that affects joints. Symptoms generally include joint pain and stiffness. Other symptoms may include redness, warmth, swelling, and decreased range of motion of the affected joints. In some types of arthritis, other organs are also affected. Onset can be gradual or sudden.
There are over 100 types of arthritis. The most common forms are osteoarthritis (degenerative joint disease) and rheumatoid arthritis. Osteoarthritis usually occurs with age and affects the fingers, knees, and hips. Rheumatoid arthritis is an autoimmune disorder that often affects the hands and feet. Other types include gout, lupus, fibromyalgia, and septic arthritis. They are all types of rheumatic disease.
Treatment may include resting the joint and alternating between applying ice and heat. Weight loss and exercise may also be useful. Recommended medications may depend on the form of arthritis. These may include pain medications such as ibuprofen and paracetamol (acetaminophen). In some circumstances, a joint replacement may be useful.
Osteoarthritis affects more than 3.8% of people, while rheumatoid arthritis affects about 0.24% of people. Gout affects about 1–2% of the Western population at some point in their lives. In Australia about 15% of people are affected by arthritis, while in the United States more than 20% have a type of arthritis. Overall the disease becomes more common with age. Arthritis is a common reason that people miss work and can result in a decreased quality of life. The term is derived from arthr- (meaning 'joint') and -itis (meaning 'inflammation').
Classification
There are several diseases in which joint pain is primary and considered the main feature. Generally when a person has "arthritis" it means that they have one of these diseases, which include:
Hemarthrosis
Osteoarthritis
Rheumatoid arthritis
Gout and pseudo-gout
Septic arthritis
Ankylosing spondylitis
Juvenile idiopathic arthritis
Still's disease
Psoriatic arthritis
Joint pain can also be a symptom of other diseases. In this case, the arthritis is considered to be secondary to the main disease; these include:
Psoriasis
Reactive arthritis
Ehlers–Danlos syndrome
Iron overload
Hepatitis
Lyme disease
Sjögren's disease
Hashimoto's thyroiditis
Celiac disease
Non-celiac gluten sensitivity
Inflammatory bowel disease (including Crohn's disease and ulcerative colitis)
Henoch–Schönlein purpura
Hyperimmunoglobulinemia D with recurrent fever
Sarcoidosis
Whipple's disease
TNF receptor associated periodic syndrome
Granulomatosis with polyangiitis (and many other vasculitis syndromes)
Familial Mediterranean fever
Systemic lupus erythematosus
An undifferentiated arthritis is an arthritis that does not fit into well-known clinical disease categories, possibly being an early stage of a definite rheumatic disease.
Signs and symptoms
Pain, which can vary in severity, is a common symptom in virtually all types of arthritis. Other symptoms include swelling, joint stiffness, redness, and aching around the joint(s). Arthritic disorders like lupus and rheumatoid arthritis can affect other organs in the body, leading to a variety of symptoms. Symptoms may include:
Inability to use the hand or walk
Stiffness in one or more joints
Rash or itch
Malaise and fatigue
Weight loss
Poor sleep
Muscle aches and pains
Tenderness
Difficulty moving the joint
It is common in advanced arthritis for significant secondary changes to occur. For example, arthritic symptoms might make it difficult for a person to move around and/or exercise, which can lead to secondary effects, such as:
Muscle weakness
Loss of flexibility
Decreased aerobic fitness
These changes, in addition to the primary symptoms, can have a huge impact on quality of life.
Disability
Arthritis is the most common cause of disability in the United States. More than 20 million individuals with arthritis have severe limitations in function on a daily basis. Absenteeism and frequent visits to the physician are common in individuals who have arthritis. Arthritis can make it difficult for individuals to be physically active, and some become homebound.
It is estimated that the total cost of arthritis cases is close to $100 billion of which almost 50% is from lost earnings. Each year, arthritis results in nearly 1 million hospitalizations and close to 45 million outpatient visits to health care centers.
Decreased mobility, in combination with the above symptoms, can make it difficult for an individual to remain physically active, contributing to an increased risk of obesity, high cholesterol or vulnerability to heart disease. People with arthritis are also at increased risk of depression, which may be a response to numerous factors, including fear of worsening symptoms.
Risk factors
There are common risk factors that increase a person's chance of developing arthritis later in adulthood. Some of these are modifiable while others are not. Smoking has been linked to an increased susceptibility of developing arthritis, particularly rheumatoid arthritis.
Diagnosis
Diagnosis is made by clinical examination from an appropriate health professional, and may be supported by other tests such as radiology and blood tests, depending on the type of suspected arthritis. All arthritides potentially feature pain. Pain patterns may differ depending on the arthritides and the location. Rheumatoid arthritis is generally worse in the morning and associated with stiffness lasting over 30 minutes. However, in the early stages, patients may have no symptoms after a warm shower. Osteoarthritis, on the other hand, tends to be associated with morning stiffness which eases relatively quickly with movement and exercise. In the elderly and in children, pain might not be the main presenting feature; the aged patient simply moves less, while the infantile patient refuses to use the affected limb.
Elements of the history of the disorder guide diagnosis. Important features are speed and time of onset, pattern of joint involvement, symmetry of symptoms, early morning stiffness, tenderness, gelling or locking with inactivity, aggravating and relieving factors, and other systemic symptoms. It may include checking joints, observing movements, examination of skin for rashes or nodules and symptoms of pulmonary inflammation. Physical examination may confirm the diagnosis or may indicate systemic disease. Radiographs are often used to follow progression or help assess severity.
Blood tests and X-rays of the affected joints often are performed to make the diagnosis. Screening blood tests are indicated if certain arthritides are suspected. These might include: rheumatoid factor, antinuclear factor (ANF), extractable nuclear antigen, and specific antibodies.
Rheumatoid arthritis patients often have high erythrocyte sedimentation rate (ESR, also known as sed rate) or C-reactive protein (CRP) levels, which indicates the presence of an inflammatory process in the body. Anti-cyclic citrullinated peptide (anti-CCP) antibodies and rheumatoid factor (RF) are two more common blood tests. Positive results indicate the risk of rheumatoid arthritis, while negative results help rule out this autoimmune condition.
Imaging tests such as X-rays, MRI scans, or ultrasound are used to diagnose and monitor arthritis. Other imaging tests for rheumatoid arthritis that may be considered include computed tomography (CT) scanning, positron emission tomography (PET) scanning, bone scanning, and dual-energy X-ray absorptiometry (DEXA).
Osteoarthritis
Osteoarthritis is the most common form of arthritis. It affects humans and other animals, notably dogs, but also occurs in cats and horses. It can affect both the larger and the smaller joints of the body. In humans, this includes the hands, wrists, feet, back, hip, and knee. In dogs, this includes the elbow, hip, stifle (knee), shoulder, and back. The disease is essentially one acquired from daily wear and tear of the joint; however, osteoarthritis can also occur as a result of injury. Osteoarthritis begins in the cartilage and eventually causes the two opposing bones to erode into each other. The condition starts with minor pain during physical activity, but soon the pain can be continuous and even occur while in a state of rest. The pain can be debilitating and prevent one from doing some activities. In dogs, this pain can significantly affect quality of life and may include difficulty going up and down stairs, struggling to get up after lying down, trouble walking on slick floors, being unable to hop in and out of vehicles, difficulty jumping on and off furniture, and behavioral changes (e.g., aggression, difficulty squatting to toilet). Osteoarthritis typically affects the weight-bearing joints, such as the back, knee and hip. Unlike rheumatoid arthritis, osteoarthritis is most commonly a disease of the elderly. The strongest predictor of osteoarthritis is increased age, likely due to the declining ability of chondrocytes to maintain the structural integrity of cartilage. More than 30 percent of women have some degree of osteoarthritis by age 65. Other risk factors for osteoarthritis include prior joint trauma, obesity, and a sedentary lifestyle.
Rheumatoid arthritis
Rheumatoid arthritis (RA) is a disorder in which the body's own immune system starts to attack body tissues. The attack is not only directed at the joint but at many other parts of the body. In rheumatoid arthritis, most damage occurs to the joint lining and cartilage, which eventually results in erosion of the two opposing bones. RA often affects joints in the fingers, wrists, knees and elbows, is symmetrical (appears on both sides of the body), and can lead to severe deformity in a few years if not treated. RA occurs mostly in people aged 20 and above. In children, the disorder can present with a skin rash, fever, pain, disability, and limitations in daily activities. With early diagnosis and aggressive treatment, many individuals can lead a better quality of life than if the disease had gone undiagnosed long after onset. The risk factors with the strongest association for developing rheumatoid arthritis are the female sex, a family history of rheumatoid arthritis, age, obesity, previous joint damage from an injury, and exposure to tobacco smoke.
Bone erosion is a central feature of rheumatoid arthritis. Bone continuously undergoes remodeling by actions of bone resorbing osteoclasts and bone forming osteoblasts. One of the main triggers of bone erosion in the joints in rheumatoid arthritis is inflammation of the synovium, caused in part by the production of pro-inflammatory cytokines and receptor activator of nuclear factor kappa B ligand (RANKL), a cell surface protein present in Th17 cells and osteoblasts. Osteoclast activity can be directly induced by osteoblasts through the RANK/RANKL mechanism.
Lupus
Lupus is a common collagen vascular disorder that can be present with severe arthritis. Other features of lupus include a skin rash, extreme photosensitivity, hair loss, kidney problems, lung fibrosis and constant joint pain.
Gout
Gout is caused by deposition of uric acid crystals in the joints, causing inflammation. There is also an uncommon form of gouty arthritis, known as pseudogout, caused by the formation of rhomboid crystals of calcium pyrophosphate. In the early stages, gouty arthritis usually occurs in one joint, but with time, it can occur in many joints and be quite crippling. The joints in gout can often become swollen and lose function. Gouty arthritis can become particularly painful and potentially debilitating when gout cannot successfully be treated. When uric acid levels and gout symptoms cannot be controlled with standard gout medicines that decrease the production of uric acid (e.g., allopurinol) or increase uric acid elimination from the body through the kidneys (e.g., probenecid), this can be referred to as refractory chronic gout.
Other
Infectious arthritis is another severe form of arthritis. It presents with sudden onset of chills, fever and joint pain. The condition is caused by bacteria that have spread from elsewhere in the body. Infectious arthritis must be diagnosed rapidly and treated promptly to prevent irreversible joint damage.
Psoriasis can develop into psoriatic arthritis. With psoriatic arthritis, most individuals develop the skin problem first and then the arthritis. The typical features are continuous joint pains, stiffness and swelling. The disease recurs, with periods of remission, but there is no cure for the disorder. A small percentage develop a severely painful and destructive form of arthritis which destroys the small joints in the hands and can lead to permanent disability and loss of hand function.
Treatment
There is no known cure for arthritis and rheumatic diseases. Treatment options vary depending on the type of arthritis and include physical therapy, exercise and diet, orthopedic bracing, and oral and topical medications. Joint replacement surgery may be required to repair damage, restore function, or relieve pain.
Physical therapy
In general, studies have shown that physical exercise of the affected joint can noticeably improve long-term pain relief. Furthermore, exercise of the arthritic joint is encouraged to maintain the health of the particular joint and the overall body of the person.
Individuals with arthritis can benefit from both physical and occupational therapy. In arthritis the joints become stiff and the range of movement can be limited. Physical therapy has been shown to significantly improve function, decrease pain, and delay the need for surgical intervention in advanced cases. Exercise prescribed by a physical therapist has been shown to be more effective than medications in treating osteoarthritis of the knee. Exercise often focuses on improving muscle strength, endurance and flexibility. In some cases, exercises may be designed to train balance. Occupational therapy can provide assistance with daily activities. Assistive technology reduces a person's physical barriers by improving the usability of an affected body part; assistive technology devices can be customized to the patient or bought commercially.
Medications
There are several types of medications that are used for the treatment of arthritis. Treatment typically begins with medications that have the fewest side effects with further medications being added if insufficiently effective.
Depending on the type of arthritis, the medications that are given may be different. For example, the first-line treatment for osteoarthritis is acetaminophen (paracetamol), while for inflammatory arthritis it involves non-steroidal anti-inflammatory drugs (NSAIDs) like ibuprofen. Opioids and oral NSAIDs may be less well tolerated, whereas topical NSAIDs may have better safety profiles than oral NSAIDs. For more severe cases of osteoarthritis, intra-articular corticosteroid injections may also be considered.
The drugs to treat rheumatoid arthritis (RA) range from corticosteroids to monoclonal antibodies given intravenously. Due to the autoimmune nature of RA, treatments may include not only pain medications and anti-inflammatory drugs, but also another category of drugs called disease-modifying antirheumatic drugs (DMARDs). csDMARDs, TNF biologics and tsDMARDs are specific kinds of DMARDs that are recommended for treatment. Treatment with DMARDs is designed to slow the progression of RA by modulating the adaptive immune response mediated in part by CD4+ T helper (Th) cells, specifically Th17 cells. Th17 cells are present in higher quantities at the site of bone destruction in joints and produce inflammatory cytokines associated with inflammation, such as interleukin-17 (IL-17).
Surgery
A number of rheumasurgical interventions have been incorporated in the treatment of arthritis since the 1950s. Arthroscopic surgery for osteoarthritis of the knee provides no additional benefit to optimized physical and medical therapy.
Adaptive aids
People with hand arthritis can have trouble with simple activities of daily living tasks (ADLs), such as turning a key in a lock or opening jars, as these activities can be cumbersome and painful. There are adaptive aids or assistive devices (ADs) available to help with these tasks, but they are generally more costly than conventional products with the same function. It is now possible to 3-D print adaptive aids, which have been released as open source hardware to reduce patient costs. Adaptive aids can significantly help arthritis patients and the vast majority of those with arthritis need and use them.
Alternative medicine
Further research is required to determine if transcutaneous electrical nerve stimulation (TENS) for knee osteoarthritis is effective for controlling pain.
Low level laser therapy may be considered for relief of pain and stiffness associated with arthritis. Evidence of benefit is tentative.
Pulsed electromagnetic field therapy (PEMFT) has tentative evidence supporting improved functioning but no evidence of improved pain in osteoarthritis. The FDA has not approved PEMFT for the treatment of arthritis. In Canada, PEMF devices are legally licensed by Health Canada for the treatment of pain associated with arthritic conditions.
Epidemiology
Arthritis is predominantly a disease of the elderly, but children can also be affected by the disease. Arthritis is more common in women than men at all ages and affects all races, ethnic groups and cultures. In the United States a CDC survey based on data from 2013 to 2015 showed 54.4 million (22.7%) adults had self-reported doctor-diagnosed arthritis, and 23.7 million (43.5% of those with arthritis) had arthritis-attributable activity limitation (AAAL). With an aging population, this number is expected to increase. Adults with co-morbid conditions, such as heart disease, diabetes, and obesity, were seen to have a higher than average prevalence of doctor-diagnosed arthritis (49.3%, 47.1%, and 30.6% respectively).
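As a quick check on these survey figures, a minimal Python sketch reproducing the quoted percentages (the adult-population denominator is implied by the figures, not stated in the source):

    # Sanity-check the CDC 2013-2015 arthritis figures quoted above.
    adults_with_arthritis = 54.4e6       # doctor-diagnosed arthritis
    share_of_adults = 0.227              # 22.7% of US adults
    with_activity_limitation = 23.7e6    # arthritis-attributable activity limitation

    implied_adult_population = adults_with_arthritis / share_of_adults
    aaal_share = with_activity_limitation / adults_with_arthritis

    print(f"implied adult population: {implied_adult_population / 1e6:.0f} million")
    print(f"AAAL share: {aaal_share:.1%}")   # ~43.6%, consistent with the quoted 43.5%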
Disability due to musculoskeletal disorders increased by 45% from 1990 to 2010. Of these, osteoarthritis is the fastest increasing major health condition. Among the many reports on the increased prevalence of musculoskeletal conditions, data from Africa are lacking and underestimated. A systematic review assessed the prevalence of arthritis in Africa and included twenty population-based and seven hospital-based studies. The majority of studies, twelve, were from South Africa. Nine studies were well-conducted, eleven studies were of moderate quality, and seven studies were conducted poorly. The results of the systematic review were as follows:
Rheumatoid arthritis: 0.1% in Algeria (urban setting); 0.6% in Democratic Republic of Congo (urban setting); 2.5% and 0.07% in urban and rural settings in South Africa respectively; 0.3% in Egypt (rural setting), 0.4% in Lesotho (rural setting)
Osteoarthritis: 55.1% in South Africa (urban setting); ranged from 29.5 to 82.7% in South Africans aged 65 years and older
Knee osteoarthritis has the highest prevalence from all types of osteoarthritis, with 33.1% in rural South Africa
Ankylosing spondylitis: 0.1% in South Africa (rural setting)
Psoriatic arthritis: 4.4% in South Africa (urban setting)
Gout: 0.7% in South Africa (urban setting)
Juvenile idiopathic arthritis: 0.3% in Egypt (urban setting)
History
Evidence of osteoarthritis and potentially inflammatory arthritis has been discovered in dinosaurs. The first known traces of human arthritis date back as far as 4500 BC. In early reports, arthritis was frequently referred to as the most common ailment of prehistoric peoples. It was noted in skeletal remains of Native Americans found in Tennessee and parts of what is now Olathe, Kansas. Evidence of arthritis has been found throughout history, from Ötzi, a mummy found along the border of modern Italy and Austria, to Egyptian mummies.
In 1715, William Musgrave published the second edition of his most important medical work, De arthritide symptomatica, which concerned arthritis and its effects. Augustin Jacob Landré-Beauvais, a 28-year-old resident physician at Salpêtrière Asylum in France was the first person to describe the symptoms of rheumatoid arthritis. Though Landré-Beauvais' classification of rheumatoid arthritis as a relative of gout was inaccurate, his dissertation encouraged others to further study the disease.
Terminology
The term is derived from arthr- (from Greek ἄρθρον arthron, "joint") and -itis (from Greek -ῖτις), the latter suffix having come to be associated with inflammation.
The word arthritides is the plural form of arthritis, and denotes the collective group of arthritis-like conditions.
See also
Antiarthritics
Arthritis Care (charity in the UK)
Arthritis Foundation (US not-for-profit)
Knee arthritis
Osteoimmunology
Weather pains
References
External links
American College of Rheumatology – US professional society of rheumatologists
National Institute of Arthritis and Musculoskeletal and Skin Diseases – US government research institute
Aging-associated diseases
Inflammations
Rheumatology
Skeletal disorders
|
https://en.wikipedia.org/wiki/Acetylene
|
Acetylene (systematic name: ethyne) is the chemical compound with the formula C2H2 and structure H−C≡C−H. It is a hydrocarbon and the simplest alkyne. This colorless gas is widely used as a fuel and a chemical building block. It is unstable in its pure form and thus is usually handled as a solution. Pure acetylene is odorless, but commercial grades usually have a marked odor due to impurities such as divinyl sulfide and phosphine.
As an alkyne, acetylene is unsaturated because its two carbon atoms are bonded together in a triple bond. The carbon–carbon triple bond places all four atoms in the same straight line, with CCH bond angles of 180°.
Discovery
Acetylene was discovered in 1836 by Edmund Davy, who identified it as a "new carburet of hydrogen". It was an accidental discovery while attempting to isolate potassium metal. By heating potassium carbonate with carbon at very high temperatures, he produced a residue of what is now known as potassium carbide (K2C2), which reacted with water to release the new gas. It was rediscovered in 1860 by French chemist Marcellin Berthelot, who coined the name acétylène. Berthelot's empirical formula for acetylene (C4H2), as well as the alternative name "quadricarbure d'hydrogène" (hydrogen quadricarbide), were incorrect because many chemists at that time used the wrong atomic mass for carbon (6 instead of 12). Berthelot was able to prepare this gas by passing vapours of organic compounds (methanol, ethanol, etc.) through a red hot tube and collecting the effluent. He also found that acetylene was formed by sparking electricity through mixed cyanogen and hydrogen gases. Berthelot later obtained acetylene directly by passing hydrogen between the poles of a carbon arc.
Preparation
Except in China, acetylene production is dominated by the partial combustion of natural gas.
Partial combustion of hydrocarbons
Since the 1950s, acetylene has mainly been manufactured by the partial combustion of methane. It is a recovered side product in production of ethylene by cracking of hydrocarbons. Approximately 400,000 tonnes were produced by this method in 1983. Its presence in ethylene is usually undesirable because of its explosive character and its ability to poison Ziegler–Natta catalysts. It is selectively hydrogenated into ethylene, usually using Pd–Ag catalysts.
Partial combustion of methane also produces acetylene:
3 CH4 + 3 O2 → C2H2 + CO + 5 H2O
Dehydrogenation of alkanes
The heaviest alkanes in petroleum and natural gas are cracked into lighter molecules which are dehydrogenated at high temperature:
C2H6 → C2H2 + 2 H2
2 CH4 → C2H2 + 3 H2
This last reaction is implemented in the anaerobic decomposition of methane by microwave plasma. The advantages of this technology are the absence of CO2 emissions and the joint production of hydrogen as a secondary product, making it a low-carbon, electrified production route. Stoichiometrically, 32 t of methane yields 26 t of acetylene and 6 t of hydrogen.
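This mass balance follows directly from the molar masses in the dehydrogenation equation above; a minimal Python sketch to verify it:

    # Verify the mass balance for 2 CH4 -> C2H2 + 3 H2.
    M_CH4, M_C2H2, M_H2 = 16.04, 26.04, 2.016   # molar masses in g/mol

    methane_t = 32.0                      # tonnes of methane converted
    extent = methane_t / (2 * M_CH4)      # tonne-moles of reaction events
    acetylene_t = extent * M_C2H2         # ~26.0 t
    hydrogen_t = extent * 3 * M_H2        # ~6.0 t

    print(f"{acetylene_t:.1f} t C2H2 and {hydrogen_t:.1f} t H2")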
Carbochemical method
The production of acetylene from calcium carbide is the traditional route and, in China, remains the dominant one:
CaC2 + 2 H2O → Ca(OH)2 + C2H2
The conditions for production of calcium carbide are environmentally unacceptable in most advanced countries, except China.
Until the 1950s, when oil supplanted coal as the chief source of reduced carbon, acetylene (and the aromatic fraction from coal tar) was the main source of organic chemicals in the chemical industry. It was prepared by the hydrolysis of calcium carbide, a reaction discovered by Friedrich Wöhler in 1862 and still familiar to students (the carbide hydrolysis shown above).
Calcium carbide production requires high temperatures, ~2000 °C, necessitating the use of an electric arc furnace. In the US, this process was an important part of the late-19th century revolution in chemistry enabled by the massive hydroelectric power project at Niagara Falls.
At the point of use, the carbide reacts with water to produce acetylene: 1 kg of carbide combines with 562.5 g of water to release about 350 L of acetylene.
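These figures can be checked against the hydrolysis stoichiometry; a minimal sketch, assuming ideal-gas behaviour at STP:

    # Check the carbide figures for CaC2 + 2 H2O -> C2H2 + Ca(OH)2.
    M_CaC2, M_H2O = 64.10, 18.02   # molar masses in g/mol
    V_STP = 22.4                   # L/mol, ideal gas at STP (assumption)

    mol = 1000.0 / M_CaC2          # ~15.6 mol of CaC2 in 1 kg
    water_g = mol * 2 * M_H2O      # ~562 g, matching the figure above
    acetylene_L = mol * V_STP      # ~350 L of acetylene

    print(f"{water_g:.0f} g water, {acetylene_L:.0f} L acetylene")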
Bonding
In terms of valence bond theory, in each carbon atom the 2s orbital hybridizes with one 2p orbital, forming an sp hybrid; the other two 2p orbitals remain unhybridized. The two sp hybrid orbitals overlap end-to-end to form a strong σ valence bond between the carbons, while on each of the other two ends hydrogen atoms attach, also by σ bonds. The unhybridized 2p orbitals on the two carbons overlap to form a pair of weaker π bonds.
Since acetylene is a linear symmetrical molecule, it possesses the D∞h point group.
Physical properties
Changes of state
At atmospheric pressure, acetylene cannot exist as a liquid and does not have a melting point. The triple point on the phase diagram corresponds to the melting point (−80.8 °C) at the minimal pressure at which liquid acetylene can exist (1.27 atm). At temperatures below the triple point, solid acetylene can change directly to the vapour (gas) by sublimation. The sublimation point at atmospheric pressure is −84.0 °C.
Other
At room temperature, the solubility of acetylene in acetone is 27.9 g per kg. For the same amount of dimethylformamide (DMF), the solubility is 51 g. At 20.26 bar, the solubility increases to 689.0 and 628.0 g for acetone and DMF, respectively. These solvents are used in pressurized gas cylinders.
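To illustrate why pressurized cylinders rely on these solvents, here is a rough sketch of the acetylene mass a given solvent charge can hold at the quoted solubilities (the 10 kg charge is a made-up example, not a standard cylinder specification, and the room-temperature values are assumed to apply at about 1 atm):

    # Acetylene capacity of a solvent charge at the quoted solubilities (g C2H2 per kg solvent).
    solubility_g_per_kg = {
        "acetone": {"~1 atm": 27.9, "20.26 bar": 689.0},
        "DMF":     {"~1 atm": 51.0, "20.26 bar": 628.0},
    }
    charge_kg = 10.0  # hypothetical solvent charge

    for solvent, by_pressure in solubility_g_per_kg.items():
        for pressure, s in by_pressure.items():
            print(f"{solvent} at {pressure}: {charge_kg * s / 1000:.2f} kg C2H2")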
Applications
Welding
Approximately 20% of acetylene is supplied by the industrial gases industry for oxyacetylene gas welding and cutting due to the high temperature of the flame. Combustion of acetylene with oxygen produces a flame of over 3,600 K (about 3,300 °C), releasing 11.8 kJ/g. Oxyacetylene is the hottest-burning common fuel gas; acetylene produces the third-hottest natural chemical flame, after those of dicyanoacetylene and cyanogen. Oxy-acetylene welding was a popular welding process in previous decades, but the development and advantages of arc-based welding processes have made oxy-fuel welding nearly extinct for many applications, and acetylene usage for welding has dropped significantly. On the other hand, oxy-acetylene welding equipment is quite versatile – not only because the torch is preferred for some sorts of iron or steel welding (as in certain artistic applications), but also because it lends itself easily to brazing, braze-welding, metal heating (for annealing or tempering, bending or forming), the loosening of corroded nuts and bolts, and other applications. Bell Canada cable-repair technicians still use portable acetylene-fuelled torch kits as a soldering tool for sealing lead sleeve splices in manholes and in some aerial locations. Oxyacetylene welding may also be used in areas where electricity is not readily accessible. Oxyacetylene cutting is used in many metal fabrication shops. For use in welding and cutting, the working pressure must be controlled by a regulator, since above about 100 kPa gauge, if subjected to a shockwave (caused, for example, by a flashback), acetylene decomposes explosively into hydrogen and carbon.
Chemicals
Despite its simplicity, acetylene is not used in many industrial processes. One of its major chemical applications is the ethynylation of formaldehyde: acetylene adds to aldehydes and ketones to form α-ethynyl alcohols. With formaldehyde, the reaction gives 1,4-butynediol, with propargyl alcohol as a by-product; copper acetylide is used as the catalyst:
2 CH2O + HC≡CH → HOCH2C≡CCH2OH
In addition to ethynylation, acetylene reacts with carbon monoxide and water or alcohols to give acrylic acid or acrylic esters, respectively; metal catalysts are required. These derivatives form products such as acrylic fibers, glasses, paints, resins, and polymers. Except in China, use of acetylene as a chemical feedstock declined by 70% from 1965 to 2007 owing to cost and environmental considerations.
Historical uses
Prior to the widespread use of petrochemicals, coal-derived acetylene was a building block for several industrial chemicals. Thus acetylene can be hydrated to give acetaldehyde, which in turn can be oxidized to acetic acid. Processes leading to acrylates were also commercialized. Almost all of these processes became obsolete with the availability of petroleum-derived ethylene and propylene.
Niche applications
In 1881, the Russian chemist Mikhail Kucherov described the hydration of acetylene to acetaldehyde using catalysts such as mercury(II) bromide. Before the advent of the Wacker process, this reaction was conducted on an industrial scale.
The polymerization of acetylene with Ziegler–Natta catalysts produces polyacetylene films. Polyacetylene, a chain of CH centres with alternating single and double bonds, was one of the first discovered organic semiconductors. Its reaction with iodine produces a highly electrically conducting material. Although such materials found no commercial use themselves, these discoveries led to the development of organic semiconductors, as recognized by the Nobel Prize in Chemistry in 2000 to Alan J. Heeger, Alan G. MacDiarmid, and Hideki Shirakawa.
In the 1920s, pure acetylene was experimentally used as an inhalation anesthetic.
Acetylene is sometimes used for carburization (that is, hardening) of steel when the object is too large to fit into a furnace.
Acetylene is used to volatilize carbon in radiocarbon dating. The carbonaceous material in an archeological sample is treated with lithium metal in a small specialized research furnace to form lithium carbide (also known as lithium acetylide). The carbide can then be reacted with water, as usual, to form acetylene gas to feed into a mass spectrometer to measure the isotopic ratio of carbon-14 to carbon-12.
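The measured isotope ratio is then converted into an age with the standard radiocarbon decay law; a minimal sketch using the conventional Libby half-life (the measured fraction below is a made-up example):

    import math

    # Conventional radiocarbon age from a 14C/12C measurement,
    # expressed as a fraction of the modern standard ratio.
    LIBBY_HALF_LIFE = 5568.0                  # years, by radiocarbon convention
    decay_constant = math.log(2) / LIBBY_HALF_LIFE

    fraction_modern = 0.25                    # hypothetical mass-spectrometer result
    age = -math.log(fraction_modern) / decay_constant
    print(f"conventional radiocarbon age: {age:.0f} years BP")   # ~11,136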
Acetylene combustion produces a strong, bright light and the ubiquity of carbide lamps drove much acetylene commercialization in the early 20th century. Common applications included coastal lighthouses, street lights, and automobile and mining headlamps. In most of these applications, direct combustion is a fire hazard, and so acetylene has been replaced, first by incandescent lighting and many years later by low-power/high-lumen LEDs. Nevertheless, acetylene lamps remain in limited use in remote or otherwise inaccessible areas and in countries with a weak or unreliable central electric grid.
Natural occurrence
The energy richness of the C≡C triple bond and the rather high solubility of acetylene in water make it a suitable substrate for bacteria, provided an adequate source is available. A number of bacteria living on acetylene have been identified. The enzyme acetylene hydratase catalyzes the hydration of acetylene to give acetaldehyde:
C2H2 + H2O → CH3CHO
Acetylene is a moderately common chemical in the universe, often associated with the atmospheres of gas giants. One curious discovery of acetylene is on Enceladus, a moon of Saturn. Natural acetylene is believed to form from catalytic decomposition of long-chain hydrocarbons at high temperatures. Since such temperatures are highly unlikely on such a small distant body, this discovery is potentially suggestive of catalytic reactions within that moon, making it a promising site to search for prebiotic chemistry.
Reactions
Vinylation reactions
In vinylation reactions, H−X compounds add across the triple bond. Alcohols and phenols add to acetylene to give vinyl ethers. Thiols give vinyl thioethers. Similarly, vinylpyrrolidone and vinylcarbazole are produced industrially by vinylation of 2-pyrrolidone and carbazole.
The hydration of acetylene is a vinylation reaction, but the resulting vinyl alcohol isomerizes to acetaldehyde. The reaction is catalyzed by mercury salts. This reaction once was the dominant technology for acetaldehyde production, but it has been displaced by the Wacker process, which affords acetaldehyde by oxidation of ethylene, a cheaper feedstock. A similar situation applies to the conversion of acetylene to the valuable vinyl chloride by hydrochlorination vs the oxychlorination of ethylene.
Vinyl acetate is used instead of acetylene for some vinylations, which are more accurately described as transvinylations. Higher esters of vinyl acetate have been used in the synthesis of vinyl formate.
Organometallic chemistry
Acetylene and its derivatives (2-butyne, diphenylacetylene, etc.) form complexes with transition metals. Its bonding to the metal is somewhat similar to that of ethylene complexes. These complexes are intermediates in many catalytic reactions, such as alkyne trimerisation to benzene, tetramerization to cyclooctatetraene, and carbonylation to hydroquinone, the last carried out under basic conditions at elevated temperature and pressure.
In the presence of certain transition metals, alkynes undergo alkyne metathesis.
Metal acetylides, species in which a metal replaces one or both acetylenic hydrogens, are also common. Copper(I) acetylide and silver acetylide can be formed in aqueous solutions with ease due to a favorable solubility equilibrium.
Acid-base reactions
With a pKa of 25, acetylene can be deprotonated by a superbase to form an acetylide:
HC≡CH + RM → RH + HC≡CM
Various organometallic and inorganic reagents are effective.
Hydrogenation
Acetylene can be semihydrogenated to ethylene, providing a feedstock for a variety of polyethylene plastics. Halogens add to the triple bond.
Safety and handling
Acetylene is not especially toxic, but when generated from calcium carbide, it can contain toxic impurities such as traces of phosphine and arsine, which give it a distinct garlic-like smell. It is also highly flammable, as are most light hydrocarbons, hence its use in welding. Its most singular hazard is associated with its intrinsic instability, especially when it is pressurized: under certain conditions acetylene can react in an exothermic addition-type reaction to form a number of products, typically benzene and/or vinylacetylene, possibly in addition to carbon and hydrogen. Consequently, acetylene, if initiated by intense heat or a shockwave, can decompose explosively if the absolute pressure of the gas exceeds about 200 kPa (29 psi). Most regulators and pressure gauges on equipment report gauge pressure, and the safe limit for acetylene therefore is 101 kPa gauge, or 15 psig. It is therefore supplied and stored dissolved in acetone or dimethylformamide (DMF), contained in a gas cylinder with a porous filling (Agamassan), which renders it safe to transport and use, given proper handling. Acetylene cylinders should be used in the upright position to avoid withdrawing acetone during use.
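Because regulators read gauge pressure while the decomposition hazard scales with absolute pressure, a small conversion helper makes the limit explicit (an illustrative sketch only; consult the applicable codes for real installations):

    # Gauge-to-absolute conversion for the acetylene working-pressure limit.
    ATM_KPA = 101.325
    LIMIT_ABSOLUTE_KPA = 2 * ATM_KPA   # ~200 kPa absolute, i.e. ~101 kPa gauge / 15 psig

    def gauge_to_absolute(gauge_kpa: float) -> float:
        """Convert a regulator (gauge) reading to absolute pressure."""
        return gauge_kpa + ATM_KPA

    for gauge in (90.0, 101.0, 150.0):
        absolute = gauge_to_absolute(gauge)
        status = "within" if absolute <= LIMIT_ABSOLUTE_KPA else "ABOVE"
        print(f"{gauge:.0f} kPa gauge = {absolute:.0f} kPa absolute ({status} limit)")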
Information on the safe storage of acetylene in upright cylinders is provided by OSHA, the Compressed Gas Association, the United States Mine Safety and Health Administration (MSHA), EIGA, and other agencies.
Copper catalyses the decomposition of acetylene, and as a result acetylene should not be transported in copper pipes.
Cylinders should be stored in an area segregated from oxidizers to avoid exacerbated reaction in case of fire or leakage. Acetylene cylinders should not be stored in confined spaces, enclosed vehicles, garages, or buildings, to avoid unintended leakage leading to an explosive atmosphere. In the US, the National Electrical Code (NEC) requires consideration for hazardous areas, including those where acetylene may be released during accidents or leaks. Consideration may include electrical classification and the use of listed Group A electrical components. Further information on determining the areas requiring special consideration is in NFPA 497. In Europe, ATEX likewise requires consideration for hazardous areas where flammable gases may be released during accidents or leaks.
References
External links
Acetylene Production Plant and Detailed Process
Acetylene at Chemistry Comes Alive!
Movie explaining acetylene formation from calcium carbide and the explosive limits forming fire hazards
Calcium Carbide & Acetylene at The Periodic Table of Videos (University of Nottingham)
CDC – NIOSH Pocket Guide to Chemical Hazards – Acetylene
Alkynes
Fuel gas
Industrial gases
Synthetic fuel technologies
Explosive gases
|
https://en.wikipedia.org/wiki/Antibiotic
|
An antibiotic is a type of antimicrobial substance active against bacteria. It is the most important type of antibacterial agent for fighting bacterial infections, and antibiotic medications are widely used in the treatment and prevention of such infections. They may either kill or inhibit the growth of bacteria. A limited number of antibiotics also possess antiprotozoal activity. Antibiotics are not effective against viruses such as the common cold or influenza; drugs which inhibit growth of viruses are termed antiviral drugs or antivirals rather than antibiotics. They are also not effective against fungi; drugs which inhibit growth of fungi are called antifungal drugs.
Sometimes, the term antibiotic—literally "opposing life", from the Greek roots ἀντι anti, "against" and βίος bios, "life"—is broadly used to refer to any substance used against microbes, but in the usual medical usage, antibiotics (such as penicillin) are those produced naturally (by one microorganism fighting another), whereas non-antibiotic antibacterials (such as sulfonamides and antiseptics) are fully synthetic. However, both classes have the same goal of killing or preventing the growth of microorganisms, and both are included in antimicrobial chemotherapy. "Antibacterials" include antiseptic drugs, antibacterial soaps, and chemical disinfectants, whereas antibiotics are an important class of antibacterials used more specifically in medicine and sometimes in livestock feed.
Antibiotics have been used since ancient times. Many civilizations used topical application of moldy bread, with many references to its beneficial effects arising from ancient Egypt, Nubia, China, Serbia, Greece, and Rome. The first person to directly document the use of molds to treat infections was John Parkinson (1567–1650). Antibiotics revolutionized medicine in the 20th century. Alexander Fleming (1881–1955) discovered modern day penicillin in 1928, the widespread use of which proved significantly beneficial during wartime. However, the effectiveness and easy access to antibiotics have also led to their overuse and some bacteria have evolved resistance to them. The World Health Organization has classified antimicrobial resistance as a widespread "serious threat [that] is no longer a prediction for the future, it is happening right now in every region of the world and has the potential to affect anyone, of any age, in any country". Global deaths attributable to antimicrobial resistance numbered 1.27 million in 2019.
Etymology
The term 'antibiosis', meaning "against life", was introduced by the French bacteriologist Jean Paul Vuillemin as a descriptive name for the phenomenon exhibited by early antibacterial drugs. Antibiosis was first described in 1877 in bacteria when Louis Pasteur and Robert Koch observed that an airborne bacillus could inhibit the growth of Bacillus anthracis. These drugs were later renamed antibiotics by Selman Waksman, an American microbiologist, in 1947.
The term antibiotic was first used in 1942 by Selman Waksman and his collaborators in journal articles to describe any substance produced by a microorganism that is antagonistic to the growth of other microorganisms in high dilution. This definition excluded substances that kill bacteria but that are not produced by microorganisms (such as gastric juices and hydrogen peroxide). It also excluded synthetic antibacterial compounds such as the sulfonamides. In current usage, the term "antibiotic" is applied to any medication that kills bacteria or inhibits their growth, regardless of whether that medication is produced by a microorganism or not.
The term "antibiotic" derives from anti + βιωτικός (biōtikos), "fit for life, lively", which comes from βίωσις (biōsis), "way of life", and that from βίος (bios), "life". The term "antibacterial" derives from Greek ἀντί (anti), "against" + βακτήριον (baktērion), diminutive of βακτηρία (baktēria), "staff, cane", because the first bacteria to be discovered were rod-shaped.
Usage
Medical uses
Antibiotics are used to treat or prevent bacterial infections, and sometimes protozoan infections. (Metronidazole is effective against a number of parasitic diseases). When an infection is suspected of being responsible for an illness but the responsible pathogen has not been identified, an empiric therapy is adopted. This involves the administration of a broad-spectrum antibiotic based on the signs and symptoms presented and is initiated pending laboratory results that can take several days.
When the responsible pathogenic microorganism is already known or has been identified, definitive therapy can be started. This will usually involve the use of a narrow-spectrum antibiotic. The choice of antibiotic given will also be based on its cost. Identification is critically important as it can reduce the cost and toxicity of the antibiotic therapy and also reduce the possibility of the emergence of antimicrobial resistance. To avoid surgery, antibiotics may be given for non-complicated acute appendicitis.
Antibiotics may be given as a preventive measure and this is usually limited to at-risk populations such as those with a weakened immune system (particularly in HIV cases to prevent pneumonia), those taking immunosuppressive drugs, cancer patients, and those having surgery. Their use in surgical procedures is to help prevent infection of incisions. They have an important role in dental antibiotic prophylaxis where their use may prevent bacteremia and consequent infective endocarditis. Antibiotics are also used to prevent infection in cases of neutropenia particularly cancer-related.
The use of antibiotics for secondary prevention of coronary heart disease is not supported by current scientific evidence, and may actually increase cardiovascular mortality, all-cause mortality and the occurrence of stroke.
Routes of administration
There are many different routes of administration for antibiotic treatment. Antibiotics are usually taken by mouth. In more severe cases, particularly deep-seated systemic infections, antibiotics can be given intravenously or by injection. Where the site of infection is easily accessed, antibiotics may be given topically in the form of eye drops onto the conjunctiva for conjunctivitis or ear drops for ear infections and acute cases of swimmer's ear. Topical use is also one of the treatment options for some skin conditions including acne and cellulitis. Advantages of topical application include achieving high and sustained concentration of antibiotic at the site of infection; reducing the potential for systemic absorption and toxicity, and total volumes of antibiotic required are reduced, thereby also reducing the risk of antibiotic misuse. Topical antibiotics applied over certain types of surgical wounds have been reported to reduce the risk of surgical site infections. However, there are certain general causes for concern with topical administration of antibiotics. Some systemic absorption of the antibiotic may occur; the quantity of antibiotic applied is difficult to accurately dose, and there is also the possibility of local hypersensitivity reactions or contact dermatitis occurring. It is recommended to administer antibiotics as soon as possible, especially in life-threatening infections. Many emergency departments stock antibiotics for this purpose.
Global consumption
Antibiotic consumption varies widely between countries. The WHO report on surveillance of antibiotic consumption published in 2018 analysed 2015 data from 65 countries. As measured in defined daily doses per 1,000 inhabitants per day, Mongolia had the highest consumption with a rate of 64.4 and Burundi the lowest at 4.4. Amoxicillin and amoxicillin/clavulanic acid were the most frequently consumed.
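The WHO metric can be reproduced from raw dispensing data; a minimal sketch of the defined-daily-dose calculation (the drugs, quantities, and population are made-up examples, and per-drug DDD values must come from the WHO ATC/DDD index):

    # Defined daily doses (DDD) per 1,000 inhabitants per day.
    grams_dispensed = {"amoxicillin": 1.2e7, "doxycycline": 8.0e5}  # per year (hypothetical)
    grams_per_ddd = {"amoxicillin": 1.5, "doxycycline": 0.1}        # assumed WHO DDD values

    population = 5.0e6   # hypothetical country
    days = 365

    total_ddd = sum(grams_dispensed[d] / grams_per_ddd[d] for d in grams_dispensed)
    rate = total_ddd / (population * days) * 1000
    print(f"{rate:.1f} DDD per 1,000 inhabitants per day")   # ~8.8 for these inputs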
Side effects
Antibiotics are screened for any negative effects before their approval for clinical use, and are usually considered safe and well tolerated. However, some antibiotics have been associated with a wide extent of adverse side effects ranging from mild to very severe depending on the type of antibiotic used, the microbes targeted, and the individual patient. Side effects may reflect the pharmacological or toxicological properties of the antibiotic or may involve hypersensitivity or allergic reactions. Adverse effects range from fever and nausea to major allergic reactions, including photodermatitis and anaphylaxis.
Common side effects of oral antibiotics include diarrhea, resulting from disruption of the species composition in the intestinal flora, resulting, for example, in overgrowth of pathogenic bacteria, such as Clostridium difficile. Taking probiotics during the course of antibiotic treatment can help prevent antibiotic-associated diarrhea. Antibacterials can also affect the vaginal flora, and may lead to overgrowth of yeast species of the genus Candida in the vulvo-vaginal area. Additional side effects can result from interaction with other drugs, such as the possibility of tendon damage from the administration of a quinolone antibiotic with a systemic corticosteroid.
Some antibiotics may also damage the mitochondrion, a bacteria-derived organelle found in eukaryotic, including human, cells. Mitochondrial damage causes oxidative stress in cells and has been suggested as a mechanism for side effects from fluoroquinolones. Antibiotics are also known to affect chloroplasts.
Interactions
Birth control pills
There are few well-controlled studies on whether antibiotic use increases the risk of oral contraceptive failure. The majority of studies indicate antibiotics do not interfere with birth control pills; clinical studies suggest the failure rate of contraceptive pills caused by antibiotics is very low (about 1%). Situations that may increase the risk of oral contraceptive failure include non-compliance (missing pills), vomiting, or diarrhea, as well as gastrointestinal disorders or interpatient variability in oral contraceptive absorption that affect ethinylestradiol serum levels. Women with menstrual irregularities may be at higher risk of failure and should be advised to use backup contraception during antibiotic treatment and for one week after its completion. If patient-specific risk factors for reduced oral contraceptive efficacy are suspected, backup contraception is recommended.
In cases where antibiotics have been suggested to affect the efficacy of birth control pills, such as for the broad-spectrum antibiotic rifampicin, the effect may be due to an increase in the activity of hepatic enzymes causing increased breakdown of the pill's active ingredients. Effects on the intestinal flora, which might result in reduced absorption of estrogens in the colon, have also been suggested, but such suggestions have been inconclusive and controversial. Clinicians have recommended that extra contraceptive measures be applied during therapies using antibiotics that are suspected to interact with oral contraceptives. More studies on the possible interactions between antibiotics and birth control pills (oral contraceptives) are required, as well as careful assessment of patient-specific risk factors for potential oral contraceptive pill failure, before dismissing the need for backup contraception.
Alcohol
Interactions between alcohol and certain antibiotics may occur and may cause side effects and decreased effectiveness of antibiotic therapy. While moderate alcohol consumption is unlikely to interfere with many common antibiotics, there are specific types of antibiotics with which alcohol consumption may cause serious side effects. Therefore, potential risks of side effects and effectiveness depend on the type of antibiotic administered.
Antibiotics such as metronidazole, tinidazole, cephamandole, latamoxef, cefoperazone, cefmenoxime, and furazolidone, cause a disulfiram-like chemical reaction with alcohol by inhibiting its breakdown by acetaldehyde dehydrogenase, which may result in vomiting, nausea, and shortness of breath. In addition, the efficacy of doxycycline and erythromycin succinate may be reduced by alcohol consumption. Other effects of alcohol on antibiotic activity include altered activity of the liver enzymes that break down the antibiotic compound.
Pharmacodynamics
The successful outcome of antimicrobial therapy with antibacterial compounds depends on several factors. These include host defense mechanisms, the location of infection, and the pharmacokinetic and pharmacodynamic properties of the antibacterial. The bactericidal activity of antibacterials may depend on the bacterial growth phase, and it often requires ongoing metabolic activity and division of bacterial cells. These findings are based on laboratory studies, and have also been shown to hold in clinical settings where bacterial infection is eliminated. Since the activity of antibacterials frequently depends on their concentration, in vitro characterization of antibacterial activity commonly includes the determination of the minimum inhibitory concentration and minimum bactericidal concentration of an antibacterial.
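In practice, the minimum inhibitory concentration (MIC) is usually read off a two-fold broth-dilution series as the lowest concentration showing no visible growth; a minimal sketch with made-up data:

    # Read a minimum inhibitory concentration (MIC) from a dilution series.
    # Maps antibiotic concentration (ug/mL) to whether visible growth occurred.
    dilution_series = {64: False, 32: False, 16: False, 8: True, 4: True, 2: True}

    def mic(series):
        """Lowest concentration that completely inhibits visible growth."""
        inhibitory = [conc for conc, grew in series.items() if not grew]
        return min(inhibitory) if inhibitory else None

    print(f"MIC = {mic(dilution_series)} ug/mL")   # 16 ug/mL for this example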
To predict clinical outcome, the antimicrobial activity of an antibacterial is usually combined with its pharmacokinetic profile, and several pharmacological parameters are used as markers of drug efficacy.
Combination therapy
In important infectious diseases, including tuberculosis, combination therapy (i.e., the concurrent application of two or more antibiotics) has been used to delay or prevent the emergence of resistance. In acute bacterial infections, antibiotics as part of combination therapy are prescribed for their synergistic effects to improve treatment outcome as the combined effect of both antibiotics is better than their individual effect. Fosfomycin has the highest number of synergistic combinations among antibiotics and is almost always used as a partner drug. Methicillin-resistant Staphylococcus aureus infections may be treated with a combination therapy of fusidic acid and rifampicin. Antibiotics used in combination may also be antagonistic and the combined effects of the two antibiotics may be less than if one of the antibiotics was given as a monotherapy. For example, chloramphenicol and tetracyclines are antagonists to penicillins. However, this can vary depending on the species of bacteria. In general, combinations of a bacteriostatic antibiotic and bactericidal antibiotic are antagonistic.
In addition to combining one antibiotic with another, antibiotics are sometimes co-administered with resistance-modifying agents. For example, β-lactam antibiotics may be used in combination with β-lactamase inhibitors, such as clavulanic acid or sulbactam, when a patient is infected with a β-lactamase-producing strain of bacteria.
Classes
Antibiotics are commonly classified based on their mechanism of action, chemical structure, or spectrum of activity. Most target bacterial functions or growth processes. Those that target the bacterial cell wall (penicillins and cephalosporins) or the cell membrane (polymyxins), or interfere with essential bacterial enzymes (rifamycins, lipiarmycins, quinolones, and sulfonamides) have bactericidal activities, killing the bacteria. Protein synthesis inhibitors (macrolides, lincosamides, and tetracyclines) are usually bacteriostatic, inhibiting further growth (with the exception of bactericidal aminoglycosides). Further categorization is based on their target specificity. "Narrow-spectrum" antibiotics target specific types of bacteria, such as gram-negative or gram-positive, whereas broad-spectrum antibiotics affect a wide range of bacteria. Following a 40-year break in discovering classes of antibacterial compounds, four new classes of antibiotics were introduced to clinical use in the late 2000s and early 2010s: cyclic lipopeptides (such as daptomycin), glycylcyclines (such as tigecycline), oxazolidinones (such as linezolid), and lipiarmycins (such as fidaxomicin).
Production
With advances in medicinal chemistry, most modern antibacterials are semisynthetic modifications of various natural compounds. These include, for example, the beta-lactam antibiotics, which include the penicillins (produced by fungi in the genus Penicillium), the cephalosporins, and the carbapenems. Compounds that are still isolated from living organisms are the aminoglycosides, whereas other antibacterials—for example, the sulfonamides, the quinolones, and the oxazolidinones—are produced solely by chemical synthesis. Many antibacterial compounds are relatively small molecules with a molecular weight of less than 1000 daltons.
Since the first pioneering efforts of Howard Florey and Chain in 1939, the importance of antibiotics, including antibacterials, to medicine has led to intense research into producing antibacterials at large scales. Following screening of antibacterials against a wide range of bacteria, production of the active compounds is carried out using fermentation, usually in strongly aerobic conditions.
Resistance
The emergence of antibiotic-resistant bacteria is a common phenomenon caused mainly by the overuse and misuse of antibiotics. It represents a threat to health globally.
Emergence of resistance often reflects evolutionary processes that take place during antibiotic therapy. The antibiotic treatment may select for bacterial strains with physiologically or genetically enhanced capacity to survive high doses of antibiotics. Under certain conditions, it may result in preferential growth of resistant bacteria, while growth of susceptible bacteria is inhibited by the drug. For example, antibacterial selection for strains having previously acquired antibacterial-resistance genes was demonstrated in 1943 by the Luria–Delbrück experiment. Antibiotics such as penicillin and erythromycin, which used to have a high efficacy against many bacterial species and strains, have become less effective, due to the increased resistance of many bacterial strains.
Resistance may take the form of biodegradation of pharmaceuticals, such as sulfamethazine-degrading soil bacteria introduced to sulfamethazine through medicated pig feces.
The survival of bacteria often results from an inheritable resistance, but the growth of resistance to antibacterials also occurs through horizontal gene transfer. Horizontal transfer is more likely to happen in locations of frequent antibiotic use.
Antibacterial resistance may impose a biological cost, thereby reducing fitness of resistant strains, which can limit the spread of antibacterial-resistant bacteria, for example, in the absence of antibacterial compounds. Additional mutations, however, may compensate for this fitness cost and can aid the survival of these bacteria.
Paleontological data show that both antibiotics and the mechanisms of antibiotic resistance are ancient. Useful antibiotic targets are those for which mutations negatively impact bacterial reproduction or viability.
Several molecular mechanisms of antibacterial resistance exist. Intrinsic antibacterial resistance may be part of the genetic makeup of bacterial strains. For example, an antibiotic target may be absent from the bacterial genome. Acquired resistance results from a mutation in the bacterial chromosome or the acquisition of extra-chromosomal DNA. Antibacterial-producing bacteria have evolved resistance mechanisms that have been shown to be similar to, and may have been transferred to, antibacterial-resistant strains. The spread of antibacterial resistance often occurs through vertical transmission of mutations during growth and by genetic recombination of DNA by horizontal genetic exchange. For instance, antibacterial resistance genes can be exchanged between different bacterial strains or species via plasmids that carry these resistance genes. Plasmids that carry several different resistance genes can confer resistance to multiple antibacterials. Cross-resistance to several antibacterials may also occur when a resistance mechanism encoded by a single gene conveys resistance to more than one antibacterial compound.
Antibacterial-resistant strains and species, sometimes referred to as "superbugs", now contribute to the emergence of diseases that were, for a while, well controlled. For example, emergent bacterial strains causing tuberculosis that are resistant to previously effective antibacterial treatments pose many therapeutic challenges. Every year, nearly half a million new cases of multidrug-resistant tuberculosis (MDR-TB) are estimated to occur worldwide. For example, NDM-1 is a newly identified enzyme conveying bacterial resistance to a broad range of beta-lactam antibacterials. The United Kingdom's Health Protection Agency has stated that "most isolates with NDM-1 enzyme are resistant to all standard intravenous antibiotics for treatment of severe infections." On 26 May 2016, an E. coli "superbug" was identified in the United States resistant to colistin, "the last line of defence" antibiotic.
In recent years, even anaerobic bacteria, historically considered less concerning in terms of resistance, have demonstrated high rates of antibiotic resistance, particularly Bacteroides, for which resistance rates to penicillin have been reported to exceed 90%.
Misuse
Per The ICU Book, "The first rule of antibiotics is to try not to use them, and the second rule is try not to use too many of them." Inappropriate antibiotic treatment and overuse of antibiotics have contributed to the emergence of antibiotic-resistant bacteria. However, potential harm from antibiotics extends beyond selection of antimicrobial resistance, and their overuse is associated with adverse effects for patients themselves, seen most clearly in critically ill patients in intensive care units. Self-prescribing of antibiotics is an example of misuse. Many antibiotics are frequently prescribed to treat symptoms or diseases that do not respond to antibiotics or that are likely to resolve without treatment. Also, incorrect or suboptimal antibiotics are prescribed for certain bacterial infections. The overuse of antibiotics, like penicillin and erythromycin, has been associated with emerging antibiotic resistance since the 1950s. Widespread usage of antibiotics in hospitals has also been associated with increases in bacterial strains and species that no longer respond to treatment with the most common antibiotics.
Common forms of antibiotic misuse include excessive use of prophylactic antibiotics in travelers and failure of medical professionals to prescribe the correct dosage of antibiotics on the basis of the patient's weight and history of prior use. Other forms of misuse include failure to take the entire prescribed course of the antibiotic, incorrect dosage and administration, or failure to rest for sufficient recovery. Inappropriate antibiotic treatment, for example, is their prescription to treat viral infections such as the common cold. One study on respiratory tract infections found "physicians were more likely to prescribe antibiotics to patients who appeared to expect them". Multifactorial interventions aimed at both physicians and patients can reduce inappropriate prescription of antibiotics. The lack of rapid point of care diagnostic tests, particularly in resource-limited settings is considered one of the drivers of antibiotic misuse.
Several organizations concerned with antimicrobial resistance are lobbying to eliminate the unnecessary use of antibiotics. The issues of misuse and overuse of antibiotics have been addressed by the formation of the US Interagency Task Force on Antimicrobial Resistance. This task force aims to actively address antimicrobial resistance, and is coordinated by the US Centers for Disease Control and Prevention, the Food and Drug Administration (FDA), and the National Institutes of Health, as well as other US agencies. A non-governmental organization campaign group is Keep Antibiotics Working. In France, an "Antibiotics are not automatic" government campaign started in 2002 and led to a marked reduction of unnecessary antibiotic prescriptions, especially in children.
The emergence of antibiotic resistance has prompted restrictions on their use in the UK in 1970 (Swann report 1969), and the European Union has banned the use of antibiotics as growth-promotional agents since 2003. Moreover, several organizations (including the World Health Organization, the National Academy of Sciences, and the U.S. Food and Drug Administration) have advocated restricting the amount of antibiotic use in food animal production. However, commonly there are delays in regulatory and legislative actions to limit the use of antibiotics, attributable partly to resistance against such regulation by industries using or selling antibiotics, and to the time required for research to test causal links between their use and resistance to them. Two federal bills (S.742 and H.R. 2562) aimed at phasing out nontherapeutic use of antibiotics in US food animals were proposed, but have not passed. These bills were endorsed by public health and medical organizations, including the American Holistic Nurses' Association, the American Medical Association, and the American Public Health Association.
Despite pledges by food companies and restaurants to reduce or eliminate meat that comes from animals treated with antibiotics, the purchase of antibiotics for use on farm animals has been increasing every year.
There has been extensive use of antibiotics in animal husbandry. In the United States, the question of emergence of antibiotic-resistant bacterial strains due to use of antibiotics in livestock was raised by the US Food and Drug Administration (FDA) in 1977. In March 2012, the United States District Court for the Southern District of New York, ruling in an action brought by the Natural Resources Defense Council and others, ordered the FDA to revoke approvals for the use of antibiotics in livestock, which violated FDA regulations.
Studies have shown that common misconceptions about the effectiveness and necessity of antibiotics to treat common mild illnesses contribute to their overuse.
Other forms of antibiotic-associated harm include anaphylaxis, drug toxicity (most notably kidney and liver damage), and superinfections with resistant organisms. Antibiotics are also known to affect mitochondrial function, and this may contribute to the bioenergetic failure of immune cells seen in sepsis. They also alter the microbiome of the gut, lungs and skin, which may be associated with adverse effects such as Clostridium difficile-associated diarrhoea. Whilst antibiotics can clearly be lifesaving in patients with bacterial infections, their overuse, especially in patients whose infections are hard to diagnose, can lead to harm via multiple mechanisms.
History
Before the early 20th century, treatments for infections were based primarily on medicinal folklore. Mixtures with antimicrobial properties that were used in treatments of infections were described over 2,000 years ago. Many ancient cultures, including the ancient Egyptians and ancient Greeks, used specially selected mold and plant materials to treat infections. Nubian mummies studied in the 1990s were found to contain significant levels of tetracycline. The beer brewed at that time was conjectured to have been the source.
The use of antibiotics in modern medicine began with the discovery of synthetic antibiotics derived from dyes. Various essential oils have also been shown to have antimicrobial properties, and the plants from which these oils are derived can be used as niche antimicrobial agents.
Synthetic antibiotics derived from dyes
Synthetic antibiotic chemotherapy as a science and development of antibacterials began in Germany with Paul Ehrlich in the late 1880s. Ehrlich noted certain dyes would colour human, animal, or bacterial cells, whereas others did not. He then proposed the idea that it might be possible to create chemicals that would act as a selective drug that would bind to and kill bacteria without harming the human host. After screening hundreds of dyes against various organisms, in 1907, he discovered a medicinally useful drug, the first synthetic antibacterial organoarsenic compound salvarsan, now called arsphenamine.
This heralded the era of antibacterial treatment that was begun with the discovery of a series of arsenic-derived synthetic antibiotics by both Alfred Bertheim and Ehrlich in 1907. Ehrlich and Bertheim had experimented with various chemicals derived from dyes to treat trypanosomiasis in mice and spirochaeta infection in rabbits. While their early compounds were too toxic, Ehrlich and Sahachiro Hata, a Japanese bacteriologist working with Ehrlich in the quest for a drug to treat syphilis, achieved success with the 606th compound in their series of experiments. In 1910, Ehrlich and Hata announced their discovery, which they called drug "606", at the Congress for Internal Medicine at Wiesbaden. The Hoechst company began to market the compound toward the end of 1910 under the name Salvarsan, now known as arsphenamine. The drug was used to treat syphilis in the first half of the 20th century. In 1908, Ehrlich received the Nobel Prize in Physiology or Medicine for his contributions to immunology. Hata was nominated for the Nobel Prize in Chemistry in 1911 and for the Nobel Prize in Physiology or Medicine in 1912 and 1913.
The first sulfonamide and the first systemically active antibacterial drug, Prontosil, was developed by a research team led by Gerhard Domagk in 1932 or 1933 at the Bayer Laboratories of the IG Farben conglomerate in Germany, for which Domagk received the 1939 Nobel Prize in Physiology or Medicine. Sulfanilamide, the active drug of Prontosil, was not patentable as it had already been in use in the dye industry for some years. Prontosil had a relatively broad effect against Gram-positive cocci, but not against enterobacteria. Research was stimulated apace by its success. The discovery and development of this sulfonamide drug opened the era of antibacterials.
Penicillin and other natural antibiotics
Observations about the growth of some microorganisms inhibiting the growth of other microorganisms have been reported since the late 19th century. These observations of antibiosis between microorganisms led to the discovery of natural antibacterials. Louis Pasteur observed, "if we could intervene in the antagonism observed between some bacteria, it would offer perhaps the greatest hopes for therapeutics".
In 1874, physician Sir William Roberts noted that cultures of the mould Penicillium glaucum that is used in the making of some types of blue cheese did not display bacterial contamination. In 1876, physicist John Tyndall also contributed to this field.
In 1895, the Italian physician Vincenzo Tiberio published a paper on the antibacterial power of some extracts of mould.
In 1897, doctoral student Ernest Duchesne submitted a dissertation, "Contribution à l'étude de la concurrence vitale chez les micro-organismes: antagonisme entre les moisissures et les microbes" (Contribution to the study of vital competition in micro-organisms: antagonism between moulds and microbes), the first known scholarly work to consider the therapeutic capabilities of moulds resulting from their anti-microbial activity. In his thesis, Duchesne proposed that bacteria and moulds engage in a perpetual battle for survival. Duchesne observed that E. coli was eliminated by Penicillium glaucum when they were both grown in the same culture. He also observed that when he inoculated laboratory animals with lethal doses of typhoid bacilli together with Penicillium glaucum, the animals did not contract typhoid. Duchesne's army service after getting his degree prevented him from doing any further research. Duchesne died of tuberculosis, a disease now treated by antibiotics.
In 1928, Sir Alexander Fleming postulated the existence of penicillin, a molecule produced by certain moulds that kills or stops the growth of certain kinds of bacteria. Fleming was working on a culture of disease-causing bacteria when he noticed the spores of a green mold, Penicillium rubens, in one of his culture plates. He observed that the presence of the mould killed or prevented the growth of the bacteria. Fleming postulated that the mould must secrete an antibacterial substance, which he named penicillin in 1928. Fleming believed that its antibacterial properties could be exploited for chemotherapy. He initially characterised some of its biological properties, and attempted to use a crude preparation to treat some infections, but he was unable to pursue its further development without the aid of trained chemists.
Ernst Chain, Howard Florey and Edward Abraham succeeded in purifying the first penicillin, penicillin G, in 1942, but it did not become widely available outside the Allied military before 1945. Later, Norman Heatley developed the back extraction technique for efficiently purifying penicillin in bulk. The chemical structure of penicillin was first proposed by Abraham in 1942 and then later confirmed by Dorothy Crowfoot Hodgkin in 1945. Purified penicillin displayed potent antibacterial activity against a wide range of bacteria and had low toxicity in humans. Furthermore, its activity was not inhibited by biological constituents such as pus, unlike the synthetic sulfonamides. The development of penicillin led to renewed interest in the search for antibiotic compounds with similar efficacy and safety. For their successful development of penicillin, which Fleming had accidentally discovered but could not develop himself, as a therapeutic drug, Chain and Florey shared the 1945 Nobel Prize in Medicine with Fleming.
Florey credited René Dubos with pioneering the approach of deliberately and systematically searching for antibacterial compounds, which had led to the discovery of gramicidin and had revived Florey's research in penicillin. In 1939, coinciding with the start of World War II, Dubos had reported the discovery of the first naturally derived antibiotic, tyrothricin, a compound of 20% gramicidin and 80% tyrocidine, from Bacillus brevis. It was one of the first commercially manufactured antibiotics and was very effective in treating wounds and ulcers during World War II. Gramicidin, however, could not be used systemically because of toxicity. Tyrocidine also proved too toxic for systemic usage. Research results obtained during that period were not shared between the Axis and the Allied powers during World War II, and access remained limited during the Cold War.
Late 20th century
During the mid-20th century, the number of new antibiotic substances introduced for medical use increased significantly. From 1935 to 1968, 12 new classes were launched. However, after this, the number of new classes dropped markedly, with only two new classes introduced between 1969 and 2003.
Antibiotic pipeline
Both the WHO and the Infectious Diseases Society of America report that the weak antibiotic pipeline does not match bacteria's increasing ability to develop resistance. The Infectious Diseases Society of America report noted that the number of new antibiotics approved for marketing per year had been declining and identified seven antibiotics against the Gram-negative bacilli currently in phase 2 or phase 3 clinical trials. However, these drugs did not address the entire spectrum of resistance of Gram-negative bacilli. According to the WHO, fifty-one new therapeutic entities (antibiotics, including combinations) were in phase 1-3 clinical trials as of May 2017. Antibiotics targeting multidrug-resistant Gram-positive pathogens remain a high priority.
A few antibiotics have received marketing authorization in the last seven years. The cephalosporin ceftaroline and the lipoglycopeptides oritavancin and telavancin have been approved for the treatment of acute bacterial skin and skin structure infection and community-acquired bacterial pneumonia. The lipoglycopeptide dalbavancin and the oxazolidinone tedizolid have also been approved for the treatment of acute bacterial skin and skin structure infection. The first in a new class of narrow-spectrum macrocyclic antibiotics, fidaxomicin, has been approved for the treatment of C. difficile colitis. New cephalosporin-lactamase inhibitor combinations, including ceftazidime-avibactam and ceftolozane-tazobactam, have also been approved for complicated urinary tract infection and intra-abdominal infection.
Possible improvements include clarification of clinical trial regulations by the FDA. Furthermore, appropriate economic incentives could persuade pharmaceutical companies to invest in this endeavor. In the US, the Antibiotic Development to Advance Patient Treatment (ADAPT) Act was introduced with the aim of fast-tracking the drug development of antibiotics to combat the growing threat of 'superbugs'. Under this Act, the FDA can approve antibiotics and antifungals treating life-threatening infections based on smaller clinical trials. The CDC will monitor the use of antibiotics and the emerging resistance, and publish the data. The FDA antibiotics labeling process, 'Susceptibility Test Interpretive Criteria for Microbial Organisms' or 'breakpoints', will provide accurate data to healthcare professionals. According to Allan Coukell, senior director for health programs at The Pew Charitable Trusts, "By allowing drug developers to rely on smaller datasets, and clarifying FDA's authority to tolerate a higher level of uncertainty for these drugs when making a risk/benefit calculation, ADAPT would make the clinical trials more feasible."
Replenishing the antibiotic pipeline and developing other new therapies
Because antibiotic-resistant bacterial strains continue to emerge and spread, there is a constant need to develop new antibacterial treatments. Current strategies include traditional chemistry-based approaches such as natural product-based drug discovery, newer chemistry-based approaches such as drug design, traditional biology-based approaches such as immunoglobulin therapy, and experimental biology-based approaches such as phage therapy, fecal microbiota transplants, antisense RNA-based treatments, and CRISPR-Cas9-based treatments.
Natural product-based antibiotic discovery
Most of the antibiotics in current use are natural products or natural product derivatives, and bacterial, fungal, plant and animal extracts are being screened in the search for new antibiotics. Organisms may be selected for testing based on ecological, ethnomedical, genomic, or historical rationales. Medicinal plants, for example, are screened on the basis that they are used by traditional healers to prevent or cure infection and may therefore contain antibacterial compounds. Also, soil bacteria are screened on the basis that, historically, they have been a very rich source of antibiotics (with 70 to 80% of antibiotics in current use derived from the actinomycetes).
In addition to screening natural products for direct antibacterial activity, they are sometimes screened for the ability to suppress antibiotic resistance and antibiotic tolerance. For example, some secondary metabolites inhibit drug efflux pumps, thereby increasing the concentration of antibiotic able to reach its cellular target and decreasing bacterial resistance to the antibiotic. Natural products known to inhibit bacterial efflux pumps include the alkaloid lysergol, the carotenoids capsanthin and capsorubin, and the flavonoids rotenone and chrysin. Other natural products, this time primary metabolites rather than secondary metabolites, have been shown to eradicate antibiotic tolerance. For example, glucose, mannitol, and fructose reduce antibiotic tolerance in Escherichia coli and Staphylococcus aureus, rendering them more susceptible to killing by aminoglycoside antibiotics.
Natural products may be screened for the ability to suppress bacterial virulence factors too. Virulence factors are molecules, cellular structures and regulatory systems that enable bacteria to evade the body's immune defenses (e.g. urease, staphyloxanthin), move towards, attach to, and/or invade human cells (e.g. type IV pili, adhesins, internalins), coordinate the activation of virulence genes (e.g. quorum sensing), and cause disease (e.g. exotoxins). Examples of natural products with antivirulence activity include the flavonoid epigallocatechin gallate (which inhibits listeriolysin O), the quinone tetrangomycin (which inhibits staphyloxanthin), and the sesquiterpene zerumbone (which inhibits Acinetobacter baumannii motility).
Immunoglobulin therapy
Antibodies (anti-tetanus immunoglobulin) have been used in the treatment and prevention of tetanus since the 1910s, and this approach continues to be a useful way of controlling bacterial diseases. The monoclonal antibody bezlotoxumab, for example, has been approved by the US FDA and EMA for recurrent Clostridium difficile infection, and other monoclonal antibodies are in development (e.g. AR-301 for the adjunctive treatment of S. aureus ventilator-associated pneumonia). Antibody treatments act by binding to and neutralizing bacterial exotoxins and other virulence factors.
Phage therapy
Phage therapy is under investigation as a method of treating antibiotic-resistant strains of bacteria. Phage therapy involves infecting bacterial pathogens with viruses. Bacteriophages have extremely specific host ranges, limited to particular bacteria; thus, unlike antibiotics, they do not disturb the host organism's intestinal microbiota. Bacteriophages, also known as phages, infect and kill bacteria primarily during lytic cycles. Phages insert their DNA into the bacterium, where it is transcribed and used to make new phages, after which the cell will lyse, releasing new phage that are able to infect and destroy further bacteria of the same strain. The high specificity of phage protects "good" bacteria from destruction.
Some disadvantages to the use of bacteriophages also exist, however. Bacteriophages may harbour virulence factors or toxic genes in their genomes and, prior to use, it may be prudent to identify genes with similarity to known virulence factors or toxins by genomic sequencing. In addition, the oral and IV administration of phages for the eradication of bacterial infections poses a much higher safety risk than topical application. Also, there is the additional concern of uncertain immune responses to these large antigenic cocktails.
There are considerable regulatory hurdles that must be cleared for such therapies. Despite numerous challenges, the use of bacteriophages as a replacement for antimicrobial agents against MDR pathogens that no longer respond to conventional antibiotics remains an attractive option.
Fecal microbiota transplants
Fecal microbiota transplants involve transferring the full intestinal microbiota from a healthy human donor (in the form of stool) to patients with C. difficile infection. Although this procedure has not been officially approved by the US FDA, its use is permitted under some conditions in patients with antibiotic-resistant C. difficile infection. Cure rates are around 90%, and work is underway to develop stool banks, standardized products, and methods of oral delivery. Fecal microbiota transplantation has also been used more recently for inflammatory bowel diseases.
Antisense RNA-based treatments
Antisense RNA-based treatment (also known as gene silencing therapy) involves (a) identifying bacterial genes that encode essential proteins (e.g. the Pseudomonas aeruginosa genes acpP, lpxC, and rpsJ), (b) synthesizing single stranded RNA that is complementary to the mRNA encoding these essential proteins, and (c) delivering the single stranded RNA to the infection site using cell-penetrating peptides or liposomes. The antisense RNA then hybridizes with the bacterial mRNA and blocks its translation into the essential protein. Antisense RNA-based treatment has been shown to be effective in in vivo models of P. aeruginosa pneumonia.
In addition to silencing essential bacterial genes, antisense RNA can be used to silence bacterial genes responsible for antibiotic resistance. For example, antisense RNA has been developed that silences the S. aureus mecA gene (the gene that encodes modified penicillin-binding protein 2a and renders S. aureus strains methicillin-resistant). Antisense RNA targeting mecA mRNA has been shown to restore the susceptibility of methicillin-resistant staphylococci to oxacillin in both in vitro and in vivo studies.
CRISPR-Cas9-based treatments
In the early 2000s, a system was discovered that enables bacteria to defend themselves against invading viruses. The system, known as CRISPR-Cas9, consists of (a) an enzyme that destroys DNA (the nuclease Cas9) and (b) the DNA sequences of previously encountered viral invaders (CRISPR). These viral DNA sequences enable the nuclease to target foreign (viral) rather than self (bacterial) DNA.
Although the function of CRISPR-Cas9 in nature is to protect bacteria, the DNA sequences in the CRISPR component of the system can be modified so that the Cas9 nuclease targets bacterial resistance genes or bacterial virulence genes instead of viral genes. The modified CRISPR-Cas9 system can then be administered to bacterial pathogens using plasmids or bacteriophages. This approach has successfully been used to silence antibiotic resistance and reduce the virulence of enterohemorrhagic E. coli in an in vivo model of infection.
Reducing the selection pressure for antibiotic resistance
In addition to developing new antibacterial treatments, it is important to reduce the selection pressure for the emergence and spread of antibiotic resistance. Strategies to accomplish this include well-established infection control measures such as infrastructure improvement (e.g. less crowded housing), better sanitation (e.g. safe drinking water and food) and vaccine development, other approaches such as antibiotic stewardship, and experimental approaches such as the use of prebiotics and probiotics to prevent infection. Antibiotic cycling, where antibiotics are alternated by clinicians to treat microbial diseases, has been proposed, but recent studies have found such strategies to be ineffective against antibiotic resistance.
Vaccines
Vaccines rely on immune modulation or augmentation. Vaccination either excites or reinforces the immune competence of a host to ward off infection, leading to the activation of macrophages, the production of antibodies, inflammation, and other classic immune reactions. Antibacterial vaccines have been responsible for a drastic reduction in global bacterial diseases. Vaccines made from attenuated whole cells or lysates have been replaced largely by less reactogenic, cell-free vaccines consisting of purified components, including capsular polysaccharides and their conjugates to protein carriers, as well as inactivated toxins (toxoids) and proteins.
See also
References
Further reading
External links
Anti-infective agents
|
https://en.wikipedia.org/wiki/Allomorph
|
In linguistics, an allomorph is a variant phonetic form of a morpheme, that is, of a unit of meaning that varies in sound and spelling without changing its meaning. The term allomorph describes the realization of phonological variations for a specific morpheme. The different allomorphs that a morpheme can become are governed by morphophonemic rules. These phonological rules determine what phonetic form, or specific pronunciation, a morpheme will take based on the phonological or morphological context in which it appears.
In English
English has several morphemes that vary in sound but not in meaning, such as past tense morphemes, plural morphemes, and negative morphemes.
Past tense allomorphs
For example, the English past tense morpheme -ed occurs in several allomorphs depending on its phonological environment, assimilating to the voicing of the previous segment or inserting a schwa after an alveolar stop:
as [ɪd] or [əd] in verbs whose stem ends with the alveolar stops /t/ or /d/, such as 'hunted' or 'banded'
as [t] in verbs whose stem ends with voiceless phonemes other than /t/, such as 'fished'
as [d] in verbs whose stem ends with voiced phonemes other than /d/, such as 'buzzed'
The "other than" restrictions above are typical for allomorphy. If the allomorphy conditions are ordered from most restrictive (in this case, after an alveolar stop) to least restrictive, the first matching case usually has precedence. Thus, the above conditions could be rewritten as follows:
as [ɪd] or [əd] when the stem ends with the alveolar stops /t/ or /d/
as [t] when the stem ends with voiceless phonemes
as [d] elsewhere
The allomorph [t] does not appear after stem-final /t/, although the latter is voiceless; this is explained by [ɪd] appearing in that environment, together with the fact that the environments are ordered. Likewise, the allomorph [d] does not appear after stem-final /d/ because the earlier clause for the [ɪd] allomorph has priority. And [d] does not appear after a stem-final voiceless phoneme because the preceding clause for [t] comes first.
Irregular past tense forms, such as "broke" or "was/were," can be seen as still more specific cases since they are confined to certain lexical items, such as the verb "break," which take priority over the general cases listed above.
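This first-match precedence can be made concrete with a small sketch. The following Python fragment is illustrative only: the phoneme classes and the tiny irregular lexicon are simplified assumptions, not a full phonology, but the ordering of the checks mirrors the ordering of the clauses above (and the same scheme extends to the plural and negative allomorphs described below).

```python
# Minimal sketch of ordered allomorph selection for the English past tense.
# Phonemes are represented as plain strings; the stem-final phoneme decides
# the allomorph unless a lexical exception (irregular verb) matches first.

IRREGULAR = {"break": "broke", "be": "was/were"}        # lexical items take priority
VOICELESS = {"p", "k", "f", "s", "sh", "ch", "th"}      # simplified phoneme class

def past_tense(verb: str, final_phoneme: str) -> str:
    if verb in IRREGULAR:               # most specific: lexical exceptions
        return IRREGULAR[verb]
    if final_phoneme in {"t", "d"}:     # next: after alveolar stops
        return verb + "-[ɪd]"
    if final_phoneme in VOICELESS:      # next: after voiceless phonemes
        return verb + "-[t]"
    return verb + "-[d]"                # elsewhere (least restrictive)

print(past_tense("hunt", "t"))    # hunt-[ɪd]
print(past_tense("fish", "sh"))   # fish-[t]
print(past_tense("buzz", "z"))    # buzz-[d]
print(past_tense("break", "k"))   # broke
```

Reordering the checks would wrongly assign [t] to stems ending in /t/, which is why the most restrictive environment must be tested first.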
Plural allomorphs
The plural morpheme for regular nouns in English is typically realized by adding an -s or -es to the end of the noun. However, the plural morpheme actually has three different allomorphs: [-s], [-z], and [-əz]. The specific pronunciation that a plural morpheme takes on is determined by the following morphological rules:
Assume that the basic form of the plural morpheme, /-z/, is [-z] ("bags" /bægz/)
The morpheme /-z/ becomes [-əz] by inserting an [ə] before [-z] when a noun ends in a sibilant ("buses" /bʌsəz/)
Change the morpheme /-z/ to a voiceless [-s] when a noun ends in a voiceless sound ("caps" /kæps/)
Negative allomorphs
In English, the negative prefix in- has three allomorphs: [ɪn-], [ɪŋ-], and [ɪm-]. The phonetic form that the negative morpheme /ɪn-/ uses is determined by the following morphological rules:
the negative morpheme /ɪn-/ becomes [ɪn-] when preceding an alveolar consonant ("intolerant"/ɪn'tɔlərənt/)
the morpheme /ɪn-/ becomes [ɪŋ-] before a velar consonant ("incongruous" /ɪŋ'kɔŋgruəs/)
the morpheme /ɪn-/ becomes [ɪm-] before a bilabial consonant ("improper" /ɪm'prɔpər/)
In Sami languages
The Sami languages have a trochaic pattern of alternating stressed and unstressed syllables. The vowels and consonants that are allowed in an unstressed syllable differ from those that are allowed in a stressed syllable. Consequently, every suffix and inflectional ending has two forms, and the form that is used depends on the stress pattern of the word to which it is attached. For example, Northern Sami has the causative verb suffix - in which - is selected when it would be the third syllable (and the preceding verb has two syllables), and - is selected when it would be the third and the fourth syllables (and the preceding verb has three syllables):
has two syllables and so when suffixed, the result is .
has three syllables and so when suffixed, the result is .
The same applies to inflectional patterns in the Sami languages as well, which are divided into even stems and odd stems.
Stem allomorphy
Allomorphy can also exist in stems or roots, as in Classical Sanskrit:
There are three allomorphs of the stem: vāk-, vāc-, and vāg-, which are conditioned by the particular case-marking suffixes.
The form of the stem vāk-, found in the nominative singular and locative plural, is the etymological form of the morpheme. Pre-Indic palatalization of velars resulted in the variant form vāc-, which was initially phonologically conditioned. The conditioning can still be seen in the locative singular form, in which the vāc- is followed by the high front vowel -i.
However, the subsequent merging of *e and *o into a made the alternation unpredictable on phonetic grounds in the genitive case (both singular and plural) as well as the nominative plural and the instrumental singular. Thus, allomorphy was no longer directly relatable to phonological processes.
Phonological conditioning also accounts for the form vāg- in the instrumental plural (vāgbhis), in which the g assimilates in voicing to the following bh.
History
The term was originally used to describe variations in chemical structure. It was first applied to language (in writing) in 1948, in Language XXIV.
See also
Null allomorph
Alternation (linguistics)
Allophone
Consonant mutation
Grassmann's law
Suppletion
References
Linguistic morphology
Morphemes
Linguistics terminology
|
https://en.wikipedia.org/wiki/Affix
|
In linguistics, an affix is a morpheme that is attached to a word stem to form a new word or word form. The two main categories are derivational and inflectional affixes. The former, such as un-, anti-, pre- and -ation, introduce a semantic change to the word they are attached to. The latter introduce a syntactic change, such as singular into plural (e.g. -(e)s), or present simple tense into present continuous or past tense (by adding -ing or -ed to a word). All of them are bound morphemes by definition; prefixes and suffixes may be separable affixes.
Adfixes, infixes and their variations
Changing a word by adding a morpheme at its beginning is called prefixation, in the middle is called infixation, and at the end is called suffixation.
Prefix and suffix may be subsumed under the term adfix, in contrast to infix.
When marking text for interlinear glossing, simple affixes such as prefixes and suffixes are separated from the stem with hyphens. Affixes which disrupt the stem, or which are themselves discontinuous, are often marked off with angle brackets. Reduplication is often shown with a tilde. Affixes which cannot be segmented are marked with a backslash.
Lexical affixes
Semantically speaking, lexical affixes or semantic affixes, when compared with free nouns, often have a more generic or general meaning. For example, one denoting "water in a general sense" may have no noun equivalent, because all the nouns denote more specific meanings such as "saltwater", "whitewater", and so on (in other cases, lexical suffixes have become grammaticalized to various degrees). Although they behave as incorporated noun roots/stems within verbs and as elements of nouns, they never occur as freestanding nouns. Lexical affixes are relatively rare and are used in the Wakashan, Salishan, and Chimakuan languages; their presence is an areal feature of the Pacific Northwest of North America, where they show little to no resemblance to free nouns with similar meanings. Compare the lexical suffixes and free nouns of Northern Straits Saanich, written in the Saanich orthography and in Americanist notation.
Some linguists have claimed that these lexical suffixes provide only adverbial or adjectival notions to verbs. Other linguists disagree arguing that they may additionally be syntactic arguments just as free nouns are and, thus, equating lexical suffixes with incorporated nouns. Gerdts (2003) gives examples of lexical suffixes in the Halkomelem language (the word order here is verb–subject–object):
(1) niʔ  šak’ʷ-ət-əs    łə słeniʔ   łə qeq
         VERB           SUBJ        OBJ
    "the woman washed the baby"

(2) niʔ  šk’ʷ-əyəł        łə słeniʔ
         VERB+LEX.SUFF    SUBJ
    "the woman baby-washed"
In sentence (1), the verb "wash" is šak’ʷətəs where šak’ʷ- is the root and -ət and -əs are inflectional suffixes. The subject "the woman" is łə słeniʔ and the object "the baby" is łə qeq. In this sentence, "the baby" is a free noun. (The niʔ here is an auxiliary, which can be ignored for explanatory purposes.)
In sentence (2), "baby" does not appear as a free noun. Instead it appears as the lexical suffix -əyəł which is affixed to the verb root šk’ʷ- (which has changed slightly in pronunciation, but this can also be ignored here). The lexical suffix is neither "the baby" (definite) nor "a baby" (indefinite); such referential changes are routine with incorporated nouns.
Orthographic affixes
In orthography, the terms for affixes may be used for the smaller elements of conjunct characters. For example, Maya glyphs are generally compounds of a main sign and smaller affixes joined at its margins. These are called prefixes, superfixes, postfixes, and subfixes according to their position to the left, on top, to the right, or at the bottom of the main glyph. A small glyph placed inside another is called an infix. Similar terminology is found with the conjunct consonants of the Indic alphabets. For example, the Tibetan alphabet utilizes prefix, suffix, superfix, and subfix consonant letters.
See also
Agglutination
Augmentative
Binary prefix
Clitic
Combining form
Concatenation
Diminutive
English prefixes
Family name affixes
Internet-related prefixes
Marker (linguistics)
Morphological derivation
Separable affix
SI prefix
Stemming - affix removal using computer software
Unpaired word
Word formation
References
Bibliography
Montler, Timothy. (1986). An outline of the morphology and phonology of Saanich, North Straits Salish. Occasional Papers in Linguistics (No. 4). Missoula, MT: University of Montana Linguistics Laboratory.
Montler, Timothy. (1991). Saanich, North Straits Salish classified word list. Canadian Ethnology service paper (No. 119); Mercury series. Hull, Quebec: Canadian Museum of Civilization.
External links
Comprehensive and searchable affix dictionary reference
Lexical units
Linguistics terminology
|
https://en.wikipedia.org/wiki/Allotropy
|
Allotropy or allotropism () is the property of some chemical elements to exist in two or more different forms, in the same physical state, known as allotropes of the elements. Allotropes are different structural modifications of an element: the atoms of the element are bonded together in different manners.
For example, the allotropes of carbon include diamond (the carbon atoms are bonded together to form a cubic lattice of tetrahedra), graphite (the carbon atoms are bonded together in sheets of a hexagonal lattice), graphene (single sheets of graphite), and fullerenes (the carbon atoms are bonded together in spherical, tubular, or ellipsoidal formations).
The term allotropy is used for elements only, not for compounds. The more general term, used for any compound, is polymorphism, although its use is usually restricted to solid materials such as crystals. Allotropy refers only to different forms of an element within the same physical phase (the state of matter, such as a solid, liquid or gas). The differences between these states of matter would not alone constitute examples of allotropy. Allotropes of chemical elements are frequently referred to as polymorphs or as phases of the element.
For some elements, allotropes have different molecular formulae or different crystalline structures, as well as a difference in physical phase; for example, two allotropes of oxygen (dioxygen, O2, and ozone, O3) can both exist in the solid, liquid and gaseous states. Other elements do not maintain distinct allotropes in different physical phases; for example, phosphorus has numerous solid allotropes, which all revert to the same P4 form when melted to the liquid state.
History
The concept of allotropy was originally proposed in 1840 by the Swedish scientist Baron Jöns Jakob Berzelius (1779–1848). The term is derived from the Greek ἄλλος (allos), meaning "other", and τρόπος (tropos), meaning "manner" or "form". After the acceptance of Avogadro's hypothesis in 1860, it was understood that elements could exist as polyatomic molecules, and two allotropes of oxygen were recognized as O2 and O3. In the early 20th century, it was recognized that other cases such as carbon were due to differences in crystal structure.
By 1912, Ostwald noted that the allotropy of elements is just a special case of the phenomenon of polymorphism known for compounds, and proposed that the terms allotrope and allotropy be abandoned and replaced by polymorph and polymorphism. Although many other chemists have repeated this advice, IUPAC and most chemistry texts still favour the usage of allotrope and allotropy for elements only.
Differences in properties of an element's allotropes
Allotropes are different structural forms of the same element and can exhibit quite different physical properties and chemical behaviours. The change between allotropic forms is triggered by the same forces that affect other structures, i.e., pressure, light, and temperature. Therefore, the stability of the particular allotropes depends on particular conditions. For instance, iron changes from a body-centered cubic structure (ferrite) to a face-centered cubic structure (austenite) above 912 °C, and tin undergoes a modification known as tin pest from a metallic form to a semiconductor form below 13.2 °C (55.8 °F). As an example of allotropes having different chemical behaviour, ozone (O3) is a much stronger oxidizing agent than dioxygen (O2).
List of allotropes
Typically, elements capable of variable coordination number and/or oxidation states tend to exhibit greater numbers of allotropic forms. Another contributing factor is the ability of an element to catenate.
Examples of allotropes include:
Non-metals
Metalloids
Metals
Among the metallic elements that occur in nature in significant quantities (56 up to U, without Tc and Pm), almost half (27) are allotropic at ambient pressure: Li, Be, Na, Ca, Ti, Mn, Fe, Co, Sr, Y, Zr, Sn, La, Ce, Pr, Nd, Sm, Gd, Tb, Dy, Yb, Hf, Tl, Th, Pa and U. Some phase transitions between allotropic forms of technologically relevant metals are those of Ti at 882 °C, Fe at 912 °C and 1394 °C, Co at 422 °C, Zr at 863 °C, Sn at 13 °C and U at 668 °C and 776 °C.
Lanthanides and actinides
Cerium, samarium, dysprosium and ytterbium have three allotropes.
Praseodymium, neodymium, gadolinium and terbium have two allotropes.
Plutonium has six distinct solid allotropes under "normal" pressures. Their densities vary within a ratio of some 4:3, which vastly complicates all kinds of work with the metal (particularly casting, machining, and storage). A seventh plutonium allotrope exists at very high pressures. The transuranium metals Np, Am, and Cm are also allotropic.
Promethium, americium, berkelium and californium have three allotropes each.
Nanoallotropes
In 2017, the concept of nanoallotropy was proposed by Rafal Klajn of the Organic Chemistry Department of the Weizmann Institute of Science. Nanoallotropes, or allotropes of nanomaterials, are nanoporous materials that have the same chemical composition (e.g., Au), but differ in their architecture at the nanoscale (that is, on a scale 10 to 100 times the dimensions of individual atoms). Such nanoallotropes may help create ultra-small electronic devices and find other industrial applications. The different nanoscale architectures translate into different properties, as was demonstrated for surface-enhanced Raman scattering performed on several different nanoallotropes of gold. A two-step method for generating nanoallotropes was also created.
See also
Isomer
Polymorphism (materials science)
Notes
References
External links
Allotropes – Chemistry Encyclopedia
Chemistry
Inorganic chemistry
Physical chemistry
|
https://en.wikipedia.org/wiki/Archimedes
|
Archimedes of Syracuse (c. 287 – c. 212 BC) was an Ancient Greek mathematician, physicist, engineer, astronomer, and inventor from the ancient city of Syracuse in Sicily. Although few details of his life are known, he is regarded as one of the leading scientists in classical antiquity. Considered the greatest mathematician of ancient history, and one of the greatest of all time, Archimedes anticipated modern calculus and analysis by applying the concept of the infinitely small and the method of exhaustion to derive and rigorously prove a range of geometrical theorems. These include the area of a circle, the surface area and volume of a sphere, the area of an ellipse, the area under a parabola, the volume of a segment of a paraboloid of revolution, the volume of a segment of a hyperboloid of revolution, and the area of a spiral.
Archimedes' other mathematical achievements include deriving an approximation of pi, defining and investigating the Archimedean spiral, and devising a system using exponentiation for expressing very large numbers. He was also one of the first to apply mathematics to physical phenomena, working on statics and hydrostatics. Archimedes' achievements in this area include a proof of the law of the lever, the widespread use of the concept of center of gravity, and the enunciation of the law of buoyancy or Archimedes' principle. He is also credited with designing innovative machines, such as his screw pump, compound pulleys, and defensive war machines to protect his native Syracuse from invasion.
Archimedes died during the siege of Syracuse, when he was killed by a Roman soldier despite orders that he should not be harmed. Cicero describes visiting Archimedes' tomb, which was surmounted by a sphere and a cylinder that Archimedes requested be placed there to represent his mathematical discoveries.
Unlike his inventions, Archimedes' mathematical writings were little known in antiquity. Mathematicians from Alexandria read and quoted him, but the first comprehensive compilation was not made until c. 530 AD by Isidore of Miletus in Byzantine Constantinople, while commentaries on the works of Archimedes by Eutocius in the 6th century opened them to wider readership for the first time. The relatively few copies of Archimedes' written work that survived through the Middle Ages were an influential source of ideas for scientists during the Renaissance and again in the 17th century, while the discovery in 1906 of previously lost works by Archimedes in the Archimedes Palimpsest has provided new insights into how he obtained mathematical results.
Biography
Archimedes was born c. 287 BC in the seaport city of Syracuse, Sicily, at that time a self-governing colony in Magna Graecia. The date of birth is based on a statement by the Byzantine Greek historian John Tzetzes that Archimedes lived for 75 years before his death in 212 BC. In the Sand-Reckoner, Archimedes gives his father's name as Phidias, an astronomer about whom nothing else is known. A biography of Archimedes was written by his friend Heracleides, but this work has been lost, leaving the details of his life obscure. It is unknown, for instance, whether he ever married or had children, or if he ever visited Alexandria, Egypt, during his youth. From his surviving written works, it is clear that he maintained collegiate relations with scholars based there, including his friend Conon of Samos and the head librarian Eratosthenes of Cyrene.
The standard versions of Archimedes' life were written long after his death by Greek and Roman historians. The earliest reference to Archimedes occurs in The Histories by Polybius (c. 200–118 BC), written about 70 years after his death. It sheds little light on Archimedes as a person, and focuses on the war machines that he is said to have built in order to defend the city from the Romans. Polybius remarks how, during the Second Punic War, Syracuse switched allegiances from Rome to Carthage, resulting in a military campaign under the command of Marcus Claudius Marcellus and Appius Claudius Pulcher, who besieged the city from 213 to 212 BC. He notes that the Romans underestimated Syracuse's defenses, and mentions several machines Archimedes designed, including improved catapults, crane-like machines that could be swung around in an arc, and other stone-throwers. Although the Romans ultimately captured the city, they suffered considerable losses due to Archimedes' inventiveness.
Cicero (106–43 BC) mentions Archimedes in some of his works. While serving as a quaestor in Sicily, Cicero found what was presumed to be Archimedes' tomb near the Agrigentine gate in Syracuse, in a neglected condition and overgrown with bushes. Cicero had the tomb cleaned up and was able to see the carving and read some of the verses that had been added as an inscription. The tomb carried a sculpture illustrating Archimedes' favorite mathematical proof, that the volume and surface area of the sphere are two-thirds that of an enclosing cylinder including its bases. He also mentions that Marcellus brought to Rome two planetariums Archimedes built. The Roman historian Livy (59 BC–17 AD) retells Polybius' story of the capture of Syracuse and Archimedes' role in it.
Plutarch (45–119 AD) wrote in his Parallel Lives that Archimedes was related to King Hiero II, the ruler of Syracuse. He also provides at least two accounts on how Archimedes died after the city was taken. According to the most popular account, Archimedes was contemplating a mathematical diagram when the city was captured. A Roman soldier commanded him to come and meet Marcellus, but he declined, saying that he had to finish working on the problem. This enraged the soldier, who killed Archimedes with his sword. Another story has Archimedes carrying mathematical instruments before being killed because a soldier thought they were valuable items. Marcellus was reportedly angered by Archimedes' death, as he considered him a valuable scientific asset (he called Archimedes "a geometrical Briareus") and had ordered that he should not be harmed.
The last words attributed to Archimedes are "Do not disturb my circles" (Latin, "Noli turbare circulos meos"; Katharevousa Greek, "μὴ μου τοὺς κύκλους τάραττε"), a reference to the mathematical drawing that he was supposedly studying when disturbed by the Roman soldier. There is no reliable evidence that Archimedes uttered these words, and they do not appear in Plutarch's account. A similar quotation is found in the work of Valerius Maximus (fl. 30 AD), who wrote in Memorable Doings and Sayings: "... but protecting the dust with his hands, said 'I beg of you, do not disturb this'".
Discoveries and inventions
Archimedes' principle
The most widely known anecdote about Archimedes tells of how he invented a method for determining the volume of an object with an irregular shape. According to Vitruvius, a crown for a temple had been made for King Hiero II of Syracuse, who supplied the pure gold to be used. The crown was likely made in the shape of a votive wreath. Archimedes was asked to determine whether some silver had been substituted by the goldsmith without damaging the crown, so he could not melt it down into a regularly shaped body in order to calculate its density.
In this account, Archimedes noticed while taking a bath that the level of the water in the tub rose as he got in, and realized that this effect could be used to determine the golden crown's volume. Archimedes was so excited by this discovery that he took to the streets naked, having forgotten to dress, crying "Eureka!" (Greek: εὕρηκα, heúrēka!, "I have found it!"). For practical purposes water is incompressible, so the submerged crown would displace an amount of water equal to its own volume. By dividing the mass of the crown by the volume of water displaced, its density could be obtained; if cheaper and less dense metals had been added, the density would be lower than that of gold. Archimedes found that this is what had happened, proving that silver had been mixed in.
The story of the golden crown does not appear anywhere in Archimedes' known works. The practicality of the method described has been called into question due to the extreme accuracy that would be required to measure water displacement. Archimedes may have instead sought a solution that applied the hydrostatics principle known as Archimedes' principle, found in his treatise On Floating Bodies: a body immersed in a fluid experiences a buoyant force equal to the weight of the fluid it displaces. Using this principle, it would have been possible to compare the density of the crown to that of pure gold by balancing it on a scale with a pure gold reference sample of the same weight, then immersing the apparatus in water. The difference in density between the two samples would cause the scale to tip accordingly. Galileo Galilei, who invented a hydrostatic balance in 1586 inspired by Archimedes' work, considered it "probable that this method is the same that Archimedes followed, since, besides being very accurate, it is based on demonstrations found by Archimedes himself."
Law of the lever
While Archimedes did not invent the lever, he gave a mathematical proof of the principle involved in his work On the Equilibrium of Planes. Earlier descriptions of the principle of the lever are found in a work by Euclid and in the Mechanical Problems, belonging to the Peripatetic school of the followers of Aristotle, the authorship of which has been attributed by some to Archytas.
There are several, often conflicting, reports regarding Archimedes' feats using the lever to lift very heavy objects. Plutarch describes how Archimedes designed block-and-tackle pulley systems, allowing sailors to use the principle of leverage to lift objects that would otherwise have been too heavy to move. According to Pappus of Alexandria, Archimedes' work on levers and his understanding of mechanical advantage caused him to remark: "Give me a place to stand on, and I will move the Earth". Olympiodorus later attributed the same boast to Archimedes' invention of the baroulkos, a kind of windlass, rather than the lever.
Archimedes' screw
A large part of Archimedes' work in engineering probably arose from fulfilling the needs of his home city of Syracuse. Athenaeus of Naucratis quotes a certain Moschion in a description on how King Hiero II commissioned the design of a huge ship, the Syracusia, which could be used for luxury travel, carrying supplies, and as a display of naval power. The Syracusia is said to have been the largest ship built in classical antiquity and, according to Moschion's account, it was launched by Archimedes. The ship presumably was capable of carrying 600 people and included garden decorations, a gymnasium, and a temple dedicated to the goddess Aphrodite among its facilities. The account also mentions that, in order to remove any potential water leaking through the hull, a device with a revolving screw-shaped blade inside a cylinder was designed by Archimedes.
Archimedes' screw was turned by hand, and could also be used to transfer water from a body of water into irrigation canals. The screw is still in use today for pumping liquids and granulated solids such as coal and grain. Described by Vitruvius, Archimedes' device may have been an improvement on a screw pump that was used to irrigate the Hanging Gardens of Babylon. The world's first seagoing steamship with a screw propeller was the SS Archimedes, which was launched in 1839 and named in honor of Archimedes and his work on the screw.
Archimedes' claw
Archimedes is said to have designed a claw as a weapon to defend the city of Syracuse. Also known as "the ship shaker", the claw consisted of a crane-like arm from which a large metal grappling hook was suspended. When the claw was dropped onto an attacking ship the arm would swing upwards, lifting the ship out of the water and possibly sinking it.
There have been modern experiments to test the feasibility of the claw, and in 2005 a television documentary entitled Superweapons of the Ancient World built a version of the claw and concluded that it was a workable device. Archimedes has also been credited with improving the power and accuracy of the catapult, and with inventing the odometer during the First Punic War. The odometer was described as a cart with a gear mechanism that dropped a ball into a container after each mile traveled.
Heat ray
Archimedes may have written a work on mirrors entitled Catoptrica, and later authors believed he might have used mirrors acting collectively as a parabolic reflector to burn ships attacking Syracuse. Lucian wrote, in the second century AD, that during the siege of Syracuse Archimedes destroyed enemy ships with fire. Almost four hundred years later, Anthemius of Tralles mentions, somewhat hesitantly, that Archimedes could have used burning-glasses as a weapon.
Often called the "Archimedes heat ray", the purported mirror arrangement focused sunlight onto approaching ships, presumably causing them to catch fire. In the modern era, similar devices have been constructed and may be referred to as a heliostat or solar furnace.
Archimedes' alleged heat ray has been the subject of an ongoing debate about its credibility since the Renaissance. René Descartes rejected it as false, while modern researchers have attempted to recreate the effect using only the means that would have been available to Archimedes, mostly with negative results. It has been suggested that a large array of highly polished bronze or copper shields acting as mirrors could have been employed to focus sunlight onto a ship, but the overall effect would have been to blind, dazzle, or distract the crew of the ship rather than to set it on fire.
Astronomical instruments
Archimedes discusses astronomical measurements of the Earth, Sun, and Moon, as well as Aristarchus' heliocentric model of the universe, in the Sand-Reckoner. Without the use of either trigonometry or a table of chords, Archimedes determines the Sun's apparent diameter by first describing the procedure and instrument used to make observations (a straight rod with pegs or grooves), applying correction factors to these measurements, and finally giving the result in the form of upper and lower bounds to account for observational error. Ptolemy, quoting Hipparchus, also references Archimedes' solstice observations in the Almagest. This would make Archimedes the first known Greek to have recorded multiple solstice dates and times in successive years.
Cicero's De re publica portrays a fictional conversation taking place in 129 BC. After the capture of Syracuse in the Second Punic War, Marcellus is said to have taken back to Rome two mechanisms which were constructed by Archimedes and which showed the motion of the Sun, Moon and five planets. Cicero also mentions similar mechanisms designed by Thales of Miletus and Eudoxus of Cnidus. The dialogue says that Marcellus kept one of the devices as his only personal loot from Syracuse, and donated the other to the Temple of Virtue in Rome. Marcellus' mechanism was demonstrated, according to Cicero, by Gaius Sulpicius Gallus to Lucius Furius Philus, who described it thus:
This is a description of a small planetarium. Pappus of Alexandria reports on a now-lost treatise by Archimedes dealing with the construction of these mechanisms, entitled On Sphere-Making. Modern research in this area has been focused on the Antikythera mechanism, another device built in the second or first century BC, probably designed with a similar purpose. Constructing mechanisms of this kind would have required a sophisticated knowledge of differential gearing. This was once thought to have been beyond the range of the technology available in ancient times, but the discovery of the Antikythera mechanism in 1902 has confirmed that devices of this kind were known to the ancient Greeks.
Mathematics
While he is often regarded as a designer of mechanical devices, Archimedes also made contributions to the field of mathematics. Plutarch wrote that Archimedes "placed his whole affection and ambition in those purer speculations where there can be no reference to the vulgar needs of life", though some scholars believe this may be a mischaracterization.
Method of exhaustion
Archimedes was able to use indivisibles (a precursor to infinitesimals) in a way that is similar to modern integral calculus. Through proof by contradiction (reductio ad absurdum), he could give answers to problems to an arbitrary degree of accuracy, while specifying the limits within which the answer lay. This technique is known as the method of exhaustion, and he employed it to approximate the areas of figures and the value of π.
In Measurement of a Circle, he did this by drawing a larger regular hexagon outside a circle, then a smaller regular hexagon inside the circle, and progressively doubling the number of sides of each regular polygon, calculating the length of a side of each polygon at each step. As the number of sides increases, the polygon becomes a more accurate approximation of a circle. After four such steps, when the polygons had 96 sides each, he was able to determine that the value of π lay between 3 1/7 (approx. 3.1429) and 3 10/71 (approx. 3.1408), consistent with its actual value of approximately 3.1416. He also proved that the area of a circle was equal to π multiplied by the square of the radius of the circle (πr²).
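The doubling procedure can be replayed numerically. The sketch below is a modern reconstruction using the standard perimeter recurrences for circumscribed and inscribed regular polygons; Archimedes himself worked with explicit rational bounds on the square roots involved rather than floating-point arithmetic.

```python
import math

# Perimeters of the circumscribed (a) and inscribed (b) regular hexagons
# for a circle of radius 1, whose circumference is 2*pi.
a = 4 * math.sqrt(3)   # circumscribed hexagon: 12/sqrt(3)
b = 6.0                # inscribed hexagon
sides = 6
for _ in range(4):            # 6 -> 12 -> 24 -> 48 -> 96 sides
    a = 2 * a * b / (a + b)   # circumscribed perimeter after doubling
    b = math.sqrt(a * b)      # inscribed perimeter after doubling
    sides *= 2
print(f"{sides}-gon: {b/2:.4f} < pi < {a/2:.4f}")
# 96-gon: 3.1410 < pi < 3.1427
```

Halving each perimeter gives a bound on π because the circumference of the unit circle is 2π; the 96-gon values straddle Archimedes' bounds of 3 10/71 and 3 1/7.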
Archimedean property
In On the Sphere and Cylinder, Archimedes postulates that any magnitude when added to itself enough times will exceed any given magnitude. Today this is known as the Archimedean property of real numbers.
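In modern notation, the postulate may be stated as follows (a standard modern formulation rather than Archimedes' own wording):

```latex
\forall x, y > 0 \;\; \exists n \in \mathbb{N} : \quad n x > y
```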
Archimedes gives the value of the square root of 3 as lying between 265/153 (approximately 1.7320261) and 1351/780 (approximately 1.7320512) in Measurement of a Circle. The actual value is approximately 1.7320508, making this a very accurate estimate. He introduced this result without offering any explanation of how he had obtained it. This aspect of the work of Archimedes caused John Wallis to remark that he was: "as it were of set purpose to have covered up the traces of his investigation as if he had grudged posterity the secret of his method of inquiry while he wished to extort from them assent to his results." It is possible that he used an iterative procedure to calculate these values.
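One conjectured reconstruction, offered here purely as an illustration and not as a documented method, is a Heron-style iteration carried out on exact fractions; starting from 5/3, it reproduces Archimedes' upper bound in two steps:

```python
from fractions import Fraction

# Heron's (Babylonian) iteration for sqrt(3): x -> (x + 3/x) / 2,
# computed with exact rational arithmetic.
x = Fraction(5, 3)
for _ in range(2):
    x = (x + 3 / x) / 2
    print(x, float(x))
# 26/15     1.7333333333333334
# 1351/780  1.7320512820512822  (Archimedes' upper bound)
```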
The infinite series
In Quadrature of the Parabola, Archimedes proved that the area enclosed by a parabola and a straight line is 4/3 times the area of a corresponding inscribed triangle. He expressed the solution to the problem as an infinite geometric series with the common ratio 1/4:
If the first term in this series is the area of the triangle, then the second is the sum of the areas of two triangles whose bases are the two smaller secant lines, and whose third vertex is where the line that is parallel to the parabola's axis and that passes through the midpoint of the base intersects the parabola, and so on. This proof uses a variation of the series 1/4 + 1/16 + 1/64 + ..., which sums to 1/3.
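In modern notation, the total area is the inscribed triangle's area T multiplied by the geometric series with ratio 1/4:

```latex
T \sum_{n=0}^{\infty} \left(\frac{1}{4}\right)^{n}
  = T \left(1 + \frac{1}{4} + \frac{1}{16} + \frac{1}{64} + \cdots\right)
  = \frac{4}{3}\,T
```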
Myriad of myriads
In The Sand Reckoner, Archimedes set out to calculate a number that was greater than the grains of sand needed to fill the universe. In doing so, he challenged the notion that the number of grains of sand was too large to be counted. He wrote: "There are some, King Gelo (Gelo II, son of Hiero II), who think that the number of the sand is infinite in multitude; and I mean by the sand not only that which exists about Syracuse and the rest of Sicily but also that which is found in every region whether inhabited or uninhabited." To solve the problem, Archimedes devised a system of counting based on the myriad. The word itself derives from the Greek μυριάς (murias), for the number 10,000. He proposed a number system using powers of a myriad of myriads (100 million, i.e., 10,000 x 10,000) and concluded that the number of grains of sand required to fill the universe would be 8 vigintillion, or 8×10⁶³.
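In modern notation, the building blocks of the system and the final estimate are:

```latex
\text{myriad} = 10^{4}, \qquad
\text{myriad of myriads} = 10^{4} \times 10^{4} = 10^{8}, \qquad
\text{grains of sand} \approx 8 \times 10^{63}
```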
Writings
The works of Archimedes were written in Doric Greek, the dialect of ancient Syracuse. Many written works by Archimedes have not survived or are only extant in heavily edited fragments; at least seven of his treatises are known to have existed due to references made by other authors. Pappus of Alexandria mentions On Sphere-Making and another work on polyhedra, while Theon of Alexandria quotes a remark about refraction from the Catoptrica.
Archimedes made his work known through correspondence with the mathematicians in Alexandria. The writings of Archimedes were first collected by the Byzantine Greek architect Isidore of Miletus (c. 530 AD), while commentaries on the works of Archimedes written by Eutocius in the sixth century AD helped to bring his work to a wider audience. Archimedes' work was translated into Arabic by Thābit ibn Qurra (836–901 AD), and into Latin via Arabic by Gerard of Cremona (c. 1114–1187). Direct Greek to Latin translations were later done by William of Moerbeke (c. 1215–1286) and Iacobus Cremonensis (c. 1400–1453).
During the Renaissance, the Editio princeps (First Edition) was published in Basel in 1544 by Johann Herwagen with the works of Archimedes in Greek and Latin.
Surviving works
The following are ordered chronologically based on new terminological and historical criteria set by Knorr (1978) and Sato (1986).
Measurement of a Circle
This is a short work consisting of three propositions. It is written in the form of a correspondence with Dositheus of Pelusium, who was a student of Conon of Samos. In Proposition II, Archimedes gives an approximation of the value of pi (π), showing that it is greater than 223/71 and less than 22/7.
The Sand Reckoner
In this treatise, also known as Psammites, Archimedes finds a number that is greater than the grains of sand needed to fill the universe. This book mentions the heliocentric theory of the solar system proposed by Aristarchus of Samos, as well as contemporary ideas about the size of the Earth and the distance between various celestial bodies. By using a system of numbers based on powers of the myriad, Archimedes concludes that the number of grains of sand required to fill the universe is 8×10^63 in modern notation. The introductory letter states that Archimedes' father was an astronomer named Phidias. The Sand Reckoner is the only surviving work in which Archimedes discusses his views on astronomy.
On the Equilibrium of Planes
There are two books to On the Equilibrium of Planes: the first contains seven postulates and fifteen propositions, while the second book contains ten propositions. In the first book, Archimedes proves the law of the lever, which states that: "Magnitudes are in equilibrium at distances reciprocally proportional to their weights."
Archimedes uses the principles derived to calculate the areas and centers of gravity of various geometric figures including triangles, parallelograms and parabolas.
Quadrature of the Parabola
In this work of 24 propositions addressed to Dositheus, Archimedes proves by two methods that the area enclosed by a parabola and a straight line is 4/3 the area of a triangle with equal base and height. He achieves this in one of his proofs by calculating the value of an infinite geometric series with the common ratio 1/4.
On the Sphere and Cylinder
In this two-volume treatise addressed to Dositheus, Archimedes obtains the result of which he was most proud, namely the relationship between a sphere and a circumscribed cylinder of the same height and diameter. The volume is (4/3)πr³ for the sphere, and 2πr³ for the cylinder. The surface area is 4πr² for the sphere, and 6πr² for the cylinder (including its two bases), where r is the radius of the sphere and cylinder.
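Both comparisons reduce to the same ratio, which is why the result is often summarised as the sphere being two-thirds of its circumscribing cylinder:

\[
\frac{V_{\text{sphere}}}{V_{\text{cylinder}}} = \frac{\frac{4}{3}\pi r^{3}}{2\pi r^{3}} = \frac{2}{3},
\qquad
\frac{A_{\text{sphere}}}{A_{\text{cylinder}}} = \frac{4\pi r^{2}}{6\pi r^{2}} = \frac{2}{3}.
\]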
On Spirals
This work of 28 propositions is also addressed to Dositheus. The treatise defines what is now called the Archimedean spiral. It is the locus of points corresponding to the locations over time of a point moving away from a fixed point with a constant speed along a line which rotates with constant angular velocity. Equivalently, in modern polar coordinates (r, θ), it can be described by the equation r = a + bθ with real numbers a and b.
This is an early example of a mechanical curve (a curve traced by a moving point) considered by a Greek mathematician.
On Conoids and Spheroids
This is a work in 32 propositions addressed to Dositheus. In this treatise Archimedes calculates the areas and volumes of sections of cones, spheres, and paraboloids.
On Floating Bodies
There are two books of On Floating Bodies. In the first book, Archimedes spells out the law of equilibrium of fluids and proves that water will adopt a spherical form around a center of gravity. This may have been an attempt at explaining the theory of contemporary Greek astronomers such as Eratosthenes that the Earth is round. The fluids described by Archimedes are not self-gravitating, since he assumes the existence of a point towards which all things fall in order to derive the spherical shape. Archimedes' principle of buoyancy is given in this work, stated as follows: "Any body wholly or partially immersed in fluid experiences an upthrust equal to, but opposite in direction to, the weight of the fluid displaced."
In the second part, he calculates the equilibrium positions of sections of paraboloids. This was probably an idealization of the shapes of ships' hulls. Some of his sections float with the base under water and the summit above water, similar to the way that icebergs float.
Ostomachion
Also known as Loculus of Archimedes or Archimedes' Box, this is a dissection puzzle similar to a Tangram, and the treatise describing it was found in more complete form in the Archimedes Palimpsest. Archimedes calculates the areas of the 14 pieces which can be assembled to form a square. Reviel Netz of Stanford University argued in 2003 that Archimedes was attempting to determine how many ways the pieces could be assembled into the shape of a square. Netz calculates that the pieces can be made into a square 17,152 ways. The number of arrangements is 536 when solutions that are equivalent by rotation and reflection are excluded. The puzzle represents an example of an early problem in combinatorics.
The origin of the puzzle's name is unclear; it has been suggested that it is taken from the Ancient Greek word for "throat" or "gullet", stomachos. Ausonius calls the puzzle Ostomachion, a Greek compound word formed from the roots of osteon ("bone") and machē ("fight").
The cattle problem
Gotthold Ephraim Lessing discovered this work in a Greek manuscript consisting of a 44-line poem in the Herzog August Library in Wolfenbüttel, Germany in 1773. It is addressed to Eratosthenes and the mathematicians in Alexandria. Archimedes challenges them to count the numbers of cattle in the Herd of the Sun by solving a number of simultaneous Diophantine equations. There is a more difficult version of the problem in which some of the answers are required to be square numbers. A. Amthor first solved this version of the problem in 1880, and the answer is a very large number, approximately 7.760271×10^206544.
The Method of Mechanical Theorems
This treatise was thought lost until the discovery of the Archimedes Palimpsest in 1906. In this work Archimedes uses indivisibles, and shows how breaking up a figure into an infinite number of infinitely small parts can be used to determine its area or volume. He may have considered this method lacking in formal rigor, so he also used the method of exhaustion to derive the results. As with The Cattle Problem, The Method of Mechanical Theorems was written in the form of a letter to Eratosthenes in Alexandria.
Apocryphal works
Archimedes' Book of Lemmas or Liber Assumptorum is a treatise with 15 propositions on the nature of circles. The earliest known copy of the text is in Arabic. T. L. Heath and Marshall Clagett argued that it cannot have been written by Archimedes in its current form, since it quotes Archimedes, suggesting modification by another author. The Lemmas may be based on an earlier work by Archimedes that is now lost.
It has also been claimed that the formula for calculating the area of a triangle from the length of its sides was known to Archimedes, though its first appearance is in the work of Heron of Alexandria in the 1st century AD. Other questionable attributions to Archimedes' work include the Latin poem Carmen de ponderibus et mensuris (4th or 5th century), which describes the use of a hydrostatic balance to solve the problem of the crown, and the 12th-century text Mappae clavicula, which contains instructions on how to perform assaying of metals by calculating their specific gravities.
Archimedes Palimpsest
The foremost document containing Archimedes' work is the Archimedes Palimpsest. In 1906, the Danish professor Johan Ludvig Heiberg visited Constantinople to examine a 174-page goatskin parchment of prayers, written in the 13th century, after reading a short transcription published seven years earlier by Papadopoulos-Kerameus. He confirmed that it was indeed a palimpsest, a document with text that had been written over an erased older work. Palimpsests were created by scraping the ink from existing works and reusing them, a common practice in the Middle Ages, as vellum was expensive. The older works in the palimpsest were identified by scholars as 10th-century copies of previously lost treatises by Archimedes. The parchment spent hundreds of years in a monastery library in Constantinople before being sold to a private collector in the 1920s. On 29 October 1998, it was sold at auction to an anonymous buyer for $2 million.
The palimpsest holds seven treatises, including the only surviving copy of On Floating Bodies in the original Greek. It is the only known source of The Method of Mechanical Theorems, referred to by Suidas and thought to have been lost forever. Stomachion was also discovered in the palimpsest, with a more complete analysis of the puzzle than had been found in previous texts. The palimpsest was stored at the Walters Art Museum in Baltimore, Maryland, where it was subjected to a range of modern tests including the use of ultraviolet and X-ray light to read the overwritten text. It has since returned to its anonymous owner.
The treatises in the Archimedes Palimpsest include:
On the Equilibrium of Planes
On Spirals
Measurement of a Circle
On the Sphere and Cylinder
On Floating Bodies
The Method of Mechanical Theorems
Stomachion
Speeches by the 4th century BC politician Hypereides
A commentary on Aristotle's Categories
Other works
Legacy
Sometimes called the father of mathematics and mathematical physics, Archimedes had a wide influence on mathematics and science.
Mathematics and physics
Historians of science and mathematics almost universally agree that Archimedes was the finest mathematician from antiquity. Eric Temple Bell, for instance, ranked him alongside Newton and Gauss as one of the three greatest mathematicians of all time.
Alfred North Whitehead, George F. Simmons, and Reviel Netz, Suppes Professor in Greek Mathematics and Astronomy at Stanford University and an expert on Archimedes, have expressed similarly emphatic assessments.
Leonardo da Vinci repeatedly expressed admiration for Archimedes, and attributed his invention Architonnerre to Archimedes. Galileo called him "superhuman" and "my master", while Huygens said, "I think Archimedes is comparable to no one", consciously emulating him in his early work. Leibniz said, "He who understands Archimedes and Apollonius will admire less the achievements of the foremost men of later times". Gauss's heroes were Archimedes and Newton, and Moritz Cantor, who studied under Gauss in the University of Göttingen, reported that he once remarked in conversation that "there had been only three epoch-making mathematicians: Archimedes, Newton, and Eisenstein".
The inventor Nikola Tesla also praised him.
Honors and commemorations
There is a crater on the Moon named Archimedes in his honor, as well as a lunar mountain range, the Montes Archimedes.
The Fields Medal for outstanding achievement in mathematics carries a portrait of Archimedes, along with a carving illustrating his proof on the sphere and the cylinder. The inscription around the head of Archimedes is a quote attributed to 1st century AD poet Manilius, which reads in Latin: Transire suum pectus mundoque potiri ("Rise above oneself and grasp the world").
Archimedes has appeared on postage stamps issued by East Germany (1973), Greece (1983), Italy (1983), Nicaragua (1971), San Marino (1982), and Spain (1963).
The exclamation of Eureka! attributed to Archimedes is the state motto of California. In this instance, the word refers to the discovery of gold near Sutter's Mill in 1848 which sparked the California Gold Rush.
See also
Concepts
Arbelos
Archimedean point
Archimedes' axiom
Archimedes number
Archimedes paradox
Archimedean solid
Archimedes' twin circles
Methods of computing square roots
Salinon
Steam cannon
Trammel of Archimedes
People
Diocles
Pseudo-Archimedes
Zhang Heng
References
Notes
Citations
Further reading
Boyer, Carl Benjamin. 1991. A History of Mathematics. New York: Wiley.
Clagett, Marshall. 1964–1984. Archimedes in the Middle Ages 1–5. Madison, WI: University of Wisconsin Press.
Dijksterhuis, Eduard J. [1938] 1987. Archimedes, translated. Princeton: Princeton University Press.
Gow, Mary. 2005. Archimedes: Mathematical Genius of the Ancient World. Enslow Publishing.
Hasan, Heather. 2005. Archimedes: The Father of Mathematics. Rosen Central.
Heath, Thomas L. 1897. Works of Archimedes. Dover Publications. Complete works of Archimedes in English.
Netz, Reviel, and William Noel. 2007. The Archimedes Codex. Orion Publishing Group.
Pickover, Clifford A. 2008. Archimedes to Hawking: Laws of Science and the Great Minds Behind Them. Oxford University Press.
Simms, Dennis L. 1995. Archimedes the Engineer. Continuum International Publishing Group.
Stein, Sherman. 1999. Archimedes: What Did He Do Besides Cry Eureka?. Mathematical Association of America.
External links
Heiberg's Edition of Archimedes. Texts in Classical Greek, with some in English.
The Archimedes Palimpsest project at The Walters Art Museum in Baltimore, Maryland
Testing the Archimedes steam cannon
3rd-century BC Greek people
3rd-century BC writers
People from Syracuse, Sicily
Ancient Greek engineers
Ancient Greek inventors
Ancient Greek geometers
Ancient Greek physicists
Hellenistic-era philosophers
Doric Greek writers
Sicilian Greeks
Mathematicians from Sicily
Scientists from Sicily
Ancient Greeks who were murdered
Ancient Syracusans
Fluid dynamicists
Buoyancy
280s BC births
210s BC deaths
Year of birth uncertain
Year of death uncertain
3rd-century BC mathematicians
3rd-century BC Syracusans
|
https://en.wikipedia.org/wiki/Antiprism
|
In geometry, an n-gonal antiprism or n-antiprism is a polyhedron composed of two parallel direct copies (not mirror images) of an n-sided polygon, connected by an alternating band of 2n triangles. They are represented by the Conway notation An.
Antiprisms are a subclass of prismatoids, and are a (degenerate) type of snub polyhedron.
Antiprisms are similar to prisms, except that the bases are twisted relative to each other, and that the side faces (connecting the bases) are triangles, rather than quadrilaterals.
The dual polyhedron of an n-gonal antiprism is an n-gonal trapezohedron.
History
At the intersection of modern-day graph theory and coding theory, the triangulation of a set of points has interested mathematicians since Isaac Newton, who fruitlessly sought a mathematical proof of the kissing number problem in 1694. The existence of antiprisms was discussed, and their name was coined, by Johannes Kepler, though it is possible that they were previously known to Archimedes, as they satisfy the same conditions on faces and on vertices as the Archimedean solids. According to Ericson and Zinoviev, Harold Scott MacDonald Coxeter wrote at length on the topic, and was among the first to apply the mathematics of Victor Schlegel to this field.
Knowledge in this field is "quite incomplete" and "was obtained fairly recently", i.e. in the 20th century. For example, as of 2001 it had been proven for only a limited number of non-trivial cases that the n-gonal antiprism is the mathematically optimal arrangement of points in the sense of maximizing the minimum Euclidean distance between any two points in the set: in 1943 by László Fejes Tóth for 4 and 6 points (digonal and trigonal antiprisms, which are Platonic solids); in 1951 by Kurt Schütte and Bartel Leendert van der Waerden for 8 points (tetragonal antiprism, which is not a cube).
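The 8-point case can be checked numerically. The Python sketch below is illustrative only (the rigorous optimality proof is Schütte and van der Waerden's); it compares the minimum pairwise distance of 8 points placed as a cube against 8 points placed as a square antiprism, both inscribed in the unit sphere:

import itertools, math

def min_pairwise_distance(points):
    return min(math.dist(p, q) for p, q in itertools.combinations(points, 2))

# Eight points as a cube inscribed in the unit sphere.
s = 1 / math.sqrt(3)
cube = [(x * s, y * s, z * s) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]

# Eight points as a square antiprism inscribed in the unit sphere, with
# the latitude chosen so base edges and lateral edges are equal.
z = math.sqrt(math.sqrt(2) / (4 + math.sqrt(2)))
r = math.sqrt(1 - z * z)
antiprism = [(r * math.cos(k * math.pi / 4), r * math.sin(k * math.pi / 4),
              z if k % 2 == 0 else -z) for k in range(8)]

print(min_pairwise_distance(cube))       # ~1.1547
print(min_pairwise_distance(antiprism))  # ~1.2156, strictly larger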
The chemical structure of binary compounds has been remarked to be in the family of antiprisms, especially those of the family of boron hydrides (in 1975) and carboranes, because they are isoelectronic. This conclusion was reached by studies of X-ray diffraction patterns, and stems from the 1971 work of Kenneth Wade, the namesake of Wade's rules of polyhedral skeletal electron pair theory.
Rare-earth metals such as the lanthanides form antiprismatic compounds with some of the halides or some of the iodides. The study of crystallography is useful here. Some lanthanides, when arranged in peculiar antiprismatic structures with chlorine and water, can form molecule-based magnets.
Right antiprism
For an antiprism with regular n-gon bases, one usually considers the case where these two copies are twisted by an angle of 180/n degrees.
The axis of a regular polygon is the line perpendicular to the polygon plane and passing through the polygon centre.
For an antiprism with congruent regular n-gon bases, twisted by an angle of 180/n degrees, more regularity is obtained if the bases are coaxial, i.e. (for non-coplanar bases) if the line connecting the base centers is perpendicular to the base planes. Then the antiprism is called a right antiprism, and its side faces are isosceles triangles.
Uniform antiprism
A uniform n-antiprism has two congruent regular n-gons as base faces, and 2n equilateral triangles as side faces.
Uniform antiprisms form an infinite class of vertex-transitive polyhedra, as do uniform prisms. For n = 2, we have the regular tetrahedron as a digonal antiprism (degenerate antiprism); for n = 3, the regular octahedron as a triangular antiprism (non-degenerate antiprism).
Schlegel diagrams
Cartesian coordinates
Cartesian coordinates for the vertices of a right n-antiprism (i.e. with regular n-gon bases and isosceles triangle side faces) are

  (cos(kπ/n), sin(kπ/n), (−1)^k h)

where 0 ≤ k ≤ 2n − 1;
if the n-antiprism is uniform (i.e. if the triangles are equilateral), then:

  2h² = cos(π/n) − cos(2π/n).
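Translated directly into Python (a minimal sketch; the function name antiprism_vertices is illustrative), the coordinates above give all 2n vertices, alternating between the planes z = +h and z = −h:

import math

def antiprism_vertices(n, h=None):
    # Vertices of a right n-antiprism whose bases have circumradius 1.
    # If h is not given, use the half-height that makes the side
    # triangles equilateral (the uniform antiprism):
    #   2*h**2 = cos(pi/n) - cos(2*pi/n)
    if h is None:
        h = math.sqrt((math.cos(math.pi / n) - math.cos(2 * math.pi / n)) / 2)
    return [(math.cos(k * math.pi / n),
             math.sin(k * math.pi / n),
             h if k % 2 == 0 else -h)
            for k in range(2 * n)]

# The uniform square antiprism (n = 4) has 8 vertices:
for v in antiprism_vertices(4):
    print(tuple(round(c, 4) for c in v))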
Volume and surface area
Let a be the edge-length of a uniform n-gonal antiprism; then the volume is

  V = ( n √(4cos²(π/2n) − 1) · sin(3π/2n) / (12 sin²(π/n)) ) · a³

and the surface area is

  A = (n/2)(cot(π/n) + √3) · a².

Furthermore, the volume of a right n-gonal antiprism with side length l of its bases and height h (the distance between the two base planes) is given by

  V = (n·h·l²/12)(csc(π/n) + 2 cot(π/n)).

Note that the volume of a right n-gonal prism with the same l and h is

  V = (n·h·l²/4) cot(π/n)

which is smaller than that of an antiprism.
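A minimal Python sketch of these formulas (function names are illustrative), using n = 3 as a sanity check, since the uniform triangular antiprism is the regular octahedron with unit-edge volume √2/3 ≈ 0.4714:

import math

def uniform_antiprism_metrics(n, a=1.0):
    # Volume and surface area of a uniform n-gonal antiprism, edge a.
    V = (n * math.sqrt(4 * math.cos(math.pi / (2 * n)) ** 2 - 1)
         * math.sin(3 * math.pi / (2 * n))
         / (12 * math.sin(math.pi / n) ** 2)) * a ** 3
    A = (n / 2) * (1 / math.tan(math.pi / n) + math.sqrt(3)) * a ** 2
    return V, A

def right_antiprism_volume(n, l, h):
    # Volume of a right n-gonal antiprism: base side l, height h.
    s = math.pi / n
    return n * h * l ** 2 * (1 / math.sin(s) + 2 / math.tan(s)) / 12

print(uniform_antiprism_metrics(3))        # (~0.4714, ~3.4641): octahedron
print(right_antiprism_volume(4, 1.0, 1.0)) # ~1.1381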
Related polyhedra
There are an infinite set of truncated antiprisms, including a lower-symmetry form of the truncated octahedron (truncated triangular antiprism). These can be alternated to create snub antiprisms, two of which are Johnson solids, and the snub triangular antiprism is a lower symmetry form of the regular icosahedron.
Four-dimensional antiprisms can be defined as having two dual polyhedra as parallel opposite faces, so that each three-dimensional face between them comes from two dual parts of the polyhedra: a vertex and a dual polygon, or two dual edges. Every three-dimensional polyhedron is combinatorially equivalent to one of the two opposite faces of a four-dimensional antiprism, constructed from its canonical polyhedron and its polar dual. However, there exist four-dimensional polyhedra that cannot be combined with their duals to form five-dimensional antiprisms.
Symmetry
The symmetry group of a right n-antiprism (i.e. with regular bases and isosceles side faces) is D_nd of order 4n, except in the cases of:
n = 2: the regular tetrahedron, which has the larger symmetry group T_d of order 24, which has three versions of D_2d as subgroups;
n = 3: the regular octahedron, which has the larger symmetry group O_h of order 48, which has four versions of D_3d as subgroups.
The symmetry group contains inversion if and only if n is odd.
The rotation group is D_n of order 2n, except in the cases of:
n = 2: the regular tetrahedron, which has the larger rotation group T of order 12, which has three versions of D_2 as subgroups;
n = 3: the regular octahedron, which has the larger rotation group O of order 24, which has four versions of D_3 as subgroups.
Note: The right n-antiprisms have congruent regular n-gon bases and congruent isosceles triangle side faces, thus have the same (dihedral) symmetry group as the uniform n-antiprism, for n ≥ 4.
Star antiprism
Uniform star antiprisms are named by their star polygon bases, {p/q}, and exist in prograde and in retrograde (crossed) solutions. Crossed forms have intersecting vertex figures, and are denoted by "inverted" fractions: p/(p – q) instead of p/q; example: 5/3 instead of 5/2.
A right star antiprism has two congruent coaxial regular convex or star polygon base faces, and 2n isosceles triangle side faces.
Any star antiprism with regular convex or star polygon bases can be made a right star antiprism (by translating and/or twisting one of its bases, if necessary).
In the retrograde forms but not in the prograde forms, the triangles joining the convex or star bases intersect the axis of rotational symmetry. Thus:
Retrograde star antiprisms with regular convex polygon bases cannot have all equal edge lengths, so cannot be uniform. "Exception": a retrograde star antiprism with equilateral triangle bases (vertex configuration: 3.3/2.3.3) can be uniform; but then, it has the appearance of an equilateral triangle: it is a degenerate star polyhedron.
Similarly, some retrograde star antiprisms with regular star polygon bases cannot have all equal edge lengths, so cannot be uniform. Example: a retrograde star antiprism with regular star 7/5-gon bases (vertex configuration: 3.3.3.7/5) cannot be uniform.
Also, star antiprism compounds with regular star p/q-gon bases can be constructed if p and q have common factors. Example: a star 10/4-antiprism is the compound of two star 5/2-antiprisms.
See also
Apeirogonal antiprism
Grand antiprism – a four-dimensional polytope
One World Trade Center, a building consisting primarily of an elongated square antiprism
Skew polygon
References
Chapter 2: Archimedean polyhedra, prisms and antiprisms
Nonconvex Prisms and Antiprisms
Paper models of prisms and antiprisms
Uniform polyhedra
Prismatoid polyhedra
Topological graph theory
Graph drawing
Coxeter groups
Elementary geometry
Polyhedra
Polytopes
Triangulation (geometry)
Knot invariants
|
https://en.wikipedia.org/wiki/Abzyme
|
An abzyme (from antibody and enzyme), also called catmab (from catalytic monoclonal antibody), and most often called catalytic antibody or sometimes catab, is a monoclonal antibody with catalytic activity. Abzymes are usually raised in lab animals immunized against synthetic haptens, but some natural abzymes can be found in normal humans (anti-vasoactive intestinal peptide autoantibodies) and in patients with autoimmune diseases such as systemic lupus erythematosus, where they can bind to and hydrolyze DNA. To date abzymes display only weak, modest catalytic activity and have not proved to be of any practical use. They are, however, subjects of considerable academic interest. Studying them has yielded important insights into reaction mechanisms, enzyme structure and function, catalysis, and the immune system itself.
Enzymes function by lowering the activation energy of the transition state of a chemical reaction, thereby enabling the formation of an otherwise less-favorable molecular intermediate between the reactant(s) and the product(s). If an antibody is developed to bind to a molecule that is structurally and electronically similar to the transition state of a given chemical reaction, the developed antibody will bind to, and stabilize, the transition state, just like a natural enzyme, lowering the activation energy of the reaction, and thus catalyzing the reaction. By raising an antibody to bind to a stable transition-state analog, a new and unique type of enzyme is produced.
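The energetic argument can be made concrete with the Arrhenius equation from standard chemical kinetics; the numbers below are a generic textbook illustration, not measurements from the abzyme literature. Lowering the activation energy by ΔE_a multiplies the rate constant accordingly:

\[
k = A e^{-E_a/RT}
\quad\Rightarrow\quad
\frac{k_{\text{cat}}}{k_{\text{uncat}}} = e^{\Delta E_a/RT};
\qquad
\Delta E_a \approx 11.4\ \text{kJ/mol at } T = 298\ \text{K gives } e^{4.6} \approx 100.
\]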
So far, all catalytic antibodies produced have displayed only modest, weak catalytic activity. The reasons for low catalytic activity for these molecules have been widely discussed. Possibilities indicate that factors beyond the binding site may play an important role, in particular through protein dynamics. Some abzymes have been engineered to use metal ions and other cofactors to improve their catalytic activity.
History
The possibility of catalyzing a reaction by means of an antibody which binds the transition state was first suggested by William P. Jencks in 1969. In 1994 Peter G. Schultz and Richard A. Lerner received the prestigious Wolf Prize in Chemistry for developing catalytic antibodies for many reactions and popularizing their study into a significant sub-field of enzymology.
Abzymes in healthy human breast milk
There is a broad range of abzymes with DNase, RNase, and protease activity in the breast milk of healthy human mothers.
Potential HIV treatment
In a June 2008 issue of the journal Autoimmunity Reviews, researchers S. Planque, Sudhir Paul, and Yasuhiro Nishiyama of the University of Texas Medical School at Houston announced that they had engineered an abzyme that degrades the superantigenic region of the gp120 CD4 binding site. This is the one part of the HIV virus outer coating that does not change, because it is the attachment point to T lymphocytes, the key cell in cell-mediated immunity. Once infected by HIV, patients produce antibodies to the more changeable parts of the viral coat. The antibodies are ineffective because of the virus's ability to change its coat rapidly. Because the protein gp120 is necessary for HIV to attach, it does not change across different strains and is a point of vulnerability across the entire range of the HIV variant population.
The abzyme does more than bind to the site: it catalytically destroys the site, rendering the virus inert, and then can attack other HIV viruses. A single abzyme molecule can destroy thousands of HIV viruses.
References
Monoclonal antibodies
Immune system
Enzymes
|
https://en.wikipedia.org/wiki/Ampicillin
|
Ampicillin is an antibiotic belonging to the aminopenicillin class of the penicillin family. The drug is used to prevent and treat a number of bacterial infections, such as respiratory tract infections, urinary tract infections, meningitis, salmonellosis, and endocarditis. It may also be used to prevent group B streptococcal infection in newborns. It is used by mouth, by injection into a muscle, or intravenously.
Common side effects include rash, nausea, and diarrhea. It should not be used in people who are allergic to penicillin. Serious side effects may include Clostridium difficile colitis or anaphylaxis. While usable in those with kidney problems, the dose may need to be decreased. Its use during pregnancy and breastfeeding appears to be generally safe.
Ampicillin was discovered in 1958 and came into commercial use in 1961. It is on the World Health Organization's List of Essential Medicines. The World Health Organization classifies ampicillin as critically important for human medicine. It is available as a generic medication.
Medical uses
Diseases
Bacterial meningitis; an aminoglycoside can be added to increase efficacy against gram-negative meningitis bacteria
Endocarditis by enterococcal strains (off-label use); often given with an aminoglycoside
Gastrointestinal infections caused by contaminated water or food (for example, by Salmonella)
Genito-urinary tract infections
Healthcare-associated infections that are related to infections from using urinary catheters and that are unresponsive to other medications
Otitis media (middle ear infection)
Prophylaxis (i.e. to prevent infection) in those who previously had rheumatic heart disease or are undergoing dental procedures, vaginal hysterectomies, or C-sections. It is also used in pregnant women who are carriers of group B streptococci to prevent early-onset neonatal infections.
Respiratory infections, including bronchitis, pharyngitis
Sinusitis
Sepsis
Whooping cough, to prevent and treat secondary infections
Ampicillin was formerly also used to treat gonorrhea, but there are now too many strains resistant to penicillins.
Bacteria
Ampicillin is used to treat infections by many gram-positive and gram-negative bacteria. It was the first "broad spectrum" penicillin with activity against gram-positive bacteria, including Streptococcus pneumoniae, Streptococcus pyogenes, some isolates of Staphylococcus aureus (but not penicillin-resistant or methicillin-resistant strains), Trueperella, and some Enterococcus. It is one of the few antibiotics that works against multidrug resistant Enterococcus faecalis and E. faecium. Activity against gram-negative bacteria includes Neisseria meningitidis, some Haemophilus influenzae, and some of the Enterobacteriaceae (though most Enterobacteriaceae and Pseudomonas are resistant). Its spectrum of activity is enhanced by co-administration of sulbactam, a drug that inhibits beta lactamase, an enzyme produced by bacteria to inactivate ampicillin and related antibiotics. It is sometimes used in combination with other antibiotics that have different mechanisms of action, like vancomycin, linezolid, daptomycin, and tigecycline.
Available forms
Ampicillin can be administered by mouth, an intramuscular injection (shot) or by intravenous infusion. The oral form, available as capsules or oral suspensions, is not given as an initial treatment for severe infections, but rather as a follow-up to an IM or IV injection. For IV and IM injections, ampicillin is kept as a powder that must be reconstituted.
IV injections must be given slowly, as rapid IV injections can lead to convulsive seizures.
Specific populations
Ampicillin is one of the most used drugs in pregnancy, and has been found to be generally harmless both by the Food and Drug Administration in the U.S. (which classified it as category B) and the Therapeutic Goods Administration in Australia (which classified it as category A). It is the drug of choice for treating Listeria monocytogenes in pregnant women, either alone or combined with an aminoglycoside. Pregnancy increases the clearance of ampicillin by up to 50%, and a higher dose is thus needed to reach therapeutic levels.
Ampicillin crosses the placenta and remains in the amniotic fluid at 50–100% of the concentration in maternal plasma; this can lead to high concentrations of ampicillin in the newborn.
While lactating mothers secrete some ampicillin into their breast milk, the amount is minimal.
In newborns, ampicillin has a longer half-life and lower plasma protein binding. The clearance by the kidneys is lower, as kidney function has not fully developed.
Contraindications
Ampicillin is contraindicated in those with a hypersensitivity to penicillins, as they can cause fatal anaphylactic reactions. Hypersensitivity reactions can include frequent skin rashes and hives, exfoliative dermatitis, erythema multiforme, and a temporary decrease in both red and white blood cells.
Ampicillin is not recommended in people with concurrent mononucleosis, as over 40% of patients develop a skin rash.
Side effects
Ampicillin is comparatively less toxic than other antibiotics, and side effects are more likely in those who are sensitive to penicillins and those with a history of asthma or allergies. In very rare cases, it causes severe side effects such as angioedema, anaphylaxis, and C. difficile infection (that can range from mild diarrhea to serious pseudomembranous colitis). Some develop black "furry" tongue. Serious adverse effects also include seizures and serum sickness. The most common side effects, experienced by about 10% of users, are diarrhea and rash. Less common side effects can be nausea, vomiting, itching, and blood dyscrasias. The gastrointestinal effects, such as hairy tongue, nausea, vomiting, diarrhea, and colitis, are more common with the oral form of penicillin. Other conditions may develop up to several weeks after treatment.
Overdose
Ampicillin overdose can cause behavioral changes, confusion, blackouts, and convulsions, as well as neuromuscular hypersensitivity, electrolyte imbalance, and kidney failure.
Interactions
Ampicillin reacts with probenecid and methotrexate to decrease renal excretion. Large doses of ampicillin can increase the risk of bleeding with concurrent use of warfarin and other oral anticoagulants, possibly by inhibiting platelet aggregation. Ampicillin has been said to make oral contraceptives less effective, but this has been disputed. It can be made less effective by other antibiotics, such as chloramphenicol, erythromycin, cephalosporins, and tetracyclines. For example, tetracyclines inhibit protein synthesis in bacteria, reducing the target against which ampicillin acts. If given at the same time as aminoglycosides, it can bind to them and inactivate them. When administered separately, aminoglycosides and ampicillin can potentiate each other instead.
Ampicillin causes skin rashes more often when given with allopurinol.
Both the live cholera vaccine and live typhoid vaccine can be made ineffective if given with ampicillin. Ampicillin is normally used to treat cholera and typhoid fever, lowering the immunological response that the body has to mount.
Pharmacology
Mechanism of action
Ampicillin is in the penicillin group of beta-lactam antibiotics and is part of the aminopenicillin family. It is roughly equivalent to amoxicillin in terms of activity. Ampicillin is able to penetrate gram-positive and some gram-negative bacteria. It differs from penicillin G, or benzylpenicillin, only by the presence of an amino group. This amino group, present on both ampicillin and amoxicillin, helps these antibiotics pass through the pores of the outer membrane of gram-negative bacteria, such as E. coli, Proteus mirabilis, Salmonella enterica, and Shigella.
Ampicillin acts as an irreversible inhibitor of the enzyme transpeptidase, which is needed by bacteria to make the cell wall. It inhibits the third and final stage of bacterial cell wall synthesis in binary fission, which ultimately leads to cell lysis; therefore, ampicillin is usually bacteriolytic.
Pharmacokinetics
Ampicillin is well-absorbed from the GI tract (though food reduces its absorption), and reaches peak concentrations in one to two hours. The bioavailability is around 62% for parenteral routes. Unlike other penicillins, which usually bind 60–90% to plasma proteins, ampicillin binds to only 15–20%.
Ampicillin is distributed through most tissues, though it is concentrated in the liver and kidneys. It can also be found in the cerebrospinal fluid when the meninges become inflamed (such as, for example, meningitis). Some ampicillin is metabolized by hydrolyzing the beta-lactam ring to penicilloic acid, though most of it is excreted unchanged. In the kidneys, it is filtered out mostly by tubular secretion; some also undergoes glomerular filtration, and the rest is excreted in the feces and bile.
Hetacillin and pivampicillin are ampicillin esters that have been developed to increase bioavailability.
History
Ampicillin has been used extensively to treat bacterial infections since 1961. Until the introduction of ampicillin by the British company Beecham, penicillin therapies had only been effective against gram-positive organisms such as staphylococci and streptococci. Ampicillin (originally branded as "Penbritin") also demonstrated activity against gram-negative organisms such as H. influenzae, coliforms, and Proteus spp.
Cost
Ampicillin is relatively inexpensive. In the United States, it is available as a generic medication.
Veterinary use
In veterinary medicine, ampicillin is used in cats, dogs, and farm animals to treat:
Anal gland infections
Cutaneous infections, such as abscesses, cellulitis, and pustular dermatitis
E. coli and Salmonella infections in cattle, sheep, and goats (oral form). Ampicillin use for this purpose has declined as bacterial resistance has increased.
Mastitis in sows
Mixed aerobic–anaerobic infections, such as from cat bites
Multidrug-resistant Enterococcus faecalis and E. faecium
Prophylactic use in poultry against Salmonella and sepsis from E. coli or Staphylococcus aureus
Respiratory tract infections, including tonsilitis, bovine respiratory disease, shipping fever, bronchopneumonia, and calf and bovine pneumonia
Urinary tract infections in dogs
Horses are generally not treated with oral ampicillin, as they have low bioavailability of beta-lactams.
The half-life in animals is around the same as that in humans (just over an hour). Oral absorption is less than 50% in cats and dogs, and less than 4% in horses.
See also
Amoxycillin (p-hydroxy metabolite of ampicillin)
Azlocillin and pirbenicillin (urea and amide made from ampicillin)
Pivampicillin (special pro-drug of ampicillin)
References
External links
Enantiopure drugs
Penicillins
Phenyl compounds
World Health Organization essential medicines
Wikipedia medicine articles ready to translate
|
https://en.wikipedia.org/wiki/Antigen
|
In immunology, an antigen (Ag) is a molecule, moiety, foreign particulate matter, or an allergen, such as pollen, that can bind to a specific antibody or T-cell receptor. The presence of antigens in the body may trigger an immune response.
Antigens can be proteins, peptides (amino acid chains), polysaccharides (chains of simple sugars), lipids, or nucleic acids. Antigens exist on normal cells, cancer cells, parasites, viruses, fungi, and bacteria.
Antigens are recognized by antigen receptors, including antibodies and T-cell receptors. Diverse antigen receptors are made by cells of the immune system so that each cell has a specificity for a single antigen. Upon exposure to an antigen, only the lymphocytes that recognize that antigen are activated and expanded, a process known as clonal selection. In most cases, antibodies are antigen-specific, meaning that an antibody can only react to and bind one specific antigen; in some instances, however, antibodies may cross-react to bind more than one antigen. The reaction between an antigen and an antibody is called the antigen-antibody reaction.
Antigen can originate either from within the body ("self-protein" or "self antigens") or from the external environment ("non-self"). The immune system identifies and attacks "non-self" external antigens. Antibodies usually do not react with self-antigens due to negative selection of T cells in the thymus and B cells in the bone marrow. The diseases in which antibodies react with self antigens and damage the body's own cells are called autoimmune diseases.
Vaccines are examples of antigens in an immunogenic form, which are intentionally administered to a recipient to induce the memory function of the adaptive immune system towards antigens of the pathogen invading that recipient. The vaccine for seasonal influenza is a common example.
Etymology
Paul Ehrlich coined the term antibody (Antikörper) in his side-chain theory at the end of the 19th century. In 1899, Ladislas Deutsch (László Detre) named the hypothetical substances halfway between bacterial constituents and antibodies "antigenic or immunogenic substances". He originally believed those substances to be precursors of antibodies, just as a zymogen is a precursor of an enzyme. But, by 1903, he understood that an antigen induces the production of immune bodies (antibodies) and wrote that the word antigen is a contraction of antisomatogen. The Oxford English Dictionary indicates that the logical construction should be "anti(body)-gen".
The term originally referred to a substance that acts as an antibody generator.
Terminology
Epitope – the distinct surface features of an antigen, its antigenic determinant. Antigenic molecules, normally "large" biological polymers, usually present surface features that can act as points of interaction for specific antibodies. Any such feature constitutes an epitope. Most antigens have the potential to be bound by multiple antibodies, each of which is specific to one of the antigen's epitopes. Using the "lock and key" metaphor, the antigen can be seen as a string of keys (epitopes), each of which matches a different lock (antibody). Different antibody idiotypes each have distinctly formed complementarity-determining regions.
Allergen – A substance capable of causing an allergic reaction. The (detrimental) reaction may result after exposure via ingestion, inhalation, injection, or contact with skin.
Superantigen – A class of antigens that cause non-specific activation of T-cells, resulting in polyclonal T-cell activation and massive cytokine release.
Tolerogen – A substance that invokes a specific immune non-responsiveness due to its molecular form. If its molecular form is changed, a tolerogen can become an immunogen.
Immunoglobulin-binding protein – Proteins such as protein A, protein G, and protein L that are capable of binding to antibodies at positions outside of the antigen-binding site. While antigens are the "target" of antibodies, immunoglobulin-binding proteins "attack" antibodies.
T-dependent antigen – Antigens that require the assistance of T cells to induce the formation of specific antibodies.
T-independent antigen – Antigens that stimulate B cells directly.
Immunodominant antigens – Antigens that dominate (over all others from a pathogen) in their ability to produce an immune response. T cell responses typically are directed against a relatively few immunodominant epitopes, although in some cases (e.g., infection with the malaria pathogen Plasmodium spp.) it is dispersed over a relatively large number of parasite antigens.
Antigen-presenting cells present antigens in the form of peptides on histocompatibility molecules. The T cells selectively recognize the antigens; depending on the antigen and the type of the histocompatibility molecule, different types of T cells will be activated. For T-cell receptor (TCR) recognition, the peptide must be processed into small fragments inside the cell and presented by a major histocompatibility complex (MHC). The antigen cannot elicit the immune response without the help of an immunologic adjuvant. Similarly, the adjuvant component of vaccines plays an essential role in the activation of the innate immune system.
An immunogen is an antigen substance (or adduct) that is able to trigger a humoral or cell-mediated immune response. It first initiates an innate immune response, which then causes the activation of the adaptive immune response. An antigen binds the highly variable immunoreceptor products (B-cell receptor or T-cell receptor) once these have been generated. Immunogens are those antigens, termed immunogenic, capable of inducing an immune response.
At the molecular level, an antigen can be characterized by its ability to bind to an antibody's paratopes. Different antibodies have the potential to discriminate among specific epitopes present on the antigen surface. A hapten is a small molecule that can only induce an immune response when attached to a larger carrier molecule, such as a protein. Antigens can be proteins, polysaccharides, lipids, nucleic acids or other biomolecules. This includes parts (coats, capsules, cell walls, flagella, fimbriae, and toxins) of bacteria, viruses, and other microorganisms. Non-microbial non-self antigens can include pollen, egg white, and proteins from transplanted tissues and organs or on the surface of transfused blood cells.
Sources
Antigens can be classified according to their source.
Exogenous antigens
Exogenous antigens are antigens that have entered the body from the outside, for example, by inhalation, ingestion or injection. The immune system's response to exogenous antigens is often subclinical. By endocytosis or phagocytosis, exogenous antigens are taken into the antigen-presenting cells (APCs) and processed into fragments. APCs then present the fragments to T helper cells (CD4+) by the use of class II histocompatibility molecules on their surface. Some T cells are specific for the peptide:MHC complex. They become activated and start to secrete cytokines, substances that activate cytotoxic T lymphocytes (CTL), antibody-secreting B cells, macrophages and other particles.
Some antigens start out as exogenous and later become endogenous (for example, intracellular viruses). Intracellular antigens can be returned to circulation upon the destruction of the infected cell.
Endogenous antigens
Endogenous antigens are generated within normal cells as a result of normal cell metabolism, or because of viral or intracellular bacterial infection. The fragments are then presented on the cell surface in the complex with MHC class I molecules. If activated cytotoxic CD8+ T cells recognize them, the T cells secrete various toxins that cause the lysis or apoptosis of the infected cell. In order to keep the cytotoxic cells from killing cells just for presenting self-proteins, the cytotoxic cells (self-reactive T cells) are deleted as a result of tolerance (negative selection). Endogenous antigens include xenogenic (heterologous), autologous and idiotypic or allogenic (homologous) antigens. Sometimes antigens are part of the host itself in an autoimmune disease.
Autoantigens
An autoantigen is usually a self-protein or protein complex (and sometimes DNA or RNA) that is recognized by the immune system of patients with a specific autoimmune disease. Under normal conditions, these self-proteins should not be the target of the immune system, but in autoimmune diseases, their associated T cells are not deleted and instead attack.
Neoantigens
Neoantigens are those that are entirely absent from the normal human genome. As compared with nonmutated self-proteins, neoantigens are of relevance to tumor control, as the quality of the T cell pool that is available for these antigens is not affected by central T cell tolerance. Technology to systematically analyze T cell reactivity against neoantigens became available only recently. Neoantigens can be directly detected and quantified.
Viral antigens
For virus-associated tumors, such as cervical cancer and a subset of head and neck cancers, epitopes derived from viral open reading frames contribute to the pool of neoantigens.
Tumor antigens
Tumor antigens are those antigens that are presented by MHC class I or MHC class II molecules on the surface of tumor cells. Antigens found only on such cells are called tumor-specific antigens (TSAs) and generally result from a tumor-specific mutation. More common are antigens that are presented by tumor cells and normal cells, called tumor-associated antigens (TAAs). Cytotoxic T lymphocytes that recognize these antigens may be able to destroy tumor cells.
Tumor antigens can appear on the surface of the tumor in the form of, for example, a mutated receptor, in which case they are recognized by B cells.
For human tumors without a viral etiology, novel peptides (neo-epitopes) are created by tumor-specific DNA alterations.
Process
A large fraction of human tumor mutations are effectively patient-specific. Therefore, neoantigens may also be based on individual tumor genomes. Deep-sequencing technologies can identify mutations within the protein-coding part of the genome (the exome) and predict potential neoantigens. In mouse models, for all novel protein sequences, potential MHC-binding peptides were predicted. The resulting set of potential neoantigens was used to assess T cell reactivity. Exome-based analyses were exploited in a clinical setting, to assess reactivity in patients treated by either tumor-infiltrating lymphocyte (TIL) cell therapy or checkpoint blockade. Neoantigen identification was successful for multiple experimental model systems and human malignancies.
The false-negative rate of cancer exome sequencing is low—i.e.: the majority of neoantigens occur within exonic sequence with sufficient coverage. However, the vast majority of mutations within expressed genes do not produce neoantigens that are recognized by autologous T cells.
As of 2015 mass spectrometry resolution is insufficient to exclude many false positives from the pool of peptides that may be presented by MHC molecules. Instead, algorithms are used to identify the most likely candidates. These algorithms consider factors such as the likelihood of proteasomal processing, transport into the endoplasmic reticulum, affinity for the relevant MHC class I alleles and gene expression or protein translation levels.
The majority of human neoantigens identified in unbiased screens display a high predicted MHC binding affinity. Minor histocompatibility antigens, a conceptually similar antigen class are also correctly identified by MHC binding algorithms. Another potential filter examines whether the mutation is expected to improve MHC binding. The nature of the central TCR-exposed residues of MHC-bound peptides is associated with peptide immunogenicity.
Nativity
A native antigen is an antigen that is not yet processed by an APC to smaller parts. T cells cannot bind native antigens, but require that they be processed by APCs, whereas B cells can be activated by native ones.
Antigenic specificity
Antigenic specificity is the ability of the host cells to recognize an antigen specifically as a unique molecular entity and distinguish it from another with exquisite precision. Antigen specificity is due primarily to the side-chain conformations of the antigen. It is measurable and need not be linear or of a rate-limited step or equation. Both T cells and B cells are cellular components of adaptive immunity.
See also
References
Immune system
Biomolecules
|
https://en.wikipedia.org/wiki/Antlia
|
Antlia (from Ancient Greek ἀντλία) is a constellation in the Southern Celestial Hemisphere. Its name means "pump" in Latin and Greek; it represents an air pump. Originally Antlia Pneumatica, the constellation was established by Nicolas-Louis de Lacaille in the 18th century. Its non-specific (single-word) name, already in limited use, was preferred by John Herschel and subsequently officially accepted by the astronomical community. Lying north of stars that form some of the sails of the ship Argo Navis (the constellation Vela), Antlia is completely visible from latitudes south of 49 degrees north.
Antlia is a faint constellation; its brightest star is Alpha Antliae, an orange giant that is a suspected variable star, ranging between apparent magnitudes 4.22 and 4.29. S Antliae is an eclipsing binary star system, changing in brightness as one star passes in front of the other. Sharing a common envelope, the stars are so close they will one day merge to form a single star. Two star systems with known exoplanets, HD 93083 and WASP-66, lie within Antlia, as do NGC 2997, a spiral galaxy, and the Antlia Dwarf Galaxy.
History
The French astronomer Nicolas-Louis de Lacaille first described the constellation in French as la Machine Pneumatique (the Pneumatic Machine) in 1751–52, commemorating the air pump invented by the French physicist Denis Papin. De Lacaille had observed and catalogued almost 10,000 southern stars during a two-year stay at the Cape of Good Hope, devising fourteen new constellations in uncharted regions of the Southern Celestial Hemisphere not visible from Europe. He named all but one in honour of instruments that symbolised the Age of Enlightenment. Lacaille depicted Antlia as a single-cylinder vacuum pump used in Papin's initial experiments, while German astronomer Johann Bode chose the more advanced double-cylinder version. Lacaille Latinised the name to Antlia pneumatica on his 1763 chart. English astronomer John Herschel proposed shrinking the name to one word in 1844, noting that Lacaille himself had abbreviated his constellations thus on occasion. This was universally adopted. The International Astronomical Union adopted it as one of the 88 modern constellations in 1922.
Although visible to the Ancient Greeks, Antlia's stars were too faint to have been commonly recognised as a figurative object, or part of one, in ancient asterisms. The stars that now comprise Antlia are in a zone of the sky associated with the asterism/old constellation Argo Navis, the ship, the Argo, of the Argonauts, in its latter centuries. This, due to its immense size, was split into hull, poop deck and sails by Lacaille in 1763. Ridpath reports that due to their faintness, the stars of Antlia did not make up part of the classical depiction of Argo Navis.
In non-Western astronomy
Chinese astronomers were able to view what is modern Antlia from their latitudes, and incorporated its stars into two different constellations. Several stars in the southern part of Antlia were a portion of "Dong'ou", which represented an area in southern China. Furthermore, Epsilon, Eta, and Theta Antliae were incorporated into the celestial temple, which also contained stars from modern Pyxis.
Characteristics
Covering 238.9 square degrees and hence 0.579% of the sky, Antlia ranks 62nd of the 88 modern constellations by area. Its position in the Southern Celestial Hemisphere means that the whole constellation is visible to observers south of 49°N. Hydra the sea snake runs along the length of its northern border, while Pyxis the compass, Vela the sails, and Centaurus the centaur line it to the west, south and east respectively. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union, is "Ant". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon with an east side, a south side, and ten other sides facing the other two cardinal compass points. In the equatorial coordinate system, the right ascension coordinates of these borders lie between 09h 26.5m and 11h 05.5m, while the declination coordinates are between −24.54° and −40.42°.
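The quoted percentage is a direct ratio against the total area of the celestial sphere, which is 129600/π ≈ 41,253 square degrees:

\[
\frac{238.9}{129600/\pi} \approx \frac{238.9}{41{,}253} \approx 0.00579 \approx 0.579\%.
\]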
Features
Stars
Lacaille gave nine stars Bayer designations, labelling them Alpha through to Theta, combining two stars next to each other as Zeta. Gould later added a tenth, Iota Antliae. Beta and Gamma Antliae (now HR 4339 and HD 90156) ended up in the neighbouring constellation Hydra once the constellation boundaries were delineated in 1930. Within the constellation's borders, there are 42 stars brighter than or equal to apparent magnitude 6.5.
The constellation's two brightest stars—Alpha and Epsilon Antliae—shine with a reddish tinge. Alpha is an orange giant of spectral type K4III that is a suspected variable star, ranging between apparent magnitudes 4.22 and 4.29. It is located 320 ± 10 light-years away from Earth. Estimated to be shining with around 480 to 555 times the luminosity of the Sun, it is most likely an ageing star that is brightening and on its way to becoming a Mira variable star, having converted all its core fuel into carbon. Located 590 ± 30 light-years from Earth, Epsilon Antliae is an evolved orange giant star of spectral type K3 IIIa, that has swollen to have a diameter about 69 times that of the Sun, and a luminosity of around 1279 Suns. It is slightly variable. At the other end of Antlia, Iota Antliae is likewise an orange giant of spectral type K1 III. It is 202 ± 2 light-years distant.
Located near Alpha is Delta Antliae, a binary star, 450 ± 10 light-years distant from Earth. The primary is a blue-white main sequence star of spectral type B9.5V and magnitude 5.6, and the secondary is a yellow-white main sequence star of spectral type F9Ve and magnitude 9.6. Zeta Antliae is a wide optical double star. The brighter star—Zeta1 Antliae—is 410 ± 40 light-years distant and has a magnitude of 5.74, though it is a true binary star system composed of two white main sequence stars of magnitudes 6.20 and 7.01 that are separated by 8.042 arcseconds. The fainter star—Zeta2 Antliae—is 386 ± 5 light-years distant and of magnitude 5.9. Eta Antliae is another double composed of a yellow white star of spectral type F1V and magnitude 5.31, with a companion of magnitude 11.3. Theta Antliae is likewise double, most likely composed of an A-type main sequence star and a yellow giant. S Antliae is an eclipsing binary star system that varies in apparent magnitude from 6.27 to 6.83 over a period of 15.6 hours. The system is classed as a W Ursae Majoris variable—the primary is hotter than the secondary and the drop in magnitude is caused by the latter passing in front of the former. Calculating the properties of the component stars from the orbital period indicates that the primary star has a mass 1.94 times and a diameter 2.026 times that of the Sun, and the secondary has a mass 0.76 times and a diameter 1.322 times that of the Sun. The two stars have similar luminosity and spectral type as they have a common envelope and share stellar material. The system is thought to be around 5–6 billion years old. The two stars will eventually merge to form a single fast-spinning star.
T Antliae is a yellow-white supergiant of spectral type F6Iab and Classical Cepheid variable ranging between magnitude 8.88 and 9.82 over 5.9 days. U Antliae is a red C-type carbon star and is an irregular variable that ranges between magnitudes 5.27 and 6.04. At 910 ± 50 light-years distant, it is around 5819 times as luminous as the Sun. BF Antliae is a Delta Scuti variable that varies by 0.01 of a magnitude. HR 4049, also known as AG Antliae, is an unusual hot variable ageing star of spectral type B9.5Ib-II. It is undergoing intense loss of mass and is a unique variable that does not belong to any class of known variable star, ranging between magnitudes 5.29 and 5.83 with a period of 429 days. It is around 6000 light-years away from Earth. UX Antliae is an R Coronae Borealis variable with a baseline apparent magnitude of around 11.85, with irregular dimmings down to below magnitude 18.0. A luminous and remote star, it is a supergiant with a spectrum resembling that of a yellow-white F-type star but it has almost no hydrogen.
HD 93083 is an orange dwarf star of spectral type K3V that is smaller and cooler than the Sun. It has a planet that was discovered by the radial velocity method with the HARPS spectrograph in 2005. About as massive as Saturn, the planet orbits its star with a period of 143 days at a mean distance of 0.477 AU. WASP-66 is a sunlike star of spectral type F4V. A planet with 2.3 times the mass of Jupiter orbits it every 4 days, discovered by the transit method in 2012. DEN 1048-3956 is a brown dwarf of spectral type M8 located around 13 light-years distant from Earth. At magnitude 17 it is much too faint to be seen with the unaided eye. It has a surface temperature of about 2500 K. Two powerful flares lasting 4–5 minutes each were detected in 2002. 2MASS 0939-2448 is a system of two cool and faint brown dwarfs, probably with effective temperatures of about 500 and 700 K and masses of about 25 and 40 times that of Jupiter, though it is also possible that both objects have temperatures of 600 K and 30 Jupiter masses.
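The two detection methods named above have compact quantitative cores. For the transit method, the fractional dip in starlight is roughly the square of the planet-to-star radius ratio; a sketch with generic Sun and Jupiter radii (illustrative values, not measured parameters of WASP-66):

R_SUN_KM = 695_700      # nominal solar radius
R_JUPITER_KM = 69_911   # nominal Jovian radius

def transit_depth(r_planet_km, r_star_km):
    """Fractional flux drop when the planet crosses the stellar disc."""
    return (r_planet_km / r_star_km) ** 2

# A Jupiter-sized planet in front of a Sun-sized star dims it by ~1%:
print(f"{transit_depth(R_JUPITER_KM, R_SUN_KM):.2%}")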
Deep-sky objects
Antlia contains many faint galaxies, the brightest of which is NGC 2997 at magnitude 10.6. It is a loosely wound face-on spiral galaxy of type Sc. Though nondescript in most amateur telescopes, it presents bright clusters of young stars and many dark dust lanes in photographs. Discovered in 1997, the Antlia Dwarf is a 14.8m dwarf spheroidal galaxy that belongs to the Local Group of galaxies. In 2018 the discovery was announced of a very low surface brightness galaxy near Epsilon Antliae, Antlia 2, which is a satellite galaxy of the Milky Way.
The Antlia Cluster, also known as Abell S0636, is a cluster of galaxies located in the Hydra–Centaurus Supercluster. It is the third nearest to the Local Group after the Virgo Cluster and the Fornax Cluster. Located in the southeastern corner of the constellation, it boasts the giant elliptical galaxies NGC 3268 and NGC 3258 as the main members of a southern and northern subgroup respectively, and contains around 234 galaxies in total.
Antlia is home to the huge Antlia Supernova Remnant, one of the largest supernova remnants in the sky.
Notes
References
Citations
Sources
External links
The Deep Photographic Guide to the Constellations: Antlia
The clickable Antlia
Southern constellations
Constellations listed by Lacaille
|
https://en.wikipedia.org/wiki/Apus
|
Apus is a small constellation in the southern sky. It represents a bird-of-paradise, and its name means "without feet" in Greek because the bird-of-paradise was once wrongly believed to lack feet. First depicted on a celestial globe by Petrus Plancius in 1598, it was charted on a star atlas by Johann Bayer in his 1603 Uranometria. The French explorer and astronomer Nicolas Louis de Lacaille charted and gave the brighter stars their Bayer designations in 1756.
The five brightest stars are all reddish in hue. Shading the others at apparent magnitude 3.8 is Alpha Apodis, an orange giant that has around 48 times the diameter and 928 times the luminosity of the Sun. Marginally fainter is Gamma Apodis, another ageing giant star. Delta Apodis is a double star, the two components of which are 103 arcseconds apart and visible with the naked eye. Two star systems have been found to have planets.
History
Apus was one of twelve constellations published by Petrus Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman who had sailed on the first Dutch trading expedition, known as the Eerste Schipvaart, to the East Indies. It first appeared on a 35-cm (14 in) diameter celestial globe published in 1598 in Amsterdam by Plancius with Jodocus Hondius. De Houtman included it in his southern star catalogue in 1603 under the Dutch name De Paradijs Voghel, "The Bird of Paradise", and Plancius called the constellation Paradysvogel Apis Indica; the first word is Dutch for "bird of paradise". Apis (Latin for "bee") is assumed to have been a typographical error for avis ("bird").
After its introduction on Plancius's globe, the constellation's first known appearance in a celestial atlas was in German cartographer Johann Bayer's Uranometria of 1603. Bayer called it Apis Indica while fellow astronomers Johannes Kepler and his son-in-law Jakob Bartsch called it Apus or Avis Indica. The name Apus is derived from the Greek apous, meaning "without feet". This referred to the Western misconception that the bird-of-paradise had no feet, which arose because the only specimens available in the West had their feet and wings removed. Such specimens began to arrive in Europe in 1522, when the survivors of Ferdinand Magellan's expedition brought them home. The constellation later lost some of its tail when Nicolas-Louis de Lacaille used those stars to establish Octans in the 1750s.
Characteristics
Covering 206.3 square degrees and hence 0.5002% of the sky, Apus ranks 67th of the 88 modern constellations by area. Its position in the Southern Celestial Hemisphere means that the whole constellation is visible to observers south of 7°N. It is bordered by Ara, Triangulum Australe and Circinus to the north, Musca and Chamaeleon to the west, Octans to the south, and Pavo to the east. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Aps". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of six segments (illustrated in infobox). In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −67.48° and −83.12°.
Features
Stars
Lacaille gave twelve stars Bayer designations, labelling them Alpha through to Kappa, including two stars next to each other as Delta and another two stars near each other as Kappa. Within the constellation's borders, there are 39 stars brighter than or equal to apparent magnitude 6.5. Beta, Gamma and Delta Apodis form a narrow triangle, with Alpha Apodis lying to the east. The five brightest stars are all red-tinged, which is unusual among constellations.
Alpha Apodis is an orange giant of spectral type K3III located 430 ± 20 light-years away from Earth, with an apparent magnitude of 3.8. It spent much of its life as a blue-white (B-type) main sequence star before expanding, cooling and brightening as it used up its core hydrogen. It has swollen to 48 times the Sun's diameter, and shines with a luminosity approximately 928 times that of the Sun, with a surface temperature of 4312 K. Beta Apodis is an orange giant 149 ± 2 light-years away, with a magnitude of 4.2. It is around 1.84 times as massive as the Sun, with a surface temperature of 4677 K. Gamma Apodis is a yellow giant of spectral type G8III located 150 ± 4 light-years away, with a magnitude of 3.87. It is approximately 63 times as luminous as the Sun, with a surface temperature of 5279 K. Delta Apodis is a double star, the two components of which are 103 arcseconds apart and visible through binoculars. Delta1 is a red giant star of spectral type M4III located 630 ± 30 light-years away. It is a semiregular variable that varies from magnitude +4.66 to +4.87, with pulsations of multiple periods of 68.0, 94.9 and 101.7 days. Delta2 is an orange giant star of spectral type K3III, located 550 ± 10 light-years away, with a magnitude of 5.3. The separate components can be resolved with the naked eye.
The fifth-brightest star is Zeta Apodis at magnitude 4.8, a star that has swollen and cooled to become an orange giant of spectral type K1III, with a surface temperature of 4649 K and a luminosity 133 times that of the Sun. It is 300 ± 4 light-years distant. Near Zeta is Iota Apodis, a binary star system 1,040 ± 60 light-years distant, that is composed of two blue-white main sequence stars that orbit each other every 59.32 years. Of spectral types B9V and B9.5 V, they are both over three times as massive as the Sun.
Eta Apodis is a white main sequence star located 140.8 ± 0.9 light-years distant. Of apparent magnitude 4.89, it is 1.77 times as massive and 15.5 times as luminous as the Sun, and has 2.13 times its radius. Around 250 ± 200 million years old, this star is emitting an excess of 24 μm infrared radiation, which may be caused by a debris disk of dust orbiting at a distance of more than 31 astronomical units from it.
Theta Apodis is a cool red giant of spectral type M7 III located 350 ± 30 light-years distant. It shines with a luminosity approximately 3879 times that of the Sun and has a surface temperature of 3151 K. A semiregular variable, it varies by 0.56 magnitudes with a period of 119 days—or approximately 4 months. It is losing mass through its stellar wind. Dusty material ejected from this star is interacting with the surrounding interstellar medium, forming a bow shock as the star moves through the galaxy. NO Apodis is a red giant of spectral type M3III that varies between magnitudes 5.71 and 5.95. Located 780 ± 20 light-years distant, it shines with a luminosity estimated at 2059 times that of the Sun and has a surface temperature of 3568 K. S Apodis is a rare R Coronae Borealis variable, an extremely hydrogen-deficient supergiant thought to have arisen as the result of the merger of two white dwarfs; fewer than 100 have been discovered as of 2012. It has a baseline magnitude of 9.7. R Apodis is a star that was given a variable star designation, yet has turned out not to be variable. Of magnitude 5.3, it is another orange giant.
Two star systems have had exoplanets discovered by doppler spectroscopy, and the substellar companion of a third star system—the sunlike star HD 131664—has since been found to be a brown dwarf with a calculated mass of 23 times that of Jupiter (minimum of 18 and maximum of 49 Jovian masses). HD 134606 is a yellow sunlike star of spectral type G6IV that has begun expanding and cooling off the main sequence. Three planets orbit it with periods of 12, 59.5 and 459 days, successively larger as they are further away from the star. HD 137388 is another star—of spectral type K2IV—that is cooler than the Sun and has begun cooling off the main sequence. Around 47% as luminous and 88% as massive as the Sun, with 85% of its diameter, it is thought to be around 7.4 ± 3.9 billion years old. It has a planet that is 79 times as massive as the Earth and orbits its sun every 330 days at an average distance of 0.89 astronomical units (AU).
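The quoted orbit of HD 137388's planet is internally consistent: Kepler's third law, P = √(a³/M★) with a in astronomical units, M★ in solar masses and P in years, predicts a period close to the stated 330 days. A quick check (a sketch only; the published solution uses more precise inputs):

a_au = 0.89    # mean orbital distance quoted above
m_star = 0.88  # stellar mass in solar masses quoted above

p_years = (a_au ** 3 / m_star) ** 0.5
print(f"predicted period ~ {p_years * 365.25:.0f} days")  # ~327, vs 330 quoted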
Deep-sky objects
The Milky Way covers much of the constellation's area. Of the deep-sky objects in Apus, there are two prominent globular clusters—NGC 6101 and IC 4499—and a large faint nebula that covers several degrees east of Beta and Gamma Apodis. NGC 6101 is a globular cluster of apparent magnitude 9.2 located around 50,000 light-years distant from Earth and around 160 light-years across. Around 13 billion years old, it contains a high concentration of massive bright stars known as blue stragglers, thought to be the result of two stars merging. IC 4499 is a loose globular cluster in the medium-far galactic halo; its apparent magnitude is 10.6.
The galaxies in the constellation are faint. IC 4633 is a very faint spiral galaxy surrounded by a vast amount of Milky Way line-of-sight integrated flux nebulae—large faint clouds thought to be lit by large numbers of stars.
See also
IAU-recognized constellations
Notes
References
External links
The Deep Photographic Guide to the Constellations: Apus
The clickable Apus
Southern constellations
Constellations listed by Petrus Plancius
Dutch celestial cartography in the Age of Discovery
Astronomy in the Dutch Republic
1590s in the Dutch Republic
|
https://en.wikipedia.org/wiki/Aeon
|
The word aeon, also spelled eon (in American and Australian English), originally meant "life", "vital force" or "being", "generation" or "a period of time", though it tended to be translated as "age" in the sense of "ages", "forever", "timeless" or "for eternity". It is a Latin transliteration of the ancient Greek word aiōn (αἰών), from an archaic form meaning "century"; in Greek, it literally refers to the timespan of one hundred years. The cognate Latin word aevum ("age") is present in words such as longevity and mediaeval.
Although the term aeon may be used in reference to a period of a billion years (especially in geology, cosmology and astronomy), its more common usage is for any long, indefinite period. Aeon can also refer to the four aeons on the geologic time scale that make up the Earth's history, the Hadean, Archean, Proterozoic, and the current aeon, Phanerozoic.
Astronomy and cosmology
In astronomy, an aeon is defined as a billion years (10⁹ years, abbreviated AE).
Roger Penrose uses the word aeon to describe the period between successive and cyclic Big Bangs within the context of conformal cyclic cosmology.
Philosophy and mysticism
In Buddhism, an "aeon" or kalpa (Sanskrit: कल्प) is often said to be 1,334,240,000 years, the life cycle of the world.
Christianity's idea of "eternal life" comes from the word for life, zōē (ζωή), and a form of aiōn (αἰών), which could mean life in the next aeon, the Kingdom of God, or Heaven, just as much as immortality.
According to Christian universalism, the Greek New Testament scriptures use the word aiōn to mean a long period and the word aiōnios to mean "during a long period"; thus, there was a time before the aeons, and the aeonian period is finite. After each person's mortal life ends, they are judged worthy of aeonian life or aeonian punishment. That is, after the period of the aeons, all punishment will cease and death is overcome, and then God becomes the all in each one. This contrasts with the conventional Christian belief in eternal life and eternal punishment.
Occultists of the Thelema and Ordo Templi Orientis (English: "Order of the Temple of the East") traditions sometimes speak of a "magical Aeon" that may last for perhaps as little as 2,000 years.
Gnosticism
In many Gnostic systems, the various emanations of God, who is also known by such names as the One, the Monad, Aion teleos ("The Broadest Aeon"), Bythos ("depth or profundity"), Proarkhe ("before the beginning"), Arkhe ("the beginning"), Sophia ("wisdom"), and Christos ("the Anointed One"), are called Aeons. In the different systems these emanations are differently named, classified, and described, but the emanation theory itself is common to all forms of Gnosticism.
In the Basilidian Gnosis they are called sonships (υἱότητες huiotetes; singular: υἱότης huiotes); according to Marcus, they are numbers and sounds; in Valentinianism they form male/female pairs called syzygies (Greek συζυγίαι, from σύζυγοι syzygoi).
See also
Aion (deity)
Kalpa (aeon)
Saeculum – comparable Latin concept
References
New Testament Greek words and phrases
Time
Units of time
Gnosticism
|
https://en.wikipedia.org/wiki/Avionics
|
Avionics (a blend of aviation and electronics) are the electronic systems used on aircraft. Avionic systems include communications, navigation, the display and management of multiple systems, and the hundreds of systems that are fitted to aircraft to perform individual functions. These can be as simple as a searchlight for a police helicopter or as complicated as the tactical system for an airborne early warning platform.
History
The term "avionics" was coined in 1949 by Philip J. Klass, senior editor at Aviation Week & Space Technology magazine as a portmanteau of "aviation electronics".
Radio communication was first used in aircraft just prior to World War I. The first airborne radios were in zeppelins, but the military sparked development of light radio sets that could be carried by heavier-than-air craft, so that aerial reconnaissance biplanes could report their observations immediately in case they were shot down. The first experimental radio transmission from an airplane was conducted by the U.S. Navy in August 1910. The first aircraft radios transmitted by radiotelegraphy, so they required two-seat aircraft with a second crewman to tap on a telegraph key to spell out messages by Morse code. During World War I, AM voice two-way radio sets became possible in 1917 with the development of the triode vacuum tube; these sets were simple enough that the pilot of a single-seat aircraft could use one while flying.
Radar, the central technology used today in aircraft navigation and air traffic control, was developed by several nations, mainly in secret, as an air defense system in the 1930s during the runup to World War II. Many modern avionics have their origins in World War II wartime developments. For example, autopilot systems that are commonplace today began as specialized systems to help bomber planes fly steadily enough to hit precision targets from high altitudes. Britain's 1940 decision, in the famous Tizard Mission, to share its radar technology (particularly the magnetron vacuum tube) with its U.S. ally significantly shortened the war. Modern avionics is a substantial portion of military aircraft spending. Aircraft like the F-15E and the now retired F-14 have roughly 20 percent of their budget spent on avionics. Most modern helicopters now have budget splits of 60/40 in favour of avionics.
The civilian market has also seen a growth in the cost of avionics. Flight control systems (fly-by-wire) and new navigation needs brought on by tighter airspaces have pushed up development costs. The major change has been the recent boom in consumer flying. As more people begin to use planes as their primary method of transportation, more elaborate methods of controlling aircraft safely in this highly restrictive airspace have been invented.
Modern avionics
Avionics plays a heavy role in modernization initiatives like the Federal Aviation Administration's (FAA) Next Generation Air Transportation System project in the United States and the Single European Sky ATM Research (SESAR) initiative in Europe. The Joint Planning and Development Office put forth a roadmap for avionics in six areas:
Published Routes and Procedures – Improved navigation and routing
Negotiated Trajectories – Adding data communications to create preferred routes dynamically
Delegated Separation – Enhanced situational awareness in the air and on the ground
Low Visibility/Ceiling Approach/Departure – Allowing operations with weather constraints with less ground infrastructure
Surface Operations – To increase safety in approach and departure
ATM Efficiencies – Improving the air traffic management (ATM) process
Market
The Aircraft Electronics Association reports $1.73 billion in avionics sales for the first three quarters of 2017 in business and general aviation, a 4.1% yearly improvement: 73.5% came from North America, and forward fit represented 42.3% while 57.7% were retrofits, as the U.S. deadline of January 1, 2020, for mandatory ADS-B Out approached.
Aircraft avionics
The cockpit of an aircraft is a typical location for avionic equipment, including control, monitoring, communication, navigation, weather, and anti-collision systems. The majority of aircraft power their avionics using 14- or 28‑volt DC electrical systems; however, larger, more sophisticated aircraft (such as airliners or military combat aircraft) have AC systems operating at 115 volts and 400 Hz. There are several major vendors of flight avionics, including The Boeing Company, Panasonic Avionics Corporation, Honeywell (which now owns Bendix/King), Universal Avionics Systems Corporation, Rockwell Collins (now Collins Aerospace), Thales Group, GE Aviation Systems, Garmin, Raytheon, Parker Hannifin, UTC Aerospace Systems (now Collins Aerospace), Selex ES (now Leonardo S.p.A.), Shadin Avionics, and Avidyne Corporation.
International standards for avionics equipment are prepared by the Airlines Electronic Engineering Committee (AEEC) and published by ARINC.
Communications
Communications connect the flight deck to the ground and the flight deck to the passengers. On‑board communications are provided by public-address systems and aircraft intercoms.
The VHF aviation communication system works on the airband of 118.000 MHz to 136.975 MHz. Each channel is spaced from the adjacent ones by 8.33 kHz in Europe, 25 kHz elsewhere. VHF is also used for line of sight communication such as aircraft-to-aircraft and aircraft-to-ATC. Amplitude modulation (AM) is used, and the conversation is performed in simplex mode. Aircraft communication can also take place using HF (especially for trans-oceanic flights) or satellite communication.
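The channel plan described above is easy to enumerate: start at 118.000 MHz and step by the regional spacing up to 136.975 MHz. A small sketch (real 8.33 kHz operation uses a nominal 25/3 kHz raster and special channel-naming rules not modelled here):

def airband_channels(spacing_khz=25):
    """Centre frequencies (MHz) of VHF airband channels."""
    f_khz, top_khz = 118_000.0, 136_975.0
    channels = []
    while f_khz <= top_khz:
        channels.append(f_khz / 1000)
        f_khz += spacing_khz  # 25 kHz classic, 8.33 kHz in Europe
    return channels

print(len(airband_channels(25)))  # 760 channels at 25 kHz spacing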
Navigation
Air navigation is the determination of position and direction on or above the surface of the Earth. Avionics can use satellite navigation systems (such as GPS and WAAS), inertial navigation systems (INS), ground-based radio navigation systems (such as VOR or LORAN), or any combination thereof. Older ground-based navigation systems such as VOR or LORAN required a pilot or navigator to plot the intersection of signals on a paper map to determine an aircraft's location; modern systems such as GPS calculate the position automatically and display it to the flight crew on moving map displays.
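The paper-chart procedure described above amounts to intersecting two position lines. A toy flat-chart version, with x pointing east and y north (a sketch only; real navigation must also handle magnetic variation and the Earth's curvature):

import math

def vor_fix(s1, radial1_deg, s2, radial2_deg):
    """Intersect two VOR radials (bearings from the stations)."""
    def direction(bearing_deg):  # bearing measured clockwise from north
        b = math.radians(bearing_deg)
        return math.sin(b), math.cos(b)

    d1, d2 = direction(radial1_deg), direction(radial2_deg)
    dx, dy = s2[0] - s1[0], s2[1] - s1[1]
    det = d2[0] * d1[1] - d1[0] * d2[1]
    if abs(det) < 1e-9:
        raise ValueError("radials are parallel; no unique fix")
    t1 = (d2[0] * dy - d2[1] * dx) / det
    return s1[0] + t1 * d1[0], s1[1] + t1 * d1[1]

# On the 090 radial of one station and the 000 radial of another
# located 100 NM east and 50 NM south of it:
print(vor_fix((0, 0), 90, (100, -50), 0))  # (100.0, 0.0)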
Monitoring
The first hints of glass cockpits emerged in the 1970s when flight-worthy cathode ray tube (CRT) screens began to replace electromechanical displays, gauges and instruments. A "glass" cockpit refers to the use of computer monitors instead of gauges and other analog displays. Aircraft were getting progressively more displays, dials and information dashboards that eventually competed for space and pilot attention. In the 1970s, the average aircraft had more than 100 cockpit instruments and controls.
Glass cockpits started to come into being with the Gulfstream G‑IV private jet in 1985. One of the key challenges in glass cockpits is to balance how much control is automated and how much the pilot should do manually. Generally they try to automate flight operations while keeping the pilot constantly informed.
Aircraft flight-control system
Aircraft have means of automatically controlling flight. Autopilot was first invented by Lawrence Sperry during World War I to fly bomber planes steady enough to hit accurate targets from 25,000 feet. When it was first adopted by the U.S. military, a Honeywell engineer sat in the back seat with bolt cutters to disconnect the autopilot in case of emergency. Nowadays most commercial planes are equipped with aircraft flight control systems in order to reduce pilot error and workload at landing or takeoff.
The first simple commercial auto-pilots were used to control heading and altitude and had limited authority on things like thrust and flight control surfaces. In helicopters, auto-stabilization was used in a similar way. The first systems were electromechanical. The advent of fly-by-wire and electro-actuated flight surfaces (rather than the traditional hydraulic) has increased safety. As with displays and instruments, critical devices that were electro-mechanical had a finite life. With safety critical systems, the software is very strictly tested.
Fuel Systems
Fuel Quantity Indication System (FQIS) monitors the amount of fuel aboard. Using various sensors, such as capacitance tubes, temperature sensors, densitometers & level sensors, the FQIS computer calculates the mass of fuel remaining on board.
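A toy illustration of the computation an FQIS performs: turn a sensed fuel level into a volume via tank geometry, then multiply by a sensed density. All values and the box-shaped tank are invented for the sketch, not the interface of any real system:

def fuel_mass_kg(level_m, tank_floor_area_m2, density_kg_m3):
    """Toy FQIS calculation: level -> volume -> mass.

    Real systems fuse many capacitance probes, temperature sensors and
    densitometers per tank; this collapses all of that to one value each.
    """
    volume_m3 = level_m * tank_floor_area_m2  # assumes a box-shaped tank
    return volume_m3 * density_kg_m3

# Hypothetical readings: 1.2 m of fuel over a 4 m^2 tank floor, with
# Jet A-1 density ~800 kg/m^3 (temperature-dependent in practice):
print(f"{fuel_mass_kg(1.2, 4.0, 800.0):.0f} kg")  # 3840 kg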
Fuel Control and Monitoring System (FCMS) reports fuel remaining on board in a similar manner, but, by controlling pumps & valves, also manages fuel transfers around various tanks.
Refuelling control to upload to a certain total mass of fuel and distribute it automatically.
Transfers during flight to the tanks that feed the engines, e.g. from fuselage to wing tanks
Centre of gravity control transfers from the tail (trim) tanks forward to the wings as fuel is expended
Maintaining fuel in the wing tips (to help stop the wings bending due to lift in flight) & transferring to the main tanks after landing
Controlling fuel jettison during an emergency to reduce the aircraft weight.
Collision-avoidance systems
To supplement air traffic control, most large transport aircraft and many smaller ones use a traffic alert and collision avoidance system (TCAS), which can detect the location of nearby aircraft, and provide instructions for avoiding a midair collision. Smaller aircraft may use simpler traffic alerting systems such as TPAS, which are passive (they do not actively interrogate the transponders of other aircraft) and do not provide advisories for conflict resolution.
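At the core of TCAS-style conflict detection is the "tau" criterion: slant range divided by closure rate approximates the time to the closest point of approach, and an advisory is issued when it drops below a threshold. A simplified sketch (real TCAS logic adds altitude tests, range thresholds and advisory coordination between aircraft; the 25-second threshold here is illustrative):

def tau_seconds(range_m, closure_rate_ms):
    """Approximate time to closest approach: range over closing speed."""
    if closure_rate_ms <= 0:       # diverging traffic never triggers
        return float("inf")
    return range_m / closure_rate_ms

ALERT_THRESHOLD_S = 25  # illustrative advisory threshold

tau = tau_seconds(range_m=9_000, closure_rate_ms=450)  # fast head-on closure
print(tau, tau < ALERT_THRESHOLD_S)  # 20.0 True -> alert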
To help avoid controlled flight into terrain (CFIT), aircraft use systems such as ground-proximity warning systems (GPWS), which use radar altimeters as a key element. One of the major weaknesses of GPWS is the lack of "look-ahead" information, because it only provides altitude above terrain "look-down". In order to overcome this weakness, modern aircraft use a terrain awareness warning system (TAWS).
Flight recorders
Commercial aircraft cockpit data recorders, commonly known as "black boxes", store flight information and audio from the cockpit. They are often recovered from an aircraft after a crash to determine control settings and other parameters during the incident.
Weather systems
Weather systems such as weather radar (typically Arinc 708 on commercial aircraft) and lightning detectors are important for aircraft flying at night or in instrument meteorological conditions, where it is not possible for pilots to see the weather ahead. Heavy precipitation (as sensed by radar) or lightning activity are both indications of strong convective activity and severe turbulence, and weather systems allow pilots to deviate around these areas.
Lightning detectors like the Stormscope or Strikefinder have become inexpensive enough that they are practical for light aircraft. In addition to radar and lightning detection, observations and extended radar pictures (such as NEXRAD) are now available through satellite data connections, allowing pilots to see weather conditions far beyond the range of their own in-flight systems. Modern displays allow weather information to be integrated with moving maps, terrain, and traffic onto a single screen, greatly simplifying navigation.
Modern weather systems also include wind shear and turbulence detection and terrain and traffic warning systems. In‑plane weather avionics are especially popular in Africa, India, and other countries where air-travel is a growing market, but ground support is not as well developed.
Aircraft management systems
There has been a progression towards centralized control of the multiple complex systems fitted to aircraft, including engine monitoring and management. Health and usage monitoring systems (HUMS) are integrated with aircraft management computers to give maintainers early warnings of parts that will need replacement.
The integrated modular avionics concept proposes an integrated architecture with application software portable across an assembly of common hardware modules. It has been used in fourth generation jet fighters and the latest generation of airliners.
Mission or tactical avionics
Military aircraft have been designed either to deliver a weapon or to be the eyes and ears of other weapon systems. The vast array of sensors available to the military is used for whatever tactical means required. As with aircraft management, the bigger sensor platforms (like the E‑3D, JSTARS, ASTOR, Nimrod MRA4, Merlin HM Mk 1) have mission-management computers.
Police and EMS aircraft also carry sophisticated tactical sensors.
Military communications
While aircraft communications provide the backbone for safe flight, the tactical systems are designed to withstand the rigors of the battlefield. UHF, VHF Tactical (30–88 MHz) and SatCom systems combined with ECCM methods and cryptography secure the communications. Data links such as Link 11, 16, 22 and BOWMAN, JTRS and even TETRA provide the means of transmitting data (such as images, targeting information, etc.).
Radar
Airborne radar was one of the first tactical sensors. The range advantage conferred by altitude has meant a significant focus on airborne radar technologies. Radars include airborne early warning (AEW), anti-submarine warfare (ASW), and even weather radar (Arinc 708) and ground tracking/proximity radar.
The military uses radar in fast jets to help pilots fly at low levels. While the civil market has had weather radar for a while, there are strict rules about using it to navigate the aircraft.
Sonar
Dipping sonar fitted to a range of military helicopters allows the helicopter to protect shipping assets from submarines or surface threats. Maritime support aircraft can drop active and passive sonar devices (sonobuoys) and these are also used to determine the location of enemy submarines.
Electro-optics
Electro-optic systems include devices such as the head-up display (HUD), forward looking infrared (FLIR), infrared search and track and other passive infrared devices (Passive infrared sensor). These are all used to provide imagery and information to the flight crew. This imagery is used for everything from search and rescue to navigational aids and target acquisition.
ESM/DAS
Electronic support measures and defensive aids systems are used extensively to gather information about threats or possible threats. They can be used to launch devices (in some cases automatically) to counter direct threats against the aircraft. They are also used to determine the state of a threat and identify it.
Aircraft networks
The avionics systems in military, commercial and advanced models of civilian aircraft are interconnected using an avionics databus. Common avionics databus protocols, with their primary application, include:
Aircraft Data Network (ADN): Ethernet derivative for Commercial Aircraft
Avionics Full-Duplex Switched Ethernet (AFDX): Specific implementation of ARINC 664 (ADN) for Commercial Aircraft
ARINC 429: Generic Medium-Speed Data Sharing for Private and Commercial Aircraft (see the decoding sketch after this list)
ARINC 664: See ADN above
ARINC 629: Commercial Aircraft (Boeing 777)
ARINC 708: Weather Radar for Commercial Aircraft
ARINC 717: Flight Data Recorder for Commercial Aircraft
ARINC 825: CAN bus for commercial aircraft (for example Boeing 787 and Airbus A350)
Commercial Standard Digital Bus
IEEE 1394b: Military Aircraft
MIL-STD-1553: Military Aircraft
MIL-STD-1760: Military Aircraft
TTP – Time-Triggered Protocol: Boeing 787, Airbus A380, Fly-By-Wire Actuation Platforms from Parker Aerospace
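As an illustration of one protocol from the list above, an ARINC 429 word is commonly described as 32 bits split into a label (bits 1–8), source/destination identifier (9–10), data field (11–29), sign/status matrix (30–31) and an odd-parity bit (32). A decoding sketch under that description (the label's on-the-wire bit-reversal convention is deliberately ignored here):

def decode_arinc429(word):
    """Split a 32-bit ARINC 429 word into its conventional fields."""
    return {
        "label": word & 0xFF,            # bits 1-8
        "sdi": (word >> 8) & 0x3,        # bits 9-10
        "data": (word >> 10) & 0x7FFFF,  # bits 11-29
        "ssm": (word >> 29) & 0x3,       # bits 30-31
        "parity": (word >> 31) & 0x1,    # bit 32 (odd parity)
    }

def parity_ok(word):
    """Odd parity: the 32-bit word must contain an odd number of set bits."""
    return bin(word & 0xFFFFFFFF).count("1") % 2 == 1

# Example: label 0o205 in the low bits; its three set bits give odd parity.
print(oct(decode_arinc429(0o205)["label"]), parity_ok(0o205))  # 0o205 True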
See also
Astrionics, similar, for spacecraft
Acronyms and abbreviations in avionics
Avionics software
Emergency locator beacon
Emergency position-indicating radiobeacon station
Integrated modular avionics
Notes
Further reading
Avionics: Development and Implementation by Cary R. Spitzer (Hardcover – December 15, 2006)
Principles of Avionics, 4th Edition by Albert Helfrick, Len Buckwalter, and Avionics Communications Inc. (Paperback – July 1, 2007)
Avionics Training: Systems, Installation, and Troubleshooting by Len Buckwalter (Paperback – June 30, 2005)
Avionics Made Simple, by Mouhamed Abdulla, Ph.D.; Jaroslav V. Svoboda, Ph.D. and Luis Rodrigues, Ph.D. (Coursepack – Dec. 2005).
External links
Avionics in Commercial Aircraft
Aircraft Electronics Association (AEA)
Pilot's Guide to Avionics
The Avionic Systems Standardisation Committee
Space Shuttle Avionics
Aviation Today Avionics magazine
RAES Avionics homepage
Aircraft instruments
Spacecraft components
Electronic engineering
|
https://en.wikipedia.org/wiki/Aeronautics
|
Aeronautics is the science or art involved with the study, design, and manufacturing of air flight–capable machines, and the techniques of operating aircraft and rockets within the atmosphere. The British Royal Aeronautical Society identifies the aspects of "aeronautical Art, Science and Engineering" and "The profession of Aeronautics (which expression includes Astronautics)."
While the term originally referred solely to operating the aircraft, it has since been expanded to include technology, business, and other aspects related to aircraft.
The term "aviation" is sometimes used interchangeably with aeronautics, although "aeronautics" includes lighter-than-air craft such as airships, and includes ballistic vehicles while "aviation" technically does not.
A significant part of aeronautical science is a branch of dynamics called aerodynamics, which deals with the motion of air and the way that it interacts with objects in motion, such as an aircraft.
History
Early ideas
Attempts to fly without any real aeronautical understanding have been made from the earliest times, typically by constructing wings and jumping from a tower with crippling or lethal results.
Wiser investigators sought to gain some rational understanding through the study of bird flight. Medieval Islamic Golden Age scientists such as Abbas ibn Firnas also made such studies. The founders of modern aeronautics, Leonardo da Vinci in the Renaissance and Cayley in 1799, both began their investigations with studies of bird flight.
Man-carrying kites are believed to have been used extensively in ancient China. In 1282 the Italian explorer Marco Polo described the Chinese techniques then current. The Chinese also constructed small hot air balloons, or lanterns, and rotary-wing toys.
An early European to provide any scientific discussion of flight was Roger Bacon, who described principles of operation for the lighter-than-air balloon and the flapping-wing ornithopter, which he envisaged would be constructed in the future. The lifting medium for his balloon would be an "aether" whose composition he did not know.
In the late fifteenth century, Leonardo da Vinci followed up his study of birds with designs for some of the earliest flying machines, including the flapping-wing ornithopter and the rotating-wing helicopter. Although his designs were rational, they were not based on particularly good science. Many of his designs, such as a four-person screw-type helicopter, have severe flaws. He did at least understand that "An object offers as much resistance to the air as the air does to the object." (Newton would not publish the Third law of motion until 1687.) His analysis led to the realisation that manpower alone was not sufficient for sustained flight, and his later designs included a mechanical power source such as a spring. Da Vinci's work was lost after his death and did not reappear until it had been overtaken by the work of George Cayley.
Balloon flight
The modern era of lighter-than-air flight began early in the 17th century with Galileo's experiments in which he showed that air has weight. Around 1650 Cyrano de Bergerac wrote some fantasy novels in which he described the principle of ascent using a substance (dew) he supposed to be lighter than air, and descending by releasing a controlled amount of the substance. Francesco Lana de Terzi measured the pressure of air at sea level and in 1670 proposed the first scientifically credible lifting medium in the form of hollow metal spheres from which all the air had been pumped out. These would be lighter than the displaced air and able to lift an airship. His proposed methods of controlling height are still in use today: by carrying ballast which may be dropped overboard to gain height, and by venting the lifting containers to lose height. In practice de Terzi's spheres would have collapsed under air pressure, and further developments had to wait for more practicable lifting gases.
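De Terzi's reasoning, and ballooning generally, follows Archimedes' principle: the available lift is the mass of displaced air minus the mass of the lifting medium, before subtracting structure weight. A sketch with round sea-level densities (illustrative figures, not de Terzi's own):

def gross_lift_kg(volume_m3, rho_air=1.225, rho_gas=0.0):
    """Displaced-air mass minus lifting-gas mass, in kg.

    rho_gas = 0 models de Terzi's evacuated spheres; hydrogen is
    ~0.09 kg/m^3 and hot air roughly ~0.95 kg/m^3 at sea level.
    """
    return volume_m3 * (rho_air - rho_gas)

print(gross_lift_kg(100))                # ~122.5 kg for 100 m^3 of vacuum
print(gross_lift_kg(100, rho_gas=0.09))  # ~113.5 kg for hydrogen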
From the mid-18th century the Montgolfier brothers in France began experimenting with balloons. Their balloons were made of paper, and early experiments using steam as the lifting gas were short-lived due to its effect on the paper as it condensed. Mistaking smoke for a kind of steam, they began filling their balloons with hot smoky air which they called "electric smoke" and, despite not fully understanding the principles at work, made some successful launches and in 1783 were invited to give a demonstration to the French Académie des Sciences.
Meanwhile, the discovery of hydrogen led Joseph Black to propose its use as a lifting gas, though practical demonstration awaited a gas-tight balloon material. On hearing of the Montgolfier Brothers' invitation, the French Academy member Jacques Charles offered a similar demonstration of a hydrogen balloon. Charles and two craftsmen, the Robert brothers, developed a gas-tight material of rubberised silk for the envelope. The hydrogen gas was to be generated by chemical reaction during the filling process.
The Montgolfier designs had several shortcomings, not least the need for dry weather and a tendency for sparks from the fire to set light to the paper balloon. The manned design had a gallery around the base of the balloon rather than the hanging basket of the first, unmanned design, which brought the paper closer to the fire. On their free flight, De Rozier and d'Arlandes took buckets of water and sponges to douse these fires as they arose. On the other hand, the manned design of Charles was essentially modern. As a result of these exploits, the hot air balloon became known as the Montgolfière type and the gas balloon the Charlière.
Charles and the Robert brothers' next balloon, La Caroline, was a Charlière that followed Jean Baptiste Meusnier's proposals for an elongated dirigible balloon, and was notable for having an outer envelope with the gas contained in a second, inner ballonet. On 19 September 1784, it completed the first flight of over 100 km, between Paris and Beuvry, despite the man-powered propulsive devices proving useless.
In an attempt the next year to provide both endurance and controllability, de Rozier developed a balloon having both hot air and hydrogen gas bags, a design which was soon named after him as the Rozière. The principle was to use the hydrogen section for constant lift and to navigate vertically by heating and allowing to cool the hot air section, in order to catch the most favourable wind at whatever altitude it was blowing. The balloon envelope was made of goldbeater's skin. The first flight ended in disaster and the approach has seldom been used since.
Cayley and the foundation of modern aeronautics
Sir George Cayley (1773–1857) is widely acknowledged as the founder of modern aeronautics. He was first called the "father of the aeroplane" in 1846 and Henson called him the "father of aerial navigation." He was the first true scientific aerial investigator to publish his work, which included for the first time the underlying principles and forces of flight.
In 1809 he began the publication of a landmark three-part treatise titled "On Aerial Navigation" (1809–1810). In it he wrote the first scientific statement of the problem, "The whole problem is confined within these limits, viz. to make a surface support a given weight by the application of power to the resistance of air." He identified the four vector forces that influence an aircraft: thrust, lift, drag and weight and distinguished stability and control in his designs.
He developed the modern conventional form of the fixed-wing aeroplane having a stabilising tail with both horizontal and vertical surfaces, flying gliders both unmanned and manned.
He introduced the use of the whirling arm test rig to investigate the aerodynamics of flight, using it to discover the benefits of the curved or cambered aerofoil over the flat wing he had used for his first glider. He also identified and described the importance of dihedral, diagonal bracing and drag reduction, and contributed to the understanding and design of ornithopters and parachutes.
Another significant invention was the tension-spoked wheel, which he devised in order to create a light, strong wheel for aircraft undercarriage.
The 19th century: Otto Lilienthal and the first human flights
During the 19th century Cayley's ideas were refined, proved and expanded on, culminating in the works of Otto Lilienthal.
Lilienthal was a German engineer and businessman who became known as the "flying man". He was the first person to make well-documented, repeated, successful flights with gliders, therefore making the idea of "heavier than air" a reality. Newspapers and magazines published photographs of Lilienthal gliding, favourably influencing public and scientific opinion about the possibility of flying machines becoming practical.
His work led to his developing the concept of the modern wing. His flight attempts in Berlin in the year 1891 are seen as the beginning of human flight, and the "Lilienthal Normalsegelapparat" is considered to be the first airplane in series production, making the Maschinenfabrik Otto Lilienthal in Berlin the first airplane production company in the world.
Otto Lilienthal is often referred to as either the "father of aviation" or "father of flight".
Other important investigators included Horatio Phillips.
Branches
Aeronautics may be divided into three main branches, Aviation, Aeronautical science and Aeronautical engineering.
Aviation
Aviation is the art or practice of aeronautics. Historically aviation meant only heavier-than-air flight, but nowadays it includes flying in balloons and airships.
Aeronautical engineering
Aeronautical engineering covers the design and construction of aircraft, including how they are powered, how they are used and how they are controlled for safe operation.
A major part of aeronautical engineering is aerodynamics, the science of passing through the air.
With the increasing activity in space flight, nowadays aeronautics and astronautics are often combined as aerospace engineering.
Aerodynamics
The science of aerodynamics deals with the motion of air and the way that it interacts with objects in motion, such as an aircraft.
The study of aerodynamics falls broadly into three areas (a small classifier sketch follows this list):
Incompressible flow occurs where the air simply moves to avoid objects, typically at subsonic speeds below that of sound (Mach 1).
Compressible flow occurs where shock waves appear at points where the air becomes compressed, typically at speeds above Mach 1.
Transonic flow occurs in the intermediate speed range around Mach 1, where the airflow over an object may be locally subsonic at one point and locally supersonic at another.
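The three areas map onto Mach-number ranges, though the transonic band has no sharp edges; a conventional rule of thumb places it at roughly Mach 0.8–1.2. A small classifier reflecting that convention (the band limits are a common textbook choice, not exact physics):

def flow_regime(mach):
    """Classify airflow by Mach number, per the three areas above."""
    if mach < 0.8:
        return "incompressible (subsonic)"
    if mach <= 1.2:
        return "transonic (mixed subsonic/supersonic)"
    return "compressible (supersonic)"

print(flow_regime(0.3), flow_regime(1.0), flow_regime(2.5), sep="; ")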
Rocketry
A rocket or rocket vehicle is a missile, spacecraft, aircraft or other vehicle which obtains thrust from a rocket engine. In all rockets, the exhaust is formed entirely from propellants carried within the rocket before use. Rocket engines work by action and reaction. Rocket engines push rockets forwards simply by throwing their exhaust backwards extremely fast.
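The action-and-reaction description has a compact quantitative form: thrust is the propellant mass flow rate times the exhaust velocity, and integrating that over a burn gives the Tsiolkovsky rocket equation, Δv = v_e · ln(m_0/m_1). A sketch with purely illustrative numbers, not tied to any particular vehicle:

import math

def thrust_n(mass_flow_kg_s, exhaust_velocity_ms):
    """Ideal momentum thrust: F = mdot * v_e (pressure terms ignored)."""
    return mass_flow_kg_s * exhaust_velocity_ms

def delta_v_ms(exhaust_velocity_ms, m_initial_kg, m_final_kg):
    """Tsiolkovsky rocket equation: dv = v_e * ln(m0 / m1)."""
    return exhaust_velocity_ms * math.log(m_initial_kg / m_final_kg)

# Illustrative: 250 kg/s expelled at 3,000 m/s, on a 100 t vehicle
# that burns down to 30 t:
print(thrust_n(250, 3000))                   # 750000 N
print(round(delta_v_ms(3000, 100e3, 30e3)))  # ~3612 m/s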
Rockets for military and recreational uses date back to at least 13th-century China. Significant scientific, interplanetary and industrial use did not occur until the 20th century, when rocketry was the enabling technology of the Space Age, including setting foot on the Moon.
Rockets are used for fireworks, weaponry, ejection seats, launch vehicles for artificial satellites, human spaceflight and exploration of other planets. While comparatively inefficient for low speed use, they are very lightweight and powerful, capable of generating large accelerations and of attaining extremely high speeds with reasonable efficiency.
Chemical rockets are the most common type of rocket and they typically create their exhaust by the combustion of rocket propellant. Chemical rockets store a large amount of energy in an easily released form, and can be very dangerous. However, careful design, testing, construction and use minimizes risks.
See also
References
Citations
Sources
External links
Aeronautics
Aviation Terminology
Jeppesen The AVIATION DICTIONARY for pilots and aviation technicians
DTIC ADA032206: Chinese-English Aviation and Space Dictionary
Courses
Research
Vehicle operation
Articles containing video clips
|
https://en.wikipedia.org/wiki/Ansible
|
An ansible is a category of fictional devices or technology capable of near-instantaneous or faster-than-light communication. It can send and receive messages to and from a corresponding device over any distance or obstacle whatsoever with no delay, even between star systems. As a name for such a device, the word "ansible" first appeared in a 1966 novel by Ursula K. Le Guin. Since that time, the term has been broadly used in the works of numerous science fiction authors, across a variety of settings and continuities. A related term is ultrawave.
Coinage by Ursula Le Guin
Ursula K. Le Guin coined the word "ansible" in her 1966 novel Rocannon's World. The word was a contraction of "answerable", as the device would allow its users to receive answers to their messages in a reasonable amount of time, even over interstellar distances.
The ansible was the basis for creating a specific kind of interstellar civilization: one where communications between far-flung stars are instantaneous, but humans can only travel at relativistic speeds. Under these conditions, a full-fledged galactic empire is not possible, but there is a looser interstellar organization, in which several of Le Guin's protagonists are involved.
Although Le Guin invented the name "ansible" for this type of device, fleshed out with specific details in her fictional works, the broader concept of instantaneous or faster-than-light communication had previously existed in science fiction. For example, similar communication functions were included in a device called an interocitor in the 1952 novel This Island Earth by Raymond F. Jones, and the 1955 film based on that novel, and in the "Dirac Communicator", which first appeared in James Blish's short story "Beep" (1954), which was later expanded into the novel The Quincunx of Time (1973). Robert A. Heinlein in Time for the Stars (1958) employed instantaneous telepathic communication between identical twin pairs over interstellar distances, and like Le Guin, provided a technical explanation based on a non-Einsteinian principle of simultaneity.
In Le Guin's works
In her subsequent works, Le Guin continued to develop the concept of the ansible:
In The Left Hand of Darkness (1969), Le Guin writes that the ansible "doesn't involve radio waves, or any form of energy. The principle it works on, the constant of simultaneity, is analogous in some ways to gravity ... One point has to be fixed, on a planet of certain mass, but the other end is portable."
In The Word for World Is Forest (1972), Le Guin explains that in order for communication to work with any pair of ansibles, at least one "must be on a large-mass body, the other can be anywhere in the cosmos".
In The Dispossessed (1974), Le Guin tells of the development of the theory leading up to the ansible.
Any ansible may be used to communicate through any other, by setting its coordinates to those of the receiving ansible. They have a limited bandwidth, which only allows for at most a few hundred characters of text to be communicated in any transaction of a dialog session, and are attached to a keyboard and small display to perform text messaging.
Use by later authors
Since Le Guin's conception of the ansible, the name of the device has been borrowed by numerous authors. While Le Guin's ansible was said to communicate "instantaneously", the name has also been adopted for devices capable of communication at finite speeds that are faster than light.
Orson Scott Card's works
Orson Scott Card, in his 1977 novelette and 1985 novel Ender's Game and its sequels, used the term "ansible" as an unofficial name for the Philotic Parallax Instantaneous Communicator, a machine capable of communicating across infinite distances with no time delay. In Ender's Game, Colonel Graff states that "somebody dredged the name ansible out of an old book somewhere".
In the universe of the Ender's Game series, the ansible's functions involved a fictional subatomic particle, the philote. The two quarks inside a pi meson can be separated by an arbitrary distance, while remaining connected by "philotic rays". This concept is similar to quantum teleportation due to entanglement; however, in reality, quark confinement prevents quarks from being separated by any observable distance.
Card's version of the ansible was also featured in the video game Advent Rising, for which Card helped write the story, and in the movie Ender's Game, which was based on the book.
Other writers
Numerous other writers have included faster-than-light communication devices in their fictional works. Notable examples include:
Christopher Rowley, in his 1986 novel Starhammer, describes the Deep Link, an instantaneous interstellar communicator. Most commonly used for messaging, it is capable of voice and video conversations as well, although the latter only at great expense.
Vernor Vinge, in the 1988 short story "The Blabber"
Elizabeth Moon, in the 1995 novel Winning Colors
Jason Jones, in the 1995 computer game Marathon 2: Durandal
L.A. Graf, in the 1996 Star Trek: Deep Space Nine novel Time's Enemy
Philip Pullman, in the 2000 novel The Amber Spyglass, part of the His Dark Materials trilogy.
Neal Asher, in his Polity series of novels including Gridlinked (2001), in which the runcible, named in homage to the ansible, is an interstellar wormhole generator/teleporter
Dan Simmons, in the 2003 novel Ilium
Liu Cixin, in the 2008 trilogy Remembrance of Earth's Past
Kim Stanley Robinson, in the 2012 novel 2312
Becky Chambers, in her Wayfarer novels, including the 2014 novel The Long Way to a Small, Angry Planet, and 2016 novel A Closed and Common Orbit.
Neon Yang, in the 2017 novella Waiting on a Bright Moon
Joe M. McDermott, in the 2017 novel The Fortress at the End of Time
Thomas Happ, in the 2021 console and PC video game Axiom Verge 2
Star Wars' New Jedi Order series featured enemies, the Yuuzhan Vong, who use organic communication devices known as villips, which can transmit over infinite distances thanks to telepathic connections formed while being harvested in groups.
L. J. Cohen, in the 2014 novel Derelict
Oliver Helm, in the 2023 novel Swimming with Dolphins
See also
Faster-than-light communication
Interstellar communication
No-cloning theorem
Quantum entanglement
Tachyon
Tachyonic antitelephone
References
Further reading
Faster-than-light communication
Fictional technology
|
https://en.wikipedia.org/wiki/Astrology
|
Astrology is a range of divinatory practices, recognized as pseudoscientific since the 18th century, that claim to discern information about human affairs and terrestrial events by studying the apparent positions of celestial objects. Different cultures have employed forms of astrology since at least the 2nd millennium BCE, these practices having originated in calendrical systems used to predict seasonal shifts and to interpret celestial cycles as signs of divine communications. Most, if not all, cultures have attached importance to what they observed in the sky, and some—such as the Hindus, Chinese, and the Maya—developed elaborate systems for predicting terrestrial events from celestial observations. Western astrology, one of the oldest astrological systems still in use, can trace its roots to 19th–17th century BCE Mesopotamia, from where it spread to Ancient Greece, Rome, the Islamic world, and eventually Central and Western Europe. Contemporary Western astrology is often associated with systems of horoscopes that purport to explain aspects of a person's personality and predict significant events in their lives based on the positions of celestial objects; the majority of professional astrologers rely on such systems.
Throughout most of its history, astrology was considered a scholarly tradition and was common in academic circles, often in close relation with astronomy, alchemy, meteorology, and medicine. It was present in political circles and is mentioned in various works of literature, from Dante Alighieri and Geoffrey Chaucer to William Shakespeare, Lope de Vega, and Calderón de la Barca. During the Enlightenment, however, astrology lost its status as an area of legitimate scholarly pursuit. Following the end of the 19th century and the wide-scale adoption of the scientific method, researchers have successfully challenged astrology on both theoretical and experimental grounds, and have shown it to have no scientific validity or explanatory power. Astrology thus lost its academic and theoretical standing in the western world, and common belief in it largely declined, until a continuing resurgence starting in the 1960s. In India, belief in astrology is long-standing, widespread and continuing.
Etymology
The word astrology comes from the early Latin word astrologia, which derives from the Greek ἀστρολογία—from ἄστρον astron ("star") and -λογία -logia ("study of"), hence "account of the stars". The word entered the English language via Latin and medieval French, and its use overlapped considerably with that of astronomy (derived from the Latin astronomia). By the 17th century, astronomy became established as the scientific term, with astrology referring to divinations and schemes for predicting human affairs.
History
Many cultures have attached importance to astronomical events, and the Indians, Chinese, and Maya developed elaborate systems for predicting terrestrial events from celestial observations. A form of astrology was practised in the Old Babylonian period of Mesopotamia. Vedāṅga Jyotiṣa is one of the earliest known Hindu texts on astronomy and astrology (Jyotisha); the text is dated between 1400 BCE and the final centuries BCE by various scholars, according to astronomical and linguistic evidence. Chinese astrology was elaborated in the Zhou dynasty (1046–256 BCE). Hellenistic astrology after 332 BCE mixed Babylonian astrology with Egyptian Decanic astrology in Alexandria, creating horoscopic astrology. Alexander the Great's conquest of Asia allowed astrology to spread to Ancient Greece and Rome. In Rome, astrology was associated with "Chaldean wisdom". After the conquest of Alexandria in the 7th century, astrology was taken up by Islamic scholars, and Hellenistic texts were translated into Arabic and Persian. In the 12th century, Arabic texts were imported to Europe and translated into Latin. Major astronomers including Tycho Brahe, Johannes Kepler and Galileo practised as court astrologers. Astrological references appear in literature in the works of poets such as Dante Alighieri and Geoffrey Chaucer, and of playwrights such as Christopher Marlowe and William Shakespeare.
Throughout most of its history, astrology was considered a scholarly tradition. It was accepted in political and academic contexts, and was connected with other studies, such as astronomy, alchemy, meteorology, and medicine. At the end of the 17th century, new scientific concepts in astronomy and physics (such as heliocentrism and Newtonian mechanics) called astrology into question. Astrology thus lost its academic and theoretical standing, and common belief in astrology has largely declined.
Ancient world
Astrology, in its broadest sense, is the search for meaning in the sky. Early evidence for humans making conscious attempts to measure, record, and predict seasonal changes by reference to astronomical cycles, appears as markings on bones and cave walls, which show that lunar cycles were being noted as early as 25,000 years ago. This was a first step towards recording the Moon's influence upon tides and rivers, and towards organising a communal calendar. Farmers addressed agricultural needs with increasing knowledge of the constellations that appear in the different seasons—and used the rising of particular star-groups to herald annual floods or seasonal activities. By the 3rd millennium BCE, civilisations had sophisticated awareness of celestial cycles, and may have oriented temples in alignment with heliacal risings of the stars.
Scattered evidence suggests that the oldest known astrological references are copies of texts made in the ancient world. The Venus tablet of Ammisaduqa is thought to have been compiled in Babylon around 1700 BCE. A scroll documenting an early use of electional astrology is doubtfully ascribed to the reign of the Sumerian ruler Gudea of Lagash (c. 2144 – 2124 BCE). It describes how the gods revealed to him in a dream the constellations that would be most favourable for the planned construction of a temple. However, there is controversy about whether these were genuinely recorded at the time or merely ascribed to ancient rulers by posterity. The oldest undisputed evidence of the use of astrology as an integrated system of knowledge is therefore attributed to the records of the first dynasty of Mesopotamia (1950–1651 BCE). This astrology had some parallels with Hellenistic Greek (western) astrology, including the zodiac, a norming point near 9 degrees in Aries, the trine aspect, planetary exaltations, and the dodekatemoria (the twelve divisions of 30 degrees each). The Babylonians viewed celestial events as possible signs rather than as causes of physical events.
The system of Chinese astrology was elaborated during the Zhou dynasty (1046–256 BCE) and flourished during the Han dynasty (2nd century BCE to 2nd century CE), during which all the familiar elements of traditional Chinese culture – the Yin-Yang philosophy, theory of the five elements, Heaven and Earth, Confucian morality – were brought together to formalise the philosophical principles of Chinese medicine and divination, astrology, and alchemy.
The ancient Arabs who inhabited the Arabian Peninsula before the advent of Islam professed a widespread belief in fatalism (ḳadar), alongside a fearful consideration for the sky and the stars, which they held to be ultimately responsible for every phenomenon that occurs on Earth and for the destiny of humankind. Accordingly, they shaped their entire lives around their interpretations of astral configurations and phenomena.
Ancient objections
The Hellenistic schools of philosophical skepticism criticized the rationality of astrology. Criticism of astrology by academic skeptics such as Cicero, Carneades, and Favorinus; and Pyrrhonists such as Sextus Empiricus has been preserved.
Carneades argued that belief in fate denies free will and morality; that people born at different times can all die in the same accident or battle; and that contrary to uniform influences from the stars, tribes and cultures are all different.
Cicero stated the twins objection (that with close birth times, personal outcomes can be very different), later developed by Augustine. He argued that since the other planets are much more distant from the Earth than the Moon, they could have only a very tiny influence compared to the Moon's. He also argued that if astrology explains everything about a person's fate, then it wrongly ignores the visible effect of inherited ability and parenting, changes in health worked by medicine, and the effects of the weather on people.
Favorinus argued that it was absurd to imagine that stars and planets would affect human bodies in the same way as they affect the tides, and equally absurd that small motions in the heavens cause large changes in people's fates.
Sextus Empiricus argued that it was absurd to link human attributes with myths about the signs of the zodiac, and wrote an entire book, Against the Astrologers (Πρὸς ἀστρολόγους, Pros astrologous), compiling arguments against astrology. Against the Astrologers was the fifth section of a larger work arguing against philosophical and scientific inquiry in general, Against the Professors (Πρὸς μαθηματικούς, Pros mathematikous).
Plotinus, a neoplatonist, argued that since the fixed stars are much more distant than the planets, it is laughable to imagine the planets' effect on human affairs should depend on their position with respect to the zodiac. He also argued that the interpretation of the Moon's conjunction with a planet as good when the Moon is full, but bad when the Moon is waning, is clearly wrong: from the Moon's point of view, half of its surface is always in sunlight; and from the planet's point of view, waning should be better, as then the planet sees some light from the Moon, but when the Moon is full to us, it is dark, and therefore bad, on the side facing the planet in question.
Hellenistic Egypt
In 525 BCE, Egypt was conquered by the Persians. The 1st century BCE Egyptian Dendera Zodiac shares two signs – the Balance and the Scorpion – with Mesopotamian astrology.
With the occupation by Alexander the Great in 332 BCE, Egypt became Hellenistic. The city of Alexandria was founded by Alexander after the conquest, becoming the place where Babylonian astrology was mixed with Egyptian Decanic astrology to create Horoscopic astrology. This contained the Babylonian zodiac with its system of planetary exaltations, the triplicities of the signs and the importance of eclipses. It used the Egyptian concept of dividing the zodiac into thirty-six decans of ten degrees each, with an emphasis on the rising decan, and the Greek system of planetary Gods, sign rulership and four elements. 2nd century BCE texts predict positions of planets in zodiac signs at the time of the rising of certain decans, particularly Sothis. The astrologer and astronomer Ptolemy lived in Alexandria. Ptolemy's work the Tetrabiblos formed the basis of Western astrology, and, "...enjoyed almost the authority of a Bible among the astrological writers of a thousand years or more."
Greece and Rome
The conquest of Asia by Alexander the Great exposed the Greeks to ideas from Syria, Babylon, Persia and central Asia. Around 280 BCE, Berossus, a priest of Bel from Babylon, moved to the Greek island of Kos, teaching astrology and Babylonian culture. By the 1st century BCE, there were two varieties of astrology, one using horoscopes to describe the past, present and future; the other, theurgic, emphasising the soul's ascent to the stars. Greek influence played a crucial role in the transmission of astrological theory to Rome.
The first definite reference to astrology in Rome comes from the orator Cato, who in 160 BCE warned farm overseers against consulting with Chaldeans, who were described as Babylonian 'star-gazers'. Among both Greeks and Romans, Babylonia (also known as Chaldea) became so identified with astrology that 'Chaldean wisdom' became synonymous with divination using planets and stars. The 2nd-century Roman poet and satirist Juvenal complains about the pervasive influence of Chaldeans, saying, "Still more trusted are the Chaldaeans; every word uttered by the astrologer they will believe has come from Hammon's fountain."
One of the first astrologers to bring Hermetic astrology to Rome was Thrasyllus, astrologer to the emperor Tiberius, the first emperor to have had a court astrologer, though his predecessor Augustus had used astrology to help legitimise his Imperial rights.
Medieval world
Hindu
The main texts upon which classical Indian astrology is based are early medieval compilations, notably the Bṛhat Parāśara Horāśāstra, and the Sārāvalī by Kalyāṇavarman.
The Horāśāstra is a composite work of 71 chapters, of which the first part (chapters 1–51) dates to the 7th to early 8th centuries and the second part (chapters 52–71) to the later 8th century. The Sārāvalī likewise dates to around 800 CE. English translations of these texts were published by N.N. Krishna Rau and V.B. Choudhari in 1963 and 1961, respectively.
Islamic
Astrology was taken up by Islamic scholars following the capture of Alexandria by the Arabs in the 7th century and the founding of the Abbasid empire in the 8th. The second Abbasid caliph, Al Mansur (754–775), founded the city of Baghdad to act as a centre of learning, and included in its design a library and translation centre known as Bayt al-Hikma ('House of Wisdom'), which was developed further by his heirs and provided a major impetus for Arabic-Persian translations of Hellenistic astrological texts. The early translators included Mashallah, who helped to elect the time for the foundation of Baghdad, and Sahl ibn Bishr (also known as Zael), whose texts were directly influential upon later European astrologers such as Guido Bonatti in the 13th century and William Lilly in the 17th century. Arabic texts began to be imported into Europe and translated into Latin during the 12th century.
Europe
In the seventh century, Isidore of Seville argued in his Etymologiae that astronomy described the movements of the heavens, while astrology had two parts: one was scientific, describing the movements of the Sun, the Moon and the stars, while the other, making predictions, was theologically erroneous.
The first astrological book published in Europe was the Liber Planetis et Mundi Climatibus ("Book of the Planets and Regions of the World"), which appeared between 1010 and 1027 AD, and may have been authored by Gerbert of Aurillac. Ptolemy's second-century AD Tetrabiblos was translated into Latin by Plato of Tivoli in 1138. The Dominican theologian Thomas Aquinas followed Aristotle in proposing that the stars ruled the imperfect 'sublunary' body, while attempting to reconcile astrology with Christianity by stating that God ruled the soul. The thirteenth-century mathematician Campanus of Novara is said to have devised a system of astrological houses that divides the prime vertical into 'houses' of equal 30° arcs, though the system was used earlier in the East. The thirteenth-century astronomer Guido Bonatti wrote a textbook, the Liber Astronomicus, a copy of which King Henry VII of England owned at the end of the fifteenth century.
In Paradiso, the final part of the Divine Comedy, the Italian poet Dante Alighieri referred "in countless details" to the astrological planets, though he adapted traditional astrology to suit his Christian viewpoint, for example using astrological thinking in his prophecies of the reform of Christendom.
John Gower in the fourteenth century defined astrology as essentially limited to the making of predictions. The influence of the stars was in turn divided into natural astrology, with, for example, effects on tides and the growth of plants, and judicial astrology, with supposedly predictable effects on people. The fourteenth-century sceptic Nicole Oresme however included astronomy as a part of astrology in his Livre de divinacions. Oresme argued that current approaches to prediction of events such as plagues, wars, and weather were inappropriate, but that such prediction was a valid field of inquiry. However, he attacked the use of astrology to choose the timing of actions (so-called interrogation and election) as wholly false, and rejected the determination of human action by the stars on grounds of free will. The friar Laurens Pignon (c. 1368–1449) similarly rejected all forms of divination and determinism, including by the stars, in his 1411 Contre les Devineurs. This was in opposition to the tradition carried by the Arab astronomer Albumasar (787–886), whose Introductorium in Astronomiam and De Magnis Coniunctionibus argued that both individual actions and larger-scale history are determined by the stars.
In the late 15th century, Giovanni Pico della Mirandola forcefully attacked astrology in Disputationes contra Astrologos, arguing that the heavens neither caused, nor heralded earthly events. His contemporary, Pietro Pomponazzi, a "rationalistic and critical thinker", was much more sanguine about astrology and critical of Pico's attack.
Renaissance and Early Modern
Renaissance scholars commonly practised astrology. Gerolamo Cardano cast the horoscope of king Edward VI of England, while John Dee was the personal astrologer to queen Elizabeth I of England. Catherine de Medici paid Michael Nostradamus in 1566 to verify the prediction of the death of her husband, king Henry II of France, made by her astrologer Lucas Gauricus. Major astronomers who practised as court astrologers included Tycho Brahe in the royal court of Denmark, Johannes Kepler to the Habsburgs, Galileo Galilei to the Medici, and Giordano Bruno, who was burnt at the stake for heresy in Rome in 1600. The distinction between astrology and astronomy was not entirely clear. Advances in astronomy were often motivated by the desire to improve the accuracy of astrology. Kepler, for example, was driven by a belief in harmonies between Earthly and celestial affairs, yet he disparaged the activities of most astrologers as "evil-smelling dung".
Ephemerides with complex astrological calculations, and almanacs interpreting celestial events for use in medicine and for choosing times to plant crops, were popular in Elizabethan England. In 1597, the English mathematician and physician Thomas Hood made a set of paper instruments that used revolving overlays to help students work out relationships between fixed stars or constellations, the midheaven, and the twelve astrological houses. Hood's instruments also illustrated, for pedagogical purposes, the supposed relationships between the signs of the zodiac, the planets, and the parts of the human body adherents believed were governed by the planets and signs. While Hood's presentation was innovative, his astrological information was largely standard and was taken from Gerard Mercator's astrological disc made in 1551, or a source used by Mercator. Despite its popularity, Renaissance astrology was the subject of what the historian Gabor Almasi calls an "elite debate", exemplified by the polemical letters of the Swiss physician Thomas Erastus, who fought against astrology, calling it "vanity" and "superstition." Then, around the time of the new star of 1572 and the comet of 1577, there began what Almasi calls an "extended epistemological reform", which started the process of excluding religion, astrology and anthropocentrism from scientific debate. By 1679, the yearly publication La Connoissance des temps eschewed astrology as a legitimate topic.
Enlightenment period and onwards
During the Enlightenment, intellectual sympathy for astrology fell away, leaving only a popular following supported by cheap almanacs. One English almanac compiler, Richard Saunders, followed the spirit of the age by printing a derisive Discourse on the Invalidity of Astrology, while in France Pierre Bayle's Dictionnaire of 1697 stated that the subject was puerile. The Anglo-Irish satirist Jonathan Swift ridiculed the Whig political astrologer John Partridge.
In the second half of the seventeenth century, the Society of Astrologers (1647–1684), a trade, educational, and social organization, sought to unite London's often fractious astrologers in the task of revitalizing astrology. Following the template of the popular "Feasts of Mathematicians", they endeavored to defend their art in the face of growing religious criticism. The Society hosted banquets, exchanged instruments and manuscripts, proposed research projects, and funded the publication of sermons that depicted astrology as a legitimate biblical pursuit for Christians: divine, Hebraic, and scripturally supported by Bible passages about the Magi and the sons of Seth. According to the historian Michelle Pfeffer, "The society's public relations campaign ultimately failed." Modern historians have mostly neglected the Society of Astrologers in favor of the still extant Royal Society (1660), even though both organizations initially had some of the same members.
Astrology saw a popular revival starting in the 19th century, as part of a general revival of spiritualism and, later, New Age philosophy, and through the influence of mass media such as newspaper horoscopes. Early in the 20th century the psychiatrist Carl Jung developed some concepts concerning astrology, which led to the development of psychological astrology.
Principles and practice
Advocates have defined astrology as a symbolic language, an art form, a science, and a method of divination. Though most cultural astrology systems share common roots in ancient philosophies that influenced each other, many use methods that differ from those in the West. These include Hindu astrology (also known as "Indian astrology" and in modern times referred to as "Vedic astrology") and Chinese astrology, both of which have influenced the world's cultural history.
Western
Western astrology is a form of divination based on the construction of a horoscope for an exact moment, such as a person's birth. It uses the tropical zodiac, which is aligned to the equinoctial points.
Western astrology is founded on the movements and relative positions of celestial bodies such as the Sun, Moon and planets, which are analysed by their movement through signs of the zodiac (twelve spatial divisions of the ecliptic) and by their aspects (based on geometric angles) relative to one another. They are also considered by their placement in houses (twelve spatial divisions of the sky). Astrology's modern representation in western popular media is usually reduced to sun sign astrology, which considers only the zodiac sign of the Sun at an individual's date of birth, and represents only 1/12 of the total chart.
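Sun sign astrology's reduction of the chart to a single datum can be illustrated with a date-range lookup. The sketch below, in Python, uses one common set of approximate boundary dates; these are assumptions for illustration only, since the actual boundaries shift by about a day from year to year.

```python
from datetime import date

# Conventional (approximate) tropical sun-sign start dates, in calendar
# order; real boundaries shift by about a day from year to year.
SIGN_STARTS = [
    ((1, 20), "Aquarius"), ((2, 19), "Pisces"), ((3, 21), "Aries"),
    ((4, 20), "Taurus"), ((5, 21), "Gemini"), ((6, 21), "Cancer"),
    ((7, 23), "Leo"), ((8, 23), "Virgo"), ((9, 23), "Libra"),
    ((10, 23), "Scorpio"), ((11, 22), "Sagittarius"), ((12, 22), "Capricorn"),
]

def sun_sign(birthday: date) -> str:
    """Return the conventional sun sign for a birth date."""
    md = (birthday.month, birthday.day)
    sign = "Capricorn"  # dates before 20 January wrap from the previous year
    for start, name in SIGN_STARTS:
        if md >= start:
            sign = name
    return sign

print(sun_sign(date(1990, 8, 1)))  # Leo
```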
The horoscope visually expresses the set of relationships for the time and place of the chosen event. These relationships are between the seven 'planets', signifying tendencies such as war and love; the twelve signs of the zodiac; and the twelve houses. Each planet is in a particular sign and a particular house at the chosen time, when observed from the chosen place, creating two kinds of relationship. A third kind is the aspect of each planet to every other planet, where for example two planets 120° apart (in 'trine') are in a harmonious relationship, but two planets 90° apart ('square') are in a conflicted relationship. Together these relationships and their interpretations are said to form "...the language of the heavens speaking to learned men."
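A minimal sketch of the aspect geometry just described: it measures the angular separation between two planets' ecliptic longitudes and names the major aspect, if any, that falls within an assumed orb (tolerance). The 8° orb and the five aspects listed are conventional choices, not fixed rules.

```python
# Major aspects and their exact angles, in degrees.
ASPECTS = {0: "conjunction", 60: "sextile", 90: "square",
           120: "trine", 180: "opposition"}

def aspect(lon_a: float, lon_b: float, orb: float = 8.0) -> str | None:
    """Name the major aspect between two ecliptic longitudes, if any.

    lon_a, lon_b: positions in degrees (0-360) along the ecliptic.
    orb: assumed tolerance within which an aspect is counted.
    """
    separation = abs(lon_a - lon_b) % 360.0
    if separation > 180.0:            # take the shorter arc
        separation = 360.0 - separation
    for angle, name in ASPECTS.items():
        if abs(separation - angle) <= orb:
            return name
    return None

print(aspect(15.0, 135.0))  # 120 degrees apart -> 'trine'
print(aspect(10.0, 100.0))  # 90 degrees apart -> 'square'
```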
Along with tarot divination, astrology is one of the core studies of Western esotericism, and as such has influenced systems of magical belief not only among Western esotericists and Hermeticists, but also belief systems such as Wicca, which have borrowed from or been influenced by the Western esoteric tradition. Tanya Luhrmann has said that "all magicians know something about astrology," and refers to a table of correspondences in Starhawk's The Spiral Dance, organised by planet, as an example of the astrological lore studied by magicians.
Hindu
The earliest Vedic text on astronomy is the Vedanga Jyotisha; Vedic thought later came to include astrology as well.
Hindu natal astrology originated with Hellenistic astrology by the 3rd century BCE, though incorporating the Hindu lunar mansions. The names of the signs (e.g. Greek 'Krios' for Aries, Hindi 'Kriya'), the planets (e.g. Greek 'Helios' for Sun, astrological Hindi 'Heli'), and astrological terms (e.g. Greek 'apoklima' and 'sunaphe' for declination and planetary conjunction, Hindi 'apoklima' and 'sunapha' respectively) in Varaha Mihira's texts are considered conclusive evidence of a Greek origin for Hindu astrology. The Indian techniques may also have been augmented with some of the Babylonian techniques.
Chinese and East Asian
Chinese astrology has a close relation with Chinese philosophy (the theory of the three harmonies: heaven, earth and man) and uses concepts such as yin and yang, the Five Phases, the ten Celestial Stems, the twelve Earthly Branches, and shichen (時辰, a form of timekeeping used for religious purposes). The early use of Chinese astrology was mainly confined to political astrology: the observation of unusual phenomena, identification of portents, and the selection of auspicious days for events and decisions.
The constellations of the zodiac of western Asia and Europe were not used; instead the sky is divided into Three Enclosures (三垣 sān yuán) and Twenty-Eight Mansions (二十八宿 èrshíbā xiù) in twelve Ci (十二次). The Chinese zodiac of twelve animal signs is said to represent twelve different types of personality. It is based on cycles of years, lunar months, and two-hour periods of the day (the shichen). The zodiac traditionally begins with the sign of the Rat, and the cycle proceeds through 11 other animal signs: the Ox, Tiger, Rabbit, Dragon, Snake, Horse, Goat, Monkey, Rooster, Dog, and Pig. Complex systems of predicting fate and destiny based on one's birthday, birth season, and birth hours, such as ziping and Zi Wei Dou Shu (紫微斗數), are still used regularly in modern-day Chinese astrology. They do not rely on direct observations of the stars.
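Because the animal cycle repeats every twelve years, the sign for a given year reduces to modular arithmetic. The sketch below anchors the cycle on 1984, a Rat year; it deliberately ignores the fact that the traditional year turns over at the lunar New Year rather than on 1 January, so January and February dates may be assigned to the wrong sign.

```python
ANIMALS = ["Rat", "Ox", "Tiger", "Rabbit", "Dragon", "Snake",
           "Horse", "Goat", "Monkey", "Rooster", "Dog", "Pig"]

def zodiac_animal(year: int) -> str:
    """Animal sign for a Gregorian year, anchored on 1984 (a Rat year).

    Simplification: the traditional cycle turns over at the lunar New
    Year, not on 1 January, so early-year dates can be off by one.
    """
    return ANIMALS[(year - 1984) % 12]

print(zodiac_animal(2024))  # Dragon
```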
The Korean zodiac is identical to the Chinese one. The Vietnamese zodiac is almost identical to the Chinese, except that the second animal is the Water Buffalo instead of the Ox, and the fourth animal is the Cat instead of the Rabbit. The Japanese have since 1873 celebrated the beginning of the new year on 1 January, as per the Gregorian calendar. The Thai zodiac begins, not at Chinese New Year, but either on the first day of the fifth month in the Thai lunar calendar, or during the Songkran festival (now celebrated every 13–15 April), depending on the purpose of the use.
Theological viewpoints
Ancient
Augustine (354–430) believed that the determinism of astrology conflicted with the Christian doctrines of man's free will and responsibility, and with God not being the cause of evil, but he also grounded his opposition philosophically, citing the failure of astrology to explain twins who behave differently although conceived at the same moment and born at approximately the same time.
Medieval
Some of the practices of astrology were contested on theological grounds by medieval Muslim astronomers such as Al-Farabi (Alpharabius), Ibn al-Haytham (Alhazen) and Avicenna. They said that the methods of astrologers conflicted with orthodox religious views of Islamic scholars by suggesting that the Will of God can be known and predicted. For example, Avicenna's 'Refutation against astrology', Risāla fī ibṭāl aḥkām al-nojūm, argues against the practice of astrology while supporting the principle that planets may act as agents of divine causation. Avicenna considered that the movement of the planets influenced life on earth in a deterministic way, but argued against the possibility of determining the exact influence of the stars. Essentially, Avicenna did not deny the core dogma of astrology, but denied our ability to understand it to the extent that precise and fatalistic predictions could be made from it. Ibn Qayyim al-Jawziyya (1292–1350), in his Miftah Dar al-Sa'adah, also used physical arguments in astronomy to question the practice of judicial astrology. He recognised that the stars are much larger than the planets, and argued: "And if you astrologers answer that it is precisely because of this distance and smallness that their influences are negligible, then why is it that you claim a great influence for the smallest heavenly body, Mercury? Why is it that you have given an influence to [the head] and [the tail], which are two imaginary points [ascending and descending nodes]?"
Modern
Martin Luther denounced astrology in his Table Talk. He asked why twins like Esau and Jacob had two different natures yet were born at the same time. Luther also compared astrologers to those who say their dice will always land on a certain number: although the dice may land on that number a couple of times, the predictor stays silent about all the times they do not.
The Catechism of the Catholic Church maintains that divination, including predictive astrology, is incompatible with modern Catholic beliefs such as free will.
Scientific analysis and criticism
The scientific community rejects astrology as having no explanatory power for describing the universe, and considers it a pseudoscience. Scientific testing of astrology has been conducted, and no evidence has been found to support any of the premises or purported effects outlined in astrological traditions. There is no proposed mechanism of action by which the positions and motions of stars and planets could affect people and events on Earth that does not contradict basic and well understood aspects of biology and physics. Those who have faith in astrology have been characterised by scientists including Bart J. Bok as doing so "...in spite of the fact that there is no verified scientific basis for their beliefs, and indeed that there is strong evidence to the contrary".
Confirmation bias is a form of cognitive bias, a psychological factor that contributes to belief in astrology. Astrology believers tend to selectively remember predictions that turn out to be true, and do not remember those that turn out false. Another, separate, form of confirmation bias also plays a role, where believers often fail to distinguish between messages that demonstrate special ability and those that do not. Thus there are two distinct forms of confirmation bias that are under study with respect to astrological belief.
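The selective-memory mechanism can be made concrete with a small simulation: predictions that succeed only at chance level, combined with a recall bias favouring hits, produce an inflated remembered accuracy. The hit rate and recall probabilities below are arbitrary values chosen purely for illustration.

```python
import random

random.seed(1)

HIT_RATE = 0.2          # predictions succeed only at chance level
RECALL_HIT = 0.9        # assumed chance of remembering a hit
RECALL_MISS = 0.3       # assumed chance of remembering a miss

remembered_hits = remembered_total = 0
for _ in range(100_000):
    hit = random.random() < HIT_RATE
    recalled = random.random() < (RECALL_HIT if hit else RECALL_MISS)
    if recalled:
        remembered_total += 1
        remembered_hits += hit

# True accuracy is 0.2, but the remembered accuracy is far higher:
# 0.2*0.9 / (0.2*0.9 + 0.8*0.3) is roughly 0.43.
print(remembered_hits / remembered_total)
```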
Demarcation
Under the criterion of falsifiability, first proposed by the philosopher of science Karl Popper, astrology is a pseudoscience. Popper regarded astrology as "pseudo-empirical" in that "it appeals to observation and experiment," but "nevertheless does not come up to scientific standards." In contrast to scientific disciplines, astrology has not responded to falsification through experiment.
In contrast to Popper, the philosopher Thomas Kuhn argued that it was not lack of falsifiability that makes astrology unscientific, but rather that the process and concepts of astrology are non-empirical. Kuhn thought that, though astrologers had, historically, made predictions that categorically failed, this in itself does not make astrology unscientific, nor do attempts by astrologers to explain away failures by claiming that creating a horoscope is very difficult. Rather, in Kuhn's eyes, astrology is not science because it was always more akin to medieval medicine; astrologers followed a sequence of rules and guidelines for a seemingly necessary field with known shortcomings, but they did no research because the field is not amenable to research, and so "they had no puzzles to solve and therefore no science to practise." While an astronomer could correct for failure, an astrologer could not; an astrologer could only explain away failure, not revise the astrological hypothesis in a meaningful way. As such, to Kuhn, even if the stars could influence the path of humans through life, astrology is not scientific.
The philosopher Paul Thagard asserts that astrology cannot be regarded as falsified in this sense until it has been replaced with a successor; in the case of predicting behaviour, psychology is the alternative. To Thagard, a further criterion of demarcation of science from pseudoscience is that the state of the art must progress and that the community of researchers should be attempting to compare the current theory to alternatives, and not be "selective in considering confirmations and disconfirmations." Progress is defined here as explaining new phenomena and solving existing problems, yet astrology has failed to progress, having changed little in nearly 2,000 years. To Thagard, astrologers are acting as though engaged in normal science, believing that the foundations of astrology were well established despite the "many unsolved problems", and in the face of better alternative theories (psychology). For these reasons Thagard views astrology as pseudoscience.
For the philosopher Edward W. James, astrology is irrational not because of the numerous problems with mechanisms and falsification due to experiments, but because an analysis of the astrological literature shows that it is infused with fallacious logic and poor reasoning.
Effectiveness
Astrology has not demonstrated its effectiveness in controlled studies and has no scientific validity. Where it has made falsifiable predictions under controlled conditions, they have been falsified. One famous experiment included 28 astrologers who were asked to match over a hundred natal charts to psychological profiles generated by the California Psychological Inventory (CPI) questionnaire. The double-blind experimental protocol used in this study was agreed upon by a group of physicists and a group of astrologers nominated by the National Council for Geocosmic Research, who advised the experimenters, helped ensure that the test was fair and helped draw the central proposition of natal astrology to be tested. They also chose 26 out of the 28 astrologers for the tests (two more volunteered afterwards). The study, published in Nature in 1985, found that predictions based on natal astrology were no better than chance, and that the testing "...clearly refutes the astrological hypothesis."
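The statistical logic of such matching tests is simple: if an astrologer picks one chart out of k candidates, chance alone succeeds with probability 1/k, and the hypothesis "no better than chance" can be checked with a binomial tail probability. The sketch below illustrates that logic with made-up numbers; it is not a re-analysis of the study itself.

```python
from math import comb

def binomial_tail(successes: int, trials: int, p: float) -> float:
    """P(X >= successes) for X ~ Binomial(trials, p)."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

# Hypothetical numbers: 40 matches attempted, one chart chosen from
# three candidates each time (chance = 1/3), 16 correct choices.
p_value = binomial_tail(16, 40, 1/3)
print(f"{p_value:.3f}")  # about 0.23: consistent with chance
```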
In 1955, the astrologer and psychologist Michel Gauquelin stated that though he had failed to find evidence that supported indicators like zodiacal signs and planetary aspects in astrology, he did find positive correlations between the diurnal positions of some planets and success in professions that astrology traditionally associates with those planets. The best-known of Gauquelin's findings is based on the positions of Mars in the natal charts of successful athletes and became known as the Mars effect. A study conducted by seven French scientists attempted to replicate the claim, but found no statistical evidence. They attributed the effect to selective bias on Gauquelin's part, accusing him of attempting to persuade them to add or delete names from their study.
Geoffrey Dean has suggested that the effect may be caused by self-reporting of birth dates by parents rather than any issue with the study by Gauquelin. The suggestion is that a small subset of the parents may have altered the reported birth times to be consistent with better astrological charts for a related profession. The number of births under astrologically undesirable conditions was also lower, indicating that parents chose dates and times to suit their beliefs. The sample group was taken from a time when belief in astrology was more common. Gauquelin had failed to find the Mars effect in more recent populations, where a nurse or doctor recorded the birth information.
Dean, a scientist and former astrologer, and the psychologist Ivan Kelly conducted a large-scale scientific test involving more than one hundred cognitive, behavioural, physical, and other variables, but found no support for astrology. Furthermore, a meta-analysis pooled 40 studies that involved 700 astrologers and over 1,000 birth charts. Ten of the tests, which involved 300 participants, had the astrologers pick the correct chart interpretation out of a number of others that were not the astrologically correct chart interpretation (usually three to five others). When date and other obvious clues were removed, no significant results suggested there was any preferred chart.
Lack of mechanisms and consistency
Testing the validity of astrology can be difficult, because there is no consensus amongst astrologers as to what astrology is or what it can predict. Most professional astrologers are paid to predict the future or describe a person's personality and life, but most horoscopes only make vague untestable statements that can apply to almost anyone.
Many astrologers claim that astrology is scientific, while some have proposed conventional causal agents such as electromagnetism and gravity. Scientists reject these mechanisms as implausible since, for example, the magnetic field of a large but distant planet such as Jupiter, as measured from Earth, is far smaller than that produced by ordinary household appliances.
Western astrology has taken the earth's axial precession (also called precession of the equinoxes) into account since Ptolemy's Almagest, so the "first point of Aries", the start of the astrological year, continually moves against the background of the stars. The tropical zodiac has no connection to the stars, and as long as no claims are made that the constellations themselves are in the associated sign, astrologers avoid the concept that precession seemingly moves the constellations. Charpak and Broch, noting this, referred to astrology based on the tropical zodiac as being "...empty boxes that have nothing to do with anything and are devoid of any consistency or correspondence with the stars." Sole use of the tropical zodiac is inconsistent with references made, by the same astrologers, to the Age of Aquarius, which depends on when the vernal point enters the constellation of Aquarius.
Astrologers usually have only a small knowledge of astronomy, and often do not take into account basic principles such as the precession of the equinoxes, which changes the position of the sun with time. Charpak and Broch commented on the example of Élizabeth Teissier, who claimed that "The sun ends up in the same place in the sky on the same date each year" as the basis for claims that two people with the same birthday, but a number of years apart, should be under the same planetary influence. They noted that "There is a difference of about twenty-two thousand miles between Earth's location on any specific date in two successive years", and that thus the two people should not be under the same influence according to astrology. Over a 40-year period there would be a difference greater than 780,000 miles.
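The scale of these effects can be checked with back-of-the-envelope arithmetic. The sketch below is a rough estimate only, assuming a circular orbit, a mean Earth-Sun distance of 149.6 million km, and the standard general-precession rate of about 50.3 arcseconds per year; on those assumptions it computes how far the vernal point has drifted since Ptolemy's era and an annual same-date positional shift of the same order as the figure Charpak and Broch cite.

```python
import math

PRECESSION_ARCSEC_PER_YEAR = 50.3      # general precession, approximate
ORBIT_RADIUS_KM = 149.6e6              # mean Earth-Sun distance
KM_PER_MILE = 1.609344

# Drift of the vernal point from Ptolemy's era (c. 150 CE) to 2025:
years = 2025 - 150
drift_deg = PRECESSION_ARCSEC_PER_YEAR * years / 3600
print(f"vernal point drift: {drift_deg:.1f} degrees")  # ~26 degrees,
# i.e. almost a whole 30-degree sign

# Yearly same-date shift of Earth relative to the stars, treating the
# orbit as circular: arc length = radius * precession angle.
angle_rad = math.radians(PRECESSION_ARCSEC_PER_YEAR / 3600)
shift_miles = ORBIT_RADIUS_KM * angle_rad / KM_PER_MILE
print(f"annual shift: {shift_miles:,.0f} miles")  # roughly 22,700 miles
```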
Reception in the social sciences
The general consensus of astronomers and other natural scientists is that astrology is a pseudoscience which carries no predictive capability, with many philosophers of science considering it a "paradigm or prime example of pseudoscience." Some scholars in the social sciences have cautioned against categorizing astrology, especially ancient astrology, as "just" a pseudoscience or projecting the distinction backwards into the past. Thagard, while demarcating it as a pseudoscience, notes that astrology "should be judged as not pseudoscientific in classical or Renaissance times...Only when the historical and social aspects of science are neglected does it become plausible that pseudoscience is an unchanging category." Historians of science such as Tamsyn Barton, Roger Beck, Francesca Rochberg, and Wouter J. Hanegraaff argue that such a wholesale description is anachronistic when applied to historical contexts, stressing that astrology was not pseudoscience before the 18th century and noting the importance of the discipline to the development of medieval science. R. J. Hankinson writes in the context of Hellenistic astrology that "the belief in the possibility of [astrology] was, at least some of the time, the result of careful reflection on the nature and structure of the universe."
Nicholas Campion, both an astrologer and an academic historian of astrology, argues that Indigenous astronomy is largely used as a synonym for astrology in academia, and that modern Indian and Western astrology are better understood as modes of cultural astronomy or ethnoastronomy. Roy Willis and Patrick Curry draw a distinction between propositional episteme and metaphoric metis in the ancient world, identifying astrology with the latter and noting that the central concern of astrology "is not knowledge (factual, let alone scientific) but wisdom (ethical, spiritual and pragmatic)". Similarly, the historian of science Justin Niermeier-Dohoney writes that astrology was "more than simply a science of prediction using the stars and comprised a vast body of beliefs, knowledge, and practices with the overarching theme of understanding the relationship between humanity and the rest of the cosmos through an interpretation of stellar, solar, lunar, and planetary movement." Scholars such as the Assyriologist Matthew Rutz have begun using the term "astral knowledge" rather than astrology "to better describe a category of beliefs and practices much broader than the term 'astrology' can capture."
Cultural impact
Western politics and society
In the West, political leaders have sometimes consulted astrologers. For example, the British intelligence agency MI5 employed Louis de Wohl as an astrologer after claims surfaced that Adolf Hitler used astrology to time his actions. The War Office was "...interested to know what Hitler's own astrologers would be telling him from week to week." In fact, de Wohl's predictions were so inaccurate that he was soon labelled a "complete charlatan", and later evidence showed that Hitler considered astrology "complete nonsense". After John Hinckley's attempted assassination of US President Ronald Reagan, first lady Nancy Reagan commissioned astrologer Joan Quigley to act as the secret White House astrologer. However, Quigley's role ended in 1988 when it became public through the memoirs of former chief of staff, Donald Regan.
There was a boom in interest in astrology in the late 1960s. The sociologist Marcello Truzzi described three levels of involvement of "Astrology-believers" to account for its revived popularity in the face of scientific discrediting. He found that most astrology-believers did not claim it was a scientific explanation with predictive power. Instead, those superficially involved, knowing "next to nothing" about astrology's 'mechanics', read newspaper astrology columns, and could benefit from "tension-management of anxieties" and "a cognitive belief-system that transcends science." Those at the second level usually had their horoscopes cast and sought advice and predictions. They were much younger than those at the first level, and could benefit from knowledge of the language of astrology and the resulting ability to belong to a coherent and exclusive group. Those at the third level were highly involved and usually cast horoscopes for themselves. Astrology provided this small minority of astrology-believers with a "meaningful view of their universe and [gave] them an understanding of their place in it." This third group took astrology seriously, possibly as an overarching religious worldview (a sacred canopy, in Peter L. Berger's phrase), whereas the other two groups took it playfully and irreverently.
In 1953, the sociologist Theodor W. Adorno conducted a study of the astrology column of a Los Angeles newspaper as part of a project examining mass culture in capitalist society. Adorno believed that popular astrology, as a device, invariably leads to statements that encourage conformity, and that astrologers who go against conformity, by discouraging performance at work and the like, risk losing their jobs. Adorno concluded that astrology is a large-scale manifestation of systematic irrationalism, where individuals are subtly led, through flattery and vague generalisations, to believe that the author of the column is addressing them directly. Drawing a parallel with Karl Marx's phrase "opium of the people", Adorno commented that "occultism is the metaphysic of the dopes."
A 2005 Gallup poll and a 2009 survey by the Pew Research Center reported that 25% of US adults believe in astrology, while a 2018 Pew survey found a figure of 29%. According to data released in the National Science Foundation's 2014 Science and Engineering Indicators study, "Fewer Americans rejected astrology in 2012 than in recent years." The NSF study noted that in 2012, "slightly more than half of Americans said that astrology was 'not at all scientific,' whereas nearly two-thirds gave this response in 2010. The comparable percentage has not been this low since 1983." Astrology apps became popular in the late 2010s, some receiving millions of dollars in Silicon Valley venture capital.
India and Japan
In India, there is a long-established and widespread belief in astrology. It is commonly used for daily life, particularly in matters concerning marriage and career, and makes extensive use of electional, horary and karmic astrology. Indian politics have also been influenced by astrology. It is still considered a branch of the Vedanga. In 2001, Indian scientists and politicians debated and critiqued a proposal to use state money to fund research into astrology, resulting in permission for Indian universities to offer courses in Vedic astrology.
In February 2011, the Bombay High Court reaffirmed astrology's standing in India when it dismissed a case that challenged its status as a science.
In Japan, strong belief in astrology has led to dramatic changes in the fertility rate and the number of abortions in years of the Fire Horse (hinoeuma). Adherents believe that women born in hinoeuma years are unmarriageable and bring bad luck to their father or husband. In 1966, the number of babies born in Japan dropped by over 25% as parents tried to avoid the stigma of having a daughter born in the hinoeuma year.
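Because hinoeuma combines one of the ten heavenly stems with one of the twelve earthly branches, it recurs every 60 years. A quick check, anchored on 1966 and using the Gregorian year as an approximation:

```python
def is_hinoeuma(year: int) -> bool:
    """True if a Gregorian year is a Fire Horse year (60-year cycle)."""
    return (year - 1966) % 60 == 0

print([y for y in range(1850, 2050) if is_hinoeuma(y)])  # [1906, 1966, 2026]
```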
Literature and music
The fourteenth-century English poets John Gower and Geoffrey Chaucer both referred to astrology in their works, including Gower's Confessio Amantis and Chaucer's The Canterbury Tales. Chaucer commented explicitly on astrology in his Treatise on the Astrolabe, demonstrating personal knowledge of one area, judicial astrology, with an account of how to find the ascendant or rising sign.
In the fifteenth century, references to astrology, such as with similes, became "a matter of course" in English literature.
In the sixteenth century, John Lyly's 1597 play, The Woman in the Moon, is wholly motivated by astrology, while Christopher Marlowe makes astrological references in his plays Doctor Faustus and Tamburlaine (both c. 1590), and Sir Philip Sidney refers to astrology at least four times in his romance The Countess of Pembroke's Arcadia (c. 1580). Edmund Spenser uses astrology both decoratively and causally in his poetry, revealing "...unmistakably an abiding interest in the art, an interest shared by a large number of his contemporaries." George Chapman's play, Byron's Conspiracy (1608), similarly uses astrology as a causal mechanism in the drama. William Shakespeare's attitude towards astrology is unclear, with contradictory references in plays including King Lear, Antony and Cleopatra, and Richard II. Shakespeare was familiar with astrology and made use of his knowledge of astrology in nearly every play he wrote, assuming a basic familiarity with the subject in his commercial audience. Outside theatre, the physician and mystic Robert Fludd practised astrology, as did the quack doctor Simon Forman. In Elizabethan England, "The usual feeling about astrology ... [was] that it is the most useful of the sciences."
In seventeenth-century Spain, Lope de Vega, with a detailed knowledge of astronomy, wrote plays that ridicule astrology. In his pastoral romance La Arcadia (1598), it leads to absurdity; in his novela Guzman el Bravo (1624), he concludes that the stars were made for man, not man for the stars. Calderón de la Barca wrote the 1641 comedy Astrologo Fingido (The Pretended Astrologer); the plot was borrowed by the French playwright Thomas Corneille for his 1651 comedy Feint Astrologue.
The most famous piece of music influenced by astrology is the orchestral suite The Planets. Written by the British composer Gustav Holst (1874–1934), and first performed in 1918, the framework of The Planets is based upon the astrological symbolism of the planets. Each of the seven movements of the suite is based upon a different planet, though the movements are not in the order of the planets from the Sun. The composer Colin Matthews wrote an eighth movement entitled Pluto, the Renewer, first performed in 2000. In 1937, another British composer, Constant Lambert, wrote a ballet on astrological themes, called Horoscope. In 1974, the New Zealand composer Edwin Carr wrote The Twelve Signs: An Astrological Entertainment for orchestra without strings. Camille Paglia acknowledges astrology as an influence on her work of literary criticism Sexual Personae (1990).
Astrology features strongly in Eleanor Catton's The Luminaries, recipient of the 2013 Man Booker Prize.
See also
Astrology and science
Astrology software
Barnum effect
List of astrological traditions, types, and systems
List of topics characterised as pseudoscience
Jewish astrology
Scientific skepticism
External links
Digital International Astrology Library (ancient astrological works)
Biblioastrology (www.biblioastrology.com) (specialised bibliography)
Paris Observatory
Astrology – Merriam-Webster
|
https://en.wikipedia.org/wiki/Armour
|
Armour (Commonwealth English) or armor (American English; see spelling differences) is a covering used to protect an object, individual, or vehicle from physical injury or damage, especially from direct-contact weapons or projectiles during combat, or from a potentially dangerous environment or activity (e.g. cycling, construction sites, etc.). Personal armour is used to protect soldiers and war animals. Vehicle armour is used on warships, armoured fighting vehicles, and some combat aircraft, mostly ground-attack aircraft.
A second use of the term armour describes armoured forces, armoured weapons, and their role in combat. After the development of armoured warfare, tanks and mechanised infantry and their combat formations came to be referred to collectively as "armour".
Etymology
The word "armour" began to appear in the Middle Ages as a derivative of Old French. It is dated from 1297 as a "mail, defensive covering worn in combat". The word originates from the Old French , itself derived from the Latin meaning "arms and/or equipment", with the root meaning "arms or gear".
Personal
Armour has been used throughout recorded history. It has been made from a variety of materials, beginning with the use of leathers or fabrics as protection and evolving through chain mail and metal plate into today's modern composites. For much of military history the manufacture of metal personal armour has dominated the technology and employment of armour.
Armour drove the development of many important technologies of the Ancient World, including wood lamination, mining, metal refining, vehicle manufacture, leather processing, and later decorative metal working. Its production was influential in the industrial revolution, and furthered commercial development of metallurgy and engineering. Armour was the single most influential factor in the development of firearms, which in turn revolutionised warfare.
History
Significant factors in the development of armour include the economic and technological necessities of its production. For instance, plate armour first appeared in Medieval Europe when water-powered trip hammers made the formation of plates faster and cheaper. At times the development of armour has paralleled the development of increasingly effective weaponry on the battlefield, with armourers seeking to create better protection without sacrificing mobility.
Well-known armour types in European history include the lorica hamata, lorica squamata, and lorica segmentata of the Roman legions; the mail hauberk of the early medieval age; the full steel plate harness worn by later medieval and renaissance knights; and the breast and back plates worn by heavy cavalry in several European countries until the first year of World War I (1914–1915). The samurai warriors of feudal Japan utilised many types of armour for hundreds of years, up to the 19th century.
Early
Cuirasses and helmets were manufactured in Japan as early as the 4th century. Tankō, worn by foot soldiers, and keikō, worn by horsemen, were both pre-samurai types of early Japanese armour constructed from iron plates connected together by leather thongs. Japanese lamellar armour (keikō) passed through Korea and reached Japan around the 5th century. These early Japanese lamellar armours took the form of a sleeveless jacket, leggings and a helmet.
Armour did not always cover all of the body; sometimes no more than a helmet and leg plates were worn. The rest of the body was generally protected by means of a large shield. One example of an army that equipped its troops in this fashion was that of the Aztecs (13th to 15th century CE).
In East Asia, many types of armour were commonly used at different times by various cultures, including scale armour, lamellar armour, laminar armour, plated mail, mail, plate armour, and brigandine. Around the Tang dynasty, the Song dynasty, and the early Ming period, cuirasses and plates (mingguangjia) were also used, with more elaborate versions for officers in war. The Chinese of that time used partial plates for "important" body parts instead of covering the whole body, since too much plate armour would hinder their martial arts movement. The other body parts were covered in cloth, leather, lamellar, or mountain pattern armour. In pre-Qin dynasty times, leather armour was made from the hides of various animals, including more exotic ones such as the rhinoceros.
Mail, sometimes called "chainmail", made of interlocking iron rings is believed to have first appeared some time after 300 BC. Its invention is credited to the Celts; the Romans are thought to have adopted their design.
Gradually, small additional plates or discs of iron were added to the mail to protect vulnerable areas. Hardened leather and splinted construction were used for arm and leg pieces. The coat of plates was developed, an armour made of large plates sewn inside a textile or leather coat.
13th to 18th century Europe
Early plate armour in Italy, and elsewhere in the 13th to 15th centuries, was made of iron. Iron armour could be carburised or case hardened to give a surface of harder steel. Plate armour became cheaper than mail by the 15th century, as it required much less labour, and labour had become much more expensive after the Black Death, though it did require larger furnaces to produce larger blooms. Mail continued to be used to protect those joints which could not be adequately protected by plate, such as the armpit, the crook of the elbow and the groin. Another advantage of plate was that a lance rest could be fitted to the breast plate.
The small skull cap evolved into a bigger true helmet, the bascinet, as it was lengthened downward to protect the back of the neck and the sides of the head. Additionally, several new forms of fully enclosed helmets were introduced in the late 14th century.
Probably the most recognised style of armour in the world is the plate armour associated with the knights of the European Late Middle Ages, which continued in use in all European countries into the early 17th century.
By 1400, the full harness of plate armour had been developed in armouries of Lombardy. Heavy cavalry dominated the battlefield for centuries in part because of their armour.
In the early 15th century, advances in weaponry allowed infantry to defeat armoured knights on the battlefield. The quality of the metal used in armour deteriorated as armies became bigger, and armour was made thicker, necessitating the breeding of larger cavalry horses. Whereas armour of the 14th and 15th centuries seldom weighed more than 15 kg, by the late 16th century it weighed 25 kg. This increasing weight and thickness gave late 16th-century armour substantial resistance.
In the early years of low-velocity firearms, full suits of armour, or breast plates, actually stopped bullets fired from a modest distance. Crossbow bolts, if still in use, would seldom penetrate good plate, nor would any bullet unless fired from close range. In effect, rather than making plate armour obsolete, the use of firearms stimulated the development of plate armour into its later stages. For most of that period, it allowed horsemen to fight while being the targets of defending arquebusiers without being easily killed. Full suits of armour were actually worn by generals and princely commanders right up to the second decade of the 18th century; it was the only way they could be mounted and survey the overall battlefield in safety from distant musket fire.
The horse was afforded protection from lances and infantry weapons by steel plate barding. This gave the horse protection and enhanced the visual impression of a mounted knight. Late in the era, elaborate barding was used in parade armour.
Later
Gradually, starting in the mid-16th century, one plate element after another was discarded to save weight for foot soldiers.
Back and breast plates continued to be used throughout the entire period of the 18th century and through Napoleonic times, in many European heavy cavalry units, until the early 20th century. From their introduction, muskets could pierce plate armour, so cavalry had to be far more mindful of musket fire. In Japan, armour continued to be used until the late 19th century, the last major fighting in which it was used occurring in 1868. Samurai armour had one last short-lived use in 1877 during the Satsuma Rebellion.
Though the age of the knight was over, armour continued to be used in many capacities. Soldiers in the American Civil War bought iron and steel vests from peddlers (both sides had considered but rejected body armour for standard issue). The effectiveness of the vests varied widely; some successfully deflected bullets and saved lives, but others were poorly made and resulted in tragedy for the soldiers. In any case, many soldiers abandoned the vests due to their weight on long marches, as well as the stigma of cowardice they attracted from fellow troops.
At the start of World War I, thousands of French cuirassiers rode out to engage the German cavalry. By that period, the shiny metallic cuirass was covered in dark paint, and a canvas wrap covered their elaborate Napoleonic-style helmets, to prevent sunlight reflecting off the surfaces and alerting the enemy to their location. Their armour was only meant for protection against edged weapons such as bayonets, sabres, and lances. Cavalry had to be wary of repeating rifles, machine guns, and artillery, unlike the foot soldiers, who at least had a trench to give them some protection.
Present
Today, ballistic vests, also known as flak jackets, made of ballistic cloth (e.g. Kevlar, Dyneema, Twaron, Spectra) and ceramic or metal plates, are common among police forces, security staff, corrections officers and some branches of the military.
The US Army has adopted Interceptor body armour, which uses Enhanced Small Arms Protective Inserts (ESAPIs) in the chest, sides, and back of the armour. Each plate is rated to stop a range of ammunition, including three hits from a 7.62×51 NATO AP round. Dragon Skin is another ballistic vest, which has been tested with mixed results. As of 2019, it has been deemed too heavy, expensive, and unreliable in comparison to more traditional plates; its protection is outdated compared to modern US IOTV armour, and even in testing it was deemed a downgrade from the IBA.
The British Armed Forces also have their own armour, known as Osprey. It is rated to the same general equivalent standard as the US counterpart, the Improved Outer Tactical Vest, and now the Soldier Plate Carrier System and Modular Tactical Vest.
The Russian Armed Forces also have their own armour, known as the 6B43 through 6B45, depending on the variant. Their armour is rated under the GOST system, which, owing to regional conditions, has resulted in a technically higher protective level overall.
Vehicle
The first modern production technology for armour plating was used by navies in the construction of the ironclad warship, reaching its pinnacle of development with the battleship. The first tanks were produced during World War I. Aerial armour has been used to protect pilots and aircraft systems since the First World War.
In modern ground forces' usage, the meaning of armour has expanded to include the role of troops in combat. After the evolution of armoured warfare, mechanised infantry were mounted in armoured fighting vehicles and replaced light infantry in many situations. In modern armoured warfare, armoured units equipped with tanks and infantry fighting vehicles serve the historic role of heavy cavalry, light cavalry, and dragoons, and belong to the armoured branch of warfare.
History
Ships
The first ironclad battleship, with iron armour over a wooden hull, the French Gloire, was launched by the French Navy in 1859, prompting the British Royal Navy to build a counter. The following year they launched HMS Warrior, which was twice the size and had iron armour over an iron hull. After the first battle between two ironclads took place in 1862 during the American Civil War, it became clear that the ironclad had replaced the unarmoured line-of-battle ship as the most powerful warship afloat.
Ironclads were designed for several roles, including as high seas battleships, coastal defence ships, and long-range cruisers. The rapid evolution of warship design in the late 19th century transformed the ironclad from a wooden-hulled vessel which carried sails to supplement its steam engines into the steel-built, turreted battleships and cruisers familiar in the 20th century. This change was pushed forward by the development of heavier naval guns (the ironclads of the 1880s carried some of the heaviest guns ever mounted at sea), more sophisticated steam engines, and advances in metallurgy which made steel shipbuilding possible.
The rapid pace of change in the ironclad period meant that many ships were obsolete as soon as they were complete, and that naval tactics were in a state of flux. Many ironclads were built to make use of the ram or the torpedo, which a number of naval designers considered the crucial weapons of naval combat. There is no clear end to the ironclad period, but towards the end of the 1890s the term ironclad dropped out of use. New ships were increasingly constructed to a standard pattern and designated battleships or armoured cruisers.
Trains
Armoured trains saw use from the mid-19th to the mid-20th century, including the American Civil War (1861–1865), the Franco-Prussian War (1870–1871), the First and Second Boer Wars (1880–81 and 1899–1902), the Polish–Soviet War (1919–1921), the First (1914–1918) and Second World Wars (1939–1945) and the First Indochina War (1946–1954). The most intensive use of armoured trains was during the Russian Civil War (1918–1920).
Armoured fighting vehicles
Ancient siege engines were usually protected by wooden armour, often covered with wet hides or thin metal to prevent being easily burned.
Medieval war wagons were horse-drawn wagons that were similarly armoured. These contained guns or crossbowmen that could fire through gun-slits.
The first modern armoured fighting vehicles were armoured cars, developed circa 1900. These started as ordinary wheeled motor-cars protected by iron shields, typically mounting a machine gun.
During the First World War, the stalemate of trench warfare on the Western Front spurred the development of the tank. It was envisioned as an armoured machine that could advance under fire from enemy rifles and machine guns, and respond with its own heavy guns. It used caterpillar tracks to cross ground broken up by shellfire and trenches.
Aircraft
With the development of effective anti-aircraft artillery in the period before the Second World War, military pilots, once the "knights of the air" during the First World War, became far more vulnerable to ground fire. As a response, armour plating was added to aircraft to protect aircrew and vulnerable areas such as engines and fuel tanks. Self-sealing fuel tanks functioned like armour in that they added protection but also increased weight and cost.
Present
Tank armour has progressed from the Second World War armour forms, now incorporating not only harder composites, but also reactive armour designed to defeat shaped charges. As a result of this, the main battle tank (MBT) conceived in the Cold War era can survive multiple rocket-propelled grenade strikes with minimal effect on the crew or the operation of the vehicle. The light tanks that were the last descendants of the light cavalry during the Second World War have almost completely disappeared from the world's militaries due to increased lethality of the weapons available to the vehicle-mounted infantry.
The armoured personnel carrier (APC) was devised during the First World War. It allows the safe and rapid movement of infantry in a combat zone, minimising casualties and maximising mobility. APCs are fundamentally different from the previously used armoured half-tracks in that they offer a higher level of protection from artillery burst fragments, and greater mobility in more terrain types. The basic APC design was substantially expanded to an infantry fighting vehicle (IFV) when properties of an APC and a light tank were combined in one vehicle.
Naval armour has fundamentally changed from the Second World War doctrine of thicker plating to defend against shells, bombs and torpedoes. Passive defence naval armour is limited to kevlar or steel (either single layer or as spaced armour) protecting particularly vital areas from the effects of nearby impacts. Since ships cannot carry enough armour to completely protect against anti-ship missiles, they depend more on defensive weapons destroying incoming missiles, or causing them to miss by confusing their guidance systems with electronic warfare.
Although the role of the ground-attack aircraft significantly diminished after the Korean War, it re-emerged during the Vietnam War; in recognition of this, the US Air Force authorised the design and production of what became the A-10, a dedicated anti-armour and ground-attack aircraft that first saw action in the Gulf War.
High-voltage transformer fire barriers are often required to defeat ballistics from small arms as well as projectiles from transformer bushings and lightning arresters, which form part of large electrical transformers, per NFPA 850. Such fire barriers may be designed to inherently function as armour, or may be passive fire protection materials augmented by armour. In the latter case, care must be taken that the armour's reaction to fire does not compromise the barrier. Because the barrier must defeat explosions and projectiles in addition to fire, and both functions must be provided simultaneously, the assembly must be fire-tested as a whole to provide realistic evidence of fitness for purpose.
Combat drones use little to no vehicular armour because they are unmanned, which keeps them lightweight and small.
Animal armour
Horse armour
Body armour for war horses has been used since at least 2000 BC. Cloth, leather, and metal protection covered cavalry horses in ancient civilisations, including ancient Egypt, Assyria, Persia, and Rome. Some formed heavy cavalry units of armoured horses and riders used to attack infantry and mounted archers. Armour for horses is called barding (also spelled bard or barb) especially when used by European knights.
During the late Middle Ages, as armour protection for knights became more effective, their mounts became targets. This vulnerability was exploited by the Scots at the Battle of Bannockburn in the 14th century, when horses were killed by the infantry, and by the English at the Battle of Crécy in the same century, where longbowmen shot horses and heavy infantry killed the dismounted French knights. Barding developed as a response to such events.
Examples of armour for horses could be found as far back as classical antiquity. Cataphracts, with scale armour for both rider and horse, are believed by many historians to have influenced the later European knights, via contact with the Byzantine Empire.
Surviving period examples of barding are rare; however, complete sets are on display at the Philadelphia Museum of Art, the Wallace Collection in London, the Royal Armouries in Leeds, and the Metropolitan Museum of Art in New York. Horse armour could be made in whole or in part of cuir bouilli (hardened leather), but surviving examples of this are especially rare.
Elephant armour
War elephants were first used in ancient times without armour, but armour was introduced because elephants injured by enemy weapons would often flee the battlefield. Elephant armour was often made from hardened leather, which was fitted onto an individual elephant while moist, then dried to create a hardened shell. Alternatively, metal armour pieces were sometimes sewn into heavy cloth. Later lamellar armour (small overlapping metal plates) was introduced. Full plate armour was not typically used due to its expense and the danger of the animal overheating.
See also
Battledress
Bomb suit
High-voltage transformer fire barriers
Linothorax
Powered exoskeleton
Rolled homogeneous armour
Notes
References
"Ballistic Protection Levels." BulletproofME.com Body Armor. ArmorUP L.P., n.d. 19 October 2014
External links
|
https://en.wikipedia.org/wiki/Arcology
|
Arcology, a portmanteau of "architecture" and "ecology", is a field of architectural design principles for very densely populated, ecologically low-impact human habitats.
The term was coined in 1969 by architect Paolo Soleri, who believed that a completed arcology would provide space for a variety of residential, commercial, and agricultural facilities while minimizing individual human environmental impact. These structures have been largely hypothetical, as no large-scale arcology has yet been built.
The concept has been popularized by various science fiction writers. Larry Niven and Jerry Pournelle provided a detailed description of an arcology in their 1981 novel Oath of Fealty. William Gibson mainstreamed the term in his seminal 1984 cyberpunk novel Neuromancer, where each corporation has its own self-contained city known as an arcology. More recently, authors such as Peter F. Hamilton in The Neutronium Alchemist and Paolo Bacigalupi in The Water Knife explicitly used arcologies as part of their scenarios. They are often portrayed as self-contained or economically self-sufficient.
Development
An arcology is distinguished from a merely large building in that it is designed to lessen the impact of human habitation on any given ecosystem. It could be self-sustainable, employing all or most of its own available resources for a comfortable life: power, climate control, food production, air and water conservation and purification, sewage treatment, etc. An arcology is designed to make it possible to supply those items for a large population. An arcology would supply and maintain its own municipal or urban infrastructures in order to operate and connect with other urban environments apart from its own.
Arcologies were proposed in order to reduce human impact on natural resources. Arcology designs might apply conventional building and civil engineering techniques in very large, but practical projects in order to achieve pedestrian economies of scale that have proven, post-automobile, to be difficult to achieve in other ways.
Frank Lloyd Wright proposed an early version called Broadacre City although, in contrast to an arcology, his idea is comparatively two-dimensional and depends on a road network. Wright's plan described transportation, agriculture, and commerce systems that would support an economy. Critics said that Wright's solution failed to account for population growth, and assumed a more rigid democracy than the US actually has.
Buckminster Fuller proposed the Old Man River's City project, a domed city with a capacity of 125,000, as a solution to the housing problems in East St. Louis, Illinois.
Paolo Soleri proposed later solutions, and coined the term "arcology". Soleri describes ways of compacting city structures in three dimensions to combat two-dimensional urban sprawl, to economize on transportation and other energy uses. Like Wright, Soleri proposed changes in transportation, agriculture, and commerce. Soleri explored reductions in resource consumption and duplication, land reclamation; he also proposed to eliminate most private transportation. He advocated for greater "frugality" and favored greater use of shared social resources, including public transit (and public libraries).
Similar real-world projects
Arcosanti is an experimental "arcology prototype", a demonstration project under construction in central Arizona since 1970. Designed by Paolo Soleri, its primary purpose is to demonstrate Soleri's personal designs, his application of principles of arcology to create a pedestrian-friendly urban form.
Projects adhering to the design principles of the arcology concept have been proposed in cities around the world, such as Tokyo, and at Dongtan near Shanghai. The Dongtan project may have collapsed; it failed to open for the Shanghai World Expo in 2010.
McMurdo Station of the United States Antarctic Program and other scientific research stations on Antarctica resemble the popular conception of an arcology as a technologically advanced, relatively self-sufficient human community. The Antarctic research base provides living and entertainment amenities for roughly 3,000 staff who visit each year. Its remoteness and the measures needed to protect its population from the harsh environment give it an insular character. The station is not self-sufficient (the U.S. military delivers 30,000 cubic metres (8,000,000 US gal) of fuel, along with supplies and equipment, yearly through its Operation Deep Freeze resupply effort), but it is isolated from conventional support networks. Under international treaty, it must avoid damage to the surrounding ecosystem.
Begich Towers operates like a small-scale arcology encompassing nearly all of the population of Whittier, Alaska. The building contains residential housing as well as a police station, grocery, and municipal offices.
Whittier once had a second such structure, the Buckner Building. It still stands but was deemed unfit for habitation after the 1964 Alaska earthquake.
The Line is a linear smart city under construction in Neom, Tabuk Province, Saudi Arabia, designed to have no cars, streets or carbon emissions. The Line is planned to be the first development in Neom, a $500 billion project. The city's plans anticipate a population of 9 million. Excavation work had started along the entire length of the project by October 2022.
In popular culture
Most proposals to build real arcologies have failed due to financial, structural or conceptual shortcomings. Arcologies are therefore found primarily in fictional works.
In Robert Silverberg's The World Inside, most of the global population of 75 billion live inside giant skyscrapers, called "urbmons", each of which contains hundreds of thousands of people. The urbmons are arranged in "constellations". Each urbmon is divided into "neighborhoods" of 40 or so floors. All the needs of the inhabitants are provided inside the building – food is grown outside and brought into the building – so the idea of going outside is heretical and can be a sign of madness. The book examines human life when the population density is extremely high.
Another significant example is the 1981 novel Oath of Fealty by Larry Niven and Jerry Pournelle, in which a segment of the population of Los Angeles has moved into an arcology. The plot examines the social changes that result, both inside and outside the arcology. Thus the arcology is not just a plot device but a subject of critique.
In the city-building video game SimCity 2000, self-contained arcologies can be built, reducing the infrastructure needs of the city.
See also
References
Notes
Further reading
Soleri, Paolo. Arcology: The City in the Image of Man. 1969: Cambridge, Massachusetts, MIT Press.
External links
Arcology: The City in the Image of Man by Paolo Soleri (full text online)
Arcology.com – useful links
The Night Land by William Hope Hodgson (full text online)
Victory City
A discussion of arcology concepts
What is an Arcology?
Usage of "arcology" vs. "hyperstructure"
Arcology.com ("An arcology in southern China" on front page)
Arcology ("An arcology is a self-contained environment...")
SculptorsWiki: Arcology ("The only arcology yet on Earth...")
Review of Shadowrun: Renraku Arcology ("What's an arcology? A self-contained, largely self-sufficient living, working, recreational structure...")
|
https://en.wikipedia.org/wiki/Actinide
|
The actinide or actinoid series encompasses the 14 metallic chemical elements with atomic numbers from 89 to 102, actinium through nobelium. The actinide series derives its name from the first element in the series, actinium. The informal chemical symbol An is used in general discussions of actinide chemistry to refer to any actinide.
The 1985 IUPAC Red Book recommends that actinoid be used rather than actinide, since the suffix -ide normally indicates a negative ion. However, owing to widespread current use, actinide is still allowed. Since actinoid literally means actinium-like (cf. humanoid or android), it has been argued for semantic reasons that actinium cannot logically be an actinoid, but IUPAC acknowledges its inclusion based on common usage.
All the actinides are f-block elements. Lawrencium is sometimes considered one as well, despite being a d-block element and a transition metal. The series mostly corresponds to the filling of the 5f electron shell, although in the ground state many have anomalous configurations involving the filling of the 6d shell due to interelectronic repulsion. In comparison with the lanthanides, also mostly f-block elements, the actinides show much more variable valence. They all have very large atomic and ionic radii and exhibit an unusually large range of physical properties. While actinium and the late actinides (from americium onwards) behave similarly to the lanthanides, the elements thorium, protactinium, and uranium are much more similar to transition metals in their chemistry, with neptunium and plutonium occupying an intermediate position.
All actinides are radioactive and release energy upon radioactive decay; naturally occurring uranium and thorium, and synthetically produced plutonium are the most abundant actinides on Earth. These are used in nuclear reactors and nuclear weapons. Uranium and thorium also have diverse current or historical uses, and americium is used in the ionization chambers of most modern smoke detectors.
Of the actinides, primordial thorium and uranium occur naturally in substantial quantities. The radioactive decay of uranium produces transient amounts of actinium and protactinium, and atoms of neptunium and plutonium are occasionally produced from transmutation reactions in uranium ores. The other actinides are purely synthetic elements. Nuclear weapons tests have released at least six actinides heavier than plutonium into the environment; analysis of debris from a 1952 hydrogen bomb explosion showed the presence of americium, curium, berkelium, californium, einsteinium and fermium.
In presentations of the periodic table, the f-block elements are customarily shown as two additional rows below the main body of the table. This convention is entirely a matter of aesthetics and formatting practicality; a rarely used wide-formatted periodic table inserts the 4f and 5f series in their proper places, as parts of the table's sixth and seventh rows (periods).
Discovery, isolation and synthesis
Like the lanthanides, the actinides form a family of elements with similar properties. Within the actinides, there are two overlapping groups: transuranium elements, which follow uranium in the periodic table; and transplutonium elements, which follow plutonium. Compared to the lanthanides, which (except for promethium) are found in nature in appreciable quantities, most actinides are rare. Most do not occur in nature, and of those that do, only thorium and uranium do so in more than trace quantities. The most abundant or easily synthesized actinides are uranium and thorium, followed by plutonium, americium, actinium, protactinium, neptunium, and curium.
The existence of transuranium elements was suggested in 1934 by Enrico Fermi, based on his experiments. However, even though four actinides were known by that time, it was not yet understood that they formed a family similar to lanthanides. The prevailing view that dominated early research into transuranics was that they were regular elements in the 7th period, with thorium, protactinium and uranium corresponding to 6th-period hafnium, tantalum and tungsten, respectively. Synthesis of transuranics gradually undermined this point of view. By 1944, an observation that curium failed to exhibit oxidation states above 4 (whereas its supposed 6th period homolog, platinum, can reach oxidation state of 6) prompted Glenn Seaborg to formulate an "actinide hypothesis". Studies of known actinides and discoveries of further transuranic elements provided more data in support of this position, but the phrase "actinide hypothesis" (the implication being that a "hypothesis" is something that has not been decisively proven) remained in active use by scientists through the late 1950s.
At present, there are two major methods of producing isotopes of transplutonium elements: (1) irradiation of the lighter elements with neutrons; (2) irradiation with accelerated charged particles. The first method is more important for applications, as only neutron irradiation using nuclear reactors allows the production of sizeable amounts of synthetic actinides; however, it is limited to relatively light elements. The advantage of the second method is that elements heavier than plutonium, as well as neutron-deficient isotopes, can be obtained, which are not formed during neutron irradiation.
In 1962–1966, there were attempts in the United States to produce transplutonium isotopes using a series of six underground nuclear explosions. Small samples of rock were extracted from the blast area immediately after the test to study the explosion products, but no isotopes with mass number greater than 257 could be detected, despite predictions that such isotopes would have relatively long α-decay half-lives. This non-observation was attributed to spontaneous fission, owing to the high speed of the products, and to other decay channels, such as neutron emission and nuclear fission.
From actinium to uranium
Uranium and thorium were the first actinides discovered. Uranium was identified in 1789 by the German chemist Martin Heinrich Klaproth in pitchblende ore. He named it after the planet Uranus, which had been discovered eight years earlier. Klaproth was able to precipitate a yellow compound (likely sodium diuranate) by dissolving pitchblende in nitric acid and neutralizing the solution with sodium hydroxide. He then reduced the obtained yellow powder with charcoal, and extracted a black substance that he mistook for metal. Sixty years later, the French scientist Eugène-Melchior Péligot identified it as uranium oxide. He also isolated the first sample of uranium metal by heating uranium tetrachloride with metallic potassium. The atomic mass of uranium was then calculated as 120, but Dmitri Mendeleev in 1872 corrected it to 240 using his periodicity laws. This value was confirmed experimentally in 1882 by K. Zimmerman.
Thorium oxide was discovered by Friedrich Wöhler in the mineral thorianite, which was found in Norway (1827). Jöns Jacob Berzelius characterized this material in more detail in 1828. By reduction of thorium tetrachloride with potassium, he isolated the metal and named it thorium after the Norse god of thunder and lightning Thor. The same isolation method was later used by Péligot for uranium.
Actinium was discovered in 1899 by André-Louis Debierne, an assistant of Marie Curie, in the pitchblende waste left after removal of radium and polonium. He described the substance (in 1899) as similar to titanium and (in 1900) as similar to thorium. The discovery of actinium by Debierne was however questioned in 1971 and 2000, arguing that Debierne's publications in 1904 contradicted his earlier work of 1899–1900. This view instead credits the 1902 work of Friedrich Oskar Giesel, who discovered a radioactive element named emanium that behaved similarly to lanthanum. The name actinium comes from the Greek aktis (ἀκτίς), meaning beam or ray. This metal was discovered not by its own radiation but by the radiation of the daughter products. Owing to the close similarity of actinium and lanthanum and low abundance, pure actinium could only be produced in 1950. The term actinide was probably introduced by Victor Goldschmidt in 1937.
Protactinium was possibly isolated in 1900 by William Crookes. It was first identified in 1913, when Kasimir Fajans and Oswald Helmuth Göhring encountered the short-lived isotope 234mPa (half-life 1.17 minutes) during their studies of the 238U decay. They named the new element brevium (from Latin brevis meaning brief); the name was changed to protoactinium (from Greek πρῶτος + ἀκτίς meaning "first beam element") in 1918 when two groups of scientists, led by the Austrian Lise Meitner and Otto Hahn of Germany and Frederick Soddy and John Cranston of Great Britain, independently discovered the much longer-lived 231Pa. The name was shortened to protactinium in 1949. This element was little characterized until 1960, when A. G. Maddock and his co-workers in the U.K. isolated 130 grams of protactinium from 60 tonnes of waste left after extraction of uranium from its ore.
Neptunium and above
Neptunium (named for the planet Neptune, the next planet out from Uranus, after which uranium was named) was discovered by Edwin McMillan and Philip H. Abelson in 1940 in Berkeley, California. They produced the 239Np isotope (half-life = 2.4 days) by bombarding uranium with slow neutrons. It was the first transuranium element produced synthetically.
Transuranium elements do not occur in sizeable quantities in nature and are commonly synthesized via nuclear reactions conducted with nuclear reactors. For example, under irradiation with reactor neutrons, uranium-238 partially converts to plutonium-239:

_{92}^{238}U + _0^1n -> _{92}^{239}U ->[\beta^-] _{93}^{239}Np ->[\beta^-] _{94}^{239}Pu
This synthesis reaction was used by Fermi and his collaborators in their design of the reactors located at the Hanford Site, which produced significant amounts of plutonium-239 for the nuclear weapons of the Manhattan Project and the United States' post-war nuclear arsenal.
Actinides with the highest mass numbers are synthesized by bombarding uranium, plutonium, curium and californium with ions of nitrogen, oxygen, carbon, neon or boron in a particle accelerator. Thus nobelium was produced by bombarding uranium-238 with neon-22 as
_{92}^{238}U + _{10}^{22}Ne -> _{102}^{256}No + 4_0^1n.
The first isotopes of transplutonium elements, americium-241 and curium-242, were synthesized in 1944 by Glenn T. Seaborg, Ralph A. James and Albert Ghiorso. Curium-242 was obtained by bombarding plutonium-239 with 32-MeV α-particles
_{94}^{239}Pu + _2^4He -> _{96}^{242}Cm + _0^1n.
The americium-241 and curium-242 isotopes were also produced by irradiating plutonium in a nuclear reactor. The latter element was named after Marie Curie and her husband Pierre, who are noted for discovering radium and for their work in radioactivity.
Bombarding curium-242 with α-particles resulted in an isotope of californium 245Cf (1950), and a similar procedure yielded in 1949 berkelium-243 from americium-241. The new elements were named after Berkeley, California, by analogy with its lanthanide homologue terbium, which was named after the village of Ytterby in Sweden.
In 1945, B. B. Cunningham obtained the first bulk chemical compound of a transplutonium element, namely americium hydroxide. Over the following few years, milligram quantities of americium and microgram amounts of curium were accumulated, which allowed production of isotopes of berkelium (Thompson, 1949) and californium (Thompson, 1950). Sizeable amounts of these elements were produced in 1958 (Burris B. Cunningham and Stanley G. Thompson), and the first californium compound (0.3 µg of CfOCl) was obtained in 1960 by B. B. Cunningham and J. C. Wallmann.
Einsteinium and fermium were identified in 1952–1953 in the fallout from the "Ivy Mike" nuclear test (1 November 1952), the first successful test of a hydrogen bomb. Instantaneous exposure of uranium-238 to the large neutron flux resulting from the explosion produced heavy isotopes of uranium, including uranium-253 and uranium-255, and their β-decay yielded einsteinium-253 and fermium-255. The discovery of the new elements and the new data on neutron capture were initially kept secret on the orders of the US military until 1955 due to Cold War tensions. Nevertheless, the Berkeley team was able to prepare einsteinium and fermium by civilian means, through the neutron bombardment of plutonium-239, and published this work in 1954 with the disclaimer that these were not the first studies carried out on those elements. The "Ivy Mike" studies were declassified and published in 1955. The first significant (submicrogram) amounts of einsteinium were produced in 1961 by Cunningham and colleagues, but this has not yet been done for fermium.
The first isotope of mendelevium, 256Md (half-life 87 min), was synthesized by Albert Ghiorso, Glenn T. Seaborg, Gregory R. Choppin, Bernard G. Harvey and Stanley G. Thompson when they bombarded an 253Es target with alpha particles in the 60-inch cyclotron of Berkeley Radiation Laboratory; this was the first isotope of any element to be synthesized one atom at a time.
There were several attempts to obtain isotopes of nobelium by Swedish (1957) and American (1958) groups, but the first reliable result was the synthesis of 256No by the Russian group (Georgy Flyorov et al.) in 1965, as acknowledged by the IUPAC in 1992. In their experiments, Flyorov et al. bombarded uranium-238 with neon-22.
In 1961, Ghiorso et al. obtained the first isotope of lawrencium by irradiating californium (mostly californium-252) with boron-10 and boron-11 ions. The mass number of this isotope was not clearly established (possibly 258 or 259) at the time. In 1965, 256Lr was synthesized by Flyorov et al. from 243Am and 18O. Thus IUPAC recognized the nuclear physics teams at Dubna and Berkeley as the co-discoverers of lawrencium.
Isotopes
32 isotopes of actinium and eight excited isomeric states of some of its nuclides were identified by 2016. Three isotopes, 225Ac, 227Ac and 228Ac, were found in nature; the others were produced in the laboratory, and only the three natural isotopes are used in applications. Actinium-225 is a member of the radioactive neptunium series; first discovered in 1947 as a decay product of uranium-233, it is an α-emitter with a half-life of 10 days. Actinium-225 is less available than actinium-228, but is more promising in radiotracer applications. Actinium-227 (half-life 21.77 years) occurs in all uranium ores, but in small quantities: one gram of uranium (in radioactive equilibrium) contains only about 2×10−10 gram of 227Ac. Actinium-228 is a member of the radioactive thorium series formed by the decay of 228Ra; it is a β− emitter with a half-life of 6.15 hours. One tonne of thorium contains about 5×10−8 gram of 228Ac. It was discovered by Otto Hahn in 1906.
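The equilibrium content of 227Ac quoted above can be checked with a short calculation: in secular equilibrium the activities of 235U and its decay product 227Ac are equal, so their atom ratio equals the ratio of their half-lives. A minimal Python sketch (the half-lives and the ~0.72% 235U content of natural uranium are standard values; this is an order-of-magnitude estimate, not a precise assay):

# Secular equilibrium: activity(235U) = activity(227Ac),
# hence N_Ac / N_U235 = t_half(227Ac) / t_half(235U).
T_HALF_U235 = 7.04e8      # years
T_HALF_AC227 = 21.77      # years
U235_FRACTION = 0.0072    # approximate mass fraction of 235U in natural uranium

atom_ratio = T_HALF_AC227 / T_HALF_U235
mass_ratio = atom_ratio * 227.0 / 235.0   # convert atom ratio to mass ratio

g_ac_per_g_u = U235_FRACTION * mass_ratio
print(f"{g_ac_per_g_u:.1e} g of 227Ac per gram of natural uranium")
# prints ~2.2e-10, consistent with the ~2e-10 gram figure quoted above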
There are 31 known isotopes of thorium ranging in mass number from 208 to 238. Of these, the longest-lived is 232Th, whose half-life of about 1.4×1010 years means that it still exists in nature as a primordial nuclide. The next longest-lived is 230Th, an intermediate decay product of 238U with a half-life of 75,400 years. Several other thorium isotopes have half-lives over a day; all of these are also transient in the decay chains of 232Th, 235U, and 238U.
28 isotopes of protactinium are known with mass numbers 212–239, as well as three excited isomeric states. Only 231Pa and 234Pa have been found in nature. All the isotopes have short lifetimes, except for protactinium-231 (half-life 32,760 years). The most important isotopes are 231Pa and 233Pa; the latter is an intermediate product in obtaining uranium-233 and is the most affordable among artificial isotopes of protactinium. 233Pa has a convenient half-life and γ-radiation energy, and thus was used in most studies of protactinium chemistry. Protactinium-233 is a β-emitter with a half-life of 26.97 days.
There are 26 known isotopes of uranium, having mass numbers 215–242 (except 220 and 241). Three of them, 234U, 235U and 238U, are present in appreciable quantities in nature. Among others, the most important is 233U, which is a final product of transformation of 232Th irradiated by slow neutrons. 233U has a much higher fission efficiency by low-energy (thermal) neutrons, compared e.g. with 235U. Most uranium chemistry studies were carried out on uranium-238 owing to its long half-life of 4.4×109 years.
There are 24 isotopes of neptunium with mass numbers of 219, 220, and 223–244; they are all highly radioactive. The most popular among scientists are long-lived 237Np (t1/2 = 2.20×106 years) and short-lived 239Np and 238Np (t1/2 ~ 2 days).
There are 20 known isotopes of plutonium, having mass numbers 228–247. The most stable isotope of plutonium is 244Pu, with a half-life of 8.13×107 years.
Eighteen isotopes of americium are known with mass numbers from 229 to 247 (with the exception of 231). The most important are 241Am and 243Am, which are alpha-emitters and also emit soft, but intense γ-rays; both of them can be obtained in an isotopically pure form. Chemical properties of americium were first studied with 241Am, but later shifted to 243Am, which is almost 20 times less radioactive. The disadvantage of 243Am is production of the short-lived daughter isotope 239Np, which has to be considered in the data analysis.
Among 19 isotopes of curium, ranging in mass number from 233 to 251, the most accessible are 242Cm and 244Cm; they are α-emitters, but with much shorter lifetime than the americium isotopes. These isotopes emit almost no γ-radiation, but undergo spontaneous fission with the associated emission of neutrons. More long-lived isotopes of curium (245–248Cm, all α-emitters) are formed as a mixture during neutron irradiation of plutonium or americium. Upon short irradiation, this mixture is dominated by 246Cm, and then 248Cm begins to accumulate. Both of these isotopes, especially 248Cm, have a longer half-life (3.48×105 years for 248Cm) and are much more convenient for carrying out chemical research than 242Cm and 244Cm, but they also have a rather high rate of spontaneous fission. 247Cm has the longest lifetime among isotopes of curium (1.56×107 years), but is not formed in large quantities because of the strong fission induced by thermal neutrons.
Seventeen isotopes of berkelium were identified with mass numbers 233–234, 236, 238, and 240–252. Only 249Bk is available in large quantities; it has a relatively short half-life of 330 days and emits mostly soft β-particles, which are inconvenient for detection. Its alpha radiation is rather weak (1.45% with respect to β-radiation), but is sometimes used to detect this isotope. 247Bk is an alpha-emitter with a long half-life of 1,380 years, but it is hard to obtain in appreciable quantities; it is not formed upon neutron irradiation of plutonium because of the β-stability of curium isotopes with mass numbers below 248.
The 20 isotopes of californium with mass numbers 237–256 are formed in nuclear reactors; californium-253 is a β-emitter and the rest are α-emitters. The isotopes with even mass numbers (250Cf, 252Cf and 254Cf) have a high rate of spontaneous fission, especially 254Cf of which 99.7% decays by spontaneous fission. Californium-249 has a relatively long half-life (352 years), weak spontaneous fission and strong γ-emission that facilitates its identification. 249Cf is not formed in large quantities in a nuclear reactor because of the slow β-decay of the parent isotope 249Bk and a large cross section of interaction with neutrons, but it can be accumulated in the isotopically pure form as the β-decay product of (pre-selected) 249Bk. Californium produced by reactor-irradiation of plutonium mostly consists of 250Cf and 252Cf, the latter being predominant for large neutron fluences, and its study is hindered by the strong neutron radiation.
Among the 18 known isotopes of einsteinium with mass numbers from 240 to 257, the most affordable is 253Es. It is an α-emitter with a half-life of 20.47 days, a relatively weak γ-emission and small spontaneous fission rate as compared with the isotopes of californium. Prolonged neutron irradiation also produces a long-lived isotope 254Es (t1/2 = 275.5 days).
Twenty isotopes of fermium are known with mass numbers of 241–260. 254Fm, 255Fm and 256Fm are α-emitters with a short half-life (hours), which can be isolated in significant amounts. 257Fm (t1/2 = 100 days) can accumulate upon prolonged and strong irradiation. All these isotopes are characterized by high rates of spontaneous fission.
Among the 17 known isotopes of mendelevium (mass numbers from 244 to 260), the most studied is 256Md, which mainly decays through electron capture (α-radiation is ≈10%) with a half-life of 77 minutes. Another alpha emitter, 258Md, has a half-life of 53 days. Both these isotopes are produced from rare einsteinium (253Es and 255Es respectively), which therefore limits their availability.
Long-lived isotopes of nobelium and isotopes of lawrencium (and of heavier elements) have relatively short half-lives. For nobelium, 11 isotopes are known with mass numbers 250–260 and 262. The chemical properties of nobelium and lawrencium were studied with 255No (t1/2 = 3 min) and 256Lr (t1/2 = 35 s). The longest-lived nobelium isotope, 259No, has a half-life of approximately 1 hour. Lawrencium has 13 known isotopes with mass numbers 251–262 and 266. The most stable of them all is 266Lr, with a half-life of 11 hours.
Among all of these, the only isotopes that occur in sufficient quantities in nature to be detected in anything more than traces and have a measurable contribution to the atomic weights of the actinides are the primordial 232Th, 235U, and 238U, and three long-lived decay products of natural uranium, 230Th, 231Pa, and 234U. Natural thorium consists of 0.02(2)% 230Th and 99.98(2)% 232Th; natural protactinium consists of 100% 231Pa; and natural uranium consists of 0.0054(5)% 234U, 0.7204(6)% 235U, and 99.2742(10)% 238U.
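These abundances determine the standard atomic weight of natural uranium directly, as a weighted average of the isotopic masses. A minimal Python sketch (the isotope masses, in unified atomic mass units, are assumed from standard tables):

# Atomic weight = sum over isotopes of (isotopic mass x mole fraction).
URANIUM_ISOTOPES = {
    # isotope: (mass / u, mole fraction)
    "234U": (234.0410, 0.000054),
    "235U": (235.0439, 0.007204),
    "238U": (238.0508, 0.992742),
}
atomic_weight = sum(mass * frac for mass, frac in URANIUM_ISOTOPES.values())
print(f"atomic weight of natural uranium: {atomic_weight:.4f} u")
# prints ~238.0289, matching the accepted value of about 238.03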
Formation in nuclear reactors
The figure of actinide buildup is a table of nuclides with the number of neutrons on the horizontal axis (isotopes) and the number of protons on the vertical axis (elements). The red dot divides the nuclides into two groups, so the figure is more compact. Each nuclide is represented by a square with the mass number of the element and its half-life. Naturally existing actinide isotopes (Th, U) are marked with a bold border, alpha emitters have a yellow colour, and beta emitters have a blue colour. Pink indicates electron capture (236Np), whereas white stands for a long-lasting metastable state (242Am).
The formation of actinide nuclides is primarily characterised by:
Neutron capture reactions (n,γ), which are represented in the figure by a short right arrow.
The (n,2n) reactions and the less frequently occurring (γ,n) reactions are also taken into account, both of which are marked by a short left arrow.
Even more rarely and only triggered by fast neutrons, the (n,3n) reaction occurs, which is represented in the figure with one example, marked by a long left arrow.
In addition to these neutron- or gamma-induced nuclear reactions, the radioactive conversion of actinide nuclides also affects the nuclide inventory in a reactor. These decay types are marked in the figure by diagonal arrows. The beta-minus decay, marked with an arrow pointing up-left, plays a major role for the balance of the particle densities of the nuclides. Nuclides decaying by positron emission (beta-plus decay) or electron capture (ϵ) do not occur in a nuclear reactor except as products of knockout reactions; their decays are marked with arrows pointing down-right. Due to the long half-lives of the given nuclides, alpha decay plays almost no role in the formation and decay of the actinides in a power reactor, as the residence time of the nuclear fuel in the reactor core is rather short (a few years). Exceptions are the two relatively short-lived nuclides 242Cm (T1/2 = 163 d) and 236Pu (T1/2 = 2.9 y). Only for these two cases, the α decay is marked on the nuclide map by a long arrow pointing down-left. A few long-lived actinide isotopes, such as 244Pu and 250Cm, cannot be produced in reactors because neutron capture does not happen quickly enough to bypass the short-lived beta-decaying nuclides 243Pu and 249Cm; they can however be generated in nuclear explosions, which have much higher neutron fluxes.
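The competition between capture and decay described above can be made concrete for the most important chain, 238U(n,γ)239U → 239Np → 239Pu. A minimal Python sketch of the Bateman solution for a fixed initial amount of 239U (the half-lives of 23.45 minutes and 2.356 days are standard values; the continuous neutron-capture source and the very slow decay of 239Pu are deliberately ignored):

import math

LAM_U239 = math.log(2) / (23.45 / (60 * 24))  # decay constant of 239U, per day
LAM_NP239 = math.log(2) / 2.356               # decay constant of 239Np, per day

def chain(n0, t_days):
    """Bateman solution for 239U -> 239Np -> 239Pu (239Pu treated as stable)."""
    n_u = n0 * math.exp(-LAM_U239 * t_days)
    n_np = (n0 * LAM_U239 / (LAM_NP239 - LAM_U239)
            * (math.exp(-LAM_U239 * t_days) - math.exp(-LAM_NP239 * t_days)))
    n_pu = n0 - n_u - n_np  # conservation within the mass-239 chain
    return n_u, n_np, n_pu

for t in (0.1, 1.0, 10.0, 30.0):
    u, np_, pu = chain(1.0, t)
    print(f"t = {t:5.1f} d: 239U {u:.4f}  239Np {np_:.4f}  239Pu {pu:.4f}")
# after about a month essentially the whole chain has drained into 239Pu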
Distribution in nature
Thorium and uranium are the most abundant actinides in nature, with respective mass concentrations of 16 ppm and 4 ppm. Uranium mostly occurs in the Earth's crust as a mixture of its oxides in the mineral uraninite, which is also called pitchblende because of its black color. There are several dozen other uranium minerals, such as carnotite (KUO2VO4·3H2O) and autunite (Ca(UO2)2(PO4)2·nH2O). The isotopic composition of natural uranium is 238U (relative abundance 99.2742%), 235U (0.7204%) and 234U (0.0054%); of these, 238U has the longest half-life, 4.51×109 years. The worldwide production of uranium in 2009 amounted to 50,572 tonnes, of which 27.3% was mined in Kazakhstan. Other important uranium mining countries are Canada (20.1%), Australia (15.7%), Namibia (9.1%), Russia (7.0%), and Niger (6.4%).
The most abundant thorium minerals are thorianite (ThO2), thorite (ThSiO4) and monazite ((Ce,La,Nd,Th)PO4). Most thorium minerals contain uranium and vice versa, and all of them have a significant fraction of lanthanides. Rich deposits of thorium minerals are located in the United States (440,000 tonnes), Australia and India (~300,000 tonnes each) and Canada (~100,000 tonnes).
The abundance of actinium in the Earth's crust is only about 5×10−15%. Actinium is mostly present in uranium-containing minerals, but also in other minerals, though in much smaller quantities. The content of actinium in most natural objects corresponds to the isotopic equilibrium of the parent isotope 235U, and it is not affected by the weak Ac migration. Protactinium is more abundant (10−12%) in the Earth's crust than actinium. It was discovered in uranium ore in 1913 by Fajans and Göhring. Like actinium, the distribution of protactinium follows that of 235U.
The half-life of the longest-lived isotope of neptunium, 237Np, is negligible compared to the age of the Earth. Thus neptunium is present in nature only in negligible amounts, produced as intermediate decay products of other isotopes. Traces of plutonium in uranium minerals were first found in 1942, and the more systematic results on 239Pu are summarized in the table (no other plutonium isotopes could be detected in those samples). The upper limit of abundance of the longest-lived isotope of plutonium, 244Pu, is 3×10−20%. Plutonium could not be detected in samples of lunar soil. Owing to its scarcity in nature, most plutonium is produced synthetically.
Extraction
Owing to the low abundance of actinides, their extraction is a complex, multistep process. Fluorides of actinides are usually used because they are insoluble in water and can be easily separated with redox reactions. Fluorides are reduced with calcium, magnesium or barium, for example:

2 AnF3 + 3 Ca → 2 An + 3 CaF2
Among the actinides, thorium and uranium are the easiest to isolate. Thorium is extracted mostly from monazite: thorium pyrophosphate (ThP2O7) is reacted with nitric acid, and the produced thorium nitrate is treated with tributyl phosphate. Rare-earth impurities are separated by increasing the pH in sulfate solution.
In another extraction method, monazite is decomposed with a 45% aqueous solution of sodium hydroxide at 140 °C. Mixed metal hydroxides are extracted first, filtered at 80 °C, washed with water and dissolved with concentrated hydrochloric acid. Next, the acidic solution is neutralized with hydroxides to pH = 5.8, which results in precipitation of thorium hydroxide (Th(OH)4) contaminated with ~3% of rare-earth hydroxides; the rest of the rare-earth hydroxides remain in solution. Thorium hydroxide is dissolved in an inorganic acid and then purified from the rare-earth elements. An efficient method is the dissolution of thorium hydroxide in nitric acid, because the resulting solution can be purified by extraction with organic solvents:
Th(OH)4 + 4 HNO3 → Th(NO3)4 + 4 H2O
Metallic thorium is separated from the anhydrous oxide, chloride or fluoride by reacting it with calcium in an inert atmosphere:
ThO2 + 2 Ca → 2 CaO + Th
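As a worked example of this calciothermic reduction, the reagent and product masses per kilogram of thorium dioxide follow from simple stoichiometry. A minimal Python sketch (molar masses are rounded standard values):

# ThO2 + 2 Ca -> Th + 2 CaO: two moles of calcium per mole of dioxide.
M_THO2 = 232.04 + 2 * 16.00  # g/mol
M_CA = 40.08                 # g/mol
M_TH = 232.04                # g/mol

grams_tho2 = 1000.0
moles_tho2 = grams_tho2 / M_THO2
print(f"Ca required: {2 * moles_tho2 * M_CA:.0f} g")  # ~304 g
print(f"Th produced: {moles_tho2 * M_TH:.0f} g")      # ~879 g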
Sometimes thorium is extracted by electrolysis of a fluoride in a mixture of sodium and potassium chloride at 700–800 °C in a graphite crucible. Highly pure thorium can be extracted from its iodide with the crystal bar process.
Uranium is extracted from its ores in various ways. In one method, the ore is burned and then reacted with nitric acid to convert uranium into a dissolved state. Treating the solution with a solution of tributyl phosphate (TBP) in kerosene transforms uranium into an organic form UO2(NO3)2(TBP)2. The insoluble impurities are filtered and the uranium is extracted by reaction with hydroxides as (NH4)2U2O7 or with hydrogen peroxide as UO4·2H2O.
When the uranium ore is rich in such minerals as dolomite, magnesite, etc., those minerals consume much acid. In this case, the carbonate method is used for uranium extraction. Its main component is an aqueous solution of sodium carbonate, which converts uranium into a complex [UO2(CO3)3]4−, which is stable in aqueous solutions at low concentrations of hydroxide ions. The advantages of the sodium carbonate method are that the chemicals have low corrosivity (compared to nitrates) and that most non-uranium metals precipitate from the solution. The disadvantage is that tetravalent uranium compounds precipitate as well. Therefore, the uranium ore is treated with sodium carbonate at elevated temperature and under oxygen pressure:
2 UO2 + O2 + 6 CO32− + 2 H2O → 2 [UO2(CO3)3]4− + 4 OH−
This equation suggests that the best solvent for the uranium carbonate processing is a mixture of carbonate with bicarbonate. At high pH, this results in precipitation of diuranate, which is treated with hydrogen in the presence of nickel yielding an insoluble uranium tetracarbonate.
Another separation method uses polymeric resins as a polyelectrolyte. Ion exchange processes in the resins result in separation of uranium. Uranium from resins is washed with a solution of ammonium nitrate or nitric acid that yields uranyl nitrate, UO2(NO3)2·6H2O. When heated, it turns into UO3, which is converted to UO2 with hydrogen:
UO3 + H2 → UO2 + H2O
Reacting uranium dioxide with hydrofluoric acid changes it to uranium tetrafluoride, which yields uranium metal upon reaction with magnesium metal:
4 HF + UO2 → UF4 + 2 H2O
UF4 + 2 Mg → U + 2 MgF2
To extract plutonium, neutron-irradiated uranium is dissolved in nitric acid, and a reducing agent (FeSO4 or H2O2) is added to the resulting solution. This addition changes the oxidation state of plutonium from +6 to +4, while uranium remains in the form of uranyl nitrate (UO2(NO3)2). The solution is treated with a reducing agent and neutralized with ammonium carbonate to pH = 8, which results in precipitation of Pu4+ compounds.
In another method, Pu4+ and are first extracted with tributyl phosphate, then reacted with hydrazine washing out the recovered plutonium.
The major difficulty in separation of actinium is the similarity of its properties with those of lanthanum. Thus actinium is either synthesized in nuclear reactions from isotopes of radium or separated using ion-exchange procedures.
Properties
Actinides have similar properties to lanthanides. The 6d and 7s electronic shells are filled in actinium and thorium, and the 5f shell is being filled with further increase in atomic number; the 4f shell is filled in the lanthanides. The first experimental evidence for the filling of the 5f shell in actinides was obtained by McMillan and Abelson in 1940. As in lanthanides (see lanthanide contraction), the ionic radius of actinides monotonically decreases with atomic number (see also Aufbau principle).
Physical properties
Actinides are typical metals. All of them are soft and have a silvery color (but tarnish in air), relatively high density and plasticity. Some of them can be cut with a knife. Their electrical resistivity varies between 15 and 150 µΩ·cm. The hardness of thorium is similar to that of soft steel, so heated pure thorium can be rolled in sheets and pulled into wire. Thorium is nearly half as dense as uranium and plutonium, but is harder than either of them. All actinides are radioactive, paramagnetic, and, with the exception of actinium, have several crystalline phases: plutonium has seven, and uranium, neptunium and californium three. The crystal structures of protactinium, uranium, neptunium and plutonium do not have clear analogs among the lanthanides and are more similar to those of the 3d-transition metals.
All actinides are pyrophoric, especially when finely divided, that is, they spontaneously ignite upon reaction with air at room temperature. The melting point of actinides does not have a clear dependence on the number of f-electrons. The unusually low melting point of neptunium and plutonium (~640 °C) is explained by hybridization of 5f and 6d orbitals and the formation of directional bonds in these metals.
Chemical properties
Like the lanthanides, all actinides are highly reactive with halogens and chalcogens; however, the actinides react more easily. Actinides, especially those with a small number of 5f-electrons, are prone to hybridization. This is explained by the similarity of the electron energies at the 5f, 7s and 6d shells. Most actinides exhibit a larger variety of valence states, and the most stable are +6 for uranium, +5 for protactinium and neptunium, +4 for thorium and plutonium and +3 for actinium and other actinides.
Actinium is chemically similar to lanthanum, which is explained by their similar ionic radii and electronic structures. Like lanthanum, actinium almost always has an oxidation state of +3 in compounds, but it is less reactive and has more pronounced basic properties. Among other trivalent actinides Ac3+ is least acidic, i.e. has the weakest tendency to hydrolyze in aqueous solutions.
Thorium is rather active chemically. Owing to lack of electrons on 6d and 5f orbitals, the tetravalent thorium compounds are colorless. At pH < 3, the solutions of thorium salts are dominated by the cations [Th(H2O)8]4+. The Th4+ ion is relatively large, and depending on the coordination number can have a radius between 0.95 and 1.14 Å. As a result, thorium salts have a weak tendency to hydrolyse. The distinctive ability of thorium salts is their high solubility both in water and polar organic solvents.
Protactinium exhibits two valence states; the +5 is stable, and the +4 state easily oxidizes to protactinium(V). Thus tetravalent protactinium in solutions is obtained by the action of strong reducing agents in a hydrogen atmosphere. Tetravalent protactinium is chemically similar to uranium(IV) and thorium(IV). Fluorides, phosphates, hypophosphate, iodate and phenylarsonates of protactinium(IV) are insoluble in water and dilute acids. Protactinium forms soluble carbonates. The hydrolytic properties of pentavalent protactinium are close to those of tantalum(V) and niobium(V). The complex chemical behavior of protactinium is a consequence of the start of the filling of the 5f shell in this element.
Uranium has a valence from 3 to 6, the last being most stable. In the hexavalent state, uranium is very similar to the group 6 elements. Many compounds of uranium(IV) and uranium(VI) are non-stoichiometric, i.e. have variable composition. For example, the actual chemical formula of uranium dioxide is UO2+x, where x varies between −0.4 and 0.32. Uranium(VI) compounds are weak oxidants. Most of them contain the linear "uranyl" group, . Between 4 and 6 ligands can be accommodated in an equatorial plane perpendicular to the uranyl group. The uranyl group acts as a hard acid and forms stronger complexes with oxygen-donor ligands than with nitrogen-donor ligands. and are also the common form of Np and Pu in the +6 oxidation state. Uranium(IV) compounds exhibit reducing properties, e.g., they are easily oxidized by atmospheric oxygen. Uranium(III) is a very strong reducing agent. Owing to the presence of d-shell, uranium (as well as many other actinides) forms organometallic compounds, such as UIII(C5H5)3 and UIV(C5H5)4.
Neptunium has valence states from 3 to 7, which can be simultaneously observed in solutions. The most stable state in solution is +5, but the valence +4 is preferred in solid neptunium compounds. Neptunium metal is very reactive. Ions of neptunium are prone to hydrolysis and formation of coordination compounds.
Plutonium also exhibits valence states between 3 and 7 inclusive, and thus is chemically similar to neptunium and uranium. It is highly reactive, and quickly forms an oxide film in air. Plutonium reacts with hydrogen even at temperatures as low as 25–50 °C; it also easily forms halides and intermetallic compounds. Hydrolysis reactions of plutonium ions of different oxidation states are quite diverse. Plutonium(V) can enter polymerization reactions.
The largest chemical diversity among actinides is observed in americium, which can have valence between 2 and 6. Divalent americium is obtained only in dry compounds and non-aqueous solutions (acetonitrile). Oxidation states +3, +5 and +6 are typical for aqueous solutions, but also in the solid state. Tetravalent americium forms stable solid compounds (dioxide, fluoride and hydroxide) as well as complexes in aqueous solutions. It was reported that in alkaline solution americium can be oxidized to the heptavalent state, but these data proved erroneous. The most stable valence of americium is 3 in the aqueous solutions and 3 or 4 in solid compounds.
Valence 3 is dominant in all subsequent elements up to lawrencium (with the exception of nobelium). Curium can be tetravalent in solids (fluoride, dioxide). Berkelium, along with a valence of +3, also shows the valence of +4, more stable than that of curium; the valence 4 is observed in solid fluoride and dioxide. The stability of Bk4+ in aqueous solution is close to that of Ce4+. Only valence 3 was observed for californium, einsteinium and fermium. The divalent state is proven for mendelevium and nobelium, and in nobelium it is more stable than the trivalent state. Lawrencium shows valence 3 both in solutions and solids.
The redox potential E(An4+/AnO22+) increases from −0.32 V for uranium, through 0.34 V (Np) and 1.04 V (Pu), to 1.34 V for americium, revealing the increasing reducing ability of the An4+ ion from americium to uranium. All actinides form AnH3 hydrides of black color with salt-like properties. Actinides also produce carbides with the general formula of AnC or AnC2 (U2C3 for uranium) as well as sulfides An2S3 and AnS2.
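These potentials translate into half-reaction free energies through ΔG = −nFE. A minimal Python sketch, assuming the quoted values refer to the two-electron AnO22+/An4+ couple:

# dG = -n*F*E for the reduction AnO2^2+ + 2 e- -> An^4+ (water/protons omitted)
F_CONST = 96485.0  # Faraday constant, C/mol
N_E = 2            # electrons transferred between An(VI) and An(IV)

for an, e_volts in [("U", -0.32), ("Np", 0.34), ("Pu", 1.04), ("Am", 1.34)]:
    dg_kj = -N_E * F_CONST * e_volts / 1000.0
    print(f"{an}: E = {e_volts:+.2f} V -> dG = {dg_kj:+7.1f} kJ/mol")
# U:  +61.8 kJ/mol (UO2^2+ hard to reduce, so U^4+ is easily oxidized)
# Am: -258.6 kJ/mol (AmO2^2+ easily reduced, so Am^4+ resists oxidation)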
Compounds
Oxides and hydroxides
Some actinides can exist in several oxide forms, such as An2O3, AnO2, An2O5 and AnO3. For all actinides, the oxides AnO3 are amphoteric, while An2O3, AnO2 and An2O5 are basic; the basic oxides easily react with water, forming bases:
An2O3 + 3 H2O → 2 An(OH)3.
These bases are poorly soluble in water and by their activity are close to the hydroxides of rare-earth metals.
Np(OH)3 has not yet been synthesized; Pu(OH)3 has a blue color, while Am(OH)3 is pink and curium hydroxide Cm(OH)3 is colorless. Bk(OH)3 and Cf(OH)3 are also known, as are tetravalent hydroxides for Np, Pu and Am, and pentavalent ones for Np and Am.
The strongest base is that of actinium. All compounds of actinium are colorless, except for black actinium sulfide (Ac2S3). Dioxides of tetravalent actinides crystallize in the cubic system, in the same structure type as calcium fluoride.
Thorium reacting with oxygen exclusively forms the dioxide:
Th + O2 →[1000 °C] ThO2 (thorium dioxide)
Thorium dioxide is a refractory material with the highest melting point among known oxides (3390 °C). Adding 0.8–1% ThO2 to tungsten stabilizes its structure, so doped filaments have better mechanical stability against vibrations. To dissolve ThO2 in acids, it is heated to 500–600 °C; heating above 600 °C produces a form of ThO2 that is very resistant to acids and other reagents. A small addition of fluoride ions catalyses the dissolution of thorium dioxide in acids.
Two protactinium oxides have been obtained: PaO2 (black) and Pa2O5 (white); the former is isomorphic with ThO2 and the latter is easier to obtain. Both oxides are basic, and Pa(OH)5 is a weak, poorly soluble base.
Decomposition of certain salts of uranium, for example UO2(NO3)2·6H2O in air at 400 °C, yields orange or yellow UO3. This oxide is amphoteric and forms several hydroxides, the most stable being uranyl hydroxide UO2(OH)2. Reaction of uranium(VI) oxide with hydrogen results in uranium dioxide, which is similar in its properties to ThO2. This oxide is also basic and corresponds to the uranium hydroxide U(OH)4.
Plutonium, neptunium and americium form two basic oxides each: An2O3 and AnO2. Neptunium trioxide is unstable; thus, only Np3O8 could be obtained so far. However, the oxides of plutonium and neptunium with the chemical formulas AnO2 and An2O3 are well characterized.
Salts
Actinides easily react with halogens, forming salts with the formulas MX3 and MX4 (X = halogen). The first berkelium compound, BkCl3, was synthesized in 1962 in an amount of 3 nanograms. Like the halides of rare-earth elements, actinide chlorides, bromides, and iodides are water-soluble, while the fluorides are insoluble. Uranium easily yields a colorless hexafluoride, which sublimates at 56.5 °C; because of its volatility, it is used in the separation of uranium isotopes by gas centrifuge or gaseous diffusion. Actinide hexafluorides have properties close to anhydrides. They are very sensitive to moisture and hydrolyze, forming AnO2F2. The pentachloride and black hexachloride of uranium have been synthesized, but both are unstable.
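The value of the volatile hexafluoride for isotope separation can be quantified: for ideal gaseous diffusion, Graham's law gives a single-stage separation factor equal to the square root of the ratio of the molecular masses of 238UF6 and 235UF6. A minimal Python check (isotope masses assumed from standard tables):

import math

M_F19 = 18.998
M_U235, M_U238 = 235.044, 238.051

m_light = M_U235 + 6 * M_F19  # 235UF6
m_heavy = M_U238 + 6 * M_F19  # 238UF6

alpha = math.sqrt(m_heavy / m_light)
print(f"ideal single-stage separation factor: {alpha:.5f}")  # ~1.0043

The tiny enrichment per stage (about 0.4%) is why diffusion plants had to cascade thousands of stages, and why the centrifuge, with a much larger per-stage factor, eventually displaced diffusion.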
Action of acids on actinides yields salts; if the acids are non-oxidizing, the actinide in the salt is in a low-valence state:
U + 2 H2SO4 → U(SO4)2 + 2 H2
2 Pu + 6 HCl → 2 PuCl3 + 3 H2
However, in these reactions the hydrogen generated can react with the metal, forming the corresponding hydride. Uranium reacts with acids and water much more easily than thorium.
Actinide salts can also be obtained by dissolving the corresponding hydroxides in acids. Nitrates, chlorides, sulfates and perchlorates of actinides are water-soluble. When crystallizing from aqueous solutions, these salts form hydrates, such as Th(NO3)4·6H2O, Th(SO4)2·9H2O and Pu2(SO4)3·7H2O. Salts of high-valence actinides hydrolyze easily; thus, the colorless sulfate, chloride, perchlorate and nitrate of thorium transform into basic salts with formulas such as Th(OH)2SO4 and Th(OH)3NO3. The solubility behaviour of trivalent and tetravalent actinides is like that of lanthanide salts: phosphates, fluorides, oxalates, iodates and carbonates of actinides are weakly soluble in water, and they precipitate as hydrates, such as ThF4·3H2O and Th(CrO4)2·3H2O.
Actinides with oxidation state +6, in addition to the AnO22+-type cations, form [AnO4]2−, [An2O7]2− and other complex anions. For example, uranium, neptunium and plutonium form salts of the Na2UO4 (uranate) and (NH4)2U2O7 (diuranate) types. In comparison with lanthanides, actinides more easily form coordination compounds, and this ability increases with the actinide valence. Trivalent actinides do not form fluoride coordination compounds, whereas tetravalent thorium forms K2ThF6, KThF5, and even K5ThF9 complexes. Thorium also forms the corresponding sulfates (for example Na2SO4·Th(SO4)2·5H2O), nitrates and thiocyanates. Salts with the general formula An2Th(NO3)6·nH2O are of coordination nature, with the coordination number of thorium equal to 12. Complex salts of pentavalent and hexavalent actinides are even easier to produce. The most stable coordination compounds of actinides – those of tetravalent thorium and uranium – are obtained in reactions with diketones such as acetylacetone.
Applications
While actinides have some established daily-life applications, such as in smoke detectors (americium) and gas mantles (thorium), they are mostly used in nuclear weapons and as fuel in nuclear reactors. The last two areas exploit the property of actinides to release enormous energy in nuclear reactions, which under certain conditions may become self-sustaining chain reactions.
The most important isotope for nuclear power applications is uranium-235. It is used in thermal reactors, and its concentration in natural uranium does not exceed 0.72%. This isotope strongly absorbs thermal neutrons, releasing much energy: the fission of 1 gram of 235U releases about 1 MW·day of energy. Importantly, 235U emits more neutrons than it absorbs; upon reaching the critical mass, it enters into a self-sustaining chain reaction. Typically, the uranium nucleus divides into two fragments with the release of 2–3 neutrons, for example:
235U + n → 141Ba + 92Kr + 3 n
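The figure of roughly 1 MW·day per gram can be checked with a short back-of-the-envelope calculation (a sketch assuming the standard textbook value of about 200 MeV released per fission):

```python
# Energy from complete fission of 1 g of U-235, assuming ~200 MeV per fission.
AVOGADRO = 6.022e23      # atoms/mol
MEV_TO_J = 1.602e-13     # joules per MeV

atoms_per_gram = AVOGADRO / 235.0
energy_J = atoms_per_gram * 200.0 * MEV_TO_J     # ~8.2e10 J
mw_day_J = 1e6 * 86400.0                         # 1 MW·day = 8.64e10 J

print(f"{energy_J:.2e} J = {energy_J / mw_day_J:.2f} MW·day")  # ~0.95 MW·day
```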
Other promising actinide isotopes for nuclear power are thorium-232 and its product from the thorium fuel cycle, uranium-233.
Emission of neutrons during the fission of uranium is important not only for maintaining the nuclear chain reaction, but also for the synthesis of the heavier actinides. Neutron capture converts uranium-238 into uranium-239, which transforms via two successive β-decays (through neptunium-239) into plutonium-239; like uranium-235, plutonium-239 is fissile and capable of sustaining a chain reaction. The world's first nuclear reactors were built not for energy, but for producing plutonium-239 for nuclear weapons.
About half of the thorium produced is used as the light-emitting material of gas mantles. Thorium is also added to multicomponent alloys of magnesium and zinc; Mg-Th alloys are light and strong and have a high melting point and good ductility, and are thus widely used in the aviation industry and in the production of missiles. Thorium also has good electron-emission properties, with a long lifetime and a low potential barrier for the emission. The relative content of thorium and uranium isotopes is widely used to estimate the age of various objects, including stars (see radiometric dating).
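The decay-clock arithmetic behind such age estimates can be sketched with the generic parent–daughter age equation (the half-life and daughter-to-parent ratio below are illustrative values, not data from the text):

```python
import math

def radiometric_age(daughter_to_parent: float, half_life_years: float) -> float:
    """Age from t = ln(1 + D/P) / lambda, assuming no initial daughter."""
    decay_const = math.log(2) / half_life_years
    return math.log(1.0 + daughter_to_parent) / decay_const

# Illustrative only: a U-238 clock (t1/2 ~ 4.47e9 yr) with a measured D/P of 0.5
print(f"{radiometric_age(0.5, 4.47e9):.2e} years")   # ~2.6e9 years
```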
The major application of plutonium has been in nuclear weapons, where the isotope plutonium-239 was a key component due to its ease of fission and availability. Plutonium-based designs allow reducing the critical mass to about a third of that for uranium-235. The "Fat Man"-type plutonium bombs produced during the Manhattan Project used explosive compression of plutonium to obtain significantly higher densities than normal, combined with a central neutron source to begin the reaction and increase efficiency. Thus only 6.2 kg of plutonium was needed for an explosive yield equivalent to 20 kilotons of TNT. (See also Nuclear weapon design.) Hypothetically, as little as 4 kg of plutonium—and maybe even less—could be used to make a single atomic bomb using very sophisticated assembly designs.
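As a rough cross-check of these figures (a sketch assuming ~200 MeV per fission and 4.184×10^12 J per kiloton of TNT; the implied fission fraction is an inference, not a number from the text):

```python
AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13
KT_TNT_J = 4.184e12      # joules per kiloton of TNT

# Energy if 1 kg of Pu-239 fissioned completely, at ~200 MeV per fission:
per_kg_kt = (1000.0 / 239.0) * AVOGADRO * 200.0 * MEV_TO_J / KT_TNT_J

# A 20 kt yield from a 6.2 kg pit therefore implies partial fission only:
fission_fraction = 20.0 / (6.2 * per_kg_kt)
print(f"~{per_kg_kt:.1f} kt/kg; implied fission fraction ~{fission_fraction:.0%}")
```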
Plutonium-238 is potentially a more efficient isotope for nuclear reactors, since it has a smaller critical mass than uranium-235, but it continues to release much thermal energy (0.56 W/g) by decay even when the fission chain reaction is stopped by control rods. Its application is limited by its high price (about US$1000/g). This isotope has been used in thermopiles and water distillation systems of some space satellites and stations. For example, the Galileo and Apollo spacecraft (e.g. Apollo 14) carried heaters powered by kilogram quantities of plutonium-238 oxide; this heat is also transformed into electricity with thermopiles. The decay of plutonium-238 produces relatively harmless alpha particles and is not accompanied by gamma-irradiation. Therefore, this isotope (~160 mg) has been used as the energy source in heart pacemakers, where it lasts about 5 times longer than conventional batteries.
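The quoted 0.56 W/g follows almost directly from the decay data (a sketch using the standard half-life of 87.7 years and a total decay energy of about 5.59 MeV per alpha decay):

```python
import math

AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13
YEAR_S = 3.156e7                              # seconds per year

decay_const = math.log(2) / (87.7 * YEAR_S)   # Pu-238: t1/2 ~ 87.7 yr
atoms_per_g = AVOGADRO / 238.0

activity_Bq = decay_const * atoms_per_g       # decays per second per gram
power_W = activity_Bq * 5.59 * MEV_TO_J       # ~5.59 MeV total per decay
print(f"{power_W:.2f} W/g")                   # ~0.57 W/g, matching 0.56 W/g
```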
Actinium-227 is used as a neutron source. Its high specific energy (14.5 W/g) and the possibility of obtaining significant quantities of thermally stable compounds are attractive for use in long-lasting thermoelectric generators for remote use. 228Ac is used as an indicator of radioactivity in chemical research, as it emits high-energy electrons (2.18 MeV) that can be easily detected. 228Ac-228Ra mixtures are widely used as an intense gamma-source in industry and medicine.
Development of self-glowing actinide-doped materials with durable crystalline matrices is a new area of actinide utilization, as the addition of alpha-emitting radionuclides to some glasses and crystals may confer luminescence.
Toxicity
Radioactive substances can harm human health via (i) local skin contamination, (ii) internal exposure due to ingestion of radioactive isotopes, and (iii) external overexposure by β-activity and γ-radiation. Together with radium and the transuranium elements, actinium is one of the most dangerous radioactive poisons, with high specific α-activity. The most important feature of actinium is its ability to accumulate and remain in the surface layer of the skeleton. At the initial stage of poisoning, actinium accumulates in the liver. Another danger of actinium is that it undergoes radioactive decay faster than it is excreted. Absorption from the digestive tract is much smaller (~0.05%) for actinium than for radium.
Protactinium in the body tends to accumulate in the kidneys and bones. The maximum safe dose of protactinium in the human body is 0.03 µCi, which corresponds to 0.5 micrograms of 231Pa. This isotope, which may be present in the air as an aerosol, is 2.5 times more toxic than hydrocyanic acid.
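The correspondence between 0.03 µCi and 0.5 µg can be verified from the half-life of 231Pa (a sketch using the standard value of about 32,760 years; the result agrees with the quoted limit to within the rounding of those figures):

```python
import math

AVOGADRO = 6.022e23
YEAR_S = 3.156e7            # seconds per year
BQ_PER_UCI = 3.7e4          # becquerels per microcurie

decay_const = math.log(2) / (32760 * YEAR_S)     # Pa-231: t1/2 ~ 32,760 yr
atoms = 0.5e-6 / 231.0 * AVOGADRO                # atoms in 0.5 ug of Pa-231

activity_uCi = decay_const * atoms / BQ_PER_UCI
print(f"{activity_uCi:.3f} uCi")                 # ~0.02 uCi vs the quoted 0.03
```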
Plutonium, when entering the body through air, food or blood (e.g. through a wound), mostly settles in the lungs, liver and bones, with only about 10% going to other organs, and remains there for decades. The long residence time of plutonium in the body is partly explained by its poor solubility in water. Some isotopes of plutonium emit ionizing α-radiation, which damages the surrounding cells. The median lethal dose (LD50) over 30 days in dogs after intravenous injection of plutonium is 0.32 milligrams per kg of body mass; the lethal dose for humans is thus approximately 22 mg for a person weighing 70 kg, and the amount for respiratory exposure should be approximately four times greater. Another estimate assumes that plutonium is 50 times less toxic than radium, so that the permissible content of plutonium in the body should be 5 µg or 0.3 µCi; such an amount is nearly invisible under a microscope. After trials on animals, this maximum permissible dose was reduced to 0.65 µg or 0.04 µCi. Studies on animals also revealed that the most dangerous plutonium exposure route is through inhalation, after which 5–25% of the inhaled substance is retained in the body. Depending on the particle size and solubility of the plutonium compounds, plutonium is localized either in the lungs or in the lymphatic system, or is absorbed into the blood and then transported to the liver and bones. Contamination via food is the least likely route: in this case, only about 0.05% of soluble and 0.01% of insoluble plutonium compounds are absorbed into the blood, and the rest is excreted. Exposure of damaged skin to plutonium, however, would retain nearly 100% of it.
Using actinides in nuclear fuel, sealed radioactive sources or advanced materials such as self-glowing crystals has many potential benefits. However, a serious concern is the extremely high radiotoxicity of actinides and their migration in the environment. The use of chemically unstable forms of actinides in MOX fuel and sealed radioactive sources is not appropriate by modern safety standards. A challenge is therefore to develop stable and durable actinide-bearing materials that provide safe storage, use and final disposal; a key need is the application of actinide solid solutions in durable crystalline host phases.
See also
Actinides in the environment
Lanthanides
Major actinides
Minor actinides
Transuranics
Notes
References
Bibliography
External links
Lawrence Berkeley Laboratory image of historic periodic table by Seaborg showing actinide series for the first time
Lawrence Livermore National Laboratory, Uncovering the Secrets of the Actinides
Los Alamos National Laboratory, Actinide Research Quarterly
|
https://en.wikipedia.org/wiki/Alkaloid
|
Alkaloids are a class of basic, naturally occurring organic compounds that contain at least one nitrogen atom. This group also includes some related compounds with neutral and even weakly acidic properties. Some synthetic compounds of similar structure may also be termed alkaloids. In addition to carbon, hydrogen and nitrogen, alkaloids may also contain oxygen or sulfur. More rarely still, they may contain elements such as phosphorus, chlorine, and bromine.
Alkaloids are produced by a large variety of organisms including bacteria, fungi, plants, and animals. They can be purified from crude extracts of these organisms by acid-base extraction, or solvent extractions followed by silica-gel column chromatography. Alkaloids have a wide range of pharmacological activities including antimalarial (e.g. quinine), antiasthma (e.g. ephedrine), anticancer (e.g. homoharringtonine), cholinomimetic (e.g. galantamine), vasodilatory (e.g. vincamine), antiarrhythmic (e.g. quinidine), analgesic (e.g. morphine), antibacterial (e.g. chelerythrine), and antihyperglycemic activities (e.g. piperine). Many have found use in traditional or modern medicine, or as starting points for drug discovery. Other alkaloids possess psychotropic (e.g. psilocin) and stimulant activities (e.g. cocaine, caffeine, nicotine, theobromine), and have been used in entheogenic rituals or as recreational drugs. Alkaloids can be toxic too (e.g. atropine, tubocurarine). Although alkaloids act on a diversity of metabolic systems in humans and other animals, they almost uniformly evoke a bitter taste.
The boundary between alkaloids and other nitrogen-containing natural compounds is not clear-cut. Compounds like amino acids, peptides, proteins, nucleotides, nucleic acids, amines, and antibiotics are usually not called alkaloids. Natural compounds containing nitrogen in an exocyclic position (mescaline, serotonin, dopamine, etc.) are usually classified as amines rather than as alkaloids. Some authors, however, consider alkaloids a special case of amines.
Naming
The name "alkaloids" () was introduced in 1819 by German chemist Carl Friedrich Wilhelm Meissner, and is derived from late Latin root and the Greek-language suffix -('like'). However, the term came into wide use only after the publication of a review article, by Oscar Jacobsen in the chemical dictionary of Albert Ladenburg in the 1880s.
There is no unique method for naming alkaloids. Many individual names are formed by adding the suffix "ine" to the species or genus name. For example, atropine is isolated from the plant Atropa belladonna; strychnine is obtained from the seed of the strychnine tree (Strychnos nux-vomica L.). Where several alkaloids are extracted from one plant, their names are often distinguished by variations in the suffix: "idine", "anine", "aline", "inine", etc. There are also at least 86 alkaloids whose names contain the root "vin" because they are extracted from vinca plants such as Vinca rosea (Catharanthus roseus); these are called vinca alkaloids.
History
Alkaloid-containing plants have been used by humans since ancient times for therapeutic and recreational purposes. For example, medicinal plants have been known in Mesopotamia from about 2000 BC. The Odyssey of Homer referred to a gift given to Helen by the Egyptian queen, a drug bringing oblivion. It is believed that the gift was an opium-containing drug. A Chinese book on houseplants written in 1st–3rd centuries BC mentioned a medical use of ephedra and opium poppies. Also, coca leaves have been used by Indigenous South Americans since ancient times.
Extracts from plants containing toxic alkaloids, such as aconitine and tubocurarine, were used since antiquity for poisoning arrows.
Studies of alkaloids began in the 19th century. In 1804, the German chemist Friedrich Sertürner isolated from opium a "soporific principle", which he called "morphium" in reference to Morpheus, the Greek god of dreams; in German and some other Central European languages, this is still the name of the drug. The term "morphine", used in English and French, was given by the French physicist Joseph Louis Gay-Lussac.
A significant contribution to the chemistry of alkaloids in the early years of its development was made by the French researchers Pierre Joseph Pelletier and Joseph Bienaimé Caventou, who discovered quinine (1820) and strychnine (1818). Several other alkaloids were discovered around that time, including xanthine (1817), atropine (1819), caffeine (1820), coniine (1827), nicotine (1828), colchicine (1833), sparteine (1851), and cocaine (1860). The development of the chemistry of alkaloids was accelerated by the emergence of spectroscopic and chromatographic methods in the 20th century, so that by 2008 more than 12,000 alkaloids had been identified.
The first complete synthesis of an alkaloid was achieved in 1886 by the German chemist Albert Ladenburg. He produced coniine by reacting 2-methylpyridine with acetaldehyde and reducing the resulting 2-propenyl pyridine with sodium.
Classifications
Compared with most other classes of natural compounds, alkaloids are characterized by a great structural diversity. There is no uniform classification. Initially, when knowledge of chemical structures was lacking, botanical classification of the source plants was relied on. This classification is now considered obsolete.
More recent classifications are based on similarity of the carbon skeleton (e.g., indole-, isoquinoline-, and pyridine-like) or biochemical precursor (ornithine, lysine, tyrosine, tryptophan, etc.). However, they require compromises in borderline cases; for example, nicotine contains a pyridine fragment from nicotinamide and a pyrrolidine part from ornithine and therefore can be assigned to both classes.
Alkaloids are often divided into the following major groups:
"True alkaloids" contain nitrogen in the heterocycle and originate from amino acids. Their characteristic examples are atropine, nicotine, and morphine. This group also includes some alkaloids that besides the nitrogen heterocycle contain terpene (e.g., evonine) or peptide fragments (e.g. ergotamine). The piperidine alkaloids coniine and coniceine may be regarded as true alkaloids (rather than pseudoalkaloids: see below) although they do not originate from amino acids.
"Protoalkaloids", which contain nitrogen (but not the nitrogen heterocycle) and also originate from amino acids. Examples include mescaline, adrenaline and ephedrine.
Polyamine alkaloids – derivatives of putrescine, spermidine, and spermine.
Peptide and cyclopeptide alkaloids.
Pseudoalkaloids – alkaloid-like compounds that do not originate from amino acids. This group includes terpene-like and steroid-like alkaloids, as well as purine-like alkaloids such as caffeine, theobromine, theacrine and theophylline. Some authors classify ephedrine and cathinone as pseudoalkaloids. Those originate from the amino acid phenylalanine, but acquire their nitrogen atom not from the amino acid but through transamination.
Some alkaloids do not have the carbon skeleton characteristic of their group. Thus, galanthamine and homoaporphines do not contain an isoquinoline fragment, but are in general attributed to the isoquinoline alkaloids.
Main classes of monomeric alkaloids are listed in the table below:
Properties
Most alkaloids contain oxygen in their molecular structure; those compounds are usually colorless crystals at ambient conditions. Oxygen-free alkaloids, such as nicotine or coniine, are typically volatile, colorless, oily liquids. Some alkaloids are colored, like berberine (yellow) and sanguinarine (orange).
Most alkaloids are weak bases, but some, such as theobromine and theophylline, are amphoteric. Many alkaloids dissolve poorly in water but readily dissolve in organic solvents, such as diethyl ether, chloroform or 1,2-dichloroethane. Caffeine, cocaine, codeine and nicotine are slightly soluble in water (with a solubility of ≥1 g/L), whereas others, including morphine and yohimbine, are very slightly water-soluble (0.1–1 g/L). Alkaloids and acids form salts of various strengths. These salts are usually freely soluble in water and ethanol and poorly soluble in most organic solvents. Exceptions include scopolamine hydrobromide, which is soluble in organic solvents, and the water-soluble quinine sulfate.
Most alkaloids have a bitter taste or are poisonous when ingested. Alkaloid production in plants appears to have evolved in response to feeding by herbivorous animals; however, some animals have evolved the ability to detoxify alkaloids. Some alkaloids can produce developmental defects in the offspring of animals that consume but cannot detoxify the alkaloids. One example is the alkaloid cyclopamine, produced in the leaves of corn lily. During the 1950s, up to 25% of lambs born to sheep that had grazed on corn lily had serious facial deformations, ranging from deformed jaws to cyclopia. After decades of research, in the 1980s, the compound responsible for these deformities was identified as the alkaloid 11-deoxyjervine, later renamed cyclopamine.
Distribution in nature
Alkaloids are generated by various living organisms, especially by higher plants – about 10 to 25% of those contain alkaloids. Therefore, in the past the term "alkaloid" was associated with plants.
The alkaloid content of plants is usually within a few percent and is inhomogeneous across the plant tissues. Depending on the type of plant, the maximum concentration is observed in the leaves (for example, black henbane), fruits or seeds (strychnine tree), roots (Rauvolfia serpentina) or bark (cinchona). Furthermore, different tissues of the same plant may contain different alkaloids.
Besides plants, alkaloids are found in certain types of fungi, such as psilocybin in the fruiting bodies of the genus Psilocybe, and in animals, such as bufotenin in the skin of some toads and in a number of insects, notably ants. Many marine organisms also contain alkaloids. Some amines, such as adrenaline and serotonin, which play an important role in higher animals, are similar to alkaloids in their structure and biosynthesis and are sometimes called alkaloids.
Extraction
Because of the structural diversity of alkaloids, there is no single method of their extraction from natural raw materials. Most methods exploit the property of most alkaloids to be soluble in organic solvents but not in water, and the opposite tendency of their salts.
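The pH dependence that these schemes exploit can be made concrete with the Henderson–Hasselbalch relation (a sketch; the pKa of 8.0 is a made-up value for a generic weak alkaloid base, not a figure from the text):

```python
# Fraction of a weak alkaloid base that is protonated (water-soluble salt)
# versus neutral (organic-soluble free base) at a given pH.
def protonated_fraction(pH: float, pKa: float) -> float:
    # Henderson-Hasselbalch: [BH+]/[B] = 10**(pKa - pH)
    ratio = 10 ** (pKa - pH)
    return ratio / (1.0 + ratio)

pKa = 8.0  # hypothetical conjugate-acid pKa of a generic alkaloid
for pH in (2.0, 7.0, 10.0, 12.0):
    print(f"pH {pH:4.1f}: {protonated_fraction(pH, pKa):7.1%} protonated")
# Acidic solution -> almost fully protonated (stays in the aqueous phase);
# alkaline solution -> mostly free base (moves into the organic solvent).
```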
Most plants contain several alkaloids. Their mixture is extracted first and then individual alkaloids are separated. Plants are thoroughly ground before extraction. Most alkaloids are present in the raw plants in the form of salts of organic acids. The extracted alkaloids may remain salts or change into bases. Base extraction is achieved by processing the raw material with alkaline solutions and extracting the alkaloid bases with organic solvents, such as 1,2-dichloroethane, chloroform, diethyl ether or benzene. Then, the impurities are dissolved by weak acids; this converts alkaloid bases into salts that are washed away with water. If necessary, an aqueous solution of alkaloid salts is again made alkaline and treated with an organic solvent. The process is repeated until the desired purity is achieved.
In the acidic extraction, the raw plant material is processed by a weak acidic solution (e.g., acetic acid in water, ethanol, or methanol). A base is then added to convert alkaloids to basic forms that are extracted with organic solvent (if the extraction was performed with alcohol, it is removed first, and the remainder is dissolved in water). The solution is purified as described above.
Alkaloids are separated from their mixture using their different solubility in certain solvents and different reactivity with certain reagents or by distillation.
A number of alkaloids have been identified in insects, among which the fire ant venom alkaloids known as solenopsins have received the greatest attention from researchers. These insect alkaloids can be efficiently extracted by solvent immersion of live fire ants or by centrifugation of live ants followed by silica-gel chromatography purification. Tracking and dosing the extracted solenopsin ant alkaloids has been described as possible based on their absorbance peak around 232 nanometers.
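Such absorbance-based dosing is an application of the Beer–Lambert law (a sketch; the molar absorptivity below is hypothetical, chosen only to illustrate the arithmetic, and a real assay would use a calibrated value):

```python
# Beer-Lambert estimate of alkaloid concentration from UV absorbance:
# A = epsilon * l * c  ->  c = A / (epsilon * l)
def concentration_M(absorbance: float, epsilon: float, path_cm: float = 1.0) -> float:
    return absorbance / (epsilon * path_cm)

# epsilon is a hypothetical molar absorptivity (L/(mol*cm)) at 232 nm.
print(f"{concentration_M(absorbance=0.45, epsilon=9000.0):.2e} M")  # 5.00e-05 M
```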
Biosynthesis
Biological precursors of most alkaloids are amino acids, such as ornithine, lysine, phenylalanine, tyrosine, tryptophan, histidine, aspartic acid, and anthranilic acid. Nicotinic acid can be synthesized from tryptophan or aspartic acid. The pathways of alkaloid biosynthesis are too numerous to be easily classified. However, a few typical reactions are involved in the biosynthesis of many classes of alkaloids, including the synthesis of Schiff bases and the Mannich reaction.
Synthesis of Schiff bases
Schiff bases can be obtained by reacting amines with ketones or aldehydes. These reactions are a common method of producing C=N bonds.
In the biosynthesis of alkaloids, such reactions may also take place within a single molecule, as in the ring-closing step of piperidine synthesis.
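The intermolecular case can be illustrated in cheminformatics terms (a sketch using RDKit; the SMARTS pattern and the example molecules are my own illustrative choices, not taken from the text):

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Schiff-base formation: a primary amine condenses with an aldehyde/ketone,
# replacing the C=O oxygen with the amine nitrogen (water is lost).
schiff = AllChem.ReactionFromSmarts("[C:1]=[O:2].[NX3;H2:3]>>[C:1]=[N:3]")

acetaldehyde = Chem.MolFromSmiles("CC=O")   # carbonyl component
methylamine = Chem.MolFromSmiles("CN")      # primary amine

products = schiff.RunReactants((acetaldehyde, methylamine))
imine = products[0][0]
Chem.SanitizeMol(imine)                     # finalize valences on the product
print(Chem.MolToSmiles(imine))              # CC=NC, i.e. N-methylethanimine
```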
Mannich reaction
An integral component of the Mannich reaction, in addition to an amine and a carbonyl compound, is a carbanion, which plays the role of the nucleophile in the nucleophilic addition to the iminium ion formed by the reaction of the amine and the carbonyl compound.
The Mannich reaction can proceed both intermolecularly and intramolecularly.
Dimer alkaloids
In addition to the monomeric alkaloids described above, there are also dimeric, and even trimeric and tetrameric, alkaloids formed upon condensation of two, three, and four monomeric units. Dimeric alkaloids are usually formed from monomers of the same type through the following mechanisms:
Mannich reaction, resulting in, e.g., voacamine
Michael reaction (villalstonine)
Condensation of aldehydes with amines (toxiferine)
Oxidative addition of phenols (dauricine, tubocurarine)
Lactonization (carpaine).
There are also dimeric alkaloids formed from two distinct monomers, such as the vinca alkaloids vinblastine and vincristine, which are formed from the coupling of catharanthine and vindoline. The newer semi-synthetic chemotherapeutic agent vinorelbine is used in the treatment of non-small-cell lung cancer. It is another derivative dimer of vindoline and catharanthine and is synthesised from anhydrovinblastine, starting either from leurosine or the monomers themselves.
Biological role
Alkaloids are among the most important and best-known secondary metabolites, i.e. biogenic substances not directly involved in the normal growth, development, or reproduction of the organism. Instead, they generally mediate ecological interactions, which may produce a selective advantage for the organism by increasing its survivability or fecundity. In some cases their function, if any, remains unclear. An early hypothesis, that alkaloids are the final products of nitrogen metabolism in plants, as urea and uric acid are in mammals, was refuted by the finding that their concentration fluctuates rather than steadily increases.
Most of the known functions of alkaloids are related to protection. For example, the aporphine alkaloid liriodenine produced by the tulip tree protects it from parasitic fungi. In addition, the presence of alkaloids in a plant prevents insects and chordate animals from eating it. However, some animals are adapted to alkaloids and even use them in their own metabolism. Alkaloid-related substances such as serotonin, dopamine and histamine are important neurotransmitters in animals. Alkaloids are also known to regulate plant growth. One example of an organism that uses alkaloids for protection is Utetheisa ornatrix, more commonly known as the ornate moth. Pyrrolizidine alkaloids render its larvae and adult moths unpalatable to many of their natural enemies, such as coccinellid beetles, green lacewings, insectivorous hemipterans and insectivorous bats. Another example of alkaloid utilization occurs in the poison hemlock moth (Agonopterix alstroemeriana). This moth feeds on its highly toxic and alkaloid-rich host plant, poison hemlock (Conium maculatum), during its larval stage. A. alstroemeriana may benefit twofold from the toxicity of the naturally occurring alkaloids: through the unpalatability of the species to predators, and through the ability of A. alstroemeriana to recognize Conium maculatum as the correct location for oviposition. A fire ant venom alkaloid known as solenopsin has been demonstrated to protect queens of invasive fire ants during the foundation of new nests, thus playing a central role in the spread of this pest ant species around the world.
Applications
In medicine
Medical use of alkaloid-containing plants has a long history, and thus, when the first alkaloids were isolated in the 19th century, they immediately found application in clinical practice. Many alkaloids are still used in medicine, usually in the form of salts.
Many synthetic and semisynthetic drugs are structural modifications of alkaloids, designed to enhance or change the primary effect of the drug and to reduce unwanted side-effects. For example, naloxone, an opioid receptor antagonist, is a derivative of thebaine, an alkaloid present in opium.
In agriculture
Prior to the development of a wide range of relatively low-toxic synthetic pesticides, some alkaloids, such as salts of nicotine and anabasine, were used as insecticides. Their use was limited by their high toxicity to humans.
Use as psychoactive drugs
Preparations of plants containing alkaloids and their extracts, and later pure alkaloids, have long been used as psychoactive substances. Cocaine, caffeine, and cathinone are stimulants of the central nervous system. Mescaline and many indole alkaloids (such as psilocybin, dimethyltryptamine and ibogaine) have hallucinogenic effect. Morphine and codeine are strong narcotic pain killers.
There are alkaloids that do not have strong psychoactive effect themselves, but are precursors for semi-synthetic psychoactive drugs. For example, ephedrine and pseudoephedrine are used to produce methcathinone and methamphetamine. Thebaine is used in the synthesis of many painkillers such as oxycodone.
See also
Amine
Base (chemistry)
List of poisonous plants
Mayer's reagent
Natural products
Palau'amine
Secondary metabolite
Explanatory notes
Citations
General and cited references
External links
|
https://en.wikipedia.org/wiki/Antibody
|
An antibody (Ab), also known as an immunoglobulin (Ig), is a large, Y-shaped protein used by the immune system to identify and neutralize foreign objects such as pathogenic bacteria and viruses. The antibody recognizes a unique molecule of the pathogen, called an antigen. Each tip of the "Y" of an antibody contains a paratope (analogous to a lock) that is specific for one particular epitope (analogous to a key) on an antigen, allowing these two structures to bind together with precision. Using this binding mechanism, an antibody can tag a microbe or an infected cell for attack by other parts of the immune system, or can neutralize it directly (for example, by blocking a part of a virus that is essential for its invasion).
To allow the immune system to recognize millions of different antigens, the antigen-binding sites at both tips of the antibody come in an equally wide variety.
In contrast, the remainder of the antibody is relatively constant. In mammals, antibodies occur in a few variants, which define the antibody's class or isotype: IgA, IgD, IgE, IgG, and IgM.
The constant region at the trunk of the antibody includes sites involved in interactions with other components of the immune system. The class hence determines the function triggered by an antibody after binding to an antigen, in addition to some structural features.
Antibodies from different classes also differ in where they are released in the body and at what stage of an immune response.
Together with B and T cells, antibodies comprise the most important part of the adaptive immune system.
They occur in two forms: one that is attached to a B cell, and the other, a soluble form, that is unattached and found in extracellular fluids such as blood plasma.
Initially, all antibodies are of the first form, attached to the surface of a B cell – these are then referred to as B-cell receptors (BCR).
After an antigen binds to a BCR, the B cell activates to proliferate and differentiate into either plasma cells, which secrete soluble antibodies with the same paratope, or memory B cells, which survive in the body to enable long-lasting immunity to the antigen.
Soluble antibodies are released into the blood and tissue fluids, as well as many secretions.
Because these fluids were traditionally known as humors, antibody-mediated immunity is sometimes known as, or considered a part of, humoral immunity.
The soluble Y-shaped units can occur individually as monomers, or in complexes of two to five units.
Antibodies are glycoproteins belonging to the immunoglobulin superfamily.
The terms antibody and immunoglobulin are often used interchangeably, though the term 'antibody' is sometimes reserved for the secreted, soluble form, i.e. excluding B-cell receptors.
Structure
Antibodies are heavy (~150 kDa) proteins of about 10 nm in size,
arranged in three globular regions that roughly form a Y shape.
In humans and most other mammals, an antibody unit consists of four polypeptide chains; two identical heavy chains and two identical light chains connected by disulfide bonds.
Each chain is a series of domains: somewhat similar sequences of about 110 amino acids each.
These domains are usually represented in simplified schematics as rectangles.
Light chains consist of one variable domain VL and one constant domain CL, while heavy chains contain one variable domain VH and three to four constant domains CH1, CH2, ...
Structurally an antibody is also partitioned into two antigen-binding fragments (Fab), containing one VL, VH, CL, and CH1 domain each, as well as the crystallisable fragment (Fc), forming the trunk of the Y shape.
In between them is a hinge region of the heavy chains, whose flexibility allows antibodies to bind to pairs of epitopes at various distances, to form complexes (dimers, trimers, etc.), and to bind effector molecules more easily.
In an electrophoresis test of blood proteins, antibodies mostly migrate to the last, gamma globulin fraction.
Conversely, most gamma-globulins are antibodies, which is why the two terms were historically used as synonyms, as were the symbols Ig and γ.
This variant terminology fell out of use due to the correspondence being inexact and due to confusion with γ (gamma) heavy chains which characterize the IgG class of antibodies.
Antigen-binding site
The variable domains can also be referred to as the Fv region. This is the subregion of the Fab that binds to an antigen.
More specifically, each variable domain contains three hypervariable regions – the amino acids seen there vary the most from antibody to antibody.
When the protein folds, these regions give rise to three loops of β-strands, localized near one another on the surface of the antibody.
These loops are referred to as the complementarity-determining regions (CDRs), since their shape complements that of an antigen.
Three CDRs from each of the heavy and light chains together form an antibody-binding site whose shape can be anything from a pocket to which a smaller antigen binds, to a larger surface, to a protrusion that sticks out into a groove in an antigen.
Typically, however, only a few residues contribute most of the binding energy.
The existence of two identical antibody-binding sites allows antibody molecules to bind strongly to multivalent antigen (repeating sites such as polysaccharides in bacterial cell walls, or other sites at some distance apart), as well as to form antibody complexes and larger antigen-antibody complexes. The resulting cross-linking plays a role in activating other parts of the immune system.
The structures of CDRs have been clustered and classified by Chothia et al., and more recently by North et al. and Nikoloudis et al. However, describing an antibody's binding site using only one single static structure limits the understanding and characterization of the antibody's function and properties. To improve antibody structure prediction and to take the strongly correlated CDR loop and interface movements into account, antibody paratopes should be described as interconverting states in solution with varying probabilities.
In the framework of the immune network theory, CDRs are also called idiotypes. According to immune network theory, the adaptive immune system is regulated by interactions between idiotypes.
Fc region
The Fc region (the trunk of the Y shape) is composed of constant domains from the heavy chains. Its role is in modulating immune cell activity: it is where effector molecules bind to, triggering various effects after the antibody Fab region binds to an antigen.
Effector cells (such as macrophages or natural killer cells) bind via their Fc receptors (FcR) to the Fc region of an antibody, while the complement system is activated by binding the C1q protein complex. IgG or IgM can bind to C1q, but IgA cannot, therefore IgA does not activate the classical complement pathway.
Another role of the Fc region is to selectively distribute different antibody classes across the body. In particular, the neonatal Fc receptor (FcRn) binds to the Fc region of IgG antibodies to transport it across the placenta, from the mother to the fetus.
Antibodies are glycoproteins, that is, they have carbohydrates (glycans) added to conserved amino acid residues.
These conserved glycosylation sites occur in the Fc region and influence interactions with effector molecules.
Protein structure
The N-terminus of each chain is situated at the tip.
Each immunoglobulin domain has a similar structure, characteristic of all the members of the immunoglobulin superfamily:
it is composed of between 7 (for constant domains) and 9 (for variable domains) β-strands, forming two beta sheets in a Greek key motif.
The sheets create a "sandwich" shape, the immunoglobulin fold, held together by a disulfide bond.
Antibody complexes
Secreted antibodies can occur as a single Y-shaped unit, a monomer.
However, some antibody classes also form dimers with two Ig units (as with IgA), tetramers with four Ig units (like teleost fish IgM), or pentamers with five Ig units (like shark IgW or mammalian IgM, which occasionally forms hexamers as well, with six units).
Antibodies also form complexes by binding to antigen: this is called an antigen-antibody complex or immune complex.
Small antigens can cross-link two antibodies, also leading to the formation of antibody dimers, trimers, tetramers, etc.
Multivalent antigens (e.g., cells with multiple epitopes) can form larger complexes with antibodies.
An extreme example is the clumping, or agglutination, of red blood cells with antibodies in the Coombs test to determine blood groups: the large clumps become insoluble, leading to visually apparent precipitation.
B cell receptors
The membrane-bound form of an antibody may be called a surface immunoglobulin (sIg) or a membrane immunoglobulin (mIg). It is part of the B cell receptor (BCR), which allows a B cell to detect when a specific antigen is present in the body and triggers B cell activation. The BCR is composed of surface-bound IgD or IgM antibodies and associated Ig-α and Ig-β heterodimers, which are capable of signal transduction. A typical human B cell will have 50,000 to 100,000 antibodies bound to its surface. Upon antigen binding, they cluster in large patches, which can exceed 1 micrometer in diameter, on lipid rafts that isolate the BCRs from most other cell signaling receptors.
These patches may improve the efficiency of the cellular immune response. In humans, the cell surface is bare around the B cell receptors for several hundred nanometers, which further isolates the BCRs from competing influences.
Classes
Antibodies can come in different varieties known as isotypes or classes. In placental mammals there are five antibody classes known as IgA, IgD, IgE, IgG, and IgM, which are further subdivided into subclasses such as IgA1, IgA2.
The prefix "Ig" stands for immunoglobulin, while the suffix denotes the type of heavy chain the antibody contains: the heavy chain types α (alpha), γ (gamma), δ (delta), ε (epsilon), μ (mu) give rise to IgA, IgG, IgD, IgE, IgM, respectively.
The distinctive features of each class are determined by the part of the heavy chain within the hinge and Fc region.
The classes differ in their biological properties, functional locations and ability to deal with different antigens, as depicted in the table.
For example, IgE antibodies are responsible for the allergic response, in which mast cells release histamine; this pathway is often a major contributor to asthma (though other pathways exist, as do conditions with symptoms very similar to, but not technically, asthma). The antibody's variable region binds to the allergic antigen, for example house dust mite particles, while its Fc region (in the ε heavy chains) binds to the Fcε receptor on a mast cell, triggering its degranulation: the release of molecules stored in its granules.
The antibody isotype of a B cell changes during cell development and activation. Immature B cells, which have never been exposed to an antigen, express only the IgM isotype in a cell surface bound form. The B lymphocyte, in this ready-to-respond form, is known as a "naive B lymphocyte." The naive B lymphocyte expresses both surface IgM and IgD. The co-expression of both of these immunoglobulin isotypes renders the B cell ready to respond to antigen. B cell activation follows engagement of the cell-bound antibody molecule with an antigen, causing the cell to divide and differentiate into an antibody-producing cell called a plasma cell. In this activated form, the B cell starts to produce antibody in a secreted form rather than a membrane-bound form. Some daughter cells of the activated B cells undergo isotype switching, a mechanism that causes the production of antibodies to change from IgM or IgD to the other antibody isotypes, IgE, IgA, or IgG, that have defined roles in the immune system.
Light chain types
In mammals there are two types of immunoglobulin light chain, which are called lambda (λ) and kappa (κ). However, there is no known functional difference between them, and both can occur with any of the five major types of heavy chains. Each antibody contains two identical light chains: both κ or both λ. Proportions of κ and λ types vary by species and can be used to detect abnormal proliferation of B cell clones. Other types of light chains, such as the iota (ι) chain, are found in other vertebrates like sharks (Chondrichthyes) and bony fishes (Teleostei).
In non-mammalian animals
In most placental mammals, the structure of antibodies is generally the same.
Jawed fish appear to be the most primitive animals that are able to make antibodies similar to those of mammals, although many features of their adaptive immunity appeared somewhat earlier.
Cartilaginous fish (such as sharks) produce heavy-chain-only antibodies (i.e., lacking light chains) which moreover feature longer chain pentamers (with five constant units per molecule). Camelids (such as camels, llamas, alpacas) are also notable for producing heavy-chain-only antibodies.
Antibody–antigen interactions
The antibody's paratope interacts with the antigen's epitope. An antigen usually contains different epitopes along its surface arranged discontinuously, and dominant epitopes on a given antigen are called determinants.
Antibody and antigen interact by spatial complementarity (lock and key). The molecular forces involved in the Fab-epitope interaction are weak and non-specific – for example electrostatic forces, hydrogen bonds, hydrophobic interactions, and van der Waals forces. This means binding between antibody and antigen is reversible, and the antibody's affinity towards an antigen is relative rather than absolute. Relatively weak binding also means it is possible for an antibody to cross-react with different antigens of different relative affinities.
Function
The main categories of antibody action include the following:
Neutralisation, in which neutralizing antibodies block parts of the surface of a bacterial cell or virion to render its attack ineffective
Agglutination, in which antibodies "glue together" foreign cells into clumps that are attractive targets for phagocytosis
Precipitation, in which antibodies "glue together" serum-soluble antigens, forcing them to precipitate out of solution in clumps that are attractive targets for phagocytosis
Complement activation (fixation), in which antibodies that are latched onto a foreign cell encourage complement to attack it with a membrane attack complex, which leads to the following:
Lysis of the foreign cell
Encouragement of inflammation by chemotactically attracting inflammatory cells
More indirectly, an antibody can signal immune cells to present antibody fragments to T cells, or downregulate other immune cells to avoid autoimmunity.
Activated B cells differentiate into either antibody-producing cells called plasma cells that secrete soluble antibody or memory cells that survive in the body for years afterward in order to allow the immune system to remember an antigen and respond faster upon future exposures.
At the prenatal and neonatal stages of life, the presence of antibodies is provided by passive immunization from the mother. Early endogenous antibody production varies for different kinds of antibodies and usually appears within the first years of life. Since antibodies exist freely in the bloodstream, they are said to be part of the humoral immune system. Circulating antibodies are produced by clonal B cells that specifically respond to only one antigen (an example is a virus capsid protein fragment). Antibodies contribute to immunity in three ways: they prevent pathogens from entering or damaging cells by binding to them; they stimulate removal of pathogens by macrophages and other cells by coating the pathogen; and they trigger destruction of pathogens by stimulating other immune responses such as the complement pathway. Antibodies will also trigger vasoactive amine degranulation to contribute to immunity against certain types of antigens (helminths, allergens).
Activation of complement
Antibodies that bind to surface antigens (for example, on bacteria) will attract the first component of the complement cascade with their Fc region and initiate activation of the "classical" complement system. This results in the killing of bacteria in two ways. First, the binding of the antibody and complement molecules marks the microbe for ingestion by phagocytes in a process called opsonization; these phagocytes are attracted by certain complement molecules generated in the complement cascade. Second, some complement system components form a membrane attack complex to assist antibodies to kill the bacterium directly (bacteriolysis).
Activation of effector cells
To combat pathogens that replicate outside cells, antibodies bind to pathogens to link them together, causing them to agglutinate. Since an antibody has at least two paratopes, it can bind more than one antigen by binding identical epitopes carried on the surfaces of these antigens. By coating the pathogen, antibodies stimulate effector functions against the pathogen in cells that recognize their Fc region.
Those cells that recognize coated pathogens have Fc receptors, which, as the name suggests, interact with the Fc region of IgA, IgG, and IgE antibodies. The engagement of a particular antibody with the Fc receptor on a particular cell triggers an effector function of that cell; phagocytes will phagocytose, mast cells and neutrophils will degranulate, natural killer cells will release cytokines and cytotoxic molecules; that will ultimately result in destruction of the invading microbe. The activation of natural killer cells by antibodies initiates a cytotoxic mechanism known as antibody-dependent cell-mediated cytotoxicity (ADCC) – this process may explain the efficacy of monoclonal antibodies used in biological therapies against cancer. The Fc receptors are isotype-specific, which gives greater flexibility to the immune system, invoking only the appropriate immune mechanisms for distinct pathogens.
Natural antibodies
Humans and higher primates also produce "natural antibodies" that are present in serum before viral infection. Natural antibodies have been defined as antibodies that are produced without any previous infection, vaccination, other foreign antigen exposure or passive immunization. These antibodies can activate the classical complement pathway leading to lysis of enveloped virus particles long before the adaptive immune response is activated. Many natural antibodies are directed against the disaccharide galactose α(1,3)-galactose (α-Gal), which is found as a terminal sugar on glycosylated cell surface proteins, and generated in response to production of this sugar by bacteria contained in the human gut. Rejection of xenotransplantated organs is thought to be, in part, the result of natural antibodies circulating in the serum of the recipient binding to α-Gal antigens expressed on the donor tissue.
Immunoglobulin diversity
Virtually all microbes can trigger an antibody response. Successful recognition and eradication of many different types of microbes requires diversity among antibodies; their amino acid composition varies allowing them to interact with many different antigens. It has been estimated that humans generate about 10 billion different antibodies, each capable of binding a distinct epitope of an antigen. Although a huge repertoire of different antibodies is generated in a single individual, the number of genes available to make these proteins is limited by the size of the human genome. Several complex genetic mechanisms have evolved that allow vertebrate B cells to generate a diverse pool of antibodies from a relatively small number of antibody genes.
Domain variability
The chromosomal region that encodes an antibody is large and contains several distinct gene loci for each domain of the antibody—the chromosome region containing heavy chain genes (IGH@) is found on chromosome 14, and the loci containing lambda and kappa light chain genes (IGL@ and IGK@) are found on chromosomes 22 and 2 in humans. One of these domains is called the variable domain, which is present in each heavy and light chain of every antibody, but can differ in different antibodies generated from distinct B cells. Differences between the variable domains are located on three loops known as hypervariable regions (HV-1, HV-2 and HV-3) or complementarity-determining regions (CDR1, CDR2 and CDR3). CDRs are supported within the variable domains by conserved framework regions. The heavy chain locus contains about 65 different variable domain genes that all differ in their CDRs. Combining these genes with an array of genes for other domains of the antibody generates a large repertoire of antibodies with a high degree of variability. This combinatorial process, called V(D)J recombination, is discussed below.
V(D)J recombination
Somatic recombination of immunoglobulins, also known as V(D)J recombination, involves the generation of a unique immunoglobulin variable region. The variable region of each immunoglobulin heavy or light chain is encoded in several pieces—known as gene segments (subgenes). These segments are called variable (V), diversity (D) and joining (J) segments. V, D and J segments are found in Ig heavy chains, but only V and J segments are found in Ig light chains. Multiple copies of the V, D and J gene segments exist, and are tandemly arranged in the genomes of mammals. In the bone marrow, each developing B cell will assemble an immunoglobulin variable region by randomly selecting and combining one V, one D and one J gene segment (or one V and one J segment in the light chain). As there are multiple copies of each type of gene segment, and different combinations of gene segments can be used to generate each immunoglobulin variable region, this process generates a huge number of antibodies, each with different paratopes, and thus different antigen specificities. The rearrangement of several subgenes (i.e. V2 family) for lambda light chain immunoglobulin is coupled with the activation of microRNA miR-650, which further influences biology of B-cells.
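To see how a modest gene pool yields enormous diversity, consider a toy combinatorial count (a sketch; the segment counts are commonly cited approximate human figures, and junctional diversity multiplies the result far further):

```python
# Combinatorial (pre-junctional) diversity from V(D)J segment choice alone.
heavy = {"V": 65, "D": 27, "J": 6}      # rough human heavy-chain segment counts
kappa = {"V": 40, "J": 5}               # rough kappa light-chain counts
lambda_ = {"V": 30, "J": 4}             # rough lambda light-chain counts

heavy_combos = heavy["V"] * heavy["D"] * heavy["J"]                   # 10,530
light_combos = kappa["V"] * kappa["J"] + lambda_["V"] * lambda_["J"]  # 320

print(f"heavy-chain combinations: {heavy_combos:,}")
print(f"light-chain combinations: {light_combos:,}")
print(f"paired antibodies: {heavy_combos * light_combos:,}")  # ~3.4 million
# Junctional insertions/deletions and somatic hypermutation expand this
# combinatorial floor toward the ~1e10 repertoire estimated in the text.
```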
RAG proteins play an important role in V(D)J recombination, cutting the DNA at particular regions. Without these proteins, V(D)J recombination would not occur.
After a B cell produces a functional immunoglobulin gene during V(D)J recombination, it cannot express any other variable region (a process known as allelic exclusion) thus each B cell can produce antibodies containing only one kind of variable chain.
Somatic hypermutation and affinity maturation
Following activation with antigen, B cells begin to proliferate rapidly. In these rapidly dividing cells, the genes encoding the variable domains of the heavy and light chains undergo a high rate of point mutation, by a process called somatic hypermutation (SHM). SHM results in approximately one nucleotide change per variable gene, per cell division. As a consequence, any daughter B cells will acquire slight amino acid differences in the variable domains of their antibody chains.
This serves to increase the diversity of the antibody pool and impacts the antibody's antigen-binding affinity. Some point mutations will result in the production of antibodies that have a weaker interaction (low affinity) with their antigen than the original antibody, and some mutations will generate antibodies with a stronger interaction (high affinity). B cells that express high affinity antibodies on their surface will receive a strong survival signal during interactions with other cells, whereas those with low affinity antibodies will not, and will die by apoptosis. Thus, B cells expressing antibodies with a higher affinity for the antigen will outcompete those with weaker affinities for function and survival allowing the average affinity of antibodies to increase over time. The process of generating antibodies with increased binding affinities is called affinity maturation. Affinity maturation occurs in mature B cells after V(D)J recombination, and is dependent on help from helper T cells.
Class switching
Isotype or class switching is a biological process occurring after activation of the B cell, which allows the cell to produce different classes of antibody (IgA, IgE, or IgG). The different classes of antibody, and thus effector functions, are defined by the constant (C) regions of the immunoglobulin heavy chain. Initially, naive B cells express only cell-surface IgM and IgD with identical antigen binding regions. Each isotype is adapted for a distinct function; therefore, after activation, an antibody with an IgG, IgA, or IgE effector function might be required to effectively eliminate an antigen. Class switching allows different daughter cells from the same activated B cell to produce antibodies of different isotypes. Only the constant region of the antibody heavy chain changes during class switching; the variable regions, and therefore antigen specificity, remain unchanged. Thus the progeny of a single B cell can produce antibodies, all specific for the same antigen, but with the ability to produce the effector function appropriate for each antigenic challenge. Class switching is triggered by cytokines; the isotype generated depends on which cytokines are present in the B cell environment.
Class switching occurs in the heavy chain gene locus by a mechanism called class switch recombination (CSR). This mechanism relies on conserved nucleotide motifs, called switch (S) regions, found in DNA upstream of each constant region gene (except in the δ-chain). The DNA strand is broken by the activity of a series of enzymes at two selected S-regions. The variable domain exon is rejoined through a process called non-homologous end joining (NHEJ) to the desired constant region (γ, α or ε). This process results in an immunoglobulin gene that encodes an antibody of a different isotype.
Specificity designations
An antibody is called monospecific if all of its binding sites are specific for the same antigen or epitope, and bispecific if it has affinity for two different antigens or for two different epitopes on the same antigen. A group of antibodies can be called polyvalent (or unspecific) if they have affinity for various antigens or microorganisms. Intravenous immunoglobulin, if not otherwise noted, consists of a variety of different IgG (polyclonal IgG). In contrast, monoclonal antibodies are identical antibodies produced by a single B cell.
Asymmetrical antibodies
Heterodimeric antibodies, which are also asymmetrical antibodies, allow for greater flexibility and new formats for attaching a variety of drugs to the antibody arms. One of the general formats for a heterodimeric antibody is the "knobs-into-holes" format. This format is specific to the heavy chain part of the constant region in antibodies. The "knobs" part is engineered by replacing a small amino acid with a larger one. It fits into the "hole", which is engineered by replacing a large amino acid with a smaller one. What connects the "knobs" to the "holes" are the disulfide bonds between each chain. The "knobs-into-holes" shape facilitates antibody dependent cell mediated cytotoxicity. Single chain variable fragments (scFv) are connected to the variable domain of the heavy and light chain via a short linker peptide. The linker is rich in glycine, which gives it more flexibility, and serine/threonine, which gives it specificity. Two different scFv fragments can be connected together, via a hinge region, to the constant domain of the heavy chain or the constant domain of the light chain. This gives the antibody bispecificity, allowing for the binding specificities of two different antigens. The "knobs-into-holes" format enhances heterodimer formation but does not suppress homodimer formation.
To further improve the function of heterodimeric antibodies, many scientists are looking towards artificial constructs. Artificial antibodies are largely diverse protein motifs that use the functional strategy of the antibody molecule, but are not limited by the loop and framework structural constraints of the natural antibody. Being able to control the combinational design of the sequence and three-dimensional space could transcend the natural design and allow for the attachment of different combinations of drugs to the arms.
Heterodimeric antibodies have a greater range in the shapes they can take, and the drugs attached to the arms do not have to be the same on each arm, allowing different combinations of drugs to be used in cancer treatment. Pharmaceutical companies are able to produce highly functional bispecific, and even multispecific, antibodies. The degree to which they function is notable given that such a departure from the natural shape would be expected to decrease functionality.
History
The first use of the term "antibody" occurred in a text by Paul Ehrlich. The term Antikörper (the German word for antibody) appears in the conclusion of his article "Experimental Studies on Immunity", published in October 1891, which states that, "if two substances give rise to two different Antikörper, then they themselves must be different". However, the term was not accepted immediately and several other terms for antibody were proposed; these included Immunkörper, Amboceptor, Zwischenkörper, substance sensibilisatrice, copula, Desmon, philocytase, fixateur, and Immunisin. The word antibody is formally analogous to the word antitoxin and similar in concept to Immunkörper (immune body in English). As such, the original construction of the word contains a logical flaw; the antitoxin is something directed against a toxin, while the antibody is a body directed against something.
The study of antibodies began in 1890 when Emil von Behring and Kitasato Shibasaburō described antibody activity against diphtheria and tetanus toxins. Von Behring and Kitasato put forward the theory of humoral immunity, proposing that a mediator in serum could react with a foreign antigen. Their idea prompted Paul Ehrlich to propose the side-chain theory for antibody and antigen interaction in 1897, when he hypothesized that receptors (described as "side-chains") on the surface of cells could bind specifically to toxins – in a "lock-and-key" interaction – and that this binding reaction is the trigger for the production of antibodies. Other researchers believed that antibodies existed freely in the blood and, in 1904, Almroth Wright suggested that soluble antibodies coated bacteria to label them for phagocytosis and killing; a process that he named opsonization.
In the 1920s, Michael Heidelberger and Oswald Avery observed that antigens could be precipitated by antibodies and went on to show that antibodies are made of protein. The biochemical properties of antigen-antibody-binding interactions were examined in more detail in the late 1930s by John Marrack. The next major advance was in the 1940s, when Linus Pauling confirmed the lock-and-key theory proposed by Ehrlich by showing that the interactions between antibodies and antigens depend more on their shape than their chemical composition. In 1948, Astrid Fagraeus discovered that B cells, in the form of plasma cells, were responsible for generating antibodies.
Further work concentrated on characterizing the structures of the antibody proteins. A major advance in these structural studies was the discovery in the early 1960s by Gerald Edelman and Joseph Gally of the antibody light chain, and their realization that this protein is the same as the Bence-Jones protein described in 1845 by Henry Bence Jones. Edelman went on to discover that antibodies are composed of disulfide bond-linked heavy and light chains. Around the same time, antibody-binding (Fab) and antibody tail (Fc) regions of IgG were characterized by Rodney Porter. Together, these scientists deduced the structure and complete amino acid sequence of IgG, a feat for which they were jointly awarded the 1972 Nobel Prize in Physiology or Medicine. The Fv fragment was prepared and characterized by David Givol. While most of these early studies focused on IgM and IgG, other immunoglobulin isotypes were identified in the 1960s: Thomas Tomasi discovered secretory antibody (IgA); David S. Rowe and John L. Fahey discovered IgD; and Kimishige Ishizaka and Teruko Ishizaka discovered IgE and showed it was a class of antibodies involved in allergic reactions. In a landmark series of experiments beginning in 1976, Susumu Tonegawa showed that genetic material can rearrange itself to form the vast array of available antibodies.
Medical applications
Disease diagnosis
Detection of particular antibodies is a very common form of medical diagnostics, and applications such as serology depend on these methods. For example, in biochemical assays for disease diagnosis, a titer of antibodies directed against Epstein-Barr virus or Lyme disease is estimated from the blood. If those antibodies are not present, either the person is not infected or the infection occurred a very long time ago, and the B cells generating these specific antibodies have naturally decayed.
In clinical immunology, levels of individual classes of immunoglobulins are measured by nephelometry (or turbidimetry) to characterize the antibody profile of a patient. Elevations in different classes of immunoglobulins are sometimes useful in determining the cause of liver damage in patients for whom the diagnosis is unclear. For example, elevated IgA indicates alcoholic cirrhosis, elevated IgM indicates viral hepatitis and primary biliary cirrhosis, while IgG is elevated in viral hepatitis, autoimmune hepatitis and cirrhosis.
Autoimmune disorders can often be traced to antibodies that bind the body's own epitopes; many can be detected through blood tests. Antibodies directed against red blood cell surface antigens in immune mediated hemolytic anemia are detected with the Coombs test. The Coombs test is also used for antibody screening in blood transfusion preparation and also for antibody screening in antenatal women.
In practice, several immunodiagnostic methods based on the detection of antigen-antibody complexes are used to diagnose infectious diseases, for example ELISA, immunofluorescence, Western blot, immunodiffusion, immunoelectrophoresis, and magnetic immunoassay. Antibodies raised against human chorionic gonadotropin are used in over-the-counter pregnancy tests.
New dioxaborolane chemistry enables radioactive fluoride (18F) labeling of antibodies, which allows for positron emission tomography (PET) imaging of cancer.
Disease therapy
Targeted monoclonal antibody therapy is employed to treat diseases such as rheumatoid arthritis, multiple sclerosis, psoriasis, and many forms of cancer including non-Hodgkin's lymphoma, colorectal cancer, head and neck cancer and breast cancer.
Some immune deficiencies, such as X-linked agammaglobulinemia and hypogammaglobulinemia, result in partial or complete lack of antibodies. These diseases are often treated by inducing a short-term form of immunity called passive immunity. Passive immunity is achieved through the transfer of ready-made antibodies in the form of human or animal serum, pooled immunoglobulin or monoclonal antibodies, into the affected individual.
Prenatal therapy
Rh factor, also known as Rh D antigen, is an antigen found on red blood cells; individuals that are Rh-positive (Rh+) have this antigen on their red blood cells and individuals that are Rh-negative (Rh–) do not. During normal childbirth, delivery trauma or complications during pregnancy, blood from a fetus can enter the mother's system. In the case of an Rh-incompatible mother and child, consequential blood mixing may sensitize an Rh- mother to the Rh antigen on the blood cells of the Rh+ child, putting the remainder of the pregnancy, and any subsequent pregnancies, at risk for hemolytic disease of the newborn.
Rho(D) immune globulin antibodies are specific for human RhD antigen. Anti-RhD antibodies are administered as part of a prenatal treatment regimen to prevent sensitization that may occur when an Rh-negative mother has an Rh-positive fetus. Treatment of the mother with anti-RhD antibodies prior to and immediately after trauma and delivery destroys fetal Rh antigen in her system before it can stimulate maternal B cells to "remember" the Rh antigen by generating memory B cells. Therefore, her humoral immune system will not make anti-Rh antibodies, and will not attack the Rh antigens of the current or subsequent babies. Rho(D) immune globulin treatment prevents sensitization that can lead to Rh disease, but does not prevent or treat the underlying disease itself.
Research applications
Specific antibodies are produced by injecting an antigen into a mammal, such as a mouse, rat, rabbit, goat, sheep, or horse (larger animals being used when large quantities of antibody are required). Blood isolated from these animals contains polyclonal antibodies—multiple antibodies that bind to the same antigen—in the serum, which can now be called antiserum. Antigens are also injected into chickens for generation of polyclonal antibodies in egg yolk. To obtain an antibody that is specific for a single epitope of an antigen, antibody-secreting lymphocytes are isolated from the animal and immortalized by fusing them with a cancer cell line. The fused cells are called hybridomas, and will continually grow and secrete antibody in culture. Single hybridoma cells are isolated by dilution cloning to generate cell clones that all produce the same antibody; these antibodies are called monoclonal antibodies. Polyclonal and monoclonal antibodies are often purified using Protein A/G or antigen-affinity chromatography.
In research, purified antibodies are used in many applications. Antibodies for research applications can be found directly from antibody suppliers, or through use of a specialist search engine. Research antibodies are most commonly used to identify and locate intracellular and extracellular proteins. Antibodies are used in flow cytometry to differentiate cell types by the proteins they express; different types of cells express different combinations of cluster of differentiation molecules on their surface, and produce different intracellular and secretable proteins. They are also used in immunoprecipitation to separate proteins and anything bound to them (co-immunoprecipitation) from other molecules in a cell lysate, in Western blot analyses to identify proteins separated by electrophoresis, and in immunohistochemistry or immunofluorescence to examine protein expression in tissue sections or to locate proteins within cells with the assistance of a microscope. Proteins can also be detected and quantified with antibodies, using ELISA and ELISpot techniques.
Antibodies used in research are some of the most powerful yet most problematic reagents, with a tremendous number of factors that must be controlled in any experiment, including cross-reactivity (the antibody recognizing multiple epitopes) and affinity, which can vary widely depending on experimental conditions such as pH, solvent, and the state of the tissue. Multiple attempts have been made to improve both the way that researchers validate antibodies and the ways in which they report on antibodies. Researchers using antibodies in their work need to record them correctly in order to allow their research to be reproducible (and therefore tested, and qualified by other researchers). Less than half of research antibodies referenced in academic papers can be easily identified. Papers published in F1000 in 2014 and 2015 provide researchers with a guide for reporting research antibody use. The RRID paper is co-published in four journals that implemented the RRID standard for research resource citation, which draws data from antibodyregistry.org as the source of antibody identifiers (see also the group at Force11).
Antibody regions can be used to further biomedical research by acting as a guide for drugs to reach their target. Several applications involve using bacterial plasmids to tag proteins with the Fc region of the antibody, such as the pFUSE-Fc plasmid.
Regulations
Production and testing
Traditionally, most antibodies are produced by hybridoma cell lines through immortalization of antibody-producing cells by chemically induced fusion with myeloma cells. In some cases, additional fusions with other lines have created "triomas" and "quadromas". The manufacturing process should be appropriately described and validated. Validation studies should at least include:
Demonstration that the process is able to produce antibody of good quality (the process should be validated)
The efficiency of the antibody purification (all impurities and viruses must be eliminated)
The characterization of the purified antibody (physicochemical characterization, immunological properties, biological activities, contaminants, ...)
Virus clearance studies
Before clinical trials
Product safety testing: sterility (bacteria and fungi), in vitro and in vivo testing for adventitious viruses, murine retrovirus testing, ... Product safety data are needed before the initiation of feasibility trials in serious or immediately life-threatening conditions; they serve to evaluate the dangerous potential of the product.
Feasibility testing: These are pilot studies whose objectives include, among others, early characterization of safety and initial proof of concept in a small specific patient population (in vitro or in vivo testing).
Preclinical studies
Testing cross-reactivity of antibody: to highlight unwanted interactions (toxicity) of antibodies with previously characterized tissues. This study can be performed in vitro (the reactivity of the antibody or immunoconjugate should be determined with quick-frozen adult tissues) or in vivo (with appropriate animal models).
Preclinical pharmacology and toxicity testing: preclinical safety testing of antibody is designed to identify possible toxicity in humans, to estimate the likelihood and severity of potential adverse events in humans, and to identify a safe starting dose and dose escalation, when possible.
Animal toxicity studies: Acute toxicity testing, repeat-dose toxicity testing, long-term toxicity testing
Pharmacokinetics and pharmacodynamics testing: used to determine clinical dosages and antibody activities, and to evaluate potential clinical effects
Structure prediction and computational antibody design
The importance of antibodies in health care and the biotechnology industry demands knowledge of their structures at high resolution. This information is used for protein engineering, modifying the antigen binding affinity, and identifying an epitope of a given antibody. X-ray crystallography is one commonly used method for determining antibody structures. However, crystallizing an antibody is often laborious and time-consuming. Computational approaches provide a cheaper and faster alternative to crystallography, but their results are more equivocal, since they do not produce empirical structures. Online web servers such as Web Antibody Modeling (WAM) and Prediction of Immunoglobulin Structure (PIGS) enable computational modeling of antibody variable regions. Rosetta Antibody is a novel antibody Fv region structure prediction server, which incorporates sophisticated techniques to minimize CDR loops and optimize the relative orientation of the light and heavy chains, as well as homology models that predict successful docking of antibodies with their unique antigen. However, describing an antibody's binding site using only a single static structure limits the understanding and characterization of the antibody's function and properties. To improve antibody structure prediction and to take the strongly correlated CDR loop and interface movements into account, antibody paratopes should be described as interconverting states in solution with varying probabilities.
The ability to describe the antibody through binding affinity to the antigen is supplemented by information on antibody structure and amino acid sequences for the purpose of patent claims. Several methods have been presented for computational design of antibodies based on the structural bioinformatics studies of antibody CDRs.
There are a variety of methods used to sequence an antibody, including Edman degradation, cDNA, etc.; however, one of the most common modern approaches to peptide/protein identification is liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS). High-volume antibody sequencing methods require computational approaches for data analysis, including de novo sequencing directly from tandem mass spectra and database search methods that use existing protein sequence databases. Many versions of shotgun protein sequencing are able to increase coverage by utilizing CID/HCD/ETD fragmentation methods and other techniques, and they have achieved substantial progress in the attempt to fully sequence proteins, especially antibodies. Other methods have assumed the existence of similar proteins, a known genome sequence, or combined top-down and bottom-up approaches. Current technologies have the ability to assemble protein sequences with high accuracy by integrating de novo sequencing peptides, intensity, and positional confidence scores from database and homology searches.
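The core of the database-search idea can be shown in a short Python sketch (an illustrative simplification added here, not any particular tool's algorithm): candidate peptides from a sequence database are matched against a measured precursor mass within a parts-per-million tolerance.

RESIDUE_MASS = {  # standard monoisotopic residue masses in daltons; I and L are isobaric
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "T": 101.04768, "C": 103.00919, "I": 113.08406,
    "L": 113.08406, "N": 114.04293, "D": 115.02694, "Q": 128.05858,
    "K": 128.09496, "E": 129.04259, "M": 131.04049, "H": 137.05891,
    "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931,
}
WATER = 18.01056  # one H2O per peptide, for the N- and C-termini

def peptide_mass(sequence):
    """Neutral monoisotopic mass of an unmodified peptide."""
    return sum(RESIDUE_MASS[aa] for aa in sequence) + WATER

def matches(observed, candidate, tolerance_ppm=10.0):
    """Does a database peptide match the measured mass within tolerance?"""
    expected = peptide_mass(candidate)
    return abs(observed - expected) / expected * 1e6 <= tolerance_ppm

print(matches(799.3600, "PEPTIDE"))  # True: within 10 ppm of 799.35994

Real search engines additionally score the fragment-ion spectrum against each mass-matched candidate; the mass filter above is only the first step.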
Antibody mimetic
Antibody mimetics are organic compounds that, like antibodies, can specifically bind antigens. They consist of artificial peptides or proteins, or aptamer-based nucleic acid molecules, with a molar mass of about 3 to 20 kDa. Antibody fragments, such as Fab fragments and nanobodies, are not considered antibody mimetics. Common advantages over antibodies are better solubility, tissue penetration, stability towards heat and enzymes, and comparatively low production costs. Antibody mimetics have been developed and commercialized as research, diagnostic and therapeutic agents.
Binding antibody unit
BAU (binding antibody unit, often as BAU/mL) is a measurement unit defined by the WHO for the comparison of assays detecting the same class of immunoglobulins with the same specificity.
See also
Affimer
Anti-mitochondrial antibodies
Anti-nuclear antibodies
Antibody mimetic
Aptamer
Colostrum
ELISA
Humoral immunity
Immunology
Immunosuppressive drug
Intravenous immunoglobulin (IVIg)
Magnetic immunoassay
Microantibody
Monoclonal antibody
Neutralizing antibody
Optimer Ligand
Secondary antibodies
Single-domain antibody
Slope spectroscopy
Synthetic antibody
Western blot normalization
References
External links
Mike's Immunoglobulin Structure/Function Page at University of Cambridge
Antibodies as the PDB molecule of the month Discussion of the structure of antibodies at RCSB Protein Data Bank
A hundred years of antibody therapy History and applications of antibodies in the treatment of disease at University of Oxford
How Lymphocytes Produce Antibody from Cells Alive!
Glycoproteins
Immunology
Reagents for biochemistry
|
https://en.wikipedia.org/wiki/Anode
|
An anode is an electrode of a polarized electrical device through which conventional current enters the device. This contrasts with a cathode, an electrode of the device through which conventional current leaves the device. A common mnemonic is ACID, for "anode current into device". The direction of conventional current (the flow of positive charges) in a circuit is opposite to the direction of electron flow, so (negatively charged) electrons flow out of the anode of a galvanic cell into the external circuit connected to the cell. For example, the end of a household battery marked with a "+" is the cathode (while discharging).
In both a galvanic cell and an electrolytic cell, the anode is the electrode at which the oxidation reaction occurs. In a galvanic cell the anode is the wire or plate having excess negative charge as a result of the oxidation reaction. In an electrolytic cell, the anode is the wire or plate upon which excess positive charge is imposed. As a result of this, anions will tend to move towards the anode where they will undergo oxidation.
Historically, the anode of a galvanic cell was also known as the zincode because it was usually composed of zinc.
Charge flow
The terms anode and cathode are not defined by the voltage polarity of electrodes but the direction of current through the electrode. An anode is an electrode of a device through which conventional current (positive charge) flows into the device from an external circuit, while a cathode is an electrode through which conventional current flows out of the device. If the current through the electrodes reverses direction, as occurs for example in a rechargeable battery when it is being charged, the roles of the electrodes as anode and cathode are reversed.
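This naming rule can be condensed into a minimal sketch (an illustration added here, not part of the source):

def electrode_name(current_into_device):
    """Name an electrode by the direction of conventional current through it."""
    return "anode" if current_into_device else "cathode"

# During discharge, conventional current enters the cell at the negative
# terminal, so that terminal is the anode; charging reverses the current,
# and the same physical terminal becomes the cathode.
print(electrode_name(current_into_device=True))   # anode (discharging)
print(electrode_name(current_into_device=False))  # cathode (charging)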
Conventional current depends not only on the direction the charge carriers move, but also the carriers' electric charge. The currents outside the device are usually carried by electrons in a metal conductor. Since electrons have a negative charge, the direction of electron flow is opposite to the direction of conventional current. Consequently, electrons leave the device through the anode and enter the device through the cathode.
The definition of anode and cathode is different for electrical devices such as diodes and vacuum tubes where the electrode naming is fixed and does not depend on the actual charge flow (current). These devices usually allow substantial current flow in one direction but negligible current in the other direction. Therefore, the electrodes are named based on the direction of this "forward" current. In a diode the anode is the terminal through which current enters and the cathode is the terminal through which current leaves, when the diode is forward biased. The names of the electrodes do not change in cases where reverse current flows through the device. Similarly, in a vacuum tube only one electrode can emit electrons into the evacuated tube due to being heated by a filament, so electrons can only enter the device from the external circuit through the heated electrode. Therefore, this electrode is permanently named the cathode, and the electrode through which the electrons exit the tube is named the anode.
Examples
The polarity of voltage on an anode with respect to an associated cathode varies depending on the device type and on its operating mode. In the following examples, the anode is negative in a device that provides power, and positive in a device that consumes power:
In a discharging battery or galvanic cell, the anode is the negative terminal: it is where conventional current flows into the cell. This inward current is carried externally by electrons moving outwards.
In a recharging battery, or an electrolytic cell, the anode is the positive terminal imposed by an external source of potential difference. The current through a recharging battery is opposite to the direction of current during discharge; in other words, the electrode which was the cathode during battery discharge becomes the anode while the battery is recharging.
In battery engineering, it is common to designate one electrode of a rechargeable battery the anode and the other the cathode according to the roles the electrodes play when the battery is discharged. This is despite the fact that the roles are reversed when the battery is charged. When this is done, "anode" simply designates the negative terminal of the battery and "cathode" designates the positive terminal.
In a diode, the anode is the terminal represented by the tail of the arrow symbol (flat side of the triangle), where conventional current flows into the device. Note the electrode naming for diodes is always based on the direction of the forward current (that of the arrow, in which the current flows "most easily"), even for types such as Zener diodes or solar cells where the current of interest is the reverse current.
In vacuum tubes or gas-filled tubes, the anode is the terminal where current enters the tube.
Etymology
The word was coined in 1834 from the Greek ἄνοδος (anodos), 'ascent', by William Whewell, who had been consulted by Michael Faraday over some new names needed to complete a paper on the recently discovered process of electrolysis. In that paper Faraday explained that when an electrolytic cell is oriented so that electric current traverses the "decomposing body" (electrolyte) in a direction "from East to West, or, which will strengthen this help to the memory, that in which the sun appears to move", the anode is where the current enters the electrolyte, on the East side: "ano upwards, odos a way; the way which the sun rises".
The use of 'East' to mean the 'in' direction (actually 'in' → 'East' → 'sunrise' → 'up') may appear contrived. Previously, as related in the first reference cited above, Faraday had used the more straightforward term "eisode" (the doorway where the current enters). His motivation for changing it to something meaning 'the East electrode' (other candidates had been "eastode", "oriode" and "anatolode") was to make it immune to a possible later change in the direction convention for current, whose exact nature was not known at the time. The reference he used to this effect was the Earth's magnetic field direction, which at that time was believed to be invariant. He fundamentally defined his arbitrary orientation for the cell as being that in which the internal current would run parallel to and in the same direction as a hypothetical magnetizing current loop around the local line of latitude which would induce a magnetic dipole field oriented like the Earth's. This made the internal current East to West as previously mentioned, but in the event of a later convention change it would have become West to East, so that the East electrode would not have been the 'way in' any more. Therefore, "eisode" would have become inappropriate, whereas "anode" meaning 'East electrode' would have remained correct with respect to the unchanged direction of the actual phenomenon underlying the current, then unknown but, he thought, unambiguously defined by the magnetic reference. In retrospect the name change was unfortunate, not only because the Greek roots alone do not reveal the anode's function any more, but more importantly because as we now know, the Earth's magnetic field direction on which the "anode" term is based is subject to reversals whereas the current direction convention on which the "eisode" term was based has no reason to change in the future.
Since the later discovery of the electron, an easier-to-remember etymology has been suggested, one that is more durably correct technically although historically false: anode, from the Greek anodos, 'way up', 'the way (up) out of the cell (or other device) for electrons'.
Electrolytic anode
In electrochemistry, the anode is where oxidation occurs and is the positive polarity contact in an electrolytic cell. At the anode, anions (negative ions) are forced by the electrical potential to react chemically and give off electrons (oxidation) which then flow up and into the driving circuit. Mnemonics: LEO Red Cat (Loss of Electrons is Oxidation, Reduction occurs at the Cathode), or AnOx Red Cat (Anode Oxidation, Reduction Cathode), or OIL RIG (Oxidation is Loss, Reduction is Gain of electrons), or Roman Catholic and Orthodox (Reduction – Cathode, anode – Oxidation), or LEO the lion says GER (Losing electrons is Oxidation, Gaining electrons is Reduction).
This process is widely used in metals refining. For example, in copper refining, copper anodes, an intermediate product from the furnaces, are electrolysed in an appropriate solution (such as sulfuric acid) to yield high purity (99.99%) cathodes. Copper cathodes produced using this method are also described as electrolytic copper.
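The electrode half-reactions in copper refining (standard electrochemistry, stated here for illustration) make the roles explicit: copper dissolves from the impure anode and is redeposited as pure metal at the cathode.

Anode (oxidation): Cu(s) → Cu²⁺(aq) + 2e⁻
Cathode (reduction): Cu²⁺(aq) + 2e⁻ → Cu(s)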
Historically, when non-reactive anodes were desired for electrolysis, graphite (called plumbago in Faraday's time) or platinum were chosen. They were found to be some of the least reactive materials for anodes. Platinum erodes very slowly compared to other materials, and graphite crumbles and can produce carbon dioxide in aqueous solutions but otherwise does not participate in the reaction.
Battery or galvanic cell anode
In a battery or galvanic cell, the anode is the negative electrode from which electrons flow out towards the external part of the circuit. Internally, the positively charged cations flow away from the anode even though it is negative and would therefore be expected to attract them; this is because the electrode potential relative to the electrolyte solution differs between the anode and cathode metal/electrolyte systems. External to the cell in the circuit, electrons are pushed out through the negative contact and thus through the circuit by the voltage potential, as would be expected. Note: in a galvanic cell, contrary to what occurs in an electrolytic cell, no anions flow to the anode; the internal current is entirely accounted for by the cations flowing away from it.
Battery manufacturers may regard the negative electrode as the anode, particularly in their technical literature. Though technically incorrect, it does resolve the problem of which electrode is the anode in a secondary (or rechargeable) cell. Using the traditional definition, the anode switches ends between charge and discharge cycles.
Vacuum tube anode
In electronic vacuum devices such as a cathode-ray tube, the anode is the positively charged electron collector. In a tube, the anode is a charged positive plate that collects the electrons emitted by the cathode through electric attraction. It also accelerates the flow of these electrons.
Diode anode
In a semiconductor diode, the anode is the P-doped layer which initially supplies holes to the junction. In the junction region, the holes supplied by the anode combine with electrons supplied from the N-doped region, creating a depleted zone. As the P-doped layer supplies holes to the depleted region, negative dopant ions are left behind in the P-doped layer ('P' denoting the positive charge carriers, the holes). This creates a base negative charge on the anode. When a positive voltage is applied to the anode of the diode from the circuit, more holes are able to be transferred to the depleted region, and this causes the diode to become conductive, allowing current to flow through the circuit. The terms anode and cathode should not be applied to a Zener diode, since it allows flow in either direction, depending on the polarity of the applied potential (i.e. voltage).
Sacrificial anode
In cathodic protection, a metal anode that is more reactive to the corrosive environment than the metal system to be protected is electrically linked to the protected system. As a result, the metal anode partially corrodes or dissolves instead of the metal system. As an example, an iron or steel ship's hull may be protected by a zinc sacrificial anode, which will dissolve into the seawater and prevent the hull from being corroded. Sacrificial anodes are particularly needed for systems where a static charge is generated by the action of flowing liquids, such as pipelines and watercraft. Sacrificial anodes are also generally used in tank-type water heaters.
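Standard electrode potentials (textbook values, quoted here for illustration) show why zinc protects iron: the metal with the more negative potential is oxidized preferentially and becomes the anode of the corrosion couple.

Zn²⁺ + 2e⁻ → Zn, E° = −0.76 V
Fe²⁺ + 2e⁻ → Fe, E° = −0.44 V

Because zinc's reduction potential is more negative, the zinc dissolves while the iron coupled to it is held as the cathode and spared from corrosion.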
In 1824, to reduce the impact of this destructive electrolytic action on ships' hulls, their fastenings and underwater equipment, the scientist-engineer Humphry Davy developed the first and still most widely used marine electrolysis protection system. Davy installed sacrificial anodes made from a more electrically reactive (less noble) metal attached to the vessel hull and electrically connected to form a cathodic protection circuit.
A less obvious example of this type of protection is the process of galvanising iron. This process coats iron structures (such as fencing) with a coating of zinc metal. As long as the zinc remains intact, the iron is protected from the effects of corrosion. Inevitably, the zinc coating becomes breached, either by cracking or physical damage. Once this occurs, corrosive elements act as an electrolyte and the zinc/iron combination as electrodes. The resultant current ensures that the zinc coating is sacrificed but that the base iron does not corrode. Such a coating can protect an iron structure for a few decades, but once the protecting coating is consumed, the iron rapidly corrodes.
If, conversely, tin is used to coat steel, when a breach of the coating occurs it actually accelerates oxidation of the iron.
Impressed current anode
Another form of cathodic protection uses an impressed current anode. It is made from titanium and covered with mixed metal oxide. Unlike the sacrificial anode rod, the impressed current anode does not sacrifice its structure. This technology uses an external current provided by a DC source to create the cathodic protection. Impressed current anodes are used in larger structures like pipelines, boats, and water heaters.
Related antonym
The opposite of an anode is a cathode. When the current through the device is reversed, the electrodes switch functions, so the anode becomes the cathode and the cathode becomes the anode, for as long as the reversed current is applied. The exception is diodes, where electrode naming is always based on the forward current direction.
See also
Anodizing
Galvanic anode
Gas-filled tube
Primary cell
Redox (reduction–oxidation)
References
External links
The Cathode Ray Tube site
How to define anode and cathode
Valence Technologies Inc. battery education page
Cathodic Protection Technical Library
Electrodes
|
https://en.wikipedia.org/wiki/Adhesive
|
Adhesive, also known as glue, cement, mucilage, or paste, is any non-metallic substance applied to one or both surfaces of two separate items that binds them together and resists their separation.
The use of adhesives offers certain advantages over other binding techniques such as sewing, mechanical fastenings, or welding. These include the ability to bind different materials together, the more efficient distribution of stress across a joint, the cost-effectiveness of an easily mechanized process, and greater flexibility in design. Disadvantages of adhesive use include decreased stability at high temperatures, relative weakness in bonding large objects with a small bonding surface area, and greater difficulty in separating objects during testing. Adhesives are typically organized by the method of adhesion followed by reactive or non-reactive, a term which refers to whether the adhesive chemically reacts in order to harden. Alternatively, they can be organized either by their starting physical phase or whether their raw stock is of natural or synthetic origin.
Adhesives may be found naturally or produced synthetically. The earliest human use of adhesive-like substances was approximately 200,000 years ago, when Neanderthals produced tar from the dry distillation of birch bark for use in binding stone tools to wooden handles. The first references to adhesives in literature appeared in approximately 2000 BC. The Greeks and Romans made great contributions to the development of adhesives. In Europe, glue was not widely used until the period AD 1500–1700. From then until the 1900s increases in adhesive use and discovery were relatively gradual. Only since the 20th century has the development of synthetic adhesives accelerated rapidly, and innovation in the field continues to the present.
History
Evidence of the earliest known use of adhesives was discovered in central Italy when two stone flakes partially covered with birch-bark tar and a third uncovered stone from the Middle Pleistocene era (circa 200,000 years ago) were found. This is thought to be the oldest discovered human use of tar-hafted stones.
The birch-bark-tar adhesive is a simple, one-component adhesive. A study from 2019 showed that birch tar production can be a very simple process—merely involving the burning of birch bark near smooth vertical surfaces in open air conditions. Although sticky enough, plant-based adhesives are brittle and vulnerable to environmental conditions. The first use of compound adhesives was discovered in Sibudu, South Africa. Here, 70,000-year-old stone segments that were once inserted in axe hafts were discovered covered with an adhesive composed of plant gum and red ochre (natural iron oxide) as adding ochre to plant gum produces a stronger product and protects the gum from disintegrating under wet conditions. The ability to produce stronger adhesives allowed middle Stone Age humans to attach stone segments to sticks in greater variations, which led to the development of new tools.
More recent examples of adhesive use by prehistoric humans have been found at the burial sites of ancient tribes. Archaeologists studying the sites found that approximately 6,000 years ago the tribesmen had buried their dead together with food found in broken clay pots repaired with tree resins. Another investigation by archaeologists uncovered the use of bituminous cements to fasten ivory eyeballs to statues in Babylonian temples dating to approximately 4000 BC.
In 2000, a paper revealed the discovery of a 5,200-year-old man nicknamed the "Tyrolean Iceman" or "Ötzi", who was preserved in a glacier near the Austria-Italy border. Several of his belongings were found with him including two arrows with flint arrowheads and a copper hatchet, each with evidence of organic glue used to connect the stone or metal parts to the wooden shafts. The glue was analyzed as pitch, which requires the heating of tar during its production. The retrieval of this tar requires a transformation of birch bark by means of heat, in a process known as pyrolysis.
The first references to adhesives in literature appeared in approximately 2000 BC. Further historical records of adhesive use are found from the period spanning 1500–1000 BC. Artifacts from this period include paintings depicting wood gluing operations and a casket made of wood and glue in King Tutankhamun's tomb. Other ancient Egyptian artifacts employ animal glue for bonding or lamination. Such lamination of wood for bows and furniture is thought to have extended their life and was accomplished using casein (milk protein)-based glues. The ancient Egyptians also developed starch-based pastes for the bonding of papyrus to clothing and a plaster of Paris-like material made of calcined gypsum.
From AD 1 to 500 the Greeks and Romans made great contributions to the development of adhesives. Wood veneering and marquetry were developed, the production of animal and fish glues refined, and other materials utilized. Egg-based pastes were used to bond gold leaves, and incorporated various natural ingredients such as blood, bone, hide, milk, cheese, vegetables, and grains. The Greeks began the use of slaked lime as mortar while the Romans furthered mortar development by mixing lime with volcanic ash and sand. This material, known as pozzolanic cement, was used in the construction of the Roman Colosseum and Pantheon. The Romans were also the first people known to have used tar and beeswax as caulk and sealant between the wooden planks of their boats and ships.
In Central Asia, the rise of the Mongols in approximately AD 1000 can be partially attributed to the good range and power of the bows of Genghis Khan's hordes. These bows were made of a bamboo core, with horn on the belly (facing towards the archer) and sinew on the back, bound together with animal glue.
In Europe, glue fell into disuse until the period AD 1500–1700. At this time, world-renowned cabinet and furniture makers such as Thomas Chippendale and Duncan Phyfe began to use adhesives to hold their products together. In 1690, the first commercial glue plant was established in The Netherlands. This plant produced glues from animal hides. In 1750, the first British glue patent was issued for fish glue. The following decades of the next century witnessed the manufacture of casein glues in German and Swiss factories. In 1876, the first U.S. patent (number 183,024) was issued to the Ross brothers for the production of casein glue.
The first U.S. postage stamps used starch-based adhesives when issued in 1847. The first US patent (number 61,991) on dextrin (a starch derivative) adhesive was issued in 1867.
Natural rubber was first used as material for adhesives starting in 1830, which marked the starting point of the modern adhesive. In 1862, a British patent (number 3288) was issued for the plating of metal with brass by electrodeposition to obtain a stronger bond to rubber. The development of the automobile and the need for rubber shock mounts required stronger and more durable bonds of rubber and metal. This spurred the development of cyclized rubber treated in strong acids. By 1927, this process was used to produce solvent-based thermoplastic rubber cements for metal to rubber bonding.
Natural rubber-based sticky adhesives were first used on a backing by Henry Day (US Patent 3,965) in 1845. Later these kinds of adhesives were used in cloth backed surgical and electric tapes. By 1925, the pressure-sensitive tape industry was born.
Today, sticky notes, Scotch Tape, and other tapes are examples of pressure-sensitive adhesives (PSA).
A key step in the development of synthetic plastics was the introduction of a thermoset plastic known as Bakelite phenolic in 1910. Within two years, phenolic resin was applied to plywood as a coating varnish. In the early 1930s, phenolics gained importance as adhesive resins.
The 1920s, 1930s, and 1940s witnessed great advances in the development and production of new plastics and resins due to the First and Second World Wars. These advances greatly improved the development of adhesives by allowing the use of newly developed materials that exhibited a variety of properties. With changing needs and ever evolving technology, the development of new synthetic adhesives continues to the present. However, due to their low cost, natural adhesives are still more commonly used.
Types
Adhesives are typically organized by the method of adhesion. These are then organized into reactive and non-reactive adhesives, which refers to whether the adhesive chemically reacts in order to harden. Alternatively they can be organized by whether the raw stock is of natural, or synthetic origin, or by their starting physical phase.
By reactiveness
Non-reactive
Drying
There are two types of adhesives that harden by drying: solvent-based adhesives and polymer dispersion adhesives, also known as emulsion adhesives. Solvent-based adhesives are a mixture of ingredients (typically polymers) dissolved in a solvent. White glue, contact adhesives and rubber cements are members of the drying adhesive family. As the solvent evaporates, the adhesive hardens. Depending on its chemical composition, an adhesive will adhere to different materials to greater or lesser degrees.
Polymer dispersion adhesives are milky-white dispersions often based on polyvinyl acetate (PVAc). They are used extensively in the woodworking and packaging industries. They are also used with fabrics and fabric-based components, and in engineered products such as loudspeaker cones.
Pressure-sensitive
Pressure-sensitive adhesives (PSA) form a bond by the application of light pressure to marry the adhesive with the adherend. They are designed to have a balance between flow and resistance to flow. The bond forms because the adhesive is soft enough to flow (i.e., "wet") to the adherend. The bond has strength because the adhesive is hard enough to resist flow when stress is applied to the bond. Once the adhesive and the adherend are in close proximity, molecular interactions, such as van der Waals forces, become involved in the bond, contributing significantly to its ultimate strength.
PSAs are designed for either permanent or removable applications. Examples of permanent applications include safety labels for power equipment, foil tape for HVAC duct work, automotive interior trim assembly, and sound/vibration damping films. Some high performance permanent PSAs exhibit high adhesion values and can support kilograms of weight per square centimeter of contact area, even at elevated temperatures. Permanent PSAs may initially be removable (for example to recover mislabeled goods) and build adhesion to a permanent bond after several hours or days.
Removable adhesives are designed to form a temporary bond, and ideally can be removed after months or years without leaving residue on the adherend. Removable adhesives are used in applications such as surface protection films, masking tapes, bookmark and note papers, barcode labels, price marking labels, promotional graphics materials, and for skin contact (wound care dressings, EKG electrodes, athletic tape, analgesic and transdermal drug patches, etc.). Some removable adhesives are designed to repeatedly stick and unstick. They have low adhesion, and generally cannot support much weight. Pressure-sensitive adhesive is used in Post-it notes.
Pressure-sensitive adhesives are manufactured with either a liquid carrier or in 100% solid form. Articles are made from liquid PSAs by coating the adhesive and drying off the solvent or water carrier. They may be further heated to initiate a cross-linking reaction and increase molecular weight. 100% solid PSAs may be low-viscosity polymers that are coated and then reacted with radiation to increase molecular weight and form the adhesive, or they may be high-viscosity materials that are heated to reduce viscosity enough to allow coating, and then cooled to their final form. The major raw materials for PSAs are acrylate-based polymers.
Contact
Contact adhesives are used in strong bonds with high shear-resistance like laminates, such as bonding Formica to a wooden counter, and in footwear, as in attaching outsoles to uppers. Natural rubber and polychloroprene (Neoprene) are commonly used contact adhesives. Both of these elastomers undergo strain crystallization.
Contact adhesives must be applied to both surfaces and allowed some time to dry before the two surfaces are pushed together. Some contact adhesives require as long as 24 hours to dry before the surfaces are to be held together. Once the surfaces are pushed together, the bond forms very quickly. It is usually not necessary to apply pressure for a long time, so there is less need for clamps.
Hot
Hot adhesives, also known as hot melt adhesives, are thermoplastics applied in molten form (in the 65–180 °C range) which solidify on cooling to form strong bonds between a wide range of materials. Ethylene-vinyl acetate-based hot-melts are particularly popular for crafts because of their ease of use and the wide range of common materials they can join. A glue gun is one method of applying hot adhesives. The glue gun melts the solid adhesive, then allows the liquid to pass through its barrel onto the material, where it solidifies.
Thermoplastic glue may have been invented around 1940 by Procter & Gamble as a solution to the problem that water-based adhesives, commonly used in packaging at that time, failed in humid climates, causing packages to open. However, water-based adhesives are still of strong interest as they typically do not contain volatile solvents.
Reactive
Anaerobic
Anaerobic adhesives cure when in contact with metal, in the absence of oxygen. They work well in a close-fitting space, as when used as a thread-locking fluid.
Multi-part
Multi-component adhesives harden by mixing two or more components which chemically react. This reaction causes polymers to cross-link into acrylates, urethanes, and epoxies.
There are several commercial combinations of multi-component adhesives in use in industry. Some of these combinations are:
Polyester resin & polyurethane resin
Polyols & polyurethane resin
Acrylic polymers & polyurethane resins
The individual components of a multi-component adhesive are not adhesive by nature. The individual components react with each other after being mixed and show full adhesion only on curing. The multi-component resins can be either solvent-based or solvent-less. The solvents present in the adhesives are a medium for the polyester or the polyurethane resin. The solvent is dried during the curing process.
Pre-mixed and frozen adhesives
Pre-mixed and frozen adhesives (PMFs) are adhesives that are mixed, deaerated, packaged, and frozen. As it is necessary for PMFs to remain frozen before use, once they are frozen at −80 °C they are shipped with dry ice and are required to be stored at or below −40 °C. PMF adhesives eliminate mixing mistakes by the end user and reduce exposure of curing agents that can contain irritants or toxins. PMFs were introduced commercially in the 1960s and are commonly used in aerospace and defense.
One-part
One-part adhesives harden via a chemical reaction with an external energy source, such as radiation, heat, and moisture.
Ultraviolet (UV) light curing adhesives, also known as light curing materials (LCM), have become popular within the manufacturing sector due to their rapid curing time and strong bond strength. Light curing adhesives can cure in as little as one second and many formulations can bond dissimilar substrates (materials) and withstand harsh temperatures. These qualities make UV curing adhesives essential to the manufacturing of items in many industrial markets such as electronics, telecommunications, medical, aerospace, glass, and optical. Unlike traditional adhesives, UV light curing adhesives not only bond materials together but they can also be used to seal and coat products. They are generally acrylic-based.
Heat curing adhesives consist of a pre-made mixture of two or more components. When heat is applied the components react and cross-link. This type of adhesive includes thermoset epoxies, urethanes, and polyimides.
Moisture curing adhesives cure when they react with moisture present on the substrate surface or in the air. This type of adhesive includes cyanoacrylates and urethanes.
By origin
Natural
Natural adhesives are made from organic sources such as vegetable starch (dextrin), natural resins, or animals (e.g. the milk protein casein and hide-based animal glues). These are often referred to as bioadhesives.
One example is a simple paste made by cooking flour in water. Starch-based adhesives are used in corrugated board and paper sack production, paper tube winding, and wallpaper adhesives. Casein glue is mainly used to adhere glass bottle labels. Animal glues have traditionally been used in bookbinding, wood joining, and many other areas but now are largely replaced by synthetic glues except in specialist applications like the production and repair of stringed instruments. Albumen made from the protein component of blood has been used in the plywood industry. Masonite, a wood hardboard, was originally bonded using natural wood lignin, an organic polymer, though most modern particle boards such as MDF use synthetic thermosetting resins.
Synthetic
Synthetic adhesives are made out of organic compounds. Many are based on elastomers, thermoplastics, emulsions, and thermosets. Examples of thermosetting adhesives are: epoxy, polyurethane, cyanoacrylate and acrylic polymers. The first commercially produced synthetic adhesive was Karlsons Klister in the 1920s.
Application
Applicators of different adhesives are designed according to the adhesive being used and the size of the area to which the adhesive will be applied. The adhesive is applied to either one or both of the materials being bonded. The pieces are aligned and pressure is added to aid in adhesion and rid the bond of air bubbles.
Common ways of applying an adhesive include brushes, rollers, using films or pellets, spray guns and applicator guns (e.g., caulk gun). All of these can be used manually or automated as part of a machine.
Mechanisms of adhesion
For an adhesive to be effective it must have three main properties. Firstly, it must be able to wet the base material. Wetting is the ability of a liquid to maintain contact with a solid surface. It must also increase in strength after application, and finally it must be able to transmit load between the two surfaces/substrates being adhered.
Adhesion, the attachment between adhesive and substrate may occur either by mechanical means, in which the adhesive works its way into small pores of the substrate, or by one of several chemical mechanisms. The strength of adhesion depends on many factors, including the means by which it occurs.
In some cases, an actual chemical bond occurs between adhesive and substrate. In others, electrostatic forces, as in static electricity, hold the substances together. A third mechanism involves the van der Waals forces that develop between molecules. A fourth means involves the moisture-aided diffusion of the glue into the substrate, followed by hardening.
Methods to improve adhesion
The quality of adhesive bonding depends strongly on the ability of the adhesive to efficiently cover (wet) the substrate area. This happens when the surface energy of the substrate is greater than the surface energy of the adhesive. However, high-strength adhesives have high surface energy. Thus, they bond poorly to low-surface-energy polymers or other materials. To solve this problem, surface treatment can be used to increase the surface energy as a preparation step before adhesive bonding. Importantly, surface preparation provides a reproducible surface allowing consistent bonding results. The commonly used surface activation techniques include plasma activation, flame treatment and wet chemistry priming.
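The wetting requirement can be stated quantitatively through the contact angle θ of a drop of adhesive on the substrate. Young's equation (standard surface science, included here as background rather than taken from this text) balances the interfacial energies:

γSV = γSL + γLV cos θ

where γSV, γSL and γLV are the solid–vapour, solid–liquid and liquid–vapour interfacial energies. Good wetting (θ approaching zero) requires the substrate's surface energy to be high relative to the adhesive's, which is exactly what the surface treatments above are intended to achieve.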
Failure
There are several factors that could contribute to the failure of two adhered surfaces. Sunlight and heat may weaken the adhesive. Solvents can deteriorate or dissolve adhesive. Physical stresses may also cause the separation of surfaces. When subjected to loading, debonding may occur at different locations in the adhesive joint. The major fracture types are the following:
Cohesive fracture
Cohesive fracture is obtained if a crack propagates in the bulk polymer which constitutes the adhesive. In this case the surfaces of both adherends after debonding will be covered by fractured adhesive. The crack may propagate in the center of the layer or near an interface. For this last case, the cohesive fracture can be said to be "cohesive near the interface".
Adhesive fracture
Adhesive fracture (sometimes referred to as interfacial fracture) is when debonding occurs between the adhesive and the adherend. In most cases, the occurrence of adhesive fracture for a given adhesive goes along with smaller fracture toughness.
Other types of fracture
Other types of fracture include:
The mixed type, which occurs if the crack propagates at some spots in a cohesive and in others in an interfacial manner. Mixed fracture surfaces can be characterised by a certain percentage of adhesive and cohesive areas.
The alternating crack path type which occurs if the cracks jump from one interface to the other. This type of fracture appears in the presence of tensile pre-stresses in the adhesive layer.
Fracture can also occur in the adherend if the adhesive is tougher than the adherend. In this case, the adhesive remains intact and is still bonded to one substrate and remnants of the other. For example, when one removes a price label, the adhesive usually remains on the label and the surface. This is cohesive failure. If, however, a layer of paper remains stuck to the surface, the adhesive has not failed. Another example is when someone tries to pull apart Oreo cookies and all the filling remains on one side; this is an adhesive failure, rather than a cohesive failure.
Design of adhesive joints
As a general design rule, the material properties of the object need to be greater than the forces anticipated during its use (geometry, loads, etc.). The engineering work will consist of having a good model to evaluate the function. For most adhesive joints, this can be achieved using fracture mechanics. Concepts such as the stress concentration factor and the strain energy release rate can be used to predict failure. In such models, the behavior of the adhesive layer itself is neglected and only the adherends are considered.
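In fracture-mechanics terms (a standard criterion, stated here as background rather than taken from this text), a crack in the joint advances when the strain energy release rate G reaches the critical value Gc, the fracture toughness of the joint:

G ≥ Gc

For a linear-elastic joint loaded in mode I, G is related to the stress intensity factor by GI = KI²/E′, where E′ is the effective modulus (E in plane stress, E/(1 − ν²) in plane strain); an acceptable design keeps G below Gc everywhere in the bond for the anticipated loads.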
Failure will also very much depend on the opening mode of the joint.
Mode I is an opening or tensile mode where the loadings are normal to the crack.
Mode II is a sliding or in-plane shear mode where the crack surfaces slide over one another in a direction perpendicular to the leading edge of the crack. This is typically the mode for which the adhesive exhibits the highest resistance to fracture.
Mode III is a tearing or antiplane shear mode.
As the loads are usually fixed, an acceptable design will result from combination of a material selection procedure and geometry modifications, if possible. In adhesively bonded structures, the global geometry and loads are fixed by structural considerations and the design procedure focuses on the material properties of the adhesive and on local changes on the geometry.
Increasing the joint resistance is usually obtained by designing its geometry so that:
The bonded zone is large
It is mainly loaded in mode II
Stable crack propagation will follow the appearance of a local failure.
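The first rule is easy to quantify with a uniform-stress estimate: for a single-lap joint, the average shear stress is the load divided by the bonded area, so enlarging the overlap lowers the stress the adhesive must carry. The sketch below uses illustrative numbers and deliberately ignores the stress concentrations at the overlap ends that a fracture-mechanics model would capture.

```python
# Minimal sketch of the "large bonded zone" rule: average shear stress
# in a single-lap joint falls as the bonded area grows. Values are
# illustrative; the uniform-stress model ignores peak stresses at the
# ends of the overlap.

def average_shear_stress(force_n, width_m, overlap_m):
    """tau = F / (w * L), returned in Pa."""
    return force_n / (width_m * overlap_m)

force = 1500.0   # N, assumed service load
width = 0.025    # m, joint width
for overlap in (0.010, 0.020, 0.040):  # m
    tau = average_shear_stress(force, width, overlap)
    print(f"overlap {overlap * 1000:.0f} mm -> tau = {tau / 1e6:.1f} MPa")
```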
Shelf life
Some glues and adhesives have a limited shelf life. Shelf life depends on multiple factors, the foremost of which is temperature. Adhesives may lose effectiveness at high temperatures and may also become increasingly stiff. Other factors affecting shelf life include exposure to oxygen or water vapor.
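The temperature dependence is often estimated with an Arrhenius acceleration factor, which models how much faster the degrading reactions run at an elevated storage temperature. The sketch below assumes an activation energy of 0.7 eV purely for illustration; a real adhesive needs a measured value from its manufacturer.

```python
import math

# Hedged sketch: estimating how warmer storage shortens shelf life via
# an Arrhenius acceleration factor. The activation energy is an assumed
# illustrative value, not a property of any specific adhesive.

K_B = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(t_ref_c, t_store_c, ea_ev=0.7):
    t_ref, t_store = t_ref_c + 273.15, t_store_c + 273.15
    return math.exp(ea_ev / K_B * (1.0 / t_ref - 1.0 / t_store))

rated_months_at_21c = 12.0
af = acceleration_factor(21.0, 35.0)   # storing at 35 C instead of 21 C
print(f"~{rated_months_at_21c / af:.1f} months at 35 C (factor {af:.1f}x)")
```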
See also
Impact glue
References
Bibliography
Kinloch, Anthony J. (1987). Adhesion and Adhesives: Science and Technology. London: Chapman and Hall.
External links
Educational portal on adhesives and sealants
RoyMech: The theory of adhesive bonding
3M's Adhesive & Tapes Classification
Database of adhesives for attaching different materials
|
https://en.wikipedia.org/wiki/AMD
|
Advanced Micro Devices, Inc., commonly abbreviated as AMD, is an American multinational semiconductor company based in Santa Clara, California, that develops computer processors and related technologies for business and consumer markets.
The company was founded in 1969 by Jerry Sanders and a group of other technology professionals. AMD's early products were primarily memory chips and other components for computers. The company later expanded into the microprocessor market, competing with Intel, its main rival in the industry. In the early 2000s, AMD experienced significant growth and success, thanks in part to its strong position in the PC market and the success of its Athlon and Opteron processors. However, the company faced challenges in the late 2000s and early 2010s, as it struggled to keep up with Intel in the race to produce faster and more powerful processors. In the late 2010s, AMD regained market share thanks to the success of its Ryzen processors, which returned the company to competitiveness with Intel in consumer and business markets, including cloud workloads. AMD's processors are used in a wide range of computing devices, including personal computers, servers, laptops, and gaming consoles. While it initially manufactured its own processors, the company later outsourced its manufacturing, a practice known as going fabless, after GlobalFoundries was spun off in 2009.
AMD's main products include microprocessors, motherboard chipsets, embedded processors, graphics processors, and FPGAs for servers, workstations, personal computers, and embedded system applications. The company has also expanded into new markets, such as the data center and gaming markets, and has announced plans to enter the high-performance computing market.
History
First twelve years
Advanced Micro Devices was formally incorporated by Jerry Sanders, along with seven of his colleagues from Fairchild Semiconductor, on May 1, 1969. Sanders, an electrical engineer who was the director of marketing at Fairchild, had, like many Fairchild executives, grown frustrated with the increasing lack of support, opportunity, and flexibility within the company. He later decided to leave to start his own semiconductor company, following the footsteps of Robert Noyce (developer of the first silicon integrated circuit at Fairchild in 1959) and Gordon Moore, who together founded the semiconductor company Intel in July 1968.
In September 1969, AMD moved from its temporary location in Santa Clara to Sunnyvale, California. To immediately secure a customer base, AMD initially became a second source supplier of microchips designed by Fairchild and National Semiconductor. AMD first focused on producing logic chips. The company guaranteed quality control to United States Military Standard, an advantage in the early computer industry since unreliability in microchips was a distinct problem that customers – including computer manufacturers, the telecommunications industry, and instrument manufacturers – wanted to avoid.
In November 1969, the company manufactured its first product: the Am9300, a 4-bit MSI shift register, which began selling in 1970. Also in 1970, AMD produced its first proprietary product, the Am2501 logic counter, which was highly successful. Its bestselling product in 1971 was the Am2505, the fastest multiplier available.
In 1971, AMD entered the RAM chip market, beginning with the Am3101, a 64-bit bipolar RAM. That year AMD also greatly increased the sales volume of its linear integrated circuits, and by year-end the company's total annual sales reached US$4.6 million.
AMD went public in September 1972. The company was a second source for Intel MOS/LSI circuits by 1973, with products such as Am14/1506 and Am14/1507, dual 100-bit dynamic shift registers. By 1975, AMD was producing 212 products – of which 49 were proprietary, including the Am9102 (a static N-channel 1024-bit RAM) and three low-power Schottky MSI circuits: Am25LS07, Am25LS08, and Am25LS09.
Intel had created the first microprocessor, its 4-bit 4004, in 1971. By 1975, AMD entered the microprocessor market with the Am9080, a reverse-engineered clone of the Intel 8080, and the Am2900 bit-slice microprocessor family. When Intel began installing microcode in its microprocessors in 1976, it entered into a cross-licensing agreement with AMD, which was granted a copyright license to the microcode in its microprocessors and peripherals, effective October 1976.
In 1977, AMD entered into a joint venture with Siemens, a German engineering conglomerate wishing to enhance its technology expertise and enter the American market. Siemens purchased 20% of AMD's stock, giving the company an infusion of cash to increase its product lines. The two companies also jointly established Advanced Micro Computers (AMC), located in Silicon Valley and in Germany, allowing AMD to enter the microcomputer development and manufacturing field, in particular based on AMD's second-source Zilog Z8000 microprocessors. When the two companies' vision for Advanced Micro Computers diverged, AMD bought out Siemens' stake in the American division in 1979. AMD closed Advanced Micro Computers in late 1981 after switching focus to manufacturing second-source Intel x86 microprocessors.
Total sales in fiscal year 1978 topped $100 million, and in 1979, AMD debuted on the New York Stock Exchange. In 1979, production also began on AMD's new semiconductor fabrication plant in Austin, Texas; the company already had overseas assembly facilities in Penang and Manila, and began construction on a fabrication plant in San Antonio in 1981. In 1980, AMD began supplying semiconductor products for telecommunications, an industry undergoing rapid expansion and innovation.
Technology exchange agreement with Intel
Intel had introduced the first x86 microprocessors in 1978. In 1981, IBM created its PC, and wanted Intel's x86 processors, but only under the condition that Intel also provide a second-source manufacturer for its patented x86 microprocessors. Intel and AMD entered into a 10-year technology exchange agreement, first signed in October 1981 and formally executed in February 1982. The terms of the agreement were that each company could acquire the right to become a second-source manufacturer of semiconductor products developed by the other; that is, each party could "earn" the right to manufacture and sell a product developed by the other, if agreed to, by exchanging the manufacturing rights to a product of equivalent technical complexity. The technical information and licenses needed to make and sell a part would be exchanged for a royalty to the developing company. The 1982 agreement also extended the 1976 AMD–Intel cross-licensing agreement through 1995. The agreement included the right to invoke arbitration of disagreements, and after five years the right of either party to end the agreement with one year's notice. The main result of the 1982 agreement was that AMD became a second-source manufacturer of Intel's x86 microprocessors and related chips, and Intel provided AMD with database tapes for its 8086, 80186, and 80286 chips. However, in the event of a bankruptcy or takeover of AMD, the cross-licensing agreement would be effectively canceled.
Beginning in 1982, AMD began volume-producing second-source Intel-licensed 8086, 8088, 80186, and 80188 processors, and by 1984, its own Am286 clone of Intel's 80286 processor, for the rapidly growing market of IBM PCs and IBM clones. It also continued its successful concentration on proprietary bipolar chips.
The company continued to spend greatly on research and development, and created the world's first 512K EPROM in 1984. That year, AMD was listed in the book The 100 Best Companies to Work for in America, and later made the Fortune 500 list for the first time in 1985.
By mid-1985, the microchip market experienced a severe downturn, mainly due to long-term aggressive trade practices (dumping) from Japan, but also due to a crowded and non-innovative chip market in the United States. AMD rode out the mid-1980s crisis by aggressively innovating and modernizing, devising the Liberty Chip program of designing and manufacturing one new chip or chipset per week for 52 weeks in fiscal year 1986, and by heavily lobbying the U.S. government until sanctions and restrictions were put in place to prevent predatory Japanese pricing. During this time, AMD withdrew from the DRAM market, and made some headway into the CMOS market, which it had lagged in entering, having focused instead on bipolar chips.
AMD had some success in the mid-1980s with the AMD7910 and AMD7911 "World Chip" FSK modem, one of the first multi-standard devices that covered both Bell and CCITT tones at up to 1200 baud half duplex or 300/300 full duplex. Beginning in 1986, AMD embraced the perceived shift toward RISC with their own AMD Am29000 (29k) processor; the 29k survived as an embedded processor. The company also increased its EPROM memory market share in the late 1980s. Throughout the 1980s, AMD was a second-source supplier of Intel x86 processors. In 1991, it introduced its own 386-compatible Am386, an AMD-designed chip. Creating its own chips, AMD began to compete directly with Intel.
AMD had a large, successful flash memory business, even during the dotcom bust. In 2003, to divest some manufacturing and aid its overall cash flow, which was under duress from aggressive microprocessor competition from Intel, AMD spun off its flash memory business and manufacturing into Spansion, a joint venture with Fujitsu, which had been co-manufacturing flash memory with AMD since 1993. In December 2005, AMD divested itself of Spansion to focus on the microprocessor market, and Spansion went public in an IPO.
Acquisition of ATI, spin-off of GlobalFoundries, and acquisition of Xilinx
On July 24, 2006, AMD announced its acquisition of the Canadian 3D graphics card company ATI Technologies. AMD paid $4.3 billion and 58 million shares of its capital stock, for a total of approximately $5.4 billion. The transaction was completed on October 25, 2006. On August 30, 2010, AMD announced that it would retire the ATI brand name for its graphics chipsets in favor of the AMD brand name.
In October 2008, AMD announced plans to spin off manufacturing operations in the form of GlobalFoundries Inc., a multibillion-dollar joint venture with Advanced Technology Investment Co., an investment company formed by the government of Abu Dhabi. The partnership and spin-off gave AMD an infusion of cash and allowed it to focus solely on chip design. To assure the Abu Dhabi investors of the new venture's success, AMD's CEO Hector Ruiz stepped down in July 2008, while remaining executive chairman, in preparation for becoming chairman of GlobalFoundries in March 2009. President and COO Dirk Meyer became AMD's CEO. Recessionary losses necessitated AMD cutting 1,100 jobs in 2009.
In August 2011, AMD announced that former Lenovo executive Rory Read would be joining the company as CEO, replacing Meyer. In November 2011, AMD announced plans to lay off more than 10% (1,400) of its employees from across all divisions worldwide. In October 2012, it announced plans to lay off an additional 15% of its workforce to reduce costs in the face of declining sales revenue.
AMD acquired the low-power server manufacturer SeaMicro in early 2012, with an eye to bringing out an Arm64 server chip.
On October 8, 2014, AMD announced that Rory Read had stepped down after three years as president and chief executive officer. He was succeeded by Lisa Su, a key lieutenant who had been serving as chief operating officer since June.
On October 16, 2014, AMD announced a new restructuring plan along with its Q3 results. Effective July 1, 2014, AMD reorganized into two business groups: Computing and Graphics, which primarily includes desktop and notebook processors and chipsets, discrete GPUs, and professional graphics; and Enterprise, Embedded, and Semi-Custom, which primarily includes server and embedded processors, dense servers, semi-custom SoC products (including solutions for gaming consoles), engineering services, and royalties. As part of this restructuring, AMD announced that 7% of its global workforce would be laid off by the end of 2014.
After the GlobalFoundries spin-off and subsequent layoffs, AMD was left with significant vacant space at 1 AMD Place, its aging Sunnyvale headquarters office complex. In August 2016, AMD's 47 years in Sunnyvale came to a close when it signed a lease with the Irvine Company for a new 220,000 sq. ft. headquarters building in Santa Clara. AMD's new location at Santa Clara Square faces the headquarters of archrival Intel across the Bayshore Freeway and San Tomas Aquino Creek. Around the same time, AMD also agreed to sell 1 AMD Place to the Irvine Company. In April 2019, the Irvine Company secured approval from the Sunnyvale City Council of its plans to demolish 1 AMD Place and redevelop the entire 32-acre site into townhomes and apartments.
In October 2020, AMD announced that it was acquiring Xilinx in an all-stock transaction. The acquisition was completed in February 2022, with an estimated acquisition price of $50 billion.
In October 2023, AMD acquired an open-source AI software provider, Nod.ai, to bolster its AI software ecosystem.
List of CEOs
Jerry Sanders (1969–2002)
Hector Ruiz (2002–2008)
Dirk Meyer (2008–2011)
Rory Read (2011–2014)
Lisa Su (2014–present)
Products
CPUs and APUs
IBM PC and the x86 architecture
In February 1982, AMD signed a contract with Intel, becoming a licensed second-source manufacturer of 8086 and 8088 processors. IBM wanted to use the Intel 8088 in its IBM PC, but its policy at the time was to require at least two sources for its chips. AMD later produced the Am286 under the same arrangement. In 1984, Intel internally decided to no longer cooperate with AMD in supplying product information to shore up its advantage in the marketplace, and delayed and eventually refused to convey the technical details of the Intel 80386. In 1987, AMD invoked arbitration over the issue, and Intel reacted by canceling the 1982 technological-exchange agreement altogether. After three years of testimony, AMD eventually won in arbitration in 1992, but Intel disputed this decision. Another long legal dispute followed, ending in 1994 when the Supreme Court of California sided with the arbitrator and AMD.
In 1990, Intel countersued AMD, renegotiating AMD's right to use derivatives of Intel's microcode for its cloned processors. In the face of uncertainty during the legal dispute, AMD was forced to develop clean room designed versions of Intel code for its x386 and x486 processors, the former long after Intel had released its own x386 in 1985. In March 1991, AMD released the Am386, its clone of the Intel 386 processor. By October of the same year it had sold one million units.
In 1993, AMD introduced the first of the Am486 family of processors, which proved popular with a large number of original equipment manufacturers, including Compaq, which signed an exclusive agreement to use the Am486. The Am5x86, another Am486-based processor, was released in November 1995, and continued AMD's success as a fast, cost-effective processor.
Finally, in an agreement effective 1996, AMD received the rights to the microcode in Intel's x386 and x486 processor families, but not the rights to the microcode in the following generations of processors.
K5, K6, Athlon, Duron, and Sempron
AMD's first in-house x86 processor was the K5, launched in 1996. The "K" in its name was a reference to Kryptonite, the only substance known to harm comic book character Superman. This itself was a reference to Intel's hegemony over the market, i.e., an anthropomorphization of them as Superman. The number "5" was a reference to the fifth generation of x86 processors; rival Intel had previously introduced its line of fifth-generation x86 processors as Pentium because the U.S. Patent and Trademark Office had ruled that mere numbers could not be trademarked.
In 1996, AMD purchased NexGen, specifically for the rights to their Nx series of x86-compatible processors. AMD gave the NexGen design team their own building, left them alone, and gave them time and money to rework the Nx686. The result was the K6 processor, introduced in 1997. Although it was based on Socket 7, variants such as K6-III/450 were faster than Intel's Pentium II (sixth-generation processor).
The K7 was AMD's seventh-generation x86 processor, making its debut under the brand name Athlon on June 23, 1999. Unlike previous AMD processors, it could not be used on the same motherboards as Intel's, due to licensing issues surrounding Intel's Slot 1 connector, and instead used a Slot A connector whose electrical interface was derived from the DEC Alpha processor bus. The Duron was a lower-cost and limited version of the Athlon (64KB instead of 256KB L2 cache) in a 462-pin socketed PGA (Socket A) or soldered directly onto the motherboard. Sempron was released as a lower-cost Athlon XP, replacing Duron in the Socket A PGA era. It has since been migrated upward to all new sockets, up to AM3.
On October 9, 2001, the Athlon XP was released. On February 10, 2003, the Athlon XP with 512KB L2 Cache was released.
Athlon 64, Opteron and Phenom
The K8 was a major revision of the K7 architecture, with the most notable features being the addition of a 64-bit extension to the x86 instruction set (called x86-64, AMD64, or x64), the incorporation of an on-chip memory controller, and the implementation of an extremely high-performance point-to-point interconnect called HyperTransport, as part of the Direct Connect Architecture. The technology was initially launched as the Opteron server-oriented processor on April 22, 2003. Shortly thereafter, it was incorporated into a product for desktop PCs, branded Athlon 64.
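As an aside, the 64-bit extension introduced with the K8 still surfaces under its various names (x86-64, AMD64, x64) in today's operating systems. A trivial check, with no assumptions beyond a standard Python install:

```python
# The K8's 64-bit ISA is reported under different names by different
# platforms: Linux typically says "x86_64", Windows says "AMD64".
import platform

print(platform.machine())  # e.g. "x86_64" or "AMD64"
```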
On April 21, 2005, AMD released the first dual-core Opteron, an x86-based server CPU. A month later, it released the Athlon 64 X2, the first desktop-based dual-core processor family. In May 2007, AMD abandoned the string "64" in its dual-core desktop product branding, becoming Athlon X2, downplaying the significance of 64-bit computing in its processors. Further updates involved improvements to the microarchitecture, and a shift of the target market from mainstream desktop systems to value dual-core desktop systems. In 2008, AMD started to release dual-core Sempron processors exclusively in China, branded as the Sempron 2000 series, with lower HyperTransport speed and smaller L2 cache. AMD completed its dual-core product portfolio for each market segment.
In September 2007, AMD released the first server Opteron K10 processors, followed in November by the Phenom processor for desktops. K10 processors came in dual-core, triple-core, and quad-core versions, with all cores on a single die. AMD released a new platform codenamed "Spider", which used the new Phenom processor, as well as an R770 GPU and a 790 GX/FX chipset from the AMD 700 chipset series. However, the Spider platform's processors were built on a 65 nm process, which was uncompetitive with Intel's smaller, more power-efficient 45 nm process.
In January 2009, AMD released a new processor line dubbed Phenom II, a refresh of the original Phenom built using the 45 nm process. AMD's new platform, codenamed "Dragon", used the new Phenom II processor, and an ATI R770 GPU from the R700 GPU family, as well as a 790 GX/FX chipset from the AMD 700 chipset series. The Phenom II came in dual-core, triple-core and quad-core variants, all using the same die, with cores disabled for the triple-core and dual-core versions. The Phenom II resolved issues that the original Phenom had, including a low clock speed, a small L3 cache, and a Cool'n'Quiet bug that decreased performance. The Phenom II cost less but was not performance-competitive with Intel's mid-to-high-range Core 2 Quads. The Phenom II also enhanced its predecessor's memory controller, allowing it to use DDR3 in a new native socket AM3, while maintaining backward compatibility with AM2+, the socket used for the Phenom, and allowing the use of the DDR2 memory that was used with the platform.
In April 2010, AMD released a new Phenom II Hexa-core (6-core) processor codenamed "Thuban". This was a totally new die based on the hexa-core "Istanbul" Opteron processor. It included AMD's "turbo core" technology, which allows the processor to automatically switch from 6 cores to 3 faster cores when more pure speed is needed.
The Magny-Cours and Lisbon server parts were released in 2010. The Magny-Cours part came with 8 to 12 cores and the Lisbon part with 4 or 6 cores. Magny-Cours focused on absolute performance, while Lisbon focused on performance per watt. Magny-Cours is an MCM (multi-chip module) combining two hexa-core "Istanbul" Opteron dies. It used the new Socket G34 for dual- and quad-socket systems and was marketed as the Opteron 61xx series. Lisbon used Socket C32, certified for single- or dual-socket use, and was marketed as the Opteron 41xx series. Both were built on a 45 nm SOI process.
Fusion becomes the AMD APU
Following AMD's 2006 acquisition of Canadian graphics company ATI Technologies, an initiative codenamed Fusion was announced to integrate a CPU and GPU together on some of AMD's microprocessors, including a built-in PCI Express link to accommodate separate PCI Express peripherals, eliminating the northbridge chip from the motherboard. The initiative intended to move some of the processing originally done on the CPU (e.g. floating-point unit operations) to the GPU, which is better optimized for some calculations. Fusion was later renamed the AMD APU (Accelerated Processing Unit).
Llano, the second APU released, was targeted at the mainstream market. It incorporated a CPU and GPU on the same die, as well as northbridge functions, and used "Socket FM1" with DDR3 memory. The CPU part of the processor was based on the Phenom II "Deneb" design. AMD suffered an unexpected decrease in revenue due to production problems with Llano. AMD APUs subsequently became common in laptops running Windows 7 and Windows 8. These include AMD's price-point APUs, the E1 and E2, and the Vision A-series (the A standing for "accelerated"), which competed with Intel's mainstream Core i-series. The A-series ranged from the lower-performance A4 chipset to the A6, A8, and A10. These all incorporated next-generation Radeon graphics, with the A4 utilizing the base Radeon HD chip and the rest using a Radeon R4 graphics core, with the exception of the highest-model A10 (A10-7300), which used an R6 graphics core.
New microarchitectures
High-power, high-performance Bulldozer cores
Bulldozer was AMD's microarchitecture codename for server and desktop AMD FX processors, first released on October 12, 2011. This family 15h microarchitecture is the successor to the family 10h (K10) microarchitecture design. Bulldozer was a clean-sheet design, not a development of earlier processors. The core was specifically aimed at computing products with TDPs of 10 to 125 W. AMD claimed dramatic performance-per-watt efficiency improvements in high-performance computing (HPC) applications with Bulldozer cores. While hopes were high that Bulldozer would make AMD performance-competitive with Intel once more, most benchmarks were disappointing; in some cases the new Bulldozer products were slower than the K10 models they were built to replace.
The Piledriver microarchitecture was the 2012 successor to Bulldozer, increasing clock speeds and performance relative to its predecessor. Piledriver would be released in AMD FX, APU, and Opteron product lines. Piledriver was subsequently followed by the Steamroller microarchitecture in 2013. Used exclusively in AMD's APUs, Steamroller focused on greater parallelism.
In 2015, the Excavator microarchitecture replaced Piledriver. Expected to be the last microarchitecture of the Bulldozer series, Excavator focused on improved power efficiency.
Low-power Cat cores
The Bobcat microarchitecture was revealed during a speech by AMD executive vice-president Henri Richard at Computex 2007 and was put into production during the first quarter of 2011. Because it was difficult to compete in the x86 market with a single core optimized for the 10–100 W range, AMD developed a simpler core with a target range of 1–10 watts. In addition, it was believed that the core could migrate into the hand-held space if its power consumption could be reduced to less than 1 W.
Jaguar is a microarchitecture codename for Bobcat's successor, released in 2013, that is used in various APUs from AMD aimed at the low-power/low-cost market. Jaguar and its derivates would go on to be used in the custom APUs of the PlayStation 4, Xbox One, PlayStation 4 Pro, Xbox One S, and Xbox One X. Jaguar would be later followed by the Puma microarchitecture in 2014.
ARM architecture-based designs
In 2012, AMD announced it was working on ARM products, both as a semi-custom product and server product. The initial server product was announced as the Opteron A1100 in 2014, an 8-core Cortex-A57 based ARMv8-A SoC, and was expected to be followed by an APU incorporating a Graphics Core Next GPU. However, the Opteron A1100 was not released until 2016, with the delay attributed to adding software support. The A1100 was also criticized for not having support from major vendors upon its release.
In 2014, AMD also announced the K12 custom core for release in 2016. While being ARMv8-A instruction set architecture compliant, the K12 was expected to be entirely custom-designed, targeting the server, embedded, and semi-custom markets. While ARM architecture development continued, products based on K12 were subsequently delayed with no release planned. Development of AMD's x86-based Zen microarchitecture was preferred.
Zen-based CPUs and APUs
Zen is a new architecture for x86-64-based Ryzen series CPUs and APUs, introduced in 2017 by AMD and built from the ground up by a team led by Jim Keller, beginning with his arrival in 2012 and taping out before his departure in September 2015. One of AMD's primary goals with Zen was an IPC increase of at least 40%; in February 2017 AMD announced that it had actually achieved a 52% increase. Processors made on the Zen architecture are built on the 14 nm FinFET node and have a renewed focus on single-core performance and HSA compatibility. Previous processors from AMD were built on either the 32 nm process ("Bulldozer" and "Piledriver" CPUs) or the 28 nm process ("Steamroller" and "Excavator" APUs). Because of this, Zen is much more energy efficient. The Zen architecture is the first to encompass CPUs and APUs from AMD built for a single socket (Socket AM4). Also new for this architecture is the implementation of simultaneous multithreading (SMT), something Intel has had for years on some of its processors with its proprietary hyper-threading implementation of SMT; this is a departure from the "Clustered MultiThreading" design introduced with the Bulldozer architecture. Zen also supports DDR4 memory.
AMD released the Zen-based high-end Ryzen 7 "Summit Ridge" series CPUs on March 2, 2017, mid-range Ryzen 5 series CPUs on April 11, 2017, and entry-level Ryzen 3 series CPUs on July 27, 2017. AMD later released the Epyc line of Zen-derived server processors for 1P and 2P systems. In October 2017, AMD released Zen-based APUs as Ryzen Mobile, incorporating Vega graphics cores. In January 2018, AMD announced its second-generation Ryzen lineup: CPUs with the 12 nm Zen+ microarchitecture launched in April 2018, followed by the 7 nm Zen 2 microarchitecture in June 2019 (including an update to the Epyc line with new Zen 2 processors in August 2019), with Zen 3 slated for release in Q3 2020. As of 2019, AMD's Ryzen processors were reported to outsell Intel's consumer desktop processors.
At CES 2020, AMD announced the Ryzen Mobile 4000 series: the first 7 nm x86 mobile processor, the first 7 nm 8-core (16-thread) high-performance mobile processor, and the first 8-core (16-thread) processor for ultrathin laptops, still based on the Zen 2 architecture. In October 2020, AMD announced new processors based on the Zen 3 architecture; on PassMark's single-thread performance test, the Ryzen 5 5600X bested all other CPUs besides the Ryzen 9 5950X. In August 2022, AMD announced its initial lineup of CPUs based on the new Zen 4 architecture.
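To make the IPC figure concrete, the sketch below shows the arithmetic behind an instructions-per-cycle comparison; the instruction and cycle counts are made up for illustration and are not AMD measurements.

```python
# Illustrative arithmetic only: what a 52% generational IPC uplift means.
# Counts below are invented, not measured AMD data.

def ipc(instructions, cycles):
    """Instructions retired per clock cycle."""
    return instructions / cycles

baseline = ipc(1.0e9, 1.25e9)   # hypothetical pre-Zen run: IPC 0.80
zen_like = baseline * 1.52      # the announced 52% uplift

print(f"baseline IPC {baseline:.2f} -> Zen-class IPC {zen_like:.2f}")
# At equal clock frequency, throughput scales directly with IPC,
# so a 52% IPC gain is ~1.52x the work per second.
```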
The Steam Deck, PlayStation 5, Xbox Series X and Series S all use chips based on the Zen 2 microarchitecture, with proprietary tweaks and different configurations in each system's implementation than AMD sells in its own commercially available APUs.
Graphics products and GPUs
ATI prior to AMD acquisition
Radeon within AMD
In 2008, the ATI division of AMD released the TeraScale microarchitecture, implementing a unified shader model. This design replaced the fixed-function hardware of previous graphics cards with multipurpose, programmable shaders. Initially released as part of the GPU for the Xbox 360, this technology would go on to be used in Radeon-branded HD 2000 parts. Three generations of TeraScale would be designed and used in parts from 2008 to 2014.
Combined GPU and CPU divisions
In a 2009 restructuring, AMD merged the CPU and GPU divisions to support the company's APUs, which fused both graphics and general-purpose processing. In 2011, AMD released the successor to TeraScale, Graphics Core Next (GCN). This new microarchitecture emphasized GPGPU compute capability in addition to graphics processing, with a particular aim of supporting heterogeneous computing on AMD's APUs. GCN's reduced instruction set ISA allowed for significantly increased compute capability over TeraScale's very long instruction word ISA. Since GCN's introduction with the HD 7970, five generations of the GCN architecture were produced, from 2011 through at least 2017.
Radeon Technologies Group
In September 2015, AMD separated the graphics technology division of the company into an independent internal unit called the Radeon Technologies Group (RTG) headed by Raja Koduri. This gave the graphics division of AMD autonomy in product design and marketing. The RTG then went on to create and release the Polaris and Vega microarchitectures released in 2016 and 2017, respectively. In particular the Vega, or fifth generation GCN, microarchitecture includes a number of major revisions to improve performance and compute capabilities.
In November 2017, Raja Koduri left RTG, and CEO and President Lisa Su took his position. In January 2018, it was reported that two industry veterans had joined RTG: Mike Rayfield as senior vice president and general manager of RTG, and David Wang as senior vice president of engineering for RTG. In January 2020, AMD announced that its second-generation RDNA graphics architecture was in development, with the aim of competing with the Nvidia RTX graphics products for performance leadership. In October 2020, AMD announced its new RX 6000 series GPUs, its first high-end products based on RDNA 2 and capable of handling ray tracing natively, aiming to challenge Nvidia's RTX 3000 GPUs.
Semi-custom and game console products
In 2012, AMD's then CEO Rory Read began a program to offer semi-custom designs. Rather than AMD simply designing and offering a single product, potential customers could work with AMD to design a custom chip based on AMD's intellectual property. Customers pay a non-recurring engineering fee for design and development, and a purchase price for the resulting semi-custom products. In particular, AMD noted their unique position of offering both x86 and graphics intellectual property. These semi-custom designs would have design wins as the APUs in the PlayStation 4 and Xbox One and the subsequent PlayStation 4 Pro, Xbox One S, Xbox One X, Xbox Series X/S, and PlayStation 5. Financially, these semi-custom products would represent a majority of the company's revenue in 2016. In November 2017, AMD and Intel announced that Intel would market a product combining in a single package an Intel Core CPU, a semi-custom AMD Radeon GPU, and HBM2 memory.
Other hardware
AMD motherboard chipsets
Before the launch of Athlon 64 processors in 2003, AMD designed chipsets for its processors spanning the K6 and K7 processor generations. The chipsets include the AMD-640, AMD-751, and AMD-761. The situation changed in 2003 with the release of Athlon 64 processors: AMD chose to stop designing its own chipsets for its desktop processors and opened the desktop platform to allow other firms to design chipsets. This was the "Open Platform Management Architecture", with ATI, VIA, and SiS developing their own chipsets for Athlon 64 processors and later Athlon 64 X2 and Athlon 64 FX processors, including the Quad FX platform chipset from Nvidia.
The initiative went further with the release of Opteron server processors, as AMD stopped designing server chipsets in 2004 after releasing the AMD-8111 chipset, and again opened the server platform for firms to develop chipsets for Opteron processors. Nvidia and Broadcom subsequently became the sole designers of server chipsets for Opteron processors.
As the company completed the acquisition of ATI Technologies in 2006, the firm gained the ATI chipset design team, which had previously designed the Radeon Xpress 200 and Radeon Xpress 3200 chipsets. AMD then renamed the chipsets for AMD processors under AMD branding (for instance, the CrossFire Xpress 3200 chipset was renamed the AMD 580X CrossFire chipset). In February 2007, AMD announced the first AMD-branded chipset since 2004 with the release of the AMD 690G chipset (previously under the development codename RS690), targeted at mainstream IGP computing. It was the industry's first chipset to implement an HDMI 1.2 port on motherboards, and shipped more than a million units. While ATI had aimed to release an Intel IGP chipset, the plan was scrapped, and the inventories of the Radeon Xpress 1250 (codenamed RS600, sold under the ATI brand) were sold to two OEMs, Abit and ASRock. Although AMD stated the firm would still produce Intel chipsets, Intel had not granted ATI a license to its FSB.
On November 15, 2007, AMD announced a new chipset series portfolio, the AMD 7-Series chipsets, covering from the enthusiast multi-graphics segment to the value IGP segment, to replace the AMD 480/570/580 chipsets and AMD 690 series chipsets, marking AMD's first enthusiast multi-graphics chipset. Discrete graphics chipsets were launched on November 15, 2007, as part of the codenamed Spider desktop platform, and IGP chipsets were launched at a later time in spring 2008 as part of the codenamed Cartwheel platform.
AMD returned to the server chipset market with the AMD 800S series. The family includes support for up to six SATA 6.0 Gbit/s ports, the C6 power state featured in Fusion processors, and AHCI 1.2 with SATA FIS-based switching. It supports Phenom processors and the Quad FX enthusiast platform, in enthusiast (890FX) and IGP (890GX) variants.
With the advent of AMD's APUs in 2011, traditional northbridge features such as the connection to graphics and the PCI Express controller were incorporated into the APU die. Accordingly, APUs were connected to a single chip chipset, renamed the Fusion Controller Hub (FCH), which primarily provided southbridge functionality.
AMD released new chipsets in 2017 to support the release of their new Ryzen products. As the Zen microarchitecture already includes much of the northbridge connectivity, the AM4-based chipsets primarily varied in the number of additional PCI Express lanes, USB connections, and SATA connections available. These AM4 chipsets were designed in conjunction with ASMedia.
Embedded products
Embedded CPUs
In the early 1990s, AMD began marketing a series of embedded systems-on-a-chip (SoCs) called AMD Élan, starting with the SC300 and SC310. Both combined a 32-bit, low-voltage Am386SX CPU running at 25 or 33 MHz with a memory controller, PC/AT peripheral controllers, a real-time clock, PLL clock generators, and an ISA bus interface. The SC300 additionally integrated two PC Card slots and a CGA-compatible LCD controller. They were followed in 1996 by the SC4xx types, which supported the VESA Local Bus and used the Am486 at clock speeds of up to 100 MHz; an SC450 running at 33 MHz, for example, was used in the Nokia 9000 Communicator. In 1999 the SC520, the last member of the series, was announced; it used an Am586 at 100 or 133 MHz and supported SDRAM and PCI.
In February 2002, AMD acquired Alchemy Semiconductor for its Alchemy line of MIPS processors for the hand-held and portable media player markets. On June 13, 2006, AMD officially announced that the line was to be transferred to Raza Microelectronics, Inc., a designer of MIPS processors for embedded applications.
In August 2003, AMD also purchased the Geode business (originally the Cyrix MediaGX) from National Semiconductor to augment its existing line of embedded x86 processor products. During the second quarter of 2004, it launched new low-power Geode NX processors based on the K7 Thoroughbred architecture, offered in fanless versions as well as a fan-cooled version with a TDP of 25 W. This technology is used in a variety of embedded systems (casino slot machines and customer kiosks, for instance), several UMPC designs in Asian markets, and the OLPC XO-1 computer, an inexpensive laptop intended to be distributed to children in developing countries around the world. The Geode LX processor was announced in 2005 and was said to remain available through 2015.
AMD has also introduced 64-bit processors into its embedded product line, starting with the AMD Opteron. Leveraging the high throughput of HyperTransport and the Direct Connect Architecture, these server-class processors were targeted at high-end telecom and storage applications. In 2007, AMD added the AMD Athlon, AMD Turion, and Mobile AMD Sempron processors to its embedded product line. Leveraging the same 64-bit instruction set and Direct Connect Architecture as the Opteron, but at lower power levels, these processors were well suited to a variety of traditional embedded applications. Throughout 2007 and into 2008, AMD continued to add single-core Mobile AMD Sempron and AMD Athlon processors as well as dual-core AMD Athlon X2 and AMD Turion processors to the line, which now spans fanless designs starting with 8 W TDP Mobile Sempron and Athlon parts up to multi-processor systems using multi-core Opterons, all supporting longer-than-standard availability.
The ATI acquisition in 2006 included the Imageon and Xilleon product lines. In late 2008, the entire handheld division was sold off to Qualcomm, who have since produced the Adreno series. Also in 2008, the Xilleon division was sold to Broadcom.
In April 2007, AMD announced the release of the M690T integrated graphics chipset for embedded designs. This enabled AMD to offer complete processor and chipset solutions targeted at embedded applications requiring high-performance 3D and video such as emerging digital signage, kiosk, and Point of Sale applications. The M690T was followed by the M690E specifically for embedded applications which removed the TV output, which required Macrovision licensing for OEMs, and enabled native support for dual TMDS outputs, enabling dual independent DVI interfaces.
In January 2011, AMD announced the AMD Embedded G-Series Accelerated Processing Unit. This was the first APU for embedded applications. These were followed by updates in 2013 and 2016.
In May 2012, AMD announced the AMD Embedded R-Series Accelerated Processing Unit. This family of products incorporates the Bulldozer CPU architecture and discrete-class Radeon HD 7000G series graphics. It was followed by a system-on-a-chip (SoC) version in 2015, which offered a faster CPU and faster graphics, with support for DDR4 SDRAM memory.
Embedded graphics
AMD builds graphic processors for use in embedded systems. They can be found in anything from casinos to healthcare, with a large portion of products being used in industrial machines. These products include a complete graphics processing device in a compact multi-chip module including RAM and the GPU. ATI began offering embedded GPUs with the E2400 in 2008. Since that time AMD has released regular updates to their embedded GPU lineup in 2009, 2011, 2015, and 2016; reflecting improvements in their GPU technology.
Current product lines
CPU and APU products
AMD's portfolio of CPUs and APUs
Athlon – brand of entry level CPUs (Excavator) and APUs (Ryzen)
A-series – Excavator-class consumer desktop and laptop APUs
G-series – Excavator- and Jaguar-class low-power embedded APUs
Ryzen – brand of consumer CPUs and APUs
Ryzen Threadripper – brand of prosumer/professional CPUs
R-series – Excavator class high-performance embedded APUs
Epyc – brand of server CPUs
Opteron – brand of microserver APUs
Graphics products
AMD's portfolio of dedicated graphics processors
Radeon – brand for consumer line of graphics cards; the brand name originated with ATI.
Mobility Radeon offers power-optimized versions of Radeon graphics chips for use in laptops.
Radeon Pro – Workstation graphics card brand. Successor to the FirePro brand.
Radeon Instinct – brand of server and workstation targeted machine learning and GPGPU products
Radeon-branded products
RAM
In 2011, AMD began selling Radeon-branded DDR3 SDRAM to support the higher bandwidth needs of its APUs. While sold under the AMD brand, the RAM was manufactured by Patriot Memory and VisionTek. This was followed by higher-speed, gaming-oriented DDR3 memory in 2013. Radeon-branded DDR4 SDRAM was released in 2015, despite no AMD CPUs or APUs supporting DDR4 at the time. AMD noted in 2017 that these products are "mostly distributed in Eastern Europe" and that it remains active in the business.
Solid-state drives
AMD announced in 2014 it would sell Radeon branded solid-state drives manufactured by OCZ with capacities up to 480 GB and using the SATA interface.
Technologies
CPU hardware
Technologies found in AMD CPU/APU and other products include:
HyperTransport – a high-bandwidth, low-latency system bus used in AMD's CPU and APU products
Infinity Fabric – a derivative of HyperTransport used as the communication bus in AMD's Zen microarchitecture
Graphics hardware
Technologies found in AMD GPU products include:
AMD Eyefinity – facilitates multi-monitor setup of up to 6 monitors per graphics card
AMD FreeSync – display synchronization based on the VESA Adaptive Sync standard
AMD TrueAudio – acceleration of audio calculations
AMD XConnect – allows the use of External GPU enclosures through Thunderbolt 3
AMD CrossFire – multi-GPU technology allowing the simultaneous use of multiple GPUs
Unified Video Decoder (UVD) – acceleration of video decompression (decoding)
Video Coding Engine (VCE) – acceleration of video compression (encoding)
Software
AMD has made considerable efforts towards opening its software tools above the firmware level in the past decade.
In the following, software not expressly stated to be free can be assumed to be proprietary.
Distribution
AMD Radeon Software is the default channel for official software distribution from AMD. It includes both free and proprietary software components, and supports both Microsoft Windows and Linux.
Software by type
CPU
AOCC is AMD's proprietary optimizing C/C++ compiler, based on LLVM and available for Linux.
AMD uProf is AMD's CPU performance and power profiling tool suite, available for Linux and Windows.
AMD has also taken an active part in developing coreboot, an open-source project aimed at replacing the proprietary BIOS firmware. This cooperation ceased in 2013, but AMD has indicated recently that it is considering releasing source code so that Ryzen can be compatible with coreboot in the future.
GPU
AMD's most notable public software is on the GPU side.
AMD has opened both its graphic and compute stacks:
GPUOpen is AMD's graphics stack, which includes for example FidelityFX Super Resolution.
ROCm (Radeon Open Compute platform) is AMD's compute stack for machine learning and high-performance computing, based on the LLVM compiler technologies. Under the ROCm project, AMDgpu is AMD's open source device driver supporting the GCN and following architectures, available for Linux. This latter driver component is used both by the graphics and compute stacks.
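As a hedged sketch of how the ROCm stack surfaces to application code: ROCm builds of PyTorch reuse the torch.cuda namespace for AMD GPUs and expose torch.version.hip. This assumes a ROCm build of PyTorch and a supported AMD GPU; on other systems the check below simply reports that no device is visible.

```python
# Hedged sketch, assuming a ROCm build of PyTorch on a supported AMD GPU.
# ROCm builds reuse the torch.cuda API and set torch.version.hip
# (which is None on CUDA-only builds).
import torch

if getattr(torch.version, "hip", None) and torch.cuda.is_available():
    device = torch.device("cuda")  # maps onto the AMDgpu/ROCm device
    print("ROCm device:", torch.cuda.get_device_name(0))
    x = torch.randn(1024, 1024, device=device)
    y = x @ x                      # matmul dispatched through ROCm libraries
    print("ok:", y.shape)
else:
    print("No ROCm-backed GPU visible to this PyTorch build.")
```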
Misc
AMD conducts open research on heterogeneous computing.
Other AMD software includes the AMD Core Math Library and the open-source AMD Performance Library.
AMD contributes to open source projects, including working with Sun Microsystems to enhance OpenSolaris and Sun xVM on the AMD platform. AMD also maintains its own Open64 compiler distribution and contributes its changes back to the community.
In 2008, AMD released the low-level programming specifications for its GPUs, and works with the X.Org Foundation to develop drivers for AMD graphics cards.
Extensions for Software Parallelism (xSP), aimed at speeding up programs to enable multi-threaded and multi-core processing, was announced at Technology Analyst Day 2007. One of the initiatives discussed since August 2007 is Light Weight Profiling (LWP), an internal hardware monitor with runtimes that observes information about executing processes and helps software be redesigned and optimized for multi-core and multi-threaded programs. Another is SSE5, an extension of the Streaming SIMD Extensions (SSE) instruction set.
SIMFIRE – an interoperability testing tool for the Desktop and mobile Architecture for System Hardware (DASH) open architecture.
Production and fabrication
Previously, AMD produced its chips at company-owned semiconductor foundries. AMD pursued a strategy of collaboration with other semiconductor manufacturers IBM and Motorola to co-develop production technologies. AMD's founder Jerry Sanders termed this the "Virtual Gorilla" strategy to compete with Intel's significantly greater investments in fabrication.
In 2008, AMD spun off its chip foundries into an independent company named GlobalFoundries. This breakup of the company was attributed to the increasing costs of each process node. The Emirate of Abu Dhabi purchased the newly created company through its subsidiary Advanced Technology Investment Company (ATIC), purchasing the final stake from AMD in 2009.
With the spin-off of its foundries, AMD became a fabless semiconductor manufacturer, designing products to be produced at for-hire foundries. Part of the GlobalFoundries spin-off included an agreement with AMD to produce some number of products at GlobalFoundries. Both before and after the spin-off, AMD has pursued production with other foundries, including TSMC and Samsung. It has been argued that this reduces risk for AMD by decreasing its dependence on any one foundry, a dependence which has caused issues in the past.
In 2018, AMD started shifting the production of their CPUs and GPUs to TSMC, following GlobalFoundries' announcement that they were halting development of their 7 nm process. AMD revised their wafer purchase requirement with GlobalFoundries in 2019, allowing AMD to freely choose foundries for 7 nm nodes and below, while maintaining purchase agreements for 12 nm and above through 2021.
Corporate affairs
Partnerships
AMD uses strategic industry partnerships to further its business interests as well as to rival Intel's dominance and resources:
A partnership between AMD and Alpha Processor Inc. developed HyperTransport, a point-to-point interconnect standard which was turned over to an industry standards body for finalization. It is now used in modern motherboards that are compatible with AMD processors.
AMD also formed a strategic partnership with IBM, under which AMD gained silicon on insulator (SOI) manufacturing technology, and detailed advice on 90 nm implementation. AMD announced that the partnership would extend to 2011 for 32 nm and 22 nm fabrication-related technologies.
To facilitate processor distribution and sales, AMD is loosely partnered with end-user companies, such as HP, Dell, Asus, Acer, and Microsoft.
In 1993, AMD established a 50–50 partnership with Fujitsu called FASL, and merged into a new company called FASL LLC in 2003. The joint venture went public under the name Spansion and ticker symbol SPSN in December 2005, with AMD shares dropping 37%. AMD no longer directly participates in the Flash memory devices market now as AMD entered into a non-competition agreement on December 21, 2005, with Fujitsu and Spansion, pursuant to which it agreed not to directly or indirectly engage in a business that manufactures or supplies standalone semiconductor devices (including single-chip, multiple-chip or system devices) containing only Flash memory.
On May 18, 2006, Dell announced that it would roll out new servers based on AMD's Opteron chips by year's end, thus ending an exclusive relationship with Intel. In September 2006, Dell began offering AMD Athlon X2 chips in their desktop lineup.
In June 2011, HP announced new business and consumer notebooks equipped with the latest versions of AMD's accelerated processing units (APUs); AMD-based models were offered in HP's business notebook lines alongside Intel-based ones.
In the spring of 2013, AMD announced that it would be powering all three major next-generation consoles. The Xbox One and Sony PlayStation 4 are both powered by a custom-built AMD APU, and the Nintendo Wii U is powered by an AMD GPU. According to AMD, having their processors in all three of these consoles will greatly assist developers with cross-platform development to competing consoles and PCs as well as increased support for their products across the board.
AMD has entered into an agreement with Hindustan Semiconductor Manufacturing Corporation (HSMC) for the production of AMD products in India.
AMD is a founding member of the HSA Foundation which aims to ease the use of a Heterogeneous System Architecture. A Heterogeneous System Architecture is intended to use both central processing units and graphics processors to complete computational tasks.
AMD announced in 2016 that it was creating a joint venture to produce x86 server chips for the Chinese market.
On May 7, 2019, it was reported that the U.S. Department of Energy, Oak Ridge National Laboratory, and Cray Inc., are working in collaboration with AMD to develop the Frontier exascale supercomputer. Featuring the AMD Epyc CPUs and Radeon GPUs, the supercomputer is set to produce more than 1.5 exaflops (peak double-precision) in computing performance. It is expected to debut sometime in 2021.
On March 5, 2020, it was announced that the U.S. Department of Energy, Lawrence Livermore National Laboratory, and HPE are working in collaboration with AMD to develop the El Capitan exascale supercomputer. Featuring the AMD Epyc CPUs and Radeon GPUs, the supercomputer is set to produce more than 2 exaflops (peak double-precision) in computing performance. It is expected to debut in 2023.
In the summer of 2020, it was reported that AMD would be powering the next-generation console offerings from Microsoft and Sony.
On November 8, 2021, AMD announced a partnership under which Meta would use AMD chips in the data centers supporting its metaverse efforts.
In January 2022, AMD's partnership with Samsung to develop a mobile processor yielded the Exynos 2200, whose GPU is based on the AMD RDNA 2 architecture.
Litigation with Intel
AMD has a long history of litigation with former (and current) partner and x86 creator Intel.
In 1986, Intel broke an agreement it had with AMD to allow them to produce Intel's microchips for IBM; AMD filed for arbitration in 1987 and the arbitrator decided in AMD's favor in 1992. Intel disputed this, and the case ended up in the Supreme Court of California. In 1994, that court upheld the arbitrator's decision and awarded damages for breach of contract.
In 1990, Intel brought a copyright infringement action alleging illegal use of its 287 microcode. The case ended in 1994 with a jury finding for AMD and its right to use Intel's microcode in its microprocessors through the 486 generation.
In 1997, Intel filed suit against AMD and Cyrix Corp. for misuse of the term MMX. AMD and Intel settled, with AMD acknowledging MMX as a trademark owned by Intel, and with Intel granting AMD rights to market the AMD K6 MMX processor.
In 2005, following an investigation, the Japan Federal Trade Commission found Intel guilty of a number of violations. On June 27, 2005, AMD won an antitrust suit against Intel in Japan, and on the same day, AMD filed a broad antitrust complaint against Intel in the U.S. Federal District Court in Delaware. The complaint alleges systematic use of secret rebates, special discounts, threats, and other means used by Intel to lock AMD processors out of the global market. Since the start of this action, the court has issued subpoenas to major computer manufacturers including Acer, Dell, Lenovo, HP and Toshiba.
In November 2009, Intel agreed to pay AMD $1.25bn and renew a five-year patent cross-licensing agreement as part of a deal to settle all outstanding legal disputes between them.
Guinness World Record achievement
On August 31, 2011, in Austin, Texas, AMD achieved a Guinness World Record for the "Highest frequency of a computer processor": 8.429 GHz. The company ran an 8-core FX-8150 processor with only one active module (two cores), and cooled with liquid helium. The previous record was 8.308 GHz, with an Intel Celeron 352 (one core).
On November 1, 2011, geek.com reported that Andre Yang, an overclocker from Taiwan, used an FX-8150 to set another record: 8.461 GHz.
On November 19, 2012, Andre Yang used an FX-8350 to set another record: 8.794 GHz.
Acquisitions, mergers and investments
Corporate social responsibility
In its 2012 report on progress relating to conflict minerals, the Enough Project rated AMD the fifth most progressive of 24 consumer electronics companies.
Other initiatives
50x15 – a digital-inclusion initiative targeting Internet access for 50% of the world's population, via affordable computers, by the year 2015.
The Green Grid – founded by AMD together with other companies, including IBM, Sun, and Microsoft, to seek lower power consumption in data centers.
See also
Bill Gaede
List of AMD processors
List of AMD accelerated processing units
List of AMD graphics processing units
List of AMD chipsets
List of ATI chipsets
3DNow!
Cool'n'Quiet
PowerNow!
Notes
References
Rodengen, Jeffrey L. The Spirit of AMD: Advanced Micro Devices. Write Stuff, 1998.
Ruiz, Hector. Slingshot: AMD's Fight to Free an Industry from the Ruthless Grip of Intel. Greenleaf Book Group, 2013.
External links
|
https://en.wikipedia.org/wiki/Acceleration
|
In mechanics, acceleration is the rate of change of the velocity of an object with respect to time. Acceleration is one of several components of kinematics, the study of motion. Accelerations are vector quantities (in that they have magnitude and direction). The orientation of an object's acceleration is given by the orientation of the net force acting on that object. The magnitude of an object's acceleration, as described by Newton's Second Law, is the combined effect of two causes:
the net balance of all external forces acting onto that object — magnitude is directly proportional to this net resulting force;
that object's mass, depending on the materials out of which it is made — magnitude is inversely proportional to the object's mass.
The SI unit for acceleration is the metre per second squared (m/s², or equivalently m·s⁻²).
For example, when a vehicle starts from a standstill (zero velocity, in an inertial frame of reference) and travels in a straight line at increasing speeds, it is accelerating in the direction of travel. If the vehicle turns, an acceleration occurs toward the new direction and changes its motion vector. The acceleration of the vehicle in its current direction of motion is called a linear (or tangential, during circular motions) acceleration, the reaction to which the passengers on board experience as a force pushing them back into their seats. When changing direction, the acceleration involved is called radial (or centripetal, during circular motions) acceleration, the reaction to which the passengers experience as a centrifugal force. If the speed of the vehicle decreases, this is an acceleration in the opposite direction of the velocity vector (mathematically negative, if the movement is one-dimensional and the velocity is positive), sometimes called deceleration or retardation, and passengers experience the reaction to deceleration as an inertial force pushing them forward. Such negative accelerations are often achieved by retrorocket burning in spacecraft. Both acceleration and deceleration are treated the same, as they are both changes in velocity. Each of these accelerations (tangential, radial, deceleration) is felt by passengers until their velocity relative to the vehicle is neutralized.
Definition and properties
Average acceleration
An object's average acceleration over a period of time is its change in velocity, $\Delta \mathbf{v}$, divided by the duration of the period, $\Delta t$. Mathematically,
$$\bar{\mathbf{a}} = \frac{\Delta \mathbf{v}}{\Delta t}.$$
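As a minimal illustration of this definition (a sketch only; the function name and sample values below are invented for this example), the average acceleration can be computed directly from two velocity samples:

```python
def average_acceleration(v_initial, v_final, dt):
    """Average acceleration (m/s^2) given velocities (m/s) separated by dt seconds."""
    return (v_final - v_initial) / dt

# A car going from rest to 27 m/s (roughly 100 km/h) in 9 s:
print(average_acceleration(0.0, 27.0, 9.0))  # 3.0 m/s^2
```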
Instantaneous acceleration
Instantaneous acceleration, meanwhile, is the limit of the average acceleration over an infinitesimal interval of time. In the terms of calculus, instantaneous acceleration is the derivative of the velocity vector with respect to time:
$$\mathbf{a} = \lim_{\Delta t \to 0} \frac{\Delta \mathbf{v}}{\Delta t} = \frac{d\mathbf{v}}{dt}.$$
As acceleration is defined as the derivative of velocity, $\mathbf{v}$, with respect to time and velocity is defined as the derivative of position, $\mathbf{x}$, with respect to time, acceleration can be thought of as the second derivative of $\mathbf{x}$ with respect to $t$:
$$\mathbf{a} = \frac{d\mathbf{v}}{dt} = \frac{d^2\mathbf{x}}{dt^2}.$$
(Here and elsewhere, if motion is in a straight line, vector quantities can be substituted by scalars in the equations.)
By the fundamental theorem of calculus, it can be seen that the integral of the acceleration function $\mathbf{a}(t)$ is the velocity function $\mathbf{v}(t)$; that is, the area under the curve of an acceleration vs. time ($a$ vs. $t$) graph corresponds to the change of velocity: $\Delta \mathbf{v} = \int \mathbf{a} \, dt$.
Likewise, the integral of the jerk function $\mathbf{j}(t)$, the derivative of the acceleration function, can be used to find the change of acceleration at a certain time: $\Delta \mathbf{a} = \int \mathbf{j} \, dt$.
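To make the integral relationships above concrete, here is a small numerical sketch (assuming NumPy is installed; the sampled acceleration profile is invented for illustration) that recovers the change in velocity as the area under the a-t curve using the trapezoidal rule:

```python
import numpy as np

# Invented acceleration profile a(t), sampled over 0..10 s
t = np.linspace(0.0, 10.0, 101)   # time samples (s)
a = 2.0 * np.exp(-0.3 * t)        # acceleration samples (m/s^2)

# Trapezoidal approximation of the integral of a(t) dt,
# i.e. the area under the a-t curve, which equals the change in velocity:
delta_v = np.sum(0.5 * (a[1:] + a[:-1]) * np.diff(t))
print(f"change in velocity ≈ {delta_v:.3f} m/s")  # analytic value: (2/0.3)(1 - e^-3) ≈ 6.335
```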
Units
Acceleration has the dimensions of velocity (L/T) divided by time, i.e. L T⁻². The SI unit of acceleration is the metre per second squared (m s⁻²), or "metre per second per second", as the velocity in metres per second changes by the acceleration value every second.
Other forms
An object moving in a circular motion—such as a satellite orbiting the Earth—is accelerating due to the change of direction of motion, although its speed may be constant. In this case it is said to be undergoing centripetal (directed towards the center) acceleration.
Proper acceleration, the acceleration of a body relative to a free-fall condition, is measured by an instrument called an accelerometer.
In classical mechanics, for a body with constant mass, the (vector) acceleration of the body's center of mass is proportional to the net force vector (i.e. sum of all forces) acting on it (Newton's second law):
$$\mathbf{F} = m\mathbf{a} \quad\Longleftrightarrow\quad \mathbf{a} = \frac{\mathbf{F}}{m},$$
where $\mathbf{F}$ is the net force acting on the body, $m$ is the mass of the body, and $\mathbf{a}$ is the center-of-mass acceleration. As speeds approach the speed of light, relativistic effects become increasingly large.
Tangential and centripetal acceleration
The velocity of a particle moving on a curved path as a function of time can be written as:
$$\mathbf{v}(t) = v(t)\,\mathbf{u}_\mathrm{t}(t),$$
with $v(t)$ equal to the speed of travel along the path, and
$\mathbf{u}_\mathrm{t}$ a unit vector tangent to the path pointing in the direction of motion at the chosen moment in time. Taking into account both the changing speed $v(t)$ and the changing direction of $\mathbf{u}_\mathrm{t}$, the acceleration of a particle moving on a curved path can be written using the chain rule of differentiation for the product of two functions of time as:
$$\mathbf{a} = \frac{d\mathbf{v}}{dt} = \frac{dv}{dt}\,\mathbf{u}_\mathrm{t} + \frac{v^2}{r}\,\mathbf{u}_\mathrm{n},$$
where $\mathbf{u}_\mathrm{n}$ is the unit (inward) normal vector to the particle's trajectory (also called the principal normal), and $r$ is its instantaneous radius of curvature based upon the osculating circle at time $t$. The components
$$\mathbf{a}_\mathrm{t} = \frac{dv}{dt}\,\mathbf{u}_\mathrm{t} \quad\text{and}\quad \mathbf{a}_\mathrm{n} = \frac{v^2}{r}\,\mathbf{u}_\mathrm{n}$$
are called the tangential acceleration and the normal or radial acceleration (or centripetal acceleration in circular motion, see also circular motion and centripetal force), respectively.
Geometrical analysis of three-dimensional space curves, which explains tangent, (principal) normal and binormal, is described by the Frenet–Serret formulas.
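As a hedged sketch of this decomposition (the speed, its rate of change, and the radius of curvature below are arbitrary illustrative values, and the helper function is hypothetical), the tangential and normal components can be computed directly:

```python
import math

def acceleration_components(speed, speed_rate, radius):
    """Return (tangential, normal) acceleration components on a curved path.

    speed      -- instantaneous speed v (m/s)
    speed_rate -- dv/dt, rate of change of speed (m/s^2)
    radius     -- instantaneous radius of curvature r (m)
    """
    a_tangential = speed_rate        # component along the unit tangent
    a_normal = speed ** 2 / radius   # component along the inward unit normal
    return a_tangential, a_normal

# A vehicle at 20 m/s, speeding up at 1.5 m/s^2, on a bend of radius 50 m:
a_t, a_n = acceleration_components(20.0, 1.5, 50.0)
print(a_t, a_n, math.hypot(a_t, a_n))  # 1.5, 8.0, total magnitude ≈ 8.14 m/s^2
```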
Special cases
Uniform acceleration
Uniform or constant acceleration is a type of motion in which the velocity of an object changes by an equal amount in every equal time period.
A frequently cited example of uniform acceleration is that of an object in free fall in a uniform gravitational field. The acceleration of a falling body in the absence of resistances to motion is dependent only on the gravitational field strength $\mathbf{g}$ (also called acceleration due to gravity). By Newton's second law the force $\mathbf{F_g}$ acting on a body is given by:
$$\mathbf{F_g} = m\mathbf{g}.$$
Because of the simple analytic properties of the case of constant acceleration, there are simple formulas relating the displacement, initial and time-dependent velocities, and acceleration to the time elapsed:
$$\mathbf{v}(t) = \mathbf{v}_0 + \mathbf{a}t$$
$$\mathbf{s}(t) = \mathbf{s}_0 + \mathbf{v}_0 t + \tfrac{1}{2}\mathbf{a}t^2$$
$$v^2 = v_0^2 + 2a(s - s_0),$$
where
$t$ is the elapsed time,
$\mathbf{s}_0$ is the initial displacement from the origin,
$\mathbf{s}(t)$ is the displacement from the origin at time $t$,
$\mathbf{v}_0$ is the initial velocity,
$\mathbf{v}(t)$ is the velocity at time $t$, and
$\mathbf{a}$ is the uniform rate of acceleration.
In particular, the motion can be resolved into two orthogonal parts, one of constant velocity and the other according to the above equations. As Galileo showed, the net result is parabolic motion, which describes, e.g., the trajectory of a projectile in vacuum near the surface of Earth.
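A short numerical sketch of the constant-acceleration formulas above (the function and values are invented for illustration):

```python
def uniform_motion(s0, v0, a, t):
    """Displacement and velocity under constant acceleration.

    s0 -- initial displacement (m), v0 -- initial velocity (m/s),
    a  -- constant acceleration (m/s^2), t -- elapsed time (s)
    """
    v = v0 + a * t
    s = s0 + v0 * t + 0.5 * a * t ** 2
    return s, v

# Free fall from rest for 2 s, taking downward as positive and g ≈ 9.81 m/s^2:
s, v = uniform_motion(0.0, 0.0, 9.81, 2.0)
print(s, v)  # 19.62 m fallen, reaching 19.62 m/s
```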
Circular motion
In uniform circular motion, that is, moving with constant speed along a circular path, a particle experiences an acceleration resulting from the change of the direction of the velocity vector, while its magnitude remains constant. The derivative of the location of a point on a curve with respect to time, i.e. its velocity, turns out to be always exactly tangential to the curve, and thus orthogonal to the radius at that point. Since in uniform motion the velocity in the tangential direction does not change, the acceleration must be in the radial direction, pointing to the center of the circle. This acceleration constantly changes the direction of the velocity to be tangent at the neighboring point, thereby rotating the velocity vector along the circle.
For a given speed $v$, the magnitude of this geometrically caused acceleration (centripetal acceleration) is inversely proportional to the radius $r$ of the circle, and increases as the square of this speed:
$$a_c = \frac{v^2}{r}.$$
For a given angular velocity $\omega$, the centripetal acceleration is directly proportional to the radius $r$. This is due to the dependence of velocity on the radius $r$.
Expressing the centripetal acceleration vector in polar components, where $\mathbf{r}$ is a vector from the centre of the circle to the particle with magnitude equal to this distance, and considering the orientation of the acceleration towards the center, yields
$$\mathbf{a}_c = -\frac{v^2}{|\mathbf{r}|}\cdot\frac{\mathbf{r}}{|\mathbf{r}|}.$$
As usual in rotations, the speed $v$ of a particle may be expressed as an angular speed $\omega$ with respect to a point at the distance $r$ as
$$v = \omega r.$$
Thus
$$\mathbf{a}_c = -\omega^2 \mathbf{r}.$$
This acceleration and the mass of the particle determine the necessary centripetal force, directed toward the centre of the circle, as the net force acting on this particle to keep it in this uniform circular motion. The so-called 'centrifugal force', appearing to act outward on the body, is a pseudo force experienced in the frame of reference of the body in circular motion, due to the body's linear momentum, a vector tangent to the circle of motion.
In nonuniform circular motion, i.e., when the speed along the curved path is changing, the acceleration has a non-zero component tangential to the curve, and is not confined to the principal normal, which directs to the center of the osculating circle, that determines the radius $r$ for the centripetal acceleration. The tangential component is given by the angular acceleration $\alpha$, i.e., the rate of change $\alpha = \dot{\omega}$ of the angular speed $\omega$, times the radius $r$. That is,
$$a_t = r\alpha.$$
The sign of the tangential component of the acceleration is determined by the sign of the angular acceleration ($\alpha$), and the tangent is always directed at right angles to the radius vector.
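The circular-motion relations above can be checked numerically; in this sketch the angular speed, angular acceleration, and radius are arbitrary illustrative values:

```python
def circular_acceleration(omega, alpha, radius):
    """Centripetal and tangential acceleration magnitudes for motion on a circle.

    omega  -- angular speed (rad/s), alpha -- angular acceleration (rad/s^2),
    radius -- circle radius (m)
    """
    a_centripetal = omega ** 2 * radius   # directed toward the centre
    a_tangential = alpha * radius         # directed along the tangent
    return a_centripetal, a_tangential

# Example: omega = 2 rad/s, alpha = 0.5 rad/s^2, r = 3 m
print(circular_acceleration(2.0, 0.5, 3.0))  # (12.0, 1.5)
```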
Relation to relativity
Special relativity
The special theory of relativity describes the behavior of objects traveling relative to other objects at speeds approaching that of light in vacuum. Newtonian mechanics is revealed to be an approximation to reality, valid to great accuracy at lower speeds. As the relevant speeds increase toward the speed of light, acceleration no longer follows classical equations.
As speeds approach that of light, the acceleration produced by a given force decreases, becoming infinitesimally small as light speed is approached; an object with mass can approach this speed asymptotically, but never reach it.
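This fall-off can be illustrated with the standard special-relativity result for a force applied parallel to the motion, where the acceleration produced is reduced by a factor of γ³ (γ being the Lorentz factor); the sketch below, with invented force and mass values, illustrates that textbook formula rather than anything derived in this article:

```python
C = 299_792_458.0  # speed of light (m/s)

def longitudinal_acceleration(force, mass, speed):
    """Acceleration from a force parallel to the velocity: a = F / (gamma**3 * m)."""
    gamma = 1.0 / (1.0 - (speed / C) ** 2) ** 0.5
    return force / (gamma ** 3 * mass)

# The same 1 N force on a 1 kg body at increasing fractions of c:
for fraction in (0.0, 0.5, 0.9, 0.99):
    print(fraction, longitudinal_acceleration(1.0, 1.0, fraction * C))
```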
General relativity
Unless the state of motion of an object is known, it is impossible to distinguish whether an observed force is due to gravity or to acceleration—gravity and inertial acceleration have identical effects. Albert Einstein called this the equivalence principle, and said that only observers who feel no force at all—including the force of gravity—are justified in concluding that they are not accelerating.
Conversions
See also
Acceleration (differential geometry)
Four-vector: making the connection between space and time explicit
Gravitational acceleration
Inertia
Orders of magnitude (acceleration)
Shock (mechanics)
Shock and vibration data logger, for measuring 3-axis acceleration
Space travel using constant acceleration
Specific force
References
External links
Acceleration Calculator: simple acceleration unit converter
Acceleration Conversion Calculator: converts units from metres per second squared, kilometres per second squared, millimetres per second squared and more.
|
https://en.wikipedia.org/wiki/Apoptosis
|
Apoptosis (from Ancient Greek apóptōsis, 'falling off') is a form of programmed cell death that occurs in multicellular organisms and in some eukaryotic, single-celled microorganisms such as yeast. Biochemical events lead to characteristic cell changes (morphology) and death. These changes include blebbing, cell shrinkage, nuclear fragmentation, chromatin condensation, DNA fragmentation, and mRNA decay. The average adult human loses between 50 and 70 billion cells each day due to apoptosis. For an average human child between eight and fourteen years old, roughly 20 to 30 billion cells are lost each day.
In contrast to necrosis, which is a form of traumatic cell death that results from acute cellular injury, apoptosis is a highly regulated and controlled process that confers advantages during an organism's life cycle. For example, the separation of fingers and toes in a developing human embryo occurs because cells between the digits undergo apoptosis. Unlike necrosis, apoptosis produces cell fragments called apoptotic bodies that phagocytes are able to engulf and remove before the contents of the cell can spill out onto surrounding cells and cause damage to them.
Because apoptosis cannot stop once it has begun, it is a highly regulated process. Apoptosis can be initiated through one of two pathways. In the intrinsic pathway the cell kills itself because it senses cell stress, while in the extrinsic pathway the cell kills itself because of signals from other cells. Weak external signals may also activate the intrinsic pathway of apoptosis. Both pathways induce cell death by activating caspases, which are proteases, or enzymes that degrade proteins. The two pathways both activate initiator caspases, which then activate executioner caspases, which then kill the cell by degrading proteins indiscriminately.
In addition to its importance as a biological phenomenon, defective apoptotic processes have been implicated in a wide variety of diseases. Excessive apoptosis causes atrophy, whereas an insufficient amount results in uncontrolled cell proliferation, such as cancer. Some factors like Fas receptors and caspases promote apoptosis, while some members of the Bcl-2 family of proteins inhibit apoptosis.
Discovery and etymology
German scientist Carl Vogt was the first to describe the principle of apoptosis in 1842. In 1885, anatomist Walther Flemming delivered a more precise description of the process of programmed cell death. However, it was not until 1965 that the topic was resurrected. While studying tissues using electron microscopy, John Kerr at the University of Queensland was able to distinguish apoptosis from traumatic cell death. Following the publication of a paper describing the phenomenon, Kerr was invited to join Alastair Currie, as well as Andrew Wyllie, who was Currie's graduate student, at University of Aberdeen. In 1972, the trio published a seminal article in the British Journal of Cancer. Kerr had initially used the term programmed cell necrosis, but in the article, the process of natural cell death was called apoptosis. Kerr, Wyllie and Currie credited James Cormack, a professor of Greek language at University of Aberdeen, with suggesting the term apoptosis. Kerr received the Paul Ehrlich and Ludwig Darmstaedter Prize on March 14, 2000, for his description of apoptosis. He shared the prize with Boston biologist H. Robert Horvitz.
For many years, neither "apoptosis" nor "programmed cell death" was a highly cited term. Two discoveries brought cell death from obscurity to a major field of research: identification of the first component of the cell death control and effector mechanisms, and linkage of abnormalities in cell death to human disease, in particular cancer. This occurred in 1988 when it was shown that BCL2, the gene responsible for follicular lymphoma, encoded a protein that inhibited cell death.
The 2002 Nobel Prize in Medicine was awarded to Sydney Brenner, H. Robert Horvitz and John Sulston for their work identifying genes that control apoptosis. The genes were identified by studies in the nematode C. elegans and homologues of these genes function in humans to regulate apoptosis.
In Greek, apoptosis translates to the "falling off" of leaves from a tree. Cormack, professor of Greek language, reintroduced the term for medical use as it had a medical meaning for the Greeks over two thousand years before. Hippocrates used the term to mean "the falling off of the bones". Galen extended its meaning to "the dropping of the scabs". Cormack was no doubt aware of this usage when he suggested the name. Debate continues over the correct pronunciation, with opinion divided between a pronunciation with the second p silent and one with the second p pronounced. In English, the p of the Greek -pt- consonant cluster is typically silent at the beginning of a word (e.g. pterodactyl, Ptolemy), but articulated when used in combining forms preceded by a vowel, as in helicopter or the orders of insects: diptera, lepidoptera, etc.
In the original Kerr, Wyllie & Currie paper, there is a footnote regarding the pronunciation:
We are most grateful to Professor James Cormack of the Department of Greek, University of Aberdeen, for suggesting this term. The word "apoptosis" (ἀπόπτωσις) is used in Greek to describe the "dropping off" or "falling off" of petals from flowers, or leaves from trees. To show the derivation clearly, we propose that the stress should be on the penultimate syllable, the second half of the word being pronounced like "ptosis" (with the "p" silent), which comes from the same root "to fall", and is already used to describe the drooping of the upper eyelid.
Activation mechanisms
The initiation of apoptosis is tightly regulated by activation mechanisms, because once apoptosis has begun, it inevitably leads to the death of the cell. The two best-understood activation mechanisms are the intrinsic pathway (also called the mitochondrial pathway) and the extrinsic pathway. The intrinsic pathway is activated by intracellular signals generated when cells are stressed and depends on the release of proteins from the intermembrane space of mitochondria. The extrinsic pathway is activated by extracellular ligands binding to cell-surface death receptors, which leads to the formation of the death-inducing signaling complex (DISC).
A cell initiates intracellular apoptotic signaling in response to a stress, which may bring about cell suicide. The binding of nuclear receptors by glucocorticoids, heat, radiation, nutrient deprivation, viral infection, hypoxia, increased intracellular concentration of free fatty acids and increased intracellular calcium concentration, for example, by damage to the membrane, can all trigger the release of intracellular apoptotic signals by a damaged cell. A number of cellular components, such as poly ADP ribose polymerase, may also help regulate apoptosis. Single cell fluctuations have been observed in experimental studies of stress induced apoptosis.
Before the actual process of cell death is precipitated by enzymes, apoptotic signals must cause regulatory proteins to initiate the apoptosis pathway. This step allows those signals to cause cell death, or the process to be stopped, should the cell no longer need to die. Several proteins are involved, but two main methods of regulation have been identified: the targeting of mitochondria functionality, or directly transducing the signal via adaptor proteins to the apoptotic mechanisms. An extrinsic pathway for initiation identified in several toxin studies is an increase in calcium concentration within a cell caused by drug activity, which also can cause apoptosis via a calcium binding protease calpain.
Intrinsic pathway
The intrinsic pathway is also known as the mitochondrial pathway. Mitochondria are essential to multicellular life. Without them, a cell ceases to respire aerobically and quickly dies. This fact forms the basis for some apoptotic pathways. Apoptotic proteins that target mitochondria affect them in different ways. They may cause mitochondrial swelling through the formation of membrane pores, or they may increase the permeability of the mitochondrial membrane and cause apoptotic effectors to leak out. There is also a growing body of evidence indicating that nitric oxide is able to induce apoptosis by helping to dissipate the membrane potential of mitochondria and therefore make it more permeable. Nitric oxide has been implicated in initiating and inhibiting apoptosis through its possible action as a signal molecule of subsequent pathways that activate apoptosis.
During apoptosis, cytochrome c is released from mitochondria through the actions of the proteins Bax and Bak. The mechanism of this release is enigmatic, but appears to stem from a multitude of Bax/Bak homo- and hetero-dimers inserted into the outer membrane. Once cytochrome c is released it binds with apoptotic protease activating factor-1 (Apaf-1) and ATP, which then bind to pro-caspase-9 to create a protein complex known as an apoptosome. The apoptosome cleaves the pro-caspase to its active form of caspase-9, which in turn cleaves and activates pro-caspase-3 into the effector caspase-3.
Mitochondria also release proteins known as SMACs (second mitochondria-derived activator of caspases) into the cell's cytosol following the increase in permeability of the mitochondria membranes. SMAC binds to proteins that inhibit apoptosis (IAPs) thereby deactivating them, and preventing the IAPs from arresting the process and therefore allowing apoptosis to proceed. IAP also normally suppresses the activity of a group of cysteine proteases called caspases, which carry out the degradation of the cell. Therefore, the actual degradation enzymes can be seen to be indirectly regulated by mitochondrial permeability.
Extrinsic pathway
Two theories of the direct initiation of apoptotic mechanisms in mammals have been suggested: the TNF-induced (tumor necrosis factor) model and the Fas-Fas ligand-mediated model, both involving receptors of the TNF receptor (TNFR) family coupled to extrinsic signals.
TNF pathway
TNF-alpha is a cytokine produced mainly by activated macrophages, and is the major extrinsic mediator of apoptosis. Most cells in the human body have two receptors for TNF-alpha: TNFR1 and TNFR2. The binding of TNF-alpha to TNFR1 has been shown to initiate the pathway that leads to caspase activation via the intermediate membrane proteins TNF receptor-associated death domain (TRADD) and Fas-associated death domain protein (FADD). cIAP1/2 can inhibit TNF-α signaling by binding to TRAF2. FLIP inhibits the activation of caspase-8. Binding of this receptor can also indirectly lead to the activation of transcription factors involved in cell survival and inflammatory responses. However, signalling through TNFR1 might also induce apoptosis in a caspase-independent manner. The link between TNF-alpha and apoptosis shows why an abnormal production of TNF-alpha plays a fundamental role in several human diseases, especially in autoimmune diseases. The TNF-alpha receptor superfamily also includes death receptors (DRs), such as DR4 and DR5. These receptors bind to the protein TRAIL and mediate apoptosis. Apoptosis is known to be one of the primary mechanisms of targeted cancer therapy. Luminescent iridium complex-peptide hybrids (IPHs) have recently been designed, which mimic TRAIL and bind to death receptors on cancer cells, thereby inducing their apoptosis.
Fas pathway
The Fas receptor (first apoptosis signal; also known as Apo-1 or CD95) is a transmembrane protein of the TNF family which binds the Fas ligand (FasL). The interaction between Fas and FasL results in the formation of the death-inducing signaling complex (DISC), which contains FADD, caspase-8 and caspase-10. In some types of cells (type I), processed caspase-8 directly activates other members of the caspase family, and triggers the execution of apoptosis of the cell. In other types of cells (type II), the Fas-DISC starts a feedback loop that spirals into increasing release of proapoptotic factors from mitochondria and the amplified activation of caspase-8.
Common components
Following TNF-R1 and Fas activation in mammalian cells, a balance between proapoptotic (BAX, BID, BAK, or BAD) and anti-apoptotic (Bcl-Xl and Bcl-2) members of the Bcl-2 family is established. This balance is the proportion of proapoptotic homodimers that form in the outer membrane of the mitochondrion. The proapoptotic homodimers are required to make the mitochondrial membrane permeable for the release of caspase activators such as cytochrome c and SMAC. Control of proapoptotic proteins under normal cell conditions of nonapoptotic cells is incompletely understood, but in general, Bax or Bak are activated by the activation of BH3-only proteins, part of the Bcl-2 family.
Caspases
Caspases play the central role in the transduction of ER apoptotic signals. Caspases are highly conserved, cysteine-dependent aspartate-specific proteases. There are two types of caspases: initiator caspases (caspases 2, 8, 9, 10, 11 and 12) and effector caspases (caspases 3, 6 and 7). The activation of initiator caspases requires binding to a specific oligomeric activator protein. Effector caspases are then activated by these active initiator caspases through proteolytic cleavage. The active effector caspases then proteolytically degrade a host of intracellular proteins to carry out the cell death program.
Caspase-independent apoptotic pathway
There also exists a caspase-independent apoptotic pathway that is mediated by AIF (apoptosis-inducing factor).
Apoptosis model in amphibians
The frog Xenopus laevis serves as an ideal model system for the study of the mechanisms of apoptosis. Iodine and thyroxine stimulate the spectacular apoptosis of the cells of the larval gills, tail and fins during amphibian metamorphosis, and stimulate the evolution of the nervous system, transforming the aquatic, vegetarian tadpole into the terrestrial, carnivorous frog.
Negative regulators of apoptosis
Negative regulation of apoptosis inhibits cell death signaling pathways, helping tumors to evade cell death and develop drug resistance. The ratio between anti-apoptotic (Bcl-2) and pro-apoptotic (Bax) proteins determines whether a cell lives or dies. Many families of proteins act as negative regulators, categorized into either antiapoptotic factors, such as IAPs and Bcl-2 proteins, or prosurvival factors, such as cFLIP, BNIP3, FADD, Akt, and NF-κB.
Proteolytic caspase cascade: Killing the cell
Many pathways and signals lead to apoptosis, but these converge on a single mechanism that actually causes the death of the cell. After a cell receives a stimulus, it undergoes organized degradation of cellular organelles by activated proteolytic caspases. In addition to the destruction of cellular organelles, mRNA is rapidly and globally degraded by a mechanism that is not yet fully characterized. mRNA decay is triggered very early in apoptosis.
A cell undergoing apoptosis shows a series of characteristic morphological changes. Early alterations include:
Cell shrinkage and rounding occur because of the retraction of lamellipodia and the breakdown of the proteinaceous cytoskeleton by caspases.
The cytoplasm appears dense, and the organelles appear tightly packed.
Chromatin undergoes condensation into compact patches against the nuclear envelope (also known as the perinuclear envelope) in a process known as pyknosis, a hallmark of apoptosis.
The nuclear envelope becomes discontinuous and the DNA inside it is fragmented in a process referred to as karyorrhexis. The nucleus breaks into several discrete chromatin bodies or nucleosomal units due to the degradation of DNA.
Apoptosis progresses quickly and its products are quickly removed, making it difficult to detect or visualize on classical histology sections. During karyorrhexis, endonuclease activation leaves short DNA fragments, regularly spaced in size. These give a characteristic "laddered" appearance on agar gel after electrophoresis. Tests for DNA laddering differentiate apoptosis from ischemic or toxic cell death.
Apoptotic cell disassembly
Before the apoptotic cell is disposed of, there is a process of disassembly. There are three recognized steps in apoptotic cell disassembly:
Membrane blebbing: The cell membrane shows irregular buds known as blebs. Initially these are smaller surface blebs. Later these can grow into larger so-called dynamic membrane blebs. An important regulator of apoptotic cell membrane blebbing is ROCK1 (rho associated coiled-coil-containing protein kinase 1).
Formation of membrane protrusions: Some cell types, under specific conditions, may develop different types of long, thin extensions of the cell membrane called membrane protrusions. Three types have been described: microtubule spikes, apoptopodia (feet of death), and beaded apoptopodia (the latter having a beads-on-a-string appearance). Pannexin 1 is an important component of membrane channels involved in the formation of apoptopodia and beaded apoptopodia.
Fragmentation: The cell breaks apart into multiple vesicles called apoptotic bodies, which undergo phagocytosis. The plasma membrane protrusions may help bring apoptotic bodies closer to phagocytes.
Removal of dead cells
The removal of dead cells by neighboring phagocytic cells has been termed efferocytosis.
Dying cells that undergo the final stages of apoptosis display phagocytotic molecules, such as phosphatidylserine, on their cell surface. Phosphatidylserine is normally found on the inner leaflet surface of the plasma membrane, but is redistributed during apoptosis to the extracellular surface by a protein known as scramblase. These molecules mark the cell for phagocytosis by cells possessing the appropriate receptors, such as macrophages. The removal of dying cells by phagocytes occurs in an orderly manner without eliciting an inflammatory response. During apoptosis cellular RNA and DNA are separated from each other and sorted to different apoptotic bodies; separation of RNA is initiated as nucleolar segregation.
Pathway knock-outs
Many knock-outs have been made in the apoptosis pathways to test the function of each of the proteins. Several caspases, in addition to APAF1 and FADD, have been mutated to determine the resulting phenotype. In order to create a tumor necrosis factor (TNF) knockout, an exon containing the nucleotides 3704–5364 was removed from the gene. This exon encodes a portion of the mature TNF domain, as well as the leader sequence, which is a highly conserved region necessary for proper intracellular processing. TNF-/- mice develop normally and have no gross structural or morphological abnormalities. However, upon immunization with SRBC (sheep red blood cells), these mice demonstrated a deficiency in the maturation of an antibody response; they were able to generate normal levels of IgM, but could not develop specific IgG levels. Apaf-1 is the protein that turns on caspase 9 by cleavage to begin the caspase cascade that leads to apoptosis. Since a -/- mutation in the APAF-1 gene is embryonic lethal, a gene trap strategy was used in order to generate an APAF-1 -/- mouse. This assay is used to disrupt gene function by creating an intragenic gene fusion. When an APAF-1 gene trap is introduced into cells, many morphological changes occur, such as spina bifida, the persistence of interdigital webs, and open brain. In addition, after embryonic day 12.5, the brain of the embryos showed several structural changes. APAF-1 -/- cells are protected from apoptosis stimuli such as irradiation. A BAX-1 knock-out mouse exhibits normal forebrain formation and decreased programmed cell death in some neuronal populations and in the spinal cord, leading to an increase in motor neurons.
The caspase proteins are integral parts of the apoptosis pathway, so it follows that knock-outs have varying damaging results. A caspase 9 knock-out leads to a severe brain malformation. A caspase 8 knock-out leads to cardiac failure and thus embryonic lethality. However, with the use of cre-lox technology, a caspase 8 knock-out has been created that exhibits an increase in peripheral T cells, an impaired T cell response, and a defect in neural tube closure. These mice were found to be resistant to apoptosis mediated by CD95, TNFR, etc. but not resistant to apoptosis caused by UV irradiation, chemotherapeutic drugs, and other stimuli. Finally, a caspase 3 knock-out was characterized by ectopic cell masses in the brain and abnormal apoptotic features such as membrane blebbing or nuclear fragmentation. A remarkable feature of these KO mice is that they have a very restricted phenotype: Casp3, 9, and APAF-1 KO mice have deformations of neural tissue, and FADD and Casp 8 KO mice showed defective heart development; however, in both types of KO other organs developed normally and some cell types were still sensitive to apoptotic stimuli, suggesting that unknown proapoptotic pathways exist.
Methods for distinguishing apoptotic from necrotic cells
Label-free live cell imaging, time-lapse microscopy, flow fluorocytometry, and transmission electron microscopy can be used to compare apoptotic and necrotic cells. There are also various biochemical techniques for analysis of cell surface markers (phosphatidylserine exposure versus cell permeability by flow cytometry), cellular markers such as DNA fragmentation (flow cytometry), caspase activation, Bid cleavage, and cytochrome c release (Western blotting). Supernatant screening for caspases, HMGB1, and cytokeratin 18 release can distinguish primary from secondary necrotic cells. However, no distinct surface or biochemical markers of necrotic cell death have been identified yet, and only negative markers are available. These include absence of apoptotic markers (caspase activation, cytochrome c release, and oligonucleosomal DNA fragmentation) and differential kinetics of cell death markers (phosphatidylserine exposure and cell membrane permeabilization). A selection of techniques that can be used to distinguish apoptosis from necroptotic cells can be found in these references.
Implication in disease
Defective pathways
The many different types of apoptotic pathways contain a multitude of different biochemical components, many of them not yet understood. As a pathway is more or less sequential in nature, removing or modifying one component leads to an effect in another. In a living organism, this can have disastrous effects, often in the form of disease or disorder. A discussion of every disease caused by modification of the various apoptotic pathways would be impractical, but the concept underlying each one is the same: the normal functioning of the pathway has been disrupted in such a way as to impair the ability of the cell to undergo normal apoptosis. This results in a cell that lives past its "use-by date" and is able to replicate and pass on any faulty machinery to its progeny, increasing the likelihood of the cell's becoming cancerous or diseased.
A recently described example of this concept in action can be seen in the development of a lung cancer called NCI-H460. The X-linked inhibitor of apoptosis protein (XIAP) is overexpressed in cells of the H460 cell line. XIAPs bind to the processed form of caspase-9 and suppress the activity of the apoptotic activator cytochrome c; overexpression therefore leads to a decrease in the number of proapoptotic agonists. As a consequence, the balance of anti-apoptotic and proapoptotic effectors is upset in favour of the former, and the damaged cells continue to replicate despite being directed to die. Defects in regulation of apoptosis in cancer cells often occur at the level of control of transcription factors. As a particular example, defects in molecules that control the transcription factor NF-κB in cancer change the mode of transcriptional regulation and the response to apoptotic signals, curtailing the cell's dependence on the tissue to which it belongs. This degree of independence from external survival signals can enable cancer metastasis.
Dysregulation of p53
The tumor-suppressor protein p53 accumulates when DNA is damaged due to a chain of biochemical factors. Part of this pathway includes alpha-interferon and beta-interferon, which induce transcription of the p53 gene, resulting in the increase of p53 protein level and enhancement of cancer cell-apoptosis. p53 prevents the cell from replicating by stopping the cell cycle at G1, or interphase, to give the cell time to repair; however, it will induce apoptosis if damage is extensive and repair efforts fail. Any disruption to the regulation of the p53 or interferon genes will result in impaired apoptosis and the possible formation of tumors.
Inhibition
Inhibition of apoptosis can result in a number of cancers, inflammatory diseases, and viral infections. It was originally believed that the associated accumulation of cells was due to an increase in cellular proliferation, but it is now known that it is also due to a decrease in cell death. The most common of these diseases is cancer, the disease of excessive cellular proliferation, which is often characterized by an overexpression of IAP family members. As a result, the malignant cells experience an abnormal response to apoptosis induction: cycle-regulating genes (such as p53, ras or c-myc) are mutated or inactivated in diseased cells, and further genes (such as bcl-2) also modify their expression in tumors. Some apoptotic factors are vital during mitochondrial respiration, e.g. cytochrome c. Pathological inactivation of apoptosis in cancer cells is correlated with frequent respiratory metabolic shifts toward glycolysis (an observation known as the "Warburg hypothesis").
HeLa cell
Apoptosis in HeLa cells is inhibited by proteins produced by the cell; these inhibitory proteins target retinoblastoma tumor-suppressing proteins. These tumor-suppressing proteins regulate the cell cycle, but are rendered inactive when bound to an inhibitory protein. HPV E6 and E7 are inhibitory proteins expressed by the human papillomavirus, HPV being responsible for the formation of the cervical tumor from which HeLa cells are derived. HPV E6 causes p53, which regulates the cell cycle, to become inactive. HPV E7 binds to retinoblastoma tumor-suppressing proteins and limits their ability to control cell division. These two inhibitory proteins are partially responsible for HeLa cells' immortality by preventing apoptosis from occurring.
Treatments
The main method of treatment for potential death from signaling-related diseases involves either increasing or decreasing the susceptibility of apoptosis in diseased cells, depending on whether the disease is caused by either the inhibition of or excess apoptosis. For instance, treatments aim to restore apoptosis to treat diseases with deficient cell death and to increase the apoptotic threshold to treat diseases involved with excessive cell death. To stimulate apoptosis, one can increase the number of death receptor ligands (such as TNF or TRAIL), antagonize the anti-apoptotic Bcl-2 pathway, or introduce Smac mimetics to inhibit the inhibitor (IAPs). The addition of agents such as Herceptin, Iressa, or Gleevec works to stop cells from cycling and causes apoptosis activation by blocking growth and survival signaling further upstream. Finally, disrupting p53–MDM2 complexes displaces p53 and activates the p53 pathway, leading to cell cycle arrest and apoptosis. Many different methods can be used either to stimulate or to inhibit apoptosis in various places along the death signaling pathway.
Apoptosis is a multi-step, multi-pathway cell-death programme that is inherent in every cell of the body. In cancer, the apoptosis cell-division ratio is altered. Cancer treatment by chemotherapy and irradiation kills target cells primarily by inducing apoptosis.
Hyperactive apoptosis
On the other hand, loss of control of cell death (resulting in excess apoptosis) can lead to neurodegenerative diseases, hematologic diseases, and tissue damage. Neurons that rely on mitochondrial respiration undergo apoptosis in neurodegenerative diseases such as Alzheimer's and Parkinson's (an observation known as the "Inverse Warburg hypothesis"). Moreover, there is an inverse epidemiological comorbidity between neurodegenerative diseases and cancer. The progression of HIV is directly linked to excess, unregulated apoptosis. In a healthy individual, the number of CD4+ lymphocytes is in balance with the cells generated by the bone marrow; however, in HIV-positive patients, this balance is lost due to an inability of the bone marrow to regenerate CD4+ cells. In the case of HIV, CD4+ lymphocytes die at an accelerated rate through uncontrolled apoptosis when stimulated.
At the molecular level, hyperactive apoptosis can be caused by defects in signaling pathways that regulate the Bcl-2 family proteins. Increased expression of apoptotic proteins such as BIM, or their decreased proteolysis, leads to cell death and can cause a number of pathologies, depending on the cells where excessive activity of BIM occurs. Cancer cells can escape apoptosis through mechanisms that suppress BIM expression or by increased proteolysis of BIM.
Treatments
Treatments aiming to inhibit apoptosis work to block specific caspases. Finally, the Akt protein kinase promotes cell survival through two pathways. Akt phosphorylates and inhibits Bad (a Bcl-2 family member), causing Bad to interact with the 14-3-3 scaffold, resulting in Bcl dissociation and thus cell survival. Akt also activates IKKα, which leads to NF-κB activation and cell survival. Active NF-κB induces the expression of anti-apoptotic genes such as Bcl-2, resulting in inhibition of apoptosis. NF-κB has been found to play both an antiapoptotic role and a proapoptotic role depending on the stimuli utilized and the cell type.
HIV progression
The progression of the human immunodeficiency virus infection into AIDS is due primarily to the depletion of CD4+ T-helper lymphocytes in a manner that is too rapid for the body's bone marrow to replenish the cells, leading to a compromised immune system. One of the mechanisms by which T-helper cells are depleted is apoptosis, which results from a series of biochemical pathways:
HIV enzymes deactivate anti-apoptotic Bcl-2. This does not directly cause cell death but primes the cell for apoptosis should the appropriate signal be received. In parallel, these enzymes activate proapoptotic procaspase-8, which does directly activate the mitochondrial events of apoptosis.
HIV may increase the level of cellular proteins that prompt Fas-mediated apoptosis.
HIV proteins decrease the amount of CD4 glycoprotein marker present on the cell membrane.
Released viral particles and proteins present in extracellular fluid are able to induce apoptosis in nearby "bystander" T helper cells.
HIV decreases the production of molecules involved in marking the cell for apoptosis, giving the virus time to replicate and continue releasing apoptotic agents and virions into the surrounding tissue.
The infected CD4+ cell may also receive the death signal from a cytotoxic T cell.
Cells may also die as direct consequences of viral infections. HIV-1 expression induces tubular cell G2/M arrest and apoptosis. The progression from HIV to AIDS is not immediate or even necessarily rapid; HIV's cytotoxic activity toward CD4+ lymphocytes is classified as AIDS once a given patient's CD4+ cell count falls below 200 cells per cubic millimetre of blood.
Researchers from Kumamoto University in Japan have developed a new method to eradicate HIV in viral reservoir cells, named "Lock-in and apoptosis." Using the synthesized compound Heptanoylphosphatidyl L-Inositol Pentakisphosphate (or L-Hippo) to bind strongly to the HIV protein PR55Gag, they were able to suppress viral budding. By suppressing viral budding, the researchers were able to trap HIV in the cell and allow the cell to undergo apoptosis (natural cell death). Associate Professor Mikako Fujita has stated that the approach is not yet available to HIV patients because the research team has to conduct further research on combining the drug therapy that currently exists with this "Lock-in and apoptosis" approach to lead to complete recovery from HIV.
Viral infection
Viral induction of apoptosis occurs when one or several cells of a living organism are infected with a virus, leading to cell death. Cell death in organisms is necessary for the normal development of cells and for cell cycle maturation. It is also important in maintaining the regular functions and activities of cells.
Viruses can trigger apoptosis of infected cells via a range of mechanisms including:
Receptor binding
Activation of protein kinase R (PKR)
Interaction with p53
Expression of viral proteins coupled to MHC proteins on the surface of the infected cell, allowing recognition by cells of the immune system (such as Natural Killer and cytotoxic T cells) that then induce the infected cell to undergo apoptosis.
Canine distemper virus (CDV) is known to cause apoptosis in central nervous system and lymphoid tissue of infected dogs in vivo and in vitro.
Apoptosis caused by CDV is typically induced via the extrinsic pathway, which activates caspases that disrupt cellular function and eventually lead to the cell's death. In normal cells, CDV activates caspase-8 first, which works as the initiator protein, followed by the executioner protein caspase-3. However, apoptosis induced by CDV in HeLa cells does not involve the initiator protein caspase-8. HeLa cell apoptosis caused by CDV follows a different mechanism than that in Vero cell lines. This change in the caspase cascade suggests CDV induces apoptosis via the intrinsic pathway, excluding the need for the initiator caspase-8. The executioner protein is instead activated by internal stimuli caused by the viral infection, not by a caspase cascade.
The Oropouche virus (OROV) is found in the family Bunyaviridae. The study of apoptosis brought on by Bunyaviridae was initiated in 1996, when it was observed that the La Crosse virus induced apoptosis in the kidney cells of baby hamsters and in the brains of baby mice.
OROV is transmitted between humans by the biting midge (Culicoides paraensis). It is referred to as a zoonotic arbovirus and causes febrile illness, characterized by the onset of a sudden fever known as Oropouche fever.
The Oropouche virus also causes disruption in cultured cells – cells that are cultivated in distinct and specific conditions. An example of this can be seen in HeLa cells, whereby the cells begin to degenerate shortly after they are infected.
With the use of gel electrophoresis, it can be observed that OROV causes DNA fragmentation in HeLa cells. This can be interpreted by counting, measuring, and analyzing the cells of the Sub/G1 cell population. When HeLa cells are infected with OROV, cytochrome c is released from the membrane of the mitochondria into the cytosol of the cells. This type of interaction shows that apoptosis is activated via an intrinsic pathway.
In order for OROV to induce apoptosis, viral uncoating and viral internalization, along with the replication of cells, are necessary. Apoptosis in some viruses is activated by extracellular stimuli. However, studies have demonstrated that OROV infection causes apoptosis to be activated through intracellular stimuli and involves the mitochondria.
Many viruses encode proteins that can inhibit apoptosis. Several viruses encode viral homologs of Bcl-2. These homologs can inhibit proapoptotic proteins such as BAX and BAK, which are essential for the activation of apoptosis. Examples of viral Bcl-2 proteins include the Epstein-Barr virus BHRF1 protein and the adenovirus E1B 19K protein. Some viruses express caspase inhibitors that inhibit caspase activity; an example is the CrmA protein of cowpox viruses. A number of viruses can also block the effects of TNF and Fas. For example, the M-T2 protein of myxoma viruses can bind TNF, preventing it from binding the TNF receptor and inducing a response. Furthermore, many viruses express p53 inhibitors that can bind p53 and inhibit its transcriptional transactivation activity. As a consequence, p53 cannot induce apoptosis, since it cannot induce the expression of proapoptotic proteins. The adenovirus E1B-55K protein and the hepatitis B virus HBx protein are examples of viral proteins that can perform such a function.
Viruses can remain intact during apoptosis, particularly in the latter stages of infection. They can be exported in the apoptotic bodies that pinch off from the surface of the dying cell, and the fact that they are engulfed by phagocytes prevents the initiation of a host response. This favours the spread of the virus. Prions can cause apoptosis in neurons.
Plants
Programmed cell death in plants has a number of molecular similarities to that of animal apoptosis, but it also has differences, notable ones being the presence of a cell wall and the lack of an immune system that removes the pieces of the dead cell. Instead of an immune response, the dying cell synthesizes substances to break itself down and places them in a vacuole that ruptures as the cell dies. Additionally, plants do not contain phagocytic cells, which are essential in the process of breaking down and removing apoptotic bodies. Whether this whole process resembles animal apoptosis closely enough to warrant using the name apoptosis (as opposed to the more general programmed cell death) is unclear.
Caspase-independent apoptosis
The characterization of the caspases allowed the development of caspase inhibitors, which can be used to determine whether a cellular process involves active caspases. Using these inhibitors it was discovered that cells can die while displaying a morphology similar to apoptosis without caspase activation. Later studies linked this phenomenon to the release of AIF (apoptosis-inducing factor) from the mitochondria and its translocation into the nucleus mediated by its NLS (nuclear localization signal). Inside the mitochondria, AIF is anchored to the inner membrane. In order to be released, the protein is cleaved by a calcium-dependent calpain protease.
See also
Anoikis
Apaf-1
Apo2.7
Apoptotic DNA fragmentation
Atromentin – induces apoptosis in human leukemia U937 cells
Autolysis
Autophagy
Cisplatin
Cytotoxicity
Entosis
Ferroptosis
Homeostasis
Immunology
Necrobiosis
Necrosis
Necrotaxis
Nemosis
Mitotic catastrophe
p53
Paraptosis
Pseudoapoptosis
PI3K/AKT/mTOR pathway
Explanatory footnotes
Citations
General bibliography
External links
Apoptosis & cell surface
Apoptosis & Caspase 3, The Proteolysis Map – animation
Apoptosis & Caspase 8, The Proteolysis Map – animation
Apoptosis & Caspase 7, The Proteolysis Map – animation
Apoptosis MiniCOPE Dictionary – list of apoptosis terms and acronyms
Apoptosis (Programmed Cell Death) – The Virtual Library of Biochemistry, Molecular Biology and Cell Biology
Apoptosis Research Portal
Apoptosis Info: apoptosis protocols, articles, news, and recent publications.
Database of proteins involved in apoptosis
Apoptosis Video
Apoptosis Video (WEHI on YouTube)
The Mechanisms of Apoptosis Kimball's Biology Pages. Simple explanation of the mechanisms of apoptosis triggered by internal signals (bcl-2), along the caspase-9, caspase-3 and caspase-7 pathway; and by external signals (FAS and TNF), along the caspase 8 pathway. Accessed 25 March 2007.
WikiPathways – Apoptosis pathway
"Finding Cancer's Self-Destruct Button". CR magazine (Spring 2007). Article on apoptosis and cancer.
Xiaodong Wang's lecture: Introduction to Apoptosis
Robert Horvitz's Short Clip: Discovering Programmed Cell Death
The Bcl-2 Database
DeathBase: a database of proteins involved in cell death, curated by experts
European Cell Death Organization
Apoptosis signaling pathway created by Cusabio
|
https://en.wikipedia.org/wiki/Anus
|
The anus (plural: anuses or ani; from Latin anus, 'ring' or 'circle') is an opening at the opposite end of an animal's digestive tract from the mouth. Its function is to control the expulsion of feces, the residual semi-solid waste that remains after food digestion, which, depending on the type of animal, includes: matter which the animal cannot digest, such as bones; food material after the nutrients have been extracted, for example cellulose or lignin; ingested matter which would be toxic if it remained in the digestive tract; and dead or excess gut bacteria and other endosymbionts.
Amphibians, reptiles, and birds use the same orifice (known as the cloaca) for excreting liquid and solid wastes, for copulation and egg-laying. Monotreme mammals also have a cloaca, which is thought to be a feature inherited from the earliest amniotes via the therapsids. Marsupials have a single orifice for excreting both solids and liquids and, in females, a separate vagina for reproduction. Female placental mammals have completely separate orifices for defecation, urination, and reproduction; males have one opening for defecation and another for both urination and reproduction, although the channels flowing to that orifice are almost completely separate.
The development of the anus was an important stage in the evolution of multicellular animals. It appears to have happened at least twice, following different paths in protostomes and deuterostomes. This accompanied or facilitated other important evolutionary developments: the bilaterian body plan, the coelom, and metamerism, in which the body was built of repeated "modules" which could later specialize, such as the heads of most arthropods, which are composed of fused, specialized segments.
Among comb jellies there are species with one and sometimes two permanent anuses; other species, such as the warty comb jelly, grow a transient anus which then disappears when it is no longer needed.
Development
In animals at least as complex as an earthworm, the embryo forms a dent on one side, the blastopore, which deepens to become the archenteron, the first phase in the growth of the gut. In deuterostomes, the original dent becomes the anus while the gut eventually tunnels through to make another opening, which forms the mouth. The protostomes were so named because it was thought that in their embryos the dent formed the mouth first (proto– meaning "first") and the anus was formed later at the opening made by the other end of the gut. Research from 2001 shows that in protostomes the edges of the dent close up in the middle, leaving openings at the ends which become the mouth and anus.
See also
References
External links
|
https://en.wikipedia.org/wiki/Amphetamine
|
Amphetamine (contracted from alpha-methylphenethylamine) is a central nervous system (CNS) stimulant that is used in the treatment of attention deficit hyperactivity disorder (ADHD), narcolepsy, and obesity. Amphetamine was discovered as a chemical in 1887 by Lazăr Edeleanu, and then as a drug in the late 1920s. It exists as two enantiomers: levoamphetamine and dextroamphetamine. Amphetamine properly refers to a specific chemical, the racemic free base, which is equal parts of the two enantiomers in their pure amine forms. The term is frequently used informally to refer to any combination of the enantiomers, or to either of them alone. Historically, it has been used to treat nasal congestion and depression. Amphetamine is also used as an athletic performance enhancer and cognitive enhancer, and recreationally as an aphrodisiac and euphoriant. It is a prescription drug in many countries, and unauthorized possession and distribution of amphetamine are often tightly controlled due to the significant health risks associated with recreational use.
The first amphetamine pharmaceutical was Benzedrine, a brand which was used to treat a variety of conditions. Currently, pharmaceutical amphetamine is prescribed as racemic amphetamine, Adderall, dextroamphetamine, or the inactive prodrug lisdexamfetamine. Amphetamine increases monoamine and excitatory neurotransmission in the brain, with its most pronounced effects targeting the norepinephrine and dopamine neurotransmitter systems.
At therapeutic doses, amphetamine causes emotional and cognitive effects such as euphoria, change in desire for sex, increased wakefulness, and improved cognitive control. It induces physical effects such as improved reaction time, fatigue resistance, and increased muscle strength. Larger doses of amphetamine may impair cognitive function and induce rapid muscle breakdown. Addiction is a serious risk with heavy recreational amphetamine use, but is unlikely to occur from long-term medical use at therapeutic doses. Very high doses can result in psychosis (e.g., delusions and paranoia) which rarely occurs at therapeutic doses even during long-term use. Recreational doses are generally much larger than prescribed therapeutic doses and carry a far greater risk of serious side effects.
Amphetamine belongs to the phenethylamine class. It is also the parent compound of its own structural class, the substituted amphetamines, which includes prominent substances such as bupropion, cathinone, MDMA, and methamphetamine. As a member of the phenethylamine class, amphetamine is also chemically related to the naturally occurring trace amine neuromodulators, specifically phenethylamine and N-methylphenethylamine, both of which are produced within the human body. Phenethylamine is the parent compound of amphetamine, while N-methylphenethylamine is a positional isomer of amphetamine that differs only in the placement of the methyl group.
Uses
Medical
Amphetamine is used to treat attention deficit hyperactivity disorder (ADHD), narcolepsy (a sleep disorder), and obesity, and is sometimes prescribed for its past medical indications, particularly for depression and chronic pain.
Long-term amphetamine exposure at sufficiently high doses in some animal species is known to produce abnormal dopamine system development or nerve damage, but, in humans with ADHD, long-term use of pharmaceutical amphetamines at therapeutic doses appears to improve brain development and nerve growth. Reviews of magnetic resonance imaging (MRI) studies suggest that long-term treatment with amphetamine decreases abnormalities in brain structure and function found in subjects with ADHD, and improves function in several parts of the brain, such as the right caudate nucleus of the basal ganglia.
Reviews of clinical stimulant research have established the safety and effectiveness of long-term continuous amphetamine use for the treatment of ADHD. Randomized controlled trials of continuous stimulant therapy for the treatment of ADHD spanning 2 years have demonstrated treatment effectiveness and safety. Two reviews have indicated that long-term continuous stimulant therapy for ADHD is effective for reducing the core symptoms of ADHD (i.e., hyperactivity, inattention, and impulsivity), enhancing quality of life and academic achievement, and producing improvements in a large number of functional outcomes across 9 categories of outcomes related to academics, antisocial behavior, driving, non-medicinal drug use, obesity, occupation, self-esteem, service use (i.e., academic, occupational, health, financial, and legal services), and social function. One review highlighted a nine-month randomized controlled trial of amphetamine treatment for ADHD in children that found an average increase of 4.5 IQ points, continued increases in attention, and continued decreases in disruptive behaviors and hyperactivity. Another review indicated that, based upon the longest follow-up studies conducted to date, lifetime stimulant therapy that begins during childhood is continuously effective for controlling ADHD symptoms and reduces the risk of developing a substance use disorder as an adult.
Current models of ADHD suggest that it is associated with functional impairments in some of the brain's neurotransmitter systems; these functional impairments involve impaired dopamine neurotransmission in the mesocorticolimbic projection and norepinephrine neurotransmission in the noradrenergic projections from the locus coeruleus to the prefrontal cortex. Psychostimulants like methylphenidate and amphetamine are effective in treating ADHD because they increase neurotransmitter activity in these systems. Approximately 80% of those who use these stimulants see improvements in ADHD symptoms. Children with ADHD who use stimulant medications generally have better relationships with peers and family members, perform better in school, are less distractible and impulsive, and have longer attention spans. The Cochrane reviews on the treatment of ADHD in children, adolescents, and adults with pharmaceutical amphetamines stated that short-term studies have demonstrated that these drugs decrease the severity of symptoms, but they have higher discontinuation rates than non-stimulant medications due to their adverse side effects. A Cochrane review on the treatment of ADHD in children with tic disorders such as Tourette syndrome indicated that stimulants in general do not make tics worse, but high doses of dextroamphetamine could exacerbate tics in some individuals.
Enhancing performance
Cognitive performance
In 2015, a systematic review and a meta-analysis of high-quality clinical trials found that, when used at low (therapeutic) doses, amphetamine produces modest yet unambiguous improvements in cognition, including working memory, long-term episodic memory, inhibitory control, and some aspects of attention, in normal healthy adults; these cognition-enhancing effects of amphetamine are known to be partially mediated through the indirect activation of both dopamine receptor D1 and adrenoceptor α2 in the prefrontal cortex. A systematic review from 2014 found that low doses of amphetamine also improve memory consolidation, in turn leading to improved recall of information. Therapeutic doses of amphetamine also enhance cortical network efficiency, an effect which mediates improvements in working memory in all individuals. Amphetamine and other ADHD stimulants also improve task saliency (motivation to perform a task) and increase arousal (wakefulness), in turn promoting goal-directed behavior. Stimulants such as amphetamine can improve performance on difficult and boring tasks and are used by some students as a study and test-taking aid. Based upon studies of self-reported illicit stimulant use, a sizeable minority of college students use diverted ADHD stimulants, which are primarily used for enhancement of academic performance rather than as recreational drugs. However, high amphetamine doses that are above the therapeutic range can interfere with working memory and other aspects of cognitive control.
Physical performance
Amphetamine is used by some athletes for its psychological and athletic performance-enhancing effects, such as increased endurance and alertness; however, non-medical amphetamine use is prohibited at sporting events that are regulated by collegiate, national, and international anti-doping agencies. In healthy people at oral therapeutic doses, amphetamine has been shown to increase muscle strength, acceleration, athletic performance in anaerobic conditions, and endurance (i.e., it delays the onset of fatigue), while improving reaction time. Amphetamine improves endurance and reaction time primarily through reuptake inhibition and release of dopamine in the central nervous system. Amphetamine and other dopaminergic drugs also increase power output at fixed levels of perceived exertion by overriding a "safety switch", allowing the core temperature limit to increase in order to access a reserve capacity that is normally off-limits. At therapeutic doses, the adverse effects of amphetamine do not impede athletic performance; however, at much higher doses, amphetamine can induce effects that severely impair performance, such as rapid muscle breakdown and elevated body temperature.
Recreational
Amphetamine, specifically the more dopaminergic dextrorotatory enantiomer (dextroamphetamine), is also used recreationally as a euphoriant and aphrodisiac, and, like other amphetamines, is used as a club drug for its energetic and euphoric high. Dextroamphetamine (d-amphetamine) is considered to have a high potential for misuse in a recreational manner since individuals typically report feeling euphoric, more alert, and more energetic after taking the drug. A notable part of the 1960s mod subculture in the UK was recreational amphetamine use, which was used to fuel all-night dances at clubs like Manchester's Twisted Wheel. Newspaper reports described dancers emerging from clubs at 5 a.m. with dilated pupils. Mods used the drug for stimulation and alertness, which they viewed as different from the intoxication caused by alcohol and other drugs. Dr. Andrew Wilson argues that for a significant minority, "amphetamines symbolised the smart, on-the-ball, cool image" and that they sought "stimulation not intoxication [...] greater awareness, not escape" and "confidence and articulacy" rather than the "drunken rowdiness of previous generations." Dextroamphetamine's dopaminergic (rewarding) properties affect the mesocorticolimbic circuit, a group of neural structures responsible for incentive salience (i.e., "wanting"; desire or craving for a reward and motivation), positive reinforcement, and positively-valenced emotions, particularly ones involving pleasure. Large recreational doses of dextroamphetamine may produce symptoms of dextroamphetamine overdose. Recreational users sometimes open Dexedrine capsules and crush the contents in order to insufflate (snort) it or subsequently dissolve it in water and inject it. Immediate-release formulations have higher potential for abuse via insufflation (snorting) or intravenous injection due to a more favorable pharmacokinetic profile and easy crushability (especially tablets). Injection into the bloodstream can be dangerous because insoluble fillers within the tablets can block small blood vessels. Chronic overuse of dextroamphetamine can lead to severe drug dependence, resulting in withdrawal symptoms when drug use stops.
Contraindications
According to the International Programme on Chemical Safety (IPCS) and the United States Food and Drug Administration (USFDA), amphetamine is contraindicated in people with a history of drug abuse, cardiovascular disease, severe agitation, or severe anxiety. It is also contraindicated in individuals with advanced arteriosclerosis (hardening of the arteries), glaucoma (increased eye pressure), hyperthyroidism (excessive production of thyroid hormone), or moderate to severe hypertension. These agencies indicate that people who have experienced allergic reactions to other stimulants or who are taking monoamine oxidase inhibitors (MAOIs) should not take amphetamine, although safe concurrent use of amphetamine and monoamine oxidase inhibitors has been documented. These agencies also state that anyone with anorexia nervosa, bipolar disorder, depression, hypertension, liver or kidney problems, mania, psychosis, Raynaud's phenomenon, seizures, thyroid problems, tics, or Tourette syndrome should monitor their symptoms while taking amphetamine. Evidence from human studies indicates that therapeutic amphetamine use does not cause developmental abnormalities in the fetus or newborns (i.e., it is not a human teratogen), but amphetamine abuse does pose risks to the fetus. Amphetamine has also been shown to pass into breast milk, so the IPCS and the USFDA advise mothers to avoid breastfeeding when using it. Due to the potential for reversible growth impairments, the USFDA advises monitoring the height and weight of children and adolescents prescribed an amphetamine pharmaceutical.
Adverse effects
The adverse side effects of amphetamine are many and varied, and the amount of amphetamine used is the primary factor in determining the likelihood and severity of adverse effects. Amphetamine products such as Adderall, Dexedrine, and their generic equivalents are currently approved by the USFDA for long-term therapeutic use. Recreational use of amphetamine generally involves much larger doses, which have a greater risk of serious adverse drug effects than dosages used for therapeutic purposes.
Physical
Cardiovascular side effects can include hypertension or hypotension from a vasovagal response, Raynaud's phenomenon (reduced blood flow to the hands and feet), and tachycardia (increased heart rate). Sexual side effects in males may include erectile dysfunction, frequent erections, or prolonged erections. Gastrointestinal side effects may include abdominal pain, constipation, diarrhea, and nausea. Other potential physical side effects include appetite loss, blurred vision, dry mouth, excessive grinding of the teeth, nosebleed, profuse sweating, rhinitis medicamentosa (drug-induced nasal congestion), reduced seizure threshold, tics (a type of movement disorder), and weight loss. Dangerous physical side effects are rare at typical pharmaceutical doses.
Amphetamine stimulates the medullary respiratory centers, producing faster and deeper breaths. In a normal person at therapeutic doses, this effect is usually not noticeable, but when respiration is already compromised, it may be evident. Amphetamine also induces contraction in the urinary bladder sphincter, the muscle which controls urination, which can result in difficulty urinating. This effect can be useful in treating bed wetting and loss of bladder control. The effects of amphetamine on the gastrointestinal tract are unpredictable. If intestinal activity is high, amphetamine may reduce gastrointestinal motility (the rate at which content moves through the digestive system); however, amphetamine may increase motility when the smooth muscle of the tract is relaxed. Amphetamine also has a slight analgesic effect and can enhance the pain relieving effects of opioids.
USFDA-commissioned studies from 2011 indicate that in children, young adults, and adults there is no association between serious adverse cardiovascular events (sudden death, heart attack, and stroke) and the medical use of amphetamine or other ADHD stimulants. However, amphetamine pharmaceuticals are contraindicated in individuals with cardiovascular disease.
Psychological
At normal therapeutic doses, the most common psychological side effects of amphetamine include increased alertness, apprehension, concentration, initiative, self-confidence and sociability, mood swings (elated mood followed by mildly depressed mood), insomnia or wakefulness, and decreased sense of fatigue. Less common side effects include anxiety, change in libido, grandiosity, irritability, repetitive or obsessive behaviors, and restlessness; these effects depend on the user's personality and current mental state. Amphetamine psychosis (e.g., delusions and paranoia) can occur in heavy users. Although very rare, this psychosis can also occur at therapeutic doses during long-term therapy. According to the USFDA, "there is no systematic evidence" that stimulants produce aggressive behavior or hostility.
Amphetamine has also been shown to produce a conditioned place preference in humans taking therapeutic doses, meaning that individuals acquire a preference for spending time in places where they have previously used amphetamine.
Reinforcement disorders
Addiction
Addiction is a serious risk with heavy recreational amphetamine use, but is unlikely to occur from long-term medical use at therapeutic doses; in fact, lifetime stimulant therapy for ADHD that begins during childhood reduces the risk of developing substance use disorders as an adult. Pathological overactivation of the mesolimbic pathway, a dopamine pathway that connects the ventral tegmental area to the nucleus accumbens, plays a central role in amphetamine addiction. Individuals who frequently self-administer high doses of amphetamine have a high risk of developing an amphetamine addiction, since chronic use at high doses gradually increases the level of accumbal ΔFosB, a "molecular switch" and "master control protein" for addiction. Once nucleus accumbens ΔFosB is sufficiently overexpressed, it begins to increase the severity of addictive behavior (i.e., compulsive drug-seeking) with further increases in its expression. While there are currently no effective drugs for treating amphetamine addiction, regularly engaging in sustained aerobic exercise appears to reduce the risk of developing such an addiction. Exercise therapy improves clinical treatment outcomes and may be used as an adjunct therapy with behavioral therapies for addiction.
Biomolecular mechanisms
Chronic use of amphetamine at excessive doses causes alterations in gene expression in the mesocorticolimbic projection, which arise through transcriptional and epigenetic mechanisms. The most important transcription factors that produce these alterations are Delta FBJ murine osteosarcoma viral oncogene homolog B (ΔFosB), cAMP response element binding protein (CREB), and nuclear factor-kappa B (NF-κB). ΔFosB is the most significant biomolecular mechanism in addiction because ΔFosB overexpression (i.e., an abnormally high level of gene expression which produces a pronounced gene-related phenotype) in the D1-type medium spiny neurons in the nucleus accumbens is necessary and sufficient for many of the neural adaptations and regulates multiple behavioral effects (e.g., reward sensitization and escalating drug self-administration) involved in addiction. Once ΔFosB is sufficiently overexpressed, it induces an addictive state that becomes increasingly more severe with further increases in ΔFosB expression. It has been implicated in addictions to alcohol, cannabinoids, cocaine, methylphenidate, nicotine, opioids, phencyclidine, propofol, and substituted amphetamines, among others.
ΔJunD, a transcription factor, and G9a, a histone methyltransferase enzyme, both oppose the function of ΔFosB and inhibit increases in its expression. Sufficiently overexpressing ΔJunD in the nucleus accumbens with viral vectors can completely block many of the neural and behavioral alterations seen in chronic drug abuse (i.e., the alterations mediated by ΔFosB). Similarly, accumbal G9a hyperexpression results in markedly increased histone 3 lysine residue 9 dimethylation (H3K9me2) and blocks the induction of ΔFosB-mediated neural and behavioral plasticity by chronic drug use, which occurs via H3K9me2-mediated repression of transcription factors for ΔFosB and H3K9me2-mediated repression of various ΔFosB transcriptional targets (e.g., CDK5). ΔFosB also plays an important role in regulating behavioral responses to natural rewards, such as palatable food, sex, and exercise. Since both natural rewards and addictive drugs induce the expression of ΔFosB (i.e., they cause the brain to produce more of it), chronic acquisition of these rewards can result in a similar pathological state of addiction. Consequently, ΔFosB is the most significant factor involved in both amphetamine addiction and amphetamine-induced sexual addictions, which are compulsive sexual behaviors that result from excessive sexual activity and amphetamine use. These sexual addictions are associated with a dopamine dysregulation syndrome which occurs in some patients taking dopaminergic drugs.
The effects of amphetamine on gene regulation are both dose- and route-dependent. Most of the research on gene regulation and addiction is based upon animal studies with intravenous amphetamine administration at very high doses. The few studies that have used equivalent (weight-adjusted) human therapeutic doses and oral administration show that these changes, if they occur, are relatively minor. This suggests that medical use of amphetamine does not significantly affect gene regulation.
Pharmacological treatments
There is currently no effective pharmacotherapy for amphetamine addiction. Reviews from 2015 and 2016 indicated that TAAR1-selective agonists have significant therapeutic potential as a treatment for psychostimulant addictions; however, the only compounds which are known to function as TAAR1-selective agonists are experimental drugs. Amphetamine addiction is largely mediated through increased activation of dopamine receptors and NMDA receptors in the nucleus accumbens; magnesium ions inhibit NMDA receptors by blocking the receptor calcium channel. One review suggested that, based upon animal testing, pathological (addiction-inducing) psychostimulant use significantly reduces the level of intracellular magnesium throughout the brain. Supplemental magnesium treatment has been shown to reduce amphetamine self-administration (i.e., doses given to oneself) in humans, but it is not an effective monotherapy for amphetamine addiction.
A systematic review and meta-analysis from 2019 assessed the efficacy of 17 different pharmacotherapies used in randomized controlled trials (RCTs) for amphetamine and methamphetamine addiction; it found only low-strength evidence that methylphenidate might reduce amphetamine or methamphetamine self-administration. There was low- to moderate-strength evidence of no benefit for most of the other medications used in RCTs, which included antidepressants (bupropion, mirtazapine, sertraline), antipsychotics (aripiprazole), anticonvulsants (topiramate, baclofen, gabapentin), naltrexone, varenicline, citicoline, ondansetron, prometa, riluzole, atomoxetine, dextroamphetamine, and modafinil.
Behavioral treatments
A 2018 systematic review and network meta-analysis of 50 trials involving 12 different psychosocial interventions for amphetamine, methamphetamine, or cocaine addiction found that combination therapy with both contingency management and community reinforcement approach had the highest efficacy (i.e., abstinence rate) and acceptability (i.e., lowest dropout rate). Other treatment modalities examined in the analysis included monotherapy with contingency management or community reinforcement approach, cognitive behavioral therapy, 12-step programs, non-contingent reward-based therapies, psychodynamic therapy, and other combination therapies involving these.
Additionally, research on the neurobiological effects of physical exercise suggests that daily aerobic exercise, especially endurance exercise (e.g., marathon running), prevents the development of drug addiction and is an effective adjunct therapy (i.e., a supplemental treatment) for amphetamine addiction. Exercise leads to better treatment outcomes when used as an adjunct treatment, particularly for psychostimulant addictions. In particular, aerobic exercise decreases psychostimulant self-administration, reduces the reinstatement (i.e., relapse) of drug-seeking, and induces increased dopamine receptor D2 (DRD2) density in the striatum. This is the opposite of pathological stimulant use, which induces decreased striatal DRD2 density. One review noted that exercise may also prevent the development of a drug addiction by altering ΔFosB or c-Fos immunoreactivity in the striatum or other parts of the reward system.
Dependence and withdrawal
Drug tolerance develops rapidly in amphetamine abuse (i.e., recreational amphetamine use), so periods of extended abuse require increasingly larger doses of the drug in order to achieve the same effect.
According to a Cochrane review on withdrawal in individuals who compulsively use amphetamine and methamphetamine, "when chronic heavy users abruptly discontinue amphetamine use, many report a time-limited withdrawal syndrome that occurs within 24 hours of their last dose." This review noted that withdrawal symptoms in chronic, high-dose users are frequent, occurring in roughly 88% of cases, and persist for weeks with a marked "crash" phase occurring during the first week. Amphetamine withdrawal symptoms can include anxiety, drug craving, depressed mood, fatigue, increased appetite, increased movement or decreased movement, lack of motivation, sleeplessness or sleepiness, and lucid dreams. The review indicated that the severity of withdrawal symptoms is positively correlated with the age of the individual and the extent of their dependence. Mild withdrawal symptoms from the discontinuation of amphetamine treatment at therapeutic doses can be avoided by tapering the dose.
Overdose
An amphetamine overdose can lead to many different symptoms, but is rarely fatal with appropriate care. The severity of overdose symptoms increases with dosage and decreases with drug tolerance to amphetamine. Tolerant individuals have been known to take as much as 5 grams of amphetamine in a day, which is roughly 100 times the maximum daily therapeutic dose. Moderate and extremely large overdoses produce a spectrum of physical and psychological symptoms; fatal amphetamine poisoning usually also involves convulsions and coma. In 2013, overdose on amphetamine, methamphetamine, and other compounds implicated in an "amphetamine use disorder" resulted in an estimated 3,788 deaths worldwide.
Toxicity
In rodents and primates, sufficiently high doses of amphetamine cause dopaminergic neurotoxicity, or damage to dopamine neurons, which is characterized by dopamine terminal degeneration and reduced transporter and receptor function. There is no evidence that amphetamine is directly neurotoxic in humans. However, large doses of amphetamine may indirectly cause dopaminergic neurotoxicity as a result of hyperpyrexia, the excessive formation of reactive oxygen species, and increased autoxidation of dopamine. Animal models of neurotoxicity from high-dose amphetamine exposure indicate that the occurrence of hyperpyrexia (i.e., core body temperature ≥ 40 °C) is necessary for the development of amphetamine-induced neurotoxicity. Prolonged elevations of brain temperature above 40 °C likely promote the development of amphetamine-induced neurotoxicity in laboratory animals by facilitating the production of reactive oxygen species, disrupting cellular protein function, and transiently increasing blood–brain barrier permeability.
Psychosis
An amphetamine overdose can result in a stimulant psychosis that may involve a variety of symptoms, such as delusions and paranoia. A Cochrane review on treatment for amphetamine, dextroamphetamine, and methamphetamine psychosis states that a minority of users fail to recover completely. According to the same review, there is at least one trial that shows antipsychotic medications effectively resolve the symptoms of acute amphetamine psychosis. Psychosis rarely arises from therapeutic use.
Drug interactions
Many types of substances are known to interact with amphetamine, resulting in altered drug action or metabolism of amphetamine, the interacting substance, or both. Inhibitors of enzymes that metabolize amphetamine (e.g., CYP2D6 and FMO3) will prolong its elimination half-life, meaning that its effects will last longer. Amphetamine also interacts with monoamine oxidase inhibitors (MAOIs), particularly monoamine oxidase A inhibitors, since both MAOIs and amphetamine increase plasma catecholamines (i.e., norepinephrine and dopamine); therefore, concurrent use of both is dangerous. Amphetamine modulates the activity of most psychoactive drugs. In particular, amphetamine may decrease the effects of sedatives and depressants and increase the effects of stimulants and antidepressants. Amphetamine may also decrease the effects of antihypertensives and antipsychotics due to its effects on blood pressure and dopamine, respectively. Zinc supplementation may reduce the minimum effective dose of amphetamine when it is used for the treatment of ADHD.
In general, there is no significant interaction when consuming amphetamine with food, but the pH of gastrointestinal content and urine affects the absorption and excretion of amphetamine, respectively. Acidic substances reduce the absorption of amphetamine and increase urinary excretion, and alkaline substances do the opposite. Due to the effect pH has on absorption, amphetamine also interacts with gastric acid reducers such as proton pump inhibitors and H2 antihistamines, which increase gastrointestinal pH (i.e., make it less acidic).
Pharmacology
Pharmacodynamics
Amphetamine exerts its behavioral effects by altering the use of monoamines as neuronal signals in the brain, primarily in catecholamine neurons in the reward and executive function pathways of the brain. The concentrations of the main neurotransmitters involved in reward circuitry and executive functioning, dopamine and norepinephrine, increase dramatically in a dose-dependent manner by amphetamine because of its effects on monoamine transporters. The reinforcing and motivational salience-promoting effects of amphetamine are due mostly to enhanced dopaminergic activity in the mesolimbic pathway. The euphoric and locomotor-stimulating effects of amphetamine are dependent upon the magnitude and speed by which it increases synaptic dopamine and norepinephrine concentrations in the striatum.
Amphetamine has been identified as a potent full agonist of trace amine-associated receptor 1 (TAAR1), a G protein-coupled receptor (GPCR) discovered in 2001, which is important for regulation of brain monoamines. Activation of TAAR1 increases cAMP production via adenylyl cyclase activation and inhibits monoamine transporter function. Monoamine autoreceptors (e.g., D2 short, presynaptic α2, and presynaptic 5-HT1A) have the opposite effect of TAAR1, and together these receptors provide a regulatory system for monoamines. Notably, amphetamine and trace amines possess high binding affinities for TAAR1, but not for monoamine autoreceptors. Imaging studies indicate that monoamine reuptake inhibition by amphetamine and trace amines is site specific and depends upon the presence of TAAR1 in the associated monoamine neurons.
In addition to the neuronal monoamine transporters, amphetamine also inhibits both vesicular monoamine transporters, VMAT1 and VMAT2, as well as SLC1A1, SLC22A3, and SLC22A5. SLC1A1 is excitatory amino acid transporter 3 (EAAT3), a glutamate transporter located in neurons; SLC22A3 is an extraneuronal monoamine transporter that is present in astrocytes; and SLC22A5 is a high-affinity carnitine transporter. Amphetamine is known to strongly induce cocaine- and amphetamine-regulated transcript (CART) gene expression, a neuropeptide involved in feeding behavior, stress, and reward, which induces observable increases in neuronal development and survival in vitro. The CART receptor has yet to be identified, but there is significant evidence that CART binds to a unique receptor of its own. Amphetamine also inhibits monoamine oxidases at very high doses, resulting in less monoamine and trace amine metabolism and consequently higher concentrations of synaptic monoamines. In humans, the only post-synaptic receptor at which amphetamine is known to bind is the 5-HT1A receptor, where it acts as an agonist with low micromolar affinity.
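To give a sense of scale for "low micromolar affinity", fractional receptor occupancy can be estimated with the Hill–Langmuir relationship, occupancy = [L] / ([L] + Kd). The short Python sketch below is illustrative only: the 5 μM dissociation constant and the 0.1 μM ligand concentration are assumed values chosen for the example, not figures from this article.

# Fractional receptor occupancy from the Hill-Langmuir equation.
def occupancy(ligand_molar: float, kd_molar: float) -> float:
    return ligand_molar / (ligand_molar + kd_molar)

# Assumed values: Kd = 5 uM ("low micromolar" affinity) and a
# ligand concentration of 0.1 uM.
print(f"{occupancy(0.1e-6, 5e-6):.1%} of receptors occupied")  # ~2.0%

On these assumed numbers only a few percent of receptors would be occupied, which illustrates why a micromolar-affinity interaction becomes relevant mainly at high concentrations.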
The full profile of amphetamine's short-term drug effects in humans is mostly derived through increased cellular communication or neurotransmission of dopamine, serotonin, norepinephrine, epinephrine, histamine, CART peptides, endogenous opioids, adrenocorticotropic hormone, corticosteroids, and glutamate, which it affects through interactions with TAAR1, the plasma-membrane monoamine transporters (DAT, NET, and SERT), the vesicular monoamine transporters, EAAT3, the 5-HT1A receptor, and possibly other biological targets. Amphetamine also activates seven human carbonic anhydrase enzymes, several of which are expressed in the human brain.
Dextroamphetamine is a more potent agonist of TAAR1 than levoamphetamine. Consequently, dextroamphetamine produces roughly three to four times more CNS stimulation than levoamphetamine, but levoamphetamine has slightly stronger cardiovascular and peripheral effects.
Dopamine
In certain brain regions, amphetamine increases the concentration of dopamine in the synaptic cleft. Amphetamine can enter the presynaptic neuron either through DAT or by diffusing across the neuronal membrane directly. As a consequence of DAT uptake, amphetamine produces competitive reuptake inhibition at the transporter. Upon entering the presynaptic neuron, amphetamine activates TAAR1 which, through protein kinase A (PKA) and protein kinase C (PKC) signaling, causes DAT phosphorylation. Phosphorylation by either protein kinase can result in DAT internalization (non-competitive reuptake inhibition), but PKC-mediated phosphorylation alone induces the reversal of dopamine transport through DAT (i.e., dopamine efflux). Amphetamine is also known to increase intracellular calcium, an effect which is associated with DAT phosphorylation through an unidentified Ca2+/calmodulin-dependent protein kinase (CAMK)-dependent pathway, in turn producing dopamine efflux. Through direct activation of G protein-coupled inwardly-rectifying potassium channels, TAAR1 reduces the firing rate of dopamine neurons, preventing a hyper-dopaminergic state.
Amphetamine is also a substrate for the presynaptic vesicular monoamine transporter, VMAT2. Following amphetamine uptake at VMAT2, amphetamine induces the collapse of the vesicular pH gradient, which results in the release of dopamine molecules from synaptic vesicles into the cytosol via dopamine efflux through VMAT2. Subsequently, the cytosolic dopamine molecules are released from the presynaptic neuron into the synaptic cleft via reverse transport at DAT.
Norepinephrine
Similar to dopamine, amphetamine dose-dependently increases the level of synaptic norepinephrine, the direct precursor of epinephrine. Based upon neuronal TAAR1 and NET expression, amphetamine is thought to affect norepinephrine analogously to dopamine. In other words, amphetamine induces TAAR1-mediated efflux and non-competitive reuptake inhibition at phosphorylated NET, competitive NET reuptake inhibition, and norepinephrine release from VMAT2.
Serotonin
Amphetamine exerts analogous, yet less pronounced, effects on serotonin as on dopamine and norepinephrine. Amphetamine affects serotonin via SERT and, like norepinephrine, is thought to phosphorylate SERT via TAAR1 signaling. Like dopamine, amphetamine has low, micromolar affinity at the human 5-HT1A receptor.
Other neurotransmitters, peptides, hormones, and enzymes
Acute amphetamine administration in humans increases endogenous opioid release in several brain structures in the reward system. Extracellular levels of glutamate, the primary excitatory neurotransmitter in the brain, have been shown to increase in the striatum following exposure to amphetamine. This increase in extracellular glutamate presumably occurs via the amphetamine-induced internalization of EAAT3, a glutamate reuptake transporter, in dopamine neurons. Amphetamine also induces the selective release of histamine from mast cells and efflux from histaminergic neurons through TAAR1 activation. Acute amphetamine administration can also increase adrenocorticotropic hormone and corticosteroid levels in blood plasma by stimulating the hypothalamic–pituitary–adrenal axis.
In December 2017, the first study assessing the interaction between amphetamine and human carbonic anhydrase enzymes was published; of the eleven carbonic anhydrase enzymes it examined, it found that amphetamine potently activates seven, four of which are highly expressed in the human brain, with low nanomolar through low micromolar activating effects. Based upon preclinical research, cerebral carbonic anhydrase activation has cognition-enhancing effects; but, based upon the clinical use of carbonic anhydrase inhibitors, carbonic anhydrase activation in other tissues may be associated with adverse effects, such as ocular activation exacerbating glaucoma.
Pharmacokinetics
The oral bioavailability of amphetamine varies with gastrointestinal pH; it is well absorbed from the gut, and bioavailability is typically over 75% for dextroamphetamine. Amphetamine is a weak base with a pKa of 9.9; consequently, when the pH is basic, more of the drug is in its lipid-soluble free base form, and more is absorbed through the lipid-rich cell membranes of the gut epithelium. Conversely, an acidic pH means the drug is predominantly in a water-soluble cationic (salt) form, and less is absorbed. A minority of the amphetamine circulating in the bloodstream is bound to plasma proteins. Following absorption, amphetamine readily distributes into most tissues in the body, with high concentrations occurring in cerebrospinal fluid and brain tissue.
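Because amphetamine is a weak base, the fraction present as the absorbable free base at a given pH follows directly from the Henderson–Hasselbalch relationship. The Python sketch below uses the pKa of 9.9 stated above; the sample pH values are illustrative assumptions, not figures from this article.

# Fraction of a weak base present as the un-ionized (lipid-soluble) free base,
# from the Henderson-Hasselbalch relationship:
#   fraction_free_base = 1 / (1 + 10**(pKa - pH))
PKA_AMPHETAMINE = 9.9  # pKa stated in the text above

def fraction_free_base(ph: float, pka: float = PKA_AMPHETAMINE) -> float:
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

# Illustrative (assumed) pH values for different compartments:
for label, ph in [("gastric fluid", 2.0), ("intestine", 6.5),
                  ("blood plasma", 7.4), ("alkaline urine", 8.0)]:
    print(f"{label}: pH {ph} -> fraction free base = {fraction_free_base(ph):.2e}")

The fraction of free base rises steeply with pH, which is the mechanism behind both the absorption effects described above and the pH-dependent urinary excretion described below.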
The half-lives of amphetamine's enantiomers differ and vary with urine pH. At normal urine pH, the half-life of dextroamphetamine is somewhat shorter than that of levoamphetamine. Highly acidic urine will reduce the enantiomer half-lives to 7 hours; highly alkaline urine will increase the half-lives up to 34 hours. The immediate-release and extended-release variants of salts of both isomers reach peak plasma concentrations at 3 hours and 7 hours post-dose, respectively. Amphetamine is eliminated via the kidneys, with a substantial fraction of the drug excreted unchanged at normal urinary pH. When the urinary pH is basic, amphetamine is in its free base form, so less is excreted. When urine pH is abnormal, the urinary recovery of amphetamine may range from a low of 1% to a high of 75%, depending mostly upon whether urine is too basic or acidic, respectively. Following oral administration, amphetamine appears in urine within 3 hours. Roughly 90% of ingested amphetamine is eliminated 3 days after the last oral dose.
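The practical effect of these pH-dependent half-lives can be checked with simple first-order elimination arithmetic. In the sketch below, the 7-hour and 34-hour half-lives come from the paragraph above, and the 72-hour window is chosen to match the three-day elimination statement; treating whole-body elimination as a single first-order process is a simplifying assumption.

# First-order elimination: fraction of drug remaining after time t,
# given an elimination half-life t_half.
def fraction_remaining(t_hours: float, t_half_hours: float) -> float:
    return 0.5 ** (t_hours / t_half_hours)

for t_half in (7.0, 34.0):  # acidic- vs. alkaline-urine half-lives from the text
    eliminated = 1.0 - fraction_remaining(72.0, t_half)  # 3 days post-dose
    print(f"t1/2 = {t_half} h -> {eliminated:.1%} eliminated after 72 h")

With a 7-hour half-life essentially the entire dose is gone within three days, whereas a 34-hour half-life leaves roughly a quarter of it remaining; the "roughly 90%" figure quoted above falls between these extremes.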
CYP2D6, dopamine β-hydroxylase (DBH), flavin-containing monooxygenase 3 (FMO3), butyrate-CoA ligase (XM-ligase), and glycine N-acyltransferase (GLYAT) are the enzymes known to metabolize amphetamine or its metabolites in humans. Amphetamine has a variety of excreted metabolic products, including 4-hydroxyamphetamine, 4-hydroxynorephedrine, 4-hydroxyphenylacetone, benzoic acid, hippuric acid, norephedrine, and phenylacetone. Among these metabolites, the active sympathomimetics are 4-hydroxyamphetamine, 4-hydroxynorephedrine, and norephedrine. The main metabolic pathways involve aromatic para-hydroxylation, aliphatic alpha- and beta-hydroxylation, N-oxidation, N-dealkylation, and deamination.
Pharmacomicrobiomics
The human metagenome (i.e., the genetic composition of an individual and all microorganisms that reside on or within the individual's body) varies considerably between individuals. Since the total number of microbial and viral cells in the human body (over 100 trillion) greatly outnumbers human cells (tens of trillions), there is considerable potential for interactions between drugs and an individual's microbiome, including: drugs altering the composition of the human microbiome, drug metabolism by microbial enzymes modifying the drug's pharmacokinetic profile, and microbial drug metabolism affecting a drug's clinical efficacy and toxicity profile. The field that studies these interactions is known as pharmacomicrobiomics.
Similar to most biomolecules and other orally administered xenobiotics (i.e., drugs), amphetamine is predicted to undergo promiscuous metabolism by human gastrointestinal microbiota (primarily bacteria) prior to absorption into the blood stream. The first amphetamine-metabolizing microbial enzyme, tyramine oxidase from a strain of E. coli commonly found in the human gut, was identified in 2019. This enzyme was found to metabolize amphetamine, tyramine, and phenethylamine with roughly the same binding affinity for all three compounds.
Related endogenous compounds
Amphetamine has a very similar structure and function to the endogenous trace amines, which are naturally occurring neuromodulator molecules produced in the human body and brain. Among this group, the most closely related compounds are phenethylamine, the parent compound of amphetamine, and N-methylphenethylamine, an isomer of amphetamine (i.e., it has an identical molecular formula). In humans, phenethylamine is produced directly from L-phenylalanine by the aromatic amino acid decarboxylase (AADC) enzyme, which converts L-DOPA into dopamine as well. In turn, N-methylphenethylamine is metabolized from phenethylamine by phenylethanolamine N-methyltransferase, the same enzyme that metabolizes norepinephrine into epinephrine. Like amphetamine, both phenethylamine and N-methylphenethylamine regulate monoamine neurotransmission via TAAR1; unlike amphetamine, both of these substances are broken down by monoamine oxidase B, and therefore have a shorter half-life than amphetamine.
Chemistry
Amphetamine is a methyl homolog of the mammalian neurotransmitter phenethylamine with the chemical formula C9H13N. The carbon atom adjacent to the primary amine is a stereogenic center, and amphetamine is composed of a racemic 1:1 mixture of two enantiomers. This racemic mixture can be separated into its optical isomers: levoamphetamine and dextroamphetamine. At room temperature, the pure free base of amphetamine is a mobile, colorless, and volatile liquid with a characteristically strong amine odor and an acrid, burning taste. Frequently prepared solid salts of amphetamine include amphetamine adipate, aspartate, hydrochloride, phosphate, saccharate, sulfate, and tannate. Dextroamphetamine sulfate is the most common enantiopure salt. Amphetamine is also the parent compound of its own structural class, which includes a number of psychoactive derivatives. In organic chemistry, amphetamine is an excellent chiral ligand for certain stereoselective syntheses.
Substituted derivatives
The substituted derivatives of amphetamine, or "substituted amphetamines", are a broad range of chemicals that contain amphetamine as a "backbone"; specifically, this chemical class includes derivative compounds that are formed by replacing one or more hydrogen atoms in the amphetamine core structure with substituents. The class includes amphetamine itself, stimulants like methamphetamine, serotonergic empathogens like MDMA, and decongestants like ephedrine, among other subgroups.
Synthesis
Since the first preparation was reported in 1887, numerous synthetic routes to amphetamine have been developed. The most common route of both legal and illicit amphetamine synthesis employs a non-metal reduction known as the Leuckart reaction (method 1). In the first step, a reaction between phenylacetone and formamide, either using additional formic acid or formamide itself as a reducing agent, yields N-formylamphetamine. This intermediate is then hydrolyzed using hydrochloric acid, and subsequently basified, extracted with organic solvent, concentrated, and distilled to yield the free base. The free base is then dissolved in an organic solvent, sulfuric acid is added, and amphetamine precipitates out as the sulfate salt.
A number of chiral resolutions have been developed to separate the two enantiomers of amphetamine. For example, racemic amphetamine can be treated with a chiral resolving acid, such as tartaric acid, to form a diastereoisomeric salt which is fractionally crystallized to yield dextroamphetamine. Chiral resolution remains the most economical method for obtaining optically pure amphetamine on a large scale. In addition, several enantioselective syntheses of amphetamine have been developed. In one example, optically pure α-methylbenzylamine is condensed with phenylacetone to yield a chiral Schiff base. In the key step, this intermediate is reduced by catalytic hydrogenation with a transfer of chirality to the carbon atom alpha to the amino group. Cleavage of the benzylic amine bond by hydrogenation yields optically pure dextroamphetamine.
A large number of alternative synthetic routes to amphetamine have been developed based on classic organic reactions. One example is the Friedel–Crafts alkylation of benzene by allyl chloride to yield beta-chloropropylbenzene, which is then reacted with ammonia to produce racemic amphetamine (method 2). Another example employs the Ritter reaction (method 3). In this route, allylbenzene is reacted with acetonitrile in sulfuric acid to yield an organosulfate which in turn is treated with sodium hydroxide to give amphetamine via an acetamide intermediate. A third route starts with a CH-acidic ester which, through a double alkylation with methyl iodide followed by benzyl chloride, can be converted into 2-methyl-3-phenylpropanoic acid. This synthetic intermediate can be transformed into amphetamine using either a Hofmann or Curtius rearrangement (method 4).
A significant number of amphetamine syntheses feature a reduction of a nitro, imine, oxime, or other nitrogen-containing functional group. In one such example, a Knoevenagel condensation of benzaldehyde with nitroethane yields phenyl-2-nitropropene. The double bond and nitro group of this intermediate are reduced using either catalytic hydrogenation or treatment with lithium aluminium hydride (method 5). Another method is the reaction of phenylacetone with ammonia, producing an imine intermediate that is reduced to the primary amine using hydrogen over a palladium catalyst or lithium aluminium hydride (method 6).
Detection in body fluids
Amphetamine is frequently measured in urine or blood as part of a drug test for sports, employment, poisoning diagnostics, and forensics. Techniques such as immunoassay, which is the most common form of amphetamine test, may cross-react with a number of sympathomimetic drugs. Chromatographic methods specific for amphetamine are employed to prevent false positive results. Chiral separation techniques may be employed to help distinguish the source of the drug, whether prescription amphetamine, prescription drugs that metabolize into amphetamine or methamphetamine (e.g., selegiline), over-the-counter drug products that contain levomethamphetamine, or illicitly obtained substituted amphetamines. Several prescription drugs produce amphetamine as a metabolite, including benzphetamine, clobenzorex, famprofazone, fenproporex, lisdexamfetamine, mesocarb, methamphetamine, prenylamine, and selegiline, among others. These compounds may produce positive results for amphetamine on drug tests. Amphetamine is generally only detectable by a standard drug test for approximately 24 hours, although a high dose may be detectable for days.
For the assays, a study noted that an enzyme multiplied immunoassay technique (EMIT) assay for amphetamine and methamphetamine may produce more false positives than liquid chromatography–tandem mass spectrometry. Gas chromatography–mass spectrometry (GC–MS) of amphetamine and methamphetamine with a suitable acyl chloride derivatizing agent allows for the detection of methamphetamine in urine. GC–MS of amphetamine and methamphetamine with the chiral derivatizing agent Mosher's acid chloride allows for the detection of both dextroamphetamine and dextromethamphetamine in urine. Hence, the latter method may be used on samples that test positive using other methods to help distinguish between the various sources of the drug.
History, society, and culture
Amphetamine was first synthesized in 1887 in Germany by Romanian chemist Lazăr Edeleanu who named it phenylisopropylamine; its stimulant effects remained unknown until 1927, when it was independently resynthesized by Gordon Alles and reported to have sympathomimetic properties. Amphetamine had no medical use until late 1933, when Smith, Kline and French began selling it as an inhaler under the brand name Benzedrine as a decongestant. Benzedrine sulfate was introduced 3 years later and was used to treat a wide variety of medical conditions, including narcolepsy, obesity, low blood pressure, low libido, and chronic pain, among others. During World War II, amphetamine and methamphetamine were used extensively by both the Allied and Axis forces for their stimulant and performance-enhancing effects. As the addictive properties of the drug became known, governments began to place strict controls on the sale of amphetamine. For example, during the early 1970s in the United States, amphetamine became a schedule II controlled substance under the Controlled Substances Act. In spite of strict government controls, amphetamine has been used legally or illicitly by people from a variety of backgrounds, including authors, musicians, mathematicians, and athletes.
Amphetamine is still illegally synthesized today in clandestine labs and sold on the black market, primarily in European countries. Among European Union (EU) member states, 11.9 million adults have used amphetamine or methamphetamine at least once in their lives, and 1.7 million have used either in the last year. During 2012, approximately 5.9 metric tons of illicit amphetamine were seized within EU member states; street prices per gram of illicit amphetamine within the EU varied widely during the same period. Outside Europe, the illicit market for amphetamine is much smaller than the market for methamphetamine and MDMA.
Legal status
As a result of the United Nations 1971 Convention on Psychotropic Substances, amphetamine became a schedule II controlled substance, as defined in the treaty, in all 183 state parties. Consequently, it is heavily regulated in most countries. Some countries, such as South Korea and Japan, have banned substituted amphetamines even for medical use. In other nations, such as Canada (schedule I drug), the Netherlands (List I drug), the United States (schedule II drug), Australia (schedule 8), Thailand (category 1 narcotic), and United Kingdom (class B drug), amphetamine is in a restrictive national drug schedule that allows for its use as a medical treatment.
Pharmaceutical products
Several currently marketed amphetamine formulations contain both enantiomers, including those marketed under the brand names Adderall, Adderall XR, Mydayis, Adzenys ER, Adzenys XR-ODT, Dyanavel XR, Evekeo, and Evekeo ODT. Of those, Evekeo (including Evekeo ODT) is the only product containing only racemic amphetamine (as amphetamine sulfate), and is therefore the only one whose active moiety can be accurately referred to simply as "amphetamine". Dextroamphetamine, marketed under the brand names Dexedrine and Zenzedi, is the only enantiopure amphetamine product currently available. A prodrug form of dextroamphetamine, lisdexamfetamine, is also available and is marketed under the brand name Vyvanse. As it is a prodrug, lisdexamfetamine is structurally different from dextroamphetamine, and is inactive until it metabolizes into dextroamphetamine. The free base of racemic amphetamine was previously available as Benzedrine, Psychedrine, and Sympatedrine. Levoamphetamine was previously available as Cydril. Many current amphetamine pharmaceuticals are salts due to the comparatively high volatility of the free base. However, oral suspension and orally disintegrating tablet (ODT) dosage forms composed of the free base were introduced in 2015 and 2016, respectively.
External links
Comparative Toxicogenomics Database entry: Amphetamine
Comparative Toxicogenomics Database entry: CARTPT
|
https://en.wikipedia.org/wiki/Artillery
|
Artillery is a class of heavy military ranged weapons that launch munitions far beyond the range and power of infantry firearms. Early artillery development focused on the ability to breach defensive walls and fortifications during sieges, and led to heavy, fairly immobile siege engines. As technology improved, lighter, more mobile field artillery cannons developed for battlefield use. This development continues today; modern self-propelled artillery vehicles are highly mobile weapons of great versatility generally providing the largest share of an army's total firepower.
Originally, the word "artillery" referred to any group of soldiers primarily armed with some form of manufactured weapon or armour. Since the introduction of gunpowder and cannon, "artillery" has largely meant cannon, and in contemporary usage, usually refers to shell-firing guns, howitzers, and mortars (collectively called barrel artillery, cannon artillery or gun artillery) and rocket artillery. In common speech, the word "artillery" is often used to refer to individual devices, along with their accessories and fittings, although these assemblages are more properly called "equipment". However, there is no generally recognized generic term for a gun, howitzer, mortar, and so forth: the United States uses "artillery piece", but most English-speaking armies use "gun" and "mortar". The projectiles fired are typically either "shot" (if solid) or "shell" (if not solid). Historically, variants of solid shot including canister, chain shot and grapeshot were also used. "Shell" is a widely used generic term for a projectile, which is a component of munitions.
By association, artillery may also refer to the arm of service that customarily operates such engines. In some armies, the artillery arm has operated field, coastal, anti-aircraft, and anti-tank artillery; in others these have been separate arms, and with some nations coastal has been a naval or marine responsibility.
In the 20th century, target acquisition devices (such as radar) and techniques (such as sound ranging and flash spotting) emerged, primarily for artillery. These are usually utilized by one or more of the artillery arms. The widespread adoption of indirect fire in the early 20th century introduced the need for specialist data for field artillery, notably survey and meteorological, and in some armies, provision of these are the responsibility of the artillery arm. The majority of combat deaths in the Napoleonic Wars, World War I, and World War II were caused by artillery. In 1944, Joseph Stalin said in a speech that artillery was "the god of war".
Artillery piece
Although not called by that name, siege engines performing the role recognizable as artillery have been employed in warfare since antiquity. The first known catapult was developed in Syracuse in 399 BC. Until the introduction of gunpowder into western warfare, artillery was dependent upon mechanical energy, which not only severely limited the kinetic energy of the projectiles but also required the construction of very large engines to accumulate sufficient energy. A 1st-century BC Roman catapult launching stones achieved a kinetic energy of 16 kilojoules, compared to a mid-19th-century 12-pounder gun, which fired a round with a kinetic energy of 240 kilojoules, or a 20th-century US battleship that fired a projectile from its main battery with an energy level surpassing 350 megajoules.
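These energy figures follow from the kinetic energy formula KE = ½mv². The Python sketch below back-solves the implied muzzle velocities; the projectile masses are assumptions chosen for illustration and are not given in this article.

# Kinetic energy of a projectile: KE = 0.5 * m * v**2, so v = sqrt(2 * KE / m).
def velocity_for_energy(mass_kg: float, energy_joules: float) -> float:
    return (2.0 * energy_joules / mass_kg) ** 0.5

# Assumed masses: a ~6.5 kg catapult stone and a ~5.4 kg (12 lb) solid shot.
print(velocity_for_energy(6.5, 16_000))   # ~70 m/s for the Roman catapult
print(velocity_for_energy(5.4, 240_000))  # ~298 m/s for the 12-pounder gun

On these assumptions, a fifteen-fold increase in projectile energy corresponds to roughly a four-fold increase in velocity for projectiles of similar mass.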
From the Middle Ages through most of the modern era, artillery pieces on land were moved by horse-drawn gun carriages. In the contemporary era, artillery pieces and their crew relied on wheeled or tracked vehicles as transportation. These land versions of artillery were dwarfed by railway guns; the largest of these large-calibre guns ever conceived – Project Babylon of the Supergun affair – was theoretically capable of putting a satellite into orbit. Artillery used by naval forces has also changed significantly, with missiles generally replacing guns in surface warfare.
Over the course of military history, projectiles were manufactured from a wide variety of materials, into a wide variety of shapes, using many different methods in which to target structural/defensive works and inflict enemy casualties. The engineering applications for ordnance delivery have likewise changed significantly over time, encompassing some of the most complex and advanced technologies in use today.
In some armies, the weapon of artillery is the projectile, not the equipment that fires it. The process of delivering fire onto the target is called gunnery. The actions involved in operating an artillery piece are collectively called "serving the gun" by the "detachment" or gun crew, constituting either direct or indirect artillery fire. The manner in which gunnery crews (or formations) are employed is called artillery support. At different periods in history, this may refer to weapons designed to be fired from ground-, sea-, and even air-based weapons platforms.
Crew
Some armed forces use the term "gunners" for the soldiers and sailors with the primary function of using artillery.
The gunners and their guns are usually grouped in teams called either "crews" or "detachments". Several such crews and teams with other functions are combined into a unit of artillery, usually called a battery, although sometimes called a company. In gun detachments, each role is numbered, starting with "1" the Detachment Commander, and the highest number being the Coverer, the second-in-command. "Gunner" is also the lowest rank, and junior non-commissioned officers are "Bombardiers" in some artillery arms.
Batteries are roughly equivalent to a company in the infantry, and are combined into larger military organizations for administrative and operational purposes, either battalions or regiments, depending on the army. These may be grouped into brigades; the Russian army also groups some brigades into artillery divisions, and the People's Liberation Army has artillery corps.
The term "artillery" also designates a combat arm of most military services when used organizationally to describe units and formations of the national armed forces that operate the weapons.
Tactics
During military operations, field artillery has the role of providing support to other arms in combat or of attacking targets, particularly in-depth. Broadly, these effects fall into two categories, aiming either to suppress or neutralize the enemy, or to cause casualties, damage, and destruction. This is mostly achieved by delivering high-explosive munitions to suppress, or inflict casualties on the enemy from casing fragments and other debris and from blast, or by destroying enemy positions, equipment, and vehicles. Non-lethal munitions, notably smoke, can also suppress or neutralize the enemy by obscuring their view.
Fire may be directed by an artillery observer or another observer, including crewed and uncrewed aircraft, or called onto map coordinates.
Military doctrine has had a significant influence on the core engineering design considerations of artillery ordnance throughout its history, in seeking to achieve a balance between the delivered volume of fire and ordnance mobility. However, during the modern period, the consideration of protecting the gunners also arose, owing to the late-19th-century introduction of a new generation of infantry weapons using the conoidal bullet, better known as the Minié ball, with a range almost as long as that of field artillery.
The gunners' increasing proximity to and participation in direct combat against other combat arms and attacks by aircraft made the introduction of a gun shield necessary. The problems of how to employ a fixed or horse-towed gun in mobile warfare necessitated the development of new methods of transporting the artillery into combat. Two distinct forms of artillery were developed: the towed gun, used primarily to attack or defend a fixed-line; and the self-propelled gun, intended to accompany a mobile force and to provide continuous fire support and/or suppression. These influences have guided the development of artillery ordnance, systems, organizations, and operations until the present, with artillery systems capable of providing support at ranges from as little as 100 m to the intercontinental ranges of ballistic missiles. The only combat in which artillery is unable to take part is close-quarters combat, with the possible exception of artillery reconnaissance teams.
Etymology
The word as used in the current context originated in the Middle Ages. One suggestion is that it comes from French atelier, meaning the place where manual work is done.
Another suggestion is that it originates from the 13th century and the Old French artillier, designating the craftsmen and manufacturers of all materials and equipment of war (spears, swords, armor, war machines); for the next 250 years, the sense of the word "artillery" covered all forms of military weapons. Hence the naming of the Honourable Artillery Company, which was essentially an infantry unit until the 19th century.
Another suggestion is that it comes from the Italian arte de tirare (art of shooting), coined by one of the first theorists on the use of artillery, Niccolò Tartaglia.
History
Mechanical systems used for throwing ammunition in ancient warfare, also known as "engines of war", like the catapult, onager, trebuchet, and ballista, are also referred to by military historians as artillery.
Medieval
During medieval times, more types of artillery were developed, most notably the trebuchet. Traction trebuchets, which used manpower to launch projectiles, had been used in ancient China since the 4th century BC as anti-personnel weapons. In the 12th century, the counterweight trebuchet was introduced, with the earliest mention of it being in 1187.
Invention of gunpowder
Early Chinese artillery had vase-like shapes. This includes the "long range awe inspiring" cannon dated to 1350 and found in the 14th-century Ming Dynasty treatise Huolongjing. With the development of better metallurgy techniques, later cannons abandoned the vase shape of early Chinese artillery. This change can be seen in the bronze "thousand ball thunder cannon", an early example of field artillery. These small, crude weapons diffused into the Middle East (the madfaa) and reached Europe in the 13th century, in a very limited manner.
In Asia, the Mongols adopted Chinese artillery and used it effectively in their conquests. By the late 14th century, Chinese rebels used organized artillery and cavalry to drive the Mongols out.
As small smooth-bore barrels, these were initially cast in iron or bronze around a core, with the first drilled-bore ordnance recorded in operation near Seville in 1247. They fired lead, iron, or stone balls, sometimes large arrows, and on occasion simply handfuls of whatever scrap came to hand. During the Hundred Years' War, these weapons became more common, initially as the bombard and later the cannon. Cannon of this period were all muzzle-loaders: while there were many early attempts at breech-loading designs, a lack of engineering knowledge rendered them even more dangerous to use than muzzle-loaders.
Expansion of use
In 1415, the Portuguese invaded the Mediterranean port town of Ceuta. While it is difficult to confirm the use of firearms in the siege of the city, it is known that the Portuguese defended it thereafter with firearms, namely bombardas, colebratas, and falconetes. In 1419, Sultan Abu Sa'id led an army to reconquer the fallen city, and the Marinids brought cannons and used them in the assault on Ceuta. Hand-held firearms and riflemen appeared in Morocco in 1437, in an expedition against the people of Tangiers. It is clear that these weapons had developed into several different forms, from small guns to large artillery pieces.
The artillery revolution in Europe caught on during the Hundred Years' War and changed the way that battles were fought. In the preceding decades, the English had even used a gunpowder-like weapon in military campaigns against the Scottish. However, at this time, the cannons used in battle were very small and not particularly powerful. Cannons were only useful for the defense of a castle, as demonstrated at Breteuil in 1356, when the besieged English used a cannon to destroy an attacking French assault tower. By the end of the 14th century, cannon were only powerful enough to knock in roofs, and could not penetrate castle walls.
However, a major change occurred between 1420 and 1430, when artillery became much more powerful and could now batter strongholds and fortresses quite efficiently. The English, French, and Burgundians all advanced in military technology, and as a result the traditional advantage that went to the defense in a siege was lost. The cannon during this period were elongated, and the recipe for gunpowder was improved to make it three times as powerful as before. These changes led to the increased power in the artillery weapons of the time.
Joan of Arc encountered gunpowder weaponry several times. When she led the French against the English at the Battle of the Tourelles, in 1429, she faced heavy gunpowder fortifications, and yet her troops prevailed in that battle. In addition, she led assaults against the English-held towns of Jargeau, Meung, and Beaugency, all with the support of large artillery units. When she led the assault on Paris, Joan faced stiff artillery fire, especially from the suburb of St. Denis, which ultimately led to her defeat in this battle. In April 1430, she went to battle against the Burgundians, whose support was purchased by the English. At this time, the Burgundians had the strongest and largest gunpowder arsenal among the European powers, and yet the French, under Joan of Arc's leadership, were able to beat back the Burgundians and defend themselves. As a result, most of the battles of the Hundred Years' War that Joan of Arc participated in were fought with gunpowder artillery.
The army of Mehmed the Conqueror, which conquered Constantinople in 1453, included both artillery and foot soldiers armed with gunpowder weapons. The Ottomans brought to the siege sixty-nine guns in fifteen separate batteries and trained them on the walls of the city. The barrage of Ottoman cannon fire lasted forty days, and the guns are estimated to have fired 19,320 times. Artillery also played a decisive role in the Battle of St. Jakob an der Birs of 1444. Early cannon were not always reliable; King James II of Scotland was killed by the accidental explosion of one of his own cannon, imported from Flanders, at the siege of Roxburgh Castle in 1460.
The able use of artillery supported in large measure the expansion and defense of the Portuguese Empire, as it was a necessary tool that allowed the Portuguese to face overwhelming odds both on land and sea from Morocco to Asia. In great sieges and in sea battles, the Portuguese demonstrated a level of proficiency in the use of artillery after the beginning of the 16th century unequalled by contemporary European neighbours, in part due to the experience gained in intense fighting in Morocco, which served as a proving ground for artillery and its practical application, and made Portugal a forerunner in gunnery for decades. During the reign of King Manuel (1495–1521) at least 2,017 cannon were sent to Morocco for garrison defense, with more than 3,000 cannon estimated to have been required during that 26-year period. An especially noticeable division between siege guns and anti-personnel guns enhanced the use and effectiveness of Portuguese firearms above contemporary powers, making cannon the most essential element in the Portuguese arsenal.
The three major classes of Portuguese artillery were anti-personnel guns with a high bore length (including the rebrodequim, berço, falconete, falcão, sacre, áspide, cão, serpentina and passavolante); bastion guns which could batter fortifications (camelete, leão, pelicano, basilisco, águia, camelo, roqueira, urso); and howitzers that fired large stone cannonballs in a high arc, weighed up to 4,000 pounds, and could fire incendiary devices, such as a hollow iron ball filled with pitch and fuse, designed to be fired at close range and burst on contact. The most popular in Portuguese arsenals was the berço, a 5 cm, one-pounder bronze breech-loading cannon that weighed 150 kg with an effective range of 600 meters.
A tactical innovation the Portuguese introduced in fort defense was the use of combinations of projectiles against massed assaults. Although canister shot had been developed in the early 15th century, the Portuguese were the first to employ it extensively, and Portuguese engineers invented a canister round consisting of a thin lead case filled with iron pellets, which broke up at the muzzle and scattered its contents in a narrow pattern. Another innovation which Portugal adopted in advance of other European powers was the fuse-delayed action shell, in common use by 1505. Although dangerous, their effectiveness meant a sixth of all rounds used by the Portuguese in Morocco were of the fused-shell variety.
The new Ming Dynasty established the "Divine Engine Battalion" (神机营), which specialized in various types of artillery. Light cannons and cannons with multiple volleys were developed. In a campaign to suppress a local minority rebellion near today's Burmese border, "the Ming army used a 3-line method of arquebuses/muskets to destroy an elephant formation."
When the Portuguese and Spanish arrived in Southeast Asia, they found that the local kingdoms were already using cannons, and the invaders were unpleasantly surprised and even outgunned on occasion. Writing ca. 1514, Duarte Barbosa said that the inhabitants of Java were great masters in casting artillery and very good artillerymen. They made many one-pounder cannons (cetbang or rentaka), long muskets, spingarde (arquebus), schioppi (hand cannon), Greek fire, guns (cannons), and other fire-works, and every place was considered excellent in casting artillery and in the knowledge of using it. In 1513, the Javanese fleet led by Pati Unus sailed to attack Portuguese Malacca "with much artillery made in Java, for the Javanese are skilled in founding and casting, and in all works in iron, over and above what they have in India". By the early 16th century, the Javanese were already producing large guns locally, some of which survive to the present day and are dubbed "sacred cannons" or "holy cannons". These cannons varied from 180- to 260-pounders, weighing between 3 and 8 tons and measuring between 3 and 6 m in length.
Between 1593 and 1597, about 200,000 Korean and Chinese troops who fought against Japan in Korea actively used heavy artillery in both siege and field combat. Korean forces mounted artillery in ships as naval guns, providing an advantage against the Japanese navy, which used the Kunikuzushi (国崩し – Japanese breech-loading swivel gun) and Ōzutsu (大筒 – large size Tanegashima) as their largest firearms.
Smoothbores
Bombards were of value mainly in sieges. A famous Turkish example used at the siege of Constantinople in 1453 weighed 19 tons, took 200 men and sixty oxen to emplace, and could fire just seven times a day. The Fall of Constantinople was perhaps "the first event of supreme importance whose result was determined by the use of artillery" when the huge bronze cannons of Mehmed II breached the city's walls, ending the Byzantine Empire, according to Sir Charles Oman.
Bombards developed in Europe were massive smoothbore weapons distinguished by their lack of a field carriage, immobility once emplaced, highly individual design, and noted unreliability. Their large size precluded the barrels being cast, so they were constructed out of metal staves or rods bound together with hoops like a barrel, giving their name to the gun barrel.
The use of the word "cannon" marks the introduction in the 15th century of a dedicated field carriage with axle, trail and animal-drawn limber—this produced mobile field pieces that could move and support an army in action, rather than being found only in the siege and static defenses. The reduction in the size of the barrel was due to improvements in both iron technology and gunpowder manufacture, while the development of trunnions—projections at the side of the cannon as an integral part of the cast—allowed the barrel to be fixed to a more movable base, and also made raising or lowering the barrel much easier.
The first land-based mobile artillery is usually credited to Jan Žižka, who deployed his oxen-hauled cannon during the Hussite Wars of Bohemia (1418–1424). However, cannons were still large and cumbersome. With the rise of musketry in the 16th century, cannon were largely (though not entirely) displaced from the battlefield: they were too slow and cumbersome to be used and too easily lost to a rapid enemy advance.
The combining of shot and powder into a single unit, a cartridge, occurred in the 1620s with a simple fabric bag, and was quickly adopted by all nations. It sped up loading and made it safer, but unexpelled bag fragments were an additional fouling in the gun barrel and a new tool—a worm—was introduced to remove them. Gustavus Adolphus is identified as the general who made cannon an effective force on the battlefield, pushing the development of much lighter and smaller weapons and deploying them in far greater numbers than previously. Even so, the outcome of battles was still determined by the clash of infantry.
Shells, explosive-filled fused projectiles, were in use by the 15th century. The development of specialized pieces—shipboard artillery, howitzers and mortars—was also begun in this period. More esoteric designs, like the multi-barrel ribauldequin (known as "organ guns"), were also produced.
Kazimierz Siemienowicz's 1650 book Artis Magnae Artilleriae pars prima was one of the most important contemporary publications on the subject of artillery. For over two centuries this work was used in Europe as a basic artillery manual.
One of the most significant effects of artillery during this period was, however, somewhat more indirect: by easily reducing to rubble any medieval-type fortification or city wall (some of which had stood since Roman times), it abolished millennia of siege-warfare strategies and styles of fortification building. This led, among other things, to a frenzy of building new bastion-style fortifications all over Europe and in its colonies, but also had a strong integrating effect on emerging nation-states, as kings were able to use their newfound artillery superiority to force any local dukes or lords to submit to their will, setting the stage for the absolutist kingdoms to come.
Modern rocket artillery can trace its heritage back to the Mysorean rockets of India. Their first recorded use was in 1780, and they were employed in the battles of the Second, Third and Fourth Mysore Wars, fought between the British East India Company and the Kingdom of Mysore. In the Battle of Pollilur, the Siege of Seringapatam (1792) and the Battle of Seringapatam in 1799, these rockets were used with considerable effect against the British. After the wars, several Mysore rockets were sent to England, but experiments with heavier payloads were unsuccessful. In 1804, William Congreve, considering the Mysorean rockets to have too short a range (less than 1,000 yards), developed rockets in numerous sizes with ranges up to 3,000 yards, eventually utilizing iron casings; the resulting Congreve rocket was used effectively during the Napoleonic Wars and the War of 1812.
Napoleonic
With the Napoleonic Wars, artillery experienced changes in both physical design and operation. Rather than being overseen by "mechanics", artillery was viewed as its own service branch with the capability of dominating the battlefield. The success of the French artillery companies was at least in part due to the presence of specially trained artillery officers leading and coordinating during the chaos of battle. Napoleon, himself a former artillery officer, perfected the tactic of massed artillery batteries unleashed upon a critical point in his enemies' line as a prelude to a decisive infantry and cavalry assault.
Physically, cannons continued to become smaller and lighter. During the Seven Years' War, King Frederick II of Prussia used these advances to deploy horse artillery that could move throughout the battlefield. Frederick also introduced the reversible iron ramrod, which was much more resistant to breakage than older wooden designs. The reversibility also helped increase the rate of fire, since a soldier no longer had to worry about which end of the ramrod they were using.
Jean-Baptiste de Gribeauval, a French artillery engineer, introduced the standardization of cannon design in the mid-18th century. He developed a 6-inch (150 mm) field howitzer whose gun barrel, carriage assembly and ammunition specifications were made uniform for all French cannons. The standardized interchangeable parts of these cannons, down to the nuts, bolts and screws, made their mass production and repair much easier. While the Gribeauval system made for more efficient production and assembly, the carriages used were heavy and the gunners were forced to march on foot (instead of riding on the limber and gun as in the British system). Each cannon was named for the weight of its projectile, giving variants such as the 4-, 8-, and 12-pounder. The projectiles themselves included solid balls or canister containing lead bullets or other material. Canister shot acted as a massive shotgun, peppering the target with hundreds of projectiles at close range. Solid balls, known as round shot, were most effective when fired at shoulder height across a flat, open area; the ball would tear through the ranks of the enemy or bounce along the ground, breaking legs and ankles.
Modern
The development of modern artillery occurred in the mid to late 19th century as a result of the convergence of various improvements in the underlying technology. Advances in metallurgy allowed for the construction of breech-loading rifled guns that could fire at a much greater muzzle velocity.
After the British artillery was shown up in the Crimean War as having barely changed since the Napoleonic Wars, the industrialist William Armstrong was awarded a contract by the government to design a new piece of artillery. Production started in 1855 at the Elswick Ordnance Company and the Royal Arsenal at Woolwich, and the outcome was the revolutionary Armstrong Gun, which marked the birth of modern artillery. Three of its features particularly stand out.
First, the piece was rifled, which allowed for a much more accurate and powerful action. Although rifling had been tried on small arms since the 15th century, the necessary machinery to accurately rifle artillery was not available until the mid-19th century. Martin von Wahrendorff, and Joseph Whitworth independently produced rifled cannon in the 1840s, but it was Armstrong's gun that was first to see widespread use during the Crimean War. The cast iron shell of the Armstrong gun was similar in shape to a Minié ball and had a thin lead coating which made it fractionally larger than the gun's bore and which engaged with the gun's rifling grooves to impart spin to the shell. This spin, together with the elimination of windage as a result of the tight fit, enabled the gun to achieve greater range and accuracy than existing smooth-bore muzzle-loaders with a smaller powder charge.
His gun was also a breech-loader. Although attempts at breech-loading mechanisms had been made since medieval times, the essential engineering problem was that the mechanism could not withstand the explosive charge. It was only with the advances in metallurgy and precision engineering capabilities during the Industrial Revolution that Armstrong was able to construct a viable solution. The gun combined all the properties that make up an effective artillery piece, and was mounted on a carriage in such a way as to return to firing position after the recoil.
What made the gun truly revolutionary was the technique of constructing the gun barrel, which allowed it to withstand much more powerful explosive forces. The "built-up" method involved assembling the barrel from wrought-iron (later mild steel) tubes of successively smaller diameter. Each tube was heated to allow it to expand and fit over the previous tube; when it cooled, the gun would contract, although not back to its original size, producing an even pressure along the walls of the gun directed inward against the outward forces that firing exerted on the barrel.
Another innovative feature, more usually associated with 20th-century guns, was what Armstrong called its "grip", which was essentially a squeeze bore; the 6 inches of the bore at the muzzle end was of slightly smaller diameter, which centered the shell before it left the barrel and at the same time slightly swaged down its lead coating, reducing its diameter and slightly improving its ballistic qualities.
Armstrong's system was adopted in 1858, initially for "special service in the field", and at first he produced only smaller artillery pieces: 6-pounder (2.5 in/64 mm) mountain or light field guns, 9-pounder (3 in/76 mm) guns for horse artillery, and 12-pounder (3 in/76 mm) field guns.
The first cannon to contain all 'modern' features is generally considered to be the French 75 of 1897. The gun used cased ammunition, was breech-loading, and had modern sights and a self-contained firing mechanism. It was the first field gun to include a hydro-pneumatic recoil mechanism, which kept the gun's trail and wheels perfectly still during the firing sequence. Since it did not need to be re-aimed after each shot, the crew could fire as soon as the barrel returned to its resting position. In typical use, the French 75 could deliver fifteen rounds per minute on its target, either shrapnel or melinite high-explosive, up to about 5 miles (8,500 m) away. Its firing rate could even reach close to 30 rounds per minute, albeit only for a very short time and with a highly experienced crew. These were rates that contemporary bolt-action rifles could not match.
Indirect fire
Indirect fire, the firing of a projectile without relying on direct line of sight between the gun and the target, possibly dates back to the 16th century. Early battlefield use of indirect fire may have occurred at Paltzig in July 1759, when the Russian artillery fired over the tops of trees, and at the Battle of Waterloo, where a battery of the Royal Horse Artillery fired shrapnel indirectly against advancing French troops.
In 1882, Russian Lieutenant Colonel KG Guk published Indirect Fire for Field Artillery, which provided a practical method of using aiming points for indirect fire by describing, "all the essentials of aiming points, crest clearance, and corrections to fire by an observer".
A few years later, the Richtfläche (lining-plane) sight was invented in Germany and provided a means of indirect laying in azimuth, complementing the clinometers for indirect laying in elevation which already existed. Despite conservative opposition within the German army, indirect fire was adopted as doctrine by the 1890s. In the early 1900s, Goertz in Germany developed an optical sight for azimuth laying. It quickly replaced the lining-plane; in English, it became the 'Dial Sight' (UK) or 'Panoramic Telescope' (US).
The British had experimented halfheartedly with indirect fire techniques from the 1890s, but with the onset of the Boer War in 1899 they became the first to apply the theory in practice, although they had to improvise without a lining-plane sight.
In the next 15 years leading up to World War I, the techniques of indirect fire became available for all types of artillery. Indirect fire was the defining characteristic of 20th-century artillery and led to undreamt-of changes in the amount of artillery, its tactics, organisation, and techniques, most of which occurred during World War I.
One implication of indirect fire and improving guns was the increasing range between gun and target, which increased the time of flight and raised the vertex of the trajectory. The result was decreasing accuracy (an increasing distance between the target and the mean point of impact of the shells aimed at it) caused by the increasing effects of non-standard conditions. Indirect firing data was based on standard conditions, including a specific muzzle velocity, zero wind, set air temperature and density, and propellant temperature. In practice, this standard combination of conditions almost never existed: conditions varied throughout the day and from day to day, and the greater the time of flight, the greater the inaccuracy. An added complication was the need for survey to accurately fix the coordinates of the gun position and provide accurate orientation for the guns. Of course, targets also had to be accurately located, but by 1916, air photo interpretation techniques enabled this, and ground survey techniques could sometimes be used.
In 1914, the methods of correcting firing data for the actual conditions were often convoluted, and the availability of data about actual conditions was rudimentary or non-existent; the assumption was that fire would always be ranged (adjusted). British heavy artillery worked energetically to progressively solve all these problems from late 1914 onwards, and by early 1918 had effective processes in place for both field and heavy artillery. These processes enabled 'map-shooting', later called 'predicted fire': effective fire could be delivered against an accurately located target without ranging. Nevertheless, the mean point of impact was still some tens of yards from the target-centre aiming point. It was not precision fire, but it was good enough for concentrations and barrages. These processes remain in use into the 21st century, with refinements to calculations enabled by computers and improved data capture about non-standard conditions.
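The arithmetic behind such corrections can be illustrated with a short sketch using the simple 'unit effect' method associated with firing tables: each measured deviation from standard conditions is multiplied by a tabulated effect on range. All figures and names below are hypothetical, chosen purely for illustration; real corrections come from the firing tables for a specific gun, charge, and elevation.

```python
# A minimal sketch of predicted-fire corrections, assuming linear
# 'unit effects' on range. All values are hypothetical illustrations,
# not data for any real gun or firing table.

UNIT_EFFECTS = {
    "muzzle_velocity": 25.0,  # metres of range per m/s of MV deviation
    "range_wind": 8.0,        # metres of range per m/s of tail wind
    "air_temp": -4.0,         # metres of range per degree C above standard
}

def corrected_range(map_range, deviations):
    """Return the range to enter the firing table with, so that the
    shell lands on the true target under the measured conditions."""
    carry = sum(UNIT_EFFECTS[k] * deviations[k] for k in UNIT_EFFECTS)
    # If conditions carry the shell short (negative), aim longer,
    # and vice versa.
    return map_range - carry

conditions = {
    "muzzle_velocity": -3.0,  # worn barrel: 3 m/s below standard
    "range_wind": 4.0,        # 4 m/s tail wind
    "air_temp": 6.0,          # 6 degrees C above standard
}
# carry = 25*(-3) + 8*4 + (-4)*6 = -67 m short, so aim 67 m longer
print(corrected_range(15000.0, conditions))  # -> 15067.0
```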
The British major-general Henry Hugh Tudor pioneered armour and artillery cooperation at the breakthrough Battle of Cambrai. The improvements in providing and using data for non-standard conditions (propellant temperature, muzzle velocity, wind, air temperature, and barometric pressure) were developed by the major combatants throughout the war and enabled effective predicted fire. The effectiveness of this was demonstrated by the British in 1917 (at Cambrai) and by Germany the following year (Operation Michael).
An estimated 75,000 French soldiers were casualties of friendly artillery fire in the four years of World War I.
Precision-guidance
Modern artillery is most obviously distinguished by its long range, firing an explosive shell or rocket, and a mobile carriage for firing and transport. However, its most important characteristic is the use of indirect fire, whereby the firing equipment is aimed without seeing the target through its sights. Indirect fire emerged at the beginning of the 20th century and was greatly enhanced by the development of predicted fire methods in World War I. However, indirect fire was area fire; it was, and is, not suitable for destroying point targets; its primary purpose is area suppression. Nevertheless, by the late 1970s precision-guided munitions started to appear, notably the US 155 mm Copperhead and its Soviet 152 mm Krasnopol equivalent, which had success in Indian service. These relied on laser designation to 'illuminate' the target that the shell homed onto. However, in the early 21st century, the Global Positioning System (GPS) enabled relatively cheap and accurate guidance for shells and missiles, notably the US 155 mm Excalibur and the 227 mm GMLRS rocket. The introduction of these led to a new issue: the need for very accurate three-dimensional target coordinates, the mensuration process.
Weapons covered by the term 'modern artillery' include "cannon" artillery (such as howitzer, mortar, and field gun) and rocket artillery. Certain smaller-caliber mortars are more properly designated small arms rather than artillery, albeit indirect-fire small arms. This term also came to include coastal artillery which traditionally defended coastal areas against seaborne attack and controlled the passage of ships. With the advent of powered flight at the start of the 20th century, artillery also included ground-based anti-aircraft batteries.
The term "artillery" has traditionally not been used for projectiles with internal guidance systems, preferring the term "missilery", though some modern artillery units employ surface-to-surface missiles. Advances in terminal guidance systems for small munitions has allowed large-caliber guided projectiles to be developed, blurring this distinction.<ref>{{Cite book|last=Chikammadu|first=Ali Caleb|title=Enotenplato The Chronicle of Military Doctrine'|publisher=Lulu.com|date=September 3, 2019|isbn=9780359806997|pages=196}}</ref> See Long Range Precision Fires (LRPF), Joint terminal attack controllerAmmunition
One of the most important roles of logistics is the supply of munitions as a primary type of artillery consumable, their storage (ammunition dump, arsenal, magazine
) and the provision of fuzes, detonators and warheads at the point where artillery troops will assemble the charge, projectile, bomb or shell.
A round of artillery ammunition comprises four components:
Fuze
Projectile
Propellant
Primer
Fuzes
Fuzes are the devices that initiate an artillery projectile, either to detonate its High Explosive (HE) filling or eject its cargo (illuminating flare or smoke canisters being examples). The official military spelling is "fuze". Broadly there are four main types:
impact (including graze and delay)
mechanical time including airburst
proximity sensor including airburst
programmable electronic detonation including airburst
Most artillery fuzes are nose fuzes. However, base fuzes have been used with armor-piercing shells and for squash head (High-Explosive Squash Head (HESH) or High Explosive, Plastic (HEP) anti-tank shells). At least one nuclear shell and its non-nuclear spotting version also used a multi-deck mechanical time fuze fitted into its base.
Impact fuzes were, and in some armies remain, the standard fuze for HE projectiles. Their default action is normally 'superquick'; some have had a 'graze' action, which allows them to penetrate light cover, and others have 'delay'. Delay fuzes allow the shell to penetrate the ground before exploding. Armor-Piercing or Concrete-Piercing (AP or CP) fuzes are specially hardened. During World War I and later, ricochet fire with delay- or graze-fuzed HE shells, fired with a flat angle of descent, was used to achieve airburst.
HE shells can be fitted with other fuzes. Airburst fuzes usually have a combined airburst and impact function. However, until the introduction of proximity fuzes, the airburst function was mostly used with cargo munitions—for example, shrapnel, illumination, and smoke. The larger calibers of anti-aircraft artillery almost always fire airburst. Airburst fuzes have to have the fuze length (running time) set on them. This is done just before firing, using either a wrench or a fuze setter pre-set to the required fuze length.
Early airburst fuzes used igniferous timers which lasted into the second half of the 20th century. Mechanical time fuzes appeared in the early part of the century. These required a means of powering them. The Thiel mechanism used a spring and escapement (i.e. 'clockwork'), Junghans used centrifugal force and gears, and Dixi used centrifugal force and balls. From about 1980, electronic time fuzes started replacing mechanical ones for use with cargo munitions.
Proximity fuzes have been of two types: photo-electric or radar. The former was not very successful and seems only to have been used with British anti-aircraft artillery 'unrotated projectiles' (rockets) in World War II. Radar proximity fuzes were a big improvement over the mechanical (time) fuzes which they replaced. Mechanical time fuzes required an accurate calculation of their running time, which was affected by non-standard conditions. With HE (requiring a burst 20 to 30 feet above the ground), if this was even slightly wrong the rounds would either hit the ground or burst too high. Accurate running time was less important with cargo munitions that burst much higher.
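A rough calculation shows why running time had to be so accurate; the descent speed below is an assumed round number, not data for any particular shell:

```python
# Illustrative only: near the end of its trajectory a shell descends
# quickly, so a small running-time error moves the burst point a long
# way vertically. Assumed figures, not real fuze or shell data.

descent_speed = 150.0  # vertical speed near the burst point, m/s (assumed)
fuze_error = 0.05      # running-time error, s

error_m = descent_speed * fuze_error
print(f"height-of-burst error: {error_m:.1f} m")  # 7.5 m
# Against a desired burst height on the order of 20 to 30 feet (6-9 m),
# an error of this size means a ground burst or a burst far too high.
```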
The first radar proximity fuzes (perhaps originally codenamed 'VT' and later called Variable Time (VT)) were invented by the British and developed by the US, and were initially used against aircraft in World War II. Their ground use was delayed for fear of the enemy recovering 'blinds' (artillery shells which failed to detonate) and copying the fuze. The first proximity fuzes were designed to detonate about 30 feet above the ground. Such airbursts are much more lethal against personnel than ground bursts because they deliver a greater proportion of useful fragments and deliver them into terrain where a prone soldier would otherwise be protected.
However, proximity fuzes can suffer premature detonation because of the moisture in heavy rain clouds. This led to 'Controlled Variable Time' (CVT) fuzes after World War II. These fuzes have a mechanical timer that switches on the radar about 5 seconds before expected impact; they also detonate on impact.
The proximity fuze emerged on the battlefields of Europe in late December 1944. It became known as the U.S. Artillery's "Christmas present", and was much appreciated when it arrived during the Battle of the Bulge. Proximity fuzes were also used to great effect in anti-aircraft projectiles in the Pacific against kamikaze aircraft, as well as in Britain against V-1 flying bombs.
Electronic multi-function fuzes started to appear around 1980. Using solid-state electronics they were relatively cheap and reliable, and became the standard fitted fuze in operational ammunition stocks in some western armies. The early versions were often limited to proximity airburst, albeit with height of burst options, and impact. Some offered a go/no-go functional test through the fuze setter.
Later versions introduced induction fuze setting and testing instead of physically placing a fuze setter on the fuze. The latest, such as Junghans' DM84U, provide options for superquick, delay, a choice of proximity heights of burst, time, and a choice of foliage penetration depths.
A newer type of artillery fuze is emerging. In addition to other functions, these fuzes offer some course-correction capability: not full precision, but sufficient to significantly reduce the dispersion of the shells on the ground.
Projectiles
The projectile is the munition or "bullet" fired downrange. This may be an explosive device. Projectiles have traditionally been classified as "shot" or "shell", the former being solid and the latter having some form of "payload".
Shells can be divided into three configurations: bursting, base ejection or nose ejection. The latter is sometimes called the shrapnel configuration. The most modern is base ejection, which was introduced in World War I. Base and nose ejection are almost always used with airburst fuzes. Bursting shells use various types of fuze depending on the nature of the payload and the tactical need at the time.
Payloads have included:
Bursting: high-explosive, white phosphorus, coloured marker, chemical, nuclear devices; high-explosive anti-tank and canister may be considered special types of bursting shell.
Nose ejection: shrapnel, star, incendiary and flechette (a more modern version of shrapnel).
Base ejection: Dual-Purpose Improved Conventional Munition bomblets, which arm themselves and function after a set number of rotations after having been ejected from the projectile (this produces unexploded sub-munitions, or "duds", which remain dangerous), scatterable mines, illuminating, coloured flare, smoke, incendiary, propaganda, chaff (foil to jam radars) and modern exotics such as electronic payloads and sensor-fuzed munitions.
Stabilization
Rifled: Artillery projectiles have traditionally been spin-stabilised, meaning that they spin in flight so that gyroscopic forces prevent them from tumbling. Spin is induced by gun barrels having rifling, which engages a soft metal band around the projectile, called a "driving band" (UK) or "rotating band" (U.S.). The driving band is usually made of copper, but synthetic materials have been used.
Smoothbore/fin-stabilized: In modern artillery, smoothbore barrels have been used mostly by mortars. These projectiles use fins in the airflow at their rear to maintain correct orientation. The primary benefits over rifled barrels are reduced barrel wear, longer range (due to reduced loss of energy to friction and to gas escaping around the projectile via the rifling), and a larger explosive core for a given caliber, since less force is applied to the shell by the non-rifled barrel and less metal is therefore needed to form the projectile's case.
Rifled/fin-stabilized: A combination of the above can be used, where the barrel is rifled, but the projectile also has deployable fins for stabilization, guidance or gliding.
Propellant
Most forms of artillery require a propellant to propel the projectile at the target. Propellant is always a low explosive, which means it deflagrates, rather than detonating like high explosives. The shell is accelerated to a high velocity in a very short time by the rapid generation of gas from the burning propellant. This high pressure is achieved by burning the propellant in a contained area, either the chamber of a gun barrel or the combustion chamber of a rocket motor.
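Some back-of-envelope kinematics illustrate the magnitudes involved; the shell mass, muzzle velocity, and bore travel below are assumed round numbers, and the constant-acceleration assumption is a deliberate simplification:

```python
# Illustrative interior-ballistics arithmetic with assumed values.
# Crude assumption: constant acceleration along the bore (in reality
# pressure, and hence acceleration, peaks early and then falls).

mass = 43.5       # shell mass, kg (assumed)
v_muzzle = 800.0  # muzzle velocity, m/s (assumed)
travel = 6.0      # projectile travel in the bore, m (assumed)

energy = 0.5 * mass * v_muzzle ** 2    # muzzle energy, ~13.9 MJ
accel = v_muzzle ** 2 / (2 * travel)   # mean acceleration, ~5,400 g
time_in_bore = 2 * travel / v_muzzle   # time in the bore, ~15 ms

print(f"{energy / 1e6:.1f} MJ, {accel / 9.81:.0f} g, "
      f"{time_in_bore * 1000:.0f} ms")
```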
Until the late 19th century, the only available propellant was black powder. It had many disadvantages as a propellant: it had relatively low power, requiring large amounts of powder to fire projectiles, and created thick clouds of white smoke that would obscure the targets, betray the positions of guns, and make aiming impossible. In 1846, nitrocellulose (also known as guncotton) was discovered, and the high explosive nitroglycerin was discovered at nearly the same time. Nitrocellulose was significantly more powerful than black powder, and was smokeless. Early guncotton was unstable, however, and burned very fast and hot, leading to greatly increased barrel wear. Widespread introduction of smokeless powder would wait until the advent of the double-base powders, which combine nitrocellulose and nitroglycerin to produce powerful, smokeless, stable propellant.
Many other formulations were developed in the following decades, generally trying to find the optimum characteristics of a good artillery propellant: low temperature, high energy, non-corrosive, highly stable, cheap, and easy to manufacture in large quantities. Modern gun propellants are broadly divided into three classes: single-base propellants that are mainly or entirely nitrocellulose based, double-base propellants consisting of a combination of nitrocellulose and nitroglycerin, and triple-base propellants composed of a combination of nitrocellulose, nitroglycerin and nitroguanidine.
Artillery shells fired from a barrel can be assisted to greater range in three ways:
Rocket-assisted projectiles enhance and sustain the projectile's velocity by providing additional 'push' from a small rocket motor that is part of the projectile's base.
Base bleed uses a small pyrotechnic charge at the base of the projectile to introduce sufficient combustion products into the low-pressure region behind the base of the projectile, which is responsible for a large proportion of the drag.
Ramjet-assisted, similar to rocket-assisted, but using a ramjet instead of a rocket motor; it is anticipated that a ramjet-assisted 120 mm mortar shell could achieve significantly greater range than a conventional round.
Propelling charges for barrel artillery can be provided either as cartridge bags or in metal cartridge cases. Generally, anti-aircraft artillery and smaller-caliber (up to 3" or 76.2 mm) guns use metal cartridge cases that include the round and propellant, similar to a modern rifle cartridge. This simplifies loading and is necessary for very high rates of fire. Bagged propellant allows the amount of powder to be raised or lowered, depending on the range to the target. It also makes handling of larger shells easier. Cases and bags require totally different types of breech. A metal case holds an integral primer to initiate the propellant and provides the gas seal to prevent the gases leaking out of the breech; this is called obturation. With bagged charges, the breech itself provides obturation and holds the primer. In either case, the primer is usually percussion, but electrical is also used, and laser ignition is emerging. Modern 155 mm guns have a primer magazine fitted to their breech.
Artillery ammunition has four classifications according to use:
Service: ammunition used in live fire training or for wartime use in a combat zone. Also known as "warshot" ammunition.
Practice: Ammunition with a non- or minimally-explosive projectile that mimics the characteristics (range, accuracy) of live rounds for use under training conditions. Practice artillery ammunition often utilizes a colored-smoke-generating bursting charge for marking purposes in place of the normal high-explosive charge.
Dummy: Ammunition with an inert warhead, inert primer, and no propellant; used for training or display.
Blank: Ammunition with live primer, greatly reduced propellant charge (typically black powder), and no projectile; used for training, demonstration or ceremonial use.
Field artillery system
Because modern field artillery mostly uses indirect fire, the guns have to be part of a system that enables them to attack targets invisible to them, in accordance with the combined arms plan.
The main functions in the field artillery system are:
Communications
Command: authority to allocate resources;
Target acquisition: detect, identify and deduce the location of targets;
Control: authority to decide which targets to attack and allot fire units to the attack;
Computation of firing data: to deliver fire from a fire unit onto its target;
Fire units: guns, launchers or mortars grouped together;
Specialist services: produce data to support the production of accurate firing data;
Logistic services: to provide combat supplies, particularly ammunition, and equipment support.
All these calculations to produce a quadrant elevation (or range) and azimuth were done manually, using instruments, tabulated data of the moment, and approximations, until battlefield computers started appearing in the 1960s and 1970s. While some early calculators copied the manual method (typically substituting polynomials for tabulated data), computers use a different approach. They simulate a shell's trajectory by 'flying' it in short steps and applying data about the conditions affecting the trajectory at each step. This simulation is repeated until it produces a quadrant elevation and azimuth that lands the shell within the required 'closing' distance of the target coordinates.
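A minimal sketch of this stepped-simulation approach is shown below, assuming a simple point-mass model with quadratic drag and illustrative shell and gun values; it is not the NATO ballistic kernel described next, only a demonstration of the 'fly and close' idea:

```python
import math

# Illustrative only: point-mass trajectory 'flown' in short time steps,
# with bisection on quadrant elevation until the impact point is within
# the 'closing' distance of the target range. All constants are assumed.

G = 9.81    # gravity, m/s^2
DT = 0.001  # time step, s

def fly(elevation_deg, muzzle_velocity, mass=43.5, drag_k=0.0001):
    """Step the shell through its trajectory; return range at impact."""
    vx = muzzle_velocity * math.cos(math.radians(elevation_deg))
    vy = muzzle_velocity * math.sin(math.radians(elevation_deg))
    x = y = 0.0
    while y >= 0.0:
        v = math.hypot(vx, vy)
        vx -= (drag_k / mass) * v * vx * DT        # drag opposes motion
        vy -= ((drag_k / mass) * v * vy + G) * DT  # drag plus gravity
        x += vx * DT
        y += vy * DT
    return x

def solve_elevation(target_range, muzzle_velocity, closing=5.0):
    """Repeat the simulation, bisecting on elevation, until the shell
    lands within the required closing distance of the target."""
    lo, hi = 0.0, 45.0  # search the low-angle arc
    for _ in range(60):
        mid = (lo + hi) / 2.0
        r = fly(mid, muzzle_velocity)
        if abs(r - target_range) <= closing:
            return mid
        if r < target_range:
            lo = mid
        else:
            hi = mid
    raise ValueError("target beyond achievable range")

print(solve_elevation(target_range=10000.0, muzzle_velocity=680.0))
```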
NATO has a standard ballistic model for computer calculations and has expanded the scope of this into the NATO Armaments Ballistic Kernel (NABK) within the SG2 Shareable (Fire Control) Software Suite (S4).
Logistics
Supply of artillery ammunition has always been a major component of military logistics. Up until World War I, some armies made artillery responsible for all forward ammunition supply, because the load of small arms ammunition was trivial compared with that of artillery. Different armies use different approaches to ammunition supply, which can vary with the nature of operations. Differences include where the logistic service transfers artillery ammunition to artillery, the amount of ammunition carried in units, and the extent to which stocks are held at unit or battery level. A key difference is whether supply is 'push' or 'pull'. In the former, the 'pipeline' keeps pushing ammunition into formations or units at a defined rate. In the latter, units fire as tactically necessary and replenish to maintain or reach their authorised holding (which can vary), so the logistic system has to be able to cope with surge and slack.
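The contrast can be shown with a toy model; every quantity below is invented for illustration and does not reflect any army's actual rates or holdings.

```python
# Toy contrast of 'push' versus 'pull' ammunition resupply.
# All quantities are invented for illustration.

AUTHORISED = 300  # a unit's authorised holding, rounds (assumed)
PUSH_RATE = 90    # fixed daily delivery under a push system (assumed)

push_stock = pull_stock = AUTHORISED
for spent in (20, 250, 0):  # a quiet day, a surge, a slack day
    # Push: the pipeline delivers at a defined rate regardless of use.
    push_stock += PUSH_RATE - spent
    # Pull: fire as tactically necessary, then demand whatever returns
    # the unit to its authorised holding; the system must absorb surges.
    pull_stock -= spent
    demand = AUTHORISED - pull_stock
    pull_stock += demand
    print(f"fired {spent:3d}: push holding {push_stock:3d}, "
          f"pull demand {demand:3d}")
```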
Classification
Artillery types can be categorised in several ways, for example by type or size of weapon or ordnance, by role or by organizational arrangements.
Types of ordnance
The types of cannon artillery are generally distinguished by the velocity at which they fire projectiles.
Types of artillery:
Cannon: The oldest type of artillery, with a direct firing trajectory.
Bombard: A type of a large calibre, muzzle-loading artillery piece, a cannon or mortar used during sieges to shoot round stone projectiles at the walls of enemy fortifications.
Falconet was a type of light cannon developed in the late 15th century that fired a smaller shot than the similar falcon.
Swivel gun is a type of small cannon mounted on a swiveling stand or fork which allows a very wide arc of movement. Camel-mounted swivel guns, called zamburak, were used by the Gunpowder Empires as self-propelled artillery.
Siege artillery: Large-caliber artillery with limited mobility and an indirect firing trajectory, used to bombard targets at long distances.
Large-calibre artillery.
Field artillery: Mobile weapons used to support armies in the field. Subcategories include:
Infantry support guns: Directly support infantry units.
Mountain guns: Lightweight guns that can be disassembled and transported through difficult terrain.
Field guns: Capable of long-range direct fires.
Howitzers: Capable of high-angle fire, they are most often employed for indirect-fire.
Gun-howitzers: Capable of high or low-angle fire with a longer barrel.
Mortars: Typically muzzle-loaded, short-barreled, high-trajectory weapons designed primarily for an indirect-fire role.
Gun-mortars: Typically breech-loaded, capable of high or low-angle fire with a longer barrel.
Tank guns: Large-caliber guns mounted on tanks to provide mobile direct fire.
Anti-tank artillery: Guns, usually mobile, designed primarily for direct fire to destroy armored fighting vehicles with heavy armor.
Anti-tank gun: Guns designed for direct fire to destroy tanks and other armored fighting vehicles.
Anti-aircraft artillery: Guns, usually mobile, designed for attacking aircraft by land and/or at sea. Some guns were suitable for the dual roles of anti-aircraft and anti-tank warfare.
Rocket artillery: Launches rockets or missiles, instead of shot or shell.
Railway gun: Large-caliber weapons that are mounted on, transported by and fired from specially-designed railway wagons.
Naval artillery: Guns mounted on warships to be used either against other naval vessels or to bombard coastal targets in support of ground forces. The crowning achievement of naval artillery was the battleship, but the advent of air power and missiles has rendered this type of artillery largely obsolete. Naval guns are typically longer-barreled, low-trajectory, high-velocity weapons designed primarily for a direct-fire role.
Coastal artillery: Fixed-position weapons dedicated to defense of a particular location, usually a coast (for example, the Atlantic Wall in World War II) or harbor. Not needing to be mobile, coastal artillery used to be much larger than equivalent field artillery pieces, giving them longer range and more destructive power. Modern coastal artillery (for example, Russia's "Bereg" system) is often self-propelled, (allowing it to avoid counter-battery fire) and fully integrated, meaning that each battery has all of the support systems that it requires (maintenance, targeting radar, etc.) organic to its unit.
Aircraft artillery: Large-caliber guns mounted on attack aircraft, this is typically found on slow-flying gunships.
Nuclear artillery: Artillery which fires nuclear shells.
Modern field artillery can also be split into two other subcategories: towed and self-propelled. As the name suggests, towed artillery has a prime mover, usually an artillery tractor or truck, to move the piece, crew, and ammunition around. Towed artillery is in some cases equipped with an auxiliary power unit (APU) for small displacements. Self-propelled artillery is permanently mounted on a carriage or vehicle with room for the crew and ammunition and is thus capable of moving quickly from one firing position to another, both to support the fluid nature of modern combat and to avoid counter-battery fire. It includes mortar carrier vehicles, many of which allow the mortar to be removed from the vehicle and used dismounted, potentially in terrain the vehicle cannot navigate, or in order to avoid detection.
Organizational types
At the beginning of the modern artillery period, in the late 19th century, many armies had three main types of artillery; in some cases they were sub-branches within the artillery branch, in others separate branches or corps. There were also other types, excluding the armament fitted to warships:
Horse artillery, first formed as regular units in the late 18th century, with the role of supporting cavalry, they were distinguished by the entire crew being mounted.
Field or "foot" artillery, the main artillery arm of the field army, using either guns, howitzers, or mortars. In World War II this branch again started using rockets and later surface to surface missiles.
Fortress or garrison artillery, operated a nation's fixed defences using guns, howitzers or mortars, either on land or coastal frontiers. Some had deployable elements to provide heavy artillery to the field army. In some nations coast defence artillery was a naval responsibility.
Mountain artillery, treated by a few nations as a separate branch; in others it was a speciality within another artillery branch. They used light guns or howitzers, usually designed for pack-animal transport and easily broken down into small, easily handled loads.
Naval artillery, carried as pack artillery on some warships and used and manhandled by naval (or marine) landing parties. At times, part of a ship's armament would be unshipped and mated to makeshift carriages and limbers for actions ashore, for example during the Second Boer War; during the First World War the guns from the stricken SMS Königsberg formed the main artillery strength of the German forces in East Africa.
After World War I many nations merged these different artillery branches, in some cases keeping some as sub-branches. Naval artillery disappeared, apart from that belonging to marines. However, two new branches of artillery emerged during that war and its aftermath; both used specialised guns (and a few rockets), employed direct rather than indirect fire, and in the 1950s and 1960s started to make extensive use of missiles:
Anti-tank artillery, organised under various arrangements, but typically either field artillery or a specialist branch, with additional elements integral to infantry and other units. However, in most armies field and anti-aircraft artillery also had at least a secondary anti-tank role. After World War II, anti-tank work in Western armies became mostly the responsibility of infantry and armoured branches and ceased to be an artillery matter, with some exceptions.
Anti-aircraft artillery, under various organisational arrangements including being part of artillery, a separate corps, even a separate service or being split between army for the field and air force for home defence. In some cases infantry and the new armoured corps also operated their own integral light anti-aircraft artillery. Home defence anti-aircraft artillery often used fixed as well as mobile mountings. Some anti-aircraft guns could also be used as field or anti-tank artillery, providing they had suitable sights.
However, the general switch by artillery to indirect fire before and during World War I led to a reaction in some armies. The result was accompanying or infantry guns. These were usually small, short-range guns that could be easily manhandled and were used mostly for direct fire, though some could use indirect fire. Some were operated by the artillery branch but under command of the supported unit. In World War II they were joined by self-propelled assault guns, although other armies adopted infantry or close-support tanks in armoured-branch units for the same purpose; subsequently tanks generally took on the accompanying role.
Equipment types
The three main types of artillery "gun" are guns, howitzers, and mortars. During the 20th century, guns and howitzers steadily merged in artillery use, making the distinction between the terms somewhat meaningless. By the end of the 20th century, true guns with calibers larger than about 60 mm had become very rare in artillery use, the main users being tanks, ships, and a few residual anti-aircraft and coastal guns. The term "cannon" is a United States generic term that includes guns, howitzers, and mortars; it is not used in other English-speaking armies.
The traditional definitions differentiated between guns and howitzers in terms of maximum elevation (well less than 45° as opposed to close to or greater than 45°), number of charges (one or more than one charge), and having higher or lower muzzle velocity, sometimes indicated by barrel length. These three criteria give eight possible combinations, of which guns and howitzers are but two. However, modern "howitzers" have higher velocities and longer barrels than the equivalent "guns" of the first half of the 20th century.
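The eight combinations can be enumerated mechanically, as in the sketch below; the phrasing of each value simply paraphrases the three criteria above.

```python
from itertools import product

# Enumerate the eight combinations of the three traditional criteria.
# Classic 'guns' and 'howitzers' are only two of the eight.

criteria = [
    ["elevation well under 45 deg", "elevation near or above 45 deg"],
    ["single charge", "multiple charges"],
    ["high muzzle velocity", "low muzzle velocity"],
]

for combo in product(*criteria):
    print(" / ".join(combo))  # prints all 8 combinations
```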
True guns are characterized by long range, having a maximum elevation significantly less than 45°, a high muzzle velocity and hence a relatively long barrel, smooth bore (no rifling) and a single charge. The latter often led to fixed ammunition where the projectile is locked to the cartridge case. There is no generally accepted minimum muzzle velocity or barrel length associated with a gun.
Howitzers can fire at maximum elevations at least close to 45°; elevations up to about 70° are normal for modern howitzers. Howitzers also have a choice of charges, meaning that the same elevation angle of fire will achieve a different range depending on the charge used. They have rifled bores, lower muzzle velocities and shorter barrels than equivalent guns. All this means they can deliver fire with a steep angle of descent. Because of their multi-charge capability, their ammunition is mostly separate loading (the projectile and propellant are loaded separately).
That leaves six combinations of the three criteria, some of which have been termed gun-howitzers. The term was first used in the 1930s, when howitzers with relatively high maximum muzzle velocities were introduced, but it never became widely accepted, most armies electing to widen the definition of "gun" or "howitzer". By the 1960s, most equipment had maximum elevations up to about 70°, was multi-charge, and had quite high maximum muzzle velocities and relatively long barrels.
Mortars are simpler. The modern mortar originated in World War I and there were several patterns. After that war, most mortars settled on the Stokes pattern, characterized by a short barrel, smooth bore, low muzzle velocity, elevation angle of firing generally greater than 45°, and a very simple and light mounting using a "baseplate" on the ground. The projectile with its integral propelling charge was dropped down the barrel from the muzzle to hit a fixed firing pin. Since that time, a few mortars have become rifled and adopted breech loading.
There are other recognized typifying characteristics for artillery. One such characteristic is the type of obturation used to seal the chamber and prevent gases escaping through the breech. This may use a metal cartridge case that also holds the propelling charge, a configuration called "QF" or "quickfiring" by some nations. The alternative does not use a metal cartridge case, the propellant being merely bagged or in combustible cases with the breech itself providing all the sealing. This is called "BL" or "breech loading" by some nations.
A second characteristic is the form of propulsion. Modern equipment can either be towed or self-propelled (SP). A towed gun fires from the ground and any inherent protection is limited to a gun shield. Towing by horse teams lasted throughout World War II in some armies, but others were fully mechanized with wheeled or tracked gun towing vehicles by the outbreak of that war. The size of a towing vehicle depends on the weight of the equipment and the amount of ammunition it has to carry.
A variation of towed is portee, where the vehicle carries the gun which is dismounted for firing. Mortars are often carried this way. A mortar is sometimes carried in an armored vehicle and can either fire from it or be dismounted to fire from the ground. Since the early 1960s it has been possible to carry lighter towed guns and most mortars by helicopter. Even before that, they were parachuted or landed by glider from the time of the first airborne trials in the USSR in the 1930s.
In SP equipment, the gun is an integral part of the vehicle that carries it. SPs first appeared during World War I, but did not really develop until World War II. They are mostly tracked vehicles, but wheeled SPs started to appear in the 1970s. Some SPs have no armor and carry few or no other weapons and ammunition. Armored SPs usually carry a useful ammunition load. Early armored SPs were mostly of a "casemate" configuration, in essence an open-topped armored box offering only limited traverse. However, most modern armored SPs have a fully enclosed armored turret, usually giving full traverse for the gun. Many SPs cannot fire without deploying stabilizers or spades, sometimes hydraulic. A few SPs are designed so that the recoil forces of the gun are transferred directly onto the ground through a baseplate. A few towed guns have been given limited self-propulsion by means of an auxiliary engine.
Two other forms of tactical propulsion were used in the first half of the 20th century: railways, and transporting the equipment by road as two or three separate loads, with disassembly and re-assembly at the beginning and end of the journey. Railway artillery took two forms: railway mountings for heavy and super-heavy guns and howitzers, and armored trains as "fighting vehicles" armed with light artillery in a direct fire role. Disassembled transport was also used with heavy and super-heavy weapons and lasted into the 1950s.
Caliber categories
A third form of artillery typing is to classify it as "light", "medium", "heavy" and various other terms. It appears to have been introduced in World War I, which spawned a very wide array of artillery in all sorts of sizes, so a simple categorical system was needed. Some armies defined these categories by bands of calibers. Different bands were used for different types of weapons—field guns, mortars, anti-aircraft guns and coastal guns.
Modern operations
Artillery is used in a variety of roles depending on its type and caliber. The general role of artillery is to provide fire support—"the application of fire, coordinated with the manoeuvre of forces to destroy, neutralize or suppress the enemy". This NATO definition makes artillery a supporting arm, although not all NATO armies agree with this logic. The terms destroy, neutralize and suppress are NATO's.
Unlike rockets, guns (or howitzers as some armies still call them) and mortars are suitable for delivering close supporting fire. However, they are all suitable for providing deep supporting fire although the limited range of many mortars tends to exclude them from the role. Their control arrangements and limited range also mean that mortars are most suited to direct supporting fire. Guns are used either for this or general supporting fire while rockets are mostly used for the latter. However, lighter rockets may be used for direct fire support. These rules of thumb apply to NATO armies.
Modern mortars, because of their lighter weight and simpler, more transportable design, are usually an integral part of infantry and, in some armies, armor units. This means they generally do not have to concentrate their fire so their shorter range is not a disadvantage. Some armies also consider infantry operated mortars to be more responsive than artillery, but this is a function of the control arrangements and not the case in all armies. However, mortars have always been used by artillery units and remain with them in many armies, including a few in NATO.
In NATO armies artillery is usually assigned a tactical mission that establishes its relationship and responsibilities to the formation or units it is assigned to. Not all NATO nations use these terms, and other terms are used outside NATO. The standard terms are: direct support, general support, general support reinforcing and reinforcing. These tactical missions are in the context of the command authority: operational command, operational control, tactical command or tactical control.
In NATO, direct support generally means that the directly supporting artillery unit provides observers and liaison to the manoeuvre troops being supported; typically, an artillery battalion or equivalent is assigned to a brigade and its batteries to the brigade's battalions. However, some armies achieve this by placing the assigned artillery units under command of the directly supported formation. Nevertheless, the batteries' fire can be concentrated onto a single target, as can the fire of units in range and with the other tactical missions.
Application of fire
There are several dimensions to this subject. The first is the notion that fire may be against an opportunity target or may be arranged; if the latter, it may be either on-call or scheduled. Arranged targets may be part of a fire plan. Fire may be either observed or unobserved; if observed, it may be adjusted, whereas unobserved fire has to be predicted. Observation of adjusted fire may be directly by a forward observer or indirectly via some other target acquisition system.
NATO also recognises several different types of fire support for tactical purposes:
Counterbattery fire: delivered for the purpose of destroying or neutralizing the enemy's fire support system.
Counterpreparation fire: intensive prearranged fire delivered when the imminence of the enemy attack is discovered.
Covering fire: used to protect troops when they are within range of enemy small arms.
Defensive fire: delivered by supporting units to assist and protect a unit engaged in a defensive action.
Final protective fire: an immediately available prearranged barrier of fire designed to impede enemy movement across defensive lines or areas.
Harassing fire: a random number of shells are fired at random intervals, without any pattern to it that the enemy can predict. This process is designed to hinder enemy forces' movement, and, by the constantly imposed stress, threat of losses and inability of enemy forces to relax or sleep, lowers their morale.
Interdiction fire: placed on an area or point to prevent the enemy from using the area or point.
Preparation fire: delivered before an attack to weaken the enemy position.
These purposes have existed for most of the 20th century, although their definitions have evolved and will continue to do so; the lack of suppression in the counterbattery definition is an omission. Broadly they can be defined as either:
Deep supporting fire: directed at objectives not in the immediate vicinity of own force, for neutralizing or destroying enemy reserves and weapons, and interfering with enemy command, supply, communications and observation; or
Close supporting fire: placed on enemy troops, weapons or positions which, because of their proximity present the most immediate and serious threat to the supported unit.
Two other NATO terms also need definition:
Neutralization fire: delivered to render a target temporarily ineffective or unusable; and
Suppression fire: fire that degrades the performance of a target below the level needed to fulfill its mission. Suppression is usually only effective for the duration of the fire.
The tactical purposes also include various "mission verbs", a rapidly expanding subject with the modern concept of "effects based operations".

Targeting is the process of selecting targets and matching the appropriate response to them, taking account of operational requirements and capabilities. It requires consideration of the type of fire support required and the extent of coordination with the supported arm. It involves decisions about the following (illustrated in the sketch after this list):
what effects are required, for example, neutralization or suppression;
the proximity of and risks to own troops or non-combatants;
what types of munitions, including their fuzing, are to be used and in what quantities;
when the targets should be attacked and possibly for how long;
what methods should be used, for example, converged or distributed, whether adjustment is permissible or surprise essential, and the need for special procedures such as precision or danger close;
how many fire units are needed and which ones they should be from those that are available (in range, with the required munitions type and quantity, not allotted to another target, and with the most suitable line of fire if there is a risk to own troops or non-combatants).
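One way to picture these decisions is as a single record handed to tactical fire control. The following Python sketch is a loose illustration of the list above; every field name and example value is invented, not drawn from any actual fire control system:

from dataclasses import dataclass, field

@dataclass
class TargetingDecision:
    effect: str                # e.g. "neutralization" or "suppression"
    danger_close: bool         # proximity of, and risk to, own troops or non-combatants
    munitions: dict = field(default_factory=dict)   # munition/fuzing -> quantity
    attack_time: str = "on-call"                    # when, and possibly for how long
    method: str = "converged"                       # converged or distributed, adjusted or surprise
    fire_units: list = field(default_factory=list)  # chosen from those available

decision = TargetingDecision(
    effect="suppression",
    danger_close=False,
    munitions={"HE, point detonating": 96, "smoke": 12},
    fire_units=["A Battery", "C Battery"],
)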
The targeting process is the key aspect of tactical fire control. Depending on the circumstances and national procedures it may all be undertaken in one place or may be distributed. In armies practicing control from the front, most of the process may be undertaken by a forward observer or other target acquirer. This is particularly the case for a smaller target requiring only a few fire units. The extent to which the process is formal or informal and makes use of computer-based systems, documented norms or experience and judgement also varies widely between armies and with other circumstances.
Surprise may be essential or irrelevant. It depends on what effects are required and whether or not the target is likely to move or quickly improve its protective posture. During World War II UK researchers concluded that for impact-fuzed munitions the relative risks were as follows:
men standing – 1
men lying – 1/3
men firing from trenches – 1/15–1/50
men crouching in trenches – 1/25–1/100
Airburst munitions significantly increase the relative risk for lying men, etc. Historically most casualties occur in the first 10–15 seconds of fire, i.e. the time needed to react and improve protective posture; however, this is less relevant if airburst is used.
There are several ways of making best use of this brief window of maximum vulnerability:
ordering the guns to fire together, either by executive order or by a "fire at" time. The disadvantage is that if the fire is concentrated from many dispersed fire units then there will be different times of flight and the first rounds will be spread in time. To some extent a large concentration offsets the problem because it may mean that only one round is required from each gun and most of these could arrive in the 15 second window.
burst fire, a rate of fire to deliver three rounds from each gun within 10 or 15 seconds; this reduces the number of guns and hence fire units needed, which means they may be less dispersed and have less variation in their times of flight. Smaller caliber guns, such as 105 mm, have always been able to deliver three rounds in 15 seconds; larger calibers firing fixed rounds could also do it, but it was not until the 1970s that a multi-charge 155 mm howitzer, the FH-70, first gained the capability.
multiple round simultaneous impact (MRSI), where a single weapon or multiple individual weapons fire multiple rounds at differing trajectories so that all rounds arrive on target at the same time.
time on target, where fire units fire at the TOT less their time of flight; this works well with prearranged scheduled fire but is less satisfactory for opportunity targets because it means delaying the delivery of fire by selecting a 'safe' time that all or most fire units can achieve. It can be used with both the previous two methods.
Counter-battery fire
Modern counter-battery fire developed in World War I, with the objective of defeating the enemy's artillery. Typically such fire was used to suppress enemy batteries when they were or were about to interfere with the activities of friendly forces (such as to prevent enemy defensive artillery fire against an impending attack) or to systematically destroy enemy guns. In World War I the latter required air observation. The first indirect counter-battery fire was in May 1900 by an observer in a balloon.
Enemy artillery can be detected in two ways: by direct observation of the guns from the air or by ground observers (including specialist reconnaissance), or from their firing signatures. The latter includes radars tracking the shells in flight to determine their point of origin; sound ranging, which detects guns firing and resects their position from pairs of microphones; and cross-observation of gun flashes by human observers or opto-electronic devices, although the widespread adoption of 'flashless' propellant limited the effectiveness of the latter.
Once hostile batteries have been detected they may be engaged immediately by friendly artillery or later at an optimum time, depending on the tactical situation and the counter-battery policy. Air strike is another option. In some situations the task is to locate all active enemy batteries for attack using counter-battery fire at the appropriate moment in accordance with a plan developed by artillery intelligence staff. In other situations counter-battery fire may occur whenever a battery is located with sufficient accuracy.
Modern counter-battery target acquisition uses unmanned aircraft, counter-battery radar, ground reconnaissance and sound-ranging. Counter-battery fire may be adjusted by some of the systems, for example the operator of an unmanned aircraft can 'follow' a battery if it moves. Defensive measures by batteries include frequently changing position or constructing defensive earthworks, the tunnels used by North Korea being an extreme example. Counter-measures include air defence against aircraft and attacking counter-battery radars physically and electronically.
Field artillery team
'Field artillery team' is a US term, and the following description and terminology apply to the US; other armies are broadly similar but differ in significant details. Modern field artillery (post–World War I) has three distinct parts: the forward observer (FO), the fire direction center (FDC) and the guns themselves. The forward observer observes the target using tools such as binoculars, laser rangefinders and designators, and calls back fire missions on his radio, or relays the data through a portable computer via an encrypted digital radio connection protected from jamming by computerized frequency hopping. A lesser-known part of the team is the field artillery survey (FAS) team, which sets up the "gun line" for the cannons. Today most artillery battalions use an "aiming circle", which allows for faster setup and more mobility. FAS teams are still used for checks-and-balances purposes, and if a gun battery has issues with the aiming circle, a FAS team will set up the gun line for them.
The FO can communicate directly with the battery FDC, of which there is one per battery of 4–8 guns. Otherwise the several FOs communicate with a higher FDC, such as at battalion level, and the higher FDC prioritizes the targets and allocates fires to individual batteries as needed to engage the targets that are spotted by the FOs or to perform preplanned fires.
The battery FDC computes firing data for the guns—ammunition to be used, powder charge, fuse settings, the direction to the target, the quadrant elevation to be fired at to reach the target, which gun will fire any rounds needed for adjusting on the target, and the number of rounds to be fired on the target by each gun once the target has been accurately located. Traditionally this data is relayed via radio or wire communications as a warning order to the guns, followed by orders specifying the type of ammunition and fuse setting, direction, and the elevation needed to reach the target, and the method of adjustment or orders for fire for effect (FFE). However, in more advanced artillery units, this data is relayed through a digital radio link.
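The geometric core of the FDC's computation can be sketched in a few lines of Python. Everything below is illustrative only: the grid coordinates are arbitrary, and the one-charge firing table is invented and resembles no particular weapon:

import math

def direction_and_range(gun_xy, target_xy):
    """Grid bearing (mils, clockwise from grid north) and range (m)."""
    dx, dy = target_xy[0] - gun_xy[0], target_xy[1] - gun_xy[1]
    rng = math.hypot(dx, dy)
    mils = (math.degrees(math.atan2(dx, dy)) % 360.0) * 6400.0 / 360.0
    return mils, rng

# Invented firing table for one charge: range (m) -> quadrant elevation (mils).
FIRING_TABLE = [(4000, 284), (5000, 364), (6000, 451), (7000, 549)]

def quadrant_elevation(rng):
    for (r1, qe1), (r2, qe2) in zip(FIRING_TABLE, FIRING_TABLE[1:]):
        if r1 <= rng <= r2:  # linear interpolation between table entries
            return qe1 + (qe2 - qe1) * (rng - r1) / (r2 - r1)
    raise ValueError("range outside this charge's firing table")

mils, rng = direction_and_range((61200, 41500), (64800, 46300))
print(f"direction {mils:.0f} mils, range {rng:.0f} m, QE {quadrant_elevation(rng):.0f} mils")

A real FDC also applies corrections for meteorology, propellant temperature, projectile weight and registration data; this sketch covers only the chart-and-table step.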
Other parts of the field artillery team include meteorological analysis to determine the temperature, humidity and pressure of the air and the wind direction and speed at different altitudes. Radar is also used both for determining the location of enemy artillery and mortar batteries and for determining the precise actual strike points of rounds fired by a battery; comparing those points with the expected strike points allows a registration to be computed, so that future rounds can be fired with much greater accuracy.
Time on target
A technique called time on target (TOT) was developed by the British Army in North Africa at the end of 1941 and early 1942, particularly for counter-battery fire and other concentrations; it proved very popular. It relied on BBC time signals to enable officers to synchronize their watches to the second, because this avoided the need to use military radio networks and the possibility of losing surprise, as well as the need for field telephone networks in the desert. With this technique the time of flight from each fire unit (battery or troop) to the target is taken from the range or firing tables, or the computer, and each engaging fire unit subtracts its time of flight from the TOT to determine its time to fire. An executive order to fire is given to all guns in the fire unit at the correct moment. When each fire unit fires its rounds at its individual firing time, all the opening rounds reach the target area almost simultaneously. This is especially effective when combined with techniques that allow fires for effect to be made without preliminary adjusting fires.
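The arithmetic of TOT is simple enough to sketch. In the hypothetical Python fragment below, each fire unit's firing time is the common TOT minus its own time of flight; the unit names and times are invented for the example:

def firing_schedule(tot_seconds, times_of_flight):
    """times_of_flight maps fire-unit name -> time of flight (s) from the
    firing tables. Returns each unit's firing time so rounds land together."""
    schedule = {}
    for unit, tof in times_of_flight.items():
        if tof > tot_seconds:
            raise ValueError(f"{unit} cannot reach the target by the TOT")
        schedule[unit] = tot_seconds - tof
    return schedule

# A TOT 120 s from now, with three dispersed fire units:
print(firing_schedule(120.0, {"A Troop": 28.5, "B Troop": 34.1, "C Troop": 22.9}))
# Each unit fires at its own time; all opening rounds arrive at t = 120 s.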
Multiple round simultaneous impact
Multiple round simultaneous impact (MRSI) is a modern version of the earlier time on target concept. MRSI is when a single gun fires multiple shells so all arrive at the same target simultaneously. This is possible because there is more than one trajectory for a round to fly to any given target. Typically one is below 45 degrees from horizontal and the other is above it, and by using different sized propellant charges with each shell, it is possible to utilize more than two trajectories. Because the higher trajectories cause the shells to arc higher into the air, they take longer to reach the target. If shells are fired on higher trajectories for initial volleys (starting with the shell with the most propellant and working down) and later volleys are fired on the lower trajectories, with the correct timing the shells will all arrive at the same target simultaneously. This is useful because many more shells can land on the target with no warning. With traditional methods of firing, the target area may have time (however long it takes to reload and re-fire the guns) to take cover between volleys. However, guns capable of burst fire can deliver multiple rounds in a few seconds if they use the same firing data for each, and if guns in more than one location are firing on one target they can use Time on Target procedures so that all their shells arrive at the same time and target.
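The principle can be illustrated with drag-free (vacuum) ballistics, which real fire control computers replace with full ballistic models, so this Python sketch is only a schematic of the idea. For each charge (muzzle velocity) there is a low and a high trajectory to the same target, and firing the longest-flying solution first lets later, flatter rounds catch up so that all arrive together:

import math

G = 9.81  # gravitational acceleration, m/s^2

def mrsi_solutions(target_range_m, muzzle_velocities):
    """Low and high quadrant elevations (deg) and times of flight (s)
    reaching the target, per charge, in a vacuum."""
    solutions = []
    for v in muzzle_velocities:
        s = G * target_range_m / v**2
        if s > 1.0:
            continue  # this charge cannot reach the target
        low = 0.5 * math.asin(s)
        for theta in (low, math.pi / 2 - low):  # low and high trajectories
            tof = 2 * v * math.sin(theta) / G
            solutions.append((v, math.degrees(theta), tof))
    return solutions

def mrsi_schedule(target_range_m, muzzle_velocities, max_rounds=4):
    """Fire the longest time-of-flight solutions first so that all
    rounds arrive simultaneously."""
    sols = sorted(mrsi_solutions(target_range_m, muzzle_velocities),
                  key=lambda x: -x[2])[:max_rounds]
    t_first = sols[0][2]
    return [(round(t_first - tof, 1), v, round(elev, 1))
            for v, elev, tof in sols]

# 10 km target, three invented charges; tuples are (fire at t s, m/s, elevation deg).
for row in mrsi_schedule(10_000, [340, 450, 560]):
    print(row)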
MRSI has a few prerequisites. The first is guns with a high rate of fire. The second is the ability to use different-sized propellant charges. Third is a fire control computer that can compute MRSI volleys and produce the firing data, sent to each gun and then presented to the gun commander, in the correct order. The number of rounds that can be delivered in MRSI depends primarily on the range to the target and the rate of fire. To allow the most shells to reach the target, the target has to be in range of the lowest propellant charge.
Examples of guns with a rate of fire that makes them suitable for MRSI include the UK's AS-90, South Africa's Denel G6-52 (which can land six rounds simultaneously at targets at least away), Germany's Panzerhaubitze 2000 (which can land five rounds simultaneously at targets at least away), Slovakia's 155 mm SpGH ZUZANA model 2000, and South Korea's K9 Thunder.
The Archer project (developed by BAE Systems Bofors in Sweden) is a 155 mm howitzer on a wheeled chassis which is claimed to be able to deliver up to six shells on target simultaneously from the same gun. The 120 mm twin-barrel AMOS mortar system, jointly developed by Hägglunds (Sweden) and Patria (Finland), is capable of 7 + 7 shells MRSI. The United States Crusader program (now cancelled) was slated to have MRSI capability. It is unclear how many fire control computers have the necessary capabilities.
Two-round MRSI firings were a popular artillery demonstration in the 1960s, where well trained detachments could show off their skills for spectators.
Air burst
The destructiveness of artillery bombardments can be enhanced when some or all of the shells are set for airburst, meaning that they explode in the air above the target instead of upon impact. This can be accomplished either through time fuzes or proximity fuzes. Time fuzes use a precise timer to detonate the shell after a preset delay. This technique is tricky and slight variations in the functioning of the fuze can cause it to explode too high and be ineffective, or to strike the ground instead of exploding above it. Since December 1944 (Battle of the Bulge), proximity fuzed artillery shells have been available that take the guesswork out of this process. These employ a miniature, low-powered radar transmitter in the fuze to detect the ground and detonate the shell at a predetermined height above it. The return of the weak radar signal completes an electrical circuit in the fuze which explodes the shell. The proximity fuze itself was developed by the British to increase the effectiveness of anti-aircraft warfare.
This is a very effective tactic against infantry and light vehicles, because it scatters the fragmentation of the shell over a larger area and prevents it from being blocked by terrain or entrenchments that do not include some form of robust overhead cover. Combined with TOT or MRSI tactics that give no warning of the incoming rounds, these rounds are especially devastating because many enemy soldiers are likely to be caught in the open; even more so if the attack is launched against an assembly area or troops moving in the open rather than a unit in an entrenched tactical position.
Use in monuments
Numerous war memorials around the world incorporate an artillery piece that was used in the war or battle commemorated.
See also
List of artillery
Advanced Gun System
Artillery museums
Barrage (artillery)
Beehive anti-personnel round
Coilgun
Combustion light-gas gun
Cordite
Fuze
Gun laying
Light-gas gun
Paris Gun
Railgun
Shoot-and-scoot
Shrapnel shell
Suppressive fire
Improvised artillery in the Syrian Civil War
References
Notes
Bibliography
Further reading
External links
Naval Weapons of the World
Cannon Artillery – The Voice of Freedom's Thunder
Modern Artillery
What sort of forensic information can be derived from the analysis of shell fragments
Evans, Nigel F. (2001–2007) "British Artillery in World War 2"
Artillery Tactics and Combat during the Napoleonic Wars
Artillery of Napoleon's Imperial Guard
French artillery and its ammunition. 14th to the end of the 19th century
Historic films showing artillery in World War I at europeanfilmgateway.eu
Video: Inside shrieking shrapnel. Hear the great sound of shrapnel's – Finnish field artillery fire video year 2013
Video: Forensic and archaeological interpretation of artillery shell fragments and shrapnel
|
https://en.wikipedia.org/wiki/Ant
|
Ants are eusocial insects of the family Formicidae and, along with the related wasps and bees, belong to the order Hymenoptera. Ants evolved from vespoid wasp ancestors in the Cretaceous period. More than 13,800 of an estimated total of 22,000 species have been classified. They are easily identified by their geniculate (elbowed) antennae and the distinctive node-like structure that forms their slender waists.
Ants form colonies that range in size from a few dozen predatory individuals living in small natural cavities to highly organised colonies that may occupy large territories and consist of millions of individuals. Larger colonies consist of various castes of sterile, wingless females, most of which are workers (ergates), as well as soldiers (dinergates) and other specialised groups. Nearly all ant colonies also have some fertile males called "drones" and one or more fertile females called "queens" (gynes). The colonies are described as superorganisms because the ants appear to operate as a unified entity, collectively working together to support the colony.
Ants have colonised almost every landmass on Earth. The only places lacking indigenous ants are Antarctica and a few remote or inhospitable islands. Ants thrive in moist tropical ecosystems and may exceed the combined biomass of wild birds and mammals. Their success in so many environments has been attributed to their social organisation and their ability to modify habitats, tap resources, and defend themselves. Their long co-evolution with other species has led to mimetic, commensal, parasitic, and mutualistic relationships.
Ant societies have division of labour, communication between individuals, and an ability to solve complex problems. These parallels with human societies have long been an inspiration and subject of study. Many human cultures make use of ants in cuisine, medication, and rites. Some species are valued in their role as biological pest control agents. Their ability to exploit resources may bring ants into conflict with humans, however, as they can damage crops and invade buildings. Some species, such as the red imported fire ant (Solenopsis invicta) of South America, are regarded as invasive species in other parts of the world, establishing themselves in areas where they have been introduced accidentally.
Etymology
The word ant and the archaic word emmet are derived from the Middle English amete or emete, which comes from the Old English ǣmette; these are all related to Low Saxon and dialectal forms (Old Saxon emeta) and to the German Ameise (Old High German āmeiza). All of these words come from West Germanic *ēmaitijǭ, and the original meaning of the word was "the biter" (from Proto-Germanic *ē-, "off, away" + *mait-, "cut").
The family name Formicidae is derived from the Latin formica ("ant"), from which the words in other Romance languages, such as the Portuguese formiga, Italian formica, Spanish hormiga, Romanian furnică, and French fourmi, are derived. It has been hypothesised that a Proto-Indo-European word *morwi- was the root for Sanskrit vamrah, Greek μύρμηξ mýrmēx, Old Church Slavonic mraviji, Old Irish moirb, Old Norse maurr, Dutch mier, Swedish myra, Danish myre, Middle Dutch miere, and Crimean Gothic miera.
Taxonomy and evolution
The family Formicidae belongs to the order Hymenoptera, which also includes sawflies, bees, and wasps. Ants evolved from a lineage within the stinging wasps, and a 2013 study suggests that they are a sister group of the Apoidea. In 1966, E. O. Wilson and his colleagues identified the fossil remains of an ant (Sphecomyrma) that lived in the Cretaceous period. The specimen, trapped in amber dating back to around 92 million years ago, has features found in some wasps, but not found in modern ants. The oldest fossils of ants date to the mid-Cretaceous, around 100 million years ago, which belong to extinct stem-groups such as the Haidomyrmecinae, Sphecomyrminae and Zigrasimeciinae, with modern ant subfamilies appearing towards the end of the Cretaceous around 80–70 million years ago. Ants diversified and assumed ecological dominance around 60 million years ago. Some groups, such as the Leptanillinae and Martialinae, are suggested to have diversified from early primitive ants that were likely to have been predators underneath the surface of the soil.
During the Cretaceous period, a few species of primitive ants ranged widely on the Laurasian supercontinent (the Northern Hemisphere). Their representation in the fossil record is poor, in comparison to the populations of other insects, representing only about 1% of fossil evidence of insects in the era. Ants became dominant after adaptive radiation at the beginning of the Paleogene period. By the Oligocene and Miocene, ants had come to represent 20–40% of all insects found in major fossil deposits. Of the genera that lived in the Eocene epoch, around one in 10 survive to the present. Genera surviving today comprise 56% of the genera in Baltic amber fossils (early Oligocene), and 92% of the genera in Dominican amber fossils (apparently early Miocene).
Termites live in colonies and are sometimes called "white ants", but termites are only distantly related to ants. They are the sub-order Isoptera, and together with cockroaches, they form the order Blattodea. Blattodeans are related to mantids, crickets, and other winged insects that do not undergo full metamorphosis. Like ants, termites are eusocial, with sterile workers, but they differ greatly in the genetics of reproduction. The similarity of their social structure to that of ants is attributed to convergent evolution. Velvet ants look like large ants, but are wingless female wasps.
Distribution and diversity
Ants have a cosmopolitan distribution. They are found on all continents except Antarctica, and only a few large islands, such as Greenland, Iceland, parts of Polynesia and the Hawaiian Islands, lack native ant species. Ants occupy a wide range of ecological niches and exploit many different food resources as direct or indirect herbivores, predators and scavengers. Most ant species are omnivorous generalists, but a few are specialist feeders. There is considerable variation in ant abundance across habitats, peaking in the moist tropics at nearly six times that found in less suitable habitats. Their ecological dominance has been examined primarily using estimates of their biomass: the myrmecologist E. O. Wilson estimated in 2009 that at any one time the total number of ants was between one and ten quadrillion (short scale) (i.e., between 10¹⁵ and 10¹⁶), and using this estimate he suggested that the total biomass of all the ants in the world was approximately equal to the total biomass of the entire human race. More careful estimates made in 2022, which take into account regional variations, put the global ant contribution at 12 megatons of dry carbon, which is about 20% of the total human contribution, but greater than that of the wild birds and mammals combined. This study also puts a conservative estimate of the number of ants at about 20 × 10¹⁵ (20 quadrillion).
Ants range in size from , the largest species being the fossil Titanomyrma giganteum, the queen of which was long with a wingspan of . Ants vary in colour; most ants are yellow to red or brown to black, but a few species are green and some tropical species have a metallic lustre. More than 13,800 species are currently known (with upper estimates of the potential existence of about 22,000; see the article List of ant genera), with the greatest diversity in the tropics. Taxonomic studies continue to resolve the classification and systematics of ants. Online databases of ant species, including AntWeb and the Hymenoptera Name Server, help to keep track of the known and newly described species. The relative ease with which ants may be sampled and studied in ecosystems has made them useful as indicator species in biodiversity studies.
Morphology
Ants are distinct in their morphology from other insects in having geniculate (elbowed) antennae, metapleural glands, and a strong constriction of their second abdominal segment into a node-like petiole. The head, mesosoma, and metasoma are the three distinct body segments (formally tagmata). The petiole forms a narrow waist between their mesosoma (thorax plus the first abdominal segment, which is fused to it) and gaster (abdomen less the abdominal segments in the petiole). The petiole may be formed by one or two nodes (the second alone, or the second and third abdominal segments). Tergosternal fusion, in which the tergite and sternite of a segment fuse together, can occur partly or fully on the second, third and fourth abdominal segments and is used in identification. Fourth abdominal tergosternal fusion was formerly used as a character that defined the poneromorph subfamilies, Ponerinae and relatives within their clade, but this is no longer considered a synapomorphic character.
Like other arthropods, ants have an exoskeleton, an external covering that provides a protective casing around the body and a point of attachment for muscles, in contrast to the internal skeletons of humans and other vertebrates. Insects do not have lungs; oxygen and other gases, such as carbon dioxide, pass through their exoskeleton via tiny valves called spiracles. Insects also lack closed blood vessels; instead, they have a long, thin, perforated tube along the top of the body (called the "dorsal aorta") that functions like a heart, and pumps haemolymph toward the head, thus driving the circulation of the internal fluids. The nervous system consists of a ventral nerve cord that runs the length of the body, with several ganglia and branches along the way reaching into the extremities of the appendages.
Head
An ant's head contains many sensory organs. Like most insects, ants have compound eyes made from numerous tiny lenses attached together. Ant eyes are good for acute movement detection, but do not offer a high resolution image. They also have three small ocelli (simple eyes) on the top of the head that detect light levels and polarization. Compared to vertebrates, ants tend to have blurrier eyesight, particularly in smaller species, and a few subterranean taxa are completely blind. However, some ants, such as Australia's bulldog ant, have excellent vision and are capable of discriminating the distance and size of objects moving nearly a meter away.
Two antennae ("feelers") are attached to the head; these organs detect chemicals, air currents, and vibrations; they also are used to transmit and receive signals through touch. The head has two strong jaws, the mandibles, used to carry food, manipulate objects, construct nests, and for defence. In some species, a small pocket (infrabuccal chamber) inside the mouth stores food, so it may be passed to other ants or their larvae.
Mesosoma
Both the legs and wings of the ant are attached to the mesosoma ("thorax"). The legs terminate in a hooked claw which allows them to hook on and climb surfaces. Only reproductive ants (queens and males) have wings. Queens shed their wings after the nuptial flight, leaving visible stubs, a distinguishing feature of queens. In a few species, wingless queens (ergatoids) and males occur.
Metasoma
The metasoma (the "abdomen") of the ant houses important internal organs, including those of the reproductive, respiratory (tracheae), and excretory systems. Workers of many species have their egg-laying structures modified into stings that are used for subduing prey and defending their nests.
Polymorphism
In the colonies of a few ant species, there are physical castes—workers in distinct size-classes, called minor, median, and major ergates. Often, the larger ants have disproportionately larger heads, and correspondingly stronger mandibles. These are known as macrergates while smaller workers are known as micrergates. Although formally known as dinergates, such individuals are sometimes called "soldier" ants because their stronger mandibles make them more effective in fighting, although they still are workers and their "duties" typically do not vary greatly from the minor or median workers. In a few species, the median workers are absent, creating a sharp divide between the minors and majors. Weaver ants, for example, have a distinct bimodal size distribution. Some other species show continuous variation in the size of workers. The smallest and largest workers in Carebara diversa show nearly a 500-fold difference in their dry weights.
Workers cannot mate; however, because of the haplodiploid sex-determination system in ants, workers of a number of species can lay unfertilised eggs that become fully fertile, haploid males. The role of workers may change with their age and in some species, such as honeypot ants, young workers are fed until their gasters are distended, and act as living food storage vessels. These food storage workers are called repletes. For instance, these replete workers develop in the North American honeypot ant Myrmecocystus mexicanus. Usually the largest workers in the colony develop into repletes; and, if repletes are removed from the colony, other workers become repletes, demonstrating the flexibility of this particular polymorphism. This polymorphism in morphology and behaviour of workers initially was thought to be determined by environmental factors such as nutrition and hormones that led to different developmental paths; however, genetic differences between worker castes have been noted in Acromyrmex sp. These polymorphisms are caused by relatively small genetic changes; differences in a single gene of Solenopsis invicta can decide whether the colony will have single or multiple queens. The Australian jack jumper ant (Myrmecia pilosula) has only a single pair of chromosomes (with the males having just one chromosome as they are haploid), the lowest number known for any animal, making it an interesting subject for studies in the genetics and developmental biology of social insects.
Genome size
Genome size is a fundamental characteristic of an organism. Ants have been found to have tiny genomes, with the evolution of genome size suggested to occur through loss and accumulation of non-coding regions, mainly transposable elements, and occasionally by whole genome duplication. This may be related to colonisation processes, but further studies are needed to verify this.
Life cycle
The life of an ant starts from an egg; if the egg is fertilised, the progeny will be a diploid female; if not, a haploid male. Ants develop by complete metamorphosis, with the larval stages passing through a pupal stage before emerging as an adult. The larva is largely immobile and is fed and cared for by workers. Food is given to the larvae by trophallaxis, a process in which an ant regurgitates liquid food held in its crop. This is also how adults share food, stored in the "social stomach". Larvae, especially in the later stages, may also be provided solid food, such as trophic eggs, pieces of prey, and seeds brought by workers.
The larvae grow through a series of four or five moults and enter the pupal stage. The pupa has the appendages free and not fused to the body as in a butterfly pupa. The differentiation into queens and workers (which are both female), and different castes of workers, is influenced in some species by the nutrition the larvae obtain. Genetic influences and the control of gene expression by the developmental environment are complex and the determination of caste continues to be a subject of research. Winged male ants, called drones (termed "aner" in old literature), emerge from pupae along with the usually winged breeding females. Some species, such as army ants, have wingless queens. Larvae and pupae need to be kept at fairly constant temperatures to ensure proper development, and so often are moved around among the various brood chambers within the colony.
A new ergate spends the first few days of its adult life caring for the queen and young. She then graduates to digging and other nest work, and later to defending the nest and foraging. These changes are sometimes fairly sudden, and define what are called temporal castes. Such age-based task specialization or polyethism has been suggested as having evolved because of the high casualties involved in foraging and defence, making these acceptable risks only for ants who are older and likely to die sooner from natural causes. In the Brazilian ant Forelius pusillus, the nest entrance is closed from the outside to protect the colony from predatory ant species at sunset each day. About one to eight workers seal the nest entrance from the outside; they have no chance of returning to the nest and are in effect sacrificed. Whether these seemingly suicidal workers are older workers has not been determined.
Ant colonies can be long-lived. The queens can live for up to 30 years, and workers live from 1 to 3 years. Males, however, are more transitory, being quite short-lived and surviving for only a few weeks. Ant queens are estimated to live 100 times as long as solitary insects of a similar size.
Ants are active all year long in the tropics; however, in cooler regions, they survive the winter in a state of dormancy known as hibernation. The forms of inactivity are varied and some temperate species have larvae going into the inactive state (diapause), while in others, the adults alone pass the winter in a state of reduced activity.
Reproduction
A wide range of reproductive strategies have been noted in ant species. Females of many species are known to be capable of reproducing asexually through thelytokous parthenogenesis. Secretions from the male accessory glands in some species can plug the female genital opening and prevent females from re-mating. Most ant species have a system in which only the queen and breeding females have the ability to mate. Contrary to popular belief, some ant nests have multiple queens, while others may exist without queens. Workers with the ability to reproduce are called "gamergates" and colonies that lack queens are then called gamergate colonies; colonies with queens are said to be queen-right.
Drones can also mate with existing queens by entering a foreign colony, such as in army ants. When the drone is initially attacked by the workers, it releases a mating pheromone. If recognized as a mate, it will be carried to the queen to mate. Males may also patrol the nest and fight others by grabbing them with their mandibles, piercing their exoskeleton and then marking them with a pheromone. The marked male is interpreted as an invader by worker ants and is killed.
Most ants are univoltine, producing a new generation each year. During the species-specific breeding period, winged females and winged males, known to entomologists as alates, leave the colony in what is called a nuptial flight. The nuptial flight usually takes place in the late spring or early summer when the weather is hot and humid. Heat makes flying easier and freshly fallen rain makes the ground softer for mated queens to dig nests. Males typically take flight before the females. Males then use visual cues to find a common mating ground, for example, a landmark such as a pine tree to which other males in the area converge. Males secrete a mating pheromone that females follow. Males will mount females in the air, but the actual mating process usually takes place on the ground. Females of some species mate with just one male but in others they may mate with as many as ten or more different males, storing the sperm in their spermathecae. In Cardiocondyla elegans, workers may transport newly emerged queens to other conspecific nests where wingless males from unrelated colonies can mate with them, a behavioural adaptation that may reduce the chances of inbreeding.
Mated females then seek a suitable place to begin a colony. There, they break off their wings using their tibial spurs and begin to lay and care for eggs. The females can selectively fertilise future eggs with the sperm stored to produce diploid workers or lay unfertilized haploid eggs to produce drones. The first workers to hatch, known as nanitics, are weaker and smaller than later workers but they begin to serve the colony immediately. They enlarge the nest, forage for food, and care for the other eggs. Species that have multiple queens may have a queen leaving the nest along with some workers to found a colony at a new site, a process akin to swarming in honeybees.
Nests, colonies, and supercolonies
The typical ant species has a colony occupying a single nest, housing one or more queens, where the brood is raised. There are, however, more than 150 species of ants in 49 genera that are known to have colonies consisting of multiple spatially separated nests. These polydomous (as opposed to monodomous) colonies have food and workers moving between the nests. Colony membership is recognised by worker ants, which determine whether another individual belongs to their own colony or not. A signature cocktail of body surface chemicals (also known as cuticular hydrocarbons or CHCs) forms the so-called colony odor which other members can recognize. Some ant species appear to be less discriminating: in the Argentine ant Linepithema humile, workers carried from a colony anywhere in the southern US and Mexico are acceptable within other colonies in the same region. Similarly, workers from colonies that have established in Europe are accepted by any other colonies within Europe but not by the colonies in the Americas. The interpretation of these observations has been debated: some have termed these large populations supercolonies, while others have termed them unicolonial.
Behaviour and ecology
Communication
Ants communicate with each other using pheromones, sounds, and touch. Since most ants live on the ground, they use the soil surface to leave pheromone trails that may be followed by other ants. In species that forage in groups, a forager that finds food marks a trail on the way back to the colony; this trail is followed by other ants, these ants then reinforce the trail when they head back with food to the colony. When the food source is exhausted, no new trails are marked by returning ants and the scent slowly dissipates. This behaviour helps ants deal with changes in their environment. For instance, when an established path to a food source is blocked by an obstacle, the foragers leave the path to explore new routes. If an ant is successful, it leaves a new trail marking the shortest route on its return. Successful trails are followed by more ants, reinforcing better routes and gradually identifying the best path.
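This reinforcement-and-evaporation dynamic also underlies the "ant colony optimisation" family of algorithms in computer science. The toy Python simulation below is not a model of any real species, and the constants are arbitrary, but it shows how depositing pheromone in proportion to path quality while all trails evaporate makes the shorter of two paths dominate:

import random

lengths = {"short": 1.0, "long": 2.0}    # relative path lengths
pheromone = {"short": 1.0, "long": 1.0}  # initial, unbiased trail strengths
EVAPORATION = 0.02

for _ in range(2000):
    # A forager picks a path with probability proportional to its pheromone.
    path = random.choices(list(pheromone), weights=list(pheromone.values()))[0]
    pheromone[path] += 1.0 / lengths[path]  # successful return reinforces the trail
    for p in pheromone:                     # the scent slowly dissipates
        pheromone[p] *= 1.0 - EVAPORATION

share = pheromone["short"] / sum(pheromone.values())
print(f"share of pheromone on the short path: {share:.2f}")  # approaches 1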
Ants use pheromones for more than just making trails. A crushed ant emits an alarm pheromone that sends nearby ants into an attack frenzy and attracts more ants from farther away. Several ant species even use "propaganda pheromones" to confuse enemy ants and make them fight among themselves. Pheromones are produced by a wide range of structures including Dufour's glands, poison glands and glands on the hindgut, pygidium, rectum, sternum, and hind tibia. Pheromones also are exchanged, mixed with food, and passed by trophallaxis, transferring information within the colony. This allows other ants to detect what task group (e.g., foraging or nest maintenance) other colony members belong to. In ant species with queen castes, when the dominant queen stops producing a specific pheromone, workers begin to raise new queens in the colony.
Some ants produce sounds by stridulation, using the gaster segments and their mandibles. Sounds may be used to communicate with colony members or with other species.
Defence
Ants attack and defend themselves by biting and, in many species, by stinging often injecting or spraying chemicals. Bullet ants (Paraponera), located in Central and South America, are considered to have the most painful sting of any insect, although it is usually not fatal to humans. This sting is given the highest rating on the Schmidt sting pain index.
The sting of jack jumper ants can be lethal for humans, and an antivenom has been developed for it. Fire ants, Solenopsis spp., are unique in having a venom sac containing piperidine alkaloids. Their stings are painful and can be dangerous to hypersensitive people. Formicine ants secrete a poison from their glands, made mainly of formic acid.
Trap-jaw ants of the genus Odontomachus are equipped with mandibles called trap-jaws, which snap shut faster than any other predatory appendages within the animal kingdom. One study of Odontomachus bauri recorded peak speeds of between , with the jaws closing within 130 microseconds on average. The ants were also observed to use their jaws as a catapult to eject intruders or fling themselves backward to escape a threat. Before striking, the ant opens its mandibles extremely widely and locks them in this position by an internal mechanism. Energy is stored in a thick band of muscle and explosively released when triggered by the stimulation of sensory organs resembling hairs on the inside of the mandibles. The mandibles also permit slow and fine movements for other tasks. Trap-jaws also are seen in other ponerines such as Anochetus, as well as some genera in the tribe Attini, such as Daceton, Orectognathus, and Strumigenys, which are viewed as examples of convergent evolution.
A Malaysian species of ant in the Camponotus cylindricus group has enlarged mandibular glands that extend into their gaster. If combat takes a turn for the worse, a worker may perform a final act of suicidal altruism by rupturing the membrane of its gaster, causing the content of its mandibular glands to burst from the anterior region of its head, spraying a poisonous, corrosive secretion containing acetophenones and other chemicals that immobilise small insect attackers. The worker subsequently dies.
In addition to defence against predators, ants need to protect their colonies from pathogens. Secretions from the metapleural gland, unique to the ants, produce a complex range of chemicals including several with antibiotic properties. Some worker ants maintain the hygiene of the colony and their activities include undertaking or necrophoresis, the disposal of dead nest-mates. Oleic acid has been identified as the compound released from dead ants that triggers necrophoric behaviour in Atta mexicana while workers of Linepithema humile react to the absence of characteristic chemicals (dolichodial and iridomyrmecin) present on the cuticle of their living nestmates to trigger similar behaviour.
Nests may be protected from physical threats such as flooding and overheating by elaborate nest architecture. Workers of Cataulacus muticus, an arboreal species that lives in plant hollows, respond to flooding by drinking water inside the nest, and excreting it outside. Camponotus anderseni, which nests in the cavities of wood in mangrove habitats, deals with submergence under water by switching to anaerobic respiration.
Learning
Many animals can learn behaviours by imitation, but ants may be the only group apart from mammals where interactive teaching has been observed. A knowledgeable forager of Temnothorax albipennis can lead a naïve nest-mate to newly discovered food by the process of tandem running. The follower obtains knowledge through its leading tutor. The leader is acutely sensitive to the progress of the follower and slows down when the follower lags and speeds up when the follower gets too close.
Controlled experiments with colonies of Cerapachys biroi suggest that an individual may choose nest roles based on her previous experience. An entire generation of identical workers was divided into two groups whose outcome in food foraging was controlled. One group was continually rewarded with prey, while it was made certain that the other failed. As a result, members of the successful group intensified their foraging attempts while the unsuccessful group ventured out fewer and fewer times. A month later, the successful foragers continued in their role while the others had moved to specialise in brood care.
Nest construction
Complex nests are built by many ant species, but other species are nomadic and do not build permanent structures. Ants may form subterranean nests or build them on trees. These nests may be found in the ground, under stones or logs, inside logs, hollow stems, or even acorns. The materials used for construction include soil and plant matter, and ants carefully select their nest sites; Temnothorax albipennis will avoid sites with dead ants, as these may indicate the presence of pests or disease. They are quick to abandon established nests at the first sign of threats.
The army ants of South America, such as the Eciton burchellii species, and the driver ants of Africa do not build permanent nests, but instead, alternate between nomadism and stages where the workers form a temporary nest (bivouac) from their own bodies, by holding each other together.
Weaver ant (Oecophylla spp.) workers build nests in trees by attaching leaves together, first pulling them together with bridges of workers and then inducing their larvae to produce silk as they are moved along the leaf edges. Similar forms of nest construction are seen in some species of Polyrhachis.
Formica polyctena, among other ant species, constructs nests that maintain a relatively constant interior temperature that aids in the development of larvae. The ants maintain the nest temperature by choosing the location, nest materials, controlling ventilation and maintaining the heat from solar radiation, worker activity and metabolism, and in some moist nests, microbial activity in the nest materials.
Some ant species, such as those that use natural cavities, can be opportunistic and make use of the controlled micro-climate provided inside human dwellings and other artificial structures to house their colonies and nest structures.
Cultivation of food
Most ants are generalist predators, scavengers, and indirect herbivores, but a few have evolved specialised ways of obtaining nutrition. It is believed that many ant species that engage in indirect herbivory rely on specialized symbiosis with their gut microbes to upgrade the nutritional value of the food they collect and allow them to survive in nitrogen poor regions, such as rainforest canopies. Leafcutter ants (Atta and Acromyrmex) feed exclusively on a fungus that grows only within their colonies. They continually collect leaves which are taken to the colony, cut into tiny pieces and placed in fungal gardens. Ergates specialise in related tasks according to their sizes. The largest ants cut stalks, smaller workers chew the leaves and the smallest tend the fungus. Leafcutter ants are sensitive enough to recognise the reaction of the fungus to different plant material, apparently detecting chemical signals from the fungus. If a particular type of leaf is found to be toxic to the fungus, the colony will no longer collect it. The ants feed on structures produced by the fungi called gongylidia. Symbiotic bacteria on the exterior surface of the ants produce antibiotics that kill bacteria introduced into the nest that may harm the fungi.
Navigation
Foraging ants travel distances of up to from their nest and scent trails allow them to find their way back even in the dark. In hot and arid regions, day-foraging ants face death by desiccation, so the ability to find the shortest route back to the nest reduces that risk. Diurnal desert ants of the genus Cataglyphis, such as the Sahara desert ant, navigate by keeping track of direction as well as distance travelled. Distances travelled are measured using an internal pedometer that keeps count of the steps taken and also by evaluating the movement of objects in their visual field (optical flow). Directions are measured using the position of the sun. They integrate this information to find the shortest route back to their nest.
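This combination of a step-counting odometer and a celestial compass amounts to path integration (dead reckoning). A minimal Python sketch, assuming headings are measured clockwise from north, sums the outbound displacement vectors and reverses the result to obtain a direct home vector:

import math

def home_vector(legs):
    """legs: (heading_degrees, distance) for each outbound segment.
    Returns the (heading, distance) pointing straight back to the nest."""
    x = sum(d * math.sin(math.radians(h)) for h, d in legs)  # east component
    y = sum(d * math.cos(math.radians(h)) for h, d in legs)  # north component
    home_heading = math.degrees(math.atan2(-x, -y)) % 360.0
    return home_heading, math.hypot(x, y)

# A meandering outbound route of four legs (heading deg, metres):
route = [(0, 30), (90, 40), (180, 10), (45, 15)]
heading, dist = home_vector(route)
print(f"steer {heading:.0f} deg for {dist:.0f} m")  # the straight route home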
Like all ants, they can also make use of visual landmarks when available, as well as olfactory and tactile cues, to navigate. Some species of ant are able to use the Earth's magnetic field for navigation. The compound eyes of ants have specialised cells that detect polarised light from the Sun, which is used to determine direction; these polarization detectors are sensitive in the ultraviolet region of the light spectrum. In some army ant species, a group of foragers who become separated from the main column may sometimes turn back on themselves and form a circular ant mill; the workers may then run around continuously until they die of exhaustion.
Locomotion
The female worker ants do not have wings and reproductive females lose their wings after their mating flights in order to begin their colonies. Therefore, unlike their wasp ancestors, most ants travel by walking. Some species are capable of leaping. For example, Jerdon's jumping ant (Harpegnathos saltator) is able to jump by synchronising the action of its mid and hind pairs of legs. There are several species of gliding ant including Cephalotes atratus; this may be a common trait among arboreal ants with small colonies. Ants with this ability are able to control their horizontal movement so as to catch tree trunks when they fall from atop the forest canopy.
Other species of ants can form chains to bridge gaps over water, underground, or through spaces in vegetation. Some species also form floating rafts that help them survive floods. These rafts may also have a role in allowing ants to colonise islands. Polyrhachis sokolova, a species of ant found in Australian mangrove swamps, can swim and live in underwater nests. Since they lack gills, they go to trapped pockets of air in the submerged nests to breathe.
Cooperation and competition
Not all ants have the same kind of societies. The Australian bulldog ants are among the biggest and most basal of ants. Like virtually all ants, they are eusocial, but their social behaviour is poorly developed compared to other species. Each individual hunts alone, using her large eyes instead of chemical senses to find prey.
Some species attack and take over neighbouring ant colonies. Extreme specialists among these slave-raiding ants, such as the Amazon ants, are incapable of feeding themselves and need captured workers to survive. Captured workers of enslaved Temnothorax species have evolved a counter-strategy, destroying just the female pupae of the slave-making Temnothorax americanus, but sparing the males (who do not take part in slave-raiding as adults).
Ants identify kin and nestmates through their scent, which comes from hydrocarbon-laced secretions that coat their exoskeletons. If an ant is separated from its original colony, it will eventually lose the colony scent. Any ant that enters a colony without a matching scent will be attacked.
Parasitic ant species enter the colonies of host ants and establish themselves as social parasites; species such as Strumigenys xenos are entirely parasitic and do not have workers, but instead, rely on the food gathered by their Strumigenys perplexa hosts. This form of parasitism is seen across many ant genera, but the parasitic ant is usually a species that is closely related to its host. A variety of methods are employed to enter the nest of the host ant. A parasitic queen may enter the host nest before the first brood has hatched, establishing herself prior to development of a colony scent. Other species use pheromones to confuse the host ants or to trick them into carrying the parasitic queen into the nest. Some simply fight their way into the nest.
A conflict between the sexes is seen in some species of ants, with reproducers apparently competing to produce offspring that are as closely related to them as possible. The most extreme form involves the production of clonal offspring. An extreme of sexual conflict is seen in Wasmannia auropunctata, where the queens produce diploid daughters by thelytokous parthenogenesis and males produce clones by a process whereby a diploid egg loses its maternal contribution to produce haploid males who are clones of the father.
Relationships with other organisms
Ants form symbiotic associations with a range of species, including other ant species, other insects, plants, and fungi. They also are preyed on by many animals and even certain fungi. Some arthropod species spend part of their lives within ant nests, either preying on ants, their larvae, and eggs, consuming the food stores of the ants, or avoiding predators. These inquilines may bear a close resemblance to ants. The nature of this ant mimicry (myrmecomorphy) varies, with some cases involving Batesian mimicry, where the mimic reduces the risk of predation. Others show Wasmannian mimicry, a form of mimicry seen only in inquilines.
Aphids and other hemipteran insects secrete a sweet liquid called honeydew when they feed on plant sap. The sugars in honeydew are a high-energy food source, which many ant species collect. In some cases, the aphids secrete the honeydew in response to ants tapping them with their antennae. The ants in turn keep predators away from the aphids and will move them from one feeding location to another. When migrating to a new area, many colonies will take the aphids with them, to ensure a continued supply of honeydew. Ants also tend mealybugs to harvest their honeydew. Mealybugs may become a serious pest of pineapples if ants are present to protect mealybugs from their natural enemies.
Myrmecophilous (ant-loving) caterpillars of the butterfly family Lycaenidae (e.g., blues, coppers, or hairstreaks) are herded by the ants, led to feeding areas in the daytime, and brought inside the ants' nest at night. The caterpillars have a gland which secretes honeydew when the ants massage them. Some caterpillars produce vibrations and sounds that are perceived by the ants. A similar adaptation can be seen in Grizzled skipper butterflies that emit vibrations by expanding their wings in order to communicate with ants, which are natural predators of these butterflies. Other caterpillars have evolved from ant-loving to ant-eating: these myrmecophagous caterpillars secrete a pheromone that makes the ants act as if the caterpillar is one of their own larvae. The caterpillar is then taken into the ant nest where it feeds on the ant larvae. A number of specialized bacteria have been found as endosymbionts in ant guts. Some of the dominant bacteria belong to the order Hyphomicrobiales, whose members are known for being nitrogen-fixing symbionts in legumes, but the species found in ants lack the ability to fix nitrogen. Fungus-growing ants that make up the tribe Attini, including leafcutter ants, cultivate certain species of fungus in the genera Leucoagaricus or Leucocoprinus of the family Agaricaceae. In this ant-fungus mutualism, both species depend on each other for survival. The ant Allomerus decemarticulatus has evolved a three-way association with the host plant, Hirtella physophora (Chrysobalanaceae), and a sticky fungus which is used to trap their insect prey.
Lemon ants make devil's gardens by killing surrounding plants with their stings and leaving a pure patch of lemon ant trees (Duroia hirsuta). This modification of the forest provides the ants with more nesting sites inside the stems of the Duroia trees. Although some ants obtain nectar from flowers, pollination by ants is somewhat rare, one example being the pollination of the orchid Leporella fimbriata which induces male Myrmecia urens to pseudocopulate with the flowers, transferring pollen in the process. One theory that has been proposed for the rarity of pollination is that the secretions of the metapleural gland inactivate and reduce the viability of pollen. Some plants have special nectar exuding structures, extrafloral nectaries, that provide food for ants, which in turn protect the plant from more damaging herbivorous insects. Species such as the bullhorn acacia (Acacia cornigera) in Central America have hollow thorns that house colonies of stinging ants (Pseudomyrmex ferruginea) who defend the tree against insects, browsing mammals, and epiphytic vines. Isotopic labelling studies suggest that plants also obtain nitrogen from the ants. In return, the ants obtain food from protein- and lipid-rich Beltian bodies. In Fiji, Philidris nagasau (Dolichoderinae) are known to selectively grow species of epiphytic Squamellaria (Rubiaceae) which produce large domatia inside which the ant colonies nest. The ants plant the seeds, and the domatia of young seedlings are immediately occupied; the ant faeces in them contribute to rapid growth. Similar dispersal associations are found with other dolichoderines in the region as well. Another example of this type of ectosymbiosis comes from the Macaranga tree, which has stems adapted to house colonies of Crematogaster ants.
Many plant species have seeds that are adapted for dispersal by ants. Seed dispersal by ants, or myrmecochory, is widespread, and new estimates suggest that nearly 9% of all plant species may have such ant associations. Often, seed-dispersing ants perform directed dispersal, depositing the seeds in locations that increase the likelihood of seed survival to reproduction. Some plants in arid, fire-prone systems are particularly dependent on ants for their survival and dispersal as the seeds are transported to safety below the ground. Many ant-dispersed seeds have special external structures, elaiosomes, that are sought after by ants as food. Ants can substantially alter the rate of decomposition and nutrient cycling in their nests. Through myrmecochory and modification of soil conditions, they substantially alter vegetation and nutrient cycling in the surrounding ecosystem.
A convergence, possibly a form of mimicry, is seen in the eggs of stick insects. They have an edible elaiosome-like structure and are taken into the ant nest where the young hatch.
Most ants are predatory and some prey on and obtain food from other social insects including other ants. Some species specialise in preying on termites (Megaponera and Termitopone) while a few Cerapachyinae prey on other ants. Some termites, including Nasutitermes corniger, form associations with certain ant species to keep away predatory ant species. The tropical wasp Mischocyttarus drewseni coats the pedicel of its nest with an ant-repellent chemical. It is suggested that many tropical wasps may build their nests in trees and cover them to protect themselves from ants. Other wasps, such as A. multipicta, defend against ants by blasting them off the nest with bursts of wing buzzing. Stingless bees (Trigona and Melipona) use chemical defences against ants.
Flies in the Old World genus Bengalia (Calliphoridae) prey on ants and are kleptoparasites, snatching prey or brood from the mandibles of adult ants. Wingless and legless females of the Malaysian phorid fly (Vestigipoda myrmolarvoidea) live in the nests of ants of the genus Aenictus and are cared for by the ants.
Fungi in the genera Cordyceps and Ophiocordyceps infect ants. Ants react to their infection by climbing up plants and sinking their mandibles into plant tissue. The fungus kills the ants, grows on their remains, and produces a fruiting body. It appears that the fungus alters the behaviour of the ant to help disperse its spores in a microhabitat that best suits the fungus. Strepsipteran parasites also manipulate their ant host to climb grass stems, to help the parasite find mates.
A nematode (Myrmeconema neotropicum) that infects canopy ants (Cephalotes atratus) causes the black-coloured gasters of workers to turn red. The parasite also alters the behaviour of the ant, causing them to carry their gasters high. The conspicuous red gasters are mistaken by birds for ripe fruits, such as Hyeronima alchorneoides, and eaten. The droppings of the bird are collected by other ants and fed to their young, leading to further spread of the nematode.
A study of Temnothorax nylanderi colonies in Germany found that workers parasitized by the tapeworm Anomotaenia brevis (ants are intermediate hosts, the definitive hosts are woodpeckers) lived much longer than unparasitized workers and had a reduced mortality rate, comparable to that of the queens of the same species, which live for as long as two decades.
South American poison dart frogs in the genus Dendrobates feed mainly on ants, and the toxins in their skin may come from the ants.
Army ants forage in a wide roving column, attacking any animals in their path that are unable to escape. In Central and South America, Eciton burchellii is the swarming ant most commonly attended by "ant-following" birds such as antbirds and woodcreepers. This behaviour was once considered mutualistic, but later studies found the birds to be parasitic. Direct kleptoparasitism (birds stealing food from the ants' grasp) is rare and has been noted in Inca doves which pick seeds at nest entrances as they are being transported by species of Pogonomyrmex. Birds that follow ants eat many prey insects and thus decrease the foraging success of ants. Birds indulge in a peculiar behaviour called anting that, as yet, is not fully understood. Here birds rest on ant nests, or pick and drop ants onto their wings and feathers; this may be a means to remove ectoparasites from the birds.
Anteaters, aardvarks, pangolins, echidnas and numbats have special adaptations for living on a diet of ants. These adaptations include long, sticky tongues to capture ants and strong claws to break into ant nests. Brown bears (Ursus arctos) have been found to feed on ants. About 12%, 16%, and 4% of their faecal volume in spring, summer and autumn, respectively, is composed of ants.
Relationship with humans
Ants perform many ecological roles that are beneficial to humans, including the suppression of pest populations and aeration of the soil. The use of weaver ants in citrus cultivation in southern China is considered one of the oldest known applications of biological control. On the other hand, ants may become nuisances when they invade buildings or cause economic losses.
In some parts of the world (mainly Africa and South America), large ants, especially army ants, are used as surgical sutures. The wound is pressed together and ants are applied along it. The ant seizes the edges of the wound in its mandibles and locks in place. The body is then cut off and the head and mandibles remain in place to close the wound. The large heads of the dinergates (soldiers) of the leafcutting ant Atta cephalotes are also used by native surgeons in closing wounds.
Some ants have toxic venom and are of medical importance. The species include Paraponera clavata (tocandira) and Dinoponera spp. (false tocandiras) of South America and the Myrmecia ants of Australia.
In South Africa, ants are used to help harvest the seeds of rooibos (Aspalathus linearis), a plant used to make a herbal tea. The plant disperses its seeds widely, making manual collection difficult. Black ants collect and store these and other seeds in their nest, where humans can gather them en masse. Up to half a pound (200 g) of seeds may be collected from one ant-heap.
Although most ants survive attempts by humans to eradicate them, a few are highly endangered. These tend to be island species that have evolved specialized traits and risk being displaced by introduced ant species. Examples include the critically endangered Sri Lankan relict ant (Aneuretus simoni) and Adetomyrma venatrix of Madagascar.
As food
Ants and their larvae are eaten in different parts of the world. The eggs of two species of ants are used in Mexican escamoles. They are considered a form of insect caviar and can sell for as much as US$50 per kg going up to US$200 per kg (as of 2006) because they are seasonal and hard to find. In the Colombian department of Santander, hormigas culonas (roughly interpreted as "large-bottomed ants") Atta laevigata are toasted alive and eaten. In areas of India, and throughout Burma and Thailand, a paste of the green weaver ant (Oecophylla smaragdina) is served as a condiment with curry. Weaver ant eggs and larvae, as well as the ants, may be used in a Thai salad, yam, in a dish called yam khai mot daeng or red ant egg salad, a dish that comes from the Isan or north-eastern region of Thailand. Saville-Kent, in the Naturalist in Australia, wrote "Beauty, in the case of the green ant, is more than skin-deep. Their attractive, almost sweetmeat-like translucency possibly invited the first essays at their consumption by the human species". Mashed up in water, after the manner of lemon squash, "these ants form a pleasant acid drink which is held in high favor by the natives of North Queensland, and is even appreciated by many European palates".
In his First Summer in the Sierra, John Muir notes that the Digger Indians of California ate the tickling, acid gasters of the large jet-black carpenter ants. The Mexican Indians eat the repletes, or living honey-pots, of the honey ant (Myrmecocystus).
As pests
Some ant species are considered pests, primarily those that occur in human habitations, where their presence is often problematic. For example, the presence of ants would be undesirable in sterile places such as hospitals or kitchens. Some species or genera commonly categorized as pests include the Argentine ant, immigrant pavement ant, yellow crazy ant, banded sugar ant, pharaoh ant, red wood ant, black carpenter ant, odorous house ant, red imported fire ant, and European fire ant. Some ants will raid stored food, some will seek water sources, others may damage indoor structures, some may damage agricultural crops directly or by aiding sucking pests. Some will sting or bite. The adaptive nature of ant colonies makes it nearly impossible to eliminate entire colonies, and most pest management practices aim to control local populations and tend to be temporary solutions. Ant populations are managed by a combination of approaches that make use of chemical, biological, and physical methods. Chemical methods include the use of insecticidal bait which is gathered by ants as food and brought back to the nest, where the poison is inadvertently spread to other colony members through trophallaxis. Management is based on the species, and techniques may vary according to the location and circumstance.
In science and technology
Observed by humans since the dawn of history, the behaviour of ants has been documented and made the subject of early writings and fables passed from one century to another. Those who study ants using scientific methods, myrmecologists, observe them in the laboratory and in their natural conditions. Their complex and variable social structures have made ants ideal model organisms. Ultraviolet vision was first discovered in ants by Sir John Lubbock in 1881. Studies on ants have tested hypotheses in ecology and sociobiology, and have been particularly important in examining the predictions of theories of kin selection and evolutionarily stable strategies. Ant colonies may be studied by rearing or temporarily maintaining them in formicaria, specially constructed glass-framed enclosures. Individuals may be tracked for study by marking them with dots of colours.
The successful techniques used by ant colonies have been studied in computer science and robotics to produce distributed and fault-tolerant systems for solving problems, for example Ant colony optimization and Ant robotics. This area of biomimetics has led to studies of ant locomotion, search engines that make use of "foraging trails", fault-tolerant storage, and networking algorithms.
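As a concrete illustration of the optimization idea, here is a toy ant colony optimization loop for the travelling salesman problem. It is a minimal sketch under generic textbook assumptions; the parameter names and values (number of ants, evaporation rate, pheromone and heuristic weights) are illustrative, not those of any particular published algorithm.

```python
import random

def ant_colony_tsp(dist, n_ants=20, n_iter=100, evap=0.5, alpha=1.0, beta=2.0):
    """Toy ant colony optimization for the travelling salesman problem.
    dist is a symmetric matrix of city-to-city distances."""
    n = len(dist)
    pher = [[1.0] * n for _ in range(n)]  # pheromone level on each edge
    best_tour, best_len = None, float("inf")
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour = [random.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                choices = [j for j in range(n) if j not in tour]
                # Edges are chosen in proportion to pheromone (alpha) and
                # inverse distance (beta), mimicking trail-following ants.
                weights = [pher[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in choices]
                tour.append(random.choices(choices, weights)[0])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # Evaporate pheromone, then deposit on used edges (shorter tours more).
        pher = [[p * (1 - evap) for p in row] for row in pher]
        for tour, length in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                pher[i][j] += 1.0 / length
                pher[j][i] += 1.0 / length
    return best_tour, best_len

cities = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 8], [10, 4, 8, 0]]
print(ant_colony_tsp(cities))
```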
As pets
From the late 1950s through the late 1970s, ant farms were popular educational children's toys in the United States. Some later commercial versions use transparent gel instead of soil, allowing greater visibility at the cost of stressing the ants with unnatural light.
In culture
Anthropomorphised ants have often been used in fables and children's stories to represent industriousness and cooperative effort. They also are mentioned in religious texts. In the Book of Proverbs in the Bible, ants are held up as a good example of hard work and cooperation. Aesop did the same in his fable The Ant and the Grasshopper. In the Quran, Sulayman is said to have heard and understood an ant warning other ants to return home to avoid being accidentally crushed by Sulayman and his marching army. In parts of Africa, ants are considered to be the messengers of the deities. Some Native American mythology, such as the Hopi mythology, considers ants as the very first animals. Ant bites are often said to have curative properties. The sting of some species of Pseudomyrmex is claimed to give fever relief. Ant bites are used in the initiation ceremonies of some Amazon Indian cultures as a test of endurance. In Greek mythology, the goddess Athena turned the maiden Myrmex into an ant when the latter claimed to have invented the plough, when in fact it was Athena's own invention.
Ant society has always fascinated humans and has been written about both humorously and seriously. Mark Twain wrote about ants in his 1880 book A Tramp Abroad. Some modern authors have used the example of the ants to comment on the relationship between society and the individual. Examples are Robert Frost in his poem "Departmental" and T. H. White in his fantasy novel The Once and Future King. The plot in French entomologist and writer Bernard Werber's Les Fourmis science-fiction trilogy is divided between the worlds of ants and humans; ants and their behaviour are described using contemporary scientific knowledge. H.G. Wells wrote about intelligent ants destroying human settlements in Brazil and threatening human civilization in his 1905 science-fiction short story, The Empire of the Ants. In more recent times, animated cartoons and 3-D animated films featuring ants have been produced including Antz, A Bug's Life, The Ant Bully, The Ant and the Aardvark, Ferdy the Ant and Atom Ant. Renowned myrmecologist E. O. Wilson wrote a short story, "Trailhead", in 2010 for The New Yorker magazine, which describes the life and death of an ant-queen and the rise and fall of her colony, from an ant's point of view. The French neuroanatomist, psychiatrist and eugenicist Auguste Forel believed that ant societies were models for human society. He published a five-volume work from 1921 to 1923 that examined ant biology and society.
In the early 1990s, the video game SimAnt, which simulated an ant colony, won the 1992 Codie award for "Best Simulation Program".
Ants are also a popular inspiration for science-fiction insectoids, such as the Formics of Ender's Game, the Bugs of Starship Troopers, the giant ants in the films Them! and Empire of the Ants, Marvel Comics' super hero Ant-Man, and ants mutated into super-intelligence in Phase IV. In computer strategy games, ant-based species often benefit from increased production rates due to their single-minded focus, such as the Klackons in the Master of Orion series of games or the ChCht in Deadlock II. These characters are often credited with a hive mind, a common misconception about ant colonies.
See also
Glossary of ant terms
International Union for the Study of Social Insects
Myrmecological News (journal)
Task allocation and partitioning in social insects
References
Cited texts
Further reading
External links
AntWeb from The California Academy of Sciences
AntWiki – Bringing Ants to the World
Ant Species Fact Sheets from the National Pest Management Association on Argentine, Carpenter, Pharaoh, Odorous, and other ant species
Ant Genera of the World – distribution maps
The super-nettles. A dermatologist's guide to ants-in-the-plants
Symbiosis
Extant Albian first appearances
Articles containing video clips
Insects in culture
|
https://en.wikipedia.org/wiki/Abatis
|
An abatis, abattis, or abbattis is a field fortification consisting of an obstacle formed (in the modern era) of the branches of trees laid in a row, with the sharpened tops directed outwards, towards the enemy. The trees are usually interlaced or tied with wire. Abatis are used alone or in combination with wire entanglements and other obstacles.
In Slavic languages it is known as zaseka, a position behind sharpened objects.
History
There is evidence it was used as early as the Roman Imperial period, and as recently as the American Civil War and the Anglo-Zulu War of 1879.
Gregory of Tours mentions the use of abatises several times in his writing about the history of the early Franks. He wrote that the Franks ambushed and destroyed a Roman army near Neuss during the reign of Magnus Maximus with the use of an abatis. He also wrote that Mummolus, a general working for Burgundy, successfully used an abatis to defeat a Lombard army near Embrun.
A classic use of an abatis was at the Battle of Carillon (1758) during the Seven Years' War. The 3,600 French troops defeated a massive army of 16,000 British and Colonial troops by fronting their defensive positions with an extremely dense abatis. The British found the defences almost impossible to breach and were forced to withdraw with some 2,600 casualties. Other uses of an abatis can be found at the Battle of the Chateauguay, 26 October 1813, when approximately 1,300 Canadian Voltigeurs, under the command of Charles-Michel de Salaberry, defeated an American corps of approximately 4,000 men, or at the Battle of Plattsburgh.
Construction
An important weakness of abatis, in contrast to barbed wire, is that it can be destroyed by fire. Also, if laced together with rope instead of wire, the rope can be very quickly destroyed by such fires, after which the abatis can be quickly pulled apart by grappling hooks thrown from a safe distance.
An important advantage is that an improvised abatis can be quickly formed in forested areas. This can be done by simply cutting down a row of trees so that they fall with their tops toward the enemy. An alternative is to place explosives so as to blow the trees down.
Modern use
Abatis are rarely seen nowadays, having been largely replaced by wire obstacles. However, it may be used as a replacement or supplement when barbed wire is in short supply. A form of giant abatis, using whole trees instead of branches, can be used as an improvised anti-tank obstacle.
Though rarely used by modern conventional military units, abatises are still officially maintained in United States Army and Marine Corps training. Current US training instructs engineers or other constructors of such obstacles to fell trees, leaving a stump, in such a manner that the trees fall interlocked, pointing at a 45-degree angle towards the direction of approach of the enemy. Furthermore, it is recommended that the trees remain connected to the stumps and that a substantial length of roadway be covered. US military maps record an abatis by use of an inverted "V" with a short line extending from it to the right.
See also
Great Zasechnaya cherta
Notes
References
External links
Pamplin Historical Park & The National Museum of the Civil War Soldier includes large and authentic reproduction of abatis used in the U.S. Civil War.
Fortifications by type
Engineering barrages
Medieval defences
|
https://en.wikipedia.org/wiki/Aedicula
|
In ancient Roman religion, an aedicula (plural aediculae) is a small shrine, and in classical architecture refers to a niche covered by a pediment or entablature supported by a pair of columns and typically framing a statue; early Christian aediculae sometimes contained funeral urns. Aediculae are also represented in art as a form of ornamentation.
The word aedicula is the diminutive of the Latin aedes, a temple building or dwelling place. The Latin word has been Anglicised as "aedicule" and as "edicule". Describing post-antique architecture, especially Renaissance architecture, aedicular forms may be described using the word tabernacle, as in tabernacle window.
Classical aediculae
Many aediculae were household shrines (lararia) that held small altars or statues of the Lares and Di Penates. The Lares were Roman deities protecting the house and family. The Penates were originally patron gods (really genii) of the storeroom, later becoming household gods guarding the entire house.
Other aediculae were small shrines within larger temples, usually set on a base, surmounted by a pediment and surrounded by columns. In ancient Roman architecture, the aedicula served this representative function in society. Aediculae were installed in public buildings such as triumphal arches, city gates, and thermae. The Library of Celsus in Ephesus (2nd century AD) is a good example.
From the 4th century Christianization of the Roman Empire onwards such shrines, or the framework enclosing them, are often called by the Biblical term tabernacle, which becomes extended to any elaborated framework for a niche, window or picture.
Gothic aediculae
In Gothic architecture, too, an aedicula or tabernacle is a structural framing device that gives importance to its contents, whether an inscribed plaque, a cult object, a bust or the like, by assuming the tectonic vocabulary of a little building that sets it apart from the wall against which it is placed. A tabernacle frame on a wall serves similar hieratic functions as a free-standing, three-dimensional architectural baldaquin or a ciborium over an altar.
In Late Gothic settings, altarpieces and devotional images were customarily crowned with gables and canopies supported by clustered-column piers, echoing in small the architecture of Gothic churches. Painted aediculae frame figures from sacred history in initial letters of illuminated manuscripts.
Renaissance aediculae
Classicizing architectonic structure and décor all'antica, in the "ancient [Roman] mode", became a fashionable way to frame a painted or bas-relief portrait, or protect an expensive and precious mirror during the High Renaissance; Italian precedents were imitated in France, then in Spain, England and Germany during the later 16th century.
Post-Renaissance classicism
Aedicular door surrounds that are architecturally treated, with pilasters or columns flanking the doorway and an entablature even with a pediment over it came into use with the 16th century. In the neo-Palladian revival in Britain, architectonic aedicular or tabernacle frames, carved and gilded, are favourite schemes for English Palladian mirror frames of the late 1720s through the 1740s, by such designers as William Kent.
Aediculae feature prominently in the arrangement of the Saint Peter's tomb with statues by Bernini; a small aedicule directly underneath it, dated ca. 160 AD, was discovered in 1940.
Other aediculae
Similar small shrines, called naiskoi, are found in Greek religion, but their use was strictly religious.
Aediculae exist today in Roman cemeteries as a part of funeral architecture.
Presently, the most famous aedicule is situated inside the Church of the Holy Sepulchre in the city of Jerusalem.
Contemporary American architect Charles Moore (1925–1993) used the concept of aediculae in his work to create spaces within spaces and to evoke the spiritual significance of the home.
See also
Portico
Similar, but free-standing structures:
Ciborium
Baldachin
Monopteros
Gazebo
Notes
References
Bibliography
Adkins, Lesley & Adkins, Roy A. (1996). Dictionary of Roman Religion. Facts on File, Inc.
External links
Conservation glossary
Ancient Roman temples
Architectural elements
Ancient Roman architectural elements
|
https://en.wikipedia.org/wiki/Aegis
|
The aegis (aigís), as stated in the Iliad, is a device carried by Athena and Zeus, variously interpreted as an animal skin or a shield and sometimes featuring the head of a Gorgon. There may be a connection with a deity named Aex, a daughter of Helios and a nurse of Zeus or alternatively a mistress of Zeus (Hyginus, Astronomica 2. 13).
The modern concept of doing something "under someone's aegis" means doing something under the protection of a powerful, knowledgeable, or benevolent source. The word aegis is identified with protection by a strong force with its roots in Greek mythology and adopted by the Romans; there are parallels in Norse mythology and in Egyptian mythology as well, where the Greek word aegis is applied by extension.
Etymology
The Greek aigis has many meanings, including:
"violent windstorm", from the verb aïssō (word stem aïg-) = "I rush or move violently". Akin to kataigis, "thunderstorm".
The shield of a deity as described above.
"goatskin coat", from treating the word as meaning "something grammatically feminine pertaining to goat": Greek aix (stem aig-) = "goat", + suffix -is (stem -id-).
The original meaning may have been the first, and Zeus Aigiokhos = "Zeus who holds the aegis" may have originally meant "Sky/Heaven, who holds the thunderstorm". The transition to the meaning "shield" or "goatskin" may have come by folk etymology among a people familiar with draping an animal skin over the left arm as a shield.
In Greek mythology
The aegis of Athena is referred to in several places in the Iliad. "It produced a sound as from myriad roaring dragons (Iliad, 4.17) and was borne by Athena in battle ... and among them went bright-eyed Athene, holding the precious aegis which is ageless and immortal: a hundred tassels of pure gold hang fluttering from it, tight-woven each of them, and each the worth of a hundred oxen."
Virgil imagines the Cyclopes in Hephaestus' forge, who "busily burnished the aegis Athena wears in her angry moods—a fearsome thing with a surface of gold like scaly snake-skin, and the linked serpents and the Gorgon herself upon the goddess's breast—a severed head rolling its eyes", furnished with golden tassels and bearing the Gorgoneion (Medusa's head) in the central boss. Some of the Attic vase-painters retained an archaic tradition that the tassels had originally been serpents in their representations of the aegis. When the Olympian deities overtook the older deities of Greece and she was born of Metis (inside Zeus who had swallowed the goddess) and "re-born" through the head of Zeus fully clothed, Athena already wore her typical garments.
When the Olympian shakes the aegis, Mount Ida is wrapped in clouds, the thunder rolls and men are struck down with fear. "Aegis-bearing Zeus", as he is in the Iliad, sometimes lends the fearsome aegis to Athena. In the Iliad when Zeus sends Apollo to revive the wounded Hector, Apollo, holding the aegis, charges the Achaeans, pushing them back to their ships drawn up on the shore. According to Edith Hamilton's Mythology: Timeless Tales of Gods and Heroes, the Aegis is the breastplate of Zeus, and was "awful to behold". However, Zeus is normally portrayed in classical sculpture holding a thunderbolt or lightning, bearing neither a shield nor a breastplate.
In classical poetry and art
Classical Greece interpreted the Homeric aegis usually as a cover of some kind borne by Athena. It was supposed by Euripides (Ion, 995) that the aegis borne by Athena was the skin of the slain Gorgon, yet the usual understanding is that the Gorgoneion was added to the aegis, a votive offering from a grateful Perseus.
In a similar interpretation, Aex, a daughter of Helios, represented as a great fire-breathing chthonic serpent similar to the Chimera, was slain and flayed by Athena, who afterwards wore its skin, the aegis, as a cuirass (Diodorus Siculus iii. 70), or as a chlamys. The Douris cup shows that the aegis was represented exactly as the skin of the great serpent, with its scales clearly delineated.
John Tzetzes says that aegis was the skin of the monstrous giant Pallas whom Athena overcame and whose name she attached to her own.
In a late rendering by Gaius Julius Hyginus (Poetical Astronomy ii. 13), Zeus is said to have used the skin of a pet goat owned by his nurse Amalthea (aigis "goat-skin") which suckled him in Crete, as a shield when he went forth to do battle against the Titans.
The aegis appears in works of art sometimes as an animal's skin thrown over Athena's shoulders and arms, occasionally with a border of snakes, usually also bearing the Gorgon head, the gorgoneion. In some pottery it appears as a tasselled cover over Athena's dress. It is sometimes represented on the statues of Roman emperors, heroes, and warriors, and on coins, cameos and vases. A vestige of that appears in a portrait of Alexander the Great in a fresco from Pompeii dated to the first century BC, which shows the image of the head of a woman on his armor that resembles the Gorgon.
Interpretations
Herodotus thought he had identified the source of the aegis in ancient Libya, which was always a distant territory of ancient magic for the Greeks. "Athene's garments and aegis were borrowed by the Greeks from the Libyan women, who are dressed in exactly the same way, except that their leather garments are fringed with thongs, not serpents."
Robert Graves in The Greek Myths (1955) asserts that the aegis in its Libyan sense had been a shamanic pouch containing various ritual objects, bearing the device of a monstrous serpent-haired visage with tusk-like teeth and a protruding tongue which was meant to frighten away the uninitiated. In this context, Graves identifies the aegis as clearly belonging first to Athena.
One current interpretation is that the Hittite sacral hieratic hunting bag (kursas), a rough and shaggy goatskin that has been firmly established in literary texts and iconography by H.G. Güterbock, was a source of the aegis.
References
External links
Theoi Project: "Aigis"
Die Aigis: Zu Typologie und Ikonographie eines Mythischen Gegenstandes: a Doctoral dissertation on the Ægis (Westfälischen Wilhelms-Universität, Münster 1991) by Sigrid Vierck.
Comparative mythology
Greek mythology
Greek shields
Interpersonal relationships
Medusa
Mythography
Mythological clothing
Mythological shields
Symbols of Athena
|
https://en.wikipedia.org/wiki/Agarose
|
Agarose is a heteropolysaccharide, generally extracted from certain red seaweed. It is a linear polymer made up of the repeating unit of agarobiose, which is a disaccharide made up of D-galactose and 3,6-anhydro-L-galactopyranose. Agarose is one of the two principal components of agar, and is purified from agar by removing agar's other component, agaropectin.
Agarose is frequently used in molecular biology for the separation of large molecules, especially DNA, by electrophoresis. Slabs of agarose gels (usually 0.7 - 2%) for electrophoresis are readily prepared by pouring the warm, liquid solution into a mold. A wide range of different agaroses of varying molecular weights and properties are commercially available for this purpose. Agarose may also be formed into beads and used in a number of chromatographic methods for protein purification.
Structure
Agarose is a linear polymer with a molecular weight of about 120,000, consisting of alternating D-galactose and 3,6-anhydro-L-galactopyranose linked by α-(1→3) and β-(1→4) glycosidic bonds. The 3,6-anhydro-L-galactopyranose is an L-galactose with an anhydro bridge between the 3 and 6 positions, although some L-galactose units in the polymer may not contain the bridge. Some D-galactose and L-galactose units can be methylated, and pyruvate and sulfate are also found in small quantities.
Each agarose chain contains ~800 molecules of galactose, and the agarose polymer chains form helical fibers that aggregate into a supercoiled structure with a radius of 20-30 nanometers (nm). The fibers are quasi-rigid, and have a wide range of lengths depending on the agarose concentration. When solidified, the fibers form a three-dimensional mesh of channels of diameter ranging from 50 nm to >200 nm depending on the concentration of agarose used; higher concentrations yield lower average pore diameters. The 3-D structure is held together with hydrogen bonds and can therefore be disrupted by heating back to a liquid state.
Properties
Agarose is available as a white powder which dissolves in near-boiling water, and forms a gel when it cools. Agarose exhibits the phenomenon of thermal hysteresis in its liquid-to-gel transition, i.e. it gels and melts at different temperatures. The gelling and melting temperatures vary depending on the type of agarose: standard agaroses derived from Gelidium gel at lower temperatures than those derived from Gracilaria, whose higher methoxy substituents raise its gelling temperature. The melting and gelling temperatures may be dependent on the concentration of the gel, particularly at low gel concentrations of less than 1%. The gelling and melting temperatures are therefore given at a specified agarose concentration.
Natural agarose contains uncharged methyl groups, and the extent of methylation is directly proportional to the gelling temperature. Synthetic methylation, however, has the reverse effect: increased methylation lowers the gelling temperature. A variety of chemically modified agaroses with different melting and gelling temperatures are available through chemical modifications.
The agarose in the gel forms a meshwork that contains pores, and the size of the pores depends on the concentration of agarose added. On standing, the agarose gels are prone to syneresis (extrusion of water through the gel surface), but the process is slow enough to not interfere with the use of the gel.
Agarose gel can have high gel strength at low concentration, making it suitable as an anti-convection medium for gel electrophoresis. Agarose gels as dilute as 0.15% can form slabs for gel electrophoresis. The agarose polymer contains charged groups, in particular pyruvate and sulfate. These negatively charged groups can slow down the movement of DNA molecules in a process called electroendosmosis (EEO), and low EEO agarose is therefore generally preferred for use in agarose gel electrophoresis of nucleic acids. Zero EEO agaroses are also available but these may be undesirable for some applications as they may be made by adding positively charged groups that can affect subsequent enzyme reactions. Electroendosmosis is a reason agarose is used preferentially over agar as agaropectin in agar contains a significant amount of negatively charged sulphate and carboxyl groups. The removal of agaropectin in agarose substantially reduces the EEO, as well as reducing the non-specific adsorption of biomolecules to the gel matrix. However, for some applications such as the electrophoresis of serum protein, a high EEO may be desirable, and agaropectin may be added in the gel used.
Low melting and gelling temperature agaroses
The melting and gelling temperatures of agarose can be modified by chemical modifications, most commonly by hydroxyethylation, which reduces the number of intrastrand hydrogen bonds, resulting in lower melting and setting temperatures compared to standard agaroses. The exact temperature is determined by the degree of substitution, and many available low-melting-point (LMP) agaroses remain fluid at relatively low temperatures. This property allows enzymatic manipulations to be carried out directly after the DNA gel electrophoresis by adding slices of melted gel containing the DNA fragment of interest to a reaction mixture. The LMP agarose contains fewer of the sulphates that can affect some enzymatic reactions, and is therefore preferred for some applications.
Hydroxyethylated agarose also has a smaller pore size (~90 nm) than standard agaroses. Hydroxyethylation may reduce the pore size by reducing the packing density of the agarose bundles, so an LMP gel can also affect the timing and separation during electrophoresis. Ultra-low melting or gelling temperature agaroses may gel only at temperatures well below those of standard agaroses.
Applications
Agarose is a preferred matrix for work with proteins and nucleic acids as it has a broad range of physical, chemical and thermal stability, and its lower degree of chemical complexity also makes it less likely to interact with biomolecules. Agarose is most commonly used as the medium for analytical scale electrophoretic separation in agarose gel electrophoresis. Gels made from purified agarose have a relatively large pore size, making them useful for separation of large molecules, such as proteins and protein complexes >200 kilodaltons, as well as DNA fragments >100 basepairs. Agarose is also used widely for a number of other applications, for example immunodiffusion and immunoelectrophoresis, as the agarose fibers can function as anchor for immunocomplexes.
Agarose gel electrophoresis
Agarose gel electrophoresis is the routine method for resolving DNA in the laboratory. Agarose gels have lower resolving power for DNA than acrylamide gels, but they have greater range of separation, and are therefore usually used for DNA fragments with lengths of 50–20,000 bp (base pairs), although resolution of over 6 Mb is possible with pulsed field gel electrophoresis (PFGE). It can also be used to separate large protein molecules, and it is the preferred matrix for the gel electrophoresis of particles with effective radii larger than 5-10 nm.
The pore size of the gel affects the size of the DNA that can be sieved. The lower the concentration of the gel, the larger the pore size, and the larger the DNA that can be sieved. However, low-concentration gels (0.1-0.2%) are fragile and therefore hard to handle, and the electrophoresis of large DNA molecules can take several days. The limit of resolution for standard agarose gel electrophoresis is around 750 kb. This limit can be overcome by PFGE, where alternating orthogonal electric fields are applied to the gel. The DNA fragments reorientate themselves when the applied field switches direction, but larger molecules of DNA take longer to realign themselves when the electric field is altered, while smaller ones realign more quickly, so the DNA can be fractionated according to size.
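To make the concentration-pore-size trade-off concrete, the helper below maps a target DNA fragment size to a suggested gel concentration. The cut-off values are rule-of-thumb figures assumed purely for illustration, not authoritative protocol values; consult a protocol reference for real work.

```python
# (max fragment size in base pairs, suggested % agarose) - illustrative values
GUIDELINES = [
    (1_000, 2.0),
    (5_000, 1.2),
    (10_000, 0.8),
    (20_000, 0.5),
]

def suggest_gel_percent(fragment_bp):
    """Suggest an agarose concentration (%) for a DNA fragment size,
    encoding the rule that larger fragments need larger pores (lower %)."""
    for max_bp, percent in GUIDELINES:
        if fragment_bp <= max_bp:
            return percent
    return 0.3  # very large fragments: fragile low-percentage gel, or use PFGE

print(suggest_gel_percent(3_000))  # -> 1.2
```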
Agarose gels are cast in a mold, and when set, usually run horizontally submerged in a buffer solution. Tris-acetate-EDTA and Tris-Borate-EDTA buffers are commonly used, but other buffers such as Tris-phosphate, barbituric acid-sodium barbiturate or Tris-barbiturate buffers may be used in other applications. The DNA is normally visualized by staining with ethidium bromide and then viewed under a UV light, but other methods of staining are available, such as SYBR Green, GelRed, methylene blue, and crystal violet. If the separated DNA fragments are needed for further downstream experiment, they can be cut out from the gel in slices for further manipulation.
Protein purification
Agarose gel matrix is often used for protein purification, for example, in column-based preparative scale separation as in gel filtration chromatography, affinity chromatography and ion exchange chromatography. It is however not used as a continuous gel, rather it is formed into porous beads or resins of varying fineness. The beads are highly porous so that protein may flow freely through the beads. These agarose-based beads are generally soft and easily crushed, so they should be used under gravity-flow, low-speed centrifugation, or low-pressure procedures. The strength of the resins can be improved by increased cross-linking and chemical hardening of the agarose resins, however such changes may also result in a lower binding capacity for protein in some separation procedures such as affinity chromatography.
Agarose is a useful material for chromatography because it does not absorb biomolecules to any significant extent, has good flow properties, and can tolerate extremes of pH and ionic strength as well as high concentration of denaturants such as 8M urea or 6M guanidine HCl. Examples of agarose-based matrix for gel filtration chromatography are Sepharose and WorkBeads 40 SEC (cross-linked beaded agarose), Praesto and Superose (highly cross-linked beaded agaroses), and Superdex (dextran covalently linked to agarose).
For affinity chromatography, beaded agarose is the most commonly used matrix resin for the attachment of the ligands that bind protein. The ligands are linked covalently through a spacer to activated hydroxyl groups of agarose bead polymer. Proteins of interest can then be selectively bound to the ligands to separate them from other proteins, after which it can be eluted. The agarose beads used are typically of 4% and 6% densities with a high binding capacity for protein.
Solid culture media
An agarose plate may sometimes be used instead of agar for culturing organisms, as agar may contain impurities that can affect the growth of the organism or some downstream procedures such as polymerase chain reaction (PCR). Agarose is also harder than agar and may therefore be preferable where greater gel strength is necessary, and its lower gelling temperature may prevent thermal shock to the organism when the cells are suspended in liquid before gelling. It may be used for the culture of strictly autotrophic bacteria, plant protoplasts, Caenorhabditis elegans, and other organisms, as well as various cell lines.
Motility assays
Agarose is sometimes used instead of agar to measure microorganism motility and mobility. Motile species will be able to migrate, albeit slowly, throughout the porous gel and infiltration rates can then be visualized. The gel's porosity is directly related to the concentration of agar or agarose in the medium, so different concentration gels may be used to assess a cell's swimming, swarming, gliding and twitching motility. Under-agarose cell migration assay may be used to measure chemotaxis and chemokinesis. A layer of agarose gel is placed between a cell population and a chemoattractant. As a concentration gradient develops from the diffusion of the chemoattractant into the gel, various cell populations requiring different stimulation levels to migrate can then be visualized over time using microphotography as they tunnel upward through the gel against gravity along the gradient.
See also
Agar
SDD-AGE
References
Polysaccharides
|
https://en.wikipedia.org/wiki/Affection
|
Affection or fondness is a "disposition or state of mind or body" commonly linked to a feeling or type of love. It has led to multiple branches in philosophy and psychology that discuss emotion, disease, influence, and state of being. Often, "affection" denotes more than mere goodwill or friendship. Writers on ethics generally use the word to refer to distinct states of feeling, both lasting and temporary. Some contrast it with passion as being free from the distinctively sensual element.
Affection can elicit diverse emotional reactions such as embarrassment, disgust, pleasure, and annoyance. The emotional and physical effect of affection also varies between the giver and the receiver.
Restricted definition
Sometimes the term is restricted to emotional states directed towards living entities, including humans and animals. Affection is often compared with passion, a term stemming from the Greek word pathos. Consequently, references to affection are found in the works of philosophers such as René Descartes, Baruch Spinoza, and early British ethicists. Despite these associations, it is commonly differentiated from passion on various grounds. Some definitions of affection exclude feelings of anxiety or heightened excitement, elements typically linked to passion. In this narrower context, the term holds significance in ethical frameworks, particularly concerning social or parental affections, forming a facet of moral duties and virtue. Ethical perspectives may hinge on whether affection is perceived as voluntary.
Expression
Affection can be communicated by looks, words, gestures, or touches. It conveys love and social connection. The five love languages explains how couples can communicate affections to each other. Affectionate behavior may have evolved from parental nurturing behavior due to its associations with hormonal rewards. Such affection has been shown to influence brain development in infants, especially their biochemical systems and prefrontal development.
Affectionate gestures can become undesirable if they insinuate potential harm to one's welfare. However, when welcomed, such behavior can offer several health benefits. Some theories suggest that positive sentiments enhance individuals' inclination to engage socially, and the sense of closeness fostered by affection contributes to nurturing positive sentiments among them.
Benefits of affection
Affection exchange is an adaptive human behavior that benefits well-being. Expressing affection brings emotional, physical, and relational gains for people and their close connections. Sharing positive emotions yields health advantages like reduced stress hormones, lower cholesterol, lower blood pressure, and a stronger immune system. Expressing affection, not merely feeling affection, is internally rewarding. Even if not reciprocated, givers still experience its effects.
Parental relationships
Affectionate behavior is frequently considered an outcome of parental nurturing, tied to hormonal rewards. Both positive and negative parental actions may contribute to health issues in later life. Neglect and abuse result in poorer well-being and mental health, contrasting with affection's positive effects. A 2013 study highlighted the impact of early child abuse and lack of affection on physical health.
Affectionism
Affectionism is a school of thought that considers affections to be of central importance. Although it is not found in mainstream Western philosophy, it does exist in Indian philosophy.
See also
References
External links
Emotions
Love
Personal life
Phrenology
|
https://en.wikipedia.org/wiki/Autocorrelation
|
Autocorrelation, sometimes known as serial correlation in the discrete time case, is the correlation of a signal with a delayed copy of itself as a function of delay. Informally, it is the similarity between observations of a random variable as a function of the time lag between them. The analysis of autocorrelation is a mathematical tool for finding repeating patterns, such as the presence of a periodic signal obscured by noise, or identifying the missing fundamental frequency in a signal implied by its harmonic frequencies. It is often used in signal processing for analyzing functions or series of values, such as time domain signals.
Different fields of study define autocorrelation differently, and not all of these definitions are equivalent. In some fields, the term is used interchangeably with autocovariance.
Unit root processes, trend-stationary processes, autoregressive processes, and moving average processes are specific forms of processes with autocorrelation.
Auto-correlation of stochastic processes
In statistics, the autocorrelation of a real or complex random process is the Pearson correlation between values of the process at different times, as a function of the two times or of the time lag. Let $\{X_t\}$ be a random process, and $t$ be any point in time ($t$ may be an integer for a discrete-time process or a real number for a continuous-time process). Then $X_t$ is the value (or realization) produced by a given run of the process at time $t$. Suppose that the process has mean $\mu_t$ and variance $\sigma_t^2$ at time $t$, for each $t$. Then the definition of the auto-correlation function between times $t_1$ and $t_2$ is

R_{XX}(t_1, t_2) = \operatorname{E}\left[X_{t_1} \overline{X_{t_2}}\right]

where $\operatorname{E}$ is the expected value operator and the bar represents complex conjugation. Note that the expectation may not be well defined.
Subtracting the mean before multiplication yields the auto-covariance function between times $t_1$ and $t_2$:

K_{XX}(t_1, t_2) = \operatorname{E}\left[(X_{t_1} - \mu_{t_1}) \overline{(X_{t_2} - \mu_{t_2})}\right]
Note that this expression is not well defined for all time series or processes, because the mean may not exist, or the variance may be zero (for a constant process) or infinite (for processes with distribution lacking well-behaved moments, such as certain types of power law).
Definition for wide-sense stationary stochastic process
If $\{X_t\}$ is a wide-sense stationary process then the mean $\mu$ and the variance $\sigma^2$ are time-independent, and further the autocovariance function depends only on the lag between $t_1$ and $t_2$: the autocovariance depends only on the time-distance between the pair of values but not on their position in time. This further implies that the autocovariance and auto-correlation can be expressed as a function of the time-lag $\tau = t_2 - t_1$, and that this would be an even function of the lag. This gives the more familiar forms for the auto-correlation function

R_{XX}(\tau) = \operatorname{E}\left[X_{t + \tau} \overline{X_t}\right]

and the auto-covariance function:

K_{XX}(\tau) = \operatorname{E}\left[(X_{t + \tau} - \mu) \overline{(X_t - \mu)}\right]

In particular, note that

K_{XX}(0) = \sigma^2
Normalization
It is common practice in some disciplines (e.g. statistics and time series analysis) to normalize the autocovariance function to get a time-dependent Pearson correlation coefficient. However, in other disciplines (e.g. engineering) the normalization is usually dropped and the terms "autocorrelation" and "autocovariance" are used interchangeably.
The definition of the auto-correlation coefficient of a stochastic process is

\rho_{XX}(t_1, t_2) = \frac{K_{XX}(t_1, t_2)}{\sigma_{t_1} \sigma_{t_2}}

If the function $\rho_{XX}$ is well defined, its value must lie in the range $[-1, 1]$, with 1 indicating perfect correlation and −1 indicating perfect anti-correlation.

For a wide-sense stationary (WSS) process, the definition is

\rho_{XX}(\tau) = \frac{K_{XX}(\tau)}{\sigma^2}
The normalization is important both because the interpretation of the autocorrelation as a correlation provides a scale-free measure of the strength of statistical dependence, and because the normalization has an effect on the statistical properties of the estimated autocorrelations.
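A minimal sketch of how the normalized autocorrelation coefficient is typically estimated from a finite sample, assuming the statistics convention (mean subtracted, normalized by the lag-0 autocovariance):

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation coefficients rho(0..max_lag) of a 1-D series."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                      # subtract the sample mean
    var = np.dot(x, x) / len(x)           # lag-0 autocovariance
    return np.array([np.dot(x[: len(x) - k], x[k:]) / (len(x) * var)
                     for k in range(max_lag + 1)])

# A noisy sine wave: the ACF recovers the hidden periodicity.
t = np.arange(500)
x = np.sin(2 * np.pi * t / 50) + np.random.normal(scale=1.0, size=t.size)
print(acf(x, 5))  # rho(0) is 1 by construction; later lags decay
```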
Properties
Symmetry property
The fact that the auto-correlation function $R_{XX}$ is an even function can be stated as

R_{XX}(t_1, t_2) = \overline{R_{XX}(t_2, t_1)}

respectively for a WSS process:

R_{XX}(\tau) = \overline{R_{XX}(-\tau)}
Maximum at zero
For a WSS process:

|R_{XX}(\tau)| \leq R_{XX}(0)

Notice that $R_{XX}(0)$ is always real.
Cauchy–Schwarz inequality
The Cauchy–Schwarz inequality, in its form for stochastic processes:

|R_{XX}(t_1, t_2)|^2 \leq \operatorname{E}\left[|X_{t_1}|^2\right] \operatorname{E}\left[|X_{t_2}|^2\right]
Autocorrelation of white noise
The autocorrelation of a continuous-time white noise signal will have a strong peak (represented by a Dirac delta function) at $\tau = 0$ and will be exactly $0$ for all other $\tau$.
Wiener–Khinchin theorem
The Wiener–Khinchin theorem relates the autocorrelation function $R_{XX}$ to the power spectral density $S_{XX}$ via the Fourier transform:

R_{XX}(\tau) = \int_{-\infty}^{\infty} S_{XX}(f) e^{i 2 \pi f \tau} \, df

S_{XX}(f) = \int_{-\infty}^{\infty} R_{XX}(\tau) e^{-i 2 \pi f \tau} \, d\tau

For real-valued functions, the symmetric autocorrelation function has a real symmetric transform, so the Wiener–Khinchin theorem can be re-expressed in terms of real cosines only:

R_{XX}(\tau) = \int_{-\infty}^{\infty} S_{XX}(f) \cos(2 \pi f \tau) \, df

S_{XX}(f) = \int_{-\infty}^{\infty} R_{XX}(\tau) \cos(2 \pi f \tau) \, d\tau
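The theorem underlies the standard fast method for computing autocorrelations in practice: take the FFT, form the power spectrum, and invert. A minimal NumPy sketch (the zero-padding is an assumption made to obtain a linear rather than circular correlation):

```python
import numpy as np

def autocorr_fft(x):
    """Autocorrelation via Wiener-Khinchin: inverse FFT of the power
    spectrum |X(f)|^2, zero-padded to avoid circular wrap-around."""
    n = len(x)
    X = np.fft.fft(x, 2 * n)
    return np.fft.ifft(X * np.conj(X)).real[:n]

x = np.random.randn(1024)           # white noise
r = autocorr_fft(x)
print(r[:4])  # r[0] dominates; r[k] for k > 0 hovers near zero, as expected
```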
Auto-correlation of random vectors
The (potentially time-dependent) auto-correlation matrix (also called second moment) of a (potentially time-dependent) random vector $\mathbf{X} = (X_1, \ldots, X_n)^{\rm T}$ is an $n \times n$ matrix containing as elements the autocorrelations of all pairs of elements of the random vector $\mathbf{X}$. The autocorrelation matrix is used in various digital signal processing algorithms.

For a random vector $\mathbf{X} = (X_1, \ldots, X_n)^{\rm T}$ containing random elements whose expected value and variance exist, the auto-correlation matrix is defined by

R_{\mathbf{X}\mathbf{X}} \triangleq \operatorname{E}\left[\mathbf{X} \mathbf{X}^{\rm T}\right]

where ${}^{\rm T}$ denotes transposition, so that $R_{\mathbf{X}\mathbf{X}}$ has dimensions $n \times n$.

Written component-wise:

(R_{\mathbf{X}\mathbf{X}})_{ij} = \operatorname{E}\left[X_i X_j\right]

If $\mathbf{Z}$ is a complex random vector, the autocorrelation matrix is instead defined by

R_{\mathbf{Z}\mathbf{Z}} \triangleq \operatorname{E}\left[\mathbf{Z} \mathbf{Z}^{\rm H}\right]

Here ${}^{\rm H}$ denotes Hermitian transposition.

For example, if $\mathbf{X} = (X_1, X_2, X_3)^{\rm T}$ is a random vector, then $R_{\mathbf{X}\mathbf{X}}$ is a $3 \times 3$ matrix whose $(i,j)$-th entry is $\operatorname{E}\left[X_i X_j\right]$.
Properties of the autocorrelation matrix
The autocorrelation matrix is a Hermitian matrix for complex random vectors and a symmetric matrix for real random vectors.
The autocorrelation matrix is a positive semidefinite matrix, i.e. $\mathbf{a}^{\rm T} \operatorname{R}_{\mathbf{X}\mathbf{X}} \mathbf{a} \geq 0$ for all $\mathbf{a} \in \mathbb{R}^n$ for a real random vector, and respectively $\mathbf{a}^{\rm H} \operatorname{R}_{\mathbf{Z}\mathbf{Z}} \mathbf{a} \geq 0$ for all $\mathbf{a} \in \mathbb{C}^n$ in case of a complex random vector.
All eigenvalues of the autocorrelation matrix are real and non-negative.
The auto-covariance matrix is related to the autocorrelation matrix as follows:

$$\operatorname{K}_{\mathbf{X}\mathbf{X}} = \operatorname{E}\left[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{X} - \operatorname{E}[\mathbf{X}])^{\rm T}\right] = \operatorname{R}_{\mathbf{X}\mathbf{X}} - \operatorname{E}[\mathbf{X}] \operatorname{E}[\mathbf{X}]^{\rm T}$$

Respectively for complex random vectors:

$$\operatorname{K}_{\mathbf{Z}\mathbf{Z}} = \operatorname{E}\left[(\mathbf{Z} - \operatorname{E}[\mathbf{Z}])(\mathbf{Z} - \operatorname{E}[\mathbf{Z}])^{\rm H}\right] = \operatorname{R}_{\mathbf{Z}\mathbf{Z}} - \operatorname{E}[\mathbf{Z}] \operatorname{E}[\mathbf{Z}]^{\rm H}$$
Auto-correlation of deterministic signals
In signal processing, the above definition is often used without the normalization, that is, without subtracting the mean and dividing by the variance. When the autocorrelation function is normalized by mean and variance, it is sometimes referred to as the autocorrelation coefficient or autocovariance function.
Auto-correlation of continuous-time signal
Given a signal $f(t)$, the continuous autocorrelation $R_{ff}(\tau)$ is most often defined as the continuous cross-correlation integral of $f(t)$ with itself, at lag $\tau$:

$$R_{ff}(\tau) = \int_{-\infty}^{\infty} f(t + \tau) \overline{f(t)} \, dt$$

where $\overline{f(t)}$ represents the complex conjugate of $f(t)$. Note that the parameter $t$ in the integral is a dummy variable and is only necessary to calculate the integral. It has no specific meaning.
Auto-correlation of discrete-time signal
The discrete autocorrelation $R$ at lag $\ell$ for a discrete-time signal $f(n)$ is

$$R_{ff}(\ell) = \sum_{n \in \mathbb{Z}} f(n + \ell) \overline{f(n)}$$
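A direct translation of this definition into Python reads as follows (a sketch; the absolute-value shortcut in the lag handling relies on the even symmetry of real signals):

```python
import numpy as np

def autocorr_energy(x, lag):
    """R(lag) = sum_n x[n + lag] * conj(x[n]) for a finite-energy
    discrete-time signal (zero outside its support)."""
    x = np.asarray(x)
    lag = abs(lag)    # valid for real signals, where R(-lag) = R(lag)
    return np.sum(x[lag:] * np.conj(x[:len(x) - lag]))

x = np.array([2.0, 3.0, -1.0])
print([float(autocorr_energy(x, k)) for k in (-2, -1, 0, 1, 2)])
# [-2.0, 3.0, 14.0, 3.0, -2.0]
```

The same sequence reappears in the worked example under "Efficient computation" below.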
The above definitions work for signals that are square integrable, or square summable, that is, of finite energy. Signals that "last forever" are treated instead as random processes, in which case different definitions are needed, based on expected values. For wide-sense-stationary random processes, the autocorrelations are defined as

$$R_{ff}(\tau) = \operatorname{E}\left[f(t + \tau) \overline{f(t)}\right]$$

$$R_{ff}(\ell) = \operatorname{E}\left[f(n + \ell) \overline{f(n)}\right]$$

For processes that are not stationary, these will also be functions of $t$, or $n$.

For processes that are also ergodic, the expectation can be replaced by the limit of a time average. The autocorrelation of an ergodic process is sometimes defined as or equated to

$$R_{ff}(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_0^T f(t + \tau) \overline{f(t)} \, dt$$

$$R_{ff}(\ell) = \lim_{N \to \infty} \frac{1}{N} \sum_{n=0}^{N-1} f(n + \ell) \overline{f(n)}$$
These definitions have the advantage that they give sensible well-defined single-parameter results for periodic functions, even when those functions are not the output of stationary ergodic processes.
Alternatively, signals that last forever can be treated by a short-time autocorrelation function analysis, using finite time integrals. (See short-time Fourier transform for a related process.)
Definition for periodic signals
If $f$ is a continuous periodic function of period $T$, the integration from $-\infty$ to $\infty$ is replaced by integration over any interval $[t_0, t_0 + T]$ of length $T$:

$$R_{ff}(\tau) \triangleq \int_{t_0}^{t_0 + T} f(t + \tau) \overline{f(t)} \, dt$$

which is equivalent to

$$R_{ff}(\tau) \triangleq \int_{t_0}^{t_0 + T} f(t) \overline{f(t - \tau)} \, dt$$
Properties
In the following, we will describe properties of one-dimensional autocorrelations only, since most properties are easily transferred from the one-dimensional case to the multi-dimensional cases. These properties hold for wide-sense stationary processes.
A fundamental property of the autocorrelation is symmetry, $R_{ff}(\tau) = R_{ff}(-\tau)$, which is easy to prove from the definition. In the continuous case,

the autocorrelation is an even function, $R_{ff}(-\tau) = R_{ff}(\tau)$, when $f$ is a real function, and

the autocorrelation is a Hermitian function, $R_{ff}(-\tau) = \overline{R_{ff}(\tau)}$, when $f$ is a complex function.
The continuous autocorrelation function reaches its peak at the origin, where it takes a real value, i.e. for any delay $\tau$, $|R_{ff}(\tau)| \leq R_{ff}(0)$. This is a consequence of the rearrangement inequality. The same result holds in the discrete case.
The autocorrelation of a periodic function is, itself, periodic with the same period.
The autocorrelation of the sum of two completely uncorrelated functions (the cross-correlation is zero for all $\tau$) is the sum of the autocorrelations of each function separately.
Since autocorrelation is a specific type of cross-correlation, it maintains all the properties of cross-correlation.
By using the symbol $*$ to represent convolution and letting $g_{-1}$ be the function which manipulates the function $f$ and is defined as $g_{-1}(f)(t) = f(-t)$, the definition for $R_{ff}(\tau)$ may be written as:

$$R_{ff}(\tau) = \left(f * g_{-1}(\overline{f})\right)(\tau)$$
Multi-dimensional autocorrelation
Multi-dimensional autocorrelation is defined similarly. For example, in three dimensions the autocorrelation of a square-summable discrete signal would be

$$R(j, k, \ell) = \sum_{n, q, r} x_{n, q, r} \, \overline{x}_{n - j, q - k, r - \ell}$$
When mean values are subtracted from signals before computing an autocorrelation function, the resulting function is usually called an auto-covariance function.
Efficient computation
For data expressed as a discrete sequence, it is frequently necessary to compute the autocorrelation with high computational efficiency. A brute force method based on the signal processing definition $R_{xx}(j) = \sum_n x_n \overline{x}_{n-j}$ can be used when the signal size is small. For example, to calculate the autocorrelation of the real signal sequence $x = (2, 3, -1)$ (i.e. $x_0 = 2$, $x_1 = 3$, $x_2 = -1$, and $x_i = 0$ for all other values of $i$) by hand, we first recognize that the definition just given is the same as the "usual" multiplication, but with right shifts, where each vertical addition gives the autocorrelation for particular lag values:

           2  3 -1
        ×  2  3 -1
        ------------------
          -2 -3  1
              6  9 -3
                 4  6 -2
        ------------------
          -2  3 14  3 -2

Thus the required autocorrelation sequence is $R_{xx} = (-2, 3, 14, 3, -2)$, where $R_{xx}(0) = 14$, $R_{xx}(-1) = R_{xx}(1) = 3$, and $R_{xx}(-2) = R_{xx}(2) = -2$, the autocorrelation for other lag values being zero. In this calculation we do not perform the carry-over operation during addition as is usual in normal multiplication. Note that we can halve the number of operations required by exploiting the inherent symmetry of the autocorrelation. If the signal happens to be periodic, i.e. $x = (\ldots, 2, 3, -1, 2, 3, -1, \ldots)$, then we get a circular autocorrelation (similar to circular convolution) where the left and right tails of the previous autocorrelation sequence will overlap and give $(\ldots, 14, 1, 1, 14, 1, 1, \ldots)$, which has the same period as the signal sequence. The procedure can be regarded as an application of the convolution property of the Z-transform of a discrete signal.
While the brute force algorithm is order $n^2$, several efficient algorithms exist which can compute the autocorrelation in order $n \log(n)$. For example, the Wiener–Khinchin theorem allows computing the autocorrelation from the raw data $X(t)$ with two fast Fourier transforms (FFT):

$$F_R(f) = \operatorname{FFT}[X(t)]$$
$$S(f) = F_R(f) F_R^*(f)$$
$$R(\tau) = \operatorname{IFFT}[S(f)]$$

where IFFT denotes the inverse fast Fourier transform. The asterisk denotes complex conjugate.
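A Python sketch of this FFT route follows; the zero-padding to 2n − 1 points keeps the circular wrap-around of the DFT from aliasing the linear autocorrelation, and the output matches the worked example above:

```python
import numpy as np

def autocorr_fft(x):
    """Linear autocorrelation of a finite real sequence via two FFTs.
    Zero-pads to 2n - 1 points so circular wrap-around cannot alias."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    X = np.fft.fft(x, 2 * n - 1)
    r = np.fft.ifft(X * np.conj(X)).real
    # reorder from FFT layout to lags -(n-1) .. (n-1)
    return np.concatenate((r[-(n - 1):], r[:n]))

print(autocorr_fft([2, 3, -1]))   # [-2.  3. 14.  3. -2.]
```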
Alternatively, a multiple-$\tau$ correlation can be performed by using brute force calculation for low $\tau$ values, and then progressively binning the data with a logarithmic density to compute higher values, resulting in the same $n \log(n)$ efficiency, but with lower memory requirements.
Estimation
For a discrete process with known mean $\mu$ and variance $\sigma^2$ for which we observe $n$ observations $\{X_1, X_2, \ldots, X_n\}$, an estimate of the autocorrelation coefficient may be obtained as

$$\hat{R}(k) = \frac{1}{(n - k)\sigma^2} \sum_{t=1}^{n-k} (X_t - \mu)(X_{t+k} - \mu)$$

for any positive integer $k < n$. When the true mean $\mu$ and variance $\sigma^2$ are known, this estimate is unbiased. If the true mean and variance of the process are not known there are several possibilities:

If $\mu$ and $\sigma^2$ are replaced by the standard formulae for sample mean and sample variance, then this is a biased estimate.

A periodogram-based estimate replaces $n - k$ in the above formula with $n$. This estimate is always biased; however, it usually has a smaller mean squared error.

Other possibilities derive from treating the two portions of data $\{X_1, X_2, \ldots, X_{n-k}\}$ and $\{X_{k+1}, X_{k+2}, \ldots, X_n\}$ separately and calculating separate sample means and/or sample variances for use in defining the estimate.
The advantage of estimates of the last type is that the set of estimated autocorrelations, as a function of $k$, then form a function which is a valid autocorrelation in the sense that it is possible to define a theoretical process having exactly that autocorrelation. Other estimates can suffer from the problem that, if they are used to calculate the variance of a linear combination of the $X$'s, the variance calculated may turn out to be negative.
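A sketch of the periodogram-style estimate described above, with the sample mean subtracted and every lag divided by n; the AR(1) test series is an arbitrary choice whose true autocorrelations decay as 0.8^k:

```python
import numpy as np

def acf_biased(x, max_lag):
    """Periodogram-style sample autocorrelation: subtract the sample
    mean and divide every lag by n (not n - k). Biased, but the result
    is a valid autocorrelation sequence."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    denom = np.dot(xc, xc)              # n times the sample variance
    return np.array([np.dot(xc[k:], xc[:n - k]) / denom
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(3)
x = np.zeros(5000)
for t in range(1, len(x)):              # AR(1): rho(k) ~ 0.8**k
    x[t] = 0.8 * x[t - 1] + rng.normal()
print(acf_biased(x, 3))                 # approx [1.0, 0.8, 0.64, 0.51]
```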
Regression analysis
In regression analysis using time series data, autocorrelation in a variable of interest is typically modeled either with an autoregressive model (AR), a moving average model (MA), their combination as an autoregressive-moving-average model (ARMA), or an extension of the latter called an autoregressive integrated moving average model (ARIMA). With multiple interrelated data series, vector autoregression (VAR) or its extensions are used.
In ordinary least squares (OLS), the adequacy of a model specification can be checked in part by establishing whether there is autocorrelation of the regression residuals. Problematic autocorrelation of the errors, which themselves are unobserved, can generally be detected because it produces autocorrelation in the observable residuals. (Errors are also known as "error terms" in econometrics.) Autocorrelation of the errors violates the ordinary least squares assumption that the error terms are uncorrelated, meaning that the Gauss–Markov theorem does not apply, and that OLS estimators are no longer the Best Linear Unbiased Estimators (BLUE). While it does not bias the OLS coefficient estimates, the standard errors tend to be underestimated (and the t-scores overestimated) when the autocorrelations of the errors at low lags are positive.
The traditional test for the presence of first-order autocorrelation is the Durbin–Watson statistic or, if the explanatory variables include a lagged dependent variable, Durbin's h statistic. The Durbin–Watson statistic can, however, be linearly mapped to the Pearson correlation between values and their lags. A more flexible test, covering autocorrelation of higher orders and applicable whether or not the regressors include lags of the dependent variable, is the Breusch–Godfrey test. This involves an auxiliary regression, wherein the residuals obtained from estimating the model of interest are regressed on (a) the original regressors and (b) k lags of the residuals, where k is the order of the test. The simplest version of the test statistic from this auxiliary regression is $TR^2$, where $T$ is the sample size and $R^2$ is the coefficient of determination. Under the null hypothesis of no autocorrelation, this statistic is asymptotically distributed as $\chi^2$ with $k$ degrees of freedom.
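A numpy sketch of this auxiliary regression (the function name, the zero-padding of the initial lagged residuals, and the simulated data are illustrative; the regressor matrix X is assumed to include a constant column):

```python
import numpy as np
from scipy import stats

def breusch_godfrey(X, resid, k):
    """TR^2 form of the Breusch–Godfrey test: regress OLS residuals on
    the original regressors plus k lags of the residuals, then compare
    T * R^2 with a chi-squared distribution on k degrees of freedom."""
    T = len(resid)
    lags = np.column_stack([np.concatenate((np.zeros(j), resid[:T - j]))
                            for j in range(1, k + 1)])
    Z = np.column_stack([X, lags])
    beta = np.linalg.lstsq(Z, resid, rcond=None)[0]
    fitted = Z @ beta
    r2 = 1 - np.sum((resid - fitted) ** 2) / np.sum((resid - resid.mean()) ** 2)
    return T * r2, stats.chi2.sf(T * r2, df=k)   # statistic, p-value

rng = np.random.default_rng(4)
T = 500
X = np.column_stack([np.ones(T), rng.normal(size=T)])
e = np.zeros(T)
for t in range(1, T):                     # AR(1) errors: should reject
    e[t] = 0.5 * e[t - 1] + rng.normal()
y = X @ np.array([1.0, 2.0]) + e
resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
print(breusch_godfrey(X, resid, k=2))     # large statistic, tiny p-value
```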
Responses to nonzero autocorrelation include generalized least squares and the Newey–West HAC estimator (Heteroskedasticity and Autocorrelation Consistent).
In the estimation of a moving average model (MA), the autocorrelation function is used to determine the appropriate number of lagged error terms to be included. This is based on the fact that, for an MA process of order $q$, we have $\rho(k) \neq 0$ for $k = 1, \ldots, q$, and $\rho(k) = 0$ for $k > q$.
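A quick simulation illustrating this cutoff for an MA(2) process (the coefficients 0.6 and 0.3 are arbitrary): the sample autocorrelations are clearly nonzero through lag 2 and near zero beyond it.

```python
import numpy as np

rng = np.random.default_rng(5)
n, q = 20_000, 2
eps = rng.normal(size=n + q)
# MA(2): x[t] = eps[t] + 0.6*eps[t-1] + 0.3*eps[t-2]
x = eps[q:] + 0.6 * eps[1:-1] + 0.3 * eps[:-2]

xc = x - x.mean()
acf = [np.dot(xc[k:], xc[:n - k]) / np.dot(xc, xc) for k in range(6)]
print(np.round(acf, 2))   # roughly [1.0, 0.54, 0.21, 0.0, 0.0, 0.0]
```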
Applications
Autocorrelation analysis is used heavily in fluorescence correlation spectroscopy to provide quantitative insight into molecular-level diffusion and chemical reactions.
Another application of autocorrelation is the measurement of optical spectra and the measurement of very-short-duration light pulses produced by lasers, both using optical autocorrelators.
Autocorrelation is used to analyze dynamic light scattering data, which notably enables determination of the particle size distributions of nanometer-sized particles or micelles suspended in a fluid. A laser shining into the mixture produces a speckle pattern that results from the motion of the particles. Autocorrelation of the signal can be analyzed in terms of the diffusion of the particles. From this, knowing the viscosity of the fluid, the sizes of the particles can be calculated.
Autocorrelation is utilized in GPS to correct for the propagation delay, or time shift, between the point of time at the transmission of the carrier signal at the satellites and the point of time at the receiver on the ground. This is done by the receiver generating a replica signal of the 1,023-bit C/A (Coarse/Acquisition) code, and generating lines of code chips [−1, 1] in packets of ten at a time, or 10,230 chips (1,023 × 10), shifting slightly as it goes along in order to accommodate for the Doppler shift in the incoming satellite signal, until the receiver replica signal and the satellite signal codes match up.
The small-angle X-ray scattering intensity of a nanostructured system is the Fourier transform of the spatial autocorrelation function of the electron density.
In surface science and scanning probe microscopy, autocorrelation is used to establish a link between surface morphology and functional characteristics.
In optics, normalized autocorrelations and cross-correlations give the degree of coherence of an electromagnetic field.
In signal processing, autocorrelation can give information about repeating events like musical beats (for example, to determine tempo) or pulsar frequencies, though it cannot tell the position in time of the beat. It can also be used to estimate the pitch of a musical tone (see the sketch after this list).
In music recording, autocorrelation is used as a pitch detection algorithm prior to vocal processing, as a distortion effect or to eliminate undesired mistakes and inaccuracies.
Autocorrelation in space rather than time, via the Patterson function, is used by X-ray diffractionists to help recover the "Fourier phase information" on atom positions not available through diffraction alone.
In statistics, spatial autocorrelation between sample locations also helps one estimate mean value uncertainties when sampling a heterogeneous population.
The SEQUEST algorithm for analyzing mass spectra makes use of autocorrelation in conjunction with cross-correlation to score the similarity of an observed spectrum to an idealized spectrum representing a peptide.
In astrophysics, autocorrelation is used to study and characterize the spatial distribution of galaxies in the universe and in multi-wavelength observations of low-mass X-ray binaries.
In panel data, spatial autocorrelation refers to correlation of a variable with itself through space.
In analysis of Markov chain Monte Carlo data, autocorrelation must be taken into account for correct error determination.
In geosciences (specifically in geophysics) it can be used to compute an autocorrelation seismic attribute, out of a 3D seismic survey of the underground.
In medical ultrasound imaging, autocorrelation is used to visualize blood flow.
In intertemporal portfolio choice, the presence or absence of autocorrelation in an asset's rate of return can affect the optimal portion of the portfolio to hold in that asset.
Autocorrelation has been used to accurately measure power system frequency in numerical relays.
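As mentioned in the signal-processing item above, autocorrelation gives a simple pitch estimator: the lag of the first strong peak away from zero is the fundamental period. A sketch (the sample rate, the test tone, and the 500 Hz upper search bound are illustrative assumptions):

```python
import numpy as np

fs = 8000                                  # sample rate in Hz (assumed)
t = np.arange(2048) / fs
f0 = 220.0                                 # tone to recover
x = np.sin(2 * np.pi * f0 * t) + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)

r = np.correlate(x, x, mode='full')[len(x) - 1:]   # lags 0 .. n-1
min_lag = int(fs / 500)                    # ignore pitches above 500 Hz
peak = min_lag + np.argmax(r[min_lag:])    # first dominant peak
print(fs / peak)                           # approx 220 Hz
```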
Serial dependence
Serial dependence is closely linked to the notion of autocorrelation, but represents a distinct concept (see Correlation and dependence). In particular, it is possible to have serial dependence but no (linear) correlation. In some fields, however, the two terms are used as synonyms.
A time series of a random variable has serial dependence if the value at some time $t$ in the series is statistically dependent on the value at another time $s$. A series is serially independent if there is no dependence between any pair.

If a time series is stationary, then statistical dependence between the pair $(X_t, X_s)$ would imply that there is statistical dependence between all pairs of values at the same lag $\tau = s - t$.
See also
Autocorrelation matrix
Autocorrelation of a formal word
Autocorrelation technique
Autocorrelator
Cochrane–Orcutt estimation (transformation for autocorrelated error terms)
Correlation function
Correlogram
Cross-correlation
CUSUM
Fluorescence correlation spectroscopy
Optical autocorrelation
Partial autocorrelation function
Phylogenetic autocorrelation (Galton's problem)
Pitch detection algorithm
Prais–Winsten transformation
Scaled correlation
Triple correlation
Unbiased estimation of standard deviation
References
Further reading
Mojtaba Soltanalian, and Petre Stoica. "Computational design of sequences with good correlation properties." IEEE Transactions on Signal Processing, 60.5 (2012): 2180–2193.
Solomon W. Golomb, and Guang Gong. Signal design for good correlation: for wireless communication, cryptography, and radar. Cambridge University Press, 2005.
Klapetek, Petr (2018). Quantitative Data Processing in Scanning Probe Microscopy: SPM Applications for Nanometrology (Second ed.). Elsevier. pp. 108–112.
Signal processing
Time domain analysis
https://en.wikipedia.org/wiki/Aspartame
Aspartame is an artificial non-saccharide sweetener 200 times sweeter than sucrose and is commonly used as a sugar substitute in foods and beverages. It is a methyl ester of the aspartic acid/phenylalanine dipeptide with brand names NutraSweet, Equal, and Canderel. Aspartame was approved by the US Food and Drug Administration (FDA) in 1974, and then again in 1981, after approval was revoked in 1980.
Aspartame is one of the most studied food additives in the human food supply. Reviews by over 100 governmental regulatory bodies found the ingredient safe for consumption at the normal acceptable daily intake (ADI) limit.
Uses
Aspartame is around 180 to 200 times sweeter than sucrose (table sugar). Due to this property, even though aspartame produces roughly 4 kcal (17 kJ) of energy per gram when metabolized, about the same as sucrose, the quantity of aspartame needed to produce a sweet taste is so small that its caloric contribution is negligible. The sweetness of aspartame lasts longer than that of sucrose, so it is often blended with other artificial sweeteners such as acesulfame potassium to produce an overall taste more like that of sugar.
Like many other peptides, aspartame may hydrolyze (break down) into its constituent amino acids under conditions of elevated temperature or high pH. This makes aspartame undesirable as a baking sweetener and prone to degradation in products with a high pH, as required for a long shelf life. The stability of aspartame under heating can be improved to some extent by encasing it in fats or in maltodextrin. The stability when dissolved in water depends markedly on pH. At room temperature, it is most stable at pH 4.3, where its half-life is nearly 300 days. At pH 7, however, its half-life is only a few days. Most soft drinks have a pH between 3 and 5, where aspartame is reasonably stable. In products that may require a longer shelf life, such as syrups for fountain beverages, aspartame is sometimes blended with a more stable sweetener, such as saccharin.
Descriptive analyses of solutions containing aspartame report a sweet aftertaste as well as bitter and off-flavor aftertastes.
Acceptable levels of consumption
The acceptable daily intake (ADI) value for food additives, including aspartame, is defined as the "amount of a food additive, expressed on a body weight basis, that can be ingested daily over a lifetime without appreciable health risk". The Joint FAO/WHO Expert Committee on Food Additives (JECFA) and the European Commission's Scientific Committee on Food (later becoming EFSA) have determined this value is 40 mg/kg of body weight per day for aspartame, while the FDA has set its ADI for aspartame at 50 mg/kg per day, an amount equated to consuming 75 packets of commercial aspartame sweetener per day, to be within a safe upper limit.
The primary source for exposure to aspartame in the US is diet soft drinks, though it can be consumed in other products, such as pharmaceutical preparations, fruit drinks, and chewing gum among others in smaller quantities. A can of diet soda contains about 0.18 g of aspartame, and, for a 75 kg adult, it takes approximately 21 cans of diet soda daily to consume the 3.75 g of aspartame that would surpass the FDA's 50 mg/kg of body weight ADI of aspartame from diet soda alone.
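The cans-per-day figure is straightforward arithmetic; in the sketch below, the per-can content and body weight are assumptions chosen to be consistent with the numbers quoted above.

```python
# Rough ADI arithmetic (illustrative; per-can content and body weight
# are assumptions consistent with the figures quoted above).
adi_mg_per_kg = 50        # FDA acceptable daily intake
body_weight_kg = 75       # assumed adult body weight
mg_per_can = 180          # assumed aspartame content per can

daily_limit_mg = adi_mg_per_kg * body_weight_kg   # 3750 mg
print(daily_limit_mg / mg_per_can)                # about 20.8 cans
```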
Reviews have analyzed studies which have looked at the consumption of aspartame in countries worldwide, including the US, countries in Europe, and Australia, among others. These reviews have found that even the high levels of intake of aspartame, studied across multiple countries and different methods of measuring aspartame consumption, are well below the ADI for safe consumption of aspartame. Reviews have also found that populations that are believed to be especially high consumers of aspartame, such as children and diabetics, are below the ADI for safe consumption, even considering extreme worst-case scenario calculations of consumption.
In a report released on 10 December 2013, the EFSA said that, after an extensive examination of evidence, it ruled out the "potential risk of aspartame causing damage to genes and inducing cancer" and deemed the amount found in diet sodas safe to consume.
Safety and health effects
The safety of aspartame has been studied since its discovery, and it is a rigorously tested food ingredient. Aspartame has been deemed safe for human consumption by over 100 regulatory agencies in their respective countries, including the US Food and Drug Administration (FDA), UK Food Standards Agency, the European Food Safety Authority (EFSA), Health Canada, and Food Standards Australia New Zealand.
Metabolism and body weight
Reviews of clinical trials have shown that using aspartame (or other non-nutritive sweeteners) in place of sugar reduces calorie intake and body weight in adults and children. A 2017 review of metabolic effects by consuming aspartame found that it did not affect blood glucose, insulin, total cholesterol, triglycerides, calorie intake, or body weight. While high-density lipoprotein levels were higher compared to control, they were lower compared to sucrose.
In 2023, the World Health Organization recommended against the use of common non-saccharide sweeteners (NSS), including aspartame, to control body weight or lower the risk of non-communicable diseases, stating: "The recommendation is based on the findings of a systematic review of the available evidence which suggests that use of NSS does not confer any long-term benefit in reducing body fat in adults or children. Results of the review also suggest that there may be potential undesirable effects from long-term use of NSS, such as an increased risk of type 2 diabetes, cardiovascular diseases, and mortality in adults."
Phenylalanine
High levels of the naturally occurring essential amino acid phenylalanine are a health hazard to those born with phenylketonuria (PKU), a rare inherited disease that prevents phenylalanine from being properly metabolized. Because aspartame contains a small amount of phenylalanine, foods containing aspartame sold in the US must state: "Phenylketonurics: Contains Phenylalanine" on product labels.
In the UK, foods that contain aspartame are required by the Food Standards Agency to list the substance as an ingredient, with the warning "Contains a source of phenylalanine". Manufacturers are also required to print "with sweetener(s)" on the label close to the main product name on foods that contain "sweeteners such as aspartame" or "with sugar and sweetener(s)" on "foods that contain both sugar and sweetener".
In Canada, foods that contain aspartame are required to list aspartame among the ingredients, include the amount of aspartame per serving, and state that the product contains phenylalanine.
Phenylalanine is one of the essential amino acids and is required for normal growth and maintenance of life. Concerns about the safety of phenylalanine from aspartame for those without phenylketonuria center largely on hypothetical changes in neurotransmitter levels as well as ratios of neurotransmitters to each other in the blood and brain that could lead to neurological symptoms. Reviews of the literature have found no consistent findings to support such concerns, and, while high doses of aspartame consumption may have some biochemical effects, these effects are not seen in toxicity studies to suggest aspartame can adversely affect neuronal function. As with methanol and aspartic acid, common foods in the typical diet, such as milk, meat, and fruits, will lead to ingestion of significantly higher amounts of phenylalanine than would be expected from aspartame consumption.
Cancer
Regulatory agencies, including the FDA and EFSA, and the US National Cancer Institute, have concluded that consuming aspartame is safe in amounts within acceptable daily intake levels and does not cause cancer. These conclusions are based on various sources of evidence, such as reviews and epidemiological studies finding no association between aspartame and cancer.
In July 2023, scientists for the International Agency for Research on Cancer (IARC) concluded that there was "limited evidence" for aspartame causing cancer in humans, classifying the sweetener as Group 2B (possibly carcinogenic). The lead investigator of the IARC report stated that the classification "shouldn't really be taken as a direct statement that indicates that there is a known cancer hazard from consuming aspartame. This is really more of a call to the research community to try to better clarify and understand the carcinogenic hazard that may or may not be posed by aspartame consumption."
The Joint FAO/WHO Expert Committee on Food Additives (JECFA) added that the limited cancer assessment indicated no reason to change the recommended acceptable daily intake level of 40 mg per kg of body weight per day, reaffirming the safety of consuming aspartame within this limit.
The FDA responded to the report by stating that it disagreed with the IARC classification.
Neurotoxicity symptoms
Reviews found no evidence that low doses of aspartame would plausibly lead to neurotoxic effects. A review of studies on children did not show any significant findings for safety concerns with regard to neuropsychiatric conditions such as panic attacks, mood changes, hallucinations, attention deficit hyperactivity disorder (ADHD), or seizures by consuming aspartame.
Headaches
Reviews have found little evidence to indicate that aspartame induces headaches, although certain subsets of consumers may be sensitive to it.
Water quality
Aspartame passes through wastewater treatment plants mainly unchanged.
Mechanism of action
The perceived sweetness of aspartame (and other sweet substances like acesulfame potassium) in humans is due to its binding of the heterodimer G protein-coupled receptor formed by the proteins TAS1R2 and TAS1R3. Aspartame is not recognized by rodents due to differences in the taste receptors.
Metabolites
Aspartame is rapidly hydrolyzed in the small intestine by digestive enzymes which break aspartame down into methanol, phenylalanine, aspartic acid, and further metabolites, such as formaldehyde and formic acid. Due to its rapid and complete metabolism, aspartame is not found in circulating blood, even following ingestion of high doses over 200 mg/kg.
Aspartic acid
Aspartic acid (aspartate) is one of the most common amino acids in the typical diet. As with methanol and phenylalanine, intake of aspartic acid from aspartame is less than would be expected from other dietary sources. At the 90th percentile of intake, aspartame provides only between 1% and 2% of the daily intake of aspartic acid.
Methanol
The methanol produced by aspartame metabolism is unlikely to be a safety concern for several reasons. The amount of methanol produced from aspartame-sweetened foods and beverages is likely to be less than that from food sources already in diets. With regard to formaldehyde, it is rapidly converted in the body, and the amounts of formaldehyde from the metabolism of aspartame are trivial when compared to the amounts produced routinely by the human body and from other foods and drugs. At the highest expected human doses of consumption of aspartame, there are no increased blood levels of methanol or formic acid, and ingesting aspartame at the 90th percentile of intake would produce 25 times less methanol than what would be considered toxic.
Chemistry
Aspartame is a methyl ester of the dipeptide of the natural amino acids L-aspartic acid and L-phenylalanine. Under strongly acidic or alkaline conditions, aspartame may generate methanol by hydrolysis. Under more severe conditions, the peptide bonds are also hydrolyzed, resulting in free amino acids.
Two approaches to synthesis are used commercially. In the chemical synthesis, the two carboxyl groups of aspartic acid are joined into an anhydride, and the amino group is protected with a formyl group as the formamide, by treatment of aspartic acid with a mixture of formic acid and acetic anhydride. Phenylalanine is converted to its methyl ester and combined with the N-formyl aspartic anhydride; then the protecting group is removed from aspartic nitrogen by acid hydrolysis. The drawback of this technique is that a byproduct, the bitter-tasting β-form, is produced when the wrong carboxyl group from aspartic acid anhydride links to phenylalanine, with desired and undesired isomer forming in a 4:1 ratio. A process using an enzyme from Bacillus thermoproteolyticus to catalyze the condensation of the chemically altered amino acids will produce high yields without the β-form byproduct. A variant of this method, which has not been used commercially, uses unmodified aspartic acid but produces low yields. Methods for directly producing aspartyl-phenylalanine by enzymatic means, followed by chemical methylation, have also been tried but not scaled for industrial production.
History
Aspartame was discovered in 1965 by James M. Schlatter, a chemist working for G.D. Searle & Company. Schlatter had synthesized aspartame as an intermediate step in generating a tetrapeptide of the hormone gastrin, for use in assessing an anti-ulcer drug candidate. He discovered its sweet taste when he licked his finger, which had become contaminated with aspartame, to lift up a piece of paper. Torunn Atteraas Garin participated in the development of aspartame as an artificial sweetener.
In 1975, prompted by issues regarding Flagyl and Aldactone, an FDA task force team reviewed 25 studies submitted by the manufacturer, including 11 on aspartame. The team reported "serious deficiencies in Searle's operations and practices". The FDA sought to authenticate 15 of the submitted studies against the supporting data. In 1979, the Center for Food Safety and Applied Nutrition (CFSAN) concluded, since many problems with the aspartame studies were minor and did not affect the conclusions, the studies could be used to assess aspartame's safety.
In 1980, the FDA convened a Public Board of Inquiry (PBOI) consisting of independent advisors charged with examining the purported relationship between aspartame and brain cancer. The PBOI concluded aspartame does not cause brain damage, but it recommended against approving aspartame at that time, citing unanswered questions about cancer in laboratory rats.
In 1983, the FDA approved aspartame for use in carbonated beverages, and in 1993 for use in other beverages, baked goods, and confections. In 1996, the FDA removed all restrictions from aspartame, allowing it to be used in all foods. As of May 2023, the FDA stated that it regards aspartame as a safe food ingredient when consumed within the acceptable daily intake level of 50 mg per kg of body weight per day.
Several European Union countries approved aspartame in the 1980s, with EU-wide approval in 1994. The Scientific Committee on Food (SCF) reviewed subsequent safety studies and reaffirmed the approval in 2002. The European Food Safety Authority (EFSA) reported in 2006 that the previously established acceptable daily intake was appropriate, after reviewing yet another set of studies.
Compendial status
British Pharmacopoeia
United States Pharmacopeia
Commercial uses
Under the brand names Equal, NutraSweet, and Canderel, aspartame is an ingredient in approximately 6,000 consumer foods and beverages sold worldwide, including (but not limited to) diet sodas and other soft drinks, instant breakfasts, breath mints, cereals, sugar-free chewing gum, cocoa mixes, frozen desserts, gelatin desserts, juices, laxatives, chewable vitamin supplements, milk drinks, pharmaceutical drugs and supplements, shake mixes, tabletop sweeteners, teas, instant coffees, topping mixes, wine coolers, and yogurt. It is provided as a table condiment in some countries. Aspartame is less suitable for baking than other sweeteners because it breaks down when heated and loses much of its sweetness.
NutraSweet Company
In 1985, Monsanto bought G.D. Searle, and the aspartame business became a separate Monsanto subsidiary, NutraSweet. In March 2000, Monsanto sold it to J.W. Childs Associates Equity Partners II L.P. European use patents on aspartame expired starting in 1987, and the US patent expired in 1992.
Ajinomoto
Many aspects of the industrial synthesis of aspartame were established by Ajinomoto. In 2004, Ajinomoto, the world's largest aspartame manufacturer, held a 40% share of the aspartame market, and consumption of the product was rising by 2% a year. Ajinomoto acquired its aspartame business in 2000 from Monsanto for $67 million.
In 2007, Asda was the first British supermarket chain to remove all artificial flavourings and colours in its store brand foods. In 2008, Ajinomoto sued Asda, part of Walmart, for a malicious falsehood action concerning its aspartame product when the substance was listed as excluded from the chain's product line, along with other "nasties". In July 2009, a British court ruled in favor of Asda. In June 2010, an appeals court reversed the decision, allowing Ajinomoto to pursue a case against Asda to protect aspartame's reputation. Asda said that it would continue to use the term "no nasties" on its own-label products, but the suit was settled in 2011 with Asda choosing to remove references to aspartame from its packaging.
In November 2009, Ajinomoto announced a new brand name for its aspartame sweetener—AminoSweet.
Holland Sweetener Company
A joint venture of DSM and Tosoh, the Holland Sweetener Company manufactured aspartame using the enzymatic process developed by Toyo Soda (Tosoh) and sold as the brand Sanecta. Additionally, they developed a combination aspartame-acesulfame salt under the brand name Twinsweet. They left the sweetener industry in 2006, because "global aspartame markets are facing structural oversupply, which has caused worldwide strong price erosion over the last five years", making the business "persistently unprofitable".
Competing products
Because sucralose, unlike aspartame, retains its sweetness after being heated, and has at least twice the shelf life of aspartame, it has become more popular as an ingredient. This, along with differences in marketing and changing consumer preferences, caused aspartame to lose market share to sucralose, which is roughly three times sweeter by weight.
See also
Alitame
Aspartame controversy
Neotame
Phenylalanine ammonia lyase
Stevia
References
External links
Amino acid derivatives
Aromatic compounds
Beta-Amino acids
Butyramides
Dipeptides
Carboxylate esters
Sugar substitutes
Methyl esters
E-number additives