Clade
A clade (from Ancient Greek "klados", "branch"), also known as a monophyletic group, is a group of organisms that consists of a common ancestor and all its lineal descendants. Rather than the English term, the equivalent Latin term "cladus" (plural "cladi") is often used in taxonomical literature.
The common ancestor may be an individual, a population, a species (extinct or extant), and so on right up to a kingdom and further. Clades are nested, one in another, as each branch in turn splits into smaller branches. These splits reflect evolutionary history as populations diverged and evolved independently. Clades are termed monophyletic (Greek: "one clan") groups.
Over the last few decades, the cladistic approach has revolutionized biological classification and revealed surprising evolutionary relationships among organisms. Increasingly, taxonomists try to avoid naming taxa that are not clades; that is, taxa that are not monophyletic. Some of the relationships between organisms that the molecular biology arm of cladistics has revealed are that fungi are closer relatives to animals than they are to plants, archaea are now considered different from bacteria, and multicellular organisms may have evolved from archaea.
The term "clade" was coined in 1957 by the biologist Julian Huxley to refer to the result of cladogenesis, the evolutionary splitting of a parent species into two distinct species, a concept Huxley borrowed from Bernhard Rensch.
Many commonly named groups, rodents and insects for example, are clades because, in each case, the group consists of a common ancestor with all its descendant branches. Rodents, for example, are a branch of mammals that split off after the clade Dinosauria ceased to be the dominant terrestrial vertebrates 66 million years ago. The original population and all its descendants are a clade. The rodent clade corresponds to the order Rodentia, and insects to the class Insecta. These clades include smaller clades, such as chipmunks or ants, each of which consists of even smaller clades. The clade "rodent" is in turn included in the mammal, vertebrate and animal clades.
The idea of a clade did not exist in pre-Darwinian Linnaean taxonomy, which was based by necessity only on internal or external morphological similarities between organisms – although as it happens, many of the better known animal groups in Linnaeus' original Systema Naturae (notably among the vertebrate groups) do represent clades. The phenomenon of convergent evolution is, however, responsible for many cases where there are misleading similarities in the morphology of groups that evolved from different lineages.
With the increasing realization in the first half of the 19th century that species had changed and split through the ages, classification increasingly came to be seen as branches on the evolutionary tree of life. The publication of Darwin's theory of evolution in 1859 gave this view increasing weight. Thomas Henry Huxley, an early advocate of evolutionary theory, proposed a revised taxonomy based on a concept strongly resembling clades, although the term "clade" itself would not be coined until 1957 by his grandson, Julian Huxley. For example, the elder Huxley grouped birds with reptiles, based on fossil evidence.
German biologist Emil Hans Willi Hennig (1913 – 1976) is considered to be the founder of cladistics.
He proposed a classification system that represented repeated branchings of the family tree, as opposed to the previous systems, which put organisms on a "ladder", with supposedly more "advanced" organisms at the top.
Taxonomists have increasingly worked to make the taxonomic system reflect evolution. When it comes to naming, however, this principle is not always compatible with the traditional rank-based nomenclature (in which only taxa associated with a rank can be named) because there are not enough ranks to name a long series of nested clades. For these and other reasons, phylogenetic nomenclature has been developed; it is still controversial.
A clade is by definition monophyletic, meaning that it contains one ancestor (which can be an organism, a population, or a species) and all its descendants. The ancestor can be known or unknown; any and all members of a clade can be extant or extinct.
The science that tries to reconstruct phylogenetic trees and thus discover clades is called phylogenetics or cladistics, the latter term coined by Ernst Mayr (1965), derived from "clade". The results of phylogenetic/cladistic analyses are tree-shaped diagrams called "cladograms"; they, and all their branches, are phylogenetic hypotheses.
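The nesting of clades described above maps naturally onto a tree data structure: a clade is simply the subtree rooted at a chosen ancestor. A minimal sketch (the toy phylogeny and names here are illustrative only, not a real classification):

```python
from typing import Dict, List, Set

# Parent -> children mapping for a toy phylogeny (hypothetical, simplified).
TREE: Dict[str, List[str]] = {
    "animal": ["vertebrate", "insect"],
    "vertebrate": ["mammal", "bird"],
    "mammal": ["rodent", "primate"],
    "rodent": [], "primate": [], "bird": [], "insect": [],
}

def clade(ancestor: str) -> Set[str]:
    """Return the ancestor together with ALL its descendants (a monophyletic group)."""
    members = {ancestor}
    for child in TREE.get(ancestor, []):
        members |= clade(child)
    return members

print(sorted(clade("mammal")))  # ['mammal', 'primate', 'rodent']
```

Because clades are nested, the clade of any descendant is always a subset of the clade of its ancestor, which the function makes easy to verify.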
Three methods of defining clades are featured in phylogenetic nomenclature: node-, stem-, and apomorphy-based (see Phylogenetic nomenclature § Phylogenetic definitions of clade names for detailed definitions).
The relationship between clades can be described in several ways:
"Clade" is the title of a novel by James Bradley, who chose it both because of its biological meaning and also because of the larger implications of the word.
An episode of "Elementary" is titled "Dead Clade Walking" and deals with a case involving a rare fossil.
Communications in Afghanistan
Communications in Afghanistan are under the control of the Ministry of Communications and Information Technology (MCIT). The sector has expanded rapidly since the Karzai administration took over in late 2001, with the launch of wireless companies, internet services, radio stations and television channels.
The Afghan government signed a $64.5 million agreement in 2006 with China's ZTE on the establishment of a countrywide optical fiber cable network. The project began to improve telephone, internet, television and radio broadcast services throughout Afghanistan. About 90% of the country's population had access to communication services in 2014.
Afghanistan uses its own space satellite called Afghansat 1. There are about 18 million mobile phone users in the country. Telecom companies include Afghan Telecom, Afghan Wireless, Etisalat, MTN, Roshan, Salaam and a few others. Over 60% of the population have access to the internet.
There are about 32 million GSM mobile phone subscribers in Afghanistan as of 2016, with over 114,192 fixed telephone lines and over 264,000 CDMA subscribers. Mobile communications have improved because of the introduction of wireless carriers into this developing country. The first was Afghan Wireless, a US-based company founded by Ehsan Bayat. The second was Roshan, which began providing services to all major cities within Afghanistan. There are also a number of VSAT stations in major cities such as Kabul, Kandahar, Herat, Mazari Sharif, and Jalalabad, providing international and domestic voice/data connectivity. The international calling code for Afghanistan is +93. The following is a partial list of mobile phone companies in the country:
All the companies providing communication services are obligated to deliver 2.5% of their income to the communication development fund annually. According to the Ministry of Communication and Information Technology there are 4760 active towers throughout the country which covers 85% of the population. The Ministry of Communication and Information Technology plans to expand its services in remote parts of the country where the remaining 15% of the population will be covered with the installation of 700 new towers.
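The levy and coverage figures above reduce to simple arithmetic; a quick sketch (the operator income used here is hypothetical):

```python
# Operators pay 2.5% of annual income into the communication development fund.
LEVY_RATE = 0.025

def fund_contribution(annual_income: float) -> float:
    """Annual levy owed by an operator (the income figure is hypothetical)."""
    return annual_income * LEVY_RATE

print(fund_contribution(10_000_000))  # 250000.0

# Roughly how many towers serve each percentage point of population coverage:
towers, coverage_pct = 4760, 85
print(round(towers / coverage_pct))  # 56
```

By the same ratio, the planned 700 additional towers for the remaining 15% of the population implies somewhat sparser infrastructure per percentage point, consistent with those areas being remote.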
Phone calls in Afghanistan have been monitored by the National Security Agency according to WikiLeaks.
According to a three-year duopoly agreement between the MCIT and mobile operators AWCC and Roshan, no mobile operator could enter the Afghan telecom market until July 2006. The third GSM license was awarded to Areeba in September 2005 for a period of 15 years and a total license fee of $40.1 million. Areeba was a subsidiary of the Lebanon-based firm Investcom in consortium with Alokozai-FZE. After commencing services in July 2006, Areeba had an estimated subscribership of 200,000 by the end of that year. Areeba was later acquired by the South African-based Mobile Telephone Network (MTN) in mid-2007 as part of a $5.53 billion global merger between the two companies. MTN-Afghanistan is a subsidiary of the South African-based MTN Group, a multinational telecommunications company operating across the Middle East and Africa. MTN is the majority (90%) shareholder, while International Finance Corporation (IFC) at 9% is also a debt and equity shareholder of MTN-Afghanistan. MTN operates in the 900–1800 MHz GSM bands, and as of 2012 has 4.5 million subscribers and service coverage in most major cities, 464 districts, and all 34 provincial capitals. With over $400 million in total investment, MTN offers mobile voice, SMS, MMS, SRS, GPRS, fax, voicemail and PCO services through prepaid, postpaid and corporate tariffs.
MTN has interconnection agreements with all national telecom operators and provides international voice and SMS roaming in 121 countries and across 227 operators through prepaid and postpaid roaming tariffs. MTN also has a national ISP license which the company received in November 2008. MTN was the first company to introduce the popular per-second billing system in the country (also known as "pay as you talk") allowing its subscribers to transparently track their talk-time and receive billing summaries via SMS. The scheme was so popular that other GSM companies quickly adopted this method.
Afghanistan was given legal control of the ".af" domain in 2003, and the Afghanistan Network Information Center (AFGNIC) was established to administer domain names. As of 2016, there are at least 55 internet service providers (ISPs) in the country, and internet use has grown rapidly, with over 5 million users as of 2016.
According to the Ministry of Communications, the following are some of the different ISPs operating in Afghanistan:
There are over 106 television operators in Afghanistan and 320 television transmitters, many of which are based in Kabul, while others broadcast from other provinces. Selected foreign channels are also shown to the public in Afghanistan, but with the use of the internet, over 3,500 international TV channels may be accessed in Afghanistan.
There are an estimated 150 FM radio operators throughout the country. Broadcasts are in Dari, Pashto, English, Uzbeki and a number of other languages.
Radio listenership is generally decreasing, slowly being overtaken by television. Of Afghanistan's six main cities, Kandahar and Khost have the largest numbers of radio listeners. Kabul and Jalalabad have a moderate number of listeners, while Mazar-e-Sharif and especially Herat have very few.
In 1870, a central post office was established at Bala Hissar in Kabul, along with a post office in the capital of each province. The service was slowly expanded over the years, and more post offices were established in each large city by 1918. Afghanistan became a member of the Universal Postal Union in 1928, and the postal administration was elevated to the Ministry of Communication in 1934. Civil war disrupted the issuing of official stamps during the 1980s and 1990s, but by 1999 the postal service was operating again. Postal services to/from Kabul worked remarkably well all throughout the war years. Postal services to/from Herat resumed in 1997. The Afghan government has reported to the UPU several times about illegal stamps being issued and sold in 2003 and 2007.
Afghanistan Post has been reorganizing the postal service in the 2000s with assistance from Pakistan Post. The Afghanistan Postal commission was formed to prepare a written policy for the development of the postal sector, which will form the basis of a new postal services law governing licensing of postal service providers. The project was expected to finish by 2008.
In January 2014 the Afghan Ministry of Communications and Information Technology signed an agreement with Eutelsat for the use of satellite resources to enhance deployment of Afghanistan's national broadcasting and telecommunications infrastructure as well as its international connectivity. Afghansat 1 was officially launched in May 2014, with expected service for at least seven years in Afghanistan. The Afghan government plans to launch Afghansat 2 after the lease of Afghansat 1 ends.
Coca-Cola
Coca-Cola, or Coke, is a carbonated soft drink manufactured by The Coca-Cola Company. Originally marketed as a temperance drink and intended as a patent medicine, it was invented in the late 19th century by John Stith Pemberton and was bought out by businessman Asa Griggs Candler, whose marketing tactics led Coca-Cola to its dominance of the world soft-drink market throughout the 20th century. The drink's name refers to two of its original ingredients: coca leaves, and kola nuts (a source of caffeine). The current formula of Coca-Cola remains a trade secret; however, a variety of reported recipes and experimental recreations have been published.
The Coca-Cola Company produces concentrate, which is then sold to licensed Coca-Cola bottlers throughout the world. The bottlers, who hold exclusive territory contracts with the company, produce the finished product in cans and bottles from the concentrate, in combination with filtered water and sweeteners. A typical can contains 38 grams of sugar (usually in the form of high fructose corn syrup). The bottlers then sell, distribute, and merchandise Coca-Cola to retail stores, restaurants, and vending machines throughout the world. The Coca-Cola Company also sells concentrate for soda fountains of major restaurants and foodservice distributors.
The Coca-Cola Company has on occasion introduced other cola drinks under the Coke name. The most common of these is Diet Coke, along with others including Caffeine-Free Coca-Cola, Diet Coke Caffeine-Free, Coca-Cola Zero Sugar, Coca-Cola Cherry, Coca-Cola Vanilla, and special versions with lemon, lime, and coffee. Coca-Cola was called Coca-Cola Classic from July 1985 to 2009, to distinguish it from "New Coke". Based on Interbrand's "best global brand" study of 2015, Coca-Cola was the world's third most valuable brand, after Apple and Google. In 2013, Coke products were sold in over 200 countries worldwide, with consumers drinking more than 1.8 billion company beverage servings each day. Coca-Cola ranked No. 87 in the 2018 "Fortune" 500 list of the largest United States corporations by total revenue.
Confederate Colonel John Pemberton, who was wounded in the American Civil War and became addicted to morphine, began a quest to find a substitute for the problematic drug. In 1885 at Pemberton's Eagle Drug and Chemical House, a drugstore in Columbus, Georgia, he registered Pemberton's French Wine Coca nerve tonic. Pemberton's tonic may have been inspired by the formidable success of Vin Mariani, a French-Corsican coca wine, but his recipe additionally included the African kola nut, the beverage's source of caffeine.
It is also worth noting that a Spanish drink called "Kola Coca" was presented at a contest in Philadelphia in 1885, a year before the official birth of Coca-Cola. The rights for this Spanish drink were bought by Coca-Cola in 1953.
In 1886, when Atlanta and Fulton County passed prohibition legislation, Pemberton responded by developing Coca-Cola, a nonalcoholic version of Pemberton's French Wine Coca. It was marketed as "Coca-Cola: The temperance drink", which appealed to many people as the temperance movement enjoyed wide support during this time. The first sales were at Jacob's Pharmacy in Atlanta, Georgia, on May 8, 1886, where it initially sold for five cents a glass. Drugstore soda fountains were popular in the United States at the time due to the belief that carbonated water was good for the health, and Pemberton's new drink was marketed and sold as a patent medicine, Pemberton claiming it a cure for many diseases, including morphine addiction, indigestion, nerve disorders, headaches, and impotence. Pemberton ran the first advertisement for the beverage on May 29 of the same year in the "Atlanta Journal".
By 1888, three versions of Coca-Cola – sold by three separate businesses – were on the market. A co-partnership had been formed on January 14, 1888, between Pemberton and four Atlanta businessmen: J.C. Mayfield, A.O. Murphey, C.O. Mullahy, and E.H. Bloodworth. The arrangement was not codified by any signed document; in a verbal statement given years later, Asa Candler asserted under testimony that he had acquired a stake in Pemberton's company as early as 1887. John Pemberton declared that the "name" "Coca-Cola" belonged to his son, Charley, but the other two manufacturers could continue to use the "formula".
Charley Pemberton's record of control over the "Coca-Cola" name was the underlying factor that allowed him to participate as a major shareholder in the March 1888 Coca-Cola Company incorporation filing made in his father's place. Charley's exclusive control over the "Coca-Cola" name became a continual thorn in Asa Candler's side. Candler's oldest son, Charles Howard Candler, authored a book in 1950 published by Emory University. In this definitive biography about his father, Candler specifically states: "... on April 14, 1888, the young druggist Asa Griggs Candler purchased a one-third interest in the formula of an almost completely unknown proprietary elixir known as Coca-Cola." The deal was actually between John Pemberton's son Charley and Walker, Candler & Co. – with John Pemberton acting as cosigner for his son. For $50 down and $500 in 30 days, Walker, Candler & Co. obtained all of the one-third interest in the Coca-Cola Company that Charley held, all while Charley still held on to the name. After the April 14 deal, on April 17, 1888, one-half of the Walker/Dozier interest shares were acquired by Candler for an additional $750.
In 1892, Candler set out to incorporate a second company; "The Coca-Cola Company" (the current corporation). When Candler had the earliest records of the "Coca-Cola Company" destroyed in 1910, the action was claimed to have been made during a move to new corporation offices around this time.
After Candler had gained a better foothold on Coca-Cola in April 1888, he nevertheless was forced to sell the beverage he produced with the recipe he had under the names "Yum Yum" and "Koke". This was while Charley Pemberton was selling the elixir, although a cruder mixture, under the name "Coca-Cola", all with his father's blessing. After both names failed to catch on for Candler, by the middle of 1888, the Atlanta pharmacist was quite anxious to establish a firmer legal claim to Coca-Cola, and hoped he could force his two competitors, Walker and Dozier, completely out of the business, as well.
John Pemberton died suddenly on August 16, 1888. Asa Candler then decided to move swiftly forward to attain full control of the entire Coca-Cola operation.
Charley Pemberton, an alcoholic and opium addict, unnerved Asa Candler more than anyone else. Candler is said to have quickly maneuvered to purchase the exclusive rights to the name "Coca-Cola" from Pemberton's son Charley immediately after he learned of Dr. Pemberton's death. One of several stories states that Candler approached Charley's mother at John Pemberton's funeral and offered her $300 in cash for the title to the name. Charley Pemberton was found on June 23, 1894, unconscious, with a stick of opium by his side. Ten days later, Charley died at Atlanta's Grady Hospital at the age of 40.
In Charles Howard Candler's 1950 book about his father, he stated: "On August 30 [1888], he [Asa Candler] became sole proprietor of Coca-Cola, a fact which was stated on letterheads, invoice blanks and advertising copy."
With this action on August 30, 1888, Candler's sole control became technically true. Candler had negotiated with Margaret Dozier and her brother Woolfolk Walker a full payment amounting to $1,000, which all agreed Candler could pay off with a series of notes over a specified time span. By May 1, 1889, Candler was claiming full ownership of the Coca-Cola beverage, with a total investment outlay by Candler for the drink enterprise over the years amounting to $2,300.
In 1914, Margaret Dozier, as co-owner of the original Coca-Cola Company in 1888, came forward to claim that her signature on the 1888 Coca-Cola Company bill of sale had been forged. Subsequent analysis of other similar transfer documents had also indicated John Pemberton's signature had most likely been forged as well, which some accounts claim was precipitated by his son Charley.
On September 12, 1919, Coca-Cola Co. was purchased by a group of investors for $25 million and reincorporated in Delaware. The company publicly offered 500,000 shares of the company for $40 a share.
In 1986, The Coca-Cola Company merged with two of their bottling operators (owned by JTL Corporation and BCI Holding Corporation) to form Coca-Cola Enterprises Inc. (CCE).
In December 1991, Coca-Cola Enterprises merged with the Johnston Coca-Cola Bottling Group, Inc.
The first bottling of Coca-Cola occurred in Vicksburg, Mississippi, at the Biedenharn Candy Company on March 12, 1894. The proprietor of the bottling works was Joseph A. Biedenharn. The original bottles were Hutchinson bottles, very different from the much later hobble-skirt design of 1915 now so familiar.
A few years later two entrepreneurs from Chattanooga, Tennessee, namely Benjamin F. Thomas and Joseph B. Whitehead, proposed the idea of bottling and were so persuasive that Candler signed a contract giving them control of the procedure for only one dollar. Candler never collected his dollar, but in 1899, Chattanooga became the site of the first Coca-Cola bottling company. Candler remained very content just selling his company's syrup. The loosely termed contract proved to be problematic for The Coca-Cola Company for decades to come. Legal matters were not helped by the decision of the bottlers to subcontract to other companies, effectively becoming parent bottlers. This contract specified that bottles would be sold at 5¢ each and had no fixed duration, leading to the fixed price of Coca-Cola from 1886 to 1959.
The first outdoor wall advertisement that promoted the Coca-Cola drink was painted in 1894 in Cartersville, Georgia. Cola syrup was sold as an over-the-counter dietary supplement for upset stomach. By the time of its 50th anniversary, the soft drink had reached the status of a national icon in the US. In 1935, it was certified kosher by Atlanta Rabbi Tobias Geffen with the help of Harold Hirsch. Geffen was the first person to see the top-secret ingredients list after facing scrutiny from the American Jewish population regarding the drink's kosher status; consequently, the company made minor changes in the sourcing of some ingredients so the drink could continue to be consumed by America's Jewish population, including during Passover.
The longest running commercial Coca-Cola soda fountain anywhere was Atlanta's Fleeman's Pharmacy, which first opened its doors in 1914. Jack Fleeman took over the pharmacy from his father and ran it until 1995, closing it after 81 years. On July 12, 1944, the one-billionth gallon of Coca-Cola syrup was manufactured by The Coca-Cola Company. Cans of Coke first appeared in 1955.
On April 23, 1985, Coca-Cola, amid much publicity, attempted to change the formula of the drink with "New Coke". Follow-up taste tests revealed most consumers preferred the taste of New Coke to both Coke and Pepsi but Coca-Cola management was unprepared for the public's nostalgia for the old drink, leading to a backlash. The company gave in to protests and returned to the old formula under the name Coca-Cola Classic, on July 10, 1985. "New Coke" remained available and was renamed Coke II in 1992; it was discontinued in 2002.
On July 5, 2005, it was revealed that Coca-Cola would resume operations in Iraq for the first time since the Arab League boycotted the company in 1968.
In April 2007, in Canada, the name "Coca-Cola Classic" was changed back to "Coca-Cola". The word "Classic" was removed because "New Coke" was no longer in production, eliminating the need to differentiate between the two. The formula remained unchanged. In January 2009, Coca-Cola stopped printing the word "Classic" on the labels of bottles sold in parts of the southeastern United States. The change is part of a larger strategy to rejuvenate the product's image. The word "Classic" was removed from all Coca-Cola products by 2011.
In November 2009, due to a dispute over wholesale prices of Coca-Cola products, Costco stopped restocking its shelves with Coke and Diet Coke for two months; a separate pouring rights deal in 2013 saw Coke products removed from Costco food courts in favor of Pepsi. Some Costco locations (such as the ones in Tucson, Arizona) additionally sell imported Coca-Cola from Mexico with cane sugar instead of corn syrup from separate distributors. Coca-Cola introduced the 7.5-ounce mini-can in 2009, and on September 22, 2011, the company announced price reductions, asking retailers to sell eight-packs for $2.99. That same day, Coca-Cola announced the 12.5-ounce bottle, to sell for 89 cents. A 16-ounce bottle has sold well at 99 cents since being re-introduced, but the price was going up to $1.19.
In 2012, Coca-Cola resumed business in Myanmar after 60 years of absence due to U.S.-imposed investment sanctions against the country. Coca-Cola's bottling plant will be located in Yangon and is part of the company's five-year plan and $200 million investment in Myanmar. Coca-Cola with its partners is to invest US$5 billion in its operations in India by 2020. In 2013, it was announced that Coca-Cola Life would be introduced in Argentina and other parts of the world that would contain stevia and sugar. However, the drink was discontinued in Britain in June 2017.
A typical can of Coca-Cola (12 fl ounces/355 ml) contains 38 grams of sugar (usually in the form of HFCS), 50 mg of sodium, 0 grams fat, 0 grams potassium, and 140 calories. On May 5, 2014, Coca-Cola said it is working to remove a controversial ingredient, brominated vegetable oil, from all of its drinks.
The exact formula of Coca-Cola's natural flavorings (but not its other ingredients, which are listed on the side of the bottle or can) is a trade secret. The original copy of the formula was held in SunTrust Bank's main vault in Atlanta for 86 years. Its predecessor, the Trust Company, was the underwriter for the Coca-Cola Company's initial public offering in 1919. On December 8, 2011, the original secret formula was moved from the vault at SunTrust Banks to a new vault at the company's World of Coca-Cola museum in downtown Atlanta, where it is on display for visitors.
According to Snopes, a popular myth states that only two executives have access to the formula, with each executive having only half the formula. However, several sources state that while Coca-Cola does have a rule restricting access to only two executives, each knows the entire formula and others, in addition to the prescribed duo, have known the formulation process.
On February 11, 2011, Ira Glass said on his PRI radio show, "This American Life", that "TAL" staffers had found a recipe in "Everett Beal's Recipe Book", reproduced in the February 28, 1979, issue of "The Atlanta Journal-Constitution", that they believed was either Pemberton's original formula for Coca-Cola, or a version that he made either before or after the product hit the market in 1886. The formula basically matched the one found in Pemberton's diary. Coca-Cola archivist Phil Mooney acknowledged that the recipe "could ... be a precursor" to the formula used in the original 1886 product, but emphasized that Pemberton's original formula is not the same as the one used in the current product.
When launched, Coca-Cola's two key ingredients were cocaine and caffeine. The cocaine was derived from the coca leaf and the caffeine from kola nut (also spelled "cola nut" at the time), leading to the name Coca-Cola.
Pemberton called for five ounces of coca leaf per gallon of syrup (approximately 37 g/L), a significant dose; in 1891, Candler claimed his formula (altered extensively from Pemberton's original) contained only a tenth of this amount. Coca-Cola once contained an estimated nine milligrams of cocaine per glass. (For comparison, a typical dose or "line" of cocaine is 50–75 mg.) In 1903, cocaine was removed from the formula.
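The concentration quoted above can be checked with a straightforward unit conversion:

```python
# Verify that 5 avoirdupois ounces of coca leaf per US gallon of syrup
# is roughly the "approximately 37 g/L" stated in the text.
OZ_TO_G = 28.3495    # grams per avoirdupois ounce
GAL_TO_L = 3.78541   # litres per US gallon

grams_per_litre = 5 * OZ_TO_G / GAL_TO_L
print(round(grams_per_litre, 1))  # 37.4
```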
After 1904, instead of using fresh leaves, Coca-Cola started using "spent" leaves – the leftovers of the cocaine-extraction process with trace levels of cocaine. Since then, Coca-Cola has used a cocaine-free coca leaf extract. Today, that extract is prepared at a Stepan Company plant in Maywood, New Jersey, the only manufacturing plant authorized by the federal government to import and process coca leaves, which it obtains from Peru and Bolivia. Stepan Company extracts cocaine from the coca leaves, which it then sells to Mallinckrodt, the only company in the United States licensed to purify cocaine for medicinal use.
Long after the syrup had ceased to contain any significant amount of cocaine, in the southeastern U.S., "dope" remained a common colloquialism for Coca-Cola, and "dope-wagons" were trucks that transported it.
Kola nuts act as a flavoring and were the original source of caffeine in Coca-Cola. Kola nuts contain about 2.0 to 3.5% caffeine and have a bitter flavor.
In 1911, the U.S. government sued in "United States v. Forty Barrels and Twenty Kegs of Coca-Cola", hoping to force the Coca-Cola Company to remove caffeine from its formula. The court found that the syrup, when diluted as directed, would result in a beverage containing 1.21 grains (or 78.4 mg) of caffeine per serving. The case was decided in favor of the Coca-Cola Company at the district court, but subsequently in 1912, the U.S. Pure Food and Drug Act was amended, adding caffeine to the list of "habit-forming" and "deleterious" substances which must be listed on a product's label. In 1913 the case was appealed to the Sixth Circuit in Cincinnati, where the ruling was affirmed, but then appealed again in 1916 to the Supreme Court, where the government effectively won as a new trial was ordered. The company then voluntarily reduced the amount of caffeine in its product, and offered to pay the government's legal costs to settle and avoid further litigation.
Coca-Cola contains 34 mg of caffeine per 12 fluid ounces (9.8 mg per 100 ml).
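The caffeine figures in this section follow from standard unit conversions; a quick check:

```python
# The 1911 court finding of 1.21 grains of caffeine per serving, in milligrams.
GRAIN_TO_MG = 64.79891   # milligrams per grain (exact definition)

print(round(1.21 * GRAIN_TO_MG, 1))  # 78.4

# The modern figure of 34 mg per 12 US fluid ounces, expressed per 100 ml
# (this comes out slightly below the quoted 9.8 mg, depending on rounding).
FLOZ_TO_ML = 29.5735     # millilitres per US fluid ounce
print(round(34 / (12 * FLOZ_TO_ML) * 100, 1))
```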
The actual production and distribution of Coca-Cola follows a franchising model. The Coca-Cola Company only produces a syrup concentrate, which it sells to bottlers throughout the world, who hold Coca-Cola franchises for one or more geographical areas. The bottlers produce the final drink by mixing the syrup with filtered water and sweeteners, putting the mixture into cans and bottles, and carbonating it, which the bottlers then sell and distribute to retail stores, vending machines, restaurants, and food service distributors.
The Coca-Cola Company owns minority shares in some of its largest franchises, such as Coca-Cola Enterprises, Coca-Cola Amatil, Coca-Cola Hellenic Bottling Company, and Coca-Cola FEMSA, but fully independent bottlers produce almost half of the volume sold in the world.
Independent bottlers are allowed to sweeten the drink according to local tastes.
The bottling plant in Skopje, Macedonia, received the 2009 award for "Best Bottling Company".
Since it announced its intention to begin distribution in Myanmar in June 2012, Coca-Cola has been officially available in every country in the world except Cuba and North Korea. However, it is reported to be available in both countries as a grey import.
Coca-Cola has been a point of legal discussion in the Middle East. In the early 20th century, a fatwa was created in Egypt to discuss the question of "whether Muslims were permitted to drink Coca-Cola and Pepsi cola." The fatwa states: "According to the Muslim Hanefite, Shafi'ite, etc., the rule in Islamic law of forbidding or allowing foods and beverages is based on the presumption that such things are permitted unless it can be shown that they are forbidden on the basis of the Qur'an." The Muslim jurists stated that, unless the Qu'ran specifically prohibits the consumption of a particular product, it is permissible to consume. Another clause was discussed, whereby the same rules apply if a person is unaware of the condition or ingredients of the item in question.
This is a list of variants of Coca-Cola introduced around the world. In addition to the caffeine-free version of the original, additional fruit flavors have been included over the years. Not included here are versions of Diet Coke and Coca-Cola Zero Sugar; variant versions of those no-calorie colas can be found at their respective articles.
The Coca-Cola logo was created by John Pemberton's bookkeeper, Frank Mason Robinson, in 1885. Robinson came up with the name and chose the logo's distinctive cursive script. The writing style used, known as Spencerian script, was developed in the mid-19th century and was the dominant form of formal handwriting in the United States during that period.
Robinson also played a significant role in early Coca-Cola advertising. His promotional suggestions to Pemberton included giving away thousands of free drink coupons and plastering the city of Atlanta with publicity banners and streetcar signs.
Coca-Cola came under scrutiny in Egypt in 1951 because of a conspiracy theory that the Coca-Cola logo, when reflected in a mirror, spells out "No Mohammed no Mecca" in Arabic.
The Coca-Cola bottle, called the "contour bottle" within the company, was created by bottle designer Earl R. Dean and Coca-Cola's general counsel, Harold Hirsch. In 1915, The Coca-Cola Company was represented by their general counsel to launch a competition among its bottle suppliers as well as any competition entrants to create a new bottle for their beverage that would distinguish it from other beverage bottles, "a bottle which a person could recognize even if they felt it in the dark, and so shaped that, even if broken, a person could tell at a glance what it was."
Chapman J. Root, president of the Root Glass Company of Terre Haute, Indiana, turned the project over to members of his supervisory staff, including company auditor T. Clyde Edwards, plant superintendent Alexander Samuelsson, and Earl R. Dean, bottle designer and supervisor of the bottle molding room. Root and his subordinates decided to base the bottle's design on one of the soda's two ingredients, the coca leaf or the kola nut, but were unaware of what either ingredient looked like. Dean and Edwards went to the Emeline Fairbanks Memorial Library and were unable to find any information about coca or kola. Instead, Dean was inspired by a picture of the gourd-shaped cocoa pod in the Encyclopædia Britannica. Dean made a rough sketch of the pod and returned to the plant to show Root. He explained to Root how he could transform the shape of the pod into a bottle. Root gave Dean his approval.
Faced with the upcoming scheduled maintenance of the mold-making machinery, Dean sketched out a concept drawing over the next 24 hours, and Root approved the prototype bottle the next morning; a design patent was issued on the bottle in November 1915. The prototype never made it to production, since its middle diameter was larger than its base, making it unstable on conveyor belts. Dean resolved this issue by decreasing the bottle's middle diameter. During the 1916 bottler's convention, Dean's contour bottle was chosen over other entries and was on the market the same year. By 1920, the contour bottle had become the standard for The Coca-Cola Company. A revised version was also patented in 1923. Because the Patent Office releases the "Patent Gazette" on Tuesdays, the bottle was patented on December 25, 1923, and was nicknamed the "Christmas bottle". Today, the contour Coca-Cola bottle is one of the most recognized packages on the planet, recognizable "even in the dark".
As a reward for his efforts, Dean was offered a choice between a $500 bonus or a lifetime job at the Root Glass Company. He chose the lifetime job and kept it until the Owens-Illinois Glass Company bought out the Root Glass Company in the mid-1930s. Dean went on to work in other Midwestern glass factories.
Raymond Loewy updated the design in 1955 to accommodate larger formats.
Others have attributed inspiration for the design not to the cocoa pod, but to a Victorian hooped dress.
In 1944, Associate Justice Roger J. Traynor of the Supreme Court of California took advantage of a case involving a waitress injured by an exploding Coca-Cola bottle to articulate the doctrine of strict liability for defective products. Traynor's concurring opinion in "Escola v. Coca-Cola Bottling Co." is widely recognized as a landmark case in U.S. law today.
Karl Lagerfeld is the latest designer to have created a collection of aluminum bottles for Coca-Cola. Lagerfeld is not the first fashion designer to create a special version of the famous Coca-Cola Contour bottle. A number of other limited edition bottles by fashion designers for Coca-Cola Light soda have been created in the last few years, including Jean-Paul Gaultier.
In 2009, in Italy, Coca-Cola Light had a Tribute to Fashion to celebrate 100 years of the recognizable contour bottle. Well known Italian designers Alberta Ferretti, Blumarine, Etro, Fendi, Marni, Missoni, Moschino, and Versace each designed limited edition bottles.
In 2019, Coca-Cola shared the first beverage bottle made with ocean plastic.
Pepsi, the flagship product of PepsiCo, The Coca-Cola Company's main rival in the soft drink industry, is usually second to Coke in sales, and outsells Coca-Cola in some markets. RC Cola, now owned by the Dr Pepper Snapple Group, the third largest soft drink manufacturer, is also widely available.
Around the world, many local brands compete with Coke. In South and Central America Kola Real, known as Big Cola in Mexico, is a growing competitor to Coca-Cola. On the French island of Corsica, Corsica Cola, made by brewers of the local Pietra beer, is a growing competitor to Coca-Cola. In the French region of Brittany, Breizh Cola is available. In Peru, Inca Kola outsells Coca-Cola, which led The Coca-Cola Company to purchase the brand in 1999. In Sweden, Julmust outsells Coca-Cola during the Christmas season. In Scotland, the locally produced Irn-Bru was more popular than Coca-Cola until 2005, when Coca-Cola and Diet Coke began to outpace its sales. In the former East Germany, Vita Cola, invented during Communist rule, is gaining popularity.
In India, Coca-Cola ranked third behind the leader, Pepsi-Cola, and local drink Thums Up. The Coca-Cola Company purchased Thums Up in 1993; Coca-Cola has held a 60.9% market share in India. Tropicola, a domestic drink, is served in Cuba instead of Coca-Cola, due to a United States embargo. French brand Mecca Cola and British brand Qibla Cola are competitors to Coca-Cola in the Middle East.
In Turkey, Cola Turka, in Iran and the Middle East, Zamzam Cola and Parsi Cola, in some parts of China, China Cola, in the Czech Republic and Slovakia, Kofola, in Slovenia, Cockta, and the inexpensive Mercator Cola, sold only in the country's biggest supermarket chain, Mercator, are some of the brand's competitors. Classiko Cola, made by Tiko Group, the largest manufacturing company in Madagascar, is a competitor to Coca-Cola in many regions.
Coca-Cola's advertising has significantly affected American culture, and it is frequently credited with inventing the modern image of Santa Claus as an old man in a red-and-white suit. Although the company did start using the red-and-white Santa image in the 1930s, with its winter advertising campaigns illustrated by Haddon Sundblom, the motif was already common. Coca-Cola was not even the first soft drink company to use the modern image of Santa Claus in its advertising: White Rock Beverages used Santa in advertisements for its ginger ale in 1923, after first using him to sell mineral water in 1915. Before Santa Claus, Coca-Cola relied on images of smartly dressed young women to sell its beverages. Coca-Cola's first such advertisement appeared in 1895, featuring the young Bostonian actress Hilda Clark as its spokeswoman.
1941 saw the first use of the nickname "Coke" as an official trademark for the product, with a series of advertisements informing consumers that "Coke means Coca-Cola". In 1971, a song from a Coca-Cola commercial called "I'd Like to Teach the World to Sing", produced by Billy Davis, became a hit single.
Coke's advertising is pervasive, as one of Woodruff's stated goals was to ensure that everyone on Earth drank Coca-Cola as their preferred beverage. This is especially true in southern areas of the United States, such as Atlanta, where Coke was born.
Some Coca-Cola television commercials from 1960 through 1986 were written and produced by former Atlanta radio veteran Don Naylor (WGST 1936–1950, WAGA 1951–1959) during his career as a producer for the McCann Erickson advertising agency. Many of these early television commercials for Coca-Cola featured movie stars, sports heroes, and popular singers.
During the 1980s, Pepsi-Cola ran a series of television advertisements showing people participating in taste tests demonstrating that, according to the commercials, "fifty percent of the participants who said they preferred Coke 'actually' chose the Pepsi." Statisticians pointed out the problematic nature of a 50/50 result: most likely, the taste tests showed that in blind tests, most people cannot tell the difference between Pepsi and Coke. Coca-Cola ran ads to combat Pepsi's ads in an incident sometimes referred to as the "cola wars"; one of Coke's ads compared the so-called Pepsi challenge to two chimpanzees deciding which tennis ball was furrier. Thereafter, Coca-Cola regained its leadership in the market.
Selena was a spokesperson for Coca-Cola from 1989 until her death. She filmed three commercials for the company. In 1994, to commemorate her five years with the company, Coca-Cola issued special Selena Coke bottles.
The Coca-Cola Company purchased Columbia Pictures in 1982, and began inserting Coke-product images into many of its films. After a few early successes during Coca-Cola's ownership, Columbia began to under-perform, and the studio was sold to Sony in 1989.
Coca-Cola has gone through a number of different advertising slogans in its long history, including "The pause that refreshes", "I'd like to buy the world a Coke", and "Coke is it".
In 2006, Coca-Cola introduced My Coke Rewards, a customer loyalty campaign where consumers earn points by entering codes from specially marked packages of Coca-Cola products into a website. These points can be redeemed for various prizes or sweepstakes entries.
In Australia in 2011, Coca-Cola began the "Share a Coke" campaign, in which the Coca-Cola logo on bottles was replaced with first names. Coca-Cola used the 150 most popular names in Australia to print on the bottles. The campaign was paired with a website page, Facebook page, and an online "share a virtual Coke". The same campaign was introduced to Coca-Cola, Diet Coke & Coke Zero bottles and cans in the UK in 2013.
Coca-Cola has also advertised its product to be consumed as a breakfast beverage, instead of coffee or tea for the morning caffeine.
From 1886 to 1959, the price of Coca-Cola was fixed at five cents, in part due to an advertising campaign.
Throughout the years, Coca-Cola has released limited time collector bottles for Christmas.
The "Holidays are coming!" advertisement features a train of red delivery trucks, emblazoned with the Coca-Cola name and decorated with Christmas lights, driving through a snowy landscape and causing everything that they pass to light up and people to watch as they pass through.
The advertisement fell into disuse in 2001, as the Coca-Cola company restructured its advertising campaigns so that advertising around the world was produced locally in each country, rather than centrally in the company's headquarters in Atlanta, Georgia. In 2007, the company brought back the campaign after, according to the company, many consumers telephoned its information center saying that they considered it to mark the beginning of Christmas. The advertisement was created by U.S. advertising agency Doner, and has been part of the company's global advertising campaign for many years.
Keith Law, a producer and writer of commercials for Belfast CityBeat, was not convinced by Coca-Cola's reintroduction of the advertisement in 2007, saying that "I do not think there's anything Christmassy about HGVs and the commercial is too generic."
In 2001, singer Melanie Thornton recorded the campaign's advertising jingle as a single, "Wonderful Dream (Holidays are Coming)", which entered the pop-music charts in Germany at no. 9. In 2005, Coca-Cola expanded the advertising campaign to radio, employing several variations of the jingle.
In 2011, Coca-Cola launched a campaign for the Indian holiday Diwali. The campaign included commercials, a song, and an integration with Shah Rukh Khan's film "Ra.One".
Coca-Cola was the first commercial sponsor of the Olympic games, at the 1928 games in Amsterdam, and has been an Olympics sponsor ever since. This corporate sponsorship included the 1996 Summer Olympics hosted in Atlanta, which allowed Coca-Cola to spotlight its hometown. Most recently, Coca-Cola has released localized commercials for the 2010 Winter Olympics in Vancouver; one Canadian commercial referred to Canada's hockey heritage and was modified after Canada won the gold medal game on February 28, 2010 by changing the ending line of the commercial to say "Now they know whose game they're playing".
Since 1978, Coca-Cola has sponsored the FIFA World Cup, and other competitions organized by FIFA. One FIFA tournament trophy, the FIFA World Youth Championship from Tunisia in 1977 to Malaysia in 1997, was called "FIFA – Coca-Cola Cup". In addition, Coca-Cola sponsors NASCAR's annual Coca-Cola 600 and Coke Zero Sugar 400 at Charlotte Motor Speedway in Concord, North Carolina and Daytona International Speedway in Daytona, Florida; since 2020, Coca-Cola has served as a premier partner of the NASCAR Cup Series, which includes holding the naming rights to the series' regular season championship trophy.
Coca-Cola has a long history of sports marketing relationships, which over the years have included Major League Baseball, the National Football League, the National Basketball Association, and the National Hockey League, as well as with many teams within those leagues. Coca-Cola has had a longtime relationship with the NFL's Pittsburgh Steelers, due in part to the now-famous 1979 television commercial featuring "Mean Joe" Greene, leading to the two opening the Coca-Cola Great Hall at Heinz Field in 2001 and a more recent Coca-Cola Zero commercial featuring Troy Polamalu.
Coca-Cola is the official soft drink of many collegiate football teams throughout the nation, partly due to Coca-Cola providing those schools with upgraded athletic facilities in exchange for Coca-Cola's sponsorship. This is especially prevalent at the high school level, which is more dependent on such contracts due to tighter budgets.
Coca-Cola was one of the official sponsors of the 1996 Cricket World Cup held on the Indian subcontinent. Coca-Cola is also one of the associate sponsors of Delhi Daredevils in the Indian Premier League.
In England, Coca-Cola was the main sponsor of The Football League between 2004 and 2010, a name given to the three professional divisions below the Premier League in soccer (football). In 2005, Coca-Cola launched a competition for the 72 clubs of The Football League called "Win a Player". This allowed fans to place one vote per day for their favorite club, with one entry being chosen at random earning £250,000 for the club; this was repeated in 2006. The "Win a Player" competition was very controversial, as at the end of the two competitions, Leeds United A.F.C. had the most votes by more than double, yet they did not win any money to spend on a new player for the club. In 2007, the competition changed to "Buy a Player". This competition allowed fans to buy a bottle of Coca-Cola or Coca-Cola Zero and submit the code on the wrapper on the Coca-Cola website. This code could then earn anything from 50p to £100,000 for a club of their choice. This competition was favored over the old "Win a Player" competition, as it allowed all clubs to win some money. Between 1992 and 1998, Coca-Cola was the title sponsor of the Football League Cup (Coca-Cola Cup), the secondary cup tournament of England.
Between 1994 and 1997, Coca-Cola was also the title sponsor of the Scottish League Cup, renaming it the Coca-Cola Cup like its English counterpart. From 1998 to 2001, the company was the title sponsor of the Irish League Cup in Northern Ireland, where it was named the Coca-Cola League Cup.
Coca-Cola is the presenting sponsor of the Tour Championship, the final event of the PGA Tour held each year at East Lake Golf Club in Atlanta, GA.
Introduced March 1, 2010, in Canada, to celebrate the 2010 Winter Olympics, Coca-Cola sold gold colored cans in packs of 12 each, in select stores.
Coca-Cola has been prominently featured in many films and television programs. It was a major plot element in films such as "One, Two, Three", "The Coca-Cola Kid", and "The Gods Must Be Crazy", among many others. In music, the Beatles' song "Come Together" includes the lyric "He shoot Coca-Cola, he say...". The Beach Boys also referenced Coca-Cola in their 1964 song "All Summer Long" ("'Member when you spilled Coke all over your blouse?").
Elvis Presley, the best-selling artist of all time, promoted Coca-Cola during his last tour in 1977, and The Coca-Cola Company used his image to promote the product. For example, the company used a song performed by Presley, "A Little Less Conversation", in a Japanese Coca-Cola commercial.
Other artists who promoted Coca-Cola include David Bowie, George Michael, Elton John, and Whitney Houston, who appeared in a Diet Coke commercial, among many others.
Not all musical references to Coca-Cola went well. A line in "Lola" by the Kinks was originally recorded as "You drink champagne and it tastes just like Coca-Cola." When the British Broadcasting Corporation refused to play the song because of the commercial reference, lead singer Ray Davies re-recorded the lyric as "it tastes just like cherry cola" to get airplay for the song.
Political cartoonist Michel Kichka satirized a famous Coca-Cola billboard in his 1982 poster "And I Love New York." On the billboard, the Coca-Cola wave is accompanied by the words "Enjoy Coke." In Kichka's poster, the lettering and script above the Coca-Cola wave instead read "Enjoy Cocaine."
Coca-Cola has a high degree of identification with the United States, being considered by some an "American Brand" or as an item representing America. During World War II, this gave rise to the brief production of White Coke at the request of, and for, Soviet Marshal Georgy Zhukov, who did not want to be seen drinking a symbol of American imperialism. The drink is also often a metonym for the Coca-Cola Company.
Coca-Cola was introduced to China in 1927, and was very popular until 1949. After the Chinese Civil War ended in 1949, the beverage was no longer imported into China, as it was perceived to be a symbol of decadent Western culture and the capitalist lifestyle. Importation and sales of the beverage resumed in 1979, after diplomatic relations between the United States and China were restored.
There are some consumer boycotts of Coca-Cola in Arab countries due to Coke's early investment in Israel during the Arab League boycott of Israel (its competitor Pepsi stayed out of Israel). Mecca-Cola and Pepsi are popular alternatives in the Middle East.
A Coca-Cola fountain dispenser (officially a Fluids Generic Bioprocessing Apparatus or FGBA) was developed for use on the Space Shuttle as a test bed to determine if carbonated beverages can be produced from separately stored carbon dioxide, water, and flavored syrups and determine if the resulting fluids can be made available for consumption without bubble nucleation and resulting foam formation. FGBA-1 flew on STS-63 in 1995 and dispensed pre-mixed beverages, followed by FGBA-2 on STS-77 the next year. The latter mixed CO₂, water, and syrup to make beverages. It supplied 1.65 liters each of Coca-Cola and Diet Coke.
Coca-Cola is sometimes used for the treatment of gastric phytobezoars. In about 50% of cases studied, Coca-Cola alone was found to be effective in gastric phytobezoar dissolution. However, in a minority of cases this treatment can lead to small bowel obstruction, necessitating surgical intervention.
Criticism of Coca-Cola has arisen from various groups around the world, concerning a variety of issues, including health effects, environmental issues, and business practices. The drink's coca flavoring, and the nickname "Coke", remain a common theme of criticism due to the relationship with the illegal drug cocaine. In 1911, the US government seized 40 barrels and 20 kegs of Coca-Cola syrup in Chattanooga, Tennessee, alleging the caffeine in its drink was "injurious to health", leading to amended food safety legislation.
Beginning in the 1940s, Pepsi started marketing their drinks to African Americans, a niche market that was largely ignored by white-owned manufacturers in the US, and was able to use its anti-racism stance as a selling point, attacking Coke's reluctance to hire blacks and support by the chairman of The Coca-Cola Company for segregationist Governor of Georgia Herman Talmadge. As a result of this campaign, Pepsi's market share as compared to Coca-Cola's shot up dramatically in the 1950s with African American soft-drink consumers three times more likely to purchase Pepsi over Coke.
The Coca-Cola Company, its subsidiaries and products have been subject to sustained criticism by consumer groups, environmentalists, and watchdogs, particularly since the early 2000s. In 2019, BreakFreeFromPlastic named Coca-Cola the single biggest plastic polluter in the world. After 72,541 volunteers collected 476,423 pieces of plastic waste from around where they lived, a total of 11,732 pieces were found to be labeled with a Coca-Cola brand (including the Dasani, Sprite, and Fanta brands) in 37 countries across four continents. At the 2020 World Economic Forum in Davos, Coca-Cola's Head of Sustainability, Bea Perez, said customers like plastic bottles because they reseal and are lightweight, and "business won't be in business if we don't accommodate consumers."
Coca-Cola Classic is rich in sugar (or other sweeteners in some countries), especially sucrose, which causes dental caries when consumed regularly. In addition, the high caloric value of the sugars themselves can contribute to obesity. Both are major health issues in the developed world.
In July 2001, the Coca-Cola company was sued over its alleged use of political far-right wing death squads (the United Self-Defense Forces of Colombia) to kidnap, torture, and kill Colombian bottler workers that were linked with trade union activity. Coca-Cola was sued in a US federal court in Miami by the Colombian food and drink union Sinaltrainal. The suit alleged that Coca-Cola was indirectly responsible for having "contracted with or otherwise directed paramilitary security forces that utilized extreme violence and murdered, tortured, unlawfully detained or otherwise silenced trade union leaders". This sparked campaigns to boycott Coca-Cola in the UK, US, Germany, Italy, and Australia. Javier Correa, the president of Sinaltrainal, said the campaign aimed to put pressure on Coca-Cola "to mitigate the pain and suffering" that union members had suffered.
Speaking from the Coca-Cola company's headquarters in Atlanta, company spokesperson Rafael Fernandez Quiros said "Coca-Cola denies any connection to any human-rights violation of this type" and added "We do not own or operate the plants".
Cofinality
In mathematics, especially in order theory, the cofinality cf("A") of a partially ordered set "A" is the least of the cardinalities of the cofinal subsets of "A".
This definition of cofinality relies on the axiom of choice, as it uses the fact that every non-empty set of cardinal numbers has a least member. The cofinality of a partially ordered set "A" can alternatively be defined as the least ordinal "x" such that there is a function from "x" to "A" with cofinal image. This second definition makes sense without the axiom of choice. If the axiom of choice is assumed, as will be the case in the rest of this article, then the two definitions are equivalent.
Cofinality can be similarly defined for a directed set and is used to generalize the notion of a subsequence in a net.
If "A" admits a totally ordered cofinal subset, then we can find a subset "B" which is well-ordered and cofinal in "A". Any subset of "B" is also well-ordered. If two cofinal subsets of "B" have minimal cardinality (i.e. their cardinality is the cofinality of "B"), then they are order isomorphic to each other.
The cofinality of an ordinal α is the smallest ordinal δ which is the order type of a cofinal subset of α. The cofinality of a set of ordinals or any other well-ordered set is the cofinality of the order type of that set.
Thus for a limit ordinal α, there exists a δ-indexed strictly increasing sequence with limit α. For example, the cofinality of ω² is ω, because the sequence ω·"m" (where "m" ranges over the natural numbers) tends to ω²; but, more generally, any countable limit ordinal has cofinality ω. An uncountable limit ordinal may have either cofinality ω, as does ω_ω, or an uncountable cofinality.
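The ω² example can be made explicit; written out in LaTeX, the witnessing cofinal sequence is:

```latex
% The countable set {\omega \cdot m : m < \omega} is cofinal in \omega^2:
\[
  \sup_{m<\omega} \,(\omega \cdot m) \;=\; \omega^{2},
  \qquad\text{hence}\qquad
  \operatorname{cf}\bigl(\omega^{2}\bigr) = \omega .
\]
```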
The cofinality of 0 is 0. The cofinality of any successor ordinal is 1. The cofinality of any nonzero limit ordinal is an infinite regular cardinal.
A regular ordinal is an ordinal which is equal to its cofinality. A singular ordinal is any ordinal which is not regular.
Every regular ordinal is the initial ordinal of a cardinal. Any limit of regular ordinals is a limit of initial ordinals and thus is also initial but need not be regular. Assuming the axiom of choice, ω_{α+1} is regular for each α. In this case, the ordinals 0, 1, ω, ω₁, and ω₂ are regular, whereas 2, 3, ω_ω, and ω_{ω·2} are initial ordinals which are not regular.
The cofinality of any ordinal "α" is a regular ordinal, i.e. the cofinality of the cofinality of "α" is the same as the cofinality of "α". So the cofinality operation is idempotent.
If κ is an infinite cardinal number, then cf(κ) is the least cardinal such that there is an unbounded function from cf(κ) to κ; cf(κ) is also the cardinality of the smallest set of strictly smaller cardinals whose sum is κ; more precisely
cf(κ) = min { card(I) : κ = Σ_{i ∈ I} λ_i and λ_i < κ for each i ∈ I }.
That the set above is nonempty comes from the fact that
κ = ⋃_{i ∈ κ} {i},
i.e. the disjoint union of κ singleton sets. This implies immediately that cf(κ) ≤ κ.
The cofinality of any totally ordered set is regular, so one has cf(κ) = cf(cf(κ)).
Using König's theorem, one can prove κ < κ^cf(κ) and κ < cf(2^κ) for any infinite cardinal κ.
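These two standard consequences of König's theorem can be displayed as follows (written here in LaTeX):

```latex
% Consequences of König's theorem for an infinite cardinal \kappa:
\[
  \kappa \;<\; \kappa^{\operatorname{cf}(\kappa)}
  \qquad\text{and}\qquad
  \kappa \;<\; \operatorname{cf}\!\bigl(2^{\kappa}\bigr).
\]
% Taking \kappa = \aleph_0 in the second inequality gives
% \operatorname{cf}(2^{\aleph_0}) > \aleph_0: the continuum cannot
% have countable cofinality.
```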
The last inequality implies that the cofinality of the cardinality of the continuum must be uncountable. On the other hand,
ℵ_ω = ⋃_{n<ω} ℵ_n,
the ordinal number ω being the first infinite ordinal, so that the cofinality of ℵ_ω is card(ω) = ℵ₀. (In particular, ℵ_ω is singular.) Therefore,
2^{ℵ₀} ≠ ℵ_ω.
Generalizing this argument, one can prove that for a limit ordinal δ
cf(ℵ_δ) = cf(δ).
On the other hand, if the axiom of choice holds, then for a successor or zero ordinal δ
cf(ℵ_δ) = ℵ_δ.
Citadel
A citadel is the core fortified area of a town or city. It may be a castle, fortress, or fortified center. The term is a diminutive of "city" and thus means "little city", so called because it is a smaller part of the city of which it is the defensive core. Ancient Sparta had a citadel, as did many other Greek cities and towns.
In a fortification with bastions, the citadel is the strongest part of the system, sometimes well inside the outer walls and bastions, but often forming part of the outer wall for the sake of economy. It is positioned to be the last line of defense, should the enemy breach the other components of the fortification system. The citadel also typically housed the functions of the police and the army, including the army barracks.
Some of the oldest known structures which have served as citadels were built by the Indus Valley Civilisation, where citadels represented a centralised authority. Citadels in the Indus Valley were almost 12 meters tall. The purpose of these structures, however, remains debated. Though the structures found in the ruins of Mohenjo-daro were walled, it is far from clear that these structures were defensive against enemy attacks. Rather, they may have been built to divert flood waters.
Several settlements in Anatolia, including the Assyrian city of Kaneš in modern-day Kültepe, featured citadels. Kaneš' citadel contained the city's palace, temples, and official buildings. The citadel of the Greek city of Mycenae was built atop a highly-defensible rectangular hill and was later surrounded by walls in order to increase its defensive capabilities.
In Ancient Greece, the Acropolis (literally: "high city"), placed on a commanding eminence, was important in the life of the people, serving as a refuge and stronghold in peril and containing military and food supplies, the shrine of the god and a royal palace. The most well known is the Acropolis of Athens, but nearly every Greek city-state had one – the Acrocorinth famed as a particularly strong fortress. In a much later period, when Greece was ruled by the Latin Empire, the same strong points were used by the new feudal rulers for much the same purpose.
In the first millennium BCE, the Castro culture emerged in northwestern Portugal and Spain in the region extending from the Douro river up to the Minho, but soon expanding north along the coast, and east following the river valleys. It was an autochthonous evolution of Atlantic Bronze Age communities. In 2008, the origins of the Celts were attributed to this period by John T. Koch and supported by Barry Cunliffe. The Ave River Valley in Portugal was the core region of this culture, with a large number of small settlements (the "castros"), but also settlements known as citadels or oppida by the Roman conquerors. These had several rings of walls, and the Roman conquest of the citadels of Abobriga, Lambriaca and Cinania around 138 BCE was possible only by prolonged siege. Ruins of notable citadels still exist, and are known by archaeologists as Citânia de Briteiros, Citânia de Sanfins, Cividade de Terroso and Cividade de Bagunte.
Rebels who took power in the city but with the citadel still held by the former rulers could by no means regard their tenure of power as secure. One such incident played an important part in the history of the Maccabean Revolt against the Seleucid Empire. The Hellenistic garrison of Jerusalem and local supporters of the Seleucids held out for many years in the Acra citadel, making Maccabean rule in the rest of Jerusalem precarious. When finally gaining possession of the place, the Maccabeans pointedly destroyed and razed the Acra, though they constructed another citadel for their own use in a different part of Jerusalem.
At various periods, and particularly during the Middle Ages and the Renaissance, the citadel – having its own fortifications, independent of the city walls – was the last defence of a besieged army, often held after the town had been conquered. Locals and defending armies have often held out citadels long after the city had fallen. For example, in the 1543 Siege of Nice the Ottoman forces led by Barbarossa conquered and pillaged the town and took many captives, but the citadel held out.
In the Philippines, the Ivatan people of the northern islands of Batanes often built fortifications to protect themselves during times of war. They built their so-called "idjangs" on hills and elevated areas. These fortifications were likened to European castles because of their purpose. Usually, the only entrance to the castles would be via a rope ladder that would only be lowered for the villagers and could be kept away when invaders arrived.
In time of war the citadel in many cases afforded retreat to the people living in the areas around the town. However, citadels were often used also to protect a garrison or political power from the inhabitants of the town where it was located, being designed to ensure loyalty from the town that they defended.
For example, during the Dutch Wars of 1664–1667, King Charles II of England constructed a Royal Citadel at Plymouth, an important channel port which needed to be defended from a possible naval attack. However, due to Plymouth's support for the Parliamentarians in the then-recent English Civil War, the Plymouth Citadel was so designed that its guns could fire on the town as well as on the sea approaches.
Barcelona had a great citadel built in 1714 to intimidate the Catalans against repeating their mid-17th- and early-18th-century rebellions against the Spanish central government. In the 19th century, when the political climate had liberalized enough to permit it, the people of Barcelona had the citadel torn down, and replaced it with the city's main central park, the Parc de la Ciutadella. A similar example is the Citadella in Budapest, Hungary.
The attack on the Bastille in the French Revolution – though afterwards remembered mainly for the release of the handful of prisoners incarcerated there – was to a considerable degree motivated by the structure being a royal citadel in the midst of revolutionary Paris.
Similarly, after Garibaldi's overthrow of Bourbon rule in Palermo, during the 1860 Unification of Italy, Palermo's Castellamare Citadel – symbol of the hated and oppressive former rule – was ceremoniously demolished.
Following Belgium's declaration of independence in 1830, a Dutch garrison under General David Hendrik Chassé held out in Antwerp Citadel between 1830 and 1832, even though the city itself had already become part of independent Belgium.
The Siege of the Alcázar in the Spanish Civil War, in which the Nationalists held out against a much larger Republican force for two months until relieved, shows that in some cases a citadel can be effective even in modern warfare; a similar case is the Battle of Huế during the Vietnam War, where a North Vietnamese Army division held the citadel of Huế for 26 days against roughly equal numbers of much better-equipped US and South Vietnamese troops.
The Citadelle of Québec (construction started in 1673, completed in 1820) survives as the largest citadel still in official military operation in North America. It is home to the Royal 22nd Regiment of the Canadian Army and forms part of the Ramparts of Quebec City, which date back to the 1620s.
Since the mid 20th century, citadels commonly enclose military command and control centres, rather than cities or strategic points of defense on the boundaries of a country. These modern citadels are built to protect the command center from heavy attacks, such as aerial or nuclear bombardment. The military citadels under London in the UK, including the massive underground complex Pindar beneath the Ministry of Defence, are examples, as is the Cheyenne Mountain nuclear bunker in the US.
On armored warships, the heavily armored section of the ship that protects the ammunition and machinery spaces is called the armored citadel.
A modern naval interpretation refers to the most heavily protected part of the hull as "the vitals", with the citadel being the semi-armoured freeboard above the vitals. Anglo-American and German usage generally follows this convention, while Russian sources use "цитадель" ("tsitadel") for the vitals themselves. Likewise, Russian literature often refers to the turret of a tank as the "tower".
The safe room on a ship is also called a citadel. | https://en.wikipedia.org/wiki?curid=6695 |
Chain mail
Chain mail (often just mail or sometimes chainmail) is a type of armour consisting of small metal rings linked together in a pattern to form a mesh. It was in common military use between the 3rd century BC and the 14th century AD. A coat of this armour is often referred to as a hauberk, and sometimes a byrnie.
The earliest examples of surviving mail were found in the Carpathian Basin at a burial in Horný Jatov, Slovakia, dated to the 3rd century BC, and in a chieftain's burial located in Ciumești, Romania. Its invention is commonly credited to the Celts, but there are examples of Etruscan pattern mail dating from at least the 4th century BC. Mail may have been inspired by the much earlier scale armour. Mail spread to North Africa, West Africa, the Middle East, Central Asia, India, Tibet, South East Asia, and Japan.
Herodotus wrote that the ancient Persians wore scale armour, but mail is also distinctly mentioned in the Avesta, the ancient holy scripture of the Persian religion of Zoroastrianism that was founded by the prophet Zoroaster in the 5th century BC.
Mail continues to be used in the 21st century as a component of stab-resistant body armour, cut-resistant gloves for butchers and woodworkers, shark-resistant wetsuits for defense against shark bites, and a number of other applications.
The origins of the word "mail" are not fully known. One theory is that it derives from the Latin word "macula", meaning "spot" or "opacity" (as in the macula of the retina). Another theory relates the word to the Old French "maillier", meaning "to hammer" (related to the modern English word "malleable"). In modern French, "maille" refers to a loop or stitch. The Arabic words "burnus" (a burnoose: a hooded cloak, also a chasuble worn by Coptic priests) and "barnaza" (to bronze) suggest an Arabic influence for the Carolingian armour known as the "byrnie" (see below).
The first attestations of the word "mail" are in Old French and Anglo-Norman: "maille", "maile", or "male" or other variants, which became "mailye", "maille", "maile", "male", or "meile" in Middle English.
The modern usage of terms for mail armour is highly contested in popular and, to a lesser degree, academic culture. Medieval sources referred to armour of this type simply as "mail"; however, "chain-mail" has become a commonly used, if incorrect, neologism coined no later than 1786, appearing in Francis Grose's "A Treatise on Ancient Armour and Weapons", and brought to popular attention no later than 1822 in Sir Walter Scott's novel "The Fortunes of Nigel". Since then the word "mail" has been commonly, if incorrectly, applied to other types of armour, such as in "plate-mail" (first attested in Grose's Treatise in 1786). The more correct term is "plate armour".
Civilizations that used mail invented specific terms for each garment made from it. The standard terms for European mail armour derive from French: leggings are called chausses, a hood is a mail coif, and mittens, mitons. A mail collar hanging from a helmet is a camail or aventail. A shirt made from mail is a hauberk if knee-length and a haubergeon if mid-thigh length. A layer (or layers) of mail sandwiched between layers of fabric is called a jazerant.
A waist-length coat in medieval Europe was called a byrnie, although the exact construction of a byrnie is unclear, including whether it was constructed of mail or other armour types. Noting that the byrnie was the "most highly valued piece of armour" to the Carolingian soldier, Bennet, Bradbury, DeVries, Dickie, and Jestice indicate that:
There is some dispute among historians as to what exactly constituted the Carolingian byrnie. Relying... only on artistic and some literary sources because of the lack of archaeological examples, some believe that it was a heavy leather jacket with metal scales sewn onto it. It was also quite long, reaching below the hips and covering most of the arms. Other historians claim instead that the Carolingian byrnie was nothing more than a coat of mail, but longer and perhaps heavier than traditional early medieval mail. Without more certain evidence, this dispute will continue.
The use of mail as battlefield armour was common during the Iron Age and the Middle Ages, becoming less common over the course of the 16th and 17th centuries as plate armour and more advanced firearms were developed. It is believed that the Roman Republic first came into contact with mail while fighting the Gauls in Cisalpine Gaul (now northern Italy), though a different pattern of mail was already in use among the Etruscans. The Roman army adopted the technology for its troops in the form of the lorica hamata, which was used as a primary form of armour through the Imperial period.
After the fall of the Western Empire, much of the infrastructure needed to create plate armour diminished. Eventually the word "mail" came to be synonymous with armour. It was typically an extremely prized commodity, as it was expensive and time-consuming to produce and could mean the difference between life and death in a battle. Mail from dead combatants was frequently looted and was used by the new owner or sold for a lucrative price. As time went on and infrastructure improved, it came to be used by more soldiers. The oldest intact mail hauberk still in existence is thought to have been worn by Leopold III, Duke of Austria, who died in 1386 during the Battle of Sempach. Eventually with the rise of the lanced cavalry charge, impact warfare, and high-powered crossbows, mail came to be used as a secondary armour to plate for the mounted nobility.
By the 14th century, articulated plate armour was commonly used to supplement mail. Eventually mail was supplanted by plate for the most part, as it provided greater protection against windlass crossbows, bludgeoning weapons, and lance charges while maintaining most of the mobility of mail. However, it was still widely used by many soldiers as well as brigandines and padded jacks. These three types of armour made up the bulk of the equipment used by soldiers, with mail being the most expensive. It was sometimes more expensive than plate armour. Mail typically persisted longer in less technologically advanced areas such as Eastern Europe but was in use everywhere into the 16th century.
During the late 19th and early 20th centuries, mail was used as a material for bulletproof vests, most notably by the Wilkinson Sword Company. Results were unsatisfactory; Wilkinson mail worn by the Khedive of Egypt's regiment of "Iron Men" was manufactured from split rings which proved too brittle, and the rings would fragment when struck by bullets, aggravating the injury. The riveted mail armour worn by the opposing Sudanese Mahdists did not have the same problem, but it too proved relatively useless against the firearms of British forces at the Battle of Omdurman. During World War I, Wilkinson Sword transitioned from mail to a lamellar design which was the precursor to the flak jacket.
Also during World War I, a mail fringe, designed by Captain Cruise of the British Infantry, was added to helmets to protect the face. This proved unpopular with soldiers, in spite of being proven to defend against a three-ounce (100 g) shrapnel round fired at a distance of . A protective face mask or splatter mask had a mail veil and was used by early tank crews as a measure against flying steel fragments (spalling) inside the vehicle.
Mail armour was introduced to the Middle East and Asia through the Romans and was adopted by the Sassanid Persians starting in the 3rd century AD, where it was supplemental to the scale and lamellar armour already used.
Mail was commonly also used as horse armour for cataphracts and heavy cavalry as well as armour for the soldiers themselves. Asian mail could be just as heavy as the European variety and sometimes had prayer symbols stamped on the rings as a sign of their craftsmanship as well as for divine protection. Indeed, mail armour is mentioned in the Quran as being a gift revealed by Allah to David:
21:80 It was We Who taught him the making of coats of mail for your benefit, to guard you from each other's violence: will ye then be grateful? (Yusuf Ali's translation)
From the Abbasid Caliphate, mail was quickly adopted in Central Asia by Timur (Tamerlane) and the Sogdians and by India's Delhi Sultanate. Mail armour was introduced by the Turks in the late 12th century and commonly used by the Turkish, Mughal, and Suri armies, eventually becoming the armour of choice in India. Indian mail was constructed with alternating rows of solid links and round riveted links, and it was often integrated with plate protection (mail and plate armour). Mail and plate armour was commonly used in India until the Battle of Plassey by the Nawabs of Bengal and the subsequent British conquest of the subcontinent.
The Ottoman Empire and the other Islamic gunpowder empires used mail armour as well as mail and plate armour, and it remained in their armies until the 18th century among heavy cavalry and elite units such as the Janissaries. They spread its use into North Africa, where it was adopted by the Mamluk Egyptians and the Sudanese, who produced it until the early 20th century. Ottoman mail was constructed with alternating rows of solid links and round riveted links. The Persians used mail armour as well as mail and plate armour, and Persian and Ottoman mail were often quite similar in appearance.
Mail was introduced to China when its allies in Central Asia paid tribute to the Tang Emperor in 718 by giving him a coat of "link armour" assumed to be mail. China first encountered the armour in 384 when its allies in the nation of Kuchi arrived wearing "armour similar to chains". Once in China, mail was imported but was not produced widely. Due to its flexibility, comfort, and rarity, it was typically the armour of high-ranking guards and those who could afford the exotic import (to show off their social status) rather than the armour of the rank and file, who used more common brigandine, scale, and lamellar types. However, it was one of the few military products that China imported from foreigners. Mail spread to Korea slightly later where it was imported as the armour of imperial guards and generals.
In Japan mail is called "kusari", which means chain. When the word "kusari" is used in conjunction with an armoured item, it usually means that mail makes up the majority of the armour's composition. An example of this is "kusari gusoku", which means chain armour. "Kusari" jackets, hoods, gloves, shoulder guards, and other armoured clothing were produced, even "kusari" socks.
"" was used in samurai armour at least from the time of the Mongol invasion (1270s) but particularly from the Nambokucho Period (1336–1392). The Japanese used many different weave methods including a square 4-in-1 pattern ("so gusari"), a hexagonal 6-in-1 pattern ("hana gusari") and a European 4-in-1 ("nanban gusari"). The rings of Japanese mail were much smaller than their European counterparts; they would be used in patches to link together plates and to drape over vulnerable areas such as the armpits.
"Riveted kusari" was known and used in Japan. On page 58 of the book "Japanese Arms & Armor: Introduction" by H. Russell Robinson, there is a picture of Japanese riveted kusari, and
this quote from the translated reference of 1800 book, "The Manufacture of Armour and Helmets in Sixteenth-Century Japan", shows that the Japanese not only knew of and used riveted kusari but that they manufactured it as well.
... karakuri-namban (riveted namban), with stout links each closed by a rivet. Its invention is credited to Fukushima Dembei Kunitaka, a pupil of Hojo Awa no Kami Ujifusa, but it is also said to be derived directly from foreign models. It is heavy because the links are tinned (biakuro-nagashi) and these are also sharp-edged because they are punched out of iron plate ...
Butted or split (twisted) links made up the majority of "kusari" links used by the Japanese. Links were either "butted" together meaning that the ends touched each other and were not riveted, or the "kusari" was constructed with links where the wire was turned or twisted two or more times; these split links are similar to the modern split ring commonly used on keychains. The rings were lacquered black to prevent rusting, and were always stitched onto a backing of cloth or leather. The kusari was sometimes concealed entirely between layers of cloth.
"Kusari gusoku" or chain armour was commonly used during the Edo period 1603 to 1868 as a stand-alone defense. According to George Cameron Stone
Entire suits of mail "kusari gusoku" were worn on occasions, sometimes under the ordinary clothing
Ian Bottomley, in his book "Arms and Armor of the Samurai: The History of Weaponry in Ancient Japan", shows a picture of a kusari armour and mentions chain jackets with detachable arms being worn by samurai police officials during the Edo period. The end of the samurai era in the 1860s, along with the 1876 ban on wearing swords in public, marked the end of any practical use for mail and other armour in Japan. Japan turned to a conscription army, and uniforms replaced armour.
Mail armour provided an effective defense against slashing blows by edged weapons and some forms of penetration by many thrusting and piercing weapons; in fact, a study conducted at the Royal Armouries at Leeds concluded that "it is almost impossible to penetrate using any conventional medieval weapon". Generally speaking, mail's resistance to weapons is determined by four factors: linkage type (riveted, butted, or welded), material used (iron versus bronze or steel), weave density (a tighter weave needs a thinner weapon to surpass), and ring thickness (generally ranging from 18 to 14 gauge (1.02–1.63 mm diameter) wire in most examples). Mail, if a warrior could afford it, provided a significant advantage when combined with competent fighting techniques.
When the mail was not riveted, a thrust from most sharp weapons could penetrate it. However, when mail was riveted, only a strong, well-placed thrust from certain spears, or from thin or dedicated mail-piercing swords like the estoc, could penetrate it, and a pollaxe or halberd blow could break through the armour. Strong projectile weapons such as stronger self bows, recurve bows, and crossbows could also penetrate riveted mail. Some evidence indicates that during armoured combat the intention was to get around the armour rather than through it: according to a study of skeletons found in Visby, Sweden, a majority of the skeletons showed wounds on the less well protected legs. Although mail was formidable protection, as longswords became more tapered over time, mail worn under plate armour (and stand-alone mail as well) could be penetrated by the conventional weaponry of another knight.
The flexibility of mail meant that a blow would often injure the wearer, potentially causing serious bruising or fractures, and it was a poor defence against head trauma. Mail-clad warriors typically wore separate rigid helms over their mail coifs for head protection. Likewise, blunt weapons such as maces and warhammers could harm the wearer by their impact without penetrating the armour; usually a soft armour, such as a gambeson, was worn under the hauberk. Medieval surgeons were quite capable of setting and caring for bone fractures resulting from blunt weapons. Given the era's poor understanding of hygiene, however, cuts that could become infected were a much greater problem. Thus mail armour proved to be sufficient protection in most situations.
Several patterns of linking the rings together have been known since ancient times, with the most common being the 4-to-1 pattern (where each ring is linked with four others). In Europe, the 4-to-1 pattern was completely dominant. Mail was also common in East Asia, primarily Japan, with several more patterns being utilised and an entire nomenclature developing around them.
Historically, in Europe, from the pre-Roman period on, the rings composing a piece of mail would be riveted closed to reduce the chance of the rings splitting open when subjected to a thrusting attack or a hit by an arrow.
Up until the 14th century European mail was made of alternating rows of round riveted rings and solid rings. Sometime during the 14th century European mail makers started to transition from round rivets to wedge shaped rivets but continued using alternating rows of solid rings. Eventually European mail makers stopped using solid rings and almost all European mail was made from wedge riveted rings only with no solid rings. Both were commonly made of wrought iron, but some later pieces were made of heat-treated steel. Wire for the riveted rings was formed by either of two methods. One was to hammer out wrought iron into plates and cut or slit the plates. These thin pieces were then pulled through a draw plate repeatedly until the desired diameter was achieved. Waterwheel powered drawing mills are pictured in several period manuscripts. Another method was to simply forge down an iron billet into a rod and then proceed to draw it out into wire. The solid links would have been made by punching from a sheet. Guild marks were often stamped on the rings to show their origin and craftsmanship. Forge welding was also used to create solid links, but there are few possible examples known; the only well documented example from Europe is that of the camail (mail neck-defence) of the 7th century Coppergate helmet. Outside of Europe this practice was more common such as "theta" links from India. Very few examples of historic butted mail have been found and it is generally accepted that butted mail was never in wide use historically except in Japan where mail ("kusari") was commonly made from "butted" links. Butted link mail was also used by the Moros of the Philippines in their mail and plate armors.
Mail is used as protective clothing for butchers against meat-packing equipment. Workers may wear up to of mail under their white coats. Butchers also commonly wear a single mail glove to protect themselves from self-inflicted injury while cutting meat, as do many oyster shuckers.
Woodcarvers sometimes use similar mail gloves to protect their hands from cuts and punctures.
Scuba divers use mail to protect themselves from shark bites, as do animal control officers for protection against the animals they handle.
In 1980 marine biologist Jeremiah Sullivan patented his design for Neptunic full coverage chain mail shark resistant suits which he had developed for close encounters with sharks.
Shark expert and underwater filmmaker Valerie Taylor was also among the first to develop and test shark suits in 1979 while diving with sharks.
Mail is widely used in industrial settings as shrapnel guards and splash guards in metal working operations.
Electrical applications for mail include RF leakage testing and use as a Faraday cage suit by Tesla coil enthusiasts and high-voltage electrical workers.
Conventional textile-based ballistic vests are designed to stop soft-nosed bullets but offer little defense from knife attacks. Knife-resistant armour is designed to defend against knife attacks; some of these use layers of metal plates, mail and metallic wires.
Many historical reenactment groups, especially those whose focus is Antiquity or the Middle Ages, commonly use mail both as practical armour and for costuming. Mail is especially popular amongst those groups which use steel weapons. A modern hauberk made from 1.5 mm diameter wire with 10 mm inner diameter rings weighs roughly and contains 15,000–45,000 rings.
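The hauberk's weight figure is missing from the text above, but it can be roughly reconstructed from the stated ring dimensions. The sketch below assumes mild-steel rings modelled as simple circles of wire; the density and ring geometry are illustrative assumptions, not figures from the source.

```python
import math

# Rough mass estimate for a mail hauberk, using the ring dimensions
# quoted above (1.5 mm wire, 10 mm inner diameter). Steel density and
# the circle-of-wire ring geometry are assumptions.
WIRE_D = 1.5        # wire diameter, mm
INNER_D = 10.0      # ring inner diameter, mm
DENSITY = 7.85e-3   # mild steel, g/mm^3

wire_len = math.pi * (INNER_D + WIRE_D)                        # centreline circumference, mm
ring_mass = DENSITY * math.pi * (WIRE_D / 2) ** 2 * wire_len   # grams per ring

for n in (15_000, 30_000, 45_000):
    print(f"{n:>6} rings ≈ {n * ring_mass / 1000:.1f} kg")
```

With these assumptions each ring comes out at about half a gram, so the quoted 15,000–45,000 ring range corresponds to very roughly 7–23 kg, consistent with the heft reenactors report.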
One of the drawbacks of mail is the uneven weight distribution; the stress falls mainly on shoulders. Weight can be better distributed by wearing a belt over the mail, which provides another point of support.
Mail worn today for re-enactment and recreational use can be made in a variety of styles and materials. Most recreational mail today is made of butted links which are galvanized or stainless steel. This is historically inaccurate but is much less expensive to procure, and especially to maintain, than historically accurate reproductions. Mail can also be made of titanium, aluminium, bronze, or copper. Riveted mail offers significantly better protection, as well as historical accuracy, than mail constructed with butted links, though it is more labour-intensive and expensive to manufacture. Japanese mail ("kusari") is one of the few historical examples of mail commonly constructed with butted links.
Mail remained in use as a decorative and possibly high-status symbol with military overtones long after its practical usefulness had passed. It was frequently used for the epaulettes of military uniforms. It is still used in this form by the British Territorial Army.
Mail has applications in sculpture and jewellery, especially when made out of precious metals or colourful anodized metals. Mail artwork includes headdresses, decorative wall hangings, ornaments, chess sets, macramé, and jewelry. For these non-traditional applications, hundreds of patterns (commonly referred to as "weaves") have been invented.
Large-linked mail is occasionally used as a fetish clothing material, with the large links intended to reveal – in part – the body beneath them.
In some films, knitted string spray-painted with a metallic paint is used instead of actual mail in order to cut down on cost (an example being "Monty Python and the Holy Grail", which was filmed on a very small budget). Films more dedicated to costume accuracy often use ABS plastic rings, for the lower cost and weight. Such ABS mail coats were made for "The Lord of the Rings" film trilogy, in addition to many metal coats. The metal coats are used rarely because of their weight, except in close-up filming where the appearance of ABS rings is distinguishable. A large scale example of the ABS mail used in the "Lord of the Rings" can be seen in the entrance to the Royal Armouries museum in Leeds in the form of a large curtain bearing the logo of the museum. It was acquired from the makers of the film's armour, Weta Workshop, when the museum hosted an exhibition of WETA armour from their films. For the film "Mad Max Beyond Thunderdome", Tina Turner is said to have worn actual mail and she complained how heavy this was. "Game of Thrones" makes use of mail, notably during the "Red Wedding" scene.
| https://en.wikipedia.org/wiki?curid=6696 |
Ammonia
Ammonia is a compound of nitrogen and hydrogen with the formula NH3. A stable binary hydride, and the simplest pnictogen hydride, ammonia is a colourless gas with a characteristic pungent smell. It is a common nitrogenous waste, particularly among aquatic organisms, and it contributes significantly to the nutritional needs of terrestrial organisms by serving as a precursor to food and fertilizers. Ammonia, either directly or indirectly, is also a building block for the synthesis of many pharmaceutical products and is used in many commercial cleaning products. In the laboratory it is mainly collected by the downward displacement of air, since the gas is lighter than air and too soluble in water to be collected over it.
Although common in nature—both terrestrially and in the outer planets of the Solar System—and in wide use, ammonia is both caustic and hazardous in its concentrated form. It is classified as an extremely hazardous substance in the United States, and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities.
The global industrial production of ammonia in 2018 was 175 million tonnes, with no significant change relative to the 2013 global industrial production of 175 million tonnes. Industrial ammonia is sold either as ammonia liquor (usually 28% ammonia in water) or as pressurized or refrigerated anhydrous liquid ammonia transported in tank cars or cylinders.
NH3 boils at −33.3 °C at a pressure of one atmosphere, so the liquid must be stored under pressure or at low temperature. Household ammonia or ammonium hydroxide is a solution of NH3 in water. The concentration of such solutions is measured in units of the Baumé scale (density), with 26 degrees Baumé (about 30% ammonia by weight at about 15.5 °C) being the typical high-concentration commercial product.
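A Baumé reading can be converted to specific gravity with the standard formula for liquids lighter than water; the snippet below applies it to the 26 °Bé commercial grade mentioned above (the formula's 60 °F reference temperature is the conventional one).

```python
def baume_to_sg(deg_baume: float) -> float:
    """Specific gravity from degrees Baumé, using the standard
    formula for liquids lighter than water (referenced to 60 °F)."""
    return 140.0 / (130.0 + deg_baume)

# 26 °Bé commercial ammonia liquor:
print(round(baume_to_sg(26), 3))  # ≈ 0.897, noticeably lighter than water
```

The low specific gravity reflects the high ammonia content: the more dissolved NH3, the less dense the solution.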
Pliny, in Book XXXI of his Natural History, refers to a salt produced in the Roman province of Cyrenaica named "hammoniacum", so called because of its proximity to the Temple of Jupiter Amun (Greek Ἄμμων "Ammon"). However, the description Pliny gives of the salt does not conform to the properties of ammonium chloride. According to Herbert Hoover's commentary in his English translation of Georgius Agricola's "De re metallica", it is likely to have been common sea salt. In any case, that salt ultimately gave ammonia and ammonium compounds their name.
Ammonia is a chemical found in trace quantities in nature, being produced from nitrogenous animal and vegetable matter. Ammonia and ammonium salts are also found in small quantities in rainwater, whereas ammonium chloride (sal ammoniac), and ammonium sulfate are found in volcanic districts; crystals of ammonium bicarbonate have been found in Patagonia guano. The kidneys secrete ammonia to neutralize excess acid. Ammonium salts are found distributed through fertile soil and in seawater.
Ammonia is also found throughout the Solar System on Mars, Jupiter, Saturn, Uranus, Neptune, and Pluto, among other places: on smaller, icy bodies such as Pluto, ammonia can act as a geologically important antifreeze, as a mixture of water and ammonia can remain liquid at temperatures far below the freezing point of pure water if the ammonia concentration is high enough, allowing such bodies to retain internal oceans and active geology at a far lower temperature than would be possible with water alone. Substances containing ammonia, or those that are similar to it, are called "ammoniacal".
Ammonia is a colourless gas with a characteristically pungent smell. It is lighter than air, its density being 0.589 times that of air. It is easily liquefied due to the strong hydrogen bonding between molecules; the liquid boils at −33.3 °C and freezes to white crystals at −77.7 °C.
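The quoted relative density follows directly from molar masses, since at the same temperature and pressure equal gas volumes contain equal numbers of moles. A quick check (the mean molar mass of dry air is the usual textbook value):

```python
# Relative density of ammonia gas versus air, from molar masses.
M_NH3 = 17.031   # g/mol
M_AIR = 28.96    # g/mol, mean molar mass of dry air (textbook value)

ratio = M_NH3 / M_AIR
print(round(ratio, 3))  # ≈ 0.59, consistent with the quoted 0.589
```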
Ammonia may be conveniently deodorized by reacting it with either sodium bicarbonate or acetic acid. Both of these reactions form an odourless ammonium salt.
The ammonia molecule has a trigonal pyramidal shape, as predicted by the valence shell electron pair repulsion theory (VSEPR theory), with an experimentally determined bond angle of 106.7°. The central nitrogen atom has five outer electrons, plus an additional electron from each hydrogen atom. This gives a total of eight electrons, or four electron pairs that are arranged tetrahedrally. Three of these electron pairs are used as bond pairs, which leaves one lone pair of electrons. The lone pair repels more strongly than bond pairs do, so the bond angle is not 109.5°, as expected for a regular tetrahedral arrangement, but 106.7°. This shape gives the molecule a dipole moment and makes it polar. The molecule's polarity, and especially its ability to form hydrogen bonds, makes ammonia highly miscible with water. The lone pair makes ammonia a base, a proton acceptor. Ammonia is moderately basic; a 1.0 M aqueous solution has a pH of 11.6, and if a strong acid is added to such a solution until the solution is neutral (pH = 7), 99.4% of the ammonia molecules are protonated. Temperature and salinity also affect the proportion of NH4+. The ammonium ion has the shape of a regular tetrahedron and is isoelectronic with methane.
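The pH of 11.6 quoted above can be reproduced with the standard weak-base approximation; the sketch below assumes the textbook base-dissociation constant Kb = 1.8 × 10⁻⁵ at 25 °C, which is not stated in the text.

```python
import math

def ammonia_ph(conc: float, kb: float = 1.8e-5) -> float:
    """Approximate pH of aqueous ammonia via the weak-base
    approximation [OH-] ≈ sqrt(Kb · C), valid when C >> Kb.
    Kb = 1.8e-5 is the textbook value at 25 °C (assumed here)."""
    oh = math.sqrt(kb * conc)
    poh = -math.log10(oh)
    return 14.0 - poh

print(round(ammonia_ph(1.0), 1))  # ≈ 11.6, as quoted above
```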
The ammonia molecule readily undergoes nitrogen inversion at room temperature; a useful analogy is an umbrella turning itself inside out in a strong wind. The energy barrier to this inversion is 24.7 kJ/mol, and the resonance frequency is 23.79 GHz, corresponding to microwave radiation of a wavelength of 1.260 cm. The absorption at this frequency was the first microwave spectrum to be observed.
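The quoted wavelength is just the speed of light divided by the resonance frequency, as a quick consistency check shows:

```python
# Frequency-to-wavelength check for the 23.79 GHz inversion resonance.
C = 2.998e8   # speed of light, m/s
F = 23.79e9   # NH3 inversion resonance frequency, Hz

wavelength_cm = C / F * 100
print(round(wavelength_cm, 3), "cm")  # ≈ 1.26 cm, matching the text
```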
One of the most characteristic properties of ammonia is its basicity. Ammonia is considered to be a weak base. It combines with acids to form salts; thus with hydrochloric acid it forms ammonium chloride (sal ammoniac); with nitric acid, ammonium nitrate, etc. Perfectly dry ammonia will not combine with perfectly dry hydrogen chloride; moisture is necessary to bring about the reaction. As a demonstration experiment, opened bottles of concentrated ammonia and hydrochloric acid produce clouds of ammonium chloride, which seem to appear "out of nothing" as the salt forms where the two diffusing clouds of molecules meet, somewhere between the two bottles.
The salts produced by the action of ammonia on acids are known as ammonium salts and all contain the ammonium ion (NH4+).
Although ammonia is well known as a weak base, it can also act as an extremely weak acid. It is a protic substance and is capable of formation of amides (which contain the NH2− ion). For example, lithium dissolves in liquid ammonia to give a solution of lithium amide:

2 Li + 2 NH3 → 2 LiNH2 + H2
Like water, ammonia undergoes molecular autoionisation to form its acid and base conjugates:

2 NH3 ⇌ NH4+ + NH2−
Ammonia often functions as a weak base, so it has some buffering ability. Shifts in pH will cause more or fewer ammonium cations (NH4+) and amide anions (NH2−) to be present in solution. At standard pressure and temperature, the self-ionisation constant is K = [NH4+][NH2−] = 10^−30.
The combustion of ammonia to nitrogen and water is exothermic:

4 NH3 + 3 O2 → 2 N2 + 6 H2O
The standard enthalpy change of combustion, ΔH°c, expressed per mole of ammonia and with condensation of the water formed, is −382.81 kJ/mol. Dinitrogen is the thermodynamic product of combustion: all nitrogen oxides are unstable with respect to N2 and O2, which is the principle behind the catalytic converter. Nitrogen oxides can be formed as kinetic products in the presence of appropriate catalysts, a reaction of great industrial importance in the production of nitric acid:

4 NH3 + 5 O2 → 4 NO + 6 H2O
A subsequent reaction leads to NO2:

2 NO + O2 → 2 NO2
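The −382.81 kJ/mol quoted above can be checked against standard enthalpies of formation (values taken from common thermodynamic tables, not from the text). For 4 NH3(g) + 3 O2(g) → 2 N2(g) + 6 H2O(l), the elements N2 and O2 contribute zero, and the water is taken as liquid, matching the "with condensation" convention stated above:

```python
# Check the quoted -382.81 kJ/mol using tabulated formation enthalpies
# (assumed standard-table values, not figures from the text itself).
dHf_NH3 = -45.9               # kJ/mol, NH3(g)
dHf_H2O_liq = -285.8          # kJ/mol, H2O(l)

# Reaction: 4 NH3(g) + 3 O2(g) -> 2 N2(g) + 6 H2O(l); elements = 0
dH_rxn = 6 * dHf_H2O_liq - 4 * dHf_NH3
per_mol_NH3 = dH_rxn / 4
print(f"{per_mol_NH3:.1f} kJ/mol NH3")   # ~ -382.8, matching the text
```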
The combustion of ammonia in air is very difficult in the absence of a catalyst (such as platinum gauze or warm chromium(III) oxide), due to the relatively low heat of combustion, a lower laminar burning velocity, high auto-ignition temperature, high heat of vaporization, and a narrow flammability range. However, recent studies have shown that efficient and stable combustion of ammonia can be achieved using swirl combustors, thereby rekindling research interest in ammonia as a fuel for thermal power production. The flammable range of ammonia in dry air is 15.15–27.35% and in air at 100% relative humidity is 15.95–26.55%. Studying the kinetics of ammonia combustion requires a detailed, reliable reaction mechanism; however, knowledge of ammonia's chemical kinetics during combustion remains incomplete.
In organic chemistry, ammonia can act as a nucleophile in substitution reactions. Amines can be formed by the reaction of ammonia with alkyl halides, although the resulting -NH2 group is also nucleophilic and secondary and tertiary amines are often formed as byproducts. An excess of ammonia helps minimise multiple substitution and neutralises the hydrogen halide formed. Methylamine is prepared commercially by the reaction of ammonia with chloromethane, and the reaction of ammonia with 2-bromopropanoic acid has been used to prepare racemic alanine in 70% yield. Ethanolamine is prepared by a ring-opening reaction with ethylene oxide: the reaction is sometimes allowed to go further to produce diethanolamine and triethanolamine.
Amides can be prepared by the reaction of ammonia with carboxylic acid derivatives. Acyl chlorides are the most reactive, but the ammonia must be present in at least a twofold excess to neutralise the hydrogen chloride formed. Esters and anhydrides also react with ammonia to form amides. Ammonium salts of carboxylic acids can be dehydrated to amides so long as there are no thermally sensitive groups present: temperatures of 150–200 °C are required.
The hydrogen in ammonia is susceptible to replacement by myriad substituents. When heated with sodium it converts to sodamide, NaNH2. With chlorine, monochloramine is formed.
Pentavalent ammonia is known as λ5-amine or, more commonly, ammonium hydride. This crystalline solid is only stable under high pressure and decomposes back into trivalent ammonia and hydrogen gas at normal conditions. It was investigated as a possible solid rocket fuel in 1966.
Ammonia can act as a ligand in transition metal complexes. It is a pure σ-donor, in the middle of the spectrochemical series, and shows intermediate hard-soft behaviour (see also ECW model). For historical reasons, ammonia is named ammine in the nomenclature of coordination compounds. Some notable ammine complexes include tetraamminediaquacopper(II) ([Cu(NH3)4(H2O)2]2+), a dark blue complex formed by adding ammonia to a solution of copper(II) salts. Tetraamminediaquacopper(II) hydroxide is known as Schweizer's reagent, and has the remarkable ability to dissolve cellulose. Diamminesilver(I) ([Ag(NH3)2]+) is the active species in Tollens' reagent. Formation of this complex can also help to distinguish between precipitates of the different silver halides: silver chloride (AgCl) is soluble in dilute (2M) ammonia solution, silver bromide (AgBr) is only soluble in concentrated ammonia solution, whereas silver iodide (AgI) is insoluble in aqueous ammonia.
Ammine complexes of chromium(III) were known in the late 19th century, and formed the basis of Alfred Werner's revolutionary theory on the structure of coordination compounds. Werner noted only two isomers ("fac"- and "mer"-) of the complex [CrCl3(NH3)3] could be formed, and concluded the ligands must be arranged around the metal ion at the vertices of an octahedron. This proposal has since been confirmed by X-ray crystallography.
An ammine ligand bound to a metal ion is markedly more acidic than a free ammonia molecule, although deprotonation in aqueous solution is still rare. One example is the Calomel reaction, where the resulting amidomercury(II) compound is highly insoluble.
Ammonia and ammonium salts can be readily detected, in very minute traces, by the addition of Nessler's solution, which gives a distinct yellow colouration in the presence of the slightest trace of ammonia or ammonium salts. The amount of ammonia in ammonium salts can be estimated quantitatively by distillation of the salts with sodium or potassium hydroxide, the ammonia evolved being absorbed in a known volume of standard sulfuric acid and the excess of acid then determined volumetrically; or the ammonia may be absorbed in hydrochloric acid and the ammonium chloride so formed precipitated as ammonium hexachloroplatinate, (NH4)2PtCl6.
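The back-titration described above can be worked through numerically. This is a minimal sketch: all volumes and concentrations below are invented for illustration, and the only chemistry assumed is that H2SO4 supplies two acidic protons and that each absorbed NH3 consumes one:

```python
# Hypothetical worked example of the described back-titration:
# distilled ammonia is absorbed in excess standard H2SO4, then the
# unreacted acid is titrated with NaOH. All numbers are illustrative.
V_H2SO4, M_H2SO4 = 0.050, 0.100   # 50.0 mL of 0.100 M sulfuric acid
V_NaOH, M_NaOH = 0.0232, 0.100    # 23.2 mL of 0.100 M NaOH for the excess

mol_H_initial = 2 * V_H2SO4 * M_H2SO4   # each H2SO4 gives two protons
mol_H_excess = V_NaOH * M_NaOH          # protons left over after absorption
mol_NH3 = mol_H_initial - mol_H_excess  # each NH3 captured one proton
mass_NH3_mg = mol_NH3 * 17.03 * 1000    # molar mass of NH3 ~ 17.03 g/mol
print(f"{mol_NH3 * 1000:.2f} mmol NH3 = {mass_NH3_mg:.1f} mg")
```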
Sulfur sticks are burnt to detect small leaks in industrial ammonia refrigeration systems. Larger quantities can be detected by warming the salts with a caustic alkali or with quicklime, when the characteristic smell of ammonia will be at once apparent. Ammonia is an irritant, and irritation increases with concentration; the permissible exposure limit is 25 ppm, and concentrations above 500 ppm can be lethal. Higher concentrations are poorly detected by conventional detectors; the type of detector is chosen according to the sensitivity required (e.g. semiconductor, catalytic, electrochemical). Holographic sensors have been proposed for detecting concentrations up to 12.5% by volume.
Ammoniacal nitrogen (NH3-N) is a measure commonly used for testing the quantity of ammonium ions, derived naturally from ammonia, and returned to ammonia via organic processes, in water or waste liquids. It is a measure used mainly for quantifying values in waste treatment and water purification systems, as well as a measure of the health of natural and man-made water reserves. It is measured in units of mg/L (milligram per litre).
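Since NH3-N expresses only the nitrogen mass, converting a measured ammonium concentration (in mg/L of NH4+) to NH3-N is a simple molar-mass rescaling; a minimal sketch, with the 10 mg/L input chosen purely for illustration:

```python
# NH3-N counts only the nitrogen mass, so a measured NH4+ concentration
# is rescaled by the molar-mass ratio N / NH4+ (14.01 / 18.04 g/mol).
def nh4_to_nh3n(mg_per_L_nh4: float) -> float:
    """Convert mg/L of ammonium ion to mg/L of ammoniacal nitrogen."""
    return mg_per_L_nh4 * 14.01 / 18.04

result = nh4_to_nh3n(10.0)        # illustrative 10 mg/L NH4+ sample
print(f"{result:.2f} mg/L NH3-N")  # ~ 7.77 mg/L
```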
The ancient Greek historian Herodotus mentioned that there were outcrops of salt in an area of Libya that was inhabited by a people called the "Ammonians" (now: the Siwa oasis in northwestern Egypt, where salt lakes still exist).
Amethyst
Amethyst is a violet variety of quartz. The name comes from the Koine Greek ἀμέθυστος "amethystos" from ἀ- "a-", "not" and μεθύσκω / μεθύω , "intoxicate", a reference to the belief that the stone protected its owner from drunkenness. The ancient Greeks wore amethyst and carved drinking vessels from it in the belief that it would prevent intoxication.
Amethyst is a semiprecious stone often used in jewelry and is the traditional birthstone for February.
Amethyst is a purple variety of quartz (SiO2) and owes its violet color to irradiation, impurities of iron and in some cases other transition metals, and the presence of other trace elements, which result in complex crystal lattice substitutions. The hardness of the mineral is the same as quartz, thus making it suitable for use in jewelry.
Amethyst occurs in primary hues from a light pinkish violet to a deep purple. Amethyst may exhibit one or both secondary hues, red and blue. High-quality amethyst can be found in Siberia, Sri Lanka, Brazil, Uruguay, and the Far East. The ideal grade is called "Deep Siberian" and has a primary purple hue of around 75–80%, with 15–20% blue and (depending on the light source) red secondary hues. "Rose de France" is defined by its markedly light shade of purple, reminiscent of a lavender/lilac shade. These pale colors were once considered undesirable but have recently become popular due to intensive marketing.
Green quartz is sometimes incorrectly called green amethyst, which is a misnomer and not an appropriate name for the material, the proper terminology being prasiolite. Other names for green quartz are vermarine or lime citrine.
The color of amethyst, of very variable intensity, is often laid out in stripes parallel to the final faces of the crystal. One aspect of the lapidary's art involves correctly cutting the stone to place the color in a way that makes the tone of the finished gem homogeneous. The fact that sometimes only a thin surface layer of violet color is present in the stone, or that the color is not homogeneous, often makes for difficult cutting.
The color of amethyst has been demonstrated to result from substitution by irradiation of trivalent iron (Fe3+) for silicon in the structure, in the presence of trace elements of large ionic radius, and, to a certain extent, the amethyst color can naturally result from displacement of transition elements even if the iron concentration is low. Natural amethyst is dichroic in reddish violet and bluish violet, but when heated, turns yellow-orange, yellow-brown, or dark brownish and may resemble citrine, but loses its dichroism, unlike genuine citrine. When partially heated, amethyst can result in ametrine.
Amethyst can fade in tone if overexposed to light sources and can be artificially darkened with adequate irradiation. It does not fluoresce under either short-wave or long-wave UV light.
Amethyst is produced in abundance from the state of Minas Gerais in Brazil where it occurs in large geodes within volcanic rocks. Many of the hollow agates of southwestern Brazil and Uruguay contain a crop of amethyst crystals in the interior. Artigas, Uruguay and neighboring Brazilian state Rio Grande do Sul are large world producers exceeding in quantity Minas Gerais, as well as Mato Grosso, Espirito Santo, Bahia, and Ceará states, all amethyst producers of importance in Brazil.
It is also found and mined in South Korea. The largest opencast amethyst vein in the world is in Maissau, Lower Austria. Much fine amethyst comes from Russia, especially from near Mursinka in the Ekaterinburg district, where it occurs in drusy cavities in granitic rocks. Many localities in south India yield amethyst. One of the largest global amethyst producers is Zambia in southern Africa with an annual production of about 1000 tons.
Amethyst occurs at many localities in the United States. Among these may be mentioned: the Mazatzal Mountain region in Gila and Maricopa Counties, Arizona; Red Feather Lakes, near Fort Collins, Colorado; Amethyst Mountain, Texas; Yellowstone National Park; Delaware County, Pennsylvania; Haywood County, North Carolina; Deer Hill and Stow, Maine and in the Lake Superior region of Minnesota, Wisconsin and Michigan. Amethyst is relatively common in the Canadian provinces of Ontario and Nova Scotia. The largest amethyst mine in North America is located in Thunder Bay, Ontario.
Amethyst is the official state gemstone of South Carolina. Several South Carolina amethysts are on display at the Smithsonian Museum of Natural History.
Amethyst was used as a gemstone by the ancient Egyptians and was largely employed in antiquity for intaglio engraved gems.
The Greeks believed amethyst gems could prevent intoxication, while medieval European soldiers wore amethyst amulets as protection in battle in the belief that amethysts heal people and keep them cool-headed. Beads of amethyst were found in Anglo-Saxon graves in England. Anglican bishops wear an episcopal ring often set with an amethyst, an allusion to the description of the Apostles as "not drunk" at Pentecost in Acts 2:15.
A large geode, or "amethyst-grotto", from near Santa Cruz in southern Brazil was presented at a 1902 exhibition in Düsseldorf, Germany.
In the 19th century, the color of amethyst was attributed to the presence of manganese. However, since it can be greatly altered and even discharged by heat, the color was believed by some authorities to be from an organic source. Ferric thiocyanate has been suggested, and sulfur was said to have been detected in the mineral.
Synthetic (laboratory-grown) amethyst is produced by a synthesis method called hydrothermal growth, which grows the crystals inside a high-pressure autoclave.
Synthetic amethyst is made to imitate the best quality amethyst. Its chemical and physical properties are the same as those of natural amethyst, and it cannot be differentiated with absolute certainty without advanced gemmological testing (which is often cost-prohibitive). One test based on "Brazil law twinning" (a form of quartz twinning where right and left hand quartz structures are combined in a single crystal) can be used to identify most synthetic amethyst rather easily. It is possible to synthesize twinned amethyst, but this type is not available in large quantities in the market.
Single-crystal quartz is very desirable in the industry, particularly for keeping the regular vibrations necessary for quartz movements in watches and clocks, which is where a lot of synthetic quartz is used.
Treated amethyst is produced by gamma ray, X-ray or electron beam irradiation of clear quartz (rock crystal) which has been first doped with ferric impurities. Exposure to heat partially cancels the irradiation effects and amethyst generally becomes yellow or even green. Much of the citrine, cairngorm, or yellow quartz of jewelry is said to be merely "burnt amethyst".
The Greek word "amethystos" may be translated as "not drunken", from Greek "a-", "not", and "methystos", "intoxicated". Amethyst was considered to be a strong antidote against drunkenness.
Albertosaurus
Albertosaurus (; meaning "Alberta lizard") is a genus of tyrannosaurid theropod dinosaurs that lived in western North America during the Late Cretaceous Period, about 70 million years ago. The type species, "A. sarcophagus", was apparently restricted in range to the modern-day Canadian province of Alberta, after which the genus is named, although an indeterminate species ("cf. "Albertosaurus" sp.") has been discovered in the Corral de Enmedio and Packard Formations in Mexico. Scientists disagree on the content of the genus, with some recognizing "Gorgosaurus libratus" as a second species.
As a tyrannosaurid, "Albertosaurus" was a bipedal predator with tiny, two-fingered hands and a massive head that had dozens of large, sharp teeth. It may have been at the top of the food chain in its local ecosystem. While "Albertosaurus" was large for a theropod, it was much smaller than its more famous relative "Tyrannosaurus rex", growing nine to ten meters long and possibly weighing 2.5 metric tons or less.
Since the first discovery in 1884, fossils of more than 30 individuals have been recovered, providing scientists with a more detailed knowledge of "Albertosaurus" anatomy than is available for most other tyrannosaurids. The discovery of 26 individuals at one site provides evidence of pack behaviour and allows studies of ontogeny and population biology, which are impossible with lesser-known dinosaurs.
"Albertosaurus" was smaller than some other tyrannosaurids, such as "Tarbosaurus" and "Tyrannosaurus". Typical "Albertosaurus" adults measured up to 9 meters long, while rare individuals of great age could grow to be over 10 meters long. Several independent mass estimates, obtained by different methods, suggest that an adult "Albertosaurus" weighed between 1.3 tonnes and 2.5 tonnes (2.8 tons). In 2016 Molina-Pérez and Larramendi estimated the (CMN 5600) specimen at 9.7 meters (32 ft) and 4 tonnes (4.4 short tons).
"Albertosaurus" shared a similar body appearance with all other tyrannosaurids. Typically for a theropod, "Albertosaurus" was bipedal and balanced the heavy head and torso with a long tail. However, tyrannosaurid forelimbs were extremely small for their body size and retained only two digits. The hind limbs were long and ended in a four-toed foot on which the first digit, called the hallux, was short and did not reach the ground. The third digit was longer than the rest. "Albertosaurus" may have been able to reach walking speeds of 14−21 km/hour (8−13 mi/hour). At least for the younger individuals, a high running speed is plausible.
Two skin impressions from "Albertosaurus" are known, both showing scales. One patch is found with some gastralic ribs and the impression of a long, unknown bone, indicating that the patch is from the belly. The scales are pebbly and gradually become larger and somewhat hexagonal in shape. Also preserved are two larger feature scales, placed 4.5 cm apart from each other. Another skin impression is from an unknown part of the body. These scales are small, diamond-shaped and arranged in rows.
The massive skull of "Albertosaurus", which was perched on a short, S-shaped neck, was about 1 meter long in the largest adults. Wide openings in the skull (fenestrae) reduced the weight of the head while also providing space for muscle attachment and sensory organs. Its long jaws contained, both sides combined, 58 or more banana-shaped teeth; larger tyrannosaurids possessed fewer teeth, "Gorgosaurus" at least 62. Unlike most theropods, "Albertosaurus" and other tyrannosaurids were heterodont, with teeth of different forms depending on their position in the mouth. The premaxillary teeth at the tip of the upper jaw, four per side, were much smaller than the rest, more closely packed, and D-shaped in cross section. As with "Tyrannosaurus", the maxillary (cheek) teeth of "Albertosaurus" were adapted in general form to resist lateral forces exerted by struggling prey. The bite force of "Albertosaurus" was less formidable, however, with the maximum force, by the hind teeth, reaching 3,413 newtons. Above the eyes were short bony crests that may have been brightly coloured in life and used in courtship to attract a mate.
William Abler observed in 2001 that "Albertosaurus" tooth serrations resemble a crack in the tooth ending in a round void called an ampulla. Tyrannosaurid teeth were used as holdfasts for pulling flesh off a body, so when a tyrannosaur pulled back on a piece of meat, the tension could cause a purely crack-like serration to spread through the tooth. However, the presence of the ampulla distributed these forces over a larger surface area, and lessened the risk of damage to the tooth under strain. The presence of incisions ending in voids has parallels in human engineering. Guitar makers use incisions ending in voids to, as Abler describes, "impart alternating regions of flexibility and rigidity" to wood they work. The use of a drill to create an "ampulla" of sorts and prevent the propagation of cracks through material is also used to protect aircraft surfaces. Abler demonstrated that a plexiglass bar with incisions called "kerfs" and drilled holes was more than 25% stronger than one with only regularly placed incisions. Unlike tyrannosaurs, ancient predators like phytosaurs and "Dimetrodon" had no adaptations to prevent the crack-like serrations of their teeth from spreading when subjected to the forces of feeding.
"Albertosaurus" was named by Henry Fairfield Osborn in a one-page note at the end of his 1905 description of "Tyrannosaurus rex". The name honours Alberta, the Canadian province established the same year, in which the first remains were found. The generic name also incorporates the Greek term "σαυρος"/"sauros" ("lizard"), the most common suffix in dinosaur names. The type species is "Albertosaurus sarcophagus"; the specific name is derived from Ancient Greek σαρκοφάγος ("sarkophagos") meaning "flesh-eating" and having the same etymology as the funeral container with which it shares its name: a combination of the Greek words σαρξ/' ("flesh") and φαγειν/' ("to eat"). More than 30 specimens of all ages are known to science.
The type specimen is a partial skull, collected in the summer of 1884 from an outcrop of the Horseshoe Canyon Formation alongside the Red Deer River, in Alberta. This specimen, found on June 9, 1884, was recovered by an expedition of the Geological Survey of Canada, led by the famous geologist Joseph Burr Tyrrell. Due to a lack of specialised equipment, the almost complete skull could only be partially secured. In 1889, Tyrrell's colleague Thomas Chesmer Weston found an incomplete smaller skull associated with some skeletal material at a location nearby. The two skulls were assigned to the preexisting species "Laelaps incrassatus" by Edward Drinker Cope in 1892, although the name "Laelaps" was preoccupied by a genus of mite and had been changed to "Dryptosaurus" in 1877 by Othniel Charles Marsh. Cope refused to recognize the new name created by his archrival Marsh. However, Lawrence Lambe used the name "Dryptosaurus incrassatus" instead of "Laelaps incrassatus" when he described the remains in detail in 1903 and 1904, a combination first coined by Oliver Perry Hay in 1902. Shortly afterwards, Osborn pointed out that "D. incrassatus" was based on generic tyrannosaurid teeth, so the two Horseshoe Canyon skulls could not be confidently referred to that species. The Horseshoe Canyon skulls also differed markedly from the remains of "D. aquilunguis", type species of "Dryptosaurus", so Osborn created the new name "Albertosaurus sarcophagus" for them in 1905. He did not describe the remains in any great detail, citing Lambe's complete description the year before. Both specimens (the holotype CMN 5600 and the paratype CMN 5601) are stored in the Canadian Museum of Nature in Ottawa. By the early twenty-first century, some concerns had arisen that, due to the damaged state of the holotype, "Albertosaurus" might be a "nomen dubium", a "dubious name" that could only be used for the type specimen itself because other fossils could not reliably be assigned to it.
However, in 2010, Thomas Carr established that the holotype, the paratype and comparable later finds all shared a single common unique trait or autapomorphy: the possession of an enlarged pneumatic opening in the back rim of the side of the palatine bone, proving that "Albertosaurus" was a valid taxon.
On 11 August 1910, American paleontologist Barnum Brown discovered the remains of a large group of "Albertosaurus" at another quarry alongside the Red Deer River. Because of the large number of bones and the limited time available, Brown's party did not collect every specimen, but made sure to collect remains from all of the individuals that they could identify in the bonebed. Among the bones deposited in the American Museum of Natural History collections in New York City are seven sets of right metatarsals, along with two isolated toe bones that did not match any of the metatarsals in size. This indicated the presence of at least nine individuals in the quarry. Palaeontologist Philip J. Currie of the Royal Tyrrell Museum of Palaeontology rediscovered the bonebed in 1997 and resumed fieldwork at the site, which is now located inside Dry Island Buffalo Jump Provincial Park. Further excavation from 1997 to 2005 turned up the remains of 13 more individuals of various ages, including a diminutive two-year-old and a very old individual estimated at over 10 meters in length. None of these individuals are known from complete skeletons, and most are represented by remains in both museums. Excavations continued until 2008, when the minimum number of individuals present had been established at 12, on the basis of preserved elements that occur only once in a skeleton, and at 26 if mirrored elements were counted when differing in size due to ontogeny. A total of 1,128 "Albertosaurus" bones had been secured, the largest concentration of large theropod fossils known from the Cretaceous.
In 1911, Barnum Brown, during the second year of American Museum of Natural History operations in Alberta, uncovered a fragmentary partial "Albertosaurus" skull at the Red Deer River near Tolman Bridge, specimen AMNH 5222.
William Parks described a new species in 1928, "Albertosaurus arctunguis", based on a partial skeleton lacking the skull excavated by Gus Lindblad and Ralph Hornell near the Red Deer River in 1923, but this species has been considered identical to "A. sarcophagus" since 1970. Parks' specimen (ROM 807) is housed in the Royal Ontario Museum in Toronto.
Between 1926 and 1972, no "Albertosaurus" fossils were found at all; but, since the seventies, there has been a steady increase in the known material. Apart from the Dry Island bonebed, six more skulls and skeletons have since been discovered in Alberta and are housed in various Canadian museums: specimens RTMP 81.010.001, found in 1978 by amateur paleontologist Maurice Stefanuk; RTMP 85.098.001, found by Stefanuk on 16 June 1985; RTMP 86.64.001 (December 1985); RTMP 86.205.001 (1986); RTMP 97.058.0001 (1996); and CMN 11315. However, due to vandalism and accidents, no undamaged and complete skulls could be secured among these finds. Fossils have also been reported from the American states of Montana, New Mexico, Wyoming, and Missouri, but these probably do not represent "A. sarcophagus" and may not even belong to the genus "Albertosaurus".
Two specimens ("cf. "Albertosaurus" sp.") have been found in Mexico, in the Packard and Corral de Enmedio Formations.
In 1913, paleontologist Charles H. Sternberg recovered another tyrannosaurid skeleton from the slightly older Dinosaur Park Formation in Alberta. Lawrence Lambe named this dinosaur "Gorgosaurus libratus" in 1914. Other specimens were later found in Alberta and the US state of Montana. Largely because good "Albertosaurus" skull material was lacking, Dale Russell found no significant differences to separate the two taxa and declared the name "Gorgosaurus" a junior synonym of "Albertosaurus", which had been named first; "G. libratus" was renamed "Albertosaurus libratus" in 1970. A species distinction was maintained because of the age difference. This addition extended the temporal range of the genus "Albertosaurus" backwards by several million years and its geographic range southwards by hundreds of kilometres.
In 2003, Philip J. Currie, benefiting from much more extensive finds and a general increase in anatomical knowledge of theropods, compared several tyrannosaurid skulls and came to the conclusion that the two species are more distinct than previously thought. The decision to use one or two genera is rather arbitrary, as the two species are sister taxa, more closely related to each other than to any other species. Recognizing this, Currie nevertheless recommended that "Albertosaurus" and "Gorgosaurus" be retained as separate genera, as he concluded that they were no more similar than "Daspletosaurus" and "Tyrannosaurus", which are almost always separated. In addition, several albertosaurine specimens have been recovered from Alaska and New Mexico, and Currie suggested that the "Albertosaurus"-"Gorgosaurus" situation may be clarified once these are described fully. Most authors have followed Currie's recommendation, but some have not.
Apart from "A. sarcophagus", "A. arctunguis" and "A. libratus", several other species of "Albertosaurus" have been named. All of these are today seen as younger synonyms of other species or as "nomina dubia", and are not assigned to "Albertosaurus".
In 1930, Anatoly Nikolaevich Riabinin named "Albertosaurus pericolosus" based on a tooth from China, that probably belonged to "Tarbosaurus". In 1932, Friedrich von Huene renamed "Dryptosaurus incrassatus", not considered a "nomen dubium" by him, to "Albertosaurus incrassatus". Because he had identified "Gorgosaurus" with "Albertosaurus", in 1970, Russell also renamed "Gorgosaurus sternbergi" (Matthew & Brown 1922) into "Albertosaurus sternbergi" and "Gorgosaurus lancensis" (Gilmore 1946) into "Albertosaurus lancensis". The former species is today seen as a juvenile form of "Gorgosaurus libratus", the latter as either identical to "Tyrannosaurus" or representing a separate genus "Nanotyrannus". In 1988, Gregory S. Paul based "Albertosaurus megagracilis" on a small tyrannosaurid skeleton, specimen LACM 28345, from the Hell Creek Formation of Montana. It was renamed "Dinotyrannus" in 1995, but is now thought to represent a juvenile "Tyrannosaurus rex". Also in 1988, Paul renamed "Alectrosaurus olseni" (Gilmore 1933) into "Albertosaurus olseni"; this has found no general acceptance. In 1989, "Gorgosaurus novojilovi" (Maleev 1955) was renamed by Bryn Mader and Robert Bradley as "Albertosaurus novojilovi"; today this is seen as a synonym of "Tarbosaurus".
On two occasions, species based on valid "Albertosaurus" material were reassigned to a different genus: in 1922 William Diller Matthew renamed "A. sarcophagus" into "Deinodon sarcophagus" and in 1939 German paleontologist Oskar Kuhn renamed "A. arctunguis" into "Deinodon arctunguis".
"Albertosaurus" is a member of the theropod family Tyrannosauridae, in the subfamily Albertosaurinae. Its closest relative is the slightly older "Gorgosaurus libratus" (sometimes called "Albertosaurus libratus"; see below). These two species are the only described albertosaurines; other undescribed species may exist. Thomas Holtz found "Appalachiosaurus" to be an albertosaurine in 2004, but his more recent unpublished work locates it just outside Tyrannosauridae, in agreement with other authors.
The other major subfamily of tyrannosaurids is the Tyrannosaurinae, which includes "Daspletosaurus", "Tarbosaurus" and "Tyrannosaurus". Compared with these robust tyrannosaurines, albertosaurines had slender builds, with proportionately smaller skulls and longer bones of the lower leg (tibia) and feet (metatarsals and phalanges).
Below is the cladogram of the Tyrannosauridae based on the phylogenetic analysis conducted by Loewen "et al." in 2013.
Most age categories of "Albertosaurus" are represented in the fossil record. Using bone histology, the age of an individual animal at the time of death can often be determined, allowing growth rates to be estimated and compared with other species. The youngest known "Albertosaurus" is a two-year-old discovered in the Dry Island bonebed, which would have weighed about 50 kilograms (110 lb) and measured slightly more than 2 meters in length. The oldest specimen from the same quarry, at 28 years of age, is also the largest known. When specimens of intermediate age and size are plotted on a graph, an "S"-shaped growth curve results, with the most rapid growth occurring in a four-year period ending around the sixteenth year of life, a pattern also seen in other tyrannosaurids. The growth rate during this phase was 122 kilograms (270 lb) per year, based on an adult mass of 1.3 tonnes. Other studies have suggested higher adult weights; this would affect the magnitude of the growth rate, but not the overall pattern. Tyrannosaurids similar in size to "Albertosaurus" had similar growth rates, although the much larger "Tyrannosaurus rex" grew at almost five times this rate (601 kilograms (1,325 lb) per year) at its peak. The end of the rapid growth phase suggests the onset of sexual maturity in "Albertosaurus", although growth continued at a slower rate throughout the animals' lives. Sexual maturation while still actively growing appears to be a shared trait among small and large dinosaurs as well as in large mammals such as humans and elephants. This pattern of relatively early sexual maturation differs strikingly from the pattern in birds, which delay their sexual maturity until after they have finished growing.
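The S-shaped growth pattern described above can be sketched with a simple logistic curve. The 1.3-tonne adult mass is from the text; the midpoint age and the peak growth rate of about 122 kg per year are illustrative assumptions here, not fitted values, and such a simple curve understates the ~50 kg two-year-old mentioned above (one reason real studies fit more flexible growth models):

```python
import math

# A minimal sketch of an "S-shaped" logistic growth curve. Asymptotic
# mass is from the text; midpoint and steepness are assumed values.
M = 1300.0        # asymptotic adult mass, kg (from the text)
t0 = 14.0         # age of fastest growth, years (assumed; rapid phase ends ~16)
k = 4 * 122 / M   # steepness chosen so peak growth M*k/4 = 122 kg/yr (assumed)

def mass(age: float) -> float:
    """Logistic body mass (kg) at a given age in years."""
    return M / (1 + math.exp(-k * (age - t0)))

for age in (2, 10, 14, 16, 28):
    print(f"age {age:2d}: {mass(age):7.1f} kg")
```

At the midpoint age the curve passes through exactly half the adult mass, and the growth rate there equals the assumed 122 kg/yr peak.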
During growth, the tooth morphology changed so much through thickening that, had the association of young and adult skeletons in the Dry Island bonebed not proven they belonged to the same taxon, the teeth of juveniles would likely have been identified by statistical analysis as those of a different species.
Most known "Albertosaurus" individuals were aged 14 years or more at the time of death. Juvenile animals are rarely found as fossils for several reasons, mainly preservation bias, where the smaller bones of younger animals were less likely to be preserved by fossilization than the larger bones of adults, and collection bias, where smaller fossils are less likely to be noticed by collectors in the field. Young "Albertosaurus" are relatively large for juvenile animals, but their remains are still rare in the fossil record compared with adults. It has been suggested that this phenomenon is a consequence of life history, rather than bias, and that fossils of juvenile "Albertosaurus" are rare because they simply did not die as often as adults did.
A hypothesis of "Albertosaurus" life history postulates that hatchlings died in large numbers, but have not been preserved in the fossil record due to their small size and fragile construction. After just two years, juveniles were larger than any other predator in the region aside from adult "Albertosaurus", and more fleet of foot than most of their prey animals. This resulted in a dramatic decrease in their mortality rate and a corresponding rarity of fossil remains. Mortality rates doubled at age twelve, perhaps the result of the physiological demands of the rapid growth phase, and then doubled again with the onset of sexual maturity between the ages of fourteen and sixteen. This elevated mortality rate continued throughout adulthood, perhaps due to the high physiological demands of procreation, including stress and injuries received during intraspecific competition for mates and resources, and eventually, the ever-increasing effects of senescence. The higher mortality rate in adults may explain their more common preservation. Very large animals were rare because few individuals survived long enough to attain such sizes. High infant mortality rates, followed by reduced mortality among juveniles and a sudden increase in mortality after sexual maturity, with very few animals reaching maximum size, is a pattern observed in many modern large mammals, including elephants, African buffalo, and rhinoceros. The same pattern is also seen in other tyrannosaurids. The comparison with modern animals and other tyrannosaurids lends support to this life history hypothesis, but bias in the fossil record may still play a large role, especially since more than two-thirds of all "Albertosaurus" specimens are known from one locality.
The Dry Island bonebed discovered by Barnum Brown and his crew contains the remains of 26 "Albertosaurus", the most individuals found in one locality of any large Cretaceous theropod, and the second-most of any large theropod dinosaur behind the "Allosaurus" assemblage at the Cleveland-Lloyd Dinosaur Quarry in Utah. The group seems to be composed of one very old adult; eight adults between 17 and 23 years old; seven sub-adults undergoing their rapid growth phases at between 12 and 16 years old; and six juveniles between the ages of 2 and 11 years, who had not yet reached the growth phase.
The near-absence of herbivore remains and the similar state of preservation common to the many individuals at the "Albertosaurus" bonebed quarry led Currie to conclude that the locality was not a predator trap like the La Brea Tar Pits in California, and that all of the preserved animals died at the same time. Currie claims this as evidence of pack behaviour. Other scientists are skeptical, observing that the animals may have been driven together by drought, flood or for other reasons.
There is plentiful evidence for gregarious behaviour among herbivorous dinosaurs, including ceratopsians and hadrosaurs. However, only rarely are so many dinosaurian predators found at the same site. Small theropods like "Deinonychus" and "Coelophysis" have been found in aggregations, as have larger predators like "Allosaurus" and "Mapusaurus". There is some evidence of gregarious behaviour in other tyrannosaurids as well. Fragmentary remains of smaller individuals were found alongside "Sue", the "Tyrannosaurus" mounted in the Field Museum of Natural History in Chicago, and a bonebed in the Two Medicine Formation of Montana contains at least three specimens of "Daspletosaurus", preserved alongside several hadrosaurs. These findings may corroborate the evidence for social behaviour in "Albertosaurus", although some or all of the above localities may represent temporary or unnatural aggregations. Others have speculated that instead of social groups, at least some of these finds represent Komodo dragon-like mobbing of carcasses, where aggressive competition leads to some of the predators being killed and cannibalized.
Currie has also speculated on the pack-hunting habits of "Albertosaurus". The leg proportions of the smaller individuals were comparable to those of ornithomimids, which were probably among the fastest dinosaurs. Younger "Albertosaurus" were probably equally fleet-footed, or at least faster than their prey. Currie hypothesized that the younger members of the pack may have been responsible for driving their prey towards the adults, who were larger and more powerful, but also slower. Juveniles may also have had different lifestyles than adults, filling predator niches between the enormous adults and the smaller contemporaneous theropods, the largest of which were two orders of magnitude smaller than adult "Albertosaurus" in mass. A similar situation is observed in modern Komodo dragons, with hatchlings beginning life as small insectivores before growing to become the dominant predators on their islands. However, as the preservation of behaviour in the fossil record is exceedingly rare, these ideas cannot readily be tested. In 2010, Currie, though still favouring the hunting pack hypothesis, admitted that the concentration could have been brought about by other causes, such as a slowly rising water level during an extended flood.
In 2009, researchers hypothesized that smooth-edged holes found in the fossil jaws of tyrannosaurid dinosaurs such as "Albertosaurus" were caused by a parasite similar to "Trichomonas gallinae", which infects birds. They suggested that tyrannosaurids transmitted the infection by biting each other, and that the infection impaired their ability to eat food.
In 2001, Bruce Rothschild and others published a study examining evidence for stress fractures and tendon avulsions in theropod dinosaurs and the implications for their behavior. They found that only one of the 319 "Albertosaurus" foot bones checked for stress fractures actually had them and none of the four hand bones did. The scientists found that stress fractures were "significantly" less common in "Albertosaurus" than in the carnosaur "Allosaurus". ROM 807, the holotype of "A. arctunguis" (now referred to "A. sarcophagus"), had a deep hole in the iliac blade, although the describer of the species did not recognize this as pathological. The specimen also contains some exostosis on the fourth left metatarsal. In 1970, two of the five "Albertosaurus sarcophagus" specimens with humeri were reported by Dale Russell as having pathological damage to them.
In 2010, the health of the Dry Island "Albertosaurus" assemblage was reported upon. Most specimens showed no sign of disease. Three phalanges of the foot bore strange bony spurs, abnormal ossifications of the tendons known as enthesophytes, whose cause is unknown. Two ribs and a belly-rib showed signs of breaking and healing. One adult specimen had a left lower jaw showing a puncture wound and both healed and unhealed bite marks. The low number of abnormalities compares favourably with the health condition of a "Majungasaurus" population for which it was established in 2007 that 19% of individuals showed bone pathologies.
Most fossils of "Albertosaurus sarcophagus" are known from the upper Horseshoe Canyon Formation in Alberta. These younger units of this geologic formation date to the early Maastrichtian stage of the Late Cretaceous Period, 70 to 68 Ma (million years ago). Immediately below this formation is the Bearpaw Shale, a marine formation representing a section of the Western Interior Seaway. The seaway was receding as the climate cooled and sea levels subsided towards the end of the Cretaceous, exposing land that had previously been underwater. It was not a smooth process, however, and the seaway would periodically rise to cover parts of the region throughout Horseshoe Canyon times before finally receding altogether in the years after. Due to the changing sea levels, many different environments are represented in the Horseshoe Canyon Formation, including offshore and near-shore marine habitats and coastal habitats like lagoons, estuaries and tidal flats. Numerous coal seams represent ancient peat swamps. Like most of the other vertebrate fossils from the formation, "Albertosaurus" remains are found in deposits laid down in the deltas and floodplains of large rivers during the latter half of Horseshoe Canyon times.
The fauna of the Horseshoe Canyon Formation is well known, as vertebrate fossils, including those of dinosaurs, are quite common. Sharks, rays, sturgeons, bowfins, gars and the gar-like "Aspidorhynchus" made up the fish fauna. Mammals included multituberculates and the marsupial "Didelphodon". The saltwater plesiosaur "Leurospondylus" has been found in marine sediments in the Horseshoe Canyon, while freshwater environments were populated by turtles, "Champsosaurus", and crocodilians like "Leidyosuchus" and "Stangerochampsa". Dinosaurs dominate the fauna, especially hadrosaurs, which make up half of all dinosaurs known, including the genera "Edmontosaurus", "Saurolophus" and "Hypacrosaurus". Ceratopsians and ornithomimids were also very common, together making up another third of the known fauna. Along with much rarer ankylosaurians and pachycephalosaurs, all of these animals would have been prey for a diverse array of carnivorous theropods, including troodontids, dromaeosaurids, and caenagnathids. Intermingled with the "Albertosaurus" remains of the Dry Island bonebed, the bones of the small theropod "Albertonykus" were found. Adult "Albertosaurus" were the apex predators in this environment, with intermediate niches possibly filled by juvenile albertosaurs.
Assembly language
In computer programming, assembly language (or assembler language), often abbreviated asm, is any low-level programming language in which there is a very strong correspondence between the instructions in the language and the architecture's machine code instructions. Because assembly depends on the machine code instructions, every assembler has its own assembly language which is designed for exactly one specific computer architecture. Assembly language may also be called "symbolic machine code".
Assembly code is converted into executable machine code by a utility program referred to as an "assembler". The conversion process is referred to as "assembly", as in "assembling" the source code. Assembly language usually has one statement per machine instruction (1:1), but comments and statements that are assembler directives, macros, and symbolic labels of program and memory locations are often also supported.
The term "assembler" is generally attributed to Wilkes, Wheeler and Gill in their 1951 book "The preparation of programs for an electronic digital computer", who, however, used the term to mean "a program that assembles another program consisting of several sections into a single program".
Each assembly language is specific to a particular computer architecture and sometimes to an operating system. However, some assembly languages do not provide specific syntax for operating system calls, and most assembly languages can be used universally with any operating system, as the language provides access to all the real capabilities of the processor, upon which all system call mechanisms ultimately rest. In contrast to assembly languages, most high-level programming languages are generally portable across multiple architectures but require interpreting or compiling, a much more complicated task than assembling.
The computational step when an assembler is processing a program is called "assembly time".
Assembly language uses a mnemonic to represent each low-level machine instruction or opcode, typically also each architectural register, flag, etc. Many operations require one or more operands in order to form a complete instruction. Most assemblers permit named constants, registers, and labels for program and memory locations, and can calculate expressions for operands. Thus, the programmers are freed from tedious repetitive calculations and assembler programs are much more readable than machine code. Depending on the architecture, these elements may also be combined for specific instructions or addressing modes using offsets or other data as well as fixed addresses. Many assemblers offer additional mechanisms to facilitate program development, to control the assembly process, and to aid debugging.
An assembler program creates object code by translating combinations of mnemonics and syntax for operations and addressing modes into their numerical equivalents. This representation typically includes an "operation code" ("opcode") as well as other control bits and data. The assembler also calculates constant expressions and resolves symbolic names for memory locations and other entities. The use of symbolic references is a key feature of assemblers, saving tedious calculations and manual address updates after program modifications. Most assemblers also include macro facilities for performing textual substitution – e.g., to generate common short sequences of instructions as inline, instead of "called" subroutines.
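This translation step can be sketched in Python. The toy instruction set below (mnemonics LOAD and JMP, one-word instructions, the opcode values themselves) is invented purely for illustration and does not correspond to any real architecture; the point is only how symbolic names are resolved to numeric equivalents.

```python
# Toy assembler: mnemonics become numeric opcodes, and symbolic labels
# become the addresses they were defined at. Hypothetical ISA.
OPCODES = {"LOAD": 0x01, "JMP": 0x02}

def assemble(lines):
    labels, program, addr = {}, [], 0
    # First walk: record the address of every label definition.
    for line in lines:
        if line.endswith(":"):
            labels[line[:-1]] = addr
        else:
            addr += 1               # one word per instruction assumed
    # Second walk: translate mnemonics and resolve symbolic operands.
    for line in lines:
        if line.endswith(":"):
            continue
        mnemonic, operand = line.split()
        value = labels[operand] if operand in labels else int(operand, 0)
        program.append((OPCODES[mnemonic], value))
    return program

print(assemble(["start:", "LOAD 42", "JMP start"]))  # [(1, 42), (2, 0)]
```

Because "start" resolves to address 0 automatically, inserting an instruction before it would change the emitted operand without any manual address updates, which is exactly the saving described above.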
Some assemblers may also be able to perform some simple types of instruction set-specific optimizations. One concrete example of this may be the ubiquitous x86 assemblers from various vendors. Most of them are able, on request, to perform jump-instruction replacements (long jumps replaced by short or relative jumps) in any number of passes, an optimization called jump-sizing. Others may even do simple rearrangement or insertion of instructions, such as some assemblers for RISC architectures that can help optimize a sensible instruction scheduling to exploit the CPU pipeline as efficiently as possible.
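The jump-sizing decision can be sketched as a simple size rule. The x86 encodings used for concreteness here are the short unconditional jump (opcode EB, 1-byte displacement, 2 bytes total) and the near jump (opcode E9, 4-byte displacement, 5 bytes total); real assemblers iterate this choice, since shrinking one jump can pull other targets into short range.

```python
# Pick the jump encoding based on whether the displacement fits in a
# signed byte (x86: EB rel8 = 2 bytes, E9 rel32 = 5 bytes).
def size_jump(displacement):
    if -128 <= displacement <= 127:
        return ("short", 2)   # EB + 8-bit displacement
    return ("near", 5)        # E9 + 32-bit displacement

print(size_jump(100))    # ('short', 2)
print(size_jump(4000))   # ('near', 5)
```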
Like early programming languages such as Fortran, Algol, Cobol and Lisp, assemblers have been available since the 1950s and the first generations of text-based computer interfaces. However, assemblers came first as they are far simpler to write than compilers for high-level languages. This is because each mnemonic along with the addressing modes and operands of an instruction translates rather directly into the numeric representations of that particular instruction, without much context or analysis. There have also been several classes of translators and semi-automatic code generators with properties similar to both assembly and high-level languages, with Speedcode as perhaps one of the better known examples.
There may be several assemblers with different syntax for a particular CPU or instruction set architecture. For instance, an instruction to add memory data to a register in an x86-family processor might be codice_1, in original "Intel syntax", whereas this would be written codice_2 in the "AT&T syntax" used by the GNU Assembler. Despite different appearances, different syntactic forms generally generate the same numeric machine code. A single assembler may also have different modes in order to support variations in syntactic forms as well as their exact semantic interpretations (such as FASM-syntax, TASM-syntax, ideal mode, etc., in the special case of x86 assembly programming).
There are two types of assemblers based on how many passes through the source are needed (how many times the assembler reads the source) to produce the object file.
In both cases, the assembler must be able to determine the size of each instruction on the initial passes in order to calculate the addresses of subsequent symbols. This means that if the size of an operation referring to an operand defined later depends on the type or distance of the operand, the assembler will make a pessimistic estimate when first encountering the operation, and if necessary, pad it with one or more "no-operation" instructions in a later pass or in the errata. In an assembler with peephole optimization, addresses may be recalculated between passes to allow replacing pessimistic code with code tailored to the exact distance from the target.
The original reason for the use of one-pass assemblers was memory size and speed of assembly – often a second pass would require storing the symbol table in memory (to handle forward references), rewinding and rereading the program source on tape, or rereading a deck of cards or punched paper tape. Later computers with much larger memories (especially disc storage) had the space to perform all necessary processing without such re-reading. The advantage of the multi-pass assembler is that the absence of errata makes the linking process (or the program load if the assembler directly produces executable code) faster.
Example: in the following code snippet, a one-pass assembler would be able to determine the address of the backward reference BKWD when assembling statement S2, but would not be able to determine the address of the forward reference FWD when assembling the branch statement S1; indeed, FWD may be undefined. A two-pass assembler would determine both addresses in pass 1, so they would be known when generating code in pass 2.
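A Python sketch of the pass-1 bookkeeping that makes this work: the statement layout, label names (BKWD, FWD, S1, S2 as in the text), and the fixed four-byte instruction width are all illustrative assumptions, not part of any real assembler.

```python
# Pass 1 of a two-pass assembler: assign an address to every statement
# and record label definitions. After this pass both the backward
# reference (BKWD) and the forward reference (FWD) are in the symbol
# table, so pass 2 can encode S1 and S2 without guessing.
source = [
    ("BKWD", "NOP"),     # an earlier statement defining BKWD
    (None,   "B FWD"),   # S1: branch with a forward reference
    (None,   "B BKWD"),  # S2: branch with a backward reference
    ("FWD",  "NOP"),     # the forward target
]

def pass1(stmts, width=4):
    symtab, addr = {}, 0
    for label, _ in stmts:
        if label:
            symtab[label] = addr
        addr += width            # fixed instruction width assumed
    return symtab

print(pass1(source))  # {'BKWD': 0, 'FWD': 12}
```

A one-pass assembler running over the same statements would know BKWD when it reached S2 but would have nothing for FWD at S1, which is exactly the situation the paragraph describes.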
More sophisticated high-level assemblers provide language abstractions such as:
See Language design below for more details.
A program written in assembly language consists of a series of mnemonic processor instructions and meta-statements (known variously as directives, pseudo-instructions, and pseudo-ops), comments and data. Assembly language instructions usually consist of an opcode mnemonic followed by a list of data, arguments or parameters. These are translated by an assembler into machine language instructions that can be loaded into memory and executed.
For example, the instruction below tells an x86/IA-32 processor to move an immediate 8-bit value into a register. The binary code for this instruction is 10110 followed by a 3-bit identifier for which register to use. The identifier for the "AL" register is 000, so the following machine code loads the "AL" register with the data 01100001.
This binary computer code can be made more human-readable by expressing it in hexadecimal as follows.
Here, codice_3 means 'Move a copy of the following value into "AL"', and codice_4 is a hexadecimal representation of the value 01100001, which is 97 in decimal. Assembly language for the 8086 family provides the mnemonic MOV (an abbreviation of "move") for instructions such as this, so the machine code above can be written as follows in assembly language, complete with an explanatory comment if required, after the semicolon. This is much easier to read and to remember.
MOV AL, 61h ; Load AL with 97 decimal (61 hex)
In some assembly languages (including this one) the same mnemonic, such as MOV, may be used for a family of related instructions for loading, copying and moving data, whether these are immediate values, values in registers, or memory locations pointed to by values in registers or by immediate (a/k/a direct) addresses. Other assemblers may use separate opcode mnemonics such as L for "move memory to register", ST for "move register to memory", LR for "move register to register", MVI for "move immediate operand to memory", etc.
If the same mnemonic is used for different instructions, that means that the mnemonic corresponds to several different binary instruction codes, excluding data (e.g. the codice_5 in this example), depending on the operands that follow the mnemonic. For example, for the x86/IA-32 CPUs, the Intel assembly language syntax codice_6 represents an instruction that moves the contents of register "AH" into register "AL". The hexadecimal form of this instruction is:
The first byte, 88h, identifies a move between a byte-sized register and either another register or memory, and the second byte, E0h, is encoded (with three bit-fields) to specify that both operands are registers, the source is "AH", and the destination is "AL".
In a case like this where the same mnemonic can represent more than one binary instruction, the assembler determines which instruction to generate by examining the operands. In the first example, the operand codice_5 is a valid hexadecimal numeric constant and is not a valid register name, so only the codice_3 instruction can be applicable. In the second example, the operand codice_9 is a valid register name and not a valid numeric constant (hexadecimal, decimal, octal, or binary), so only the codice_10 instruction can be applicable.
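The two bytes of this register-to-register move can be reconstructed from the fields just described. The sketch below packs a ModRM byte from mod = 11 (register operand), the three-bit source register field, and the three-bit destination field, using the standard 8-bit register codes:

```python
# Rebuild the encoding of MOV AL, AH: opcode 88h (byte-sized move),
# then a ModRM byte = mod(2 bits) | reg(3 bits, source) | rm(3 bits, dest).
REG8 = {"AL": 0b000, "CL": 0b001, "DL": 0b010, "BL": 0b011,
        "AH": 0b100, "CH": 0b101, "DH": 0b110, "BH": 0b111}

def encode_mov_reg8(dst, src):
    modrm = (0b11 << 6) | (REG8[src] << 3) | REG8[dst]
    return bytes([0x88, modrm])

print(encode_mov_reg8("AL", "AH").hex())  # '88e0'
```

With mod = 11, reg = 100 (AH) and rm = 000 (AL), the second byte comes out as E0h, matching the text.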
Assembly languages are always designed so that this sort of unambiguousness is universally enforced by their syntax. For example, in the Intel x86 assembly language, a hexadecimal constant must start with a numeral digit, so that the hexadecimal number 'A' (equal to decimal ten) would be written as codice_11 or codice_12, not codice_9, specifically so that it cannot appear to be the name of register "AH". (The same rule also prevents ambiguity with the names of registers "BH", "CH", and "DH", as well as with any user-defined symbol that ends with the letter "H" and otherwise contains only characters that are hexadecimal digits, such as the word "BEACH".)
Returning to the original example, while the x86 opcode 10110000 (codice_3) copies an 8-bit value into the "AL" register, 10110001 (codice_15) moves it into "CL" and 10110010 (codice_16) does so into "DL". Assembly language examples for these follow.
MOV AL, 1h ; Load AL with immediate value 1
MOV CL, 2h ; Load CL with immediate value 2
MOV DL, 3h ; Load DL with immediate value 3
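The opcode pattern behind these three lines, the five fixed bits 10110 followed by a three-bit register identifier, can be computed directly. This sketch covers only the registers named in the text:

```python
# "MOV reg8, imm8" on x86: opcode = 10110 000b (B0h) OR'ed with the
# 3-bit register id, followed by the immediate byte.
REG_ID = {"AL": 0b000, "CL": 0b001, "DL": 0b010}  # subset shown above

def encode_mov_imm8(reg, value):
    return bytes([0b10110000 | REG_ID[reg], value])

print(encode_mov_imm8("AL", 0x61).hex())  # 'b061'
print(encode_mov_imm8("CL", 0x02).hex())  # 'b102'
```

This is why the earlier example assembled to B0h 61h: B0h is 10110000, the AL identifier 000 leaves it unchanged, and 61h is the immediate value.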
The syntax of MOV can also be more complex as the following examples show.
MOV EAX, [EBX] ; Move the 4 bytes in memory at the address contained in EBX into EAX
MOV [ESI+EAX], CL ; Move the contents of CL into the byte at address ESI+EAX
MOV DS, DX ; Move the contents of DX into segment register DS
In each case, the MOV mnemonic is translated directly into one of the opcodes 88-8C, 8E, A0-A3, B0-BF, C6 or C7 by an assembler, and the programmer normally does not have to know or remember which.
Transforming assembly language into machine code is the job of an assembler, and the reverse can at least partially be achieved by a disassembler. Unlike high-level languages, there is a one-to-one correspondence between many simple assembly statements and machine language instructions. However, in some cases, an assembler may provide "pseudoinstructions" (essentially macros) which expand into several machine language instructions to provide commonly needed functionality. For example, for a machine that lacks a "branch if greater or equal" instruction, an assembler may provide a pseudoinstruction that expands to the machine's "set if less than" and "branch if zero (on the result of the set instruction)". Most full-featured assemblers also provide a rich macro language (discussed below) which is used by vendors and programmers to generate more complex code and data sequences. Since the information about pseudoinstructions and macros defined in the assembler environment is not present in the object program, a disassembler cannot reconstruct the macro and pseudoinstruction invocations but can only disassemble the actual machine instructions that the assembler generated from those abstract assembly-language entities. Likewise, since comments in the assembly language source file are ignored by the assembler and have no effect on the object code it generates, a disassembler is always completely unable to recover source comments.
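The "branch if greater or equal" expansion above can be modelled as a small rewrite rule. The mnemonics BGE, SLT and BZ and the scratch register tmp are invented for this sketch (the scheme loosely resembles how MIPS assemblers synthesize bge, but no real syntax is claimed):

```python
# Expand a pseudoinstruction into real instructions: a machine with no
# BGE gets "set if less than" plus "branch if zero", as described above.
def expand(statement):
    op, *args = statement.split()
    if op == "BGE":                       # pseudo: BGE a, b, label
        a, b, label = (x.rstrip(",") for x in args)
        return [f"SLT tmp, {a}, {b}",     # tmp = 1 if a < b else 0
                f"BZ tmp, {label}"]       # taken exactly when a >= b
    return [statement]                    # real instructions pass through

print(expand("BGE r1, r2, loop"))
# ['SLT tmp, r1, r2', 'BZ tmp, loop']
```

As the paragraph notes, a disassembler working on the resulting object code would see only the SLT and BZ instructions; the BGE that produced them is gone.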
Each computer architecture has its own machine language. Computers differ in the number and type of operations they support, in the different sizes and numbers of registers, and in the representations of data in storage. While most general-purpose computers are able to carry out essentially the same functionality, the ways they do so differ; the corresponding assembly languages reflect these differences.
Multiple sets of mnemonics or assembly-language syntax may exist for a single instruction set, typically instantiated in different assembler programs. In these cases, the most popular one is usually that supplied by the CPU manufacturer and used in its documentation.
Two examples of CPUs that have two different sets of mnemonics are the Intel 8080 family and the Intel 8086/8088. Because Intel claimed copyright on its assembly language mnemonics (on each page of their documentation published in the 1970s and early 1980s, at least), some companies that independently produced CPUs compatible with Intel instruction sets invented their own mnemonics. The Zilog Z80 CPU, an enhancement of the Intel 8080A, supports all the 8080A instructions plus many more; Zilog invented an entirely new assembly language, not only for the new instructions but also for all of the 8080A instructions. For example, where Intel uses the mnemonics "MOV", "MVI", "LDA", "STA", "LXI", "LDAX", "STAX", "LHLD", and "SHLD" for various data transfer instructions, the Z80 assembly language uses the mnemonic "LD" for all of them. A similar case is the NEC V20 and V30 CPUs, enhanced copies of the Intel 8086 and 8088, respectively. Like Zilog with the Z80, NEC invented new mnemonics for all of the 8086 and 8088 instructions, to avoid accusations of infringement of Intel's copyright. (It is questionable whether such copyrights can be valid, and later CPU companies such as AMD and Cyrix republished Intel's x86/IA-32 instruction mnemonics exactly with neither permission nor legal penalty.) It is doubtful whether in practice many people who programmed the V20 and V30 actually wrote in NEC's assembly language rather than Intel's; since any two assembly languages for the same instruction set architecture are isomorphic (somewhat like English and Pig Latin), there is no requirement to use a manufacturer's own published assembly language with that manufacturer's products.
There is a large degree of diversity in the way the authors of assemblers categorize statements and in the nomenclature that they use. In particular, some describe anything other than a machine mnemonic or extended mnemonic as a pseudo-operation (pseudo-op). A typical assembly language consists of 3 types of instruction statements that are used to define program operations:
Instructions (statements) in assembly language are generally very simple, unlike those in high-level languages. Generally, a mnemonic is a symbolic name for a single executable machine language instruction (an opcode), and there is at least one opcode mnemonic defined for each machine language instruction. Each instruction typically consists of an "operation" or "opcode" plus zero or more "operands". Most instructions refer to a single value or a pair of values. Operands can be immediate (value coded in the instruction itself), registers specified in the instruction or implied, or the addresses of data located elsewhere in storage. This is determined by the underlying processor architecture: the assembler merely reflects how this architecture works. "Extended mnemonics" are often used to specify a combination of an opcode with a specific operand, e.g., the System/360 assemblers use as an extended mnemonic for with a mask of 15 and ("NO OPeration" – do nothing for one step) for with a mask of 0.
"Extended mnemonics" are often used to support specialized uses of instructions, often for purposes not obvious from the instruction name. For example, many CPUs do not have an explicit NOP instruction, but do have instructions that can be used for the purpose. In 8086 CPUs the instruction is used for , with being a pseudo-opcode to encode the instruction . Some disassemblers recognize this and will decode the instruction as . Similarly, IBM assemblers for System/360 and System/370 use the extended mnemonics and for and with zero masks. For the SPARC architecture, these are known as "synthetic instructions".
Some assemblers also support simple built-in macro-instructions that generate two or more machine instructions. For instance, with some Z80 assemblers the instruction is recognized to generate followed by . These are sometimes known as "pseudo-opcodes".
Mnemonics are arbitrary symbols; in 1985 the IEEE published Standard 694 for a uniform set of mnemonics to be used by all assemblers. The standard has since been withdrawn.
There are instructions used to define data elements to hold data and variables. They define the type of data, the length and the alignment of data. These instructions can also define whether the data is available to outside programs (programs assembled separately) or only to the program in which the data section is defined. Some assemblers classify these as pseudo-ops.
Assembly directives, also called pseudo-opcodes, pseudo-operations or pseudo-ops, are commands given to an assembler "directing it to perform operations other than assembling instructions". Directives affect how the assembler operates and "may affect the object code, the symbol table, the listing file, and the values of internal assembler parameters". Sometimes the term "pseudo-opcode" is reserved for directives that generate object code, such as those that generate data.
The names of pseudo-ops often start with a dot to distinguish them from machine instructions. Pseudo-ops can make the assembly of the program dependent on parameters input by a programmer, so that one program can be assembled in different ways, perhaps for different applications. Or, a pseudo-op can be used to manipulate presentation of a program to make it easier to read and maintain. Another common use of pseudo-ops is to reserve storage areas for run-time data and optionally initialize their contents to known values.
Symbolic assemblers let programmers associate arbitrary names ("labels" or "symbols") with memory locations and various constants. Usually, every constant and variable is given a name so instructions can reference those locations by name, thus promoting self-documenting code. In executable code, the name of each subroutine is associated with its entry point, so any calls to a subroutine can use its name. Inside subroutines, GOTO destinations are given labels. Some assemblers support "local symbols" which are often lexically distinct from normal symbols (e.g., the use of "10$" as a GOTO destination).
Some assemblers, such as NASM, provide flexible symbol management, letting programmers manage different namespaces, automatically calculate offsets within data structures, and assign labels that refer to literal values or the result of simple computations performed by the assembler. Labels can also be used to initialize constants and variables with relocatable addresses.
Assembly languages, like most other computer languages, allow comments to be added to program source code that will be ignored during assembly. Judicious commenting is essential in assembly language programs, as the meaning and purpose of a sequence of binary machine instructions can be difficult to determine. The "raw" (uncommented) assembly language generated by compilers or disassemblers is quite difficult to read when changes must be made.
Many assemblers support "predefined macros", and others support "programmer-defined" (and repeatedly re-definable) macros involving sequences of text lines in which variables and constants are embedded. The macro definition is most commonly a mixture of assembler statements, e.g., directives, symbolic machine instructions, and templates for assembler statements. This sequence of text lines may include opcodes or directives. Once a macro has been defined its name may be used in place of a mnemonic. When the assembler processes such a statement, it replaces the statement with the text lines associated with that macro, then processes them as if they existed in the source code file (including, in some assemblers, expansion of any macros existing in the replacement text). Macros in this sense date to IBM autocoders of the 1950s.
In assembly language, the term "macro" represents a more comprehensive concept than it does in some other contexts, such as the pre-processor in the C programming language, where its #define directive is typically used to create short single-line macros. Assembler macro instructions, like macros in PL/I and some other languages, can be lengthy "programs" by themselves, executed by interpretation by the assembler during assembly.
Since macros can have 'short' names but expand to several or indeed many lines of code, they can be used to make assembly language programs appear to be far shorter, requiring fewer lines of source code, as with higher-level languages. They can also be used to add higher levels of structure to assembly programs and, optionally, to introduce embedded debugging code via parameters and other similar features.
Macro assemblers often allow macros to take parameters. Some assemblers include quite sophisticated macro languages, incorporating such high-level language elements as optional parameters, symbolic variables, conditionals, string manipulation, and arithmetic operations, all usable during the execution of a given macro, and allowing macros to save context or exchange information. Thus a macro might generate numerous assembly language instructions or data definitions, based on the macro arguments. This could be used to generate record-style data structures or "unrolled" loops, for example, or could generate entire algorithms based on complex parameters. For instance, a "sort" macro could accept the specification of a complex sort key and generate code crafted for that specific key, not needing the run-time tests that would be required for a general procedure interpreting the specification. An organization using assembly language that has been heavily extended using such a macro suite can be considered to be working in a higher-level language since such programmers are not working with a computer's lowest-level conceptual elements. Underlining this point, macros were used to implement an early virtual machine in SNOBOL4 (1967), which was written in the SNOBOL Implementation Language (SIL), an assembly language for a virtual machine. The target machine would translate this to its native code using a macro assembler. This allowed a high degree of portability for the time.
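As a sketch of this kind of parameter-driven code generation, a repeat-style macro can emit an "unrolled" loop. The register names and the `&I` substitution marker below are invented, echoing the `&`-parameter convention of some macro assemblers rather than reproducing any specific one.

```python
# Sketch: a repeat-style macro emits n copies of its body, substituting the
# loop index for the marker &I, yielding an unrolled loop at assembly time.

def expand_unroll(body, n):
    return [line.replace("&I", str(i)) for i in range(n) for line in body]

unrolled = expand_unroll(["load  r1, table+&I", "add   r0, r1"], 3)
print("\n".join(unrolled))
```

One two-line invocation thus expands into six generated instructions, which is the sense in which macros let a program be "far shorter" in source than in object code.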
Macros were used to customize large-scale software systems for specific customers in the mainframe era and were also used by customer personnel to satisfy their employers' needs by building specific versions of manufacturer operating systems. This was done, for example, by systems programmers working with IBM's Conversational Monitor System / Virtual Machine (VM/CMS) and with IBM's "real time transaction processing" add-ons, Customer Information Control System (CICS), and ACP/TPF, the airline/financial system that began in the 1970s and still runs many large computer reservation systems (CRS) and credit card systems today.
It is also possible to use solely the macro processing abilities of an assembler to generate code written in completely different languages. For example, a version of a program in COBOL can be generated using a pure macro assembler program containing lines of COBOL code inside assembly-time operators that instruct the assembler to generate arbitrary code. IBM OS/360 uses macros to perform system generation: the user specifies options by coding a series of assembler macros, and assembling these macros generates a job stream to build the system, including job control language and utility control statements.
This is because, as was realized in the 1960s, the concept of "macro processing" is independent of the concept of "assembly": the former is, in modern terms, closer to text processing than to generating object code. The concept of macro processing appeared, and persists, in the C programming language, which supports "preprocessor instructions" to set variables and make conditional tests on their values. Unlike certain earlier macro processors inside assemblers, the C preprocessor is not Turing-complete because it lacks the ability to loop or "go to" (the latter of which would allow a program to loop).
Despite the power of macro processing, it fell into disuse in many high-level languages (major exceptions being C, C++ and PL/I) while remaining a perennial feature of assemblers.
Macro parameter substitution is strictly by name: at macro processing time, the value of a parameter is textually substituted for its name. The most famous class of resulting bugs was the use of a parameter that itself was an expression and not a simple name when the macro writer expected a name. Consider a macro foo whose body contains the instruction load a*b, where a is the formal parameter: the intention was that the caller would provide the name of a variable, and the "global" variable or constant b would be used to multiply "a". If foo is instead called with an expression such as x-c as the parameter, the expansion load x-c*b occurs, which is evaluated as x-(c*b) rather than the intended (x-c)*b. To avoid any possible ambiguity, users of macro processors can parenthesize formal parameters inside macro definitions, or callers can parenthesize the input parameters.
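The pitfall can be sketched in a few lines of Python standing in for the macro processor's textual substitution. The `&a` formal-parameter spelling and the `load` mnemonic are illustrative, not taken from a specific assembler.

```python
# By-name macro substitution is purely textual: the actual argument replaces
# the formal parameter as a string, with no regard for operator precedence.

def expand(body, formal, actual):
    return body.replace(formal, actual)

print(expand("load &a*b", "&a", "x"))      # load x*b      -- as intended
print(expand("load &a*b", "&a", "x-c"))    # load x-c*b    -- parses as x-(c*b)
print(expand("load &a*b", "&a", "(x-c)"))  # load (x-c)*b  -- parentheses fix it
```

The third call shows the caller-side fix described above; the macro writer could equally have written the body as `load (&a)*b`.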
Packages of macros have been written providing structured programming elements to encode execution flow. The earliest example of this approach was in the Concept-14 macro set, originally proposed by Harlan Mills (March 1970), and implemented by Marvin Kessler at IBM's Federal Systems Division, which provided IF/ELSE/ENDIF and similar control flow blocks for OS/360 assembler programs. This was a way to reduce or eliminate the use of GOTO operations in assembly code, one of the main factors causing spaghetti code in assembly language. This approach was widely accepted in the early 1980s (the latter days of large-scale assembly language use).
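The mechanism behind such packages can be sketched in Python; the label scheme and branch mnemonics are invented for illustration. An IF/ELSE/ENDIF macro set expands each structured block into exactly the labels and conditional branches the programmer would otherwise write by hand with GOTOs.

```python
# Sketch: expand a structured IF/ELSE/ENDIF into branch-and-label form, the
# way Concept-14-style macro packages generated control flow for assemblers.

def expand_if(branch_if_false, then_body, else_body, n=1):
    out = [f"{branch_if_false}  else_{n}"]      # skip the THEN arm on failure
    out += then_body
    out += [f"jmp  endif_{n}", f"else_{n}:"]    # jump over the ELSE arm
    out += else_body
    out += [f"endif_{n}:"]
    return out

expanded = expand_if("jne", ["move r1, r2"], ["clear r1"])
print("\n".join(expanded))
```

The counter `n` stands in for the unique-label generation a real macro processor performs so that nested or repeated IF blocks do not collide.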
A curious design was A-natural, a "stream-oriented" assembler for 8080/Z80 processors from Whitesmiths Ltd. (developers of the Unix-like Idris operating system and of what was reported to be the first commercial C compiler). The language was classified as an assembler because it worked with raw machine elements such as opcodes, registers, and memory references; but it incorporated an expression syntax to indicate execution order. Parentheses and other special symbols, along with block-oriented structured programming constructs, controlled the sequence of the generated instructions. A-natural was built as the object language of a C compiler, rather than for hand-coding, but its logical syntax won some fans.
There has been little apparent demand for more sophisticated assemblers since the decline of large-scale assembly language development. In spite of that, they are still being developed and applied in cases where resource constraints or peculiarities in the target system's architecture prevent the effective use of higher-level languages.
Assemblers with a strong macro engine allow structured programming via macros, such as the switch macro provided with the Masm32 package (this code is a complete program):
include \masm32\include\masm32rt.inc ; use the Masm32 library
.code
demomain:
    switch rv(nrandom, 9)  ; generate a number between 0 and 8
    case 0
        print "case 0"
    endsw
    exit
end demomain
Assembly languages were not available at the time when the stored-program computer was introduced. Kathleen Booth "is credited with inventing assembly language" based on theoretical work she began in 1947, while working on the ARC2 at Birkbeck, University of London following consultation by Andrew Booth (later her husband) with mathematician John von Neumann and physicist Herman Goldstine at the Institute for Advanced Study.
In late 1948, the Electronic Delay Storage Automatic Calculator (EDSAC) had an assembler (named "initial orders") integrated into its bootstrap program. It used one-letter mnemonics developed by David Wheeler, who is credited by the IEEE Computer Society as the creator of the first "assembler". Reports on the EDSAC introduced the term "assembly" for the process of combining fields into an instruction word. SOAP (Symbolic Optimal Assembly Program) was an assembly language for the IBM 650 computer written by Stan Poley in 1955.
Assembly languages eliminate much of the error-prone, tedious, and time-consuming first-generation programming needed with the earliest computers, freeing programmers from chores such as remembering numeric codes and calculating addresses. They were once widely used for all sorts of programming. However, by the 1980s (1990s on microcomputers), their use had largely been supplanted by higher-level languages, in the search for improved programming productivity. Today, assembly language is still used for direct hardware manipulation, access to specialized processor instructions, or to address critical performance issues. Typical uses are device drivers, low-level embedded systems, and real-time systems.
Historically, numerous programs have been written entirely in assembly language. The Burroughs MCP (1961) was the first operating system that was not developed entirely in assembly language; it was written in Executive Systems Problem Oriented Language (ESPOL), an Algol dialect. Many commercial applications were written in assembly language as well, including a large amount of the IBM mainframe software written by large corporations. COBOL, FORTRAN and some PL/I eventually displaced much of this work, although a number of large organizations retained assembly-language application infrastructures well into the 1990s.
Most early microcomputers relied on hand-coded assembly language, including most operating systems and large applications. This was because these systems had severe resource constraints, imposed idiosyncratic memory and display architectures, and provided limited, buggy system services. Perhaps more important was the lack of first-class high-level language compilers suitable for microcomputer use. A psychological factor may have also played a role: the first generation of microcomputer programmers retained a hobbyist, "wires and pliers" attitude.
In a more commercial context, the biggest reasons for using assembly language were minimal bloat (size), minimal overhead, greater speed, and reliability.
Typical examples of large assembly language programs from this time are IBM PC DOS operating systems, the Turbo Pascal compiler and early applications such as the spreadsheet program Lotus 1-2-3. Assembly language was used to get the best performance out of the Sega Saturn, a console that was notoriously challenging to develop and program games for. The 1993 arcade game "NBA Jam" is another example.
Assembly language was long the primary development language for many popular home computers of the 1980s and 1990s (such as the MSX, Sinclair ZX Spectrum, Commodore 64, Commodore Amiga, and Atari ST). This was in large part because interpreted BASIC dialects on these systems offered insufficient execution speed, as well as insufficient facilities to take full advantage of the available hardware. Some systems even had an integrated development environment (IDE) with highly advanced debugging and macro facilities. Some compilers available for the Radio Shack TRS-80 and its successors could combine inline assembly source with high-level program statements. Upon compilation, a built-in assembler produced inline machine code.
There have always been debates over the usefulness and performance of assembly language relative to high-level languages.
Although assembly language has specific niche uses where it is important (see below), there are other tools for optimization.
The TIOBE index of programming language popularity ranks assembly language at 11, ahead of Visual Basic, for example. Assembler can be used to optimize for speed or optimize for size. In the case of speed optimization, modern optimizing compilers are claimed to render high-level languages into code that can run as fast as hand-written assembly, despite the counter-examples that can be found. The complexity of modern processors and memory sub-systems makes effective optimization increasingly difficult for compilers, as well as for assembly programmers. Moreover, increasing processor performance has meant that most CPUs sit idle most of the time, with delays caused by predictable bottlenecks such as cache misses, I/O operations and paging. This has made raw code execution speed a non-issue for many programmers.
There are some situations in which developers might choose to use assembly language:
Assembly language is still taught in most computer science and electronic engineering programs. Although few programmers today regularly work with assembly language as a tool, the underlying concepts remain important. Such fundamental topics as binary arithmetic, memory allocation, stack processing, character set encoding, interrupt processing, and compiler design would be hard to study in detail without a grasp of how a computer operates at the hardware level. Since a computer's behavior is fundamentally defined by its instruction set, the logical way to learn such concepts is to study an assembly language. Most modern computers have similar instruction sets, so studying a single assembly language is sufficient for learning the basic concepts, recognizing situations where the use of assembly language might be appropriate, and seeing how efficient executable code can be created from high-level languages. | https://en.wikipedia.org/wiki?curid=1368
Ambrosia
In the ancient Greek myths, ambrosia is the food or drink of the Greek gods, often depicted as conferring longevity or immortality upon whoever consumed it. It was brought to the gods in Olympus by doves and served by either Hebe or Ganymede at the heavenly feast.
"Ambrosia" is sometimes depicted in ancient art as distributed by a nymph labeled with that name and a nurse of Dionysus. In the myth of Lycurgus, the king attacked Ambrosia and Dionysus' entourage, causing the god to drive Lycurgus insane.
Ambrosia is very closely related to the gods' other form of sustenance, "nectar". The two terms may not have originally been distinguished, though in Homer's poems nectar is usually the drink and ambrosia the food of the gods; it was with ambrosia that Hera "cleansed all defilement from her lovely flesh", and with ambrosia that Athena prepared Penelope in her sleep, so that when she appeared for the final time before her suitors, the effects of years had been stripped away and they were inflamed with passion at the sight of her. On the other hand, in Alcman, nectar is the food, and in Sappho and Anaxandrides, ambrosia is the drink. A character in Aristophanes' "Knights" says, "I dreamed the goddess poured ambrosia over your head—out of a ladle." Both descriptions could be correct, as ambrosia could be a liquid considered a food (such as honey).
The consumption of ambrosia was typically reserved for divine beings. Upon his assumption into immortality on Olympus, Heracles is given ambrosia by Athena, while the hero Tydeus is denied the same when the goddess discovers him eating human brains. In one version of the myth of Tantalus, part of Tantalus' crime is that after tasting ambrosia himself, he attempts to steal some away to give to other mortals. Those who consumed ambrosia typically had not blood in their veins but ichor, the blood of immortals.
Both nectar and ambrosia are fragrant, and may be used as perfume: in the "Odyssey" Menelaus and his men are disguised as seals in untanned seal skins, "...and the deadly smell of the seal skins vexed us sore; but the goddess saved us; she brought ambrosia and put it under our nostrils." Homer speaks of ambrosial raiment, ambrosial locks of hair, even the gods' ambrosial sandals.
Among later writers, "ambrosia" has been so often used with generic meanings of "delightful liquid" that such late writers as Athenaeus, Paulus and Dioscurides employ it as a technical term in contexts of cookery, medicine, and botany. Pliny used the term in connection with different plants, as did early herbalists.
Additionally, some modern ethnomycologists, such as Danny Staples, identify ambrosia with the hallucinogenic mushroom "Amanita muscaria": "...it was the food of the gods, their ambrosia, and nectar was the pressed sap of its juices", Staples asserts.
W. H. Roscher thinks that both nectar and ambrosia were kinds of honey, in which case their power of conferring immortality would be due to the supposed healing and cleansing powers of honey, and because fermented honey (mead) preceded wine as an entheogen in the Aegean world; on some Minoan seals, goddesses were represented with bee faces (compare Merope and Melissa).
The concept of an immortality drink is attested in at least two Indo-European areas: Greek and Sanskrit. The Greek ἀμβροσία ("ambrosia") is semantically linked to the Sanskrit ("amṛta") as both words denote a drink or food that gods use to achieve immortality. The two words appear to be derived from the same Indo-European form *"ṇ-mṛ-tós", "un-dying" ("n-": the negative prefix from which the prefix "a-" in both Greek and Sanskrit is derived; "mṛ": zero grade of *"mer-", "to die"; and "-to-": adjectival suffix). A semantically similar etymology exists for nectar, the beverage of the gods (Greek: νέκταρ "néktar"), presumed to be a compound of the PIE roots "*nek-", "death", and "-*tar", "overcoming".
Lycurgus, king of Thrace, forbade the cult of Dionysus, whom he drove from Thrace, and attacked the gods' entourage when they celebrated the god. Among them was Ambrosia, who turned herself into a grapevine to hide from his wrath. Dionysus, enraged by the king's actions, drove him mad. In his fit of insanity he killed his son, whom he mistook for a stock of ivy, and then himself. | https://en.wikipedia.org/wiki?curid=1369 |
Ambrose
Aurelius Ambrosius (c. 340 – 397), better known in English as Ambrose, an archbishop of Milan, became one of the most influential ecclesiastical figures of the 4th century. He served as the Roman governor of Liguria and Emilia, headquartered in Milan, before popular acclamation propelled him into becoming Bishop of Milan in 374. Ambrose staunchly opposed Arianism.
Western Christianity identified Ambrose as one of its four traditional Doctors of the Church, and as the patron saint of Milan. He had notable influence on Augustine of Hippo (354-430).
Tradition credits Ambrose with promoting "antiphonal chant", a style of chanting in which one side of the choir responds alternately to the other, as well as with composing "Veni redemptor gentium", an Advent hymn.
Ambrose was born into a Roman Christian family about 340 and was raised in Gallia Belgica, the capital of which was Augusta Treverorum. His father is sometimes identified with Aurelius Ambrosius, a praetorian prefect of Gaul; but some scholars identify his father as an official named Uranius who received an imperial constitution dated 3 February 339 (addressed in a brief extract from one of the three emperors ruling in 339, Constantine II, Constantius II, or Constans, in the "Codex Theodosianus", book XI.5).
His mother was a woman of intellect and piety and a member of the Roman family "Aurelii Symmachi," and thus Ambrose was cousin of the orator Quintus Aurelius Symmachus. He was the youngest of three children, who included Marcellina and Satyrus (who is the subject of Ambrose's "De excessu fratris Satyri"), also venerated as saints. There is a legend that as an infant, a swarm of bees settled on his face while he lay in his cradle, leaving behind a drop of honey. His father considered this a sign of his future eloquence and honeyed tongue. For this reason, bees and beehives often appear in the saint's symbology.
About the year 354 Ambrosius, the father, died, whereupon the family moved to Rome. There he studied literature, law, and rhetoric. He then followed in his father's footsteps and entered public service. Praetorian Prefect Sextus Claudius Petronius Probus first gave him a place in the council and then in about 372 made him governor of Liguria and Emilia, with headquarters at Milan. In 286 Diocletian had moved the capital of the Western Roman Empire from Rome to Mediolanum (Milan).
Ambrose was the Governor of Aemilia-Liguria in northern Italy until 374, when he became the Bishop of Milan. Ambrose was a very popular political figure, and since he had been the Governor in the effective capital in the Roman West, he was a recognizable figure in the court of Valentinian I.
In the late 4th century there was a deep conflict in the diocese of Milan between the Nicene Church and Arians. In 374 the bishop of Milan, Auxentius, an Arian, died, and the Arians challenged the succession. Ambrose went to the church where the election was to take place, to prevent an uproar, which was probable in this crisis. His address was interrupted by a call, "Ambrose, bishop!", which was taken up by the whole assembly.
Ambrose was known to be Nicene Christian in belief, but also acceptable to Arians due to the charity shown in theological matters in this regard. At first he energetically refused the office, for which he was in no way prepared: Ambrose was neither baptized nor formally trained in theology. Upon his appointment, Ambrose fled to a colleague's home seeking to hide. Upon receiving a letter from the Emperor Gratian praising the appropriateness of Rome appointing individuals evidently worthy of holy positions, Ambrose's host gave him up. Within a week, he was baptized, ordained and duly consecrated bishop of Milan.
As bishop, he immediately adopted an ascetic lifestyle, gave his money to the poor, and donated all of his land, making provision only for his sister Marcellina (who had become a nun). This raised his popularity even further, giving him considerable political leverage over even the emperor. Upon the unexpected appointment of Ambrose to the episcopate, his brother Satyrus resigned a prefecture in order to move to Milan, where he took over managing the diocese's temporal affairs.
In 383 Gratian was assassinated at Lyon, France, and Paulinus of Nola, who had served as governor of Campania, went to Milan to attend the school of Ambrose.
Ambrose studied theology with Simplician, a presbyter of Rome. Using to his advantage his excellent knowledge of Greek, which was then rare in the West, he studied the Old Testament and Greek authors like Philo, Origen, Athanasius, and Basil of Caesarea, with whom he was also exchanging letters. He applied this knowledge as preacher, concentrating especially on exegesis of the Old Testament, and his rhetorical abilities impressed Augustine of Hippo, who hitherto had thought poorly of Christian preachers.
In the confrontation with Arians, Ambrose sought to theologically refute their propositions, which were contrary to the Nicene creed and thus to the officially defined orthodoxy. The Arians appealed to many high level leaders and clergy in both the Western and Eastern empires. Although the western Emperor Gratian supported orthodoxy, the younger Valentinian II, who became his colleague in the Empire, adhered to the Arian creed. Ambrose did not sway the young prince's position. In the East, Emperor Theodosius I likewise professed the Nicene creed; but there were many adherents of Arianism throughout his dominions, especially among the higher clergy.
In this contested state of religious opinion, two leaders of the Arians, bishops Palladius of Ratiaria and Secundianus of Singidunum, confident of numbers, prevailed upon Gratian to call a general council from all parts of the empire. This request appeared so equitable that he complied without hesitation. However, Ambrose feared the consequences and prevailed upon the emperor to have the matter determined by a council of the Western bishops. Accordingly, a synod composed of thirty-two bishops was held at Aquileia in the year 381. Ambrose was elected president and Palladius, being called upon to defend his opinions, declined. A vote was then taken and Palladius and his associate Secundianus were deposed from their episcopal offices.
Nevertheless, the increasing strength of the Arians proved a formidable task for Ambrose. In 385 or 386 the emperor and his mother Justina, along with a considerable number of clergy and laity, especially military, professed Arianism. They demanded two churches in Milan, one in the city (the Basilica of the Apostles), the other in the suburbs (St Victor's), be allocated to the Arians. Ambrose refused and was required to answer for his conduct before the council. He went, his eloquence in defense of the Church reportedly overawing the ministers of Valentinian, so he was permitted to retire without making the surrender of the churches. The day following, when he was performing divine service in the basilica, the prefect of the city came to persuade him to give up at least the Portian basilica in the suburbs. As he still refused, certain deans or officers of the court were sent to take possession of the Portian basilica, by hanging up in it imperial escutcheons to prepare for the arrival of the emperor and his mother at the ensuing festival of Easter.
In spite of Imperial opposition, Ambrose declared, "If you demand my person, I am ready to submit: carry me to prison or to death, I will not resist; but I will never betray the church of Christ. I will not call upon the people to succour me; I will die at the foot of the altar rather than desert it. The tumult of the people I will not encourage: but God alone can appease it."
In 386 Justina and Valentinian received the Arian bishop Auxentius the younger, and Ambrose was again ordered to hand over a church in Milan for Arian usage. Ambrose and his congregation barricaded themselves inside the church, and the imperial order was rescinded.
The imperial court was displeased with the religious principles of Ambrose; however, his aid was soon solicited by the Emperor. When Magnus Maximus usurped the supreme power in Gaul and was meditating a descent upon Italy, Valentinian sent Ambrose to dissuade him from the undertaking, and the embassy was successful.
A second, later embassy was unsuccessful. The enemy entered Italy and Milan was taken. Justina and her son fled, but Ambrose remained at his post and did good service to many of the sufferers by causing the plate of the church to be melted for their relief.
Theodosius I, the emperor of the East, espoused the cause of Justina, and regained the kingdom. Theodosius was excommunicated by Ambrose for the massacre of 7,000 people at Thessalonica in 390, after the murder of the Roman governor there by rioters. Ambrose told Theodosius to imitate David in his repentance as he had imitated him in guilt, and he readmitted the emperor to the Eucharist only after several months of penance. This shows the strong position of a bishop in the western part of the empire, even when facing a strong emperor. The controversy of John Chrysostom with a much weaker emperor a few years later in Constantinople led to a crushing defeat of the bishop.
In 392, after the death of Valentinian II and the fall of Eugenius, Ambrose supplicated the emperor for the pardon of those who had supported Eugenius after Theodosius was eventually victorious.
In his treatise on Abraham, Ambrose warns against intermarriage with pagans, Jews, or heretics. In 388, Emperor Theodosius the Great was informed that a crowd of Christians, led by their bishop, had destroyed the synagogue at Callinicum on the Euphrates. He ordered the synagogue rebuilt at the expense of the bishop, but Ambrose persuaded Theodosius to retreat from this position. He wrote to the Emperor, pointing out that he was thereby "exposing the bishop to the danger of either acting against the truth or of death"; in the letter "the reasons given for the imperial rescript are met, especially by the plea that the Jews had burnt many churches". Ambrose, referring to a prior incident where Magnus Maximus issued an edict censuring Christians in Rome for burning down a Jewish synagogue, warned Theodosius that the people in turn exclaimed "the emperor has become a Jew", implying that if Theodosius attempted to apply the law to protect his Jewish subjects he'd be viewed similarly. In the course of the letter Ambrose speaks of the clemency that the emperor had shown with regard to the many houses of wealthy people and churches that had been destroyed by unruly mobs, with many then still not restored and then adds: "There is, then, no adequate cause for such a commotion, that the people should be so severely punished for the burning of a building, and much less since it is the burning of a synagogue, a home of unbelief, a house of impiety, a receptacle of folly, which God Himself has condemned. For thus we read, where the Lord our God speaks by the mouth of the prophet Jeremiah: 'And I will do to this house, which is called by My Name, wherein ye trust, and to the place which I gave to you and to your fathers, as I have done to Shiloh, and I will cast you forth from My sight, as I cast forth your brethren, the whole seed of Ephraim. 
And do not thou pray for that people, and do not thou ask mercy for them, and do not come near Me on their behalf, for I will not hear thee. Or seest thou not what they do in the cities of Judah?' God forbids intercession to be made for those." Yet, Ambrose did not oppose punishing those who were directly responsible for destroying the synagogue.
In his exposition of Psalm 1, Ambrose says: "Virtues without faith are leaves, flourishing in appearance, but unproductive. How many pagans have mercy and sobriety but no fruit, because they do not attain their purpose! The leaves speedily fall at the wind's breath. Some Jews exhibit purity of life and much diligence and love of study, but bear no fruit and live like leaves."
Under his influence, emperors Gratian, Valentinian II and Theodosius I carried on a persecution of paganism; Theodosius issued the 391 "Theodosian decrees," which with increasing intensity outlawed pagan practices. The Altar of Victory was removed by Gratian. Ambrose prevailed upon Gratian, Valentinian and Theodosius to reject requests to restore the altar.
In April 393 Arbogast, "magister militum" of the West and his puppet Emperor Eugenius, marched into Italy to consolidate their position in regard to Theodosius I and his son, Honorius, whom Theodosius had appointed Augustus to govern the western portion of the empire. Arbogast and Eugenius courted Ambrose's support by very obliging letters; but before they arrived at Milan, he had retired to Bologna, where he assisted at the translation of the relics of Saints Vitalis and Agricola. From there he went to Florence, where he remained until Eugenius withdrew from Milan to meet Theodosius in the Battle of the Frigidus in early September 394.
Soon after acquiring the undisputed possession of the Roman Empire, Theodosius died at Milan in 395, and two years later (4 April 397) Ambrose also died. He was succeeded as bishop of Milan by Simplician. Ambrose's body may still be viewed in the church of Sant'Ambrogio in Milan, where it has been continuously venerated – along with the bodies identified in his time as being those of Saints Gervase and Protase.
Many circumstances in the history of Ambrose are characteristic of the general spirit of the times. The chief causes of his victory over his opponents were his great popularity and the reverence paid to the episcopal character at that period. He used several indirect means to obtain and support his authority with the people.
It was his custom to comment severely in his preaching on the public characters of his times; and he introduced popular reforms in the order and manner of public worship. It is alleged, too, that at a time when the influence of Ambrose required vigorous support, he was admonished in a dream to search for, and found under the pavement of the church, the remains of two martyrs, Gervasius and Protasius. The saints, although they would have had to have been hundreds of years old, looked as if they had just died. The applause of the people was mingled with the derision of the court party.
Ambrose joins Augustine, Jerome, and Gregory the Great as one of the Latin Doctors of the Church. Theologians compare him with Hilary, who they claim fell short of Ambrose's administrative excellence but demonstrated greater theological ability. He succeeded as a theologian despite his juridical training and his comparatively late handling of Biblical and doctrinal subjects.
Ambrose's intense episcopal consciousness furthered the growing doctrine of the Church and its sacerdotal ministry, while the prevalent asceticism of the day, continuing the Stoic and Ciceronian training of his youth, enabled him to promulgate a lofty standard of Christian ethics. Thus we have the "De officiis ministrorum", "De viduis", "De virginitate" and "De paenitentia".
Ambrose displayed a kind of liturgical flexibility that kept in mind that liturgy was a tool to serve people in worshiping God, and ought not to become a rigid entity that is invariable from place to place. His advice to Augustine of Hippo on this point was to follow local liturgical custom. "When I am at Rome, I fast on a Saturday; when I am at Milan, I do not. Follow the custom of the church where you are." Thus Ambrose refused to be drawn into a false conflict over which particular local church had the "right" liturgical form where there was no substantial problem. His advice has remained in the English language as the saying, "When in Rome, do as the Romans do."
One interpretation of Ambrose's writings is that he was a Christian universalist. It has been noted that Ambrose's theology was significantly influenced by that of Origen and Didymus the Blind, two other early Christian universalists. One quotation cited in favor of this belief is:
One could interpret this passage as being another example of the mainstream Christian belief in a general resurrection (that both those in heaven and in hell undergo a bodily resurrection), or an allusion to purgatory (that some destined for heaven must first undergo a phase of purification). Several other works by Ambrose clearly teach the mainstream view of salvation. For example: "The Jews feared to believe in manhood taken up into God, "and therefore have lost the grace of redemption", because they reject that on which salvation depends."
He was also interested in the condition of contemporary Italian society. Ambrose considered the poor not a distinct group of outsiders, but a part of the united, solidary people. Giving to the poor was not to be considered an act of generosity towards the fringes of society but a repayment of resources that God had originally bestowed on everyone equally and that the rich had usurped.
The theological treatises of Ambrose of Milan would come to influence Popes Damasus, Siricius and Leo XIII. Central to Ambrose is the virginity of Mary and her role as Mother of God.
Ambrose viewed celibacy as superior to marriage and saw Mary as the model of virginity.
In matters of exegesis he is, like Hilary, an Alexandrian. In dogma he follows Basil of Caesarea and other Greek authors, but nevertheless gives a distinctly Western cast to the speculations of which he treats. This is particularly manifest in the weightier emphasis which he lays upon human sin and divine grace, and in the place which he assigns to faith in the individual Christian life.
Though it is traditionally attributed to him, Ambrose is not actually known to have composed any of the repertory of Ambrosian chant, also known simply as "antiphonal chant", a method of chanting in which one side of the choir alternately responds to the other. (The later pope Gregory I the Great is likewise not known to have composed any Gregorian chant, the plainsong or "Romish chant".) However, Ambrosian chant was named in his honor due to his contributions to the music of the Church; he is credited with introducing hymnody from the Eastern Church into the West.
Catching the impulse from Hilary of Arles and confirmed in it by the success of Arian psalmody, Ambrose composed several original hymns as well, four of which still survive, along with music which may not have changed too much from the original melodies. Each of these hymns has eight four-line stanzas and is written in strict iambic tetrameter (that is, four iambs of two syllables each, eight syllables per line). Marked by dignified simplicity, they served as a fruitful model for later times.
In his writings, Ambrose refers only to the performance of psalms, in which solo singing of psalm verses alternated with a congregational refrain called an "antiphon".
Saint Ambrose was also traditionally credited with composing the hymn "Te Deum", which he is said to have composed when he baptised Saint Augustine of Hippo, his celebrated convert.
Ambrose was Bishop of Milan at the time of Augustine's conversion, and is mentioned in Augustine's "Confessions". It is commonly understood in the Christian Tradition that Ambrose baptized Augustine.
In a passage of Augustine's "Confessions" in which Augustine wonders why he could not share his burden with Ambrose, he comments: "Ambrose himself I esteemed a happy man, as the world counted happiness, because great personages held him in honor. Only his celibacy appeared to me a painful burden."
In this same passage of Augustine's "Confessions" is an anecdote which bears on the history of reading:
This is a celebrated passage in modern scholarly discussion. The practice of reading to oneself without vocalizing the text was less common in antiquity than it has since become. In a culture that set a high value on oratory and public performances of all kinds, in which the production of books was very labor-intensive, the majority of the population was illiterate, and where those with the leisure to enjoy literary works also had slaves to read for them, written texts were more likely to be seen as scripts for recitation than as vehicles of silent reflection. However, there is also evidence that silent reading did occur in antiquity and that it was not generally regarded as unusual.
Latin
English translations
Several of Ambrose's works have recently been published in the bilingual Latin-German "Fontes Christiani" series (currently edited by Brepols).
Several religious brotherhoods which have sprung up in and around Milan at various times since the 14th century have been called Ambrosians. Their connection to Ambrose is tenuous. | https://en.wikipedia.org/wiki?curid=1370
Amber
Amber is fossilized tree resin, which has been appreciated for its color and natural beauty since Neolithic times. Much valued from antiquity to the present as a gemstone, amber is made into a variety of decorative objects. Amber is used in jewelry. It has also been used as a healing agent in folk medicine.
There are five classes of amber, defined on the basis of their chemical constituents. Because it originates as a soft, sticky tree resin, amber sometimes contains animal and plant material as inclusions. Amber occurring in coal seams is also called resinite, and the term "ambrite" is applied to that found specifically within New Zealand coal seams.
The English word "amber" derives from Arabic (cognate with Middle Persian "ambar") via Middle Latin "ambar" and Middle French "ambre". The word was adopted in Middle English in the 14th century as referring to what is now known as "ambergris" ("ambre gris" or "grey amber"), a solid waxy substance derived from the sperm whale.
In the Romance languages, the sense of the word had come to be extended to Baltic amber (fossil resin) from as early as the late 13th century. At first called white or yellow amber ("ambre jaune"), this meaning was adopted in English by the early 15th century. As the use of ambergris waned, this became the main sense of the word.
The two substances ("yellow amber" and "grey amber") conceivably became associated or confused because they both were found washed up on beaches. Ambergris is less dense than water and floats, whereas amber is too dense to float, though less dense than stone.
The classical names for amber, Latin "electrum" and Ancient Greek ("ēlektron"), are connected to a term ἠλέκτωρ ("ēlektōr") meaning "beaming Sun". According to myth, when Phaëton son of Helios (the Sun) was killed, his mourning sisters became poplar trees, and their tears became "elektron", amber. The word "elektron" gave rise to the words "electric, electricity", and their relatives because of amber's ability to bear a charge of static electricity.
Theophrastus discussed amber in the 4th century BC, as did Pytheas (c. 330 BC), whose work "On the Ocean" is lost, but was referenced by Pliny the Elder (23 to 79 AD), according to whose "The Natural History" (in what is also the earliest known mention of the name "Germania"):
Earlier, Pliny says that Pytheas refers to a large island, three days' sail from the Scythian coast and called Balcia by Xenophon of Lampsacus (author of a fanciful travel book in Greek), as "Basilia", a name generally equated with "Abalus". Given the presence of amber, the island could have been Heligoland, Zealand, the shores of the Bay of Gdańsk, the Sambia Peninsula or the Curonian Lagoon, which were historically the richest sources of amber in northern Europe.
It is assumed that there were well-established trade routes for amber connecting the Baltic with the Mediterranean (known as the "Amber Road"). Pliny states explicitly that the Germans exported amber to Pannonia, from where the Veneti distributed it onwards.
The ancient Italic peoples of southern Italy used to work amber; the National Archaeological Museum of Siritide (Museo Archeologico Nazionale della Siritide) at Policoro in the province of Matera (Basilicata) displays important surviving examples.
Amber used in antiquity as at Mycenae and in the prehistory of the Mediterranean comes from deposits of Sicily.
Pliny also cites the opinion of Nicias (c. 470–413 BC). Besides the fanciful explanations according to which amber is "produced by the Sun", Pliny cites opinions that are well aware of its origin in tree resin, citing the native Latin name of "succinum" ("sūcinum", from "sucus", "juice"). In Book 37, section XI of "Natural History", Pliny wrote:
He also states that amber is also found in Egypt and in India, and he even refers to the electrostatic properties of amber, by saying that "in Syria the women make the whorls of their spindles of this substance, and give it the name of "harpax" [from ἁρπάζω, "to drag"] from the circumstance that it attracts leaves towards it, chaff, and the light fringe of tissues".
Pliny says that the German name of amber was "glæsum", "for which reason the Romans, when Germanicus Caesar commanded the fleet in those parts, gave to one of these islands the name of Glæsaria, which by the barbarians was known as Austeravia". This is confirmed by the recorded Old High German word "glas" and by the Old English word "glær" for "amber" (compare "glass").
In Middle Low German, amber was known as "berne-, barn-, börnstēn" (with etymological roots related to "burn" and to "stone"). The Low German term became dominant also in High German by the 18th century, thus modern German "Bernstein" besides Dutch "barnsteen".
In the Baltic languages, the Lithuanian term for amber is "gintaras" and the Latvian "dzintars". These words, and the Slavic "jantar" and Hungarian "gyanta" ('resin'), are thought to originate from Phoenician "jainitar" ("sea-resin").
Early in the nineteenth century, the first reports of amber found in North America came from discoveries in New Jersey along Crosswicks Creek near Trenton, at Camden, and near Woodbury.
Amber is heterogeneous in composition, but consists of several resinous bodies more or less soluble in alcohol, ether and chloroform, associated with an insoluble bituminous substance. Amber is a macromolecule formed by free radical polymerization of several precursors in the labdane family, e.g. communic acid, communol, and biformene. These labdanes are diterpenes (C20H32) and trienes, equipping the organic skeleton with three alkene groups for polymerization. As amber matures over the years, more polymerization takes place, as well as isomerization reactions, crosslinking and cyclization.
Heated above , amber decomposes, yielding an oil of amber, and leaves a black residue which is known as "amber colophony", or "amber pitch"; when dissolved in oil of turpentine or in linseed oil this forms "amber varnish" or "amber lac".
Molecular polymerization, resulting from high pressures and temperatures produced by overlying sediment, transforms the resin first into copal. Sustained heat and pressure drives off terpenes and results in the formation of amber.
For this to happen, the resin must be resistant to decay. Many trees produce resin, but in the majority of cases this deposit is broken down by physical and biological processes. Exposure to sunlight, rain, microorganisms (such as bacteria and fungi), and extreme temperatures tends to disintegrate the resin. For the resin to survive long enough to become amber, it must be resistant to such forces or be produced under conditions that exclude them.
Fossil resins from Europe fall into two categories, the famous Baltic ambers and another that resembles the "Agathis" group. Fossil resins from the Americas and Africa are closely related to the modern genus "Hymenaea", while Baltic ambers are thought to be fossil resins from family Sciadopityaceae plants that once lived in north Europe.
Most amber has a hardness between 2.0 and 2.5 on the Mohs scale, a refractive index of 1.5–1.6, a specific gravity between 1.06 and 1.10, and a melting point of 250–300 °C.
The abnormal development of resin in living trees ("succinosis") can result in the formation of amber. Impurities are quite often present, especially when the resin dropped onto the ground, so the material may be useless except for varnish-making. Such impure amber is called "firniss".
Such inclusion of other substances can cause amber to have an unexpected color. Pyrites may give a bluish color. "Bony amber" owes its cloudy opacity to numerous tiny bubbles inside the resin. However, so-called "black amber" is really only a kind of jet.
In darkly clouded and even opaque amber, inclusions can be imaged using high-energy, high-contrast, high-resolution X-rays.
Amber is globally distributed, mainly in rocks of Cretaceous age or younger.
Historically, the coast west of Königsberg in Prussia was the world's leading source of amber. The first mentions of amber deposits here date back to the 12th century. About 90% of the world's extractable amber is still located in that area, which became the Kaliningrad Oblast of Russia in 1946.
Pieces of amber torn from the seafloor are cast up by the waves and collected by hand, dredging, or diving. Elsewhere, amber is mined, both in open works and underground galleries. Nodules of "blue earth" must then be removed and an opaque crust cleaned off, which can be done in revolving barrels containing sand and water. Erosion removes this crust from sea-worn amber.
Caribbean amber, especially Dominican blue amber, is mined through bell pitting, which is dangerous due to the risk of tunnel collapse.
The Vienna amber factories, which use pale amber to manufacture pipes and other smoking tools, turn it on a lathe and polish it with whitening and water or with rotten stone and oil. The final luster is given by friction with flannel.
When gradually heated in an oil-bath, amber becomes soft and flexible. Two pieces of amber may be united by smearing the surfaces with linseed oil, heating them, and then pressing them together while hot. Cloudy amber may be clarified in an oil-bath, as the oil fills the numerous pores to which the turbidity is due.
Small fragments, formerly thrown away or used only for varnish, are now used on a large scale in the formation of "ambroid" or "pressed amber". The pieces are carefully heated with exclusion of air and then compressed into a uniform mass by intense hydraulic pressure, the softened amber being forced through holes in a metal plate. The product is extensively used for the production of cheap jewelry and articles for smoking. This pressed amber yields brilliant interference colors in polarized light.
Amber has often been imitated by other resins like copal and kauri gum, as well as by celluloid and even glass. Baltic amber is sometimes colored artificially, but also called "true amber".
Amber occurs in a range of different colors. As well as the usual yellow-orange-brown that is associated with the color "amber", amber itself can range from a whitish color through a pale lemon yellow, to brown and almost black. Other uncommon colors include red amber (sometimes known as "cherry amber"), green amber, and even blue amber, which is rare and highly sought after.
Yellow amber is a hard fossil resin from evergreen trees, and despite the name it can be translucent, yellow, orange, or brown colored. Known to the Iranians by the Pahlavi compound word kah-ruba (from kah "straw" plus rubay "attract, snatch", referring to its electrical properties), which entered Arabic as kahraba' or kahraba (which later became the Arabic word for electricity, كهرباء "kahrabā'"), it too was called amber in Europe (Old French and Middle English ambre). Found along the southern shore of the Baltic Sea, yellow amber reached the Middle East and western Europe via trade. Its coastal acquisition may have been one reason yellow amber came to be designated by the same term as ambergris. Moreover, like ambergris, the resin could be burned as an incense. The resin's most popular use was, however, for ornamentation—easily cut and polished, it could be transformed into beautiful jewelry.
Much of the most highly prized amber is transparent, in contrast to the very common cloudy amber and opaque amber. Opaque amber contains numerous minute bubbles. This kind of amber is known as "bony amber".
Although all Dominican amber is fluorescent, the rarest Dominican amber is blue amber. It turns blue in natural sunlight and any other partially or wholly ultraviolet light source. In long-wave UV light it has a very strong reflection, almost white. Only about is found per year, which makes it valuable and expensive.
Sometimes amber retains the form of drops and stalactites, just as it exuded from the ducts and receptacles of the injured trees. It is thought that, in addition to exuding onto the surface of the tree, amber resin also originally flowed into hollow cavities or cracks within trees, thereby leading to the development of large lumps of amber of irregular form.
Amber can be classified into several forms. Most fundamentally, there are two types of plant resin with the potential for fossilization. Terpenoids, produced by conifers and angiosperms, consist of ring structures formed of isoprene (C5H8) units. Phenolic resins are today only produced by angiosperms, and tend to serve functional uses. The extinct medullosans produced a third type of resin, which is often found as amber within their veins. The composition of resins is highly variable; each species produces a unique blend of chemicals which can be identified by the use of pyrolysis–gas chromatography–mass spectrometry. The overall chemical and structural composition is used to divide ambers into five classes. There is also a separate classification of amber gemstones, according to the way of production.
This class is by far the most abundant. It comprises labdatriene carboxylic acids such as communic or ozic acids. It is further split into three sub-classes. Classes Ia and Ib utilize regular labdanoid diterpenes (e.g. communic acid, communol, biformenes), while Ic uses "enantio" labdanoids (ozic acid, ozol, "enantio" biformenes).
Class Ia includes "Succinite" (= 'normal' Baltic amber) and "Glessite". They have a communic acid base, and they also include much succinic acid.
Baltic amber yields on dry distillation succinic acid, the proportion varying from about 3% to 8%, and being greatest in the pale opaque or "bony" varieties. The aromatic and irritating fumes emitted by burning amber are mainly due to this acid. Baltic amber is distinguished by its yield of succinic acid, hence the name "succinite". Succinite has a hardness between 2 and 3, which is rather greater than that of many other fossil resins. Its specific gravity varies from 1.05 to 1.10. It can be distinguished from other ambers via IR spectroscopy due to a specific carbonyl absorption peak. IR spectroscopy can detect the relative age of an amber sample. Succinic acid may not be an original component of amber, but rather a degradation product of abietic acid.
Like class Ia ambers, these are based on communic acid; however, they lack succinic acid.
This class is mainly based on "enantio"-labdatrienonic acids, such as ozic and zanzibaric acids. Its most familiar representative is Dominican amber.
Dominican amber differentiates itself from Baltic amber by being mostly transparent and often containing a higher number of fossil inclusions. This has enabled the detailed reconstruction of the ecosystem of a long-vanished tropical forest. Resin from the extinct species "Hymenaea protera" is the source of Dominican amber and probably of most amber found in the tropics. It is not "succinite" but "retinite".
These ambers are formed from resins with a sesquiterpenoid base, such as cadinene.
These ambers are polystyrenes.
Class IV is something of a wastebasket; its ambers are not polymerized, but mainly consist of cedrene-based sesquiterpenoids.
Class V resins are considered to be produced by a pine or pine relative. They comprise a mixture of diterpinoid resins and "n"-alkyl compounds. Their main variety is "Highgate copalite".
The oldest amber recovered dates to the Upper Carboniferous period. Its chemical composition makes it difficult to match the amber to its producers – it is most similar to the resins produced by flowering plants; however, there are no flowering plant fossils known from before the Cretaceous, and they were not common until the Late Cretaceous. Amber becomes abundant long after the Carboniferous, in the Early Cretaceous, when it is found in association with insects. The oldest amber with arthropod inclusions comes from the Levant, from Lebanon and Jordan. This amber, roughly 125–135 million years old, is considered of high scientific value, providing evidence of some of the oldest sampled ecosystems.
In Lebanon, more than 450 outcrops of Lower Cretaceous amber were discovered by Dany Azar, a Lebanese paleontologist and entomologist. Among these outcrops, 20 have yielded biological inclusions comprising the oldest representatives of several recent families of terrestrial arthropods. Even older, Jurassic amber has been found recently in Lebanon as well. Many remarkable insects and spiders were recently discovered in the amber of Jordan including the oldest zorapterans, clerid beetles, umenocoleid roaches, and achiliid planthoppers.
Baltic amber or succinite (historically documented as Prussian amber) is found as irregular nodules in marine glauconitic sand, known as "blue earth", occurring in the Lower Oligocene strata of Sambia in Prussia (in historical sources also referred to as "Glaesaria"). After 1945, this territory around Königsberg was turned into Kaliningrad Oblast, Russia, where amber is now systematically mined.
It appears, however, to have been partly derived from older Eocene deposits and it occurs also as a derivative phase in later formations, such as glacial drift. Relics of an abundant flora occur as inclusions trapped within the amber while the resin was yet fresh, suggesting relations with the flora of Eastern Asia and the southern part of North America. Heinrich Göppert named the common amber-yielding pine of the Baltic forests "Pinites succiniter", but as the wood does not seem to differ from that of the existing genus it has been also called "Pinus succinifera". It is improbable, however, that the production of amber was limited to a single species; and indeed a large number of conifers belonging to different genera are represented in the amber-flora.
Amber is a unique preservational mode, preserving otherwise unfossilizable parts of organisms; as such it is helpful in the reconstruction of ecosystems as well as organisms; the chemical composition of the resin, however, is of limited utility in reconstructing the phylogenetic affinity of the resin producer.
Amber sometimes contains animals or plant matter that became caught in the resin as it was secreted. Insects, spiders and even their webs, annelids, frogs, crustaceans, bacteria and amoebae, marine microfossils, wood, flowers and fruit, hair, feathers and other small organisms have been recovered in Cretaceous ambers. The oldest amber to bear fossils (mites) is from the Carnian (Triassic) of north-eastern Italy.
The preservation of prehistoric organisms in amber forms a key plot point in Michael Crichton's 1990 novel "Jurassic Park" and the 1993 movie adaptation by Steven Spielberg. In the story, scientists are able to extract the preserved blood of dinosaurs from prehistoric mosquitoes trapped in amber, from which they genetically clone living dinosaurs. Scientifically this is as yet impossible, since no amber with fossilized mosquitoes has ever yielded preserved blood. Amber is, however, conducive to preserving DNA, since it dehydrates and thus stabilizes organisms trapped inside. One projection in 1999 estimated that DNA trapped in amber could last up to 100 million years, far beyond most estimates of around 1 million years in the most ideal conditions, although a later 2013 study was unable to extract DNA from insects trapped in much more recent Holocene copal.
Amber has been used since prehistory (Solutrean) in the manufacture of jewelry and ornaments, and also in folk medicine.
Amber has been used as jewelry since the Stone Age, from 13,000 years ago. Amber ornaments have been found in Mycenaean tombs and elsewhere across Europe. To this day it is used in the manufacture of smoking and glassblowing mouthpieces. Amber's place in culture and tradition lends it a tourism value; Palanga Amber Museum is dedicated to the fossilized resin.
Amber has long been used in folk medicine for its purported healing properties. Amber and extracts were used from the time of Hippocrates in ancient Greece for a wide variety of treatments through the Middle Ages and up until the early twentieth century. Traditional Chinese medicine uses amber to "tranquilize the mind".
Amber necklaces are a traditional European remedy for colic or teething pain due to the purported analgesic properties of succinic acid, although there is no evidence that this is an effective remedy or delivery method. The American Academy of Pediatrics and the FDA have warned strongly against their use, as they present both a choking and a strangulation hazard.
In ancient China, it was customary to burn amber during large festivities. If amber is heated under the right conditions, oil of amber is produced, and in past times this was combined carefully with nitric acid to create "artificial musk" – a resin with a peculiar musky odor. Although amber gives off a characteristic "pinewood" fragrance when burned, modern products such as perfume do not normally use actual amber, because fossilized amber produces very little scent. In perfumery, scents referred to as "amber" are often created and patented to emulate the opulent golden warmth of the fossil.
The modern name for amber is thought to come from the Arabic word, ambar, meaning ambergris. Ambergris is the waxy aromatic substance created in the intestines of sperm whales and was used in making perfumes both in ancient times as well as modern.
The scent of amber was originally derived from emulating the scent of ambergris and/or the plant resin labdanum, but due to the endangered species status of the sperm whale the scent of amber is now largely derived from labdanum. The term "amber" is loosely used to describe a scent that is warm, musky, rich and honey-like, and also somewhat earthy. It can be synthetically created or derived from natural resins. When derived from natural resins it is most often created out of labdanum. Benzoin is usually part of the recipe. Vanilla and cloves are sometimes used to enhance the aroma.
"Amber" perfumes may be created using combinations of labdanum, benzoin resin, copal (itself a type of tree resin used in incense manufacture), vanilla, Dammara resin and/or synthetic materials.
Young resins used as imitations:
Plastics used as imitations: | https://en.wikipedia.org/wiki?curid=1372
Alphorn
The alphorn or alpenhorn or alpine horn is a labrophone, consisting of a straight several-meter-long wooden natural horn of conical bore, with a wooden cup-shaped mouthpiece. It is used by mountain dwellers in the Swiss Alps, Austrian Alps, Bavarian Alps in Germany, French Alps, and elsewhere. Similar wooden horns were used for communication in most mountainous regions of Europe, from the Alps to the Carpathians. Alphorns are today used as musical instruments.
For a long time, scholars believed that the alphorn had been derived from the Roman-Etruscan lituus, because of their resemblance in shape, and because of the word "liti", meaning Alphorn in the dialect of Obwalden. There is no documented evidence for this theory, however, and the word "liti" was probably borrowed from 16th–18th century writings in Latin, where the word "lituus" could describe various wind instruments, such as the horn, the crumhorn, or the cornett. Swiss naturalist Conrad Gesner used the words "lituum alpinum" for the first known detailed description of the alphorn in his "De raris et admirandis herbis" in 1555. The oldest known document using the German word "Alphorn" is a page from a 1527 account book from the former Cistercian abbey St. Urban near Pfaffnau mentioning the payment of two Batzen for an itinerant alphorn player from the Valais.
17th–19th century collections of alpine myths and legends suggest that alphorn-like instruments had frequently been used as signal instruments in village communities since medieval times or earlier, sometimes substituting for the lack of church bells. Surviving artifacts, dating back to as far as ca. AD 1400, include wooden labrophones in their stretched form, like the alphorn, or coiled versions, such as the "Büchel" and the "Allgäuisches Waldhorn" or "Ackerhorn". The alphorn's exact origins remain indeterminate, and the ubiquity of horn-like signal instruments in valleys throughout Europe may indicate a long history of cross influences regarding their construction and usage.
The alphorn is carved from solid softwood, generally spruce but sometimes pine. In former times the alphorn maker would find a tree bent at the base in the shape of an alphorn, but modern makers piece the wood together at the base. A cup-shaped mouthpiece carved out of a block of hard wood is added and the instrument is complete.
An alphorn made at Rigi-Kulm, Schwyz, and now in the Victoria and Albert Museum, measures in length and has a straight tube. The Swiss alphorn varies in shape according to the locality, being curved near the bell in the Bernese Oberland. Michael Praetorius mentions an alphorn-like instrument under the name of Hölzern Trummet (wooden trumpet) in "Syntagma Musicum" (Wittenberg, 1615–1619; Pl. VIII).
The alphorn has no lateral openings and therefore gives the pure natural harmonic series of the open pipe. The notes of the natural harmonic series overlap with, but do not exactly correspond to, the notes of the familiar chromatic scale in standard Western equal temperament. Most prominently within the alphorn's range, the 7th and 11th harmonics are particularly noticeable, because they fall between adjacent notes in the chromatic scale.
Accomplished alphornists often command a range of nearly three octaves, consisting of the 2nd through the 16th notes of the harmonic series. The availability of the higher tones is due in part to the relatively small diameter of the bore of the mouthpiece and tubing in relation to the overall length of the horn.
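The deviation of each natural harmonic from twelve-tone equal temperament can be computed directly from the harmonic number; the following is a minimal Python sketch (the function name `harmonic_cents` is ours, for illustration):

```python
import math

def harmonic_cents(n: int) -> float:
    """Deviation, in cents, of the n-th natural harmonic from the
    nearest equal-tempered pitch above the fundamental."""
    cents = 1200 * math.log2(n)         # interval above the fundamental
    nearest = round(cents / 100) * 100  # nearest equal-tempered semitone
    return cents - nearest

# Scan the alphornist's range (2nd through 16th harmonics):
for n in range(2, 17):
    print(f"harmonic {n:2d}: {harmonic_cents(n):+6.1f} cents")
```

Under these assumptions, the 7th harmonic comes out roughly 31 cents flat of the nearest semitone, and the 11th nearly 49 cents away, almost exactly midway between two chromatic notes – consistent with these being the alphorn's most audibly "out-of-tune" tones.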
The well-known "Ranz des Vaches" (score; audio) is a traditional Swiss melody often heard on the alphorn. The song describes the time of bringing the cows to the high country at cheese making time. Rossini introduced the "Ranz des Vaches" into his masterpiece "William Tell," along with many other delightful melodies scattered throughout the opera in vocal and instrumental parts that are well-suited to the alphorn. Brahms wrote to Clara Schumann that the inspiration for the dramatic entry of the horn in the introduction to the last movement of his First Symphony was an alphorn melody he heard while vacationing in the Rigi area of Switzerland. For Clara's birthday in 1868 Brahms sent her a greeting that was to be sung with the melody.
Among music composed for the alphorn: | https://en.wikipedia.org/wiki?curid=1374 |
Army
An army (from Latin "arma" "arms, weapons" via Old French "armée", "armed" [feminine]), ground force or land force is a fighting force that fights primarily on land. In the broadest sense, it is the land-based military branch, service branch or armed service of a nation or state. It may also include aviation assets by possessing an army aviation component. Within a national military force, the word army may also mean a field army.
In some countries, such as France and China, the term "army", especially in its plural form "armies", has the broader meaning of armed forces as a whole, while retaining the colloquial sense of land forces. To differentiate the colloquial army from the formal concept of military force, the term is qualified, for example in France the land force is called "Armée de terre", meaning Land Army, and the air force is called "Armée de l'Air", meaning Air Army. The naval force, although not using the term "army", is also included in the broad sense of the term "armies" — thus the French Navy is an integral component of the collective French Armies (French Armed Forces) under the Ministry of the Armies. A similar pattern is seen in China, with the People's Liberation Army (PLA) being the overall military, the "actual army" being the PLA Ground Force, and so forth for the PLA Air Force, the PLA Navy, and other branches.
The current largest army in the world, by number of active troops, is the PLA Ground Force of China with 1,600,000 active troops and 510,000 reserve personnel followed by the Indian Army with 1,237,117 active troops and 960,000 reserve personnel.
By convention, irregular military is understood in contrast to regular armies which grew slowly from personal bodyguards or elite militia. Regular in this case refers to standardized doctrines, uniforms, organizations, etc. Regular military can also refer to full-time status (standing army), versus reserve or part-time personnel. Other distinctions may separate statutory forces (established under laws such as the National Defence Act), from de facto "non-statutory" forces such as some guerrilla and revolutionary armies. Armies may also be expeditionary (designed for overseas or international deployment) or fencible (designed for – or restricted to – homeland defence).
India's armies were among the first in the world. The first recorded battle, the Battle of the Ten Kings, happened when a Hindu Aryan king named Sudas defeated an alliance of ten kings and their supportive chieftains. During the Iron Age, the Maurya and Nanda Empires had the largest armies in the world, peaking at over 600,000 infantry, 30,000 cavalry, 8,000 war chariots and 9,000 war elephants, not including tributary state allies. In the Gupta age, large armies of longbowmen were recruited to fight off invading horse archer armies. Elephants, pikemen and cavalry were other featured troops.
In Rajput times, the main piece of equipment was iron or chain-mail armour, a round shield, either a curved blade or a straight-sword, a chakra disc and a katar dagger.
The states of China raised armies for at least 1000 years before the Spring and Autumn Annals. By the Warring States period, the crossbow had been perfected enough to become a military secret, with bronze bolts which could pierce any armor. Thus any political power of a state rested on the armies and their organization. China underwent political consolidation of the states of Han (韓), Wei (魏), Chu (楚), Yan (燕), Zhao (趙) and Qi (齊), until by 221 BCE, Qin Shi Huang (秦始皇帝), the first emperor of the Qin dynasty, attained absolute power. This first emperor of China could command the creation of a Terracotta Army to guard his tomb in the city of Xi'an (西安), as well as a realignment of the Great Wall of China to strengthen his empire against insurrection, invasion and incursion.
Sun Tzu's "The Art of War" remains one of China's Seven Military Classics, even though it is two thousand years old. Since no political figure could exist without an army, measures were taken to ensure only the most capable leaders could control the armies. Civil bureaucracies (士大夫) arose to control the productive power of the states, and their military power.
The Spartan Army was one of the earliest known professional armies. Boys were sent to a barracks at the age of seven or eight to train to become soldiers. At the age of thirty they were released from the barracks and allowed to marry and have a family. After that, men devoted their lives to war until their retirement at the age of 60. Unlike other civilizations, whose armies had to disband during the planting and harvest seasons, Sparta could keep its army in the field because its serfs, the "helots", did the manual labor.
This allowed the Spartans to field a full-time army with a campaign season that lasted all year. The Spartan Army was largely composed of hoplites, equipped with arms and armor nearly identical to each other. Each hoplite bore the Spartan emblem and a scarlet uniform. The main pieces of this armor were a round shield, a spear and a helmet.
The Roman Army had its origins in the citizen army of the Republic, which was staffed by citizens serving mandatory duty for Rome. Reforms turned the army into a professional organization which was still largely filled by citizens, but these citizens served continuously for 25 years before being discharged.
The Romans were also noted for making use of auxiliary troops, non-Romans who served with the legions and filled roles that the traditional Roman military could not fill effectively, such as light skirmish troops and heavy cavalry. After their service in the army they were made citizens of Rome and then their children were citizens also. They were also given land and money to settle in Rome. In the Late Roman Empire, these auxiliary troops, along with foreign mercenaries, became the core of the Roman Army; moreover, by the time of the Late Roman Empire tribes such as the Visigoths were paid to serve as mercenaries.
In the earliest Middle Ages it was the obligation of every aristocrat to respond to the call to battle with his own equipment, archers, and infantry. This decentralized system was necessary due to the social order of the time, but could lead to motley forces with variable training, equipment and abilities. The more resources the noble had access to, the better his troops would be.
Initially, the words "knight" and "noble" were used interchangeably as there was not generally a distinction between them. While the nobility did fight upon horseback, they were also supported by lower class citizens – and mercenaries and criminals – whose only purpose was participating in warfare because, more often than not, they held brief employment during their lord's engagement. As the Middle Ages progressed and feudalism developed into a legitimate social and economic system, knights started to develop into their own class with a minor caveat: they were still in debt to their lord. No longer primarily driven by economic need, the newly established vassal class was, instead, driven by fealty and chivalry.
As central governments grew in power, a return to the citizen armies of the classical period also began, as central levies of the peasantry began to be the central recruiting tool. England was one of the most centralized states in the Middle Ages, and the armies that fought in the Hundred Years' War were, predominantly, composed of paid professionals.
In theory, every Englishman had an obligation to serve for forty days. Forty days was not long enough for a campaign, especially one on the continent.
Thus the scutage was introduced, whereby most Englishmen paid to escape their service and this money was used to create a permanent army. However, almost all high medieval armies in Europe were composed of a great deal of paid core troops, and there was a large mercenary market in Europe from at least the early 12th century.
As the Middle Ages progressed in Italy, Italian cities began to rely mostly on mercenaries to do their fighting rather than the militias that had dominated the early and high medieval period in this region. These would be groups of career soldiers who would be paid a set rate. Mercenaries tended to be effective soldiers, especially in combination with standing forces, but in Italy they came to dominate the armies of the city states. This made them considerably less reliable than a standing army. Mercenary-on-mercenary warfare in Italy also led to relatively bloodless campaigns which relied as much on maneuver as on battles.
In 1439 the French legislature, known as the Estates General (French: "états généraux"), passed laws that restricted military recruitment and training to the king alone. There was a new tax to be raised known as the "taille" that was to provide funding for a new Royal army. The mercenary companies were given a choice of either joining the Royal army as "compagnies d'ordonnance" on a permanent basis, or being hunted down and destroyed if they refused. France gained a total standing army of around 6,000 men, which was sent out to gradually eliminate the remaining mercenaries who insisted on operating on their own. The new standing army had a more disciplined and professional approach to warfare than its predecessors. The reforms of the 1440s eventually led to the French victory at Castillon in 1453, and the conclusion of the Hundred Years' War. By 1450 the companies were divided into the field army, known as the "grande ordonnance", and the garrison force, known as the "petite ordonnance".
At first, nation states lacked the funds needed to maintain standing forces, so they tended to hire mercenaries to serve in their armies during wartime. Such mercenary companies typically formed at the ends of periods of conflict, when men-at-arms were no longer needed by their respective governments.
The veteran soldiers thus looked for other forms of employment, often becoming mercenaries. Free Companies would often specialize in forms of combat that required longer periods of training that was not available in the form of a mobilized militia.
As late as the 1650s, most troops were mercenaries. However, after the 17th century, most states invested in better disciplined and more politically reliable permanent troops. For a time mercenaries became important as trainers and administrators, but soon these tasks were also taken over by the state. The massive size of these armies required a large supporting force of administrators.
The newly centralized states were forced to set up vast organized bureaucracies to manage these armies, which some historians argue is the basis of the modern bureaucratic state. The combination of increased taxes and increased centralisation of government functions caused a series of revolts across Europe such as the Fronde in France and the English Civil War.
In many countries, the resolution of this conflict was the rise of absolute monarchy. Only in England and the Netherlands did representative government evolve as an alternative. From the late 17th century, states learned how to finance wars through long term low interest loans from national banking institutions. The first state to master this process was the Dutch Republic. This transformation in the armies of Europe had great social impact. The defense of the state now rested on the commoners, not on the aristocrats.
However, aristocrats continued to monopolise the officer corps of almost all early modern armies, including their high command. Moreover, popular revolts almost always failed unless they had the support and patronage of the noble or gentry classes. The new armies, because of their vast expense, were also dependent on taxation and the commercial classes who also began to demand a greater role in society. The great commercial powers of the Dutch and English matched much larger states in military might.
As any man could be quickly trained in the use of a musket, it became far easier to form massive armies. The inaccuracy of the weapons necessitated large groups of massed soldiers. This led to a rapid swelling of the size of armies. For the first time huge masses of the population could enter combat, rather than just the highly skilled professionals.
It has been argued that the drawing of men from across the nation into an organized corps helped breed national unity and patriotism, and during this period the modern notion of the nation state was born. However, this would only become apparent after the French Revolutionary Wars. At this time, the "levée en masse" and conscription would become the defining paradigm of modern warfare.
Before then, however, most national armies were in fact composed of many nationalities. In Spain armies were recruited from all the Spanish European territories including Spain, Italy, Wallonia (Walloon Guards) and Germany. The French recruited some soldiers from Germany, Switzerland as well as from Piedmont. Britain recruited Hessian and Hanoverian troops until the late 18th century. Irish Catholics made careers for themselves in the armies of many Catholic European states.
Prior to the English Civil War in England, the monarch maintained a personal bodyguard of Yeomen of the Guard and the Honourable Corps of Gentlemen at Arms, or "gentlemen pensioners", and a few locally raised companies to garrison important places such as Berwick on Tweed or Portsmouth (or Calais before it was recaptured by France in 1558).
Troops for foreign expeditions were raised upon an "ad hoc" basis. Noblemen and professional regular soldiers were commissioned by the monarch to supply troops, raising their quotas by indenture from a variety of sources. On January 26, 1661 Charles II issued the Royal Warrant that marked the genesis of what would become the British Army, although the Scottish and English Armies would remain two separate organizations until the unification of England and Scotland in 1707. The small force was represented by only a few regiments.
After the American Revolutionary War the Continental Army was quickly disbanded as part of the Americans' distrust of standing armies, and irregular state militias became the sole ground army of the United States, with the exception of one battery of artillery guarding West Point's arsenal. The First American Regiment was established in 1784. However, because of continuing conflict with Native Americans, it was soon realized that it was necessary to field a trained standing army. The first of these, the Legion of the United States, was established in 1791.
Until 1733 the common soldiers of the Prussian Army consisted largely of peasantry recruited or impressed from Brandenburg–Prussia, leading many to flee to neighboring countries. To halt this trend, Frederick William I divided Prussia into regimental cantons. Every youth was required to serve as a soldier in these recruitment districts for three months each year; this met agrarian needs and added extra troops to bolster the regular ranks.
Russian tsars before Peter I of Russia maintained professional hereditary musketeer corps (streltsy in Russian) that were highly unreliable and undisciplined. In times of war the armed forces were augmented by peasants. Peter I introduced a modern regular army built on the German model, but with a new aspect: officers were not necessarily drawn from the nobility, as talented commoners were given promotions that eventually included a noble title at the attainment of an officer's rank. Conscription of peasants and townspeople was based on a quota system, per settlement. Initially it was based on the number of households, later on population numbers. The term of service in the 18th century was for life. In 1793 it was reduced to 25 years. In 1834 it was reduced to 20 years plus 5 years in reserve and in 1855 to 12 years plus 3 years of reserve.
The first Ottoman standing army were Janissaries. They replaced forces that mostly comprised tribal warriors ("ghazis") whose loyalty and morale could not always be trusted. The first Janissary units were formed from prisoners of war and slaves, probably as a result of the sultan taking his traditional one-fifth share of his army's booty in kind rather than cash.
From the 1380s onwards, their ranks were filled under the "devşirme" system, where feudal dues were paid by service to the sultan. The "recruits" were mostly Christian youths, reminiscent of mamluks.
China organized the Manchu people into the Eight Banner system in the early 17th century. Defected Ming armies formed the Green Standard Army. These troops enlisted voluntarily and for long terms of service.
Conscription allowed the French Republic to form the "Grande Armée", what Napoleon Bonaparte called "the nation in arms", which successfully battled European professional armies.
Conscription, particularly when the conscripts are being sent to foreign wars that do not directly affect the security of the nation, has historically been highly politically contentious in democracies.
Canada also had a political dispute over conscription during World War II. Similarly, mass protests against conscription to fight the Vietnam War occurred in several countries in the late 1960s.
In developed nations, the increasing emphasis on technological firepower and better-trained fighting forces, the sheer unlikelihood of a conventional military assault on most developed nations, as well as memories of the contentiousness of the Vietnam War experience, make mass conscription unlikely in the foreseeable future.
Russia, as well as many other nations, retains mainly a conscript army. There is also a very rare "citizen army" as used in Switzerland (see Military of Switzerland).
Western armies are usually subdivided as follows:
A field army is composed of a headquarters, army troops, a variable number of corps, typically between three and four, and a variable number of divisions, also between three and four. A battle is influenced at the Field Army level by transferring divisions and reinforcements from one corps to another to increase the pressure on the enemy at a critical point. Field armies are controlled by a general or lieutenant general.
A particular army can be named or numbered to distinguish it from military land forces in general. For example, the First United States Army and the Army of Northern Virginia. In the British Army it is normal to spell out the ordinal number of an army (e.g. First Army), whereas lower formations use figures (e.g. 1st Division).
Armies (as well as army groups and theaters) are large formations which vary significantly between armed forces in size, composition, and scope of responsibility.
In the Soviet Red Army and the Soviet Air Force, "Armies" could vary in size, but were subordinate to an Army Group-sized "front" in wartime. In peacetime, a Soviet army was usually subordinate to a military district. Viktor Suvorov's "Inside the Soviet Army" describes how Cold War era Soviet military districts were actually composed of a front headquarters and a military district headquarters co-located for administration and deception ("maskirovka") reasons. | https://en.wikipedia.org/wiki?curid=1376 |
Alligatoridae
The family Alligatoridae of crocodylians includes alligators and caimans.
The superfamily Alligatoroidea includes all crocodilians (fossil and extant) that are more closely related to the American alligator than to either the Nile crocodile or the gharial.
Members of this superfamily first arose in the late Cretaceous. "Leidyosuchus" of Alberta is the earliest known genus. Fossil alligatoroids have been found throughout Eurasia as land bridges across both the North Atlantic and the Bering Strait have connected North America to Eurasia during the Cretaceous, Paleogene, and Neogene periods. Alligators and caimans split in North America during the late Cretaceous and the latter reached South America by the Paleogene, before the closure of the Isthmus of Panama during the Neogene period. The Chinese alligator likely descended from a lineage that crossed the Bering land bridge during the Neogene. The modern American alligator is well represented in the fossil record of the Pleistocene. The alligator's full mitochondrial genome was sequenced in the 1990s and it suggests the animal evolved at a rate similar to mammals and greater than birds and most cold-blooded vertebrates. The full genome, published in 2014, suggests that the alligator evolved much more slowly than mammals and birds.
The lineage including alligators proper (Alligatorinae) occurs in the fluvial deposits of the age of the Upper Chalk in Europe, where they did not die out until the Pliocene age. The true alligators are today represented by two species: "A. mississippiensis" in the southeastern United States, which can grow to 15 ft (4.6 m) and weigh 1000 lbs (453 kg), and the small "A. sinensis" in the Yangtze River, China, which grows to an average of 5 ft (1.5 m). Their name derives from the Spanish "el lagarto", which means "the lizard".
In Central and South America, the alligator family is represented by six species of the subfamily Caimaninae, which differ from the alligator by the absence of a bony septum between the nostrils, and having ventral armour composed of overlapping bony scutes, each of which is formed of two parts united by a suture. Besides the three species in "Caiman", the smooth-fronted caimans in genus "Paleosuchus" and the black caiman in "Melanosuchus" are described. Caimans tend to be more agile and crocodile-like in their movements, and have longer, sharper teeth than alligators.
"C. crocodilus", the spectacled caiman, has the widest distribution, from southern Mexico to the northern half of Argentina, and grows to a modest size of about . The largest is the near-threatened "Melanosuchus niger", the "jacaré-açu" or large or black caiman of the Amazon River basin. Black caimans grow to , with the largest recorded size . The black caiman and American alligator are the only members of the alligator family that pose the same danger to humans as the larger species of the crocodile family.
Although caimans have not been studied in depth, scientists have learned their mating cycles (previously thought to be spontaneous or year-round) are linked to the rainfall cycles and the river levels, which increases chances of survival for their offspring. | https://en.wikipedia.org/wiki?curid=1380 |
Alder
Alder is the common name of a genus of flowering plants, Alnus, belonging to the birch family Betulaceae. The genus comprises about 35 species of monoecious trees and shrubs, a few reaching a large size, distributed throughout the north temperate zone with a few species extending into Central America, as well as the northern and southern Andes.
The common name "alder" evolved from the Old English word "alor", which in turn is derived from Proto-Germanic root "aliso". The generic name "Alnus" is the equivalent Latin name (which is also the source for "Alamo", the Spanish term for the tree).
With a few exceptions, alders are deciduous, and the leaves are alternate, simple, and serrated. The flowers are catkins with elongate male catkins on the same plant as shorter female catkins, often before leaves appear; they are mainly wind-pollinated, but also visited by bees to a small extent. These trees differ from the birches ("Betula", another genus in the family) in that the female catkins are woody and do not disintegrate at maturity, opening to release the seeds in a similar manner to many conifer cones.
The largest species are red alder ("A. rubra") on the west coast of North America, and black alder ("A. glutinosa"), native to most of Europe and widely introduced elsewhere, both reaching over 30 m. By contrast, the widespread "Alnus alnobetula" (green alder) is rarely more than a 5-m-tall shrub.
Alders are commonly found near streams, rivers, and wetlands. Wetland woods where alders are particularly prominent are sometimes called alder carrs. In the Pacific Northwest of North America, the white alder ("Alnus rhombifolia"), unlike other northwest alders, has an affinity for warm, dry climates, where it grows along watercourses, such as along the lower Columbia River east of the Cascades and the Snake River, including Hells Canyon.
Alder leaves and sometimes catkins are used as food by numerous butterflies and moths.
"A. glutinosa" and "A. viridis" are classed as environmental weeds in New Zealand. Alder leaves and especially the roots are important to the ecosystem because they enrich the soil with nitrogen and other nutrients.
Alder is particularly noted for its important symbiotic relationship with "Frankia alni", an actinomycete, filamentous, nitrogen-fixing bacterium. This bacterium is found in root nodules, which may be as large as a human fist, with many small lobes, and light brown in colour. The bacterium absorbs nitrogen from the air and makes it available to the tree. Alder, in turn, provides the bacterium with sugars, which it produces through photosynthesis. As a result of this mutually beneficial relationship, alder improves the fertility of the soil where it grows, and as a pioneer species, it helps provide additional nitrogen for the successional species which follow.
Because of its abundance, red alder delivers large amounts of nitrogen to enrich forest soils. Red alder stands have been found to supply between 120 and 290 pounds of nitrogen per acre (130 to 320 kg per ha) annually to the soil. From Alaska to Oregon, "Alnus viridis" subsp. "sinuata" ("A. sinuata", Sitka alder or slide alder) characteristically pioneers fresh, gravelly sites at the foot of retreating glaciers. Studies show that Sitka alder, a more shrubby variety of alder, adds nitrogen to the soil at an average of 55 pounds per acre (60 kg per ha) per year, helping convert the sterile glacial terrain to soil capable of supporting a conifer forest. Alders are common among the first species to colonize disturbed areas from floods, windstorms, fires, landslides, etc. Alder groves themselves often serve as natural firebreaks since these broad-leaved trees are much less flammable than conifers. Their foliage and leaf litter do not carry a fire well, and their thin bark is sufficiently resistant to protect them from light surface fires. In addition, the light weight of alder seeds (650,000 per pound, or 1.5 million per kg) allows for easy dispersal by the wind. Although red alder outgrows coastal Douglas-fir for the first 25 years, it is very shade intolerant and seldom lives more than 100 years. Red alder is the Pacific Northwest's largest alder and the most plentiful and commercially important broad-leaved tree in the coastal Northwest. Groves of red alder 10 to 20 inches (25 to 50 cm) in diameter intermingle with young Douglas-fir forests west of the Cascades, attaining a maximum height of 100 to 110 feet (30 to 33 m) in about sixty years and then losing vigor as heart rot sets in. Alders largely help create conditions favorable for giant conifers that replace them.
Alder roots are parasitized by northern groundcone.
The catkins of some alder species have a degree of edibility, and may be rich in protein. Reported to have a bitter and unpleasant taste, they are more useful for survival purposes. The wood of certain alder species is often used to smoke various food items such as coffee, salmon and other seafood.
Most of the pilings that form the foundation of Venice were made from alder trees.
Alder bark contains the anti-inflammatory salicin, which is metabolized into salicylic acid in the body. Some Native American cultures use red alder bark ("Alnus rubra") to treat poison oak, insect bites, and skin irritations. Blackfeet Indians have traditionally used an infusion made from the bark of red alder to treat lymphatic disorders and tuberculosis. Recent clinical studies have verified that red alder contains betulin and lupeol, compounds shown to be effective against a variety of tumors.
The inner bark of the alder, as well as red osier dogwood, or chokecherry, is used by some Indigenous peoples of the Americas in smoking mixtures, known as "kinnikinnick", to improve the taste of the bearberry leaf.
Alder is illustrated in the coat of arms for the Austrian town of Grossarl.
Electric guitars, most notably those manufactured by the Fender Musical Instruments Corporation, have been built with alder bodies since the 1950s. Alder is appreciated for its claimed tight and even balanced tone, especially when compared to mahogany, and has been adopted by many electric guitar manufacturers.
As a hardwood, alder is used in making furniture, cabinets, and other woodworking products. For example, in the television series "Northern Exposure" season 3 episode "Things Become Extinct" (1992), Native American Ira Wingfeather makes duck flutes out of alder tree branches while Ed Chigliak films.
Alder bark and wood (like oak and sweet chestnut) contain tannin and are traditionally used to tan leather.
A red dye can also be extracted from the outer bark, and a yellow dye from the inner bark.
The genus is divided into three subgenera:
Subgenus "Alnus": Trees with stalked shoot buds, male and female catkins produced in autumn (fall) but stay closed over winter, pollinating in late winter or early spring, about 15–25 species, including:
Subgenus "Clethropsis". Trees or shrubs with stalked shoot buds, male and female catkins produced in autumn (fall) and expanding and pollinating then, three species:
Subgenus "Alnobetula". Shrubs with shoot buds not stalked, male and female catkins produced in late spring (after leaves appear) and expanding and pollinating then, one to four species: | https://en.wikipedia.org/wiki?curid=1383 |
Amos Bronson Alcott
Amos Bronson Alcott (; November 29, 1799March 4, 1888) was an American teacher, writer, philosopher, and reformer. As an educator, Alcott pioneered new ways of interacting with young students, focusing on a conversational style and avoiding traditional punishment. He hoped to perfect the human spirit and, to that end, advocated a vegan diet before the term was coined. He was also an abolitionist and an advocate for women's rights.
Born in Wolcott, Connecticut in 1799, Alcott had only minimal formal schooling before attempting a career as a traveling salesman. Worried about how the itinerant life might have a negative impact on his soul, he turned to teaching. His innovative methods, however, were controversial, and he rarely stayed in one place very long. His best-known teaching position was at the Temple School in Boston. His experience there was turned into two books: "Records of a School" and "Conversations with Children on the Gospels". Alcott became friends with Ralph Waldo Emerson and became a major figure in transcendentalism. His writings on behalf of that movement, however, are heavily criticized for being incoherent. Based on his ideas for human perfection, Alcott founded Fruitlands, a transcendentalist experiment in community living. The project was short-lived and failed after seven months. Alcott continued to struggle financially for most of his life. Nevertheless, he continued focusing on educational projects and opened a new school at the end of his life in 1879. He died in 1888.
Alcott married Abby May in 1830 and they eventually had four surviving children, all daughters. Their second was Louisa May, who fictionalized her experience with the family in her novel "Little Women" in 1868.
A native New Englander, Amos Bronson Alcott was born in Wolcott, Connecticut (then recently renamed from "Farmingbury") on November 29, 1799. His parents were Joseph Chatfield Alcox and Anna Bronson Alcox. The family home was in an area known as Spindle Hill, and his father traced his ancestry to colonial-era settlers in eastern Massachusetts. The family originally spelled their name "Alcock", later changed to "Alcocke" and then "Alcox". Amos Bronson, the oldest of eight children, later changed the spelling to "Alcott" and dropped his first name.
At age six, young Bronson began his formal education in a one-room schoolhouse in the center of town, but he learned how to read at home with the help of his mother. The school taught only reading, writing, and spelling, and he left it at the age of 10. When he was 13, his uncle, Reverend Tillotson Bronson, invited him into his home in Cheshire, Connecticut, to be educated and prepared for college. Bronson gave it up after only a month and was self-educated from then on. He was not particularly social, and his only close friend was his neighbor and second cousin William Alcott, with whom he shared books and ideas. Bronson Alcott later reflected on his childhood at Spindle Hill: "It kept me pure... I dwelt amidst the hills... God spoke to me while I walked the fields." At age 15, he took a job working for clockmaker Seth Thomas in the nearby town of Plymouth.
At age 17, Alcott passed the exam for a teaching certificate but had trouble finding work as a teacher. Instead, he left home and became a traveling salesman in the American South, peddling books and merchandise. He hoped the job would earn him enough money to support his parents, "to make their cares, and burdens less... and get them free from debt", though he soon spent most of his earnings on a new suit. At first, he thought it an acceptable occupation but soon worried about his spiritual well-being. In March 1823, Alcott wrote to his brother: "Peddling is a hard place to serve God, but a capital one to serve Mammon." Near the end of his life, he fictionalized this experience in his book, "New Connecticut", originally circulated only among friends before its publication in 1881.
By the summer of 1823, Alcott returned to Connecticut in debt to his father, who bailed him out after his last two unsuccessful sales trips. He took a job as a schoolteacher in Cheshire with the help of his Uncle Tillotson. He quickly set about reforming the school. He added backs to the benches on which students sat, improved lighting and heating, de-emphasized rote learning, and provided individual slates to each student—paid for by himself. Alcott had been influenced by the educational philosophy of the Swiss pedagogue Johann Heinrich Pestalozzi and even renamed his school "The Cheshire Pestalozzi School". His style attracted the attention of Samuel Joseph May, who introduced Alcott to his sister Abby May. She called him "an intelligent, philosophic, modest man" and found his views on education "very attractive". Locals in Cheshire were less supportive and became suspicious of his methods. Many students left and were enrolled in the local common school or a recently re-opened private school for boys. On November 6, 1827, Alcott started teaching in Bristol, Connecticut, still using the same methods he had used in Cheshire, but opposition from the community surfaced quickly; he was unemployed by March 1828. He moved to Boston on April 24, 1828, and was immediately impressed, referring to the city as a place "where the light of the sun of righteousness has risen". He opened the Salem Street Infant School two months later on June 23. Abby May applied to be his teaching assistant; instead, the couple became engaged, without the consent of her family. They were married at King's Chapel on May 22, 1830; he was 30 years old and she was 29. Her brother conducted the ceremony and a modest reception followed at her father's house. After their marriage the Alcotts moved to 12 Franklin Street in Boston, a boarding house run by a Mrs. Newall. Around this time, Alcott also first expressed his public disdain for slavery.
In November 1830, he and William Lloyd Garrison founded what he later called a "preliminary Anti-Slavery Society", though he differed from Garrison as a nonresistant. Alcott became a member of the Boston Vigilance Committee.
Attendance at Alcott's school was falling. A wealthy Quaker named Reuben Haines proposed that he and the educator William Russell start a new school in Pennsylvania, associated with the Germantown Academy. Alcott accepted, and he and his newly pregnant wife set forth on December 14. Alcott and Russell were initially concerned that Germantown would not be conducive to their progressive approach to education and considered establishing the school in nearby Philadelphia instead. Unsuccessful there, they went back to Germantown, where the school was established, though the rent-free home Haines had offered the Alcotts was no longer available and they instead had to rent rooms in a boarding-house. It was there that their first child, a daughter they named Anna Bronson Alcott, was born on March 16, 1831, after 36 hours of labor. By the fall of that year, their benefactor Haines died suddenly and the Alcotts again suffered financial difficulty. "We hardly earn the bread", wrote Abby May to her brother, "[and] the butter we have to think about."
The couple's only son was born on April 6, 1839, but lived only a few minutes. The mother recorded: "Gave birth to a fine boy full grown perfectly formed but not living". It was in Germantown that the couple's second daughter was born. Louisa May Alcott was born on her father's birthday, November 29, 1832, at a half-hour past midnight. Bronson described her as "a very fine healthful child, much more so than Anna was at birth". The Germantown school, however, was faltering; soon only eight pupils remained. Their benefactor Haines died before Louisa's birth. He had helped recruit students and even paid tuition for some of them. As Abby wrote, his death "has prostrated all our hopes here". On April 10, 1833, the family moved to Philadelphia, where Alcott ran a day school. As usual, Alcott's methods were controversial; a former student later referred to him as "the most eccentric man who ever took on himself to train and form the youthful mind." Alcott began to believe Boston was the best place for his ideas to flourish. He contacted theologian William Ellery Channing for support. Channing approved of Alcott's methods and promised to help find students to enroll, including his daughter Mary. Channing also secured aid from Justice Lemuel Shaw and Boston mayor Josiah Quincy, Jr.
On September 22, 1834, Alcott opened a school of about 30 students, mostly from wealthy families. It was named the Temple School because classes were held at the Masonic Temple on Tremont Street in Boston. His assistant was Elizabeth Palmer Peabody, later replaced by Margaret Fuller. Mary Peabody Mann served as a French instructor for a time. The school was briefly famous, and then infamous, because of his original methods. Before 1830, writing (except in higher education) equated to rote drills in the rules of grammar, spelling, vocabulary, penmanship, and transcription of adult texts. In that decade, however, progressive reformers such as Alcott, influenced by Pestalozzi as well as Friedrich Fröbel and Johann Friedrich Herbart, began to advocate writing about subjects from students' personal experiences. Reformers argued against beginning instruction with rules, favoring instead helping students learn to write by expressing the personal meaning of events within their own lives. Alcott's plan was to develop self-instruction on the basis of self-analysis, with an emphasis on conversation and questioning rather than the lecturing and drill prevalent in U.S. classrooms of the time. Alongside writing and reading, he gave lessons in "spiritual culture", which included interpretation of the Gospels, and advocated "object teaching" in writing instruction. He even went so far as to decorate his schoolroom with visual elements he thought would inspire learning: paintings, books, comfortable furniture, and busts or portraits of Plato, Socrates, Jesus, and William Ellery Channing.
During this time, the Alcotts had another child. Born on June 24, 1835, she was named Elizabeth Peabody Alcott in honor of the teaching assistant at the Temple School. By age three, however, her mother changed her name to Elizabeth "Sewall" Alcott, after her own mother.
In July 1835, Peabody published her account as an assistant to the Temple School as "Record of a School: Exemplifying the General Principles of Spiritual Culture". While working on a second book, Alcott and Peabody had a falling out and "Conversations with Children on the Gospels" was prepared with help from Peabody's sister Sophia, published at the end of December 1836. Alcott's methods were not well received; many found his conversations on the Gospels close to blasphemous. For example, he asked students to question if Biblical miracles were literal and suggested that all people are part of God. In the "Boston Daily Advertiser", Nathan Hale criticized Alcott's "flippant and off hand conversation" about serious topics from the Virgin birth of Jesus to circumcision. Joseph T. Buckingham called Alcott "either insane or half-witted" and "an ignorant and presuming charlatan". The book did not sell well; a Boston lawyer bought 750 copies to use as waste paper.
The Temple School was widely denounced in the press. Reverend James Freeman Clarke was one of Alcott's few supporters and defended him against the harsh response from Boston periodicals. Alcott was rejected by most public opinion and, by the summer of 1837, he had only 11 students left and no assistant after Margaret Fuller moved to Providence, Rhode Island. The controversy had caused many parents to remove their children and, as the school closed, Alcott became increasingly financially desperate. Remaining steadfast in his pedagogy, a forerunner of progressive and democratic schooling, he alienated parents in a later "parlor school" by admitting an African American child to the class, whom he then refused to expel in the face of protests.
Beginning in 1836, Alcott's membership in the Transcendental Club put him in such company as Ralph Waldo Emerson, Orestes Brownson, and Theodore Parker. He became a member at the Club's second meeting and hosted its third. A biographer of Emerson described the group as "the occasional meetings of a changing body of liberal thinkers, agreeing in nothing but their liberality". Frederic Henry Hedge wrote of the group's nature: "There was no club in the strict sense... only occasional meetings of like-minded men and women". Alcott preferred the term "Symposium" for the group.
In late April 1840, urged by Emerson, Alcott moved to the town of Concord. He rented a home for $50 a year within walking distance of Emerson's house; he named it Dove Cottage, though the family also called it Concordia Cottage. A supporter of Alcott's philosophies, Emerson offered to help with his writing, which proved a difficult task. After several revisions, for example, he deemed the essay "Psyche" (Alcott's account of how he educated his daughters) unpublishable. Alcott also wrote a series of pieces patterned after the work of the German writer Johann Wolfgang von Goethe, which were eventually published in the Transcendentalists' journal, "The Dial". Emerson wrote to Margaret Fuller, then editor, that they might "pass muster & even pass for just & great". He was wrong. Alcott's so-called "Orphic Sayings" were widely mocked as silly and unintelligible; Fuller herself disliked them but did not want to hurt Alcott's feelings. In the first issue, for example, he wrote:
On July 26, 1840, Abby May gave birth again. Originally referred to as Baby for several months, she was eventually named Abby May after her mother. As a teenager, she changed the spelling of her name to "Abbie" before choosing to use only "May".
With financial support from Emerson, Alcott left Concord on May 8, 1842, for a visit to England, leaving his family in the care of his brother Junius. He met two admirers, Charles Lane and Henry C. Wright, leaders of Alcott House, an experimental school based on Alcott's methods from the Temple School, located about ten miles outside London. The school's founder, James Pierpont Greaves, had only recently died, but Alcott was invited to stay there for a week. Alcott persuaded them to come to the United States with him; Lane and his son moved into the Alcott house and helped with family chores. Persuaded in part by Lane's abolitionist views, Alcott took a stand against the John Tyler administration's plan to annex Texas as a slave territory and refused to pay his poll tax. Abby May wrote in her journal on January 17, 1843, "A day of some excitement, as Mr. Alcott refused to pay his town tax... After waiting some time to be committed [to jail], he was told it was paid by a friend. Thus we were spared the affliction of his absence and the triumph of suffering for his principles." The annual poll tax was only $1.50. The incident inspired Henry David Thoreau, whose similar protest led to a night in jail and his essay "Civil Disobedience". Around this time, the Alcott family set up a sort of domestic post office to curb potential domestic tension. Abby May described her idea: "I thought it would afford a daily opportunity for the children, indeed all of us, to interchange thought and sentiment".
Lane and Alcott collaborated on a major expansion of their educational theories into a Utopian society. Alcott, however, was still in debt and could not purchase the land needed for their planned community. In a letter, Lane wrote, "I do not see anyone to act the money part but myself." In May 1843, he purchased a farm in Harvard, Massachusetts. Up front, he paid $1,500 of the total $1,800 value of the property; the rest was meant to be paid by the Alcotts over a two-year period. They moved to the farm on June 1 and optimistically named it "Fruitlands" despite only ten old apple trees on the property. In July, Alcott announced their plans in "The Dial": "We have made an arrangement with the proprietor of an estate of about a hundred acres, which liberates this tract from human ownership".
Their goal was to regain access to Eden by finding the correct formula for perfect living, following specific rules governing agriculture, diet, and reproduction. In order to achieve this, they removed themselves from the economy as much as possible and lived independently; unlike a similar project named Brook Farm, the participants at Fruitlands avoided interaction with local communities. Calling themselves a "consociate family", they agreed to follow a strict vegetarian diet and to till the land without the use of animal labor. After some difficulty, they relented and allowed some cattle to be "enslaved". They also banned coffee, tea, alcoholic drinks, milk, and warm bathwater. They only ate "aspiring vegetables"—those which grew upward—and refused those that grew downward like potatoes. As Alcott had published earlier, "Our wine is water,—flesh, bread;—drugs, fruits." For clothing, they prohibited leather because animals were killed for it, as well as cotton, silk, and wool, because they were products of slave labor. Alcott had high expectations but was often away when the community most needed him as he attempted to recruit more members.
The experimental community was never successful, partly because most of the land was not arable. Alcott lamented, "None of us were prepared to actualize practically the ideal life of which we dreamed. So we fell apart". Its founders were often away as well; in the middle of harvesting, they left for a lecture tour through Providence, Rhode Island; New York City; and New Haven, Connecticut. In its seven months, only 13 people joined, including the Alcotts and Lanes. Other than Abby May and her daughters, only one other woman joined, Ann Page. One rumor holds that Page was asked to leave after eating a fish tail with a neighbor. Lane believed Alcott had misled him into thinking enough people would join the enterprise and developed a strong dislike for the nuclear family. He quit the project and moved to a nearby Shaker family with his son. After Lane's departure, Alcott fell into a depression and could not speak or eat for three days. Abby May thought Lane had purposely sabotaged her family. She wrote to her brother, "All Mr. Lane's efforts have been to disunite us. But Mr. Alcott's ... paternal instincts were too strong for him." When the final payment on the farm came due, Sam May refused to cover his brother-in-law's debts, as he often had, possibly at Abby May's suggestion. The experiment having failed, the Alcotts had to leave Fruitlands.
The members of the Alcott family were not happy with their Fruitlands experience. At one point, Abby May threatened that she and their daughters would move elsewhere, leaving Bronson behind. Louisa May Alcott, who was ten years old at the time, later wrote of the experience in "Transcendental Wild Oats" (1873): "The band of brothers began by spading garden and field; but a few days of it lessened their ardor amazingly."
In January 1844, Alcott moved his family to Still River, a village within Harvard, but on March 1, 1845, the family returned to Concord to live in a home they named "The Hillside" (later renamed "The Wayside" by Nathaniel Hawthorne). Both Emerson and Sam May assisted in securing the home for the Alcotts. While living in the home, Louisa began writing in earnest and was given her own room. She later said her years at the home "were the happiest years" of her life; many of the incidents in her novel "Little Women" (1868) are based on this period. Alcott renovated the property, moving a barn and painting the home a rusty olive color, as well as tending to over six acres of land. On May 23, 1845, Abby May received a sum from her father's estate, which was put into a trust fund, providing the family modest financial security. That summer, Bronson Alcott lent Henry David Thoreau his ax to prepare his home at Walden Pond.
The Alcotts hosted a steady stream of visitors at The Hillside, including fugitive slaves, whom they sheltered in secret as a station on the Underground Railroad. Alcott's opposition to slavery also fueled his opposition to the Mexican–American War, which began in 1846. He considered the war a blatant attempt to extend slavery and asked whether the country was made up of "a people bent on conquest, on getting the golden treasures of Mexico into our hands, and of subjugating foreign peoples?"
In 1848, Abby May insisted they leave Concord, which she called "cold, heartless, brainless, soulless". The Alcott family put The Hillside up for rent and moved to Boston. There, next door to Peabody's book store on West Street, Bronson Alcott hosted a series based on the "Conversations" model by Margaret Fuller called "A Course on the Conversations on Man—his History, Resources, and Expectations". Participants, both men and women, were charged three dollars to attend or five dollars for all seven lectures. In March 1853, Alcott was invited to teach fifteen students at Harvard Divinity School in an extracurricular, non-credit course.
Alcott and his family moved back to Concord after 1857, where he and his family lived in the Orchard House until 1877. In 1860, Alcott was named superintendent of Concord Schools.
Alcott voted in a presidential election for the first time in 1860. In his journal for November 6, 1860, he wrote: "At Town House, and cast my vote for Lincoln and the Republican candidates generally—the first vote I ever cast for a President and State officers." Alcott was an abolitionist and a friend of the more radical William Lloyd Garrison. He had attended a rally led by Wendell Phillips on behalf of 17-year-old Thomas Sims, a fugitive slave on trial in Boston. Alcott was one of several who attempted to storm the courthouse; when gunshots were heard, he was the only one who stood his ground, though the effort was unsuccessful. He had also stood his ground in a protest against the trial of Anthony Burns. A group had broken down the door of the Boston courthouse but guards beat them back. Alcott stood forward and asked the leader of the group, Thomas Wentworth Higginson, "Why are we not within?" He then walked calmly into the courthouse, was threatened with a gun, and turned back, "but without hastening a step", according to Higginson.
In 1862, Louisa moved to Washington, D.C. to volunteer as a nurse. On January 14, 1863, the Alcotts received a telegram that Louisa was sick; Bronson immediately went to bring her home, briefly meeting Abraham Lincoln while there. Louisa turned her experience into the book "Hospital Sketches". Her father wrote of it, "I see nothing in the way of a good appreciation of Louisa's merits as a woman and a writer."
Henry David Thoreau died on May 6, 1862, likely from an illness he caught from Alcott two years earlier.
At Emerson's request, Alcott helped arrange Thoreau's funeral, which was held at First Parish Sanctuary in Concord, despite Thoreau having disavowed membership in the church when he was in his early twenties; Emerson wrote a eulogy. Only two years later, neighbor Nathaniel Hawthorne died as well. Alcott served as a pallbearer along with Louis Agassiz, James Thomas Fields, Oliver Wendell Holmes, Sr., Henry Wadsworth Longfellow, and others. With Hawthorne's death, Alcott worried that few of the Concord notables remained. He recorded in his journal: "Fair figures one by one are fading from sight." The next year, Lincoln was assassinated, which Alcott called "appalling news".
In 1868, Alcott met with publisher Thomas Niles, an admirer of "Hospital Sketches". Alcott asked Niles if he would publish a book of short stories by his daughter; instead, he suggested she write a book about girls. Louisa May was not interested initially but agreed to try. "They want a book of 200 pages or more", Alcott told his daughter. The result was "Little Women", published later that year. The book, which fictionalized the Alcott family during the girls' coming-of-age years, recast the father figure as a chaplain, away from home at the front in the Civil War.
Alcott spoke, as opportunity arose, before the "lyceums" then common in various parts of the United States, or addressed groups of hearers as they invited him. These "conversations", as he called them, were more or less informal talks on a great range of topics, spiritual, aesthetic, and practical, in which he emphasized the ideas of the school of American Transcendentalists led by Emerson, who was always his supporter and discreet admirer. He often discoursed on Platonic philosophy and the illumination of the mind and soul by direct communion with Spirit; on the spiritual and poetic monitions of external nature; and on the benefit to man of a serene mood and a simple way of life.
Alcott's published books, all from late in his life, include "Tablets" (1868), "Concord Days" (1872), "New Connecticut" (1881), and "Sonnets and Canzonets" (1882). Louisa May attended to her father's needs in his final years. She purchased a house for her sister Anna which had been the last home of Henry David Thoreau, now known as the Thoreau-Alcott House. Louisa and her parents moved in with Anna as well.
After the death of his wife Abby May on November 25, 1877, Alcott never returned to Orchard House, too heartbroken to live there. He and Louisa May collaborated on a memoir and went over her papers, letters, and journals. "My heart bleeds with the memories of those days", he wrote, "and even long years, of cheerless anxiety and hopeless dependence." Louisa noted her father had become "restless with his anchor gone". They gave up on the memoir project and Louisa burned many of her mother's papers.
On January 19, 1879, Alcott and Franklin Benjamin Sanborn wrote a prospectus for a new school which they distributed to potentially interested people throughout the country. The result was the Concord School of Philosophy and Literature, which held its first session in 1879 in Alcott's study in the Orchard House. In 1880 the school moved to the Hillside Chapel, a building next to the house, where he held conversations and, over the course of successive summers, as he entered his eighties, invited others to give lectures on themes in philosophy, religion and letters. The school, considered one of the first formal adult education centers in America, was also attended by foreign scholars. It continued for nine years.
In April 1882, Alcott's friend and benefactor Ralph Waldo Emerson was sick and bedridden. After visiting him, Alcott wrote, "Concord will be shorn of its human splendor when he withdraws behind the cloud." Emerson died the next day. Alcott himself moved out of Concord for his final years, settling at 10 Louisburg Square in Boston beginning in 1885.
Bedridden at the end of his life, Alcott was visited at Louisburg Square by his daughter Louisa May on March 1, 1888. He said to her, "I am going up. Come with me." She responded, "I wish I could." He died three days later, on March 4; Louisa May died only two days after her father.
Alcott was fundamentally and philosophically opposed to corporal punishment as a means of disciplining his students. Instead, beginning at the Temple School, he would appoint a daily student superintendent. When that student observed an infraction, he or she reported it to the rest of the class and, as a whole, they deliberated on punishment. At times, Alcott offered his own hand for an offending student to strike, saying that any failing was the teacher's responsibility. The shame and guilt this method induced, he believed, was far superior to the fear instilled by corporal punishment; when he used physical "correction" he required that the students be unanimously in support of its application, even including the student to be punished.
The most detailed discussion of his theories on education is in an essay, "Observations on the Principles and Methods of Infant Instruction". Alcott believed that early education must draw out "unpremeditated thoughts and feelings of the child" and emphasized that infancy should primarily focus on enjoyment. He noted that learning was not about the acquisition of facts but the development of a reflective state of mind.
Alcott's ideas as an educator were controversial. The writer Harriet Martineau, for example, wrote dubiously that "the master presupposes his little pupils possessed of all truth; and that his business is to bring it out into expression". Even so, his ideas helped to found one of the first adult education centers in America and provided the foundation for future generations of liberal education. Many of Alcott's educational principles are still used in classrooms today, including "teach by encouragement", art education, music education, acting exercises, learning through experience, risk-taking in the classroom, tolerance in schools, physical education/recess, and early childhood education. The teachings of William Ellery Channing a few years earlier had also laid the groundwork for the work of most of the Concord Transcendentalists.
The Concord School of Philosophy, which closed following Alcott's death in 1888, was reopened almost 90 years later in the 1970s. It has continued functioning with a Summer Conversational Series in its original building at Orchard House, now run by the Louisa May Alcott Memorial Association.
While many of Alcott's ideas continue to be perceived as being on the liberal or radical edge, they remain common themes in society, including vegetarianism and veganism, sustainable living, and temperance and self-control. Alcott described his sustenance as a "Pythagorean diet": meat, eggs, butter, cheese, and milk were excluded, and drinking was confined to well water. Alcott believed that diet held the key to human perfection and connected physical well-being to mental improvement. He further tied the perfection of nature to that of the spirit and, in a sense, anticipated modern environmentalism by condemning pollution and encouraging humankind's role in sustaining ecology.
Alcott's philosophical teachings have been criticized as inconsistent, hazy or abrupt. He formulated no system of philosophy, and shows the influence of Plato, German mysticism, and Immanuel Kant as filtered through the writings of Samuel Taylor Coleridge. Margaret Fuller referred to Alcott as "a philosopher of the balmy times of ancient Greece—a man whom the worldlings of Boston hold in as much horror as the worldlings of Athens held Socrates." In his later years, Alcott related a story from his boyhood: during a total solar eclipse, he threw rocks at the sky until he fell and dislocated his shoulder. He reflected that the event was a prophecy that he would be "tilting at the sun and always catching the fall".
Like Emerson, Alcott was always optimistic, idealistic, and individualistic in his thinking. The writer James Russell Lowell referred to Alcott in his poem "Studies for Two Heads" as "an angel with clipped wings". Even so, Emerson noted that Alcott's brilliant conversational ability did not translate into good writing. "When he sits down to write," Emerson wrote, "all his genius leaves him; he gives you the shells and throws away the kernel of his thought." His "Orphic Sayings", published in "The Dial", became notorious, ridiculed as dense, pretentious, and meaningless. In New York, for example, "The Knickerbocker" published a parody titled "Gastric Sayings" in November 1840. A writer for the "Boston Post" referred to Alcott's "Orphic Sayings" as "a train of fifteen railroad cars with one passenger".
Modern critics often fault Alcott for not being able to financially support his family. Alcott himself worried about his own prospects as a young man, once writing to his mother that he was "still at my old trade—hoping." Alcott held his principles above his well-being. Shortly before his marriage, for example, his future father-in-law Colonel Joseph May helped him find a job teaching at a school in Boston run by the Society of Free Enquirers, followers of Robert Owen, for a lucrative $1,000 to $1,200 annual salary. He refused it because he did not agree with their beliefs, writing, "I shall have nothing to do with them."
From the other perspective, Alcott's unique teaching ideas created an environment which produced two famous daughters in different fields in a time when women were not commonly encouraged to have independent careers.
Arachnophobia
Arachnophobia is an intense and unreasonable fear of spiders and other arachnids such as scorpions.
Treatment is typically by exposure therapy, where the person is presented with pictures of spiders or the spiders themselves.
People with arachnophobia tend to feel uneasy in any area they believe could harbor spiders or that has visible signs of their presence, such as webs. If arachnophobes see a spider, they may not enter the general vicinity until they have overcome the panic attack that is often associated with their phobia. Some people run away, scream, cry, have emotional outbursts, experience trouble breathing, sweat, have increased heart rates, or even faint when they come in contact with an area near spiders or their webs. In some extreme cases, even a picture or a realistic drawing of a spider can trigger intense fear.
Arachnophobia may be an exaggerated form of an instinctive response that helped early humans to survive, or a cultural phenomenon that is most common in predominantly European societies.
An evolutionary reason for the phobia remains unresolved. One view, especially held in evolutionary psychology, is that the presence of venomous spiders led to the evolution of a fear of spiders, or made acquisition of a fear of spiders especially easy. Like all traits, there is variability in the intensity of fear of spiders, and those with more intense fears are classified as phobic. Being relatively small, spiders do not fit the usual criterion for a threat in the animal kingdom where size is a factor, but they can have medically significant venom. However, a phobia is by definition an irrational fear, out of proportion to the actual danger posed.
By ensuring that their surroundings were free from spiders, arachnophobes would have had a reduced risk of being bitten in ancestral environments, giving them a slight advantage over non-arachnophobes in terms of survival. However, having a disproportionate fear of spiders in comparison to other, potentially dangerous creatures present during "Homo sapiens"' environment of evolutionary adaptiveness may have had drawbacks.
A 2001 study found that people could detect images of spiders among images of flowers and mushrooms more quickly than they could detect images of flowers or mushrooms among images of spiders. The researchers suggested that this was because fast response to spiders was more relevant to human evolution.
The alternative view is that the dangers, such as from spiders, are overrated and not sufficient to influence evolution. Instead, inheriting phobias would have restrictive and debilitating effects upon survival, rather than being an aid. In some communities, such as in Papua New Guinea and Cambodia, spiders are included in traditional foods. This suggests arachnophobia may be a cultural, rather than genetic, trait.
The fear of spiders can be treated by any of the general techniques suggested for specific phobias. The first line of treatment is systematic desensitization – also known as exposure therapy. Before engaging in systematic desensitization, it is common to train the individual with arachnophobia in relaxation techniques, which will help keep the patient calm. Systematic desensitization can be done in vivo (with live spiders) or by getting the individual to imagine situations involving spiders, then modelling interaction with spiders for the person affected and eventually interacting with real spiders. This technique can be effective in just one session, although it generally takes more time.
Recent advances in technology have enabled the use of virtual or augmented reality spiders for use in therapy. These techniques have proven to be effective. It has been suggested that exposure to short clips from the "Spider-Man" movies may help to reduce an individual's arachnophobia.
Arachnophobia affects 3.5 to 6.1 percent of the global population. | https://en.wikipedia.org/wiki?curid=1386 |
Ahab
Ahab ("Achaáb") was the seventh king of Israel since Jeroboam, the son and successor of King Omri and the husband of Jezebel of Sidon, according to the Hebrew Bible. The Hebrew Bible presents Ahab as a wicked king, particularly for condoning Jezebel's influence on religious policies and his principal role behind Naboth's arbitrary execution.
The existence of Ahab is historically supported outside the Bible. Shalmaneser III of Assyria documented in 853 BC that he defeated an alliance of a dozen kings in the Battle of Qarqar; one of these was Ahab. He is also mentioned on the inscriptions of the Mesha Stele.
Ahab became king of Israel in the thirty-eighth year of King Asa of Judah, and reigned for twenty-two years, according to 1 Kings. William F. Albright dated his reign to 869–850 BCE, while Edwin R. Thiele offered the dates 874–853 BCE. Most recently, Michael Coogan has dated Ahab's reign to 871–852 BCE.
King Omri, Ahab's father and founder of the short-lived Omri dynasty, seems to have been a successful military leader; he is reported in the text of the Moabite Mesha Stele to have "oppressed Moab for many days." During Ahab's reign, Moab, which had been conquered by his father, remained tributary. Ahab was allied by marriage with Jehoshaphat, who was king of Judah. Only with Aram-Damascus is he believed to have had strained relations.
Ahab married Jezebel, the daughter of the King of Tyre. 1 Kings 16–22 tells the story of Ahab and Jezebel, and indicates that Jezebel was a dominant influence on Ahab, persuading him to abandon Yahweh and establish the religion of Baal in Israel. Ahab lived in Samaria, the royal capital established by Omri, and built a temple and altar to Baal there. These actions were said to have led to severe consequences for Israel, including a drought that lasted for several years and Jezebel's fanatical religious persecution of the prophets of Yahweh, which Ahab condoned. His reputation was so negative that in 1 Kings 16:34, the author attributed to his reign the deaths of Abiram and Segub, the sons of Hiel of Bethel, caused by their father's invocation of Joshua's curse several centuries earlier. Ahab was succeeded by Ahaziah and Jehoram, who reigned over Israel until Jehu's revolt of 842 BCE.
The Battle of Qarqar is mentioned in extra-biblical records, and was perhaps at Apamea, where Shalmaneser III of Assyria fought a great confederation of princes from Cilicia, Northern Syria, Israel, Ammon, and the tribes of the Syrian desert (853 BCE), including Ahab the Israelite ("A-ha-ab-bu matSir-'a-la-a-a") and Hadadezer ("Adad-'idri").
Ahab's contribution was estimated at 2,000 chariots and 10,000 men. In reality, however, the number of chariots in Ahab's forces was probably closer to a number in the hundreds (based upon archaeological excavations of the area and the foundations of stables that have been found). If, however, the numbers are referring to allies it could possibly include forces from Tyre, Judah, Edom, and Moab. The Assyrian king claimed a victory, but his immediate return and subsequent expeditions in 849 BC and 846 BC against a similar but unspecified coalition seem to show that he met with no lasting success. However, according to the Hebrew Bible, Ahab with 7,000 troops had previously overthrown Ben-hadad and his thirty-two kings, who had come to lay siege to Samaria, and in the following year obtained a decisive victory over him at Aphek, probably in the Sharon plain at Antipatris (1 Kings 20). A treaty was made whereby Ben-hadad restored the cities which his father had taken from Ahab's father, and trading facilities between Damascus and Samaria were granted.
Jezreel has been identified as Ahab's fortified chariot and cavalry base.
In the Biblical text, Ahab has five important encounters with prophets:
Three years later, war broke out east of the Jordan River, and Ahab with Jehoshaphat of Judah went to recover Ramoth-Gilead from the Arameans. During this battle, Ahab disguised himself, but he was mortally wounded by an unaimed arrow (1 Kings 22). The Hebrew Bible says that dogs licked his blood, according to the prophecy of Elijah. But the Septuagint adds that pigs also licked his blood, symbolically making him unclean to the Israelites, who abstained from pork. Ahab was succeeded by his sons, Ahaziah and Jehoram.
Jezebel's death, however, was more dramatic than Ahab's. As recorded in 2 Kings 9:30-34, Jezebel was confronted by Jehu who had her servants throw her out the window, causing her death.
1 Kings 16:29 through 22:40 contains the narrative of Ahab's reign. The narrative gives his reign more emphasis than those of the previous kings because of his blatant trivialization of the "sins of Jeroboam", which had plagued the previous kings of Israel, and because of his marriage to a pagan princess, the nationwide institution of Baal worship, the persecution of Yahweh's prophets, and Naboth's shocking murder. These offenses and atrocities stirred up populist resentment from figures such as Elijah and Micaiah. Indeed, he is referred to by the author of Kings as being "more evil than all the kings before him" (1 Kings 16:30).
Nonetheless, there were achievements that the author took note of, including his ability to fortify numerous Israelite cities and build an ivory palace (1 Kings 22:39). Adherents of the Yahwist religion found their principal champion in Elijah. His denunciation of the royal dynasty of Israel and his emphatic insistence on the worship of Yahweh and Yahweh alone, illustrated by the contest between Yahweh and Baal on Mount Carmel (1 Kings 18), form the keynote to a period which culminated in the accession of Jehu, an event in which Elijah's chosen disciple Elisha was the leading figure and the Omride Dynasty was brutally defeated.
Ahab is one of the three or four wicked kings of Israel singled out by tradition as being excluded from the future world of bliss (Sanh. x. 2; Tosef., Sanh. xii. 11). Midrash Konen places him in the fifth department of Gehenna, as having the heathen under his charge. Though held up as a warning to sinners, Ahab is also described as displaying noble traits of character (Sanh. 102b; Yer. Sanh. xi. 29b). Talmudic literature represents him as an enthusiastic idolater who left no hilltop in Palestine without an idol before which he bowed, and to which he or his wife, Jezebel, brought his weight in gold as a daily offering. So defiant in his apostasy was he that he had inscribed on all the doors of the city of Samaria the words, "Ahab hath abjured the living God of Israel." Nevertheless, he paid great respect to the representatives of learning, "to the Torah given in twenty-two letters," for which reason he was permitted to reign for twenty-two successive years. He generously supported the students of the Law out of his royal treasury, in consequence of which half his sins were forgiven him. A type of worldliness (Ber. 61b), the Crœsus of his time, he was, according to ancient tradition (Meg. 11a), ruler over the whole world. Two hundred and thirty subject kings had initiated a rebellion; but he brought their sons as hostages to Samaria and Jerusalem. All the latter turned from idolaters into worshipers of the God of Israel (Tanna debe Eliyahu, i. 9). Each of his seventy sons had an ivory palace built for him. Since, however, it was Ahab's idolatrous wife who was the chief instigator of his crimes (B. M. 59a), some of the ancient teachers gave him the same position in the world to come as a sinner who had repented (Sanh. 104b, Num. R. xiv). Like Manasseh, he was made a type of repentance (I Kings, xxi. 29). Accordingly, he is described as undergoing fasts and penances for a long time; praying thrice a day to God for forgiveness, until his prayer was heard (PirḲe R. El. xliii).
Hence, the name of Ahab in the list of wicked kings was changed to Ahaz (Yer. Sanh. x. 28b; Tanna debe Eliyahu Rabba ix, Zuṭṭa xxiv.).
Pseudo-Epiphanius ("Opera," ii. 245) makes Micah an Ephraimite. Confounding him with Micaiah, son of Imlah (I Kings xxii. 8 et seq.), he states that Micah, for his inauspicious prophecy, was killed by order of Ahab through being thrown from a precipice, and was buried at Morathi (Maroth?; Mic. i. 12), near the cemetery of Enakim (Ένακεὶμ Septuagint rendering of ; ib. i. 10). According to "Gelilot Ereẓ Yisrael" (quoted in "Seder ha-Dorot," i. 118, Warsaw, 1889), Micah was buried in Chesil, a town in southern Judah (Josh. xv. 30). Naboth's soul was the lying spirit that was permitted to deceive Ahab to his death. | https://en.wikipedia.org/wiki?curid=1389 |
Dasyproctidae
Dasyproctidae is a family of large South American rodents, comprising the agoutis and acouchis. Their fur is a reddish or dark colour above, with a paler underside. They are herbivorous, often feeding on ripe fruit that falls from trees. They live in burrows, and, like squirrels, will bury some of their food for later use.
Dasyproctids are found in Central and South America, in the tropical parts of the New World. The fossil record of this family can be traced back to the Late Oligocene (Deseadan in the SALMA classification).
As with all rodents, members of this family have incisors, premolars, and molars, but no canines. The cheek teeth are hypsodont and flat-crowned.
Fossil taxa follow McKenna and Bell, with modifications following Kramarz.
The pacas (genus "Cuniculus") are placed by some authorities in Dasyproctidae, but molecular studies have demonstrated that the pacas and the dasyproctids do not form a monophyletic group. | https://en.wikipedia.org/wiki?curid=1392 |
Algol
Algol, designated Beta Persei (β Persei, abbreviated Beta Per, β Per), known colloquially as the Demon Star, is a bright multiple star in the constellation of Perseus and one of the first non-nova variable stars to be discovered.
Algol is a three-star system, consisting of Beta Persei Aa1, Aa2, and Ab – in which the hot luminous primary β Persei Aa1 and the larger, but cooler and fainter, β Persei Aa2 regularly pass in front of each other, causing eclipses. Thus Algol's magnitude is usually near-constant at 2.1, but regularly dips to 3.4 every 2.86 days during the roughly 10-hour-long partial eclipses. The secondary eclipse when the brighter primary star occults the fainter secondary is very shallow and can only be detected photoelectrically.
Algol gives its name to its class of eclipsing variable, known as Algol variables.
An Ancient Egyptian Calendar of Lucky and Unlucky Days composed some 3,200 years ago is claimed to be the oldest historical document of the discovery of Algol.
The association of Algol with a demon-like creature (Gorgon in the Greek tradition, ghoul in the Arabic tradition) suggests that its variability was known long before the 17th century, but there is still no indisputable evidence for this. The Arabic astronomer al-Sufi said nothing about any variability of the star in his "Book of Fixed Stars" published c.964.
The variability of Algol was noted in 1667 by Italian astronomer Geminiano Montanari, but the periodic nature of its variations in brightness was not recognized until more than a century later, when the British amateur astronomer John Goodricke also proposed a mechanism for the star's variability. In May 1783, he presented his findings to the Royal Society, suggesting that the periodic variability was caused by a dark body passing in front of the star (or else that the star itself has a darker region that is periodically turned toward the Earth). For his report he was awarded the Copley Medal.
In 1881, the Harvard astronomer Edward Charles Pickering presented evidence that Algol was actually an eclipsing binary. This was confirmed a few years later, in 1889, when the Potsdam astronomer Hermann Carl Vogel found periodic Doppler shifts in the spectrum of Algol, inferring variations in the radial velocity of this binary system. Thus Algol became one of the first known spectroscopic binaries. Joel Stebbins at the University of Illinois Observatory used an early selenium cell photometer to produce the first-ever photoelectric study of a variable star. The light curve revealed the second minimum and the reflection effect between the two stars.
Some difficulties in explaining the observed spectroscopic features led to the conjecture that a third star may be present in the system; four decades later this conjecture was found to be correct.
β Persei Aa2 eclipses β Persei Aa1 every 2.867321 days (2 days 20 hours 49 min); adding that interval to any eclipse date and time gives the succeeding eclipses. For example, an eclipse on Jan 2 at 20h UT yields consecutive eclipse times of approximately Jan 5, 17h, then Jan 8, 14h, then Jan 11, 10h, and so on (all times approximate, in UT).
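This projection can be sketched in a few lines of Python. The starting epoch below is the illustrative Jan 2, 20h eclipse; the year is an arbitrary assumption, and each projected minimum is rounded to the nearest hour:

```python
from datetime import datetime, timedelta

# Orbital period of the eclipsing pair Aa1/Aa2: 2.867321 days.
PERIOD = timedelta(days=2.867321)

def round_to_hour(t):
    """Round a datetime to the nearest whole hour."""
    return (t + timedelta(minutes=30)).replace(minute=0, second=0, microsecond=0)

def project_eclipses(first_minimum, count):
    """Project the next `count` eclipse minima by repeatedly adding the period."""
    minima = []
    t = first_minimum
    for _ in range(count):
        t += PERIOD
        minima.append(round_to_hour(t))
    return minima

# Illustrative starting eclipse: Jan 2 at 20:00 UT (year chosen arbitrarily).
start = datetime(2024, 1, 2, 20, 0)
for t in project_eclipses(start, 3):
    print(t.strftime("%b %d, %Hh UT"))
```

Note that this is simple linear extrapolation; real ephemerides also account for the slow period changes discussed later in the article.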
Algol is a multiple-star system with three confirmed and two suspected stellar components. From the point of view of the Earth, Algol Aa1 and Algol Aa2 form an eclipsing binary because their orbital plane contains the line of sight to the Earth. The eclipsing binary pair is separated by only 0.062 astronomical units (au) from each other, whereas the third star in the system (Algol Ab) is at an average distance of 2.69 au from the pair, and the mutual orbital period of the trio is 681 Earth days. The total mass of the system is about 5.8 solar masses, and the mass ratios of Aa1, Aa2, and Ab are about 4.5 to 1 to 2.
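As a small arithmetic sketch, the quoted total mass and mass ratios can be converted into approximate individual component masses:

```python
# Total system mass (solar masses) and the quoted mass ratios Aa1 : Aa2 : Ab.
TOTAL_MASS = 5.8
RATIOS = {"Aa1": 4.5, "Aa2": 1.0, "Ab": 2.0}

# Each component receives its proportional share of the total mass.
unit = TOTAL_MASS / sum(RATIOS.values())
masses = {name: ratio * unit for name, ratio in RATIOS.items()}

for name, m in masses.items():
    print(f"{name}: {m:.2f} solar masses")
# → Aa1: 3.48, Aa2: 0.77, Ab: 1.55 solar masses
```

These are back-of-the-envelope figures derived from the article's rounded values, not precise measurements.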
The three components of the bright triple star used to be, and still sometimes are, referred to as β Per A, B, and C. The Washington Double Star Catalog lists them as Aa1, Aa2, and Ab, with two very faint stars B and C about one arcmin distant. A further five faint stars are also listed as companions.
Studies of Algol led to the Algol paradox in the theory of stellar evolution: although components of a binary star form at the same time, and massive stars evolve much faster than the less massive stars, the more massive component Algol A is still in the main sequence, but the less massive Algol B is a subgiant star at a later evolutionary stage. The paradox can be solved by mass transfer: when the more massive star became a subgiant, it filled its Roche lobe, and most of the mass was transferred to the other star, which is still in the main sequence. In some binaries similar to Algol, a gas flow can be seen. The gas flow between the primary and secondary stars in Algol has been imaged using Doppler Tomography.
This system also exhibits X-ray and radio wave flares. The X-ray flares are thought to be caused by the magnetic fields of the A and B components interacting with the mass transfer. The radio-wave flares might be created by magnetic cycles similar to those of sunspots, but because the magnetic fields of these stars are up to ten times stronger than the field of the Sun, these radio flares are more powerful and more persistent. The secondary component was identified as the radio emitting source in Algol using very-long-baseline interferometry by Lestrade and co-authors.
Magnetic activity cycles in the chromospherically active secondary component induce changes in its radius of gyration that have been linked to recurrent fractional orbital period variations on the order of 10⁻⁵ via the Applegate mechanism. Mass transfer between the components is small in the Algol system but could be a significant source of period change in other Algol-type binaries.
Algol is about 92.8 light-years from the Sun, but about 7.3 million years ago it passed within 9.8 light-years of the Solar System and its apparent magnitude was about −2.5, which is considerably brighter than the star Sirius is today. Because the total mass of the Algol system is about 5.8 solar masses, at the closest approach this might have given enough gravity to perturb the Oort cloud of the Solar System somewhat and hence increase the number of comets entering the inner Solar System. However, the actual increase in net cometary collisions is thought to have been quite small.
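The quoted brightness at closest approach can be roughly cross-checked with the distance-modulus relation. This is a simplified sketch: it treats the system's combined out-of-eclipse magnitude as fixed and ignores any changes in the stars themselves over 7.3 million years.

```python
import math

m_now = 2.1    # present combined apparent magnitude outside eclipse
d_now = 92.8   # present distance in light-years
d_past = 9.8   # distance at closest approach, ~7.3 million years ago

# Apparent brightness falls off with the square of distance, so apparent
# magnitudes differ by 5 * log10(d2 / d1).
m_past = m_now + 5 * math.log10(d_past / d_now)
print(f"apparent magnitude at closest approach ~ {m_past:.1f}")
```

This yields roughly −2.8, consistent with the article's quoted value of about −2.5 given the approximations involved.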
"Beta Persei" is the star's Bayer designation. The name "Algol" derives from the Arabic "raʾs al-ghūl", meaning head ("raʾs") of the ogre ("al-ghūl") (see "ghoul"). The English name Demon Star was taken from the Arabic name. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included "Algol" for this star. It is so entered on the IAU Catalog of Star Names.
In Hebrew folklore, Algol was called "Rōsh ha Sāṭān" or "Satan's Head", as stated by Edmund Chilmead, who called it "Divels head" or "Rosch hassatan". A Latin name for Algol from the 16th century was "Caput Larvae" or "the Spectre's Head". Hipparchus and Pliny made this a separate, though connected, constellation.
In Chinese, the name meaning "Mausoleum" refers to an asterism consisting of β Persei, 9 Persei, τ Persei, ι Persei, κ Persei, ρ Persei, 16 Persei and 12 Persei. Consequently, the Chinese name for β Persei itself is "The Fifth Star of Mausoleum". According to R.H. Allen the star bore the grim name of "Tseih She", meaning "Piled up Corpses", but this appears to be a misidentification, and "Dié Shī" is correctly π Persei, which is inside the Mausoleum.
Historically, the star has received a strong association with bloody violence across a wide variety of cultures. In the "Tetrabiblos", the 2nd-century astrological text of the Alexandrian astronomer Ptolemy, Algol is referred to as "the Gorgon of Perseus" and associated with death by decapitation: a theme which mirrors the myth of the hero Perseus's victory over the snake-haired Gorgon Medusa. Astrologically, Algol is considered one of the unluckiest stars in the sky, and was listed as one of the 15 Behenian stars. | https://en.wikipedia.org/wiki?curid=1394 |
Amazing Grace
"Amazing Grace" is a Christian hymn published in 1779, with words written in 1772 by the English poet and Anglican clergyman John Newton (1725–1807).
Newton wrote the words from personal experience. He grew up without any particular religious conviction, but his life's path was formed by a variety of twists and coincidences that were often put into motion by others' reactions to what they took as his recalcitrant insubordination.
He was pressed (conscripted) into service in the Royal Navy. After leaving the service, he became involved in the Atlantic slave trade. In 1748, a violent storm battered his vessel off the coast of County Donegal, Ireland, so severely that he called out to God for mercy. This moment marked his spiritual conversion but he continued slave trading until 1754 or 1755, when he ended his seafaring altogether. He began studying Christian theology.
Ordained in the Church of England in 1764, Newton became curate of Olney, Buckinghamshire, where he began to write hymns with poet William Cowper. "Amazing Grace" was written to illustrate a sermon on New Year's Day of 1773. It is unknown if there was any music accompanying the verses; it may have been chanted by the congregation. It debuted in print in 1779 in Newton and Cowper's "Olney Hymns" but settled into relative obscurity in England. In the United States, "Amazing Grace" became a popular song used by Baptist and Methodist preachers as part of their evangelizing, especially in the South, during the Second Great Awakening of the early 19th century. It has been associated with more than 20 melodies. In 1835, American composer William Walker set it to the tune known as "New Britain" in a shape note format. This is the version most frequently sung today.
With the message that forgiveness and redemption are possible regardless of sins committed and that the soul can be delivered from despair through the mercy of God, "Amazing Grace" is one of the most recognisable songs in the English-speaking world. Author Gilbert Chase writes that it is "without a doubt the most famous of all the folk hymns". Jonathan Aitken, a Newton biographer, estimates that the song is performed about 10 million times annually. It has had particular influence in folk music, and has become an emblematic black spiritual. Its universal message has been a significant factor in its crossover into secular music. "Amazing Grace" became newly popular during a revival of folk music in the US during the 1960s, and it has been recorded thousands of times during and since the 20th century.
According to the "Dictionary of American Hymnology", "Amazing Grace" is John Newton's spiritual autobiography in verse.
In 1725, Newton was born in Wapping, a district in London near the Thames. His father was a shipping merchant who was brought up as a Catholic but had Protestant sympathies, and his mother was a devout Independent, unaffiliated with the Anglican Church. She had intended Newton to become a clergyman, but she died of tuberculosis when he was six years old. For the next few years, while his father was at sea, Newton was raised by his emotionally distant stepmother. He was also sent to boarding school, where he was mistreated. At the age of eleven, he joined his father on a ship as an apprentice; his seagoing career would be marked by headstrong disobedience.
As a youth, Newton began a pattern of coming very close to death, examining his relationship with God, then relapsing into bad habits. As a sailor, he denounced his faith after being influenced by a shipmate who discussed with him "Characteristicks of Men, Manners, Opinions, Times", a book by the Third Earl of Shaftesbury. In a series of letters Newton later wrote, "Like an unwary sailor who quits his port just before a rising storm, I renounced the hopes and comforts of the Gospel at the very time when every other comfort was about to fail me." His disobedience caused him to be pressed into the Royal Navy, and he took advantage of opportunities to overstay his leave.
He deserted the navy to visit Mary "Polly" Catlett, a family friend with whom he had fallen in love. After enduring humiliation for deserting, he was traded as crew to a slave ship.
He began a career in slave trading.
Newton often openly mocked the captain by creating obscene poems and songs about him, which became so popular that the crew began to join in. His disagreements with several colleagues resulted in his being starved almost to death, imprisoned while at sea, and chained like the slaves they carried. He was himself enslaved and forced to work on a plantation in the British colony Sierra Leone near the Sherbro River. After several months he came to think of Sierra Leone as his home, but his father intervened after Newton sent him a letter describing his circumstances, and crew from another ship happened to find him. Newton claimed the only reason he left the colony was because of Polly.
While aboard the ship "Greyhound", Newton gained notoriety as being one of the most profane men the captain had ever met. In a culture where sailors habitually swore, Newton was admonished several times for not only using the worst words the captain had ever heard, but creating new ones to exceed the limits of verbal debauchery. In March 1748, while the "Greyhound" was in the North Atlantic, a violent storm came upon the ship that was so rough it swept overboard a crew member who was standing where Newton had been moments before. After hours of the crew emptying water from the ship and expecting to be capsized, Newton and another mate tied themselves to the ship's pump to keep from being washed overboard, working for several hours. After proposing the measure to the captain, Newton had turned and said, "If this will not do, then Lord have mercy upon us!" Newton rested briefly before returning to the deck to steer for the next eleven hours. During his time at the wheel, he pondered his divine challenge.
About two weeks later, the battered ship and starving crew landed in Lough Swilly, Ireland. For several weeks before the storm, Newton had been reading "The Christian's Pattern", a summary of the 15th-century "The Imitation of Christ" by Thomas à Kempis. The memory of his own "Lord have mercy upon us!" uttered during a moment of desperation in the storm did not leave him; he began to ask if he was worthy of God's mercy or in any way redeemable. Not only had he neglected his faith but directly opposed it, mocking others who showed theirs, deriding and denouncing God as a myth. He came to believe that God had sent him a profound message and had begun to work through him.
Newton's conversion was not immediate, but he contacted Polly's family and announced his intention to marry her. Her parents were hesitant as he was known to be unreliable and impetuous. They knew he was profane too, but allowed him to write to Polly, and for her sake he began to submit to authority. He sought a place on a slave ship bound for Africa, and Newton and his crewmates participated in most of the same activities he had written about before; the only immorality from which he was able to free himself was profanity. After a severe illness his resolve was renewed, yet he retained the same attitude towards slavery as was held by his contemporaries. Newton continued in the slave trade through several voyages where he sailed the coasts of Africa, now as a captain, and procured slaves being offered for sale in larger ports, transporting them to North America.
In between voyages, he married Polly in 1750, and he found it more difficult to leave her at the beginning of each trip. After three shipping voyages in the slave trade, Newton was promised a position as ship's captain with cargo unrelated to slavery. But at the age of thirty, he collapsed and never sailed again.
Working as a customs agent in Liverpool starting in 1756, Newton began to teach himself Latin, Greek, and theology. He and Polly immersed themselves in the church community, and Newton's passion was so impressive that his friends suggested he become a priest in the Church of England. He was turned down by John Gilbert, Archbishop of York, in 1758, ostensibly for having no university degree, although the more likely reasons were his leanings toward evangelism and tendency to socialise with Methodists. Newton continued his devotions, and after being encouraged by a friend, he wrote about his experiences in the slave trade and his conversion. William Legge, 2nd Earl of Dartmouth, impressed with his story, sponsored Newton for ordination by John Green, Bishop of Lincoln, and offered him the curacy of Olney, Buckinghamshire, in 1764.
Olney was a village of about 2,500 residents whose main industry was making lace by hand. The people were mostly illiterate and many of them were poor. Newton's preaching was unique in that he shared many of his own experiences from the pulpit; many clergy preached from a distance, not admitting any intimacy with temptation or sin. He was involved in his parishioners' lives and was much loved, although his writing and delivery were sometimes unpolished. But his devotion and conviction were apparent and forceful, and he often said his mission was to "break a hard heart and to heal a broken heart". He struck up a friendship with William Cowper, a gifted writer who had failed at a career in law and suffered bouts of insanity, attempting suicide several times. Cowper enjoyed Olney and Newton's company; he was also new to Olney and had gone through a spiritual conversion similar to Newton's. Together, their effect on the local congregation was impressive. In 1768, they found it necessary to start a weekly prayer meeting to meet the needs of an increasing number of parishioners. They also began writing lessons for children.
Partly from Cowper's literary influence, and partly because learned vicars were expected to write verses, Newton began to try his hand at hymns, which had become popular through plain language that common people could understand. Several prolific hymn writers were at their most productive in the 18th century, including Isaac Watts, whose hymns Newton had grown up hearing, and Charles Wesley, with whom Newton was familiar. Wesley's brother John, the eventual founder of the Methodist Church, had encouraged Newton to go into the clergy. Watts was a pioneer in English hymn writing, basing his work on the Psalms. The most prevalent hymns by Watts and others were written in the common meter of 8.6.8.6: the first line has eight syllables and the second has six.
Newton and Cowper attempted to present a poem or hymn for each prayer meeting. The lyrics to "Amazing Grace" were written in late 1772 and probably used in a prayer meeting for the first time on 1 January 1773. A collection of the poems Newton and Cowper had written for use in services at Olney was bound and published anonymously in 1779 under the title "Olney Hymns". Newton contributed 280 of the 348 texts in "Olney Hymns"; "1 Chronicles 17:16–17, Faith's Review and Expectation" was the title of the poem with the first line "Amazing grace! (how sweet the sound)".
The general impact of "Olney Hymns" was immediate and it became a widely popular tool for evangelicals in Britain for many years. Scholars appreciated Cowper's poetry somewhat more than Newton's plaintive and plain language, which expressed his forceful personality. The most prevalent themes in the verses written by Newton in "Olney Hymns" are faith in salvation, wonder at God's grace, his love for Jesus, and his cheerful exclamations of the joy he found in his faith. As a reflection of Newton's connection to his parishioners, he wrote many of the hymns in first person, admitting his own experience with sin. Bruce Hindmarsh in "Sing Them Over Again To Me: Hymns and Hymnbooks in America" considers "Amazing Grace" an excellent example of Newton's testimonial style afforded by the use of this perspective. Several of Newton's hymns were recognised as great work ("Amazing Grace" was not among them), while others seem to have been included to fill in when Cowper was unable to write. Jonathan Aitken calls Newton, specifically referring to "Amazing Grace", an "unashamedly middlebrow lyricist writing for a lowbrow congregation", noting that only twenty-one of the nearly 150 words used in all six verses have more than one syllable.
William Phipps in the "Anglican Theological Review" and author James Basker have interpreted the first stanza of "Amazing Grace" as evidence of Newton's realisation that his participation in the slave trade was his wretchedness, perhaps representing a wider common understanding of Newton's motivations. Newton joined forces with a young man named William Wilberforce, the British Member of Parliament who led the Parliamentarian campaign to abolish the slave trade in the British Empire, culminating in the Slave Trade Act 1807. But Newton did not become an ardent and outspoken abolitionist until after he left Olney in the 1780s; he is not known to have connected writing the hymn known as "Amazing Grace" to anti-slavery sentiments.
The lyrics in "Olney Hymns" were arranged by their association to the Biblical verses that would be used by Newton and Cowper in their prayer meetings, and did not address any political objective. For Newton, the beginning of the year was a time to reflect on one's spiritual progress. At the same time he completed a diary, since lost, that he had begun 17 years earlier, two years after he quit sailing. The last entry of 1772 was a recounting of how much he had changed since then.
The title ascribed to the hymn, "1 Chronicles 17:16–17", refers to David's reaction to the prophet Nathan telling him that God intends to maintain his family line forever. Some Christians interpret this as a prediction that Jesus Christ, as a descendant of David, was promised by God as the salvation for all people. Newton's sermon on that January day in 1773 focused on the necessity to express one's gratitude for God's guidance, that God is involved in the daily lives of Christians though they may not be aware of it, and that patience for deliverance from the daily trials of life is warranted when the glories of eternity await. Newton saw himself as a sinner like David who had been chosen, perhaps undeservedly, and was humbled by it. According to Newton, unconverted sinners were "blinded by the god of this world" until "mercy came to us not only undeserved but undesired ... our hearts endeavored to shut him out till he overcame us by the power of his grace."
The New Testament served as the basis for many of the lyrics of "Amazing Grace". The first verse, for example, can be traced to the story of the Prodigal Son. In the Gospel of Luke the father says, "For this son of mine was dead and is alive again; he was lost, and is found". The story of Jesus healing a blind man who tells the Pharisees that he can now see is told in the Gospel of John. Newton used the words "I was blind but now I see" and declared "Oh to grace how great a debtor!" in his letters and diary entries as early as 1752. The effect of the lyrical arrangement, according to Bruce Hindmarsh, allows an instant release of energy in the exclamation "Amazing grace!", to be followed by a qualifying reply in "how sweet the sound". In "An Annotated Anthology of Hymns", Newton's use of an exclamation at the beginning of his verse is called "crude but effective" in an overall composition that "suggest(s) a forceful, if simple, statement of faith". Grace is recalled three times in the following verse, culminating in Newton's most personal story of his conversion, underscoring the use of his personal testimony with his parishioners.
The sermon preached by Newton was his last of those that William Cowper heard in Olney, since Cowper's mental instability returned shortly thereafter. Steve Turner, author of "Amazing Grace: The Story of America's Most Beloved Song", suggests Newton may have had his friend in mind, employing the themes of assurance and deliverance from despair for Cowper's benefit.
Although it had its roots in England, "Amazing Grace" became an integral part of the Christian tapestry in the United States. More than 60 of Newton and Cowper's hymns were republished in other British hymnals and magazines, but "Amazing Grace" was not, appearing only once in a 1780 hymnal sponsored by the Countess of Huntingdon. Scholar John Julian commented in his 1892 "A Dictionary of Hymnology" that outside of the United States, the song was unknown and it was "far from being a good example of Newton's finest work". Between 1789 and 1799, four variations of Newton's hymn were published in the US in Baptist, Dutch Reformed, and Congregationalist hymnodies; by 1830 Presbyterians and Methodists also included Newton's verses in their hymnals.
The greatest influences in the 19th century that propelled "Amazing Grace" to spread across the US and become a staple of religious services in many denominations and regions were the Second Great Awakening and the development of shape note singing communities. A tremendous religious movement swept the US in the early 19th century, marked by the growth and popularity of churches and religious revivals that got their start on the frontier in Kentucky and Tennessee. Unprecedented gatherings of thousands of people attended camp meetings where they came to experience salvation; preaching was fiery and focused on saving the sinner from temptation and backsliding. Religion was stripped of ornament and ceremony, and made as plain and simple as possible; sermons and songs often used repetition to get across to a rural population of poor and mostly uneducated people the necessity of turning away from sin. Witnessing and testifying became an integral component to these meetings, where a congregation member or stranger would rise and recount his turn from a sinful life to one of piety and peace. "Amazing Grace" was one of many hymns that punctuated fervent sermons, although the contemporary style used a refrain, borrowed from other hymns, that employed simplicity and repetition such as:
Simultaneously, an unrelated movement of communal singing was established throughout the South and Western states. A format of teaching music to illiterate people appeared in 1800. It used four sounds to symbolise the basic scale: fa-sol-la-fa-sol-la-mi-fa. Each sound was accompanied by a specifically shaped note and thus became known as shape note singing. The method was simple to learn and teach, so schools were established throughout the South and West. Communities would come together for an entire day of singing in a large building where they sat in four distinct areas surrounding an open space, one member directing the group as a whole. Other groups would sing outside, on benches set up in a square. Preachers used shape note hymns to teach people on the frontier and to raise the emotion of camp meetings. Most of the music was Christian, but the purpose of communal singing was not primarily spiritual. Communities either could not afford music accompaniment or rejected it out of a Calvinistic sense of simplicity, so the songs were sung a cappella.
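The four-sound scheme can be sketched as a small lookup. The syllable sequence is taken from the text above; the note-head shapes paired with each syllable here follow the later four-shape convention used in tunebooks such as "The Sacred Harp" and are an assumption of the sketch, not stated in the source:

```python
# Four-shape ("fasola") solmization: one octave of the major scale is sung
# with only four syllables, repeating as fa-sol-la-fa-sol-la-mi-fa.
FASOLA_SCALE = ["fa", "sol", "la", "fa", "sol", "la", "mi", "fa"]

# Note-head shapes conventionally paired with each syllable (assumed here
# from the four-shape tradition; the source gives only the syllables).
SHAPES = {"fa": "triangle", "sol": "oval", "la": "rectangle", "mi": "diamond"}

def shape_for_degree(degree):
    """Return (syllable, shape) for a major-scale degree 1 through 8."""
    syllable = FASOLA_SCALE[degree - 1]
    return syllable, SHAPES[syllable]

for degree in range(1, 9):
    print(degree, *shape_for_degree(degree))
```

The design point is that a singer only ever has to learn four syllable-shape pairs to sight-read any tune, which is why the method was simple enough to teach across the frontier.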
It is unknown what music, if any, accompanied the verses written by John Newton when the hymn was originally used in Olney. Contemporary hymnbooks did not contain music and were simply small books of religious poetry. The first known instance of Newton's lines joined to music was in "A Companion to the Countess of Huntingdon's Hymns" (London, 1808), where it is set to the tune "Hephzibah" by English composer John Husband. Common meter hymns were interchangeable with a variety of tunes; more than twenty musical settings of "Amazing Grace" circulated with varying popularity until 1835, when American composer William Walker assigned Newton's words to a traditional song named "New Britain". This was an amalgamation of two melodies ("Gallaher" and "St. Mary"), first published in the "Columbian Harmony" by Charles H. Spilman and Benjamin Shaw (Cincinnati, 1829). Spilman and Shaw, both students at Kentucky's Centre College, compiled their tunebook both for public worship and revivals, to satisfy "the wants of the Church in her triumphal march". Most of the tunes had been previously published, but "Gallaher" and "St. Mary" had not. As neither tune is attributed and both show elements of oral transmission, scholars can only speculate that they are possibly of British origin. A manuscript from 1828 by Lucius Chapin, a famous hymn writer of that time, contains a tune very close to "St. Mary", but that does not mean that he wrote it.
"Amazing Grace", with the words written by Newton and joined with "New Britain", the melody now most associated with it, appeared for the first time in Walker's shape note tunebook "Southern Harmony" in 1847. It was, according to author Steve Turner, a "marriage made in heaven ... The music behind 'amazing' had a sense of awe to it. The music behind 'grace' sounded graceful. There was a rise at the point of confession, as though the author was stepping out into the open and making a bold declaration, but a corresponding fall when admitting his blindness." Walker's collection was enormously popular, selling about 600,000 copies across the US at a time when the total population was just over 20 million. Another shape note tunebook named "The Sacred Harp" (1844) by Georgia residents Benjamin Franklin White and Elisha J. King became widely influential and continues to be used.
Another verse was first recorded in Harriet Beecher Stowe's immensely influential 1852 anti-slavery novel "Uncle Tom's Cabin". Three verses were emblematically sung by Tom in his hour of deepest crisis. He sings the sixth and fifth verses in that order, and Stowe included another verse, not written by Newton, that had been passed down orally in African-American communities for at least 50 years. It was one of between 50 and 70 verses of a song titled "Jerusalem, My Happy Home", which was first published in a 1790 book called "A Collection of Sacred Ballads":
"Amazing Grace" came to be an emblem of a Christian movement and a symbol of the US itself as the country was involved in a great political experiment, attempting to employ democracy as a means of government. Shape-note singing communities, with all the members sitting around an open center, each song employing a different song leader, illustrated this in practice. Simultaneously, the US began to expand westward into previously unexplored territory that was often wilderness. The "dangers, toils, and snares" of Newton's lyrics had both literal and figurative meanings for Americans. This became poignantly true during the most serious test of American cohesion in the U.S. Civil War (1861–1865). "Amazing Grace", set to "New Britain", was included in two hymnals distributed to soldiers. With death so real and imminent, religious services in the military became commonplace. The hymn was translated into other languages as well: while on the Trail of Tears, the Cherokee sang Christian hymns as a way of coping with the ongoing tragedy, and a version of the song by Samuel Worcester that had been translated into the Cherokee language became very popular.
Although "Amazing Grace" set to "New Britain" was popular, other versions existed regionally. Primitive Baptists in the Appalachian region often used "New Britain" with other hymns, and sometimes sang the words of "Amazing Grace" to other folk songs, including "In the Pines", "Pisgah", "Primrose", and "Evan", as all can be sung in the common meter, of which the majority of their repertoire consists. In the late 19th century, Newton's verses were for a time sung to a tune named "Arlington" as frequently as to "New Britain".
The evangelist Dwight Moody and musician Ira Sankey heralded another religious revival in the cities of the US and Europe, giving the song international exposure. Moody's preaching and Sankey's musical gifts were significant; their arrangements were the forerunners of gospel music, and churches all over the US were eager to acquire them. Moody and Sankey began publishing their compositions in 1875, and "Amazing Grace" appeared three times with three different melodies; they were the first to publish it under that title, as hymns were typically published using their incipits (the first line of the lyrics) or the name of the tune, such as "New Britain". Publisher Edwin Othello Excell gave the version of "Amazing Grace" set to "New Britain" immense popularity by publishing it in a series of hymnals that were used in urban churches. Excell altered some of Walker's music, making it more contemporary and European, giving "New Britain" some distance from its rural folk-music origins. Excell's version was more palatable for a growing urban middle class and was arranged for larger church choirs. Several editions featuring Newton's first three stanzas and the verse previously included by Harriet Beecher Stowe in "Uncle Tom's Cabin" were published by Excell between 1900 and 1910. His version of "Amazing Grace" became the standard form of the song in American churches.
With the advent of recorded music and radio, "Amazing Grace" began to cross over from primarily a gospel standard to secular audiences. The ability to record combined with the marketing of records to specific audiences allowed "Amazing Grace" to take on thousands of different forms in the 20th century. Where Edwin Othello Excell sought to make the singing of "Amazing Grace" uniform throughout thousands of churches, records allowed artists to improvise with the words and music specific to each audience. AllMusic lists over 1,000 recordings – including re-releases and compilations – as of 2019. Its first recording is an a cappella version from 1922 by the Sacred Harp Choir. It was included from 1926 to 1930 in Okeh Records' catalogue, which typically concentrated strongly on blues and jazz. Demand was high for black gospel recordings of the song by H. R. Tomlin and J. M. Gates. A poignant sense of nostalgia accompanied the recordings of several gospel and blues singers in the 1940s and 1950s who used the song to remember their grandparents, traditions, and family roots. It was recorded with musical accompaniment for the first time in 1930 by Fiddlin' John Carson, although to another folk hymn named "At the Cross", not to "New Britain". "Amazing Grace" is emblematic of several kinds of folk music styles, often used as the standard example to illustrate such musical techniques as lining out and call and response, that have been practised in both black and white folk music.
Mahalia Jackson's 1947 version received significant radio airplay, and as her popularity grew throughout the 1950s and 1960s, she often sang it at public events such as concerts at Carnegie Hall. Author James Basker states that the song has been employed by African Americans as the "paradigmatic Negro spiritual" because it expresses the joy felt at being delivered from slavery and worldly miseries. Anthony Heilbut, author of "The Gospel Sound", states that the "dangers, toils, and snares" of Newton's words are a "universal testimony" of the African American experience. During the civil rights movement and opposition to the Vietnam War, the song took on a political tone. Mahalia Jackson employed "Amazing Grace" for Civil Rights marchers, writing that she used it "to give magical protection – a charm to ward off danger, an incantation to the angels of heaven to descend ... I was not sure the magic worked outside the church walls ... in the open air of Mississippi. But I wasn't taking any chances." Folk singer Judy Collins, who knew the song before she could remember learning it, witnessed Fannie Lou Hamer leading marchers in Mississippi in 1964, singing "Amazing Grace". Collins also considered it a talisman of sorts, and saw its equal emotional impact on the marchers, witnesses, and law enforcement who opposed the civil rights demonstrators. According to fellow folk singer Joan Baez, it was one of the most requested songs from her audiences, but she never realised its origin as a hymn; by the time she was singing it in the 1960s she said it had "developed a life of its own". It even made an appearance at the Woodstock Music Festival in 1969 during Arlo Guthrie's performance.
Collins decided to record it in the late 1960s amid an atmosphere of counterculture introspection; she was part of an encounter group that ended a contentious meeting by singing "Amazing Grace" as it was the only song to which all the members knew the words. Her producer was present and suggested she include a version of it on her 1970 album "Whales & Nightingales". Collins, who had a history of alcohol abuse, claimed that the song was able to "pull her through" to recovery. It was recorded in St. Paul's, the chapel at Columbia University, chosen for the acoustics. She chose an "a cappella" arrangement that was close to Edwin Othello Excell's, accompanied by a chorus of amateur singers who were friends of hers. Collins connected it to the Vietnam War, to which she objected: "I didn't know what else to do about the war in Vietnam. I had marched, I had voted, I had gone to jail on political actions and worked for the candidates I believed in. The war was still raging. There was nothing left to do, I thought ... but sing 'Amazing Grace'." Gradually and unexpectedly, the song began to be played on the radio, and then be requested. It rose to number 15 on the "Billboard" Hot 100, remaining on the charts for 15 weeks, as if, she wrote, her fans had been "waiting to embrace it". In the UK, it charted 8 times between 1970 and 1972, peaking at number 5 and spending a total of 75 weeks on popular music charts. Her rendition also reached number 5 in New Zealand and number 12 in Ireland in 1971.
Although Collins used it as a catharsis for her opposition to the Vietnam War, two years after her rendition the Royal Scots Dragoon Guards, the senior Scottish regiment of the British Army, recorded an instrumental version featuring a bagpipe soloist accompanied by a pipe band. The tempo of their arrangement was slowed to allow for the bagpipes, but it was based on Collins's: it opened with a solo bagpipe echoing her lone voice, which was then joined by the band of bagpipes and horns, where her version is backed by a chorus. It topped the "RPM" national singles chart in Canada for three weeks, and rose as high as number 11 in the US. The instrumental was also controversial, as it combined pipes with a military band; the Pipe Major of the Royal Scots Dragoon Guards was summoned to Edinburgh Castle and chastised for demeaning the bagpipes.
Aretha Franklin and Rod Stewart also recorded "Amazing Grace" around the same time, and both of their renditions were popular. All four versions were marketed to distinct types of audiences, thereby assuring its place as a pop song. Johnny Cash recorded it on his 1975 album "Sings Precious Memories", dedicating it to his older brother Jack, who had been killed in a mill accident when they were boys in Dyess, Arkansas. Cash and his family sang it to themselves while they worked in the cotton fields following Jack's death. Cash often included the song when he toured prisons, saying "For the three minutes that song is going on, everybody is free. It just frees the spirit and frees the person."
The U.S. Library of Congress has a collection of 3,000 versions of and songs inspired by "Amazing Grace", some of which were first-time recordings by folklorists Alan and John Lomax, a father and son team who in 1932 travelled thousands of miles across the southern states of the US to capture the different regional styles of the song. More contemporary renditions include samples from such popular artists as Sam Cooke and the Soul Stirrers (1963), the Byrds (1970), Elvis Presley (1971), Skeeter Davis (1972), Mighty Clouds of Joy (1972), Amazing Rhythm Aces (1975), Willie Nelson (1976), the Lemonheads (1992), LeperKhanz on the album "Tiocfaidh Ár Lá" (2005), MNL48 (2018), and Five for Fighting (2020).
Following the appropriation of the hymn in secular music, "Amazing Grace" became such an icon in American culture that it has been used for a variety of secular purposes and marketing campaigns, placing it in danger of becoming a cliché. It has been mass-produced on souvenirs, lent its name to a Superman villain, appeared on "The Simpsons" to demonstrate the redemption of a murderous character named Sideshow Bob, been incorporated into Hare Krishna chants, and been adapted for Wicca ceremonies. It can also be sung to the theme from "The Mickey Mouse Club", as Garrison Keillor has observed. The hymn has been employed in several films, including "Alice's Restaurant", "Invasion of the Body Snatchers", "Coal Miner's Daughter", and "Silkwood". It is referenced in the 2006 film "Amazing Grace", which highlights Newton's influence on the leading British abolitionist William Wilberforce, and in the film biography of Newton, "Newton's Grace". The 1982 science fiction film "Star Trek II: The Wrath of Khan" used "Amazing Grace" amid a context of Christian symbolism, to memorialise Mr. Spock following his death, but more practically, because the song has become "instantly recognizable to many in the audience as music that sounds appropriate for a funeral" according to a "Star Trek" scholar. Since 1954, when an organ instrumental of "New Britain" became a best-seller, "Amazing Grace" has been associated with funerals and memorial services. The hymn has become a song that inspires hope in the wake of tragedy, becoming a sort of "spiritual national anthem" according to authors Mary Rourke and Emily Gwathmey. For example, Barack Obama recited and later sang the hymn at the memorial service for Clementa Pinckney, who was one of the nine victims of the Charleston church shooting in 2015.
In recent years, the words of the hymn have been changed in some religious publications to downplay a sense of imposed self-loathing by its singers. The second line, "That saved a wretch like me!" has been rewritten as "That saved and strengthened me", "save a soul like me", or "that saved and set me free". Kathleen Norris in her book "Amazing Grace: A Vocabulary of Faith" characterises this transformation of the original words as "wretched English", making the line that replaces the original "laughably bland". Part of the reason for this change has been the altered interpretations of what wretchedness and grace mean. Newton's Calvinistic view of redemption and divine grace formed his perspective that he considered himself a sinner so vile that he was unable to change his life or be redeemed without God's help. Yet his lyrical subtlety, in Steve Turner's opinion, leaves the hymn's meaning open to a variety of Christian and non-Christian interpretations. "Wretch" also represents a period in Newton's life when he saw himself outcast and miserable, as he was when he was enslaved in Sierra Leone; his own arrogance was matched by how far he had fallen in his life.
Due to its immense popularity and iconic nature, the meaning behind the words of "Amazing Grace" has become as individual as the singer or listener. Bruce Hindmarsh suggests that the secular popularity of "Amazing Grace" is due to the absence of any mention of God in the lyrics until the fourth verse (in Excell's version, the fourth verse begins "When we've been there ten thousand years"), and that the song represents the ability of humanity to transform itself instead of a transformation taking place at the hands of God. "Grace", however, had a clearer meaning to John Newton, as he used the word to represent God or the power of God.
The transformative power of the song was investigated by journalist Bill Moyers in a documentary released in 1990. Moyers was inspired to focus on the song's power after watching a performance at Lincoln Center, where the audience consisted of Christians and non-Christians, and he noticed that it had an equal impact on everybody in attendance, unifying them. James Basker also acknowledged this force when he explained why he chose "Amazing Grace" to represent a collection of anti-slavery poetry: "there is a transformative power that is applicable ... : the transformation of sin and sorrow into grace, of suffering into beauty, of alienation into empathy and connection, of the unspeakable into imaginative literature."
Moyers interviewed Collins, Cash, opera singer Jessye Norman, Appalachian folk musician Jean Ritchie and her family, white Sacred Harp singers in Georgia, black Sacred Harp singers in Alabama, and a prison choir at the Texas State Penitentiary at Huntsville. Collins, Cash, and Norman were unable to discern if the power of the song came from the music or the lyrics. Norman, who once notably sang it at the end of a large outdoor rock concert for Nelson Mandela's 70th birthday, stated, "I don't know whether it's the text – I don't know whether we're talking about the lyrics when we say that it touches so many people – or whether it's that tune that everybody knows." A prisoner interviewed by Moyers explained his literal interpretation of the second verse: "'Twas grace that taught my heart to fear, and grace my fears relieved" by saying that the fear became immediately real to him when he realised he may never get his life in order, compounded by the loneliness and restriction in prison. Gospel singer Marion Williams summed up its effect: "That's a song that gets to everybody".
The "Dictionary of American Hymnology" claims it is included in more than a thousand published hymnals, and recommends its use for "occasions of worship when we need to confess with joy that we are saved by God's grace alone; as a hymn of response to forgiveness of sin or as an assurance of pardon; as a confession of faith or after the sermon".
AOL
AOL (stylized as Aol., formerly a company known as AOL Inc. and originally known as America Online) is an American web portal and online service provider based in New York City. It is a brand marketed by Verizon Media.
The service traces its history to an online service known as PlayNET, which hosted multi-player games for the Commodore 64. PlayNET licensed its software to a new service, Quantum Link (Q-Link), which went online in November 1985. PlayNET shut down shortly thereafter. The initial Q-Link service was similar to the original PlayNET, but over time Q-Link added many new services. When a new IBM PC client was released, the company focused on the non-gaming services and launched it under the name America Online. The original Q-Link was shut down on November 1, 1995, while AOL grew to become the largest online service, displacing established players like CompuServe and The Source. By 1995, AOL had about three million active users.
AOL was one of the early pioneers of the Internet in the mid-1990s, and the most recognized brand on the web in the United States. It originally provided a dial-up service to millions of Americans, as well as providing a web portal, e-mail, instant messaging and later a web browser following its purchase of Netscape. In 2001, at the height of its popularity, it purchased the media conglomerate Time Warner in the largest merger in U.S. history. AOL rapidly declined thereafter, partly due to the decline of dial-up and rise of broadband. AOL was eventually spun off from Time Warner in 2009, with Tim Armstrong appointed the new CEO. Under his leadership, the company invested in media brands and advertising technologies.
On June 23, 2015, AOL was acquired by Verizon Communications for $4.4 billion.
AOL began in 1983, as a short-lived venture called Control Video Corporation (or CVC), founded by William von Meister. Its sole product was an online service called GameLine for the Atari 2600 video game console, after von Meister's idea of buying music on demand was rejected by Warner Bros. Subscribers bought a modem from the company for US$49.95 and paid a one-time US$15 setup fee. GameLine permitted subscribers to temporarily download games and keep track of high scores, at a cost of US$1 per game. After the telephone connection was ended, the downloaded game remained in GameLine's Master Module and was playable until the user turned off the console or downloaded another game.
In January 1983, Steve Case was hired as a marketing consultant for Control Video on the recommendation of his brother, investment banker Dan Case. In May 1983, Jim Kimsey became a manufacturing consultant for Control Video, which was near bankruptcy. Kimsey was brought in by his West Point friend Frank Caufield, an investor in the company. In early 1985, von Meister left the company.
On May 24, 1985, Quantum Computer Services, an online services company, was founded by Jim Kimsey from the remnants of Control Video, with Kimsey as chief executive officer, and Marc Seriff as chief technology officer. The technical team consisted of Marc Seriff, Tom Ralston, Ray Heinrich, Steve Trus, Ken Huntsman, Janet Hunter, Dave Brown, Craig Dykstra, Doug Coward, and Mike Ficco. In 1987, Case was promoted to executive vice-president. Kimsey soon began to groom Case to take over the role of CEO, which he did when Kimsey retired in 1991.
Kimsey changed the company's strategy, and in 1985, launched a dedicated online service for Commodore 64 and 128 computers, originally called Quantum Link ("Q-Link" for short). The Quantum Link software was based on software licensed from PlayNet, Inc. (founded in 1983 by Howard Goldberg and Dave Panzl). The service was different from other online services as it used the computing power of the Commodore 64 and the Apple II rather than just a "dumb" terminal. It passed tokens back and forth and provided a fixed price service tailored for home users. In May 1988, Quantum and Apple launched AppleLink Personal Edition for Apple II and Macintosh computers. In August 1988, Quantum launched PC Link, a service for IBM-compatible PCs developed in a joint venture with the Tandy Corporation. After the company parted ways with Apple in October 1989, Quantum changed the service's name to America Online. Case promoted and sold AOL as the online service for people unfamiliar with computers, in contrast to CompuServe, which was well established in the technical community.
From the beginning, AOL included online games in its mix of products; many classic and casual games were included in the original PlayNet software system. In the early years of AOL the company introduced many innovative online interactive titles and games, including:
In February 1991, AOL for DOS was launched using a GeoWorks interface, followed a year later by AOL for Windows. This coincided with growth in pay-based online services, like Prodigy, CompuServe, and GEnie. 1991 also saw the introduction of an original Dungeons & Dragons title called "Neverwinter Nights" from Stormfront Studios, which was one of the first multiplayer online role-playing games to depict the adventure with graphics instead of text.
During the early 1990s, the average subscription lasted for about 25 months and accounted for $350 in total revenue. Advertisements invited modem owners to "Try America Online FREE", promising free software and trial membership. AOL discontinued Q-Link and PC Link in late 1994. In September 1993, AOL added Usenet access to its features. This is commonly referred to as the "Eternal September", as Usenet's cycle of new users was previously dominated by smaller numbers of college and university freshmen gaining access in September and taking a few weeks to acclimate. This also coincided with a new "carpet bombing" marketing campaign by CMO Jan Brandt to distribute as many free AOL trial disks as possible through nonconventional distribution partners. At one point, 50% of the CDs produced worldwide had an AOL logo. AOL quickly surpassed GEnie, and by the mid-1990s, it passed Prodigy (which for several years allowed AOL advertising) and CompuServe.
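As a quick check on the subscription figures above, roughly 25 months at about $350 in total revenue implies an average of about $14 per subscriber per month (the per-month figure is derived here, not stated in the source):

```python
# Back-of-the-envelope check on AOL's early-1990s subscription figures.
avg_months = 25          # average subscription length in months, from the text
total_revenue = 350.0    # average total revenue per subscription, from the text

revenue_per_month = total_revenue / avg_months
print(f"${revenue_per_month:.2f} per subscriber-month")  # prints $14.00 per subscriber-month
```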
Over the next several years, AOL launched services with the National Education Association, the American Federation of Teachers, "National Geographic", the Smithsonian Institution, the Library of Congress, Pearson, Scholastic, ASCD, NSBA, NCTE, Discovery Networks, Turner Education Services (CNN Newsroom), NPR, The Princeton Review, Stanley Kaplan, Barron's, Highlights for Kids, the U.S. Department of Education, and many other education providers. AOL offered the first real-time homework help service (the Teacher Pager—1990; prior to this, AOL provided homework help bulletin boards), the first service by children, for children (Kids Only Online, 1991), the first online service for parents (the Parents Information Network, 1991), the first online courses (1988), the first omnibus service for teachers (the Teachers' Information Network, 1990), the first online exhibit (Library of Congress, 1991), the first parental controls, and many other online education firsts.
AOL purchased search engine WebCrawler in 1995, but sold it to Excite the following year; the deal made Excite the sole search and directory service on AOL. After the deal closed in March 1997, AOL launched its own branded search engine, based on Excite, called NetFind. This was renamed to AOL Search in 1999.
AOL charged its users an hourly fee until December 1996, when the company changed to a flat monthly rate of $19.95. During this time, AOL connections were flooded with users trying to connect, and many canceled their accounts due to constant busy signals. A commercial was made featuring Steve Case telling people AOL was working day and night to fix the problem. Within three years, AOL's user base grew to 10 million people. In 1995 AOL was headquartered at 8619 Westwood Center Drive in the Tysons Corner CDP in unincorporated Fairfax County, Virginia, near the Town of Vienna.
By October 1996, AOL was quickly running out of room for its network at the Fairfax County campus. In mid-1996, AOL had moved to 22000 AOL Way in Dulles, unincorporated Loudoun County, Virginia, to provide room for future growth. In a landmark five-year agreement with Microsoft, maker of the most popular operating system, AOL software was bundled with Windows.
On March 31, 1996, the short-lived eWorld was purchased by AOL. In 1997, about half of all U.S. homes with Internet access had it through AOL. During this time, AOL's content channels, under Jason Seiken, including News, Sports, and Entertainment, experienced their greatest growth as AOL became the dominant online service internationally with more than 34 million subscribers. In November 1998, AOL announced it would acquire Netscape, best known for its web browser, in a major $4.2 billion deal. The deal closed on March 17, 1999. Another large acquisition in December 1999 was that of MapQuest, for $1.1 billion.
In January 2000, AOL and Time Warner announced plans to merge, forming AOL Time Warner, Inc. The terms of the deal called for AOL shareholders to own 55% of the new, combined company. The deal closed on January 11, 2001. The new company was led by executives from AOL, SBI, and Time Warner. Gerald Levin, who had served as CEO of Time Warner, was CEO of the new company. Steve Case served as Chairman, J. Michael Kelly (from AOL) was the Chief Financial Officer, and Robert W. Pittman (from AOL) and Dick Parsons (from Time Warner) served as Co-Chief Operating Officers. In 2002, Jonathan Miller became CEO of AOL. The following year, AOL Time Warner dropped the "AOL" from its name. With a combined value of $360 billion, it was the largest merger in history when completed. That value fell sharply, to as low as $120 billion, as markets repriced AOL more modestly as a pure internet firm once it was combined with the traditional media and cable business. The slump was brief, and the company's value rose again within three months. By the end of that year, however, the tide had turned against "pure" internet companies, with many collapsing under falling stock prices, and even the strongest companies in the field losing up to 75% of their market value. The decline continued through 2001, but even with the losses, AOL was among the internet giants that continued to outperform brick-and-mortar companies.
In 2004, along with the launch of AOL 9.0 Optimized, AOL also made available the option of personalized greetings which would enable the user to hear his or her name while accessing basic functions and mail alerts, or while logging in or out. In 2005, AOL broadcast the Live 8 concert live over the Internet, and thousands of users downloaded clips of the concert over the following months. In late 2005, AOL released AOL Safety & Security Center, a bundle of McAfee Antivirus, CA anti-spyware, and proprietary firewall and phishing protection software. News reports in late 2005 identified companies such as Yahoo!, Microsoft, and Google as candidates for turning AOL into a joint venture. Those plans were abandoned when it was revealed on December 20, 2005, that Google would purchase a 5% share of AOL for $1 billion.
On April 3, 2006, AOL announced it was retiring the full name America Online; the official name of the service became AOL, and the full name of the Time Warner subdivision became AOL LLC.
On June 8, 2006, AOL offered a new program called AOL Active Security Monitor, a diagnostic tool which checked the local PC's security status, and recommended additional security software from AOL or Download.com. The program rated the computer on a variety of different areas of security and general computer health. Two months later, AOL released AOL Active Virus Shield. This software was developed by Kaspersky Lab. Active Virus Shield software was free and did not require an AOL account, only an internet email address. The ISP side of AOL UK was bought by The Carphone Warehouse in October 2006 to take advantage of their 100,000 LLU customers, making The Carphone Warehouse the biggest LLU provider in the UK.
In August 2006, AOL announced it would give away email accounts and software previously available only to its paying customers, provided the customer accessed AOL or AOL.com through a non-AOL-owned access method (otherwise known as "third party transit", "bring your own access", or "BYOA"). The move was designed to reduce costs associated with the "walled garden" business model by reducing usage of AOL-owned access points and shifting members with high-speed internet access from client-based usage to the more lucrative advertising provider, AOL.com. The change from paid to free was also designed to slow the rate of members canceling their accounts and defecting to Microsoft Hotmail, Yahoo!, or other free email providers. The other free services included:
Also that month, AOL informed its American customers it would be increasing the price of its dial-up access to US$25.90. The increase was part of an effort to migrate the service's remaining dial-up users to broadband, as the increased price was the same as what the company had been charging for monthly DSL access. However, AOL later began offering its service for $9.95 a month for unlimited dial-up access.
On November 16, 2006, Randy Falco succeeded Jonathan Miller as CEO. In December 2006, AOL closed their last remaining call center in the United States, "taking the America out of America Online" according to industry pundits. Service centers based in India and the Philippines continue to this day to provide customer support and technical assistance to subscribers.
On September 17, 2007, AOL announced it was moving one of its corporate headquarters from Dulles, Virginia, to New York City and combining its various advertising units into a new subsidiary called Platform A. This action followed several advertising acquisitions, most notably Advertising.com, and highlighted the company's new focus on advertising-driven business models. AOL management stressed "significant operations" will remain in Dulles, which included the company's access services and modem banks.
In October 2007, AOL announced it would move one of its other headquarters from Loudoun County, Virginia, to New York City; it would continue to operate its Virginia offices. As part of the impending move to New York and the restructuring of responsibilities at the Dulles headquarters complex after the Reston move, AOL CEO Randy Falco announced on October 15, 2007, plans to lay off 2,000 employees worldwide by the end of 2007, beginning "immediately". The end result was a near 40% layoff across the board at AOL. Most compensation packages associated with the October 2007 layoffs included a minimum of 120 days of severance pay, 60 of which were given in lieu of the 60-day advance notice requirement by provisions of the 1988 Federal WARN Act.
By November 2007, AOL's customer base had been reduced to 10.1 million subscribers, just narrowly ahead of Comcast and AT&T Yahoo!. According to Falco, as of December 2007, the conversion rate of accounts from paid access to free access was over 80%.
On January 3, 2008, AOL announced the closing of one of its three Northern Virginia data centers, Reston Technology Center, and sold it to CRG West. On February 6, Time Warner CEO Jeff Bewkes announced Time Warner would split AOL's internet access and advertising businesses in two, with the possibility of later selling the internet access division.
On March 13, 2008, AOL purchased the social networking site Bebo for $850m (£417m). On July 25, AOL announced it was shedding Xdrive, AOL Pictures, and BlueString to save on costs and focus on its core advertising business. AOL Pictures was terminated on December 31. On October 31, AOL Hometown (a web hosting service for the websites of AOL customers) and the AOL Journal blog hosting service were eliminated.
On March 12, 2009, Tim Armstrong, formerly with Google, was named Chairman and CEO of AOL. Shortly thereafter, on May 28, Time Warner announced it would spin off AOL as an independent company once Google's shareholding ceased at the end of the fiscal year. On November 23, AOL unveiled a sneak preview of a new brand identity, which had the wordmark "Aol." superimposed onto canvases created by commissioned artists. The new identity, designed by Wolff Olins, was applied to all of AOL's services on December 10, the date AOL traded independently for the first time since the Time Warner merger, on the New York Stock Exchange under the symbol AOL.
On April 6, 2010, AOL announced plans to shut down or sell Bebo; on June 16, the property was sold to Criterion Capital Partners for an undisclosed amount, believed to be around $10 million. In December, AIM eliminated access to AOL chat rooms noting a marked decline of patronage in recent months.
Under Armstrong's leadership, AOL began taking steps in a new business direction, marked by a series of acquisitions. On June 11, 2009, AOL had already announced the acquisition of Patch Media, a network of community-specific news and information sites which focuses on individual towns and communities. On September 28, 2010, at the San Francisco TechCrunch Disrupt Conference, AOL signed an agreement to acquire TechCrunch to further its overall strategy of providing premier online content. On December 12, 2010, AOL acquired about.me, a personal profile and identity platform, four days after the latter's public launch.
On January 31, 2011, AOL announced the acquisition of European video distribution network, goviral. In March 2011, AOL acquired "HuffPost" for $315 million. Shortly after the acquisition was announced, Huffington Post co-founder Arianna Huffington replaced AOL Content Chief David Eun, assuming the role of President and Editor-in-Chief of the AOL Huffington Post Media Group. On March 10, AOL announced it would cut around 900 workers due to the HuffPost acquisition.
On September 14, 2011, AOL formed a strategic ad selling partnership with two of its largest competitors, Yahoo and Microsoft. According to the new partnership, the three companies would begin selling inventory on each other's sites. The strategy was designed to help them compete with Google and ad networks.
On February 28, 2012, AOL partnered with PBS to launch MAKERS, a digital documentary series focusing on high-achieving women in male-dominated industries such as war, comedy, space, business, Hollywood and politics. Subjects for MAKERS episodes have included Oprah Winfrey, Hillary Clinton, Sheryl Sandberg, Martha Stewart, Indra Nooyi, Lena Dunham, and Ellen DeGeneres.
On March 15, 2012, AOL announced the acquisition of Hipster, a mobile photo-sharing app for an undisclosed amount. On April 9, 2012, AOL announced a deal to sell 800 patents to Microsoft for $1.056 billion. The deal includes a "perpetual" license for AOL to use these patents.
In April, AOL took several steps to expand its ability to generate revenue through online video advertising. The company announced it would offer a gross rating point (GRP) guarantee for online video, mirroring the TV ratings system and guaranteeing audience delivery for online video advertising campaigns bought across its properties. This announcement came just days before the Digital Content NewFront (DCNF), a two-week event held by AOL, Google, Hulu, Microsoft, Vevo and Yahoo to showcase the participating sites' digital video offerings. The DCNF was conducted in advance of the traditional television upfronts in hopes of diverting more advertising money into the digital space. On April 24, the company launched the AOL On network, a single website for its video output.
In February 2013, AOL reported its fourth quarter revenue of $599.5 million, its first growth in quarterly revenue in 8 years.
In August 2013, Armstrong announced Patch Media would scale back or sell hundreds of its local news sites. Not long afterwards, layoffs began, with up to 500 out of 1,100 positions initially impacted. On January 15, 2014, Patch Media was spun off, with majority ownership being held by Hale Global. By the end of 2014, AOL controlled 0.74% of the global advertising market, well behind industry leader Google's 31.4%.
On January 23, 2014, AOL acquired Gravity, a software startup that tracked users’ online behavior and tailored ads and content based on their interests, for $83 million. The deal, which included roughly 40 Gravity employees and their personalization technology, was CEO Tim Armstrong's fourth largest deal since taking over the company in 2009. Later that year, AOL also acquired Vidible, which developed technology to help websites run video content from other publishers, and help video publishers sell their content to these websites. The deal, which was announced December 1, 2014, was reportedly worth roughly $50 million.
On July 16, 2014, AOL earned an Emmy nomination for the AOL original series, The Future Starts Here, in the News and Documentary category. This came days after AOL earned its first Primetime Emmy Award nomination for "Park Bench with Steve Buscemi" in the Outstanding Short Form Variety Series category, which later won the award. Created and hosted by Tiffany Shlain, the series focused on humans' relationship with technology and featured episodes such as The Future of Our Species, Why We Love Robots, and A Case for Optimism.
On May 12, 2015, Verizon announced plans to buy AOL for $50 per share in a deal valued at $4.4 billion. The transaction was completed on June 23. Armstrong, who continued to lead the firm following regulatory approval, called the deal the logical next step for AOL. "If you look forward five years, you're going to be in a space where there are going to be massive, global-scale networks, and there's no better partner for us to go forward with than Verizon," he said. "It's really not about selling the company today. It's about setting up for the next five to 10 years."
Analyst David Bank said he thought the deal made sense for Verizon. The deal will broaden Verizon's advertising sales platforms and increase its video production ability through websites such as "HuffPost", TechCrunch, and Engadget. However, Craig Moffett said it was unlikely the deal would make a big difference to Verizon's bottom line. AOL had about two million dial-up subscribers at the time of the buyout. The announcement caused AOL's stock price to rise 17%, while Verizon's stock price dropped slightly.
Shortly before the Verizon purchase, on April 14, 2015, AOL launched ONE by AOL, a digital marketing programmatic platform that unifies buying channels and audience management platforms to track and optimize campaigns over multiple screens. Later that year, on September 15, AOL expanded the product with ONE by AOL: Creative, which is geared towards creative and media agencies to similarly connect marketing and ad distribution efforts.
On May 8, 2015, AOL reported its first-quarter revenue of $625.1 million, $483.5 million of which came from advertising and related operations, marking a 7% increase from Q1 2014. Over that year, the AOL Platforms division saw a 21% increase in revenue, but a drop in adjusted OIBDA due to increased investments in the company's video and programmatic platforms.
On June 29, 2015, AOL announced a deal with Microsoft to take over the majority of its digital advertising business. Under the pact, as many as 1,200 Microsoft employees involved with the business will be transferred to AOL, and the company will take over the sale of display, video, and mobile ads on various Microsoft platforms in nine countries, including Brazil, Canada, the United States, and the United Kingdom. Additionally, Google Search will be replaced on AOL properties with Bing—which will display advertising sold by Microsoft. Both advertising deals are subject to affiliate marketing revenue sharing.
On July 22, 2015, AOL received two News and Documentary Emmy nominations, one for MAKERS in the Outstanding Historical Programming category, and the other for "True Trans With Laura Jane Grace", which documented the story of Laura Jane Grace, a transgender musician best known as the founder, lead singer, songwriter and guitarist of the punk rock band Against Me!, and her decision to come out publicly and overall transition experience.
On September 3, 2015, AOL agreed to buy Millennial Media for US$238 million. On October 23, 2015, AOL completed the acquisition.
On October 1, 2015, Go90, a free ad-supported mobile video service aimed at young adult and teen viewers that Verizon owns and AOL oversees and operates, launched its content publicly after months of beta testing. The initial launch line-up included content from Comedy Central, HuffPost, Nerdist News, Univision News, Vice, ESPN and MTV.
On January 25, 2016, AOL expanded its ONE platform by introducing ONE by AOL: Publishers, which combines six previously separate technologies to offer various publisher capabilities such as customizing video players, offering premium ad experience to boost visibility, and generating large video libraries. The announcement was made in tandem with AOL's acquisition of AlephD, a Paris-based startup focused on publisher analytics of ad price tracking based on historical data. AOL announced AlephD would be a part of the ONE by AOL: Publishers platform.
On April 20, 2016, AOL acquired virtual reality studio RYOT to bring immersive 360 degree video and VR content to HuffPost's global audience across desktop, mobile, and apps.
In July 2016, Verizon Communications announced its intent to purchase the core internet business of Yahoo!. Verizon tentatively planned to merge AOL with Yahoo into a new company called "Oath Inc.".
In April 2018, Oath Inc. sold Moviefone to MoviePass Parent Helios and Matheson Analytics.
As of 2019, the following media brands became subsidiaries of AOL's parent, Verizon Media.
AOL's content contributors consist of over 20,000 bloggers, including politicians, celebrities, academics, and policy experts, who contribute on a wide range of newsworthy topics.
In addition to mobile-optimized web experiences, AOL produces mobile applications for existing AOL properties like Autoblog, Engadget, The Huffington Post, TechCrunch, and products such as Alto, Pip, and Vivv.
AOL has a global portfolio of media brands and advertising services across mobile, desktop, and TV. Services include brand integration and sponsorships through its in-house branded content arm, Partner Studio by AOL, as well as data and programmatic offerings through ad technology stack, ONE by AOL.
AOL acquired a number of businesses and technologies that helped to form ONE by AOL. These acquisitions included AdapTV in 2013 and Convertro, Precision Demand, and Vidible in 2014. ONE by AOL is further broken down into ONE by AOL for Publishers (formerly Vidible, AOL On Network and Be On for Publishers) and ONE by AOL for Advertisers, each of which has several sub-platforms.
On September 10, 2018, AOL's parent company Oath consolidated Yahoo BrightRoll, One by AOL, and Yahoo Gemini to "simplify" its adtech services by launching a single advertising proposition dubbed Oath Ad Platforms.
AOL offers a range of integrated products and properties including communication tools, mobile apps and services and subscription packages.
AOL Desktop is an internet suite produced by AOL from 2007 that integrates a web browser, a media player and an instant messenger client. Version 10.X was based on AOL OpenRide and was an upgrade from it. The macOS version is based on WebKit.
AOL Desktop version 10.X differed from previous AOL browsers and AOL Desktop versions. Its features focused on web browsing as well as email. For instance, one did not have to sign into AOL in order to use it as a regular browser. In addition, non-AOL email accounts could be accessed through it. Primary buttons included "MAIL", "IM", and several shortcuts to various web pages. The first two required users to sign in, but the shortcuts to web pages could be used without authentication. AOL Desktop version 10.X was later marked as unsupported in favor of the AOL Desktop 9.X versions.
Version 9.8 was released, replacing the Internet Explorer components of the internet browser with the Chromium Embedded Framework (CEF) to give users an improved web browsing experience closer to that of Chrome.
Version 11 of AOL Desktop, currently in Beta, is a total rewrite but maintains a similar user interface to the previous 9.8.X series of releases.
In 2017, a new paid version called AOL Desktop Gold was released, available for $4.99 per month after trial. It replaced the previous free version.
In its earlier incarnation as a "walled garden" community and service provider, AOL received criticism for its community policies, terms of service, and customer service. Prior to 2006, AOL was known for its direct mailing of CD-ROMs and 3.5-inch floppy disks containing its software. The disks were distributed in large numbers; at one point, half of the CDs manufactured worldwide had AOL logos on them. The marketing tactic was criticized for its environmental cost, and AOL CDs were named by "PC World" as the most annoying tech product.
AOL used a system of volunteers to moderate its chat rooms, forums and user communities. The program dated back to AOL's early days, when it charged by the hour for access and one of its highest billing services was chat. AOL provided free access to community leaders in exchange for moderating the chat rooms, and this effectively made chat very cheap to operate, and more lucrative than AOL's other services of the era. There were 33,000 community leaders in 1996. All community leaders received hours of training and underwent a probationary period. While most community leaders moderated chat rooms, some ran AOL communities and controlled their layout and design, with as much as 90% of AOL's content being created or overseen by community managers until 1996.
By 1996, ISPs were beginning to charge flat rates for unlimited access, which they could do at a profit because they only provided internet access. Even though AOL would lose money with such a pricing scheme, it was forced by market conditions to offer unlimited access in October 1996. In order to return to profitability, AOL rapidly shifted its focus from content creation to advertising, resulting in less of a need to carefully moderate every forum and chat room to keep users willing to pay by the minute to remain connected.
After the switch to unlimited access, AOL considered scrapping the program entirely, but continued it with a reduced number of community leaders, with scaled-back roles in creating content. Although community leaders continued to receive free access, after 1996 they were motivated more by the prestige of the position and the access to moderator tools and restricted areas within AOL. By 1999, there were over 15,000 volunteers in the program.
In May 1999, two former volunteers filed a class-action lawsuit alleging AOL violated the Fair Labor Standards Act by treating volunteers like employees. Volunteers had to apply for the position, commit to working for at least three to four hours a week, fill out timecards and sign a non-disclosure agreement. On July 22, AOL ended its youth corps, which consisted of 350 underage community leaders. At this time, the United States Department of Labor began an investigation into the program, but it came to no conclusions about AOL's practices.
AOL ended its community leader program on June 8, 2005. The class action lawsuit dragged on for years, even after AOL ended the program and AOL declined as a major internet company. In 2010, AOL finally agreed to settle the lawsuit for $15 million. The community leader program was found to be an example of co-production in a 2009 article in International Journal of Cultural Studies.
AOL has faced a number of lawsuits over claims that it has been slow to stop billing customers after their accounts have been canceled, either by the company or the user. In addition, AOL changed its method of calculating used minutes in response to a class action lawsuit. Previously, AOL would add 15 seconds to the time a user was connected to the service and round up to the next whole minute (thus, a person who used the service for 12 minutes and 46 seconds would be charged for 14 minutes). AOL claimed this was to account for sign on/sign off time, but because this practice was not made known to its customers, the plaintiffs won (some also pointed out that signing on and off did not always take 15 seconds, especially when connecting via another ISP). AOL disclosed its connection-time calculation methods to all of its customers and credited them with extra free hours. In addition, the AOL software would notify the user of exactly how long they were connected and how many minutes they were being charged.
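The disputed rounding scheme described above can be sketched in a few lines (the function name and structure are illustrative, not from AOL's actual billing system):

```python
import math

def billed_minutes(connected_seconds: int) -> int:
    # Add the 15-second "sign on/sign off" pad, then round up
    # to the next whole minute, as described in the lawsuit.
    return math.ceil((connected_seconds + 15) / 60)

# A 12-minute-46-second session is 766 seconds; with the pad it
# becomes 781 seconds, which rounds up to 14 billed minutes.
print(billed_minutes(12 * 60 + 46))
```

The example reproduces the figure in the lawsuit: a user connected for 12 minutes and 46 seconds was charged for 14 minutes, one more than simple round-up of the actual connection time would yield.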
AOL was sued by the Ohio Attorney General in October 2003 for improper billing practices. The case was settled on June 8, 2005. AOL agreed to resolve any consumer complaints filed with the Ohio AG's office. In December 2006, AOL agreed to provide restitution to Florida consumers to settle the case filed against them by the Florida Attorney General.
Many customers complained that AOL personnel ignored their demands to cancel service and stop billing. In response to approximately 300 consumer complaints, the New York Attorney General's office began an inquiry of AOL's customer service policies. The investigation revealed that the company had an elaborate scheme for rewarding employees who purported to retain or "save" subscribers who had called to cancel their Internet service. In many instances, such retention was done against subscribers' wishes, or without their consent. Under the scheme, customer service personnel received bonuses worth tens of thousands of dollars if they could successfully dissuade or "save" half of the people who called to cancel service. For several years, AOL had instituted minimum retention or "save" percentages, which consumer representatives were expected to meet. These bonuses, and the minimum "save" rates accompanying them, had the effect of employees not honoring cancellations, or otherwise making cancellation unduly difficult for consumers.
On August 24, 2005, America Online agreed to pay $1.25 million to the state of New York and reformed its customer service procedures. Under the agreement, AOL would no longer require its customer service representatives to meet a minimum quota for customer retention in order to receive a bonus. However, the agreement only covered people in the state of New York.
On June 13, 2006, Vincent Ferrari documented his account cancellation phone call in a blog post, stating he had switched to broadband years earlier. In the recorded phone call, the AOL representative refused to cancel the account unless the 30-year-old Ferrari explained why AOL hours were still being recorded on it. Ferrari insisted that AOL software was not even installed on the computer. When Ferrari demanded that the account be canceled regardless, the AOL representative asked to speak with Ferrari's father, for whom the account had been set up. The conversation was aired on CNBC. When CNBC reporters tried to have an account on AOL cancelled, they were hung up on immediately and it ultimately took more than 45 minutes to cancel the account.
On July 19, 2006, AOL's entire retention manual was released on the Internet. On August 3, 2006, Time Warner announced that the company would be dissolving AOL's retention centers due to its profits hinging on $1 billion in cost cuts. The company estimated that it would lose more than six million subscribers over the following year.
Prior to 2006, AOL was infamous for the unsolicited mass direct mail of 3½" floppy disks and CD-ROMs containing their software. They were the most frequent user of this marketing tactic, and received criticism for the environmental cost of the campaign. According to "PC World", in the 1990s "you couldn't open a magazine ("PC World" included) or your mailbox without an AOL disk falling out of it".
The mass distribution of these disks was seen as wasteful by the public and led to protest groups. One such was No More AOL CDs, a web-based effort by two IT workers to collect one million disks with the intent to return the disks to AOL. The website was started in August 2001, and an estimated 410,176 CDs were collected by August 2007 when the project was shut down.
In 2000, AOL was served with an $8 billion lawsuit alleging that its AOL 5.0 software caused significant difficulties for users attempting to use third-party Internet service providers. The lawsuit sought damages of up to $1000 for each user that had downloaded the software cited at the time of the lawsuit. AOL later agreed to a settlement of $15 million, without admission of wrongdoing. The AOL software was then given a feature called AOL Dialer, or AOL Connect on . This feature allowed users to connect to the ISP without running the full interface, letting them use only the applications they wished to use, especially if they did not favor the AOL Browser.
AOL 9.0 was once identified by Stopbadware as being "under investigation" for installing additional software without disclosure, and modifying browser preferences, toolbars, and icons. However, as of the release of AOL 9.0 VR (Vista Ready) on January 26, 2007, it was no longer considered badware due to changes AOL made in the software.
When AOL gave clients access to Usenet in 1993, they hid at least one newsgroup in standard list view: "alt.aol-sucks". AOL did list the newsgroup in the alternative description view, but changed the description to "Flames and complaints about America Online". With AOL clients swarming Usenet newsgroups, the old, existing user base started to develop a strong distaste for both AOL and its clients, referring to the new state of affairs as Eternal September.
AOL discontinued access to Usenet on June 25, 2005. No official details were provided as to the cause of decommissioning Usenet access, except providing users the suggestion to access Usenet services from a third-party, Google Groups. AOL then provided community-based message boards in lieu of Usenet.
AOL has a detailed set of guidelines and expectations for users on their service, known as the Terms of Service (TOS, also known as Conditions of Service, or COS in the UK). It is separated into three different sections: "Member Agreement", "Community Guidelines" and "Privacy Policy". All three agreements are presented to users at time of registration and digital acceptance is achieved when they access the AOL service. During the period when volunteer chat room hosts and board monitors were used, chat room hosts were given a brief online training session and test on Terms of Service violations.
There have been many complaints over the rules governing an AOL user's conduct. Some users disagree with the TOS, arguing that the guidelines are too strict to follow and that the TOS may change without users being made aware. A considerable cause of this was likely the alleged censorship of user-generated content during AOL's earlier years of growth.
In early 2005, AOL stated its intention to implement a certified email system called Goodmail, which would allow companies to send email to users with whom they had pre-existing business relationships, with a visual indication that the email was from a trusted source and without the risk that the messages might be blocked or stripped by spam filters.
This decision drew fire from MoveOn, which characterized the program as an "email tax", and the Electronic Frontier Foundation (EFF), which characterized it as a shakedown of non-profits. A website called Dearaol.com was launched, with an online petition and a blog that garnered hundreds of signatures from people and organizations expressing their opposition to AOL's use of Goodmail.
Esther Dyson defended the move in an editorial in "The New York Times", saying "I hope Goodmail succeeds, and that it has lots of competition. I also think it and its competitors will eventually transform into services that more directly serve the interests of mail recipients. Instead of the fees going to Goodmail and AOL, they will also be shared with the individual recipients."
Tim Lee of the Technology Liberation Front posted an article that questioned the Electronic Frontier Foundation's adopting a confrontational posture when dealing with private companies. Lee's article cited a series of discussions on Declan McCullagh's Politechbot mailing list on this subject between the EFF's Danny O'Brien and antispammer Suresh Ramasubramanian, who has also compared the EFF's tactics in opposing Goodmail to tactics used by Republican political strategist Karl Rove. SpamAssassin developer Justin Mason posted some criticism of the EFF's and Moveon's "going overboard" in their opposition to the scheme.
The dearaol.com campaign lost momentum and disappeared; the last post to the now-defunct dearaol.com blog, "AOL starts the shakedown", was made on May 9, 2006.
Comcast, which also used the service, announced on its website that Goodmail had ceased operations and that, as of February 4, 2011, it no longer used the service.
On August 4, 2006, AOL released a compressed text file on one of its websites containing 20 million search keywords for over 650,000 users over a three-month period between March 1 and May 31, 2006, intended for research purposes. AOL pulled the file from public access by August 7, but not before its wide distribution on the Internet by others. Derivative research, titled "A Picture of Search", was published by authors Pass, Chowdhury and Torgeson for The First International Conference on Scalable Information Systems.
The data were used by websites such as AOLstalker for entertainment purposes, where users of AOLstalker are encouraged to judge AOL clients based on the humorousness of personal details revealed by search behavior.
In 2003, Jason Smathers, an AOL employee, stole 92 million America Online screen names and sold them to a known spammer. In 2005 he pled guilty to conspiracy charges, including violations of the US CAN-SPAM Act of 2003, and was sentenced in August 2005 to 15 months in prison; the sentencing judge also recommended that Smathers be required to pay $84,000 in restitution, triple the $28,000 for which he sold the addresses.
On February 27, 2012, a class action lawsuit was filed against Support.com, Inc. and partner AOL, Inc. The lawsuit alleged Support.com and AOL's Computer Checkup "scareware" (which uses software developed by Support.com) misrepresented that their software programs would identify and resolve a host of technical problems with computers, offered to perform a free “scan,” which often found problems with users' computers. The companies then offered to sell software—for which AOL allegedly charged $4.99 a month and Support.com $29—to remedy those problems. Both AOL, Inc. and Support.com, Inc. settled on May 30, 2013, for $8.5 million. This included $25.00 to each valid class member and $100,000 each to Consumer Watchdog and the Electronic Frontier Foundation. Judge Jacqueline Scott Corley wrote: “Distributing a portion of the [funds] to Consumer Watchdog will meet the interests of the silent class members because the organization will use the funds to help protect consumers across the nation from being subject to the types of fraudulent and misleading conduct that is alleged here,” and “EFF’s mission includes a strong consumer protection component, especially in regards to online protection.”
AOL continues to market Computer Checkup. It is not clear if this latest Computer Checkup continues to use scareware techniques.
Following media reports about PRISM, the NSA's massive electronic surveillance program, in June 2013, several technology companies were identified as participants, including AOL. According to the leaked documents, AOL joined the PRISM program in 2011.
At one time, most AOL users had an online "profile" hosted by the AOL Hometown service. When AOL Hometown was discontinued, users had to create a new profile on Bebo. This was an unsuccessful attempt to create a social network that would compete with Facebook. When the value of Bebo decreased to a tiny fraction of the $850 million AOL paid for it, users were forced to recreate their profiles yet again, on a new service called AOL Lifestream.
AOL took the decision to shut down Lifestream on February 24, 2017, and gave users one month's notice to save the photos and videos they had uploaded to Lifestream. Following the shutdown, AOL no longer provides any option for hosting user profiles.
During the Hometown/Bebo/Lifestream era, another user's profile could be displayed by clicking the "Buddy Info" button in the AOL Desktop software. After the shutdown of Lifestream, clicking "Buddy Info" merely displays the AIM home page (www.aim.com) and provides no information about the selected buddy. | https://en.wikipedia.org/wiki?curid=1397
Anno Domini
The terms "anno Domini" (AD) and "before Christ" (BC) are used to label or number years in the Julian and Gregorian calendars. The term "anno Domini" is Medieval Latin and means "in the year of the Lord", but is often presented using "our Lord" instead of "the Lord", taken from the full original phrase "anno Domini nostri Jesu Christi", which translates to "in the year of our Lord Jesus Christ".
This calendar era is based on the traditionally reckoned year of the conception or birth of Jesus of Nazareth, with "AD" counting years from the start of this epoch, and "BC" denoting years before the start of the era. There is no year zero in this scheme, so the year AD 1 immediately follows the year 1 BC. This dating system was devised in 525 by Dionysius Exiguus of Scythia Minor, but was not widely used until after 800.
The Gregorian calendar is the most widely used calendar in the world today. For decades, it has been the unofficial global standard, adopted in the pragmatic interests of international communication, transportation, and commercial integration, and recognized by international institutions such as the United Nations.
Traditionally, English followed Latin usage by placing the "AD" abbreviation before the year number. However, BC is placed after the year number (for example: AD 2020, but 68 BC), which also preserves syntactic order. The abbreviation is also widely used after the number of a century or millennium, as in "fourth century AD" or "second millennium AD" (although conservative usage formerly rejected such expressions). Because BC is the English abbreviation for "Before Christ", it is sometimes incorrectly concluded that AD means "After Death", i.e., after the death of Jesus. However, this would mean that the approximate 33 years commonly associated with the life of Jesus would be included in neither the BC nor the AD time scales.
Terminology that is viewed by some as being more neutral and inclusive of non-Christian people is to call this the Current or Common Era (abbreviated as CE), with the preceding years referred to as Before the Common or Current Era (BCE). Astronomical year numbering and ISO 8601 avoid words or abbreviations related to Christianity, but use the same numbers for AD years.
The "Anno Domini" dating system was devised in 525 by Dionysius Exiguus to enumerate the years in his Easter table. His system was to replace the Diocletian era that had been used in an old Easter table because he did not wish to continue the memory of a tyrant who persecuted Christians. The last year of the old table, Diocletian Anno Martyrium 247, was immediately followed by the first year of his table, Anno Domini 532. When he devised his table, Julian calendar years were identified by naming the consuls who held office that year—he himself stated that the "present year" was "the consulship of Probus Junior", which was 525 years "since the incarnation of our Lord Jesus Christ". Thus Dionysius implied that Jesus' incarnation occurred 525 years earlier, without stating the specific year during which his birth or conception occurred. "However, nowhere in his exposition of his table does Dionysius relate his epoch to any other dating system, whether consulate, Olympiad, year of the world, or regnal year of Augustus; much less does he explain or justify the underlying date."
Bonnie J. Blackburn and Leofranc Holford-Strevens briefly present arguments for 2 BC, 1 BC, or AD 1 as the year Dionysius intended for the Nativity or incarnation, and discuss several sources of confusion surrounding the question.
It is not known how Dionysius established the year of Jesus's birth. Two major theories are that Dionysius based his calculation on the Gospel of Luke, which states that Jesus was "about thirty years old" shortly after "the fifteenth year of the reign of Tiberius Caesar", and hence subtracted thirty years from that date, or that Dionysius counted back 532 years from the first year of his new table.
It has also been speculated by Georges Declercq that Dionysius' desire to replace Diocletian years with a calendar based on the incarnation of Christ was intended to prevent people from believing the imminent end of the world. At the time, it was believed by some that the resurrection of the dead and end of the world would occur 500 years after the birth of Jesus. The old "Anno Mundi" calendar theoretically commenced with the creation of the world based on information in the Old Testament. It was believed that, based on the "Anno Mundi" calendar, Jesus was born in the year 5500 (5500 years after the world was created) with the year 6000 of the "Anno Mundi" calendar marking the end of the world. "Anno Mundi" 6000 (approximately AD 500) was thus equated with the end of the world but this date had already passed in the time of Dionysius.
The Anglo-Saxon historian Saint (Venerable) Bede, who was familiar with the work of Dionysius Exiguus, used "Anno Domini" dating in his "Ecclesiastical History of the English People", which he completed in AD 731. In the "History" he also used the Latin phrase "ante [...] incarnationis dominicae tempus anno sexagesimo" ("in the sixtieth year before the time of the Lord's incarnation"), which is equivalent to the English "before Christ", to identify years before the first year of this era. Both Dionysius and Bede regarded "Anno Domini" as beginning at the incarnation of Jesus Christ, but "the distinction between Incarnation and Nativity was not drawn until the late 9th century, when in some places the Incarnation epoch was identified with Christ's conception, i. e., the Annunciation on March 25" ("Annunciation style" dating).
On the continent of Europe, "Anno Domini" was introduced as the era of choice of the Carolingian Renaissance by the English cleric and scholar Alcuin in the late eighth century. Its endorsement by Emperor Charlemagne and his successors, who popularized the use of the epoch and spread it throughout the Carolingian Empire, ultimately lies at the core of the system's prevalence. According to the Catholic Encyclopedia, popes continued to date documents according to regnal years for some time, but usage of AD gradually became more common in Catholic countries from the 11th to the 14th centuries. In 1422, Portugal became the last Western European country to switch to the system begun by Dionysius. Eastern Orthodox countries only began to adopt AD instead of the Byzantine calendar in 1700, when Russia did so, with others adopting it in the 19th and 20th centuries.
Although "Anno Domini" was in widespread use by the 9th century, the term "Before Christ" (or its equivalent) did not become common until much later. Bede used the expression ""anno [...] ante incarnationem Dominicam"" (in the year before the incarnation of the Lord) twice. ""Anno ante Christi nativitatem"" (in the year before the birth of Christ) is found in 1474 in a work by a German monk. In 1627, the French Jesuit theologian Denis Pétau (Dionysius Petavius in Latin), with his work "De doctrina temporum", popularized the usage "ante Christum" (Latin for "Before Christ") to mark years prior to AD.
When the reckoning from Jesus' incarnation began replacing the previous dating systems in western Europe, various people chose different Christian feast days to begin the year: Christmas, Annunciation, or Easter. Thus, depending on the time and place, the year number changed on different days in the year, which created slightly different styles in chronology.
With these various styles, the same day could, in some cases, be dated in 1099, 1100 or 1101.
The date of birth of Jesus of Nazareth is not stated in the gospels or in any secular text, but most scholars assume a date of birth between 6 BC and 4 BC. The historical evidence is too fragmentary to allow a definitive dating, but the date is estimated through two different approaches – one by analyzing references to known historical events mentioned in the Nativity accounts in the Gospels of Luke and Matthew, and the second by working backwards from the estimation of the start of the ministry of Jesus.
During the first six centuries of what would come to be known as the Christian era, European countries used various systems to count years. Systems in use included consular dating, imperial regnal year dating, and Creation dating.
Although the last non-imperial consul, Basilius, was appointed in 541 by Emperor Justinian I, later emperors through Constans II (641–668) were appointed consuls on the first of January after their accession. All of these emperors, except Justinian, used imperial post-consular years for the years of their reign, along with their regnal years. Long unused, this practice was not formally abolished until Novell XCIV of the law code of Leo VI did so in 888.
Another calculation had been developed by the Alexandrian monk Annianus around the year AD 400, placing the Annunciation on 25 March AD 9 (Julian)—eight to ten years after the date that Dionysius was to imply. Although this incarnation was popular during the early centuries of the Byzantine Empire, years numbered from it, an "Era of Incarnation", were exclusively used and are still used in Ethiopia. This accounts for the seven- or eight-year discrepancy between the Gregorian and Ethiopian calendars. Byzantine chroniclers like Maximus the Confessor, George Syncellus, and Theophanes dated their years from Annianus' creation of the world. This era, called "Anno Mundi", "year of the world" (abbreviated AM), by modern scholars, began its first year on 25 March 5492 BC. Later Byzantine chroniclers used "Anno Mundi" years from 1 September 5509 BC, the Byzantine Era. No single "Anno Mundi" epoch was dominant throughout the Christian world. Eusebius of Caesarea in his "Chronicle" used an era beginning with the birth of Abraham, dated in 2016 BC (AD 1 = 2017 Anno Abrahami).
Spain and Portugal continued to date by the Spanish Era (also called Era of the Caesars), which began counting from 38 BC, well into the Middle Ages. In 1422, Portugal became the last Catholic country to adopt the "Anno Domini" system.
The Era of Martyrs, which numbered years from the accession of Diocletian in 284, who launched the most severe persecution of Christians, was used by the Church of Alexandria and is still used, officially, by the Coptic Orthodox and Coptic Catholic churches. It was also used by the Ethiopian church. Another system was to date from the crucifixion of Jesus, which as early as Hippolytus and Tertullian was believed to have occurred in the consulate of the Gemini (AD 29), which appears in some medieval manuscripts.
Alternative names for the "Anno Domini" era include "vulgaris aerae" (found 1615 in Latin), "Vulgar Era" (in English, as early as 1635), "Christian Era" (in English, in 1652), "Common Era" (in English, 1708), and "Current Era".
Since 1856, the alternative abbreviations CE and BCE (sometimes written C.E. and B.C.E.) have sometimes been used in place of AD and BC.
The "Common/Current Era" ("CE") terminology is often preferred by those who desire a term that does not explicitly make religious references.
For example, Cunningham and Starr (1998) write that "B.C.E./C.E. …do not presuppose faith in Christ and hence are more appropriate for interfaith dialog than the conventional B.C./A.D." Upon its foundation, the Republic of China adopted the Minguo Era, but used the Western calendar for international purposes. The translated term was 西元 ("xī yuán", "Western Era"). Later, in 1949, the People's Republic of China adopted 公元 ("gōngyuán", "Common Era") for all purposes domestic and foreign.
In the AD year numbering system, whether applied to the Julian or Gregorian calendars, AD 1 is immediately preceded by 1 BC, with nothing in between them (there was no year zero). There are debates as to whether a new decade, century, or millennium begins on a year ending in zero or one.
For computational reasons, astronomical year numbering and the ISO 8601 standard designate years so that AD 1 = year 1, 1 BC = year 0, 2 BC = year −1, etc. In common usage, ancient dates are expressed in the Julian calendar, but ISO 8601 uses the Gregorian calendar and astronomers may use a variety of time scales depending on the application. Thus dates using the year 0 or negative years may require further investigation before being converted to BC or AD. | https://en.wikipedia.org/wiki?curid=1400 |
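The offset between BC/AD labels and astronomical year numbers is a simple shift of one for BC years, which can be sketched in a few lines (an illustrative sketch; the function names are mine, not part of either convention):

```python
def to_astronomical(year, era):
    """Convert a BC/AD year label to an astronomical year number.

    AD n -> n, while n BC -> -(n - 1), because astronomical numbering
    inserts a year 0 where the BC/AD scheme has none.
    """
    if year < 1:
        raise ValueError("BC/AD years start at 1; there is no year 0")
    return year if era == "AD" else 1 - year

def from_astronomical(n):
    """Convert an astronomical year number back to a (year, era) label."""
    return (n, "AD") if n >= 1 else (1 - n, "BC")

print(to_astronomical(1, "AD"))   # 1
print(to_astronomical(1, "BC"))   # 0
print(to_astronomical(2, "BC"))   # -1
print(from_astronomical(0))       # (1, 'BC')
```

This matches the correspondence given above: AD 1 = year 1, 1 BC = year 0, 2 BC = year −1, and so on.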
Alcuin
Alcuin of York (735 – 19 May 804 AD) – also called Ealhwine, Alhwin or Alchoin – was an English scholar, clergyman, poet and teacher from York, Northumbria. He was born around 735 and became the student of Archbishop Ecgbert at York. At the invitation of Charlemagne, he became a leading scholar and teacher at the Carolingian court, where he remained a figure in the 780s and 790s. During this period he perfected Carolingian minuscule, an easily read manuscript hand using a mixture of upper and lower case letters. Latin paleography of the 8th century leaves little room for a single origin of the script, however, and there is no proof of his direct involvement in its creation; Carolingian minuscule was already in use before Alcuin arrived in Francia. Most likely he was responsible for copying and preserving the script while at the same time restoring the purity of its form.
Alcuin wrote many theological and dogmatic treatises, as well as a few grammatical works and a number of poems. He was made Abbot of Tours in 796, where he remained until his death. "The most learned man anywhere to be found", according to Einhard's "Life of Charlemagne" ("ca." 817-833), he is considered among the most important architects of the Carolingian Renaissance. Among his pupils were many of the dominant intellectuals of the Carolingian era.
Alcuin was born in Northumbria, presumably sometime in the 730s. Virtually nothing is known of his parents, family background, or origin. In common hagiographical fashion, the "Vita Alcuini" asserts that Alcuin was 'of noble English stock,' and this statement has usually been accepted by scholars. Alcuin's own work only mentions such collateral kinsmen as Wilgils, father of the missionary saint Willibrord; and Beornrad (also spelled Beornred), abbot of Echternach and bishop of Sens. Willibrord, Alcuin and Beornrad were all related by blood.
In his "Life" of St Willibrord, Alcuin writes that Wilgils, called a "paterfamilias", had founded an oratory and church at the mouth of the Humber, which had fallen into Alcuin's possession by inheritance. Because in early Anglo-Latin writing "paterfamilias" ("head of a family, householder") usually referred to a ceorl, Donald A. Bullough suggests that Alcuin's family was of cierlisc status: "i.e.," free but subordinate to a noble lord, and that Alcuin and other members of his family rose to prominence through beneficial connections with the aristocracy. If so, Alcuin's origins may lie in the southern part of what was formerly known as Deira.
The young Alcuin came to the cathedral church of York during the golden age of Archbishop Ecgbert and his brother, the Northumbrian King Eadberht. Ecgbert had been a disciple of the Venerable Bede, who urged him to raise York to an archbishopric. King Eadberht and Archbishop Ecgbert oversaw the re-energising and re-organisation of the English church, with an emphasis on reforming the clergy and on the tradition of learning that Bede had begun. Ecgbert was devoted to Alcuin, who thrived under his tutelage.
The York school was renowned as a centre of learning in the liberal arts, literature, and science, as well as in religious matters. It was from here that Alcuin drew inspiration for the school he would lead at the Frankish court. He revived the school with the trivium and quadrivium disciplines, writing a codex on the trivium, while his student Hraban wrote one on the quadrivium.
Alcuin graduated to become a teacher during the 750s. His ascendancy to the headship of the York school, the ancestor of St Peter's School, began after Aelbert became Archbishop of York in 767. Around the same time Alcuin became a deacon in the church. He was never ordained a priest. Though there is no real evidence that he took monastic vows, he lived as if he had.
In 781, King Elfwald sent Alcuin to Rome to petition the Pope for official confirmation of York's status as an archbishopric and to confirm the election of the new archbishop, Eanbald I. On his way home he met Charlemagne (whom he had met once before), this time in the Italian city of Parma.
Alcuin's intellectual curiosity allowed him to be reluctantly persuaded to join Charlemagne's court. He joined an illustrious group of scholars that Charlemagne had gathered around him, the mainsprings of the Carolingian Renaissance: Peter of Pisa, Paulinus of Aquileia, Rado, and Abbot Fulrad. Alcuin would later write that "the Lord was calling me to the service of King Charles."
Alcuin became Master of the Palace School of Charlemagne in Aachen ("Urbs Regale") in 782. It had been founded by the king's ancestors as a place for the education of the royal children (mostly in manners and the ways of the court). However, Charlemagne wanted to include the liberal arts and, most importantly, the study of religion. From 782 to 790, Alcuin taught Charlemagne himself, his sons Pepin and Louis, as well as young men sent to be educated at court, and the young clerics attached to the palace chapel. Bringing with him from York his assistants Pyttel, Sigewulf, and Joseph, Alcuin revolutionised the educational standards of the Palace School, introducing Charlemagne to the liberal arts and creating a personalised atmosphere of scholarship and learning, to the extent that the institution came to be known as the 'school of Master Albinus'.
In this role as adviser, he took issue with the emperor's policy of forcing pagans to be baptised on pain of death, arguing, "Faith is a free act of the will, not a forced act. We must appeal to the conscience, not compel it by violence. You can force people to be baptised, but you cannot force them to believe." His arguments seem to have prevailed – Charlemagne abolished the death penalty for paganism in 797.
Charlemagne gathered the best men of every land in his court, and became far more than just the king at the centre. It seems that he made many of these men his closest friends and counsellors. They referred to him as 'David', a reference to the Biblical king David. Alcuin soon found himself on intimate terms with Charlemagne and the other men at court, where pupils and masters were known by affectionate and jesting nicknames. Alcuin himself was known as 'Albinus' or 'Flaccus'. While at Aachen, Alcuin bestowed pet names upon his pupils – derived mainly from Virgil's "Eclogues". According to the "Encyclopædia Britannica", "He loved Charlemagne and enjoyed the king's esteem, but his letters reveal that his fear of him was as great as his love."
In 790 Alcuin returned from the court of Charlemagne to England, to which he had remained attached. He dwelt there for some time, but Charlemagne then invited him back to help in the fight against the Adoptionist heresy which was at that time making great progress in Toledo, the old capital of the Visigoths and still a major city for the Christians under Islamic rule in Spain. He is believed to have had contacts with Beatus of Liébana, from the Kingdom of Asturias, who fought against Adoptionism. At the Council of Frankfurt in 794, Alcuin upheld the orthodox doctrine against the views expressed by Felix of Urgel, a heresiarch according to the Catholic Encyclopedia. Having failed during his stay in Northumbria to influence King Æthelred in the conduct of his reign, Alcuin never returned home.
He was back at Charlemagne's court by at least mid-792, writing a series of letters to Æthelred, to Hygbald, Bishop of Lindisfarne, and to Æthelhard, Archbishop of Canterbury in the succeeding months, dealing with the Viking attack on Lindisfarne in July 793. These letters and Alcuin's poem on the subject, "De clade Lindisfarnensis monasterii", provide the only significant contemporary account of these events. In his description of the Viking attack, he wrote: ""Never before has such terror appeared in Britain. Behold the church of St Cuthbert, splattered with the blood of God's priests, robbed of its ornaments"."
In 796 Alcuin was in his sixties. He hoped to be free from court duties and upon the death of Abbot Itherius of Saint Martin at Tours, Charlemagne put Marmoutier Abbey into Alcuin's care, with the understanding that he should be available if the king ever needed his counsel. There he encouraged the work of the monks on the beautiful Carolingian minuscule script, ancestor of modern Roman typefaces.
Alcuin died on 19 May 804, some ten years before the emperor, and was buried at St. Martin's Church under an epitaph that partly read:
The majority of details on Alcuin's life come from his letters and poems. There are also autobiographical sections in Alcuin's poem on York and in the "Vita Alcuini", a "Life" written for him at Ferrières in the 820s, possibly based in part on the memories of Sigwulf, one of Alcuin's pupils.
The collection of mathematical and logical word problems entitled "Propositiones ad acuendos juvenes" ("Problems to Sharpen Youths") is sometimes attributed to Alcuin. In a 799 letter to Charlemagne the scholar claimed to have sent "certain figures of arithmetic for the joy of cleverness," which some scholars have identified with the "Propositiones."
The text contains about 53 mathematical word problems (with solutions), in no particular pedagogical order. Among the most famous of these problems are: four that involve river crossings, including the problem of three anxious brothers, each of whom has an unmarried sister whom he cannot leave alone with either of the other men lest she be defiled (Problem 17); the problem of the wolf, goat, and cabbage (Problem 18); and the problem of "the two adults and two children where the children weigh half as much as the adults" (Problem 19). Alcuin's sequence is the solution to one of the problems of that book.
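Problem 18, the wolf, goat, and cabbage puzzle, can be solved mechanically as a shortest-path search over bank assignments. The sketch below is a modern illustration, not Alcuin's method; the state encoding and names are my own, and the only constraint is the one the puzzle states (wolf and goat, or goat and cabbage, may not be left together without the farmer):

```python
from collections import deque

# Each state records which bank (0 = near, 1 = far) holds the farmer,
# wolf, goat, and cabbage.  The boat carries the farmer plus at most one
# item, so each move flips the farmer and optionally one co-located item.
ITEMS = ("farmer", "wolf", "goat", "cabbage")

def is_safe(state):
    """Unsafe if wolf+goat or goat+cabbage share a bank without the farmer."""
    farmer, wolf, goat, cabbage = state
    if wolf == goat != farmer:
        return False
    if goat == cabbage != farmer:
        return False
    return True

def solve():
    """Breadth-first search for the shortest sequence of crossings."""
    start, goal = (0, 0, 0, 0), (1, 1, 1, 1)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        farmer = state[0]
        for i, name in enumerate(ITEMS):
            if state[i] != farmer:       # item must be on the farmer's bank
                continue
            nxt = list(state)
            nxt[0] = 1 - farmer          # farmer always crosses
            if i > 0:
                nxt[i] = 1 - farmer      # optionally ferry one item
            nxt = tuple(nxt)
            if nxt in seen or not is_safe(nxt):
                continue
            seen.add(nxt)
            queue.append((nxt, path + [name if i else "alone"]))

print(solve())  # seven crossings, beginning and ending with the goat
```

The search confirms the classical answer: seven crossings are necessary, and the goat must be ferried first (every other opening move leaves a forbidden pair alone).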
Alcuin made the abbey school into a model of excellence and many students flocked to it. He had many manuscripts copied using outstandingly beautiful calligraphy, the Carolingian minuscule based on round and legible uncial letters. He wrote many letters to his English friends, to Arno, bishop of Salzburg and above all to Charlemagne. These letters (of which 311 are extant) are filled mainly with pious meditations, but they form an important source of information as to the literary and social conditions of the time and are the most reliable authority for the history of humanism during the Carolingian age. Alcuin trained the numerous monks of the abbey in piety, and it was in the midst of these pursuits that he died.
Alcuin is the most prominent figure of the Carolingian Renaissance, in which three main periods have been distinguished: in the first of these, up to the arrival of Alcuin at the court, the Italians occupy a central place; in the second, Alcuin and the Anglo-Saxons are dominant; in the third (from 804), the influence of Theodulf, the Visigoth is preponderant.
Alcuin also developed manuals used in his educational work – a grammar and works on rhetoric and dialectics. These are written in the form of dialogues, and in two of them the interlocutors are Charlemagne and Alcuin. He wrote several theological treatises: a "De fide Trinitatis", and commentaries on the Bible. Alcuin is credited with inventing the first known question mark, though it did not resemble the modern symbol.
Alcuin transmitted to the Franks the knowledge of Latin culture which had existed in Anglo-Saxon England. A number of his works still exist. Besides some graceful epistles in the style of Venantius Fortunatus, he wrote some long poems, and notably he is the author of a history (in verse) of the church at York, "Versus de patribus, regibus et sanctis Eboracensis ecclesiae". At the same time, he is noted for making one of the only explicit comments on Old English poetry surviving from the early Middle Ages, in a letter to one Speratus, the bishop of an unnamed English see (possibly Unwona of Leicester): 'verba Dei legantur in sacerdotali convivio: ibi decet lectorem audiri, non citharistam; sermones patrum, non carmina gentilium. Quid Hinieldus cum Christo?' ('Let God's words be read at the episcopal dinner-table. It is right that a reader should be heard, not a harpist, patristic discourse, not pagan song. What has Hinield to do with Christ?').
Historian John Boswell cited Alcuin's writings as demonstrating a personal outpouring of his internalized homosexual feelings. Others agree that Alcuin at times "comes perilously close to communicating openly his same sex desires", and this reflects the erotic subculture of the Carolingian monastic school, but also perhaps a 'queer space' where "erotic attachment and affections may be safely articulated". According to David Clark, passages in some of Alcuin's writings can be seen to display homosocial desire, even possibly homoerotic imagery. However, he argues that it is not possible to necessarily determine whether they were the result of an outward expression of erotic feelings on the part of Alcuin.
The interpretation of homosexual desire has been disputed by Allen Frantzen, who identifies Alcuin's language with that of medieval Christian "amicitia" or friendship. Douglas Dales and Rowan Williams say "the use of language drawn [by Alcuin] from the "Song of Songs" transforms apparently erotic language into something within Christian friendship – 'an ordained affection'."
Alcuin was also a close friend of Charlemagne's sister Gisela, Abbess of Chelles, and he hailed her as "a noble sister in the bond of sweet love". He wrote to Charlemagne's daughters Rotrudis and Bertha that "the devotion of my heart specially tends towards you both because of the familiarity and dedication you have shown me." He dedicated the last two books of his commentary on John's gospel to them both.
Despite inconclusive evidence of Alcuin's personal passions, he was clear in his own writings that the men of Sodom had been punished with fire for "sinning against nature with men" – a view commonly held by the Church at the time. Such sins, argued Alcuin, were therefore more serious than lustful acts with women, for which the earth was cleansed and revivified by the water of the Flood, and merit to be "withered by flames unto eternal barrenness."
In several churches of the Anglican Communion, Alcuin is celebrated on 20 May, the first available day after the day of his death (as Dunstan is celebrated on 19 May).
Alcuin College, one of the colleges of the University of York, England, is named after him.
In January 2020 Alcuin was the subject of the BBC Radio 4 programme "In Our Time".
For a complete census of Alcuin's works, see Marie-Hélène Jullien and Françoise Perelman, eds., "Clavis scriptorum latinorum medii aevi: Auctores Galliae 735–987. Tomus II: Alcuinus." Turnhout: Brepols, 1999.
Of Alcuin's letters, just over 310 have survived. | https://en.wikipedia.org/wiki?curid=1408 |
Amine
In organic chemistry, amines (, ) are compounds and functional groups that contain a basic nitrogen atom with a lone pair. Amines are formally derivatives of ammonia, wherein one or more hydrogen atoms have been replaced by a substituent such as an alkyl or aryl group (these may respectively be called alkylamines and arylamines; amines in which both types of substituent are attached to one nitrogen atom may be called alkylarylamines). Important amines include amino acids, biogenic amines, trimethylamine, and aniline. Inorganic derivatives of ammonia are also called amines, such as monochloramine (NClH2).
The substituent -NH2 is called an amino group.
Compounds with a nitrogen atom attached to a carbonyl group, thus having the structure R–CO–NR′R″, are called amides and have different chemical properties from amines.
Amines can be classified according to the nature and number of substituents on nitrogen. Aliphatic amines contain only H and alkyl substituents. Aromatic amines have the nitrogen atom connected to an aromatic ring.
Amines, alkyl and aryl alike, are organized into three subcategories (see table) based on the number of carbon atoms adjacent to the nitrogen:
A fourth subcategory is determined by the connectivity of the substituents attached to the nitrogen:
It is also possible to have four organic substituents on the nitrogen. These species are not amines but are quaternary ammonium cations and have a charged nitrogen center. Quaternary ammonium salts exist with many kinds of anions.
Amines are named in several ways. Typically, the compound is given the prefix "amino-" or the suffix "-amine". The prefix ""N"-" shows substitution on the nitrogen atom. An organic compound with multiple amino groups is called a diamine, triamine, tetraamine and so forth.
Systematic names for some common amines:
Hydrogen bonding significantly influences the properties of primary and secondary amines. For example, methyl and ethyl amines are gases under standard conditions, whereas the corresponding methyl and ethyl alcohols are liquids. Amines possess a characteristic ammonia smell; liquid amines have a distinctive "fishy" smell.
The nitrogen atom features a lone electron pair that can bind H+ to form an ammonium ion R3NH+. The lone electron pair is represented in this article by two dots above or next to the N. The water solubility of simple amines is enhanced by hydrogen bonding involving these lone electron pairs. Typically salts of ammonium compounds exhibit the following order of solubility in water: primary ammonium (RNH3+) > secondary ammonium (R2NH2+) > tertiary ammonium (R3NH+). Small aliphatic amines display significant solubility in many solvents, whereas those with large substituents are lipophilic. Aromatic amines, such as aniline, have their lone pair electrons conjugated into the benzene ring, thus their tendency to engage in hydrogen bonding is diminished. Their boiling points are high and their solubility in water is low.
Typically the presence of an amine functional group is deduced by a combination of techniques, including mass spectrometry as well as NMR and IR spectroscopies. 1H NMR signals for amines disappear upon treatment of the sample with D2O. In their infrared spectrum primary amines exhibit two N-H bands, whereas secondary amines exhibit only one.
Alkyl amines characteristically feature tetrahedral nitrogen centers. C-N-C and C-N-H angles approach the idealized angle of 109°. C-N distances are slightly shorter than C-C distances. The energy barrier for the nitrogen inversion of the stereocenter is about 7 kcal/mol for a trialkylamine. The interconversion has been compared to the inversion of an open umbrella in a strong wind.
Amines of the type NHRR′ and NRR′R″ are chiral: the nitrogen center bears four substituents counting the lone pair. Because of the low barrier to inversion, amines of the type NHRR′ cannot be obtained in optical purity. For chiral tertiary amines, NRR′R″ can only be resolved when the R, R′, and R″ groups are constrained in cyclic structures such as N-substituted aziridines (quaternary ammonium salts are resolvable).
In aromatic amines ("anilines"), nitrogen is often nearly planar owing to conjugation of the lone pair with the aryl substituent. The C-N distance is correspondingly shorter. In aniline, the C-N distance is the same as the C-C distances.
Like ammonia, amines are bases. Compared to alkali metal hydroxides, amines are weaker (see table for examples of conjugate acid "K"a values).
The basicity of amines depends on:
Owing to inductive effects, the basicity of an amine might be expected to increase with the number of alkyl groups on the amine. Correlations are complicated owing to the effects of solvation which are opposite the trends for inductive effects. Solvation effects also dominate the basicity of aromatic amines (anilines). For anilines, the lone pair of electrons on nitrogen delocalises into the ring, resulting in decreased basicity. Substituents on the aromatic ring, and their positions relative to the amino group, also affect basicity as seen in the table.
Solvation significantly affects the basicity of amines. N-H groups strongly interact with water, especially in ammonium ions. Consequently, the basicity of ammonia is enhanced by a factor of 10^11 by solvation. The intrinsic basicity of amines, i.e. the situation where solvation is unimportant, has been evaluated in the gas phase. In the gas phase, amines exhibit the basicities predicted from the electron-releasing effects of the organic substituents. Thus tertiary amines are more basic than secondary amines, which are more basic than primary amines, and finally ammonia is least basic. The order of pKb's (basicities in water) does not follow this order. Similarly aniline is more basic than ammonia in the gas phase, but ten thousand times less so in aqueous solution.
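These solution-phase basicity differences can be illustrated numerically. The sketch below uses the Henderson–Hasselbalch relation to estimate how much of each amine is protonated at neutral pH; the conjugate-acid pKa values are approximate literature figures assumed for illustration, and the function name is my own:

```python
def fraction_protonated(pka_conj_acid, ph):
    """Fraction of an amine present as its ammonium ion (BH+) at a given pH,
    from the Henderson-Hasselbalch relation: [BH+]/[B] = 10**(pKa - pH)."""
    ratio = 10 ** (pka_conj_acid - ph)  # [BH+]/[B]
    return ratio / (1 + ratio)

# Approximate aqueous pKa values of the conjugate acids (illustrative):
pka = {"ammonia": 9.25, "methylamine": 10.6, "aniline": 4.6}

for name, p in pka.items():
    print(f"{name:12s} fraction protonated at pH 7: {fraction_protonated(p, 7.0):.3f}")
```

At pH 7, methylamine and ammonia are almost fully protonated while aniline remains almost entirely free base, mirroring the weak basicity of anilines discussed above.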
In aprotic polar solvents such as DMSO, DMF, and acetonitrile the energy of solvation is not as high as in protic polar solvents like water and methanol. For this reason, the basicity of amines in these aprotic solvents is almost solely governed by the electronic effects.
Industrially significant amines are prepared from ammonia by alkylation with alcohols:
Unlike the reaction of amines with alkyl halides, the industrial method is green insofar that the coproduct is water. The reaction of amines and ammonia with alkyl halides is used for synthesis in the laboratory:
Such reactions, which are most useful for alkyl iodides and bromides, are rarely employed because the degree of alkylation is difficult to control. Selectivity can be improved via the Delépine reaction, although this is rarely employed on an industrial scale.
Disubstituted alkenes react with HCN in the presence of strong acids to give formamides, which can be decarbonylated. This method, the Ritter reaction, can be used industrially to produce tertiary amines such as tert-octylamine.
Hydroamination of alkenes is also widely practiced. The reaction is catalyzed by zeolite-based solid acids.
Via the process of hydrogenation, nitriles are reduced to amines using hydrogen in the presence of a nickel catalyst. Reactions are sensitive to acidic or alkaline conditions, which can cause hydrolysis of the –CN group. LiAlH4 is more commonly employed for the reduction of nitriles on the laboratory scale. Similarly, LiAlH4 reduces amides to amines. Many amines are produced from aldehydes and ketones via reductive amination, which can either proceed catalytically or stoichiometrically.
Aniline (C6H5NH2) and its derivatives are prepared by reduction of the nitroaromatics. In industry, hydrogen is the preferred reductant, whereas, in the laboratory, tin and iron are often employed.
Many methods exist for the preparation of amines, many of these methods being rather specialized.
Aside from their basicity, the dominant reactivity of amines is their nucleophilicity. Most primary amines are good ligands for metal ions to give coordination complexes. Amines are alkylated by alkyl halides. Acyl chlorides and acid anhydrides react with primary and secondary amines to form amides (the "Schotten–Baumann reaction").
Similarly, with sulfonyl chlorides, one obtains sulfonamides. This transformation, known as the Hinsberg reaction, is a chemical test for the presence of amines.
Because amines are basic, they neutralize acids to form the corresponding ammonium salts R3NH+. When formed from carboxylic acids and primary and secondary amines, these salts thermally dehydrate to form the corresponding amides.
Amines react with nitrous acid to give diazonium salts. The alkyl diazonium salts are of little synthetic importance because they are too unstable. The most important members are derivatives of aromatic amines such as aniline ("phenylamine") (A = aryl or naphthyl):
Anilines and naphthylamines form more stable diazonium salts, which can be isolated in the crystalline form. Diazonium salts undergo a variety of useful transformations involving replacement of the N2 group with anions. For example, cuprous cyanide gives the corresponding nitriles:
Aryldiazonium couple with electron-rich aromatic compounds such as a phenol to form azo compounds. Such reactions are widely applied to the production of dyes.
Imine formation is an important reaction. Primary amines react with ketones and aldehydes to form imines. In the case of formaldehyde (R′ = H), these products typically exist as cyclic trimers.
Reduction of these imines gives secondary amines:
Similarly, secondary amines react with ketones and aldehydes to form enamines:
An overview of the reactions of amines is given below:
Amines are ubiquitous in biology. The breakdown of amino acids releases amines, famously in the case of decaying fish which smell of trimethylamine. Many neurotransmitters are amines, including epinephrine, norepinephrine, dopamine, serotonin, and histamine. Protonated amino groups () are the most common positively charged moieties in proteins, specifically in the amino acid lysine. The anionic polymer DNA is typically bound to various amine-rich proteins. Additionally, the terminal charged primary ammonium on lysine forms salt bridges with carboxylate groups of other amino acids in polypeptides, which is one of the primary influences on the three-dimensional structures of proteins.
Primary aromatic amines are used as a starting material for the manufacture of azo dyes. They react with nitrous acid to form diazonium salts, which can undergo coupling reactions to form azo compounds. As azo compounds are highly coloured, they are widely used in dyeing industries, such as:
Many drugs are designed to mimic or to interfere with the action of natural amine neurotransmitters, exemplified by the amine drugs:
Aqueous monoethanolamine (MEA), diglycolamine (DGA), diethanolamine (DEA), diisopropanolamine (DIPA) and methyldiethanolamine (MDEA) are widely used industrially for removing carbon dioxide (CO2) and hydrogen sulfide (H2S) from natural gas and refinery process streams. They may also be used to remove CO2 from combustion gases and flue gases and may have potential for abatement of greenhouse gases. Related processes are known as sweetening.
Low molecular weight simple amines, such as ethylamine, are only weakly toxic, with LD50 values between 100 and 1000 mg/kg. They are skin irritants, especially as some are easily absorbed through the skin. Amines are a broad class of compounds, and more complex members of the class can be extremely bioactive, for example strychnine and heroin.
Absolute zero
Absolute zero is the lowest limit of the thermodynamic temperature scale, a state at which the enthalpy and entropy of a cooled ideal gas reach their minimum value, taken as zero kelvins. The fundamental particles of nature have minimum vibrational motion, retaining only quantum mechanical, zero-point energy-induced particle motion. The theoretical temperature is determined by extrapolating the ideal gas law; by international agreement, absolute zero is taken as −273.15° on the Celsius scale (International System of Units), which equals −459.67° on the Fahrenheit scale (United States customary units or Imperial units). The corresponding Kelvin and Rankine temperature scales set their zero points at absolute zero by definition.
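The scale equivalences stated above can be checked with a few one-line conversion helpers (a minimal sketch; the function names are illustrative):

```python
def celsius_to_kelvin(c):
    return c + 273.15            # kelvin: Celsius-sized degrees, zero at absolute zero

def celsius_to_fahrenheit(c):
    return c * 9.0 / 5.0 + 32.0

def fahrenheit_to_rankine(f):
    return f + 459.67            # Rankine: Fahrenheit-sized degrees, zero at absolute zero

absolute_zero_c = -273.15
print(celsius_to_kelvin(absolute_zero_c))                             # ≈ 0.0 K
print(celsius_to_fahrenheit(absolute_zero_c))                         # ≈ -459.67 °F
print(fahrenheit_to_rankine(celsius_to_fahrenheit(absolute_zero_c)))  # ≈ 0.0 °R
```

Both the Kelvin and Rankine scales place their zero at absolute zero, so the conversions agree there by construction.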
It is commonly thought of as the lowest temperature possible, but it is not the lowest "enthalpy" state possible, because all real substances begin to depart from the ideal gas when cooled as they approach the change of state to liquid, and then to solid; and the sum of the enthalpy of vaporization (gas to liquid) and enthalpy of fusion (liquid to solid) exceeds the ideal gas's change in enthalpy to absolute zero. In the quantum-mechanical description, matter (solid) at absolute zero is in its ground state, the point of lowest internal energy.
The laws of thermodynamics indicate that absolute zero cannot be reached using only thermodynamic means, because the temperature of the substance being cooled approaches the temperature of the cooling agent asymptotically, and a system at absolute zero still possesses quantum mechanical zero-point energy, the energy of its ground state at absolute zero. The kinetic energy of the ground state cannot be removed.
Scientists and technologists routinely achieve temperatures close to absolute zero, where matter exhibits quantum effects such as superconductivity and superfluidity.
At temperatures near absolute zero, nearly all molecular motion ceases and Δ"S" = 0 for any adiabatic process, where "S" is the entropy. In such a circumstance, pure substances can (ideally) form perfect crystals as "T" → 0. Max Planck's strong form of the third law of thermodynamics states that the entropy of a perfect crystal vanishes at absolute zero. The original Nernst "heat theorem" makes the weaker and less controversial claim that the entropy change for any isothermal process approaches zero as "T" → 0:
The implication is that the entropy of a perfect crystal approaches a constant value.
The Nernst postulate identifies the isotherm T = 0 as coincident with the adiabat S = 0, although other isotherms and adiabats are distinct. As no two adiabats intersect, no other adiabat can intersect the T = 0 isotherm. Consequently no adiabatic process initiated at nonzero temperature can lead to zero temperature. (≈ Callen, pp. 189–190)
A perfect crystal is one in which the internal lattice structure extends uninterrupted in all directions. The perfect order can be represented by translational symmetry along three (not usually orthogonal) axes. Every lattice element of the structure is in its proper place, whether it is a single atom or a molecular grouping. For substances that exist in two (or more) stable crystalline forms, such as diamond and graphite for carbon, there is a kind of "chemical degeneracy". The question remains whether both can have zero entropy at "T" = 0 even though each is perfectly ordered.
Perfect crystals never occur in practice; imperfections, and even entire amorphous material inclusions, can and do get "frozen in" at low temperatures, so transitions to more stable states do not occur.
Using the Debye model, the specific heat and entropy of a pure crystal are proportional to "T"^3, while the enthalpy and chemical potential are proportional to "T"^4. (Guggenheim, p. 111) These quantities drop toward their "T" = 0 limiting values and approach them with "zero" slopes. For the specific heats at least, the limiting value itself is definitely zero, as borne out by experiments to below 10 K. Even the less detailed Einstein model shows this curious drop in specific heats. In fact, all specific heats vanish at absolute zero, not just those of crystals. Likewise for the coefficient of thermal expansion. Maxwell's relations show that various other quantities also vanish. These phenomena were unanticipated.
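The Debye "T"^3 behaviour can be made concrete with a short calculation. The sketch below assumes a Debye temperature of roughly 343 K for copper (a commonly quoted literature value, not from the source) and shows the molar lattice heat capacity collapsing toward zero as "T" → 0:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
N_A = 6.02214076e23  # Avogadro constant, 1/mol

def debye_molar_heat_capacity(T, theta_D):
    """Low-temperature Debye T^3 law for the molar lattice heat capacity,
    C_V = (12*pi**4/5) * N_A * k_B * (T/theta_D)**3, valid for T << theta_D."""
    return (12 * math.pi ** 4 / 5) * N_A * K_B * (T / theta_D) ** 3

THETA_D_COPPER = 343.0  # K, assumed literature value
for T in (10.0, 1.0, 0.1):
    print(f"T = {T:5.1f} K  C_V ≈ {debye_molar_heat_capacity(T, THETA_D_COPPER):.3e} J/(mol K)")
```

Each tenfold drop in temperature reduces the heat capacity a thousandfold, which is why the "T" = 0 limit is approached with zero slope.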
Since the relation between changes in Gibbs free energy ("G"), the enthalpy ("H") and the entropy is Δ"G" = Δ"H" − "T"Δ"S",
it follows that as "T" decreases, Δ"G" and Δ"H" approach each other (so long as Δ"S" is bounded). Experimentally, it is found that all spontaneous processes (including chemical reactions) result in a decrease in "G" as they proceed toward equilibrium. If Δ"S" and/or "T" are small, the condition Δ"G" ≈ Δ"H" may hold.
This state of matter was first predicted by Satyendra Nath Bose and Albert Einstein in 1924–25. Bose first sent a paper to Einstein on the quantum statistics of light quanta (now called photons). Einstein was impressed, translated the paper from English to German and submitted it for Bose to the "Zeitschrift für Physik", which published it. Einstein then extended Bose's ideas to material particles (or matter) in two other papers.
Seventy years later, in 1995, the first gaseous condensate was produced by Eric Cornell and Carl Wieman at the University of Colorado at Boulder NIST-JILA lab, using a gas of rubidium atoms cooled to 170 nanokelvins (nK) ().
A record cold temperature of 450 ±80 picokelvins (pK) () in a BEC of sodium atoms was achieved in 2003 by researchers at Massachusetts Institute of Technology (MIT). The associated black-body (peak emittance) wavelength of 6,400 kilometers is roughly the radius of Earth.
Absolute, or thermodynamic, temperature is conventionally measured in kelvins (Celsius-scaled increments) and in the Rankine scale (Fahrenheit-scaled increments) with increasing rarity. Absolute temperature measurement is uniquely determined by a multiplicative constant which specifies the size of the "degree", so the "ratios" of two absolute temperatures, "T"2/"T"1, are the same in all scales. The most transparent definition of this standard comes from the Maxwell–Boltzmann distribution. It can also be found in Fermi–Dirac statistics (for particles of half-integer spin) and Bose–Einstein statistics (for particles of integer spin). All of these define the relative numbers of particles in a system as decreasing exponential functions of energy (at the particle level) over "kT", with "k" representing the Boltzmann constant and "T" representing the temperature observed at the macroscopic level.
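The exponential populations described above can be illustrated by evaluating the Boltzmann factor exp(−Δ"E"/"kT") directly. The energy gap below (about one milli-electronvolt) is an assumed figure for illustration:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_factor(delta_e, T):
    """Relative population of a state delta_e joules above another at temperature T (K)."""
    return math.exp(-delta_e / (K_B * T))

GAP = 1.602e-22  # assumed energy gap of ~1 meV, in joules
for T in (300.0, 30.0, 3.0):
    print(f"T = {T:5.1f} K  upper/lower population ratio ≈ {boltzmann_factor(GAP, T):.3e}")
```

As the temperature falls toward absolute zero, the upper state empties out exponentially: at 300 K the two states are nearly equally populated, while at 3 K only a few percent of particles occupy the upper state.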
Temperatures that are expressed as negative numbers on the familiar Celsius or Fahrenheit scales are simply colder than the zero points of those scales. Certain systems can achieve truly negative temperatures; that is, their thermodynamic temperature (expressed in kelvins) can be negative. A system with a truly negative temperature is not colder than absolute zero. Rather, a system with a negative temperature is hotter than "any" system with a positive temperature, in the sense that if a negative-temperature system and a positive-temperature system come in contact, heat flows from the negative to the positive-temperature system.
Most familiar systems cannot achieve negative temperatures because adding energy always increases their entropy. However, some systems have a maximum amount of energy that they can hold, and as they approach that maximum energy their entropy actually begins to decrease. Because temperature is defined by the relationship between energy and entropy, such a system's temperature becomes negative, even though energy is being added. As a result, the Boltzmann factor for states of systems at negative temperature increases rather than decreases with increasing state energy. Therefore, no complete system, i.e. including the electromagnetic modes, can have negative temperatures, since there is no highest energy state, so that the sum of the probabilities of the states would diverge for negative temperatures. However, for quasi-equilibrium systems (e.g. spins out of equilibrium with the electromagnetic field) this argument does not apply, and negative effective temperatures are attainable.
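The entropy turnover described above can be seen in a toy model of "N" two-level spins (a sketch with all numbers illustrative): once more than half the spins occupy the upper level, adding energy reduces the number of available microstates, so d"S"/d"E", and hence 1/"T", becomes negative.

```python
import math

def log_multiplicity(n_excited, n_total):
    """ln of the number of microstates with n_excited of n_total two-level
    spins in the upper level: ln C(N, n), computed exactly via lgamma."""
    return (math.lgamma(n_total + 1)
            - math.lgamma(n_excited + 1)
            - math.lgamma(n_total - n_excited + 1))

N = 1000
# dS/dE (up to constants) as a finite difference in the excitation number:
slope_below_half = log_multiplicity(301, N) - log_multiplicity(300, N)  # entropy rising
slope_above_half = log_multiplicity(701, N) - log_multiplicity(700, N)  # entropy falling
print(slope_below_half, slope_above_half)
```

Below half filling the slope is positive (ordinary positive temperature); above half filling it is negative, which is exactly the bounded-energy situation the paragraph describes.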
On 3 January 2013, physicists announced that for the first time they had created a quantum gas made up of potassium atoms with a negative temperature in motional degrees of freedom.
One of the first to discuss the possibility of an absolute minimal temperature was Robert Boyle. His 1665 "New Experiments and Observations touching Cold" articulated the dispute known as the "primum frigidum". The concept was well known among naturalists of the time. Some contended an absolute minimum temperature occurred within earth (as one of the four classical elements), others within water, others air, and some more recently within nitre. But all of them seemed to agree that, "There is some body or other that is of its own nature supremely cold and by participation of which all other bodies obtain that quality."
The question whether there is a limit to the degree of coldness possible, and, if so, where the zero must be placed, was first addressed by the French physicist Guillaume Amontons in 1702, in connection with his improvements in the air thermometer. His instrument indicated temperatures by the height at which a certain mass of air sustained a column of mercury—the volume, or "spring" of the air varying with temperature. Amontons therefore argued that the zero of his thermometer would be that temperature at which the spring of the air was reduced to nothing. He used a scale that marked the boiling point of water at +73 and the melting point of ice at +, so that the zero was equivalent to about −240 on the Celsius scale. Amontons held that the absolute zero cannot be reached, so never attempted to compute it explicitly.
The value of −240 °C, or "431 divisions [in Fahrenheit's thermometer] below the cold of freezing water" was published by George Martine in 1740.
This close approximation to the modern value of −273.15 °C for the zero of the air thermometer was further improved upon in 1779 by Johann Heinrich Lambert, who observed that might be regarded as absolute cold.
Values of this order for the absolute zero were not, however, universally accepted about this period. Pierre-Simon Laplace and Antoine Lavoisier, in their 1780 treatise on heat, arrived at values ranging from 1,500 to 3,000 below the freezing point of water, and thought that in any case it must be at least 600 below. John Dalton in his "Chemical Philosophy" gave ten calculations of this value, and finally adopted −3,000 °C as the natural zero of temperature.
After James Prescott Joule had determined the mechanical equivalent of heat, Lord Kelvin approached the question from an entirely different point of view, and in 1848 devised a scale of absolute temperature that was independent of the properties of any particular substance and was based on Carnot's theory of the Motive Power of Heat and data published by Henri Victor Regnault. It followed from the principles on which this scale was constructed that its zero was placed at −273 °C, at almost precisely the same point as the zero of the air thermometer. This value was not immediately accepted; values ranging from to , derived from laboratory measurements and observations of astronomical refraction, remained in use in the early 20th century.
With a better theoretical understanding of absolute zero, scientists were eager to reach this temperature in the lab. By 1845, Michael Faraday had managed to liquefy most gases then known to exist, and reached a new record for lowest temperatures by reaching . Faraday believed that certain gases, such as oxygen, nitrogen, and hydrogen, were permanent gases and could not be liquefied. Decades later, in 1873 Dutch theoretical scientist Johannes Diderik van der Waals demonstrated that these gases could be liquefied, but only under conditions of very high pressure and very low temperatures. In 1877, Louis Paul Cailletet in France and Raoul Pictet in Switzerland succeeded in producing the first droplets of liquid air . This was followed in 1883 by the production of liquid oxygen by the Polish professors Zygmunt Wróblewski and Karol Olszewski.
Scottish chemist and physicist James Dewar and Dutch physicist Heike Kamerlingh Onnes took on the challenge to liquefy the remaining gases, hydrogen and helium. In 1898, after 20 years of effort, Dewar was first to liquefy hydrogen, reaching a new low-temperature record of . However, Kamerlingh Onnes, his rival, was the first to liquefy helium, in 1908, using several precooling stages and the Hampson–Linde cycle. He lowered the temperature to the boiling point of helium . By reducing the pressure of the liquid helium he achieved an even lower temperature, near 1.5 K. These were the coldest temperatures achieved on Earth at the time and his achievement earned him the Nobel Prize in 1913. Kamerlingh Onnes would continue to study the properties of materials at temperatures near absolute zero, describing superconductivity and superfluids for the first time.
The average temperature of the universe today is approximately , based on measurements of cosmic microwave background radiation.
Absolute zero cannot be achieved, although it is possible to reach temperatures close to it through the use of cryocoolers, dilution refrigerators, and nuclear adiabatic demagnetization. The use of laser cooling has produced temperatures less than a billionth of a kelvin. At very low temperatures in the vicinity of absolute zero, matter exhibits many unusual properties, including superconductivity, superfluidity, and Bose–Einstein condensation. To study such phenomena, scientists have worked to obtain even lower temperatures. | https://en.wikipedia.org/wiki?curid=1418 |
Adiabatic process
An adiabatic process occurs without transferring heat or mass between a thermodynamic system and its surroundings. Unlike an isothermal process, an adiabatic process transfers energy to the surroundings only as work. It also conceptually undergirds the theory used to expound the first law of thermodynamics and is therefore a key thermodynamic concept.
Some chemical and physical processes occur too rapidly for energy to enter or leave the system as heat, allowing a convenient "adiabatic approximation". For example, the adiabatic flame temperature uses this approximation to calculate the upper limit of flame temperature by assuming combustion loses no heat to its surroundings.
In meteorology and oceanography, adiabatic cooling produces condensation of moisture or salinity, oversaturating the parcel. Therefore, the excess must be removed. There, the process becomes a "pseudo-adiabatic process" whereby the liquid water or salt that condenses is assumed to be removed upon formation by idealized instantaneous precipitation. The pseudoadiabatic process is only defined for expansion because a compressed parcel becomes warmer and remains undersaturated.
A process without transfer of heat or matter to or from a system, so that , is called adiabatic, and such a system is said to be adiabatically isolated. The assumption that a process is adiabatic is a frequently made simplifying assumption. For example, the compression of a gas within a cylinder of an engine is assumed to occur so rapidly that on the time scale of the compression process, little of the system's energy can be transferred out as heat to the surroundings. Even though the cylinders are not insulated and are quite conductive, that process is idealized to be adiabatic. The same can be said to be true for the expansion process of such a system.
The assumption of adiabatic isolation is useful and often combined with other such idealizations to calculate a good first approximation of a system's behaviour. For example, according to Laplace, when sound travels in a gas, there is no time for heat conduction in the medium, and so the propagation of sound is adiabatic. For such an adiabatic process, the modulus of elasticity (Young's modulus) can be expressed as "E" = γ"P", where γ is the ratio of specific heats at constant pressure and at constant volume (γ = Cp/Cv) and "P" is the pressure of the gas.
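Laplace's adiabatic correction can be checked numerically. The sea-level values for air below (γ = 1.4, "P" = 101325 Pa, ρ = 1.225 kg/m3) are typical assumed figures, not from the source:

```python
import math

def speed_of_sound(gamma, pressure, density):
    """Speed of sound c = sqrt(gamma * P / rho); gamma = 1 recovers
    Newton's isothermal estimate, gamma = 1.4 Laplace's adiabatic one."""
    return math.sqrt(gamma * pressure / density)

c_adiabatic = speed_of_sound(1.4, 101325.0, 1.225)   # Laplace
c_isothermal = speed_of_sound(1.0, 101325.0, 1.225)  # Newton
print(f"adiabatic ≈ {c_adiabatic:.1f} m/s, isothermal ≈ {c_isothermal:.1f} m/s")
```

The adiabatic value of roughly 340 m/s matches the observed speed of sound in air, whereas the isothermal assumption underestimates it by about 15 percent.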
For a closed system, one may write the first law of thermodynamics as Δ"U" = "Q" − "W", where Δ"U" denotes the change of the system's internal energy, "Q" the quantity of energy added to it as heat, and "W" the work done by the system on its surroundings.
Naturally occurring adiabatic processes are irreversible (entropy is produced).
The transfer of energy as work into an adiabatically isolated system can be imagined as being of two idealized extreme kinds. In one such kind, no entropy is produced within the system (no friction, viscous dissipation, etc.), and the work is only pressure-volume work (denoted by "P" d"V"). In nature, this ideal kind occurs only approximately because it demands an infinitely slow process and no sources of dissipation.
The other extreme kind of work is isochoric work (d"V" = 0), for which energy is added as work solely through friction or viscous dissipation within the system. A stirrer that transfers energy to a viscous fluid of an adiabatically isolated system with rigid walls, without phase change, will cause a rise in temperature of the fluid, but that work is not recoverable. Isochoric work is irreversible. The second law of thermodynamics observes that a natural process, of transfer of energy as work, always consists at least of isochoric work and often both of these extreme kinds of work. Every natural process, adiabatic or not, is irreversible, with Δ"S" > 0, as friction or viscosity are always present to some extent.
The adiabatic compression of a gas causes a rise in temperature of the gas. Adiabatic expansion against pressure, or a spring, causes a drop in temperature. In contrast, free expansion is an isothermal process for an ideal gas.
Adiabatic heating occurs when the pressure of a gas is increased by work done on it by its surroundings, e.g., a piston compressing a gas contained within a cylinder and raising its temperature; in many practical situations, heat conduction through walls is slow compared with the compression time. This finds practical application in diesel engines, which rely on the lack of heat dissipation during the compression stroke to elevate the fuel vapor temperature sufficiently to ignite it.
Adiabatic heating occurs in the Earth's atmosphere when an air mass descends, for example, in a katabatic wind, Foehn wind, or chinook wind flowing downhill over a mountain range. When a parcel of air descends, the pressure on the parcel increases. Because of this increase in pressure, the parcel's volume decreases and its temperature increases as work is done on the parcel of air, thus increasing its internal energy, which manifests itself by a rise in the temperature of that mass of air. The parcel of air can only slowly dissipate the energy by conduction or radiation (heat), and to a first approximation it can be considered adiabatically isolated and the process an adiabatic process.
Adiabatic cooling occurs when the pressure on an adiabatically isolated system is decreased, allowing it to expand, thus causing it to do work on its surroundings. When the pressure applied on a parcel of air is reduced, the air in the parcel is allowed to expand; as the volume increases, the temperature falls as its internal energy decreases. Adiabatic cooling occurs in the Earth's atmosphere with orographic lifting and lee waves, and this can form pileus or lenticular clouds.
Adiabatic cooling does not have to involve a fluid. One technique used to reach very low temperatures (thousandths and even millionths of a degree above absolute zero) is via adiabatic demagnetisation, where the change in magnetic field on a magnetic material is used to provide adiabatic cooling. Also, the contents of an expanding universe can be described (to first order) as an adiabatically cooling fluid. (See heat death of the universe.)
Rising magma also undergoes adiabatic cooling before eruption, particularly significant in the case of magmas that rise quickly from great depths such as kimberlites.
In the Earth's convecting mantle (the asthenosphere) beneath the lithosphere, the mantle temperature is approximately an adiabat. The temperature decreases slightly toward shallower depths because the pressure on the material is lower there.
Such temperature changes can be quantified using the ideal gas law, or the hydrostatic equation for atmospheric processes.
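As a sketch of that last remark, the dry adiabatic lapse rate for an ascending or descending air parcel follows from combining the hydrostatic equation with the first law for an adiabatic process, giving dT/dz = −g/c_p. The values below are standard textbook figures for dry air, not taken from this article:

```python
# Dry adiabatic lapse rate: dT/dz = -g / c_p for a dry air parcel.
g = 9.81        # gravitational acceleration, m/s^2
c_p = 1004.0    # specific heat of dry air at constant pressure, J/(kg K)

lapse_rate = g / c_p                       # K per metre of ascent
print(round(lapse_rate * 1000, 2), "K per km")   # ≈ 9.77 K/km
```

This is the familiar rule of thumb that a dry air parcel cools by roughly 10 °C for every kilometre it rises.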
In practice, no process is truly adiabatic. Many processes rely on a large difference in time scales of the process of interest and the rate of heat dissipation across a system boundary, and thus are approximated by using an adiabatic assumption. There is always some heat loss, as no perfect insulators exist.
The mathematical equation for an ideal gas undergoing a reversible (i.e., no entropy generation) adiabatic process can be represented by the polytropic process equation

P V^n = constant,

where P is pressure, V is volume, and for this case n = γ, where

γ = C_P / C_V = (f + 2) / f,

C_P being the specific heat for constant pressure, C_V being the specific heat for constant volume, and f the number of degrees of freedom of the gas molecules.
For a monatomic ideal gas, γ = 5/3, and for a diatomic gas (such as nitrogen and oxygen, the main components of air), γ = 7/5. Note that the above formula is only applicable to classical ideal gases and not Bose–Einstein or Fermi gases.
For reversible adiabatic processes, it is also true that

P^(1−γ) T^γ = constant,

where T is the absolute temperature. This can also be written as

T V^(γ−1) = constant.
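These adiabatic relations can be checked numerically. The sketch below assumes a diatomic ideal gas (γ = 7/5) and arbitrary illustrative initial conditions; it verifies that P V^γ, T V^(γ−1), and P^(1−γ) T^γ are each unchanged along a reversible adiabat:

```python
# Check that P*V**gamma, T*V**(gamma-1), and P**(1-gamma)*T**gamma
# are all conserved along a reversible adiabat of an ideal gas.
n_R = 0.333        # nR in J/K (illustrative value)
gamma = 7 / 5      # diatomic gas

P1, V1 = 100_000.0, 0.001          # Pa, m^3
T1 = P1 * V1 / n_R                 # ideal gas law T = PV/nR

V2 = 0.0005                        # compress to half the volume
P2 = P1 * (V1 / V2) ** gamma       # adiabatic relation
T2 = P2 * V2 / n_R

for inv in (lambda P, V, T: P * V**gamma,
            lambda P, V, T: T * V**(gamma - 1),
            lambda P, V, T: P**(1 - gamma) * T**gamma):
    assert abs(inv(P1, V1, T1) - inv(P2, V2, T2)) / inv(P1, V1, T1) < 1e-9
print("all three adiabatic invariants conserved")
```

The three invariants are algebraically equivalent (each follows from the others via PV = nRT), which is why a single adiabatic relation suffices to fix the final state.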
The compression stroke in a gasoline engine can be used as an example of adiabatic compression. The model assumptions are: the uncompressed volume of the cylinder is one litre (1 L = 1000 cm3 = 0.001 m3); the gas within is the air consisting of molecular nitrogen and oxygen only (thus a diatomic gas with 5 degrees of freedom, and so ); the compression ratio of the engine is 10:1 (that is, the 1 L volume of uncompressed gas is reduced to 0.1 L by the piston); and the uncompressed gas is at approximately room temperature and pressure (a warm room temperature of ~27 °C, or 300 K, and a pressure of 1 bar = 100 kPa, i.e. typical sea-level atmospheric pressure).
so our adiabatic constant for this example is

P V^γ = 100,000 Pa × (0.001 m^3)^(7/5) ≈ 6.31 Pa·m^4.2.
The gas is now compressed to a 0.1 L (0.0001 m^3) volume (we will assume this happens quickly enough that no heat can enter or leave the gas through the walls). The adiabatic constant remains the same, but with the resulting pressure P2 unknown:

P2 V2^γ = constant = 6.31 Pa·m^4.2 = P2 × (0.0001 m^3)^(7/5),

so solving for P2:

P2 = 6.31 Pa·m^4.2 / (0.0001 m^3)^(7/5) ≈ 2.51 × 10^6 Pa,
or 25.1 bar. Note that this pressure increase is more than a simple 10:1 compression ratio would indicate; this is because the gas is not only compressed, but the work done to compress the gas also increases its internal energy, which manifests itself by a rise in the gas temperature and an additional rise in pressure above what would result from a simplistic calculation of 10 times the original pressure.
We can solve for the temperature of the compressed gas in the engine cylinder as well, using the ideal gas law, "PV" = "nRT" ("n" is amount of gas in moles and "R" the gas constant for that gas). Our initial conditions being 100 kPa of pressure, 1 L volume, and 300 K of temperature, our experimental constant ("nR") is:

nR = PV/T = (100,000 Pa × 0.001 m^3) / 300 K ≈ 0.333 J/K.
We know the compressed gas has V2 = 0.1 L and P2 ≈ 2.51 × 10^6 Pa, so we can solve for temperature:

T2 = P2 V2 / nR = (2.51 × 10^6 Pa × 0.0001 m^3) / 0.333 J/K ≈ 753 K.
That is a final temperature of 753 K, or 479 °C, or 896 °F, well above the ignition point of many fuels. This is why a high-compression engine requires fuels specially formulated to not self-ignite (which would cause engine knocking when operated under these conditions of temperature and pressure), or that a supercharger with an intercooler to provide a pressure boost but with a lower temperature rise would be advantageous. A diesel engine operates under even more extreme conditions, with compression ratios of 16:1 or more being typical, in order to provide a very high gas temperature, which ensures immediate ignition of the injected fuel.
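The arithmetic of this worked example can be reproduced in a few lines; the sketch below uses exactly the stated model values (1 L compressed to 0.1 L, γ = 7/5, 100 kPa, 300 K):

```python
# Adiabatic compression in the gasoline-engine example above.
gamma = 7 / 5                           # diatomic air, 5 degrees of freedom

P1, V1, T1 = 100_000.0, 0.001, 300.0    # Pa, m^3, K
V2 = 0.0001                             # 10:1 compression ratio

constant = P1 * V1**gamma               # adiabatic constant, ≈ 6.31 Pa m^4.2
P2 = constant / V2**gamma               # ≈ 2.51e6 Pa, i.e. ≈ 25.1 bar

n_R = P1 * V1 / T1                      # nR from the ideal gas law
T2 = P2 * V2 / n_R                      # ≈ 753.6 K (the text rounds to 753 K)

print(round(constant, 2), round(P2 / 1e5, 1), round(T2, 1))
```

Note that the temperature rise (factor 10^0.4 ≈ 2.51) comes entirely from the compression work, since no heat crosses the boundary.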
For an adiabatic free expansion of an ideal gas, the gas is contained in an insulated container and then allowed to expand in a vacuum. Because there is no external pressure for the gas to expand against, the work done by or on the system is zero. Since this process does not involve any heat transfer or work, the first law of thermodynamics then implies that the net internal energy change of the system is zero. For an ideal gas, the temperature remains constant because the internal energy only depends on temperature in that case. Since at constant temperature, the entropy is proportional to the volume, the entropy increases in this case, therefore this process is irreversible.
The definition of an adiabatic process is that heat transfer to the system is zero, Q = 0. Then, according to the first law of thermodynamics,

dU + δW = 0,     (1)

where dU is the change in the internal energy of the system and δW is work done "by" the system. Any work (δW) done must be done at the expense of internal energy U, since no heat is being supplied from the surroundings. Pressure–volume work δW done "by" the system is defined as

δW = P dV.     (2)
However, P does not remain constant during an adiabatic process but instead changes along with V.
It is desired to know how the values of dP and dV relate to each other as the adiabatic process proceeds. For an ideal gas (recall the ideal gas law PV = nRT), the internal energy is given by

U = α nRT,     (3)

where α is the number of degrees of freedom divided by two, R is the universal gas constant and n is the number of moles in the system (a constant).
Differentiating equation (3) yields

dU = α nR dT.     (4)

Equation (4) is often expressed as dU = n C_V dT because C_V = αR.
Now substitute equations (2) and (4) into equation (1) to obtain

−P dV = α nR dT,

factorize −P using the ideal gas law (P = nRT/V):

−(nRT/V) dV = α nR dT,

and divide both sides by −nRT:

dV/V = −α dT/T.

After integrating the left and right sides from V1 to V2 and from T1 to T2 and changing the sides respectively,

ln(V2/V1) = −α ln(T2/T1).

Exponentiate both sides, substitute α with 1/(γ − 1), the heat capacity ratio,

(V2/V1)^(γ−1) = (T2/T1)^(−1),

and eliminate the negative sign to obtain

(V2/V1)^(γ−1) = T1/T2.

Therefore,

(V1/V2)^(γ−1) = T2/T1,

and

T1 V1^(γ−1) = T2 V2^(γ−1).

Substituting the ideal gas law T = PV/nR into the above, we obtain

P1 V1^γ = P2 V2^γ,

which simplifies to

P V^γ = constant.
The change in internal energy of a system, measured from state 1 to state 2, is equal to

ΔU = α nR (T2 − T1).     (1)

At the same time, the work done by the pressure–volume changes as a result from this process, is equal to

W = ∫ from V1 to V2 of P dV.     (2)

Since we require the process to be adiabatic, the following equation needs to be true

ΔU + W = 0.     (3)

By the previous derivation,

P V^γ = constant = P1 V1^γ.     (4)

Rearranging (4) gives

P = P1 (V1/V)^γ.

Substituting this into (2) gives

W = ∫ from V1 to V2 of P1 (V1/V)^γ dV.

Integrating we obtain the expression for work,

W = P1 V1^γ (V2^(1−γ) − V1^(1−γ)) / (1 − γ).

Substituting γ − 1 = 1/α in the second term,

W = −α P1 V1^γ (V2^(−1/α) − V1^(−1/α)).

Rearranging,

W = −α P1 V1 ((V2/V1)^(−1/α) − 1).

Using the ideal gas law and assuming a constant molar quantity (as often happens in practical cases),

W = −α nR T1 ((V2/V1)^(−1/α) − 1).

By the continuous formula,

P2/P1 = (V2/V1)^(−γ),

or

V2/V1 = (P2/P1)^(−1/γ).

Substituting into the previous expression for W,

W = −α nR T1 ((P2/P1)^((γ−1)/γ) − 1).

Substituting this expression and (1) into (3) gives

α nR (T2 − T1) = α nR T1 ((P2/P1)^((γ−1)/γ) − 1).

Simplifying,

T2 = T1 (P2/P1)^((γ−1)/γ).
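As a numeric sanity check on this derivation, the closed-form work expression should balance the internal-energy change (ΔU = −W) for an adiabatic compression. The sketch below reuses the gasoline-engine values from the earlier example:

```python
# Verify W = -alpha*nR*T1*((P2/P1)**((gamma-1)/gamma) - 1)
# against Delta-U = alpha*nR*(T2 - T1) for an adiabatic compression.
alpha = 5 / 2                   # f/2 for a diatomic gas
gamma = (alpha + 1) / alpha     # = 7/5

P1, V1, T1 = 100_000.0, 0.001, 300.0    # Pa, m^3, K
n_R = P1 * V1 / T1                      # nR from the ideal gas law

P2 = P1 * 10**gamma                     # pressure after a 10:1 volume compression
T2 = T1 * (P2 / P1) ** ((gamma - 1) / gamma)

W = -alpha * n_R * T1 * ((P2 / P1) ** ((gamma - 1) / gamma) - 1)
delta_U = alpha * n_R * (T2 - T1)

assert abs(delta_U + W) < 1e-6   # first law with Q = 0: dU + dW = 0
print(round(W, 1), "J of work done by the gas (negative: work done on the gas)")
```

The work comes out to roughly −378 J for this compression, matching the internal-energy rise that drives the gas from 300 K to about 754 K.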
An adiabat is a curve of constant entropy in a pressure–volume ("P"–"V") diagram. The properties of adiabats may be read from the classical behaviour of ideal gases, except in the region where "PV" becomes small (low temperature), where quantum effects become important. On a "P"–"V" diagram with a superposition of adiabats and isotherms (volume on the horizontal axis, pressure on the vertical axis), the adiabats are isentropic and are steeper than the isotherms, since a gas expanding adiabatically also cools.
The term "adiabatic" is an anglicization of the Greek term ἀδιάβατος, "impassable" (used by Xenophon of rivers).
It was used in the thermodynamic sense by Rankine (1866) and adopted by Maxwell in 1871 (who explicitly attributed the term to Rankine).
The etymological origin corresponds here to an impossibility of transfer of energy as heat and of transfer of matter across the wall.
The Greek word ἀδιάβατος is formed from privative ἀ- ("not") and διαβατός, "passable", in turn deriving from διά ("through"), and βαῖνειν ("to walk, go, come").
The adiabatic process has been important for thermodynamics since its early days. It was important in the work of Joule because it provided a way of nearly directly relating quantities of heat and work.
Energy can enter or leave a thermodynamic system enclosed by walls that prevent mass transfer only as heat or work. Therefore, a quantity of work in such a system can be related almost directly to an equivalent quantity of heat in a cycle of two limbs. The first limb is an isochoric adiabatic work process increasing the system's internal energy; the second, an isochoric and workless heat transfer returning the system to its original state. Accordingly, Rankine measured quantity of heat in units of work, rather than as a calorimetric quantity. In 1854, Rankine used a quantity that he called "the thermodynamic function" that later was called entropy, and at that time he wrote also of the "curve of no transmission of heat", which he later called an adiabatic curve. Besides its two isothermal limbs, Carnot's cycle has two adiabatic limbs.
For the foundations of thermodynamics, the conceptual importance of this was emphasized by Bryan, by Carathéodory, and by Born. The reason is that calorimetry presupposes a type of temperature as already defined before the statement of the first law of thermodynamics, such as one based on empirical scales. Such a presupposition involves making the distinction between empirical temperature and absolute temperature. Rather, the definition of absolute thermodynamic temperature is best left till the second law is available as a conceptual basis.
In the eighteenth century, the law of conservation of energy was not yet fully formulated or established, and the nature of heat was debated. One approach to these problems was to regard heat, measured by calorimetry, as a primary substance that is conserved in quantity. By the middle of the nineteenth century, it was recognized as a form of energy, and the law of conservation of energy was thereby also recognized. The view that eventually established itself, and is currently regarded as right, is that the law of conservation of energy is a primary axiom, and that heat is to be analyzed as consequential. In this light, heat cannot be a component of the total energy of a single body because it is not a state variable but, rather, a variable that describes a transfer between two bodies. The adiabatic process is important because it is a logical ingredient of this current view.
This present article is written from the viewpoint of macroscopic thermodynamics, and the word "adiabatic" is used in this article in the traditional way of thermodynamics, introduced by Rankine. It is pointed out in the present article that, for example, if a compression of a gas is rapid, then there is little time for heat transfer to occur, even when the gas is not adiabatically isolated by a definite wall. In this sense, a rapid compression of a gas is sometimes approximately or loosely said to be adiabatic, though often far from isentropic.
Quantum mechanics and quantum statistical mechanics, however, use the word "adiabatic" in a very different sense, one that can at times seem almost opposite to the classical thermodynamic sense. In quantum theory, the word "adiabatic" can mean something perhaps near isentropic, or perhaps near quasi-static, but the usage of the word is very different between the two disciplines.
On the one hand, in quantum theory, if a perturbative element of compressive work is done almost infinitely slowly (that is to say quasi-statically), it is said to have been done "adiabatically". The idea is that the shapes of the eigenfunctions change slowly and continuously, so that no quantum jump is triggered, and the change is virtually reversible. While the occupation numbers are unchanged, nevertheless there is change in the energy levels of one-to-one corresponding, pre- and post-compression, eigenstates. Thus a perturbative element of work has been done without heat transfer and without introduction of random change within the system. For example, Max Born writes "Actually, it is usually the 'adiabatic' case with which we have to do: i.e. the limiting case where the external force (or the reaction of the parts of the system on each other) acts very slowly. In this case, to a very high approximation
that is, there is no probability for a transition, and the system is in the initial state after cessation of the perturbation. Such a slow perturbation is therefore reversible, as it is classically."
On the other hand, in quantum theory, if a perturbative element of compressive work is done rapidly, it randomly changes the occupation numbers of the eigenstates, as well as changing their shapes. In that theory, such a rapid change is said not to be "adiabatic", and the contrary word "diabatic" is applied to it. One might guess that perhaps Clausius, if he were confronted with this, in the now-obsolete language he used in his day, would have said that "internal work" was done and that 'heat was generated though not transferred'.
In classical thermodynamics, such a rapid change would still be called adiabatic because the system is adiabatically isolated, and there is no transfer of energy as heat. The strong irreversibility of the change, due to viscosity or other entropy production, does not impinge on this classical usage.
Thus for a mass of gas, in macroscopic thermodynamics, words are so used that a compression is sometimes loosely or approximately said to be adiabatic if it is rapid enough to avoid heat transfer, even if the system is not adiabatically isolated. But in quantum statistical theory, a compression is not called adiabatic if it is rapid, even if the system is adiabatically isolated in the classical thermodynamic sense of the term. The words are used differently in the two disciplines, as stated just above.
Amide
In organic chemistry, an amide, also known as an organic amide or a carboxamide, is a compound with the general formula RC(=O)NR′R″, where R, R′, and R″ represent organic groups or hydrogen atoms. The amide group is called a peptide bond when it is part of the main chain of a protein, and an isopeptide bond when it occurs in a side chain, such as in the amino acids asparagine and glutamine. It can be viewed as a derivative of a carboxylic acid RC(=O)OH with the hydroxyl group –OH replaced by an amine group –NR′R″; or, equivalently, an acyl (alkanoyl) group RC(=O)– joined to an amine group.
Common examples of amides are acetamide (H3C–CONH2), benzamide (C6H5–CONH2), and dimethylformamide (HCON(CH3)2).
Amides are qualified as primary, secondary, and tertiary according to whether the amine subgroup has the form –NH2, –NHR, or –NRR′, where R and R′ are groups other than hydrogen.
The core –C(=O)N= of amides is called the amide group (specifically, carboxamide group).
Amides are pervasive in nature and technology. Proteins and important plastics such as nylons and the aramids Twaron and Kevlar are polymers whose units are connected by amide groups (polyamides); these linkages are easily formed, confer structural rigidity, and resist hydrolysis. Amides include many other important biological compounds, as well as many drugs like paracetamol, penicillin and LSD. Low-molecular-weight amides, such as dimethylformamide, are common solvents.
In the usual nomenclature, one adds the term "amide" to the stem of the parent acid's name. For instance, the amide derived from acetic acid is named acetamide (CH3CONH2). IUPAC recommends ethanamide, but this and related formal names are rarely encountered. When the amide is derived from a primary or secondary amine, the substituents on nitrogen are indicated first in the name. Thus, the amide formed from dimethylamine and acetic acid is "N","N"-dimethylacetamide (CH3CONMe2, where Me = CH3). Usually even this name is simplified to dimethylacetamide. Cyclic amides are called lactams; they are necessarily secondary or tertiary amides.
The term "amide" has more than one common pronunciation in English, and some speakers use different pronunciations for the two main senses, distinguishing the carbonyl–nitrogen compound from the amide anion.
The lone pair of electrons on the nitrogen atom is delocalized into the carbonyl group, thus forming a partial double bond between nitrogen and carbon. In fact the O, C and N atoms have molecular orbitals occupied by delocalized electrons, forming a conjugated system. Consequently, the arrangement of the three bonds of the nitrogen in amides is not pyramidal (as in amines) but planar.
The structure of an amide can be described also as a resonance between two alternative structures:
It is estimated that for acetamide, structure A makes a 62% contribution to the structure, while structure B makes a 28% contribution. (These figures do not sum to 100% because there are additional, less important resonance forms that are not depicted above.) There is also a hydrogen bond present between the hydrogen and nitrogen atoms of the active groups. Resonance is largely prevented in the very strained quinuclidone.
Compared to amines, amides are very weak bases. While the conjugate acid of an amine has a p"K"a of about 9.5, the conjugate acid of an amide has a p"K"a around −0.5. Therefore, amides do not have as clearly noticeable acid–base properties in water. This relative lack of basicity is explained by the withdrawing of electrons from the amine by the carbonyl. On the other hand, amides are much stronger bases than carboxylic acids, esters, aldehydes, and ketones (their conjugate acids' p"K"as are between −6 and −10).
The proton of a primary or secondary amide does not dissociate readily under normal conditions; its p"K"a is usually well above 15. Conversely, under extremely acidic conditions, the carbonyl oxygen can become protonated, with a p"K"a of roughly −1. Protonation occurs on oxygen rather than nitrogen not only because of the partial positive charge on the nitrogen, but also because of the negative charge the oxygen gains through resonance.
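To illustrate what a conjugate-acid p"K"a near −1 means in practice, the Henderson–Hasselbalch relation pH = pKa + log10([B]/[BH+]) gives the fraction of amide molecules that are O-protonated at a given pH. This is a generic sketch, not a method from the text; the function name and chosen pH values are illustrative:

```python
def protonated_fraction(pH, pKa=-1.0):
    """Fraction [BH+] / ([BH+] + [B]) for a base whose conjugate acid has the given pKa."""
    ratio = 10 ** (pH - pKa)   # [B]/[BH+] from pH = pKa + log10([B]/[BH+])
    return 1 / (1 + ratio)

# In strongly acidic solution (pH 0), only ~9% of amide molecules are O-protonated:
print(round(protonated_fraction(0.0), 3))   # 0.091
# At neutral pH, protonation is negligible:
print(protonated_fraction(7.0) < 1e-7)      # True
```

This is why amide protonation is only observed under strongly acidic (superacidic or concentrated mineral acid) conditions.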
Because of the greater electronegativity of oxygen, the carbonyl (C=O) is a stronger dipole than the N–C dipole. The presence of a C=O dipole and, to a lesser extent, an N–C dipole allows amides to act as H-bond acceptors. In primary and secondary amides, the presence of N–H dipoles allows amides to function as H-bond donors as well. Thus amides can participate in hydrogen bonding with water and other protic solvents; the oxygen atom can accept hydrogen bonds from water and the N–H hydrogen atoms can donate H-bonds. As a result of interactions such as these, the water solubility of amides is greater than that of corresponding hydrocarbons. These hydrogen bonds also have an important role in the secondary structure of proteins.
The solubilities of amides and esters are roughly comparable. Typically amides are less soluble than comparable amines and carboxylic acids since these compounds can both donate and accept hydrogen bonds. Tertiary amides, with the important exception of "N","N"-dimethylformamide, exhibit low solubility in water.
The presence of the amide group –C(=O)N– is generally easily established, at least in small molecules. It can be distinguished from nitro and cyano groups in IR spectra. Amides exhibit a moderately intense "ν"CO band near 1650 cm−1. By 1H NMR spectroscopy, CONHR signals occur at low fields. In X-ray crystallography, the C(=O)N center together with the three immediately adjacent atoms characteristically define a plane.
Amides undergo many chemical reactions, although they are less reactive than esters. Amides hydrolyse in hot alkali as well as in strongly acidic conditions. Acidic conditions yield the carboxylic acid and the ammonium ion, while basic hydrolysis yields the carboxylate ion and ammonia. The protonation of the initially generated amine under acidic conditions and the deprotonation of the initially generated carboxylic acid under basic conditions render these processes non-catalytic and irreversible. Amides are also versatile precursors to many other functional groups. Electrophiles react with the carbonyl oxygen. This step often precedes hydrolysis, which is catalyzed by both Brønsted acids and Lewis acids. Enzymes, e.g. peptidases, and artificial catalysts are known to accelerate the hydrolysis reactions.
Many methods exist for amide synthesis. On paper, the simplest method for making amides is coupling a carboxylic acid with an amine. In general this reaction is thermodynamically favorable; however, it suffers from a high activation energy, largely because the amine first deprotonates the carboxylic acid, which reduces its reactivity. As such, the direct reaction often requires high temperatures.
Many methods are known for driving the equilibrium to the right. For the most part, these reactions involve "activating" the carboxylic acid by first converting it to a better electrophile, such as an ester, an acid chloride (Schotten–Baumann reaction) or an anhydride (Lumière–Barbier method). Conventional methods in peptide synthesis use coupling agents such as HATU, HOBt, or PyBOP. In recent years there has also been a surge in the development of boron reagents for amide bond formation, including catalytic use of 2-iodophenylboronic acid or MIBA, and tris(2,2,2-trifluoroethyl) borate.
Dehydrogenative acylation of amines is catalyzed by organoruthenium complexes:
The reaction proceeds by dehydrogenation of the alcohol to the aldehyde, followed by formation of a hemiaminal, which undergoes a second dehydrogenation to the amide. Elimination of water from the hemiaminal to give the imine is not observed.
Transamidation is typically very slow, but it is accelerated with Lewis acid and organometallic catalysts:
Primary amides (RC(O)NH2) are more amenable to this reaction.
Animism
Animism (from Latin "anima", "breath, spirit, life") is the belief that objects, places and creatures all possess a distinct spiritual essence. Potentially, animism perceives all things—animals, plants, rocks, rivers, weather systems, human handiwork and perhaps even words—as animated and alive. Animism is used in the anthropology of religion as a term for the belief system of many indigenous peoples, especially in contrast to the relatively more recent development of organised religions.
Although each culture has its own different mythologies and rituals, "animism" is said to describe the most common, foundational thread of indigenous peoples' "spiritual" or "supernatural" perspectives. The animistic perspective is so widely held and inherent to most indigenous peoples that they often do not even have a word in their languages that corresponds to "animism" (or even "religion"); the term is an anthropological construct.
Largely due to such ethnolinguistic and cultural discrepancies, opinion has differed on whether "animism" refers to an ancestral mode of experience common to indigenous peoples around the world, or to a full-fledged religion in its own right. The currently accepted definition of animism was only developed in the late 19th century (1871) by Sir Edward Tylor, who created it as "one of anthropology's earliest concepts, if not the first".
Animism encompasses the beliefs that all material phenomena have agency, that there exists no hard and fast distinction between the spiritual and physical (or material) world, and that soul or spirit or sentience exists not only in humans but also in other animals, plants, rocks, geographic features such as mountains or rivers, and other entities of the natural environment, such as water sprites, vegetation deities and tree sprites. Animism may further attribute a life force to abstract concepts such as words, true names or metaphors in mythology. Some members of the non-tribal world also consider themselves animists (such as author Daniel Quinn, sculptor Lawson Oyekan and many contemporary Pagans).
Earlier anthropological perspectives, which have since been termed the "old animism", were concerned with knowledge on what is alive and what factors make something alive. The "old animism" assumed that animists were individuals who were unable to understand the difference between persons and things.
Critics of the "old animism" have accused it of preserving "colonialist and dualist worldviews and rhetoric".
The idea of animism was developed by the anthropologist Sir Edward Tylor in his 1871 book "Primitive Culture", in which he defined it as "the general doctrine of souls and other spiritual beings in general". According to Tylor, animism often includes "an idea of pervading life and will in nature"; a belief that natural objects other than humans have souls. That formulation was little different from that proposed by Auguste Comte as "fetishism", but the terms now have distinct meanings.
For Tylor, animism represented the earliest form of religion, being situated within an evolutionary framework of religion which has developed in stages and which will ultimately lead to humanity rejecting religion altogether in favor of scientific rationality.
Thus, for Tylor, animism was fundamentally seen as a mistake, a basic error from which all religion grew. He did not believe that animism was inherently illogical, but he suggested that it arose from early humans' dreams and visions and thus was a rational system. However, it was based on erroneous, unscientific observations about the nature of reality. Stringer notes that his reading of "Primitive Culture" led him to believe that Tylor was far more sympathetic in regard to "primitive" populations than many of his contemporaries and that Tylor expressed no belief that there was any difference between the intellectual capabilities of "savage" people and Westerners.
Tylor had initially wanted to describe the phenomenon as "spiritualism" but realised that would cause confusion with the modern religion of Spiritualism, which was then prevalent across Western nations. He adopted the term "animism" from the writings of the German scientist Georg Ernst Stahl, who, in 1708, had developed the term "animismus" as a biological theory that souls formed the vital principle and that the normal phenomena of life and the abnormal phenomena of disease could be traced to spiritual causes. The first known usage in English appeared in 1819.
The idea that there had once been "one universal form of primitive religion" (whether labelled "animism", "totemism", or "shamanism") has been dismissed as "unsophisticated" and "erroneous" by the archaeologist Timothy Insoll, who stated that "it removes complexity, a precondition of religion now, in "all" its variants".
Tylor's definition of animism was a part of a growing international debate on the nature of "primitive society" by lawyers, theologians and philologists. The debate defined the field of research of a new science: anthropology. By the end of the 19th century, an orthodoxy on "primitive society" had emerged, but few anthropologists still would accept that definition. The "19th-century armchair anthropologists" argued "primitive society" (an evolutionary category) was ordered by kinship and was divided into exogamous descent groups related by a series of marriage exchanges. Their religion was animism, the belief that natural species and objects had souls. With the development of private property, the descent groups were displaced by the emergence of the territorial state. These rituals and beliefs eventually evolved over time into the vast array of "developed" religions. According to Tylor, the more scientifically advanced a society became, the fewer members of that society believed in animism. However, any remnant ideologies of souls or spirits, to Tylor, represented "survivals" of the original animism of early humanity.
In 1869 (three years after Tylor proposed his definition of animism), the Edinburgh lawyer John Ferguson McLennan argued that the animistic thinking evident in fetishism gave rise to a religion he named totemism. Primitive people believed, he argued, that they were descended from the same species as their totemic animal.
Subsequent debate by the 'armchair anthropologists' (including J. J. Bachofen, Émile Durkheim, and Sigmund Freud) remained focused on totemism rather than animism, with few directly challenging Tylor's definition.
Anthropologists "have commonly avoided the issue of animism and even the term itself rather than revisit this prevalent notion in light of their new and rich ethnographies."
According to the anthropologist Tim Ingold, animism shares similarities to totemism but differs in its focus on individual spirit beings which help to perpetuate life, whereas totemism more typically holds that there is a primary source, such as the land itself or the ancestors, who provide the basis to life. Certain indigenous religious groups such as the Australian Aboriginals are more typically totemic in their worldview, whereas others like the Inuit are more typically animistic.
From his studies into child development, Jean Piaget suggested that children were born with an innate animist worldview in which they anthropomorphized inanimate objects, and that it was only later that they grew out of this belief. Conversely, from her ethnographic research, Margaret Mead argued the opposite, believing that children were not born with an animist worldview but that they became acculturated to such beliefs as they were educated by their society.
Stewart Guthrie saw animism – or "attribution" as he preferred it – as an evolutionary strategy to aid survival. He argued that both humans and other animal species view inanimate objects as potentially alive as a means of being constantly on guard against potential threats. His suggested explanation, however, did not deal with the question of why such a belief became central to religion.
In 2000, Guthrie suggested that the "most widespread" concept of animism was that it was the "attribution of spirits to natural phenomena such as stones and trees".
Many anthropologists ceased using the term "animism", deeming it too close to early anthropological theory and religious polemic. However, the term had also been claimed by religious groups—namely, indigenous communities and nature worshipers—who felt that it aptly described their own beliefs, and who in some cases actively identified as "animists". It was thus readopted by various scholars, who, however, began using the term in a different way, placing the focus on knowing how to behave toward other persons, some of whom are not human. As the religious studies scholar Graham Harvey stated, while the "old animist" definition had been problematic, the term "animism" was nevertheless "of considerable value as a critical, academic term for a style of religious and cultural relating to the world."
The "new animism" emerged largely from the publications of the anthropologist Irving Hallowell, which were produced on the basis of his ethnographic research among the Ojibwe communities of Canada in the mid-20th century. For the Ojibwe encountered by Hallowell, "personhood" did not require human-likeness; rather, humans were perceived as being like other persons, which included, for instance, rock persons and bear persons. For the Ojibwe, these persons were each wilful beings who gained meaning and power through their interactions with others; through respectfully interacting with other persons, they themselves learned to "act as a person". Hallowell's approach to the understanding of Ojibwe personhood differed strongly from prior anthropological concepts of animism. He emphasized the need to challenge the modernist, Western perspectives of what a person is by entering into a dialogue with different worldviews.
Hallowell's approach influenced the work of anthropologist Nurit Bird-David, who produced a scholarly article reassessing the idea of animism in 1999. Seven comments from other academics were provided in the journal, debating Bird-David's ideas.
More recently, post-modern anthropologists have increasingly engaged with the concept of animism. Modernism is characterized by a Cartesian subject-object dualism that divides the subjective from the objective, and culture from nature; in this view, animism is the inverse of scientism, and hence inherently invalid. Drawing on the work of Bruno Latour, these anthropologists question these modernist assumptions and theorize that all societies continue to "animate" the world around them, and not just as a Tylorian survival of primitive thought. Rather, the instrumental reason characteristic of modernity is limited to our "professional subcultures," which allows us to treat the world as a detached mechanical object in a delimited sphere of activity.
We, like animists, also continue to create personal relationships with elements of the so-called objective world, whether pets, cars or teddy-bears, whom we recognize as subjects. As such, these entities are "approached as communicative subjects rather than the inert objects perceived by modernists." These approaches are careful to avoid the modernist assumptions that the environment consists dichotomously of a physical world distinct from humans, and from modernist conceptions of the person as composed dualistically of body and soul.
Nurit Bird-David argues that "Positivistic ideas about the meaning of 'nature', 'life' and 'personhood' misdirected these previous attempts to understand the local concepts. Classical theoreticians (it is argued) attributed their own modernist ideas of self to 'primitive peoples' while asserting that the 'primitive peoples' read their idea of self into others!" She argues that animism is a "relational epistemology", and not a Tylorian failure of primitive reasoning. That is, self-identity among animists is based on their relationships with others, rather than some distinctive feature of the self. Instead of focusing on the essentialized, modernist self (the "individual"), persons are viewed as bundles of social relationships ("dividuals"), some of which are with "superpersons" (i.e. non-humans).
Guthrie expressed criticism of Bird-David's attitude toward animism, believing that it promulgated the view that "the world is in large measure whatever our local imagination makes it". This, he felt, would result in anthropology abandoning "the scientific project".
Tim Ingold, like Bird-David, argues that animists do not see themselves as separate from their environment: "Hunter-gatherers do not, as a rule, approach their environment as an external world of nature that has to be 'grasped' intellectually ... indeed the separation of mind and nature has no place in their thought and practice." Willerslev extends the argument by noting that animists reject this Cartesian dualism, and that the animist self identifies with the world, "feeling at once 'within' and 'apart' from it so that the two glide ceaselessly in and out of each other in a sealed circuit." The animist hunter is thus aware of himself as a human hunter but, through mimicry, is able to assume the viewpoint, senses, and sensibilities of his prey, to be one with it. Shamanism, in this view, is an everyday attempt to influence spirits of ancestors and animals by mirroring their behaviours as the hunter does his prey.
Cultural ecologist and philosopher David Abram articulates an intensely ethical and ecological understanding of animism grounded in the phenomenology of sensory experience. In his books "The Spell of the Sensuous" and "Becoming Animal," Abram suggests that material things are never entirely passive in our direct perceptual experience, holding rather that perceived things actively "solicit our attention" or "call our focus," coaxing the perceiving body into an ongoing participation with those things.
In the absence of intervening technologies, he suggests, sensory experience is inherently animistic, disclosing a material field that is animate and self-organizing from the outset. Drawing upon contemporary cognitive and natural science, as well as upon the perspectival worldviews of diverse indigenous, oral cultures, Abram proposes a richly pluralist and story-based cosmology in which matter is alive through and through. Such a relational ontology is in close accord, he suggests, with our spontaneous perceptual experience; it would draw us back to our senses and to the primacy of the sensuous terrain, enjoining a more respectful and ethical relation to the more-than-human community of animals, plants, soils, mountains, waters and weather-patterns that materially sustains us.
In contrast to a long-standing tendency in the Western social sciences, which commonly provide rational explanations of animistic experience, Abram develops an animistic account of reason itself. He holds that civilized reason is sustained only by an intensely animistic participation between human beings and their own written signs. For instance, as soon as we turn our gaze toward the alphabetic letters written on a page or a screen, we "see what they say"—the letters, that is, seem to speak to us—much as spiders, trees, gushing rivers and lichen-encrusted boulders once spoke to our oral ancestors. For Abram, reading can usefully be understood as an intensely concentrated form of animism, one that effectively eclipses all of the other, older, more spontaneous forms of animistic participation in which we once engaged. "To tell the story in this manner—to provide an animistic account of reason, rather than the other way around—is to imply that animism is the wider and more inclusive term, and that oral, mimetic modes of experience still underlie, and support, all our literate and technological modes of reflection. When reflection's rootedness in such bodily, participatory modes of experience is entirely unacknowledged or unconscious, reflective reason becomes dysfunctional, unintentionally destroying the corporeal, sensuous world that sustains it."
The religious studies scholar Graham Harvey defined animism as the belief "that the world is full of persons, only some of whom are human, and that life is always lived in relationship with others". He added that it is therefore "concerned with learning how to be a good person in respectful relationships with other persons".
In his "Handbook of Contemporary Animism" (2013), Harvey identifies the animist perspective in line with Martin Buber's "I-thou" as opposed to "I-it". On this view, Harvey says, the animist takes an I-thou approach to relating to the world, treating objects and animals as a "thou" rather than as an "it".
There is ongoing disagreement (and no general consensus) as to whether animism is merely a singular, broadly encompassing religious belief or a worldview in and of itself, comprising many diverse mythologies found worldwide in many diverse cultures. This also raises a controversy regarding the ethical claims animism may or may not make: whether animism ignores questions of ethics altogether or, by endowing various non-human elements of nature with spirituality or personhood, in fact promotes a complex ecological ethics.
In many animistic world views, the human being is often regarded as on a roughly equal footing with other animals, plants, and natural forces.
A shaman is a person regarded as having access to, and influence in, the world of benevolent and malevolent spirits, who typically enters into a trance state during a ritual, and practices divination and healing.
According to Mircea Eliade, shamanism encompasses the premise that shamans are intermediaries or messengers between the human world and the spirit worlds. Shamans are said to treat ailments and illness by mending the soul. Alleviating traumas affecting the soul or spirit restores the physical body of the individual to balance and wholeness. The shaman also enters supernatural realms or dimensions to obtain solutions to problems afflicting the community. Shamans may visit other worlds or dimensions to bring guidance to misguided souls and to ameliorate illnesses of the human soul caused by foreign elements. The shaman operates primarily within the spiritual world, which in turn affects the human world. The restoration of balance results in the elimination of the ailment.
Abram, however, articulates a less supernatural and much more ecological understanding of the shaman's role than that propounded by Eliade. Drawing upon his own field research in Indonesia, Nepal, and the Americas, Abram suggests that in animistic cultures, the shaman functions primarily as an intermediary between the human community and the more-than-human community of active agencies — the local animals, plants, and landforms (mountains, rivers, forests, winds and weather patterns), all of whom are felt to have their own specific sentience. Hence, the shaman's ability to heal individual instances of dis-ease (or imbalance) within the human community is a by-product of his or her more continual practice of balancing the reciprocity between the human community and the wider collective of animate beings in which that community is embedded.
Animism is not the same as pantheism, although the two are sometimes confused. Some religions are both pantheistic and animistic. One of the main differences is that while animists believe everything to be spiritual in nature, they do not necessarily see the spiritual nature of everything in existence as being united (monism), the way pantheists do. As a result, animism puts more emphasis on the uniqueness of each individual soul. In pantheism, everything shares the same spiritual essence, rather than having distinct spirits and/or souls.
Animism entails the belief that "all living things have a soul", and thus a central concern of animist thought surrounds how animals can be eaten or otherwise used for humans' subsistence needs. The actions of non-human animals are viewed as "intentional, planned and purposive", and they are understood to be persons because they are both alive and communicate with others. In animist world-views, non-human animals are understood to participate in kinship systems and ceremonies with humans, as well as having their own kinship systems and ceremonies. Harvey cited an example of an animist understanding of animal behaviour that occurred at a powwow held by the Conne River Mi'kmaq in 1996; an eagle flew over the proceedings, circling over the central drum group. The assembled participants called out "kitpu" ("eagle"), conveying welcome to the bird and expressing pleasure at its beauty, and they later articulated the view that the eagle's actions reflected its approval of the event and the Mi'kmaq's return to traditional spiritual practices.
Some animists also view plant and fungi life as persons and interact with them accordingly. The most common encounter between humans and these plant and fungi persons occurs when humans gather them for food, and for animists this interaction typically has to be carried out respectfully. Harvey cited the example of Maori communities in New Zealand, who often offer "karakia" invocations to sweet potatoes as they dig the latter up; while doing so there is an awareness of a kinship relationship between the Maori and the sweet potatoes, with both understood as having arrived in Aotearoa together in the same canoes. In other instances, animists believe that interaction with plant and fungi persons can result in the communication of things unknown or even otherwise unknowable. Among some modern Pagans, for instance, relationships are cultivated with specific trees, who are understood to bestow knowledge or physical gifts, such as flowers, sap, or wood that can be used as firewood or fashioned into a wand; in return, these Pagans give offerings to the tree itself, which can come in the form of libations of mead or ale, a drop of blood from a finger, or a strand of wool.
Various animistic cultures also comprehend stones as persons. Discussing ethnographic work conducted among the Ojibwe, Harvey noted that their society generally conceived of stones as being inanimate, but with two notable exceptions: the stones of the Bell Rocks and those stones which are situated beneath trees struck by lightning, which were understood to have become Thunderers themselves. The Ojibwe conceived of weather as being capable of having personhood, with storms being conceived of as persons known as 'Thunderers' whose sounds conveyed communications and who engaged in seasonal conflict over the lakes and forests, throwing lightning at lake monsters. Wind, similarly, can be conceived as a person in animistic thought.
The importance of place is also a recurring element of animism, with some places being understood to be persons in their own right.
Animism can also entail relationships being established with non-corporeal spirit entities.
In the early 20th century, William McDougall defended a form of animism in his book "Body and Mind: A History and Defence of Animism" (1911).
The physicist Nick Herbert has argued for "quantum animism" in which mind permeates the world at every level.
Werner Krieglstein wrote regarding his quantum animism:
Ashley Curtis has argued in "Error and Loss: A Licence to Enchantment" that the Cartesian idea of an experiencing subject facing off with an inert physical world is incoherent at its very foundation, and that this incoherence is predicted rather than belied by Darwinism. Human reason (and its rigorous extension in the natural sciences) fits an evolutionary niche just as echolocation does for bats and infrared vision does for pit vipers, and is—according to western science's own dictates—epistemologically on a par with rather than superior to such capabilities. The meaning or aliveness of the "objects" we encounter—rocks, trees, rivers, other animals—thus depends for its validity not on a detached cognitive judgment but purely on the quality of our experience. The animist experience, and, indeed, the wolf's or raven's experience, thus become licensed as equally valid world-views to the modern western scientific one—indeed, they are more valid, since they are not plagued with the incoherence that inevitably crops up when "objective existence" is separated from "subjective experience."
Harvey opined that animism's views on personhood represented a radical challenge to the dominant perspectives of modernity, because it accords "intelligence, rationality, consciousness, volition, agency, intentionality, language and desire" to non-humans. Similarly, it challenges the view of human uniqueness that is prevalent in both Abrahamic religions and Western rationalism.
Animist beliefs can also be expressed through artwork. For instance, among the Maori communities of New Zealand, there is an acknowledgment that creating art through carving wood or stone entails violence against the wood or stone person, and that the persons who are damaged therefore have to be placated and respected during the process; any excess or waste from the creation of the artwork is returned to the land, while the artwork itself is treated with particular respect. Harvey therefore argued that the creation of art among the Maori was not about creating an inanimate object for display, but rather a transformation of different persons within a relationship.
Harvey expressed the view that animist worldviews were present in various works of literature, citing such examples as the writings of Alan Garner, Leslie Silko, Barbara Kingsolver, Alice Walker, Daniel Quinn, Linda Hogan, David Abram, Patricia Grace, Chinua Achebe, Ursula Le Guin, Louise Erdrich, and Marge Piercy. Animist worldviews have also been identified in the animated films of Hayao Miyazaki.
Antonio Vivaldi
Antonio Lucio Vivaldi (4 March 1678 – 28 July 1741) was an Italian Baroque composer, virtuoso violinist, teacher, and Roman Catholic priest. Born in Venice, the capital of the Venetian Republic, he is regarded as one of the greatest Baroque composers, and his influence during his lifetime was widespread across Europe. He composed many instrumental concertos, for the violin and a variety of other instruments, as well as sacred choral works and more than forty operas. His best-known work is a series of violin concertos known as the "Four Seasons".
Many of his compositions were written for the all-female music ensemble of the "Ospedale della Pietà", a home for abandoned children. Vivaldi worked there as a Catholic priest and was employed from 1703 to 1715 and again from 1723 to 1740. He also had some success with expensive stagings of his operas in Venice, Mantua and Vienna. After meeting the Emperor Charles VI, Vivaldi moved to Vienna, hoping for imperial support. However, the Emperor died soon after Vivaldi's arrival, and Vivaldi himself died in poverty less than a year later.
Antonio Lucio Vivaldi was born on 4 March 1678 in Venice, then the capital of the Venetian Republic. He was baptized immediately after his birth at his home by the midwife, which led to a belief that his life was somehow in danger. Though the reasons for the child's immediate baptism are not known for certain, it was done most likely due either to his poor health or to an earthquake that shook the city that day. In the trauma of the earthquake, Vivaldi's mother may have dedicated him to the priesthood. The ceremonies which had been omitted were supplied two months later.
Vivaldi's parents were Giovanni Battista Vivaldi and Camilla Calicchio, as recorded in the register of San Giovanni in Bragora. Vivaldi had five known siblings: Bonaventura Tomaso Vivaldi, Margarita Gabriela Vivaldi, Cecilia Maria Vivaldi, Francesco Gaetano Vivaldi, and Zanetta Anna Vivaldi. Giovanni Battista, who was a barber before becoming a professional violinist, taught Antonio to play the violin and then toured Venice playing the violin with his young son. Antonio was probably taught at an early age, judging by the extensive musical knowledge he had acquired by the age of 24, when he started working at the Ospedale della Pietà. Giovanni Battista was one of the founders of the "Sovvegno dei musicisti di Santa Cecilia", an association of musicians.
The president of the "Sovvegno" was Giovanni Legrenzi, an early Baroque composer and the "maestro di cappella" at St Mark's Basilica. It is possible that Legrenzi gave the young Antonio his first lessons in composition. The Luxembourg scholar Walter Kolneder has discerned the influence of Legrenzi's style in Vivaldi's early liturgical work "Laetatus sum" (RV Anh 31), written in 1691 at the age of thirteen. Vivaldi's father may have been a composer himself: in 1689, an opera titled "La Fedeltà sfortunata" was composed by a Giovanni Battista Rossi—the name under which Vivaldi's father had joined the Sovvegno di Santa Cecilia.
Vivaldi's health was problematic. One of his symptoms, "strettezza di petto" ("tightness of the chest"), has been interpreted as a form of asthma. This did not prevent him from learning to play the violin, composing, or taking part in musical activities, although it did stop him from playing wind instruments. In 1693, at the age of fifteen, he began studying to become a priest. He was ordained in 1703, aged 25, and was soon nicknamed "il Prete Rosso", "The Red Priest". ("Rosso" is Italian for "red", and would have referred to the color of his hair, a family trait.)
Not long after his ordination, in 1704, he was given a dispensation from celebrating Mass, most likely because of his ill health. Vivaldi said Mass as a priest only a few times and appears to have withdrawn from liturgical duties, though he remained a member of the priesthood. It is thought that his habit of composing while celebrating Mass also contributed to this withdrawal. He seems to have remained committed to Catholicism, since the entry in the Vienna death records for him reads, "Antonio Vivaldi, Secular Priest". He is thought to have remained a devout Catholic; indeed, in 1792 the Protestant composer Ernst Ludwig Gerber wrote of the aged Vivaldi that "the rosary never left his hand except when he picked up the pen to write an opera".
In September 1703, Vivaldi became "maestro di violino" (master of violin) at an orphanage called the Pio Ospedale della Pietà (Devout Hospital of Mercy) in Venice. While Vivaldi is most famous as a composer, he was regarded as an exceptional technical violinist as well. The German architect Johann Friedrich Armand von Uffenbach referred to Vivaldi as "the famous composer and violinist" and said that "Vivaldi played a solo accompaniment excellently, and at the conclusion he added a free fantasy [an improvised cadenza] which absolutely astounded me, for it is hardly possible that anyone has ever played, or ever will play, in such a fashion."
Vivaldi was only 25 when he started working at the orphanage. Over the next thirty years he composed most of his major works while working there. There were four similar institutions in Venice; their purpose was to give shelter and education to children who were abandoned or orphaned, or whose families could not support them. They were financed by funds provided by the Republic. The boys learned a trade and had to leave when they reached the age of fifteen. The girls received a musical education, and the most talented among them stayed and became members of the Ospedale's renowned orchestra and choir.
Shortly after Vivaldi's appointment, the orphans began to gain appreciation and esteem abroad, too. Vivaldi wrote concertos, cantatas and sacred vocal music for them. These sacred works, which number over 60, are varied: they included solo motets and large-scale choral works for soloists, double chorus, and orchestra. In 1704, the position of teacher of "viola all'inglese" was added to his duties as violin instructor. The position of "maestro di coro", which was at one time filled by Vivaldi, required a lot of time and work. He had to compose an oratorio or concerto at every feast and teach the orphans both music theory and how to play certain instruments.
His relationship with the board of directors of the Ospedale was often strained. The board had to take a vote every year on whether to keep a teacher. The vote on Vivaldi was seldom unanimous, and went 7 to 6 against him in 1709. After a year as a freelance musician, he was recalled by the Ospedale with a unanimous vote in 1711; clearly during his year's absence the board had realized the importance of his role. He became responsible for all of the musical activity of the institution when he was promoted to "maestro de' concerti" (music director) in 1716.
In 1705, the first collection of his works was published by Giuseppe Sala: his Opus 1 is a collection of 12 sonatas for two violins and basso continuo, in a conventional style. In 1709, a second collection of 12 sonatas for violin and basso continuo appeared—Opus 2. A real breakthrough as a composer came with his first collection of 12 concerti for one, two, and four violins with strings, "L'estro armonico" (Opus 3), which was published in Amsterdam in 1711 by Estienne Roger, dedicated to Grand Prince Ferdinand of Tuscany. The prince sponsored many musicians including Alessandro Scarlatti and George Frideric Handel. He was a musician himself, and Vivaldi probably met him in Venice. "L'estro armonico" was a resounding success all over Europe. It was followed in 1714 by "La stravaganza" (Opus 4), a collection of concerti for solo violin and strings, dedicated to an old violin student of Vivaldi's, the Venetian noble Vettor Dolfin.
In February 1711, Vivaldi and his father traveled to Brescia, where his setting of the Stabat Mater (RV 621) was played as part of a religious festival. The work seems to have been written in haste: the string parts are simple, the music of the first three movements is repeated in the next three, and not all the text is set. Nevertheless, perhaps in part because of the forced essentiality of the music, the work is considered to be one of his early masterpieces.
Despite his frequent travels from 1718, the Ospedale paid him 2 sequins to write two concerti a month for the orchestra and to rehearse with them at least five times when in Venice. The orphanage's records show that he was paid for 140 concerti between 1723 and 1733.
In early 18th-century Venice, opera was the most popular musical entertainment. It proved most profitable for Vivaldi. There were several theaters competing for the public's attention. Vivaldi started his career as an opera composer as a sideline: his first opera, "Ottone in villa" (RV 729) was performed not in Venice, but at the Garzerie Theater in Vicenza in 1713. The following year, Vivaldi became the impresario of the Teatro San Angelo in Venice, where his opera "Orlando finto pazzo" (RV 727) was performed. The work was not to the public's taste, and it closed after a couple of weeks, being replaced with a repeat of a different work already given the previous year.
In 1715, he presented "Nerone fatto Cesare" (RV 724, now lost), with music by seven different composers, of which he was the leader. The opera contained eleven arias, and was a success. In the late season, Vivaldi planned to put on an opera entirely of his own creation, "Arsilda, regina di Ponto" (RV 700), but the state censor blocked the performance. The main character, Arsilda, falls in love with another woman, Lisea, who is pretending to be a man. Vivaldi got the censor to accept the opera the following year, and it was a resounding success.
During this period, the "Pietà" commissioned several liturgical works. The most important were two oratorios. "Moyses Deus Pharaonis", (RV 643) is now lost. The second, "Juditha triumphans" (RV 644), celebrates the victory of the Republic of Venice against the Turks and the recapture of the island of Corfu. Composed in 1716, it is one of his sacred masterpieces. All eleven singing parts were performed by girls of the orphanage, both the female and male roles. Many of the arias include parts for solo instruments—recorders, oboes, violas d'amore, and mandolins—that showcased the range of talents of the girls.
Also in 1716, Vivaldi wrote and produced two more operas, "L'incoronazione di Dario" (RV 719) and "La costanza trionfante degli amori e degli odi" (RV 706). The latter was so popular that it was performed two years later, re-edited and retitled "Artabano re dei Parti" (RV 701, now lost). It was also performed in Prague in 1732. In the years that followed, Vivaldi wrote several operas that were performed all over Italy.
His progressive operatic style caused him some trouble with more conservative musicians such as Benedetto Marcello, a magistrate and amateur musician who wrote a pamphlet denouncing Vivaldi and his operas. The pamphlet, "Il teatro alla moda", attacks the composer without mentioning him directly. The cover drawing shows a boat (the Sant'Angelo), on the left end of which stands a little angel wearing a priest's hat and playing the violin. The Marcello family claimed ownership of the Teatro Sant'Angelo, and a long legal battle had been fought with the management for its restitution, without success. The obscure text under the engraving mentions non-existent places and names: for example, "ALDIVIVA" is an anagram of "A. Vivaldi".
In a letter written by Vivaldi to his patron Marchese Bentivoglio in 1737, he makes reference to his "94 operas". Only around 50 operas by Vivaldi have been discovered, and no other documentation of the remaining operas exists. Although Vivaldi may have been exaggerating, it is plausible that, in his dual role of composer and "impresario", he may have either written or been responsible for the production of as many as 94 operas—given that his career had by then spanned almost 25 years. While Vivaldi certainly composed many operas in his time, he never attained the prominence of other great composers such as Alessandro Scarlatti, Johann Adolph Hasse, Leonardo Leo, and Baldassare Galuppi, as evidenced by his inability to keep a production running for an extended period of time in any major opera house.
In 1717 or 1718, Vivaldi was offered a prestigious new position as "Maestro di Cappella" of the court of prince Philip of Hesse-Darmstadt, governor of Mantua, in northern Italy. He moved there for three years and produced several operas, among them "Tito Manlio" (RV 738). In 1721, he was in Milan, where he presented the pastoral drama "La Silvia" (RV 734); nine arias from it survive. He visited Milan again the following year with the oratorio "L'adorazione delli tre re magi al bambino Gesù" (RV 645, now lost). In 1722 he moved to Rome, where he introduced the new style of his operas. The new pope Benedict XIII invited Vivaldi to play for him. In 1725, Vivaldi returned to Venice, where he produced four operas in the same year.
During this period Vivaldi wrote the "Four Seasons", four violin concertos that give musical expression to the seasons of the year. Though three of the concerti are wholly original, the first, "Spring", borrows motifs from a Sinfonia in the first act of Vivaldi's contemporaneous opera "Il Giustino". The inspiration for the concertos was probably the countryside around Mantua. They were a revolution in musical conception: in them Vivaldi represented flowing creeks, singing birds (of different species, each specifically characterized), barking dogs, buzzing mosquitoes, crying shepherds, storms, drunken dancers, silent nights, hunting parties from both the hunters' and the prey's point of view, frozen landscapes, ice-skating children, and warming winter fires. Each concerto is associated with a sonnet, possibly by Vivaldi, describing the scenes depicted in the music. They were published as the first four concertos in a collection of twelve, "Il cimento dell'armonia e dell'inventione", Opus 8, published in Amsterdam by Michel-Charles Le Cène in 1725.
During his time in Mantua, Vivaldi became acquainted with an aspiring young singer Anna Tessieri Girò, who would become his student, protégée, and favorite "prima donna". Anna, along with her older half-sister Paolina, moved in with Vivaldi and regularly accompanied him on his many travels. There was speculation as to the nature of Vivaldi's and Girò's relationship, but no evidence exists to indicate anything beyond friendship and professional collaboration. Vivaldi, in fact, adamantly denied any romantic relationship with Girò in a letter to his patron Bentivoglio dated 16 November 1737.
At the height of his career, Vivaldi received commissions from European nobility and royalty. The "serenata" (cantata) "Gloria e Imeneo" (RV 687) was commissioned in 1725 by the French ambassador to Venice in celebration of the marriage of Louis XV. The following year, another "serenata", "La Sena festeggiante" (RV 694), was written for and premiered at the French embassy as well, celebrating the birth of the French royal princesses, Henriette and Louise Élisabeth. Vivaldi's Opus 9, "La cetra", was dedicated to Emperor Charles VI. In 1728, Vivaldi met the emperor while the emperor was visiting Trieste to oversee the construction of a new port. Charles admired the music of the Red Priest so much that he is said to have spoken more with the composer during their one meeting than he spoke to his ministers in over two years. He gave Vivaldi the title of knight, a gold medal and an invitation to Vienna. Vivaldi gave Charles a manuscript copy of "La cetra", a set of concerti almost completely different from the set of the same title published as Opus 9. The printing was probably delayed, forcing Vivaldi to gather an improvised collection for the emperor.
Accompanied by his father, Vivaldi traveled to Vienna and Prague in 1730, where his opera "Farnace" (RV 711) was presented; it garnered six revivals. Some of his later operas were created in collaboration with two of Italy's major writers of the time. "L'Olimpiade" and "Catone in Utica" were written by Pietro Metastasio, the major representative of the Arcadian movement and court poet in Vienna. "La Griselda" was rewritten by the young Carlo Goldoni from an earlier libretto by Apostolo Zeno.
Like many composers of the time, Vivaldi faced financial difficulties in his later years. His compositions were no longer held in such high esteem as they once had been in Venice; changing musical tastes quickly made them outmoded. In response, Vivaldi chose to sell off sizeable numbers of his manuscripts at paltry prices to finance his migration to Vienna. The reasons for Vivaldi's departure from Venice are unclear, but it seems likely that, after the success of his meeting with Emperor Charles VI, he wished to take up the position of a composer in the imperial court. On his way to Vienna, Vivaldi may have stopped in Graz to see Anna Girò.
It is also likely that Vivaldi went to Vienna to stage operas, especially as he took up residence near the Kärntnertortheater. Shortly after his arrival in Vienna, Charles VI died, which left the composer without any royal protection or a steady source of income. Soon afterwards, Vivaldi became impoverished and died during the night of 27/28 July 1741, aged 63, of "internal infection", in a house owned by the widow of a Viennese saddlemaker. On 28 July, Vivaldi was buried in a simple grave in a burial ground that was owned by the public hospital fund. His funeral took place at St. Stephen's Cathedral. Contrary to popular legend, the young Joseph Haydn had nothing to do with his burial, since no music was performed on that occasion. The cost of his funeral with a 'Kleingeläut' was 19 Gulden 45 Kreuzer, which was rather expensive for the lowest class of peal of bells.
Vivaldi was buried next to Karlskirche, a baroque church in an area which is now part of the site of the TU Wien. The house where he lived in Vienna has since been destroyed; the Hotel Sacher is built on part of the site. Memorial plaques have been placed at both locations, as well as a Vivaldi "star" in the Viennese Musikmeile and a monument at the Rooseveltplatz.
Only two, possibly three original portraits of Vivaldi are known to survive: an engraving, an ink sketch and an oil painting. The engraving, which was the basis of several copies produced later by other artists, was made in 1725 by François Morellon de La Cave for the first edition of "Il cimento dell'armonia e dell'inventione", and shows Vivaldi holding a sheet of music. The ink sketch, a caricature, was done by Ghezzi in 1723 and shows Vivaldi's head and shoulders in profile. It exists in two versions: a first jotting kept at the Vatican Library, and a much lesser-known, slightly more detailed copy recently discovered in Moscow. The oil painting, which can be seen in the International Museum and Library of Music of Bologna, is anonymous and is thought to depict Vivaldi due to its strong resemblance to the La Cave engraving.
Vivaldi's music was innovative. He brightened the formal and rhythmic structure of the concerto, in which he looked for harmonic contrasts and innovative melodies and themes. Many of his compositions are flamboyantly exuberant.
Johann Sebastian Bach was deeply influenced by Vivaldi's concertos and arias (recalled in his "St John Passion", "St Matthew Passion", and cantatas). Bach transcribed six of Vivaldi's concerti for solo keyboard, three for organ, and one for four harpsichords, strings, and basso continuo (BWV 1065) based upon the concerto for four violins, two violas, cello, and basso continuo (RV 580).
During his lifetime, Vivaldi was popular in many countries throughout Europe, including France, but after his death his popularity dwindled. After the end of the Baroque period, Vivaldi's published concerti became relatively unknown, and were largely ignored. Even his most famous work, "The Four Seasons", was unknown in its original edition during the Classical and Romantic periods.
In the early 20th century, Fritz Kreisler's Concerto in C, in the Style of Vivaldi (which he passed off as an original Vivaldi work) helped revive Vivaldi's reputation. This spurred the French scholar Marc Pincherle to begin an academic study of Vivaldi's oeuvre. Many Vivaldi manuscripts were rediscovered, which were acquired by the Turin National University Library as a result of the generous sponsorship of Turinese businessmen Roberto Foa and Filippo Giordano, in memory of their sons. This led to a renewed interest in Vivaldi by, among others, Mario Rinaldi, Alfredo Casella, Ezra Pound, Olga Rudge, Desmond Chute, Arturo Toscanini, Arnold Schering and Louis Kaufman, all of whom were instrumental in the revival of Vivaldi throughout the 20th century.
In 1926, in a monastery in Piedmont, researchers discovered fourteen folios of Vivaldi's work that were previously thought to have been lost during the Napoleonic Wars. Some missing volumes in the numbered set were discovered in the collections of the descendants of the Grand Duke Durazzo, who had acquired the monastery complex in the 18th century. The volumes contained 300 concertos, 19 operas and over 100 vocal-instrumental works.
The resurrection of Vivaldi's unpublished works in the 20th century is mostly due to the efforts of Alfredo Casella, who in 1939 organized the historic Vivaldi Week, in which the rediscovered Gloria (RV 589) and l'Olimpiade were revived. Since World War II, Vivaldi's compositions have enjoyed wide success. Historically informed performances, often on "original instruments", have increased Vivaldi's fame still further.
Recent rediscoveries of works by Vivaldi include two psalm settings of "Nisi Dominus" (RV 803, in eight movements) and Dixit Dominus (RV 807, in eleven movements). These were identified in 2003 and 2005 respectively, by the Australian scholar Janice Stockigt. The Vivaldi scholar Michael Talbot described RV 807 as "arguably the best nonoperatic work from Vivaldi's pen to come to light since […] the 1920s". Vivaldi's 1730 opera "Argippo" (RV 697), which had been considered lost, was rediscovered in 2006 by the harpsichordist and conductor Ondřej Macek, whose Hofmusici orchestra performed the work at Prague Castle on 3 May 2008—its first performance since 1730.
A composition by Vivaldi is identified by RV number, which refers to its place in the "Ryom-Verzeichnis" or "Répertoire des oeuvres d'Antonio Vivaldi", a catalog created in the 20th century by the musicologist Peter Ryom.
"Le quattro stagioni" (The Four Seasons) of 1723 is his most famous work. Part of "Il cimento dell'armonia e dell'inventione" ("The Contest between Harmony and Invention"), it depicts moods and scenes from each of the four seasons. This work has been described as an outstanding instance of pre-19th century program music.
Vivaldi wrote more than 500 other concertos. About 350 of these are for solo instrument and strings, of which 230 are for violin, the others being for bassoon, cello, oboe, flute, viola d'amore, recorder, lute, or mandolin. About forty concertos are for two instruments and strings, and about thirty are for three or more instruments and strings.
As well as about 46 operas, Vivaldi composed a large body of sacred choral music, such as Magnificat. Other works include sinfonias, about 90 sonatas and chamber music.
Some sonatas for flute, published as "Il Pastor Fido", have been erroneously attributed to Vivaldi, but were composed by Nicolas Chédeville.
Vivaldi's works attracted cataloging efforts befitting a major composer. Scholarly work intended to increase the accuracy and variety of Vivaldi performances also supported new discoveries which made old catalogs incomplete. Works still in circulation today may be numbered under several different systems (some earlier catalogs are mentioned here).
Because the simply consecutive Complete Edition (CE) numbers did not reflect the individual works (Opus numbers) into which compositions were grouped, numbers assigned by Antonio Fanna were often used in conjunction with CE numbers. Combined Complete Edition (CE)/Fanna numbering was especially common in the work of Italian groups driving the mid-20th century revival of Vivaldi, such as Gli Accademici di Milano under Piero Santi. For example, the Bassoon Concerto in B♭ major, "La Notte" RV 501, became CE 12, F. VIII, 1.
Despite the awkwardness of having to overlay Fanna numbers onto the Complete Edition number for meaningful grouping of Vivaldi's oeuvre, these numbers displaced the older Pincherle numbers as the (re-)discovery of more manuscripts had rendered older catalogs obsolete.
This cataloging work was led by the Istituto Italiano Antonio Vivaldi, where Gian Francesco Malipiero was both the director and the editor of the published scores (Edizioni G. Ricordi). His work built on that of Antonio Fanna, a Venetian businessman and the Institute's founder, and thus formed a bridge to the scholarly catalog dominant today.
Compositions by Vivaldi are identified today by RV number, the number assigned by Danish musicologist Peter Ryom in works published mostly in the 1970s, such as the "Ryom-Verzeichnis" or "Répertoire des oeuvres d'Antonio Vivaldi". Like the Complete Edition before it, the RV does not typically assign its single, consecutive numbers to "adjacent" works that occupy one of the composer's single opus numbers. Its goal as a modern catalog is to index the manuscripts and sources that establish the existence and nature of all known works.
The movie "" was completed in 2005 as an Italian-French co-production under the direction of . In 2005, ABC Radio National commissioned a radio play about Vivaldi, which was written by Sean Riley. Entitled "The Angel and the Red Priest", the play was later adapted for the stage and was performed at the Adelaide Festival of the Arts.
Aare
The Aare () or Aar () is a tributary of the High Rhine and the longest river that both rises and ends entirely within Switzerland.
Its total length from its source to its junction with the Rhine comprises about , during which distance it descends , draining an area of , almost entirely within Switzerland, and accounting for close to half the area of the country, including all of Central Switzerland.
There are more than 40 hydroelectric plants along the course of the Aare.
The river's name dates to at least the La Tène period, and it is attested as "Nantaror" ("Aare valley") in the Berne zinc tablet.
The name was Latinized as "Arula"/"Arola"/"Araris".
The Aare rises in the great Aargletschers (Aare Glaciers) of the Bernese Alps, in the canton of Bern and west of the Grimsel Pass. The Finsteraargletscher and Lauteraargletscher come together to form the Unteraargletscher (Lower Aar Glacier), which is the main source of water for the Grimselsee (Lake of Grimsel). The Oberaargletscher (Upper Aar Glacier) feeds the Oberaarsee, which also flows into the Grimselsee. The Aare leaves the Grimselsee just to the east of the Grimsel Hospiz, below the Grimsel Pass, and then flows northwest through the Haslital, forming on the way the magnificent Handegg Waterfall, , past Guttannen.
Right after Innertkirchen it is joined by its first major tributary, the Gadmerwasser. Less than later the river carves through a limestone ridge in the Aare Gorge (). It is here that the Aare proves itself to be more than just a river, as it attracts thousands of tourists annually to the causeways through the gorge. A little past Meiringen, near Brienz, the river expands into Lake Brienz. Near the west end of the lake it indirectly receives its first important tributary, the Lütschine, by the Lake of Brienz. It then runs across the swampy plain of the Bödeli (Swiss German diminutive for ground) between Interlaken and Unterseen before flowing into Lake Thun.
Near the west end of Lake Thun, the river indirectly receives the waters of the Kander, which has just been joined by the Simme, by the Lake of Thun. Lake Thun marks the head of navigation. On flowing out of the lake it passes through Thun, and then flows through the city of Bern, passing beneath eighteen bridges and around the steeply-flanked peninsula on which the Old City of Berne is located. The river soon changes its northwesterly flow for a due westerly direction, but after receiving the Saane or La Sarine it turns north until it nears Aarberg. There, in one of the major Swiss engineering feats of the 19th century, the Jura water correction, the river, which had previously rendered the countryside north of Bern a swampland through frequent flooding, was diverted by the Aare-Hagneck Canal into the Lac de Bienne. From the upper end of the lake, at Nidau, the river issues through the Nidau-Büren Canal, also called the Aare Canal, and then runs east to Büren. The lake absorbs huge amounts of eroded gravel and snowmelt that the river brings from the Alps, and the former swamps have become fruitful plains: they are known as the "vegetable garden of Switzerland".
From here the Aare flows northeast for a long distance, past the ambassador town Solothurn (below which the Grosse Emme flows in on the right), Aarburg (where it is joined by the Wigger), Olten, Aarau, near which is the junction with the Suhre, and Wildegg, where the Seetal Aabach falls in on the right. A short distance further, below Brugg, it receives first the Reuss, its major tributary, and shortly afterwards the Limmat, its second strongest tributary. It now turns to the north, and soon becomes itself a tributary of the Rhine, which it even surpasses in volume when the two rivers unite downstream from Koblenz (Switzerland), opposite Waldshut in Germany. The Rhine, in turn, empties into the North Sea after crossing into the Netherlands.
Abbotsford House
Abbotsford is a historic country house in the Scottish Borders, near Galashiels, on the south bank of the River Tweed. It was formerly the residence of historical novelist and poet, Sir Walter Scott. It is a Category A Listed Building and the estate is listed in the Inventory of Gardens and Designed Landscapes in Scotland.
The nucleus of the estate was a small farm of , called Cartleyhole, nicknamed Clarty (i.e., muddy) Hole, and was bought by Scott on the lapse of his lease (1811) of the neighbouring house of Ashestiel. Scott renamed it "Abbotsford" after a neighbouring ford used by the monks of Melrose Abbey. Following a modest enlargement of the original farmhouse in 1811–12, massive expansions took place in 1816–19 and 1822–24. In this mansion Scott gathered a large library, a collection of ancient furniture, arms and armour, and other relics and curiosities especially connected with Scottish history, notably the Celtic Torrs Pony-cap and Horns and the Woodwrae Stone, all now in the Museum of Scotland. Scott described the resulting building as "a sort of romance in Architecture" and "a kind of Conundrum Castle to be sure".
The last and principal acquisition was that of Toftfield (afterwards named Huntlyburn), purchased in 1817. The new house was then begun and completed in 1824.
The general ground-plan is a parallelogram, with irregular outlines, one side overlooking the Tweed; and the style is mainly the Scottish Baronial. With his architects William Atkinson and Edward Blore, Scott was a pioneer of the Scottish Baronial style of architecture: the house is recognized as a highly influential creation with themes from Abbotsford being reflected across many buildings in the Scottish Borders and beyond. The manor as a whole appears as a "castle-in-miniature," with small towers and imitation battlements decorating the house and garden walls. Into various parts of the fabric were built relics and curiosities from historical structures, such as the doorway of the old Tolbooth in Edinburgh. Scott collected many of these curiosities to be built into the walls of the South Garden, which previously hosted a colonnade of gothic arches along the garden walls. Along the path of the former colonnade sits the remains of Edinburgh's 15th century Mercat Cross and several examples of classical sculpture.
The estate and its neo-Medieval features nod towards Scott's desire for a historical feel, but the writer ensured that the house would provide all the comforts of modern living. As a result, Scott used the space as a proving-ground for new technologies. The house was outfitted with early gas lighting and pneumatic bells connecting residents with servants elsewhere in the house.
Scott had enjoyed his residence for only a year when, in 1825, he met with the reverse of fortune that involved the estate in debt. In 1830, the library and museum were presented to him as a free gift by the creditors. The property was wholly disencumbered in 1847 by Robert Cadell, the publisher, who cancelled the bond upon it in exchange for the family's share in the copyright of Sir Walter's works.
Scott's only son Walter did not live to enjoy the property, having died on his way from India in 1847. Among subsequent possessors were Scott's grandson Walter Scott Lockhart (later Walter Lockhart Scott, 1826–1853), his younger sister Charlotte Harriet Jane Hope-Scott (née Lockhart) 1828–1858, J. R. Hope Scott, QC, and his daughter (Scott's great-granddaughter), the Hon. Mrs Maxwell Scott.
The house was opened to the public in 1833, but continued to be occupied by Scott's descendants until 2004. The last of his direct descendants to hold the Lairdship of Abbotsford was his great-great-great-granddaughter Dame Jean Maxwell-Scott (8 June 1923 – 5 May 2004). She inherited it from her elder sister Patricia Maxwell-Scott in 1998. The sisters turned the house into one of Scotland's premier tourist attractions, after they had to rely on paying visitors to afford the upkeep of the house. It had electricity installed only in 1962.
Dame Jean was at one time a lady-in-waiting to Princess Alice, Duchess of Gloucester, patron of the Dandie Dinmont Club, a breed of dog named after one of Sir Walter Scott's characters; and a horse trainer, one of whose horses, Sir Wattie, ridden by Ian Stark, won two silver medals at the 1988 Summer Olympics.
On Dame Jean's death the Abbotsford Trust was established to safeguard the estate.
In 2005, Scottish Borders Council considered an application by a property developer to build a housing estate on the opposite bank of the River Tweed from Abbotsford, to which Historic Scotland and the National Trust for Scotland objected. There have been modifications to the proposed development, but it is still being opposed in 2020.
Sir Walter Scott rescued the "jougs" from Threave Castle in Dumfries and Galloway and attached them to the castellated gateway he built at Abbotsford.
Tweedbank railway station is located near to Abbotsford House.
Abbotsford gave its name to the Abbotsford Club, founded by William Barclay Turnbull in 1833 or 1834 in Scott's honour, and a successor to the Bannatyne and Maitland Clubs. It was a text publication society, which existed to print and publish historical works connected with Scott's writings. Its publications extended from 1835 to 1864.
In 2012, a new Visitor Centre opened at Abbotsford which houses a small exhibition, gift shop and Ochiltree's Dining, a café/restaurant with views over the house and grounds. The house re-opened to the public after extensive renovations in 2013.
In 2014 it won the European Union Prize for Cultural Heritage / Europa Nostra Award for its recent conservation project.
Abraham
Abraham (originally Abram) is the common patriarch of the Abrahamic religions, including Judaism, Christianity and Islam. In Judaism, he is the founding father of the covenant of the pieces, the special relationship between the Hebrews and God; in Christianity, he is the prototype of all believers, Jewish or Gentile; and in Islam he is seen as a link in the chain of prophets that begins with Adam and culminates in Muhammad.
The narrative in the Book of Genesis revolves around the themes of posterity and land. Abraham is called by God to leave the house of his father Terah and settle in the land originally given to Canaan but which God now promises to Abraham and his progeny. Various candidates are put forward who might inherit the land after Abraham; and, while promises are made to Ishmael about founding a great nation, Isaac, Abraham's son by his half-sister Sarah, inherits God's promises to Abraham. Abraham purchases a tomb (the Cave of the Patriarchs) at Hebron to be Sarah's grave, thus establishing his right to the land; and, in the second generation, his heir Isaac is married to a woman from his own kin, thus ruling the Canaanites out of any inheritance. Abraham later marries Keturah and has six more sons; but, on his death, when he is buried beside Sarah, it is Isaac who receives "all Abraham's goods", while the other sons receive only "gifts" (Genesis 25:5–8).
The Abraham story cannot be definitively related to any specific time, and it is widely agreed that the patriarchal age, along with the exodus and the period of the judges, is a late literary construct that does not relate to any period in actual history. A common hypothesis among scholars is that it was composed in the early Persian period (late 6th century BCE) as a result of tensions between Jewish landowners who had stayed in Judah during the Babylonian captivity and traced their right to the land through their "father Abraham", and the returning exiles who based their counter-claim on Moses and the Exodus tradition.
Terah, the ninth in descent from Noah, was the father of three sons: Abram, Nahor, and Haran. The entire family, including grandchildren, lived in Ur of the Chaldees. According to a midrash, Abram worked in Terah's idol shop in his youth. Haran was the father of Lot, and thus Lot was Abram's nephew. Haran died in his native city, Ur of the Chaldees.
Abram married Sarah (Sarai), who was barren. Terah, with Abram, Sarai, and Lot, then departed for Canaan, but settled in a place named Haran, where Terah died at the age of 205. God had told Abram to leave his country and kindred and go to a land that he would show him, and promised to make of him a great nation, bless him, make his name great, bless them that bless him, and curse them who may curse him. Abram was 75 years old when he left Haran with his wife Sarai, his nephew Lot, and the substance and souls that they had acquired, and traveled to Shechem in Canaan.
There was a severe famine in the land of Canaan, so that Abram and Lot and their households traveled to Egypt. On the way Abram told Sarai to say that she was his sister, so that the Egyptians would not kill him. When they entered Egypt, the Pharaoh's officials praised Sarai's beauty to Pharaoh, and they took her into the palace and gave Abram goods in exchange. God afflicted Pharaoh and his household with plagues, which led Pharaoh to try to find out what was wrong. Upon discovering that Sarai was a married woman, Pharaoh demanded that Abram and Sarai leave.
When they came back to the Bethel and Hai area, Abram's and Lot's sizable herds occupied the same pastures. This became a problem for the herdsmen who were assigned to each family's cattle. The conflicts between herdsmen had become so troublesome that Abram suggested that Lot choose a separate area, either on the left hand or on the right hand, that there be no conflict amongst brethren. Lot chose to go eastward to the plain of Jordan where the land was well watered everywhere as far as Zoar, and he dwelled in the cities of the plain toward Sodom. Abram went south to Hebron and settled in the plain of Mamre, where he built another altar to worship God.
During the rebellion of the Jordan River cities against Elam, Abram's nephew, Lot, was taken prisoner along with his entire household by the invading Elamite forces. The Elamite army came to collect the spoils of war, after having just defeated the king of Sodom's armies. Lot and his family, at the time, were settled on the outskirts of the Kingdom of Sodom which made them a visible target.
One person who escaped capture came and told Abram what happened. Once Abram received this news, he immediately assembled 318 trained servants. Abram's force headed north in pursuit of the Elamite army, who were already worn down from the Battle of Siddim. When they caught up with them at Dan, Abram devised a battle plan by splitting his group into more than one unit, and launched a night raid. Not only were they able to free the captives, Abram's unit chased and slaughtered the Elamite King Chedorlaomer at Hobah, just north of Damascus. They freed Lot, as well as his household and possessions, and recovered all of the goods from Sodom that had been taken.
Upon Abram's return, Sodom's king came out to meet with him in the Valley of Shaveh, the "king's dale". Also, Melchizedek king of Salem (Jerusalem), a priest of God Most High, brought out bread and wine and blessed Abram and God. Abram then gave Melchizedek a tenth of everything. The king of Sodom then offered to let Abram keep all the possessions if he would merely return his people. Abram refused any deal from the king of Sodom, other than the share to which his allies were entitled.
The voice of the Lord came to Abram in a vision and repeated the promise of the land and descendants as numerous as the stars. Abram and God made a covenant ceremony, and God told of the future bondage of Israel in Egypt. God described to Abram the land that his offspring would claim: the land of the Kenites, Kenizzites, Kadmonites, Hittites, Perizzites, Rephaims, Amorites, Canaanites, Girgashites, and Jebusites.
Abram and Sarai tried to make sense of how he would become a progenitor of nations, because after 10 years of living in Canaan, no child had been born. Sarai then offered her Egyptian handmaiden, Hagar, to Abram with the intention that she would bear him a son.
After Hagar found she was pregnant, she began to despise her mistress, Sarai. Sarai responded by mistreating Hagar, and Hagar fled into the wilderness. An angel spoke with Hagar at the fountain on the way to Shur. He instructed her to return to the camp of Abram, and that her son would be "a wild ass of a man; his hand shall be against every man, and every man's hand against him; and he shall dwell in the face of all his brethren." She was told to call her son Ishmael. Hagar then called God who spoke to her "El-roi", ("Thou God seest me:" KJV). From that day onward, the well was called Beer-lahai-roi, ("The well of him that liveth and seeth me." KJV margin). She then did as she was instructed by returning to her mistress in order to have her child. Abram was 86 years of age when Ishmael was born.
Thirteen years later, when Abram was 99 years of age, God declared Abram's new name: "Abraham" – "a father of many nations". Abraham then received the instructions for the covenant, of which circumcision was to be the sign.
God declared Sarai's new name: "Sarah", blessed her, and told Abraham, "I will give thee a son also of her". Abraham laughed, and "said in his heart, 'Shall a "child" be born unto him that is a hundred years old? and shall Sarah, that is ninety years old, bear?'" Immediately after Abraham's encounter with God, he had his entire household of men, including himself (age 99) and Ishmael (age 13), circumcised.
Not long afterward, during the heat of the day, Abraham had been sitting at the entrance of his tent by the terebinths of Mamre. He looked up and saw three men in the presence of God. Then he ran and bowed to the ground to welcome them. Abraham then offered to wash their feet and fetch them a morsel of bread, to which they assented. Abraham rushed to Sarah's tent to order cakes made from choice flour, then he ordered a servant-boy to prepare a choice calf. When all was prepared, he set curds, milk and the calf before them, waiting on them, under a tree, as they ate.
One of the visitors told Abraham that upon his return next year, Sarah would have a son. While at the tent entrance, Sarah overheard what was said and she laughed to herself about the prospect of having a child at their ages. The visitor inquired of Abraham why Sarah laughed at bearing a child at her age, as nothing is too hard for God. Frightened, Sarah denied laughing.
After eating, Abraham and the three visitors got up. They walked over to the peak that overlooked the 'cities of the plain' to discuss the fate of Sodom and Gomorrah for their detestable sins that were so great, it moved God to action. Because Abraham's nephew was living in Sodom, God revealed plans to confirm and judge these cities. At this point, the two other visitors left for Sodom. Then Abraham turned to God and pleaded decrementally with Him (from fifty persons to less) that "if there were at least ten righteous men found in the city, would not God spare the city?" For the sake of ten righteous people, God declared that he would not destroy the city.
When the two visitors got to Sodom to conduct their report, they planned on staying in the city square. However, Abraham's nephew, Lot, met with them and strongly insisted that these two "men" stay at his house for the night. A rally of men stood outside of Lot's home and demanded that Lot bring out his guests so that they may "know" (v.5) them. However, Lot objected and offered his virgin daughters who had not "known" (v.8) man to the rally of men instead. They rejected that notion and sought to break down Lot's door to get to his male guests, thus confirming the wickedness of the city and portending their imminent destruction.
Early the next morning, Abraham went to the place where he stood before God. He "looked out toward Sodom and Gomorrah" and saw what became of the cities of the plain, where not even "ten righteous" (v.18:32) had been found, as "the smoke of the land went up as the smoke of a furnace."
Abraham settled between Kadesh and Shur in the land of the Philistines. While he was living in Gerar, Abraham openly claimed that Sarah was his sister. Upon discovering this news, King Abimelech had her brought to him. God then came to Abimelech in a dream and declared that taking her would result in death because she was a man's wife. Abimelech had not laid hands on her, so he inquired if he would also slay a righteous nation, especially since Abraham had claimed that he and Sarah were siblings. In response, God told Abimelech that he did indeed have a blameless heart and that is why he continued to exist. However, should he not return the wife of Abraham back to him, God would surely destroy Abimelech and his entire household. Abimelech was informed that Abraham was a prophet who would pray for him.
Early next morning, Abimelech informed his servants of his dream and approached Abraham inquiring as to why he had brought such great guilt upon his kingdom. Abraham stated that he thought there was no fear of God in that place, and that they might kill him for his wife. Then Abraham defended what he had said as not being a lie at all: "And yet indeed "she is" my sister; she "is" the daughter of my father, but not the daughter of my mother; and she became my wife." Abimelech returned Sarah to Abraham, and gave him gifts of sheep, oxen, and servants; and invited him to settle wherever he pleased in Abimelech's lands. Further, Abimelech gave Abraham a thousand pieces of silver to serve as Sarah's vindication before all. Abraham then prayed for Abimelech and his household, since God had stricken the women with infertility because of the taking of Sarah.
After living for some time in the land of the Philistines, Abimelech and Phicol, the chief of his troops, approached Abraham because of a dispute that resulted in a violent confrontation at a well. Abraham then reproached Abimelech due to his Philistine servant's aggressive attacks and the seizing of Abraham's well. Abimelech claimed ignorance of the incident. Then Abraham offered a pact by providing sheep and oxen to Abimelech. Further, to attest that Abraham was the one who dug the well, he also gave Abimelech seven ewes for proof. Because of this sworn oath, they called the place of this well: Beersheba. After Abimelech and Phicol headed back to Philistia, Abraham planted a grove in Beersheba and called upon "the name of the LORD, the everlasting God."
As had been prophesied in Mamre the previous year, Sarah became pregnant and bore a son to Abraham on the first anniversary of the covenant of circumcision. Abraham was "an hundred years old" when his son, whom he named Isaac, was born; and he circumcised him when he was eight days old. For Sarah, the thought of giving birth and nursing a child at such an old age also brought her much laughter, as she declared, "God hath made me to laugh, so that all who hear will laugh with me." Isaac continued to grow, and on the day he was weaned Abraham held a great feast to honor the occasion. During the celebration, however, Sarah found Ishmael mocking, an observation that would begin to clarify the birthright of Isaac.
Ishmael was fourteen years old when Abraham's son Isaac was born to Sarah. When she found Ishmael teasing Isaac, Sarah told Abraham to send both Ishmael and Hagar away. She declared that Ishmael would not share in Isaac's inheritance. Abraham was greatly distressed by his wife's words and sought the advice of his God. God told Abraham not to be distressed but to do as his wife commanded. God reassured Abraham that "in Isaac shall seed be called to thee." He also said that Ishmael would make a nation, "because he is thy seed".
Early the next morning, Abraham brought Hagar and Ishmael out together. He gave her bread and water and sent them away. The two wandered in the wilderness of Beersheba until her bottle of water was completely consumed. In a moment of despair, she burst into tears. After God heard the boy's voice, an angel of the Lord confirmed to Hagar that he would become a great nation and would be "living on his sword". A well of water then appeared, saving their lives. As the boy grew, he became a skilled archer living in the wilderness of Paran. Eventually his mother found a wife for Ishmael from her home country, the land of Egypt.
At some point in Isaac's youth, Abraham was commanded by God to offer his son up as a sacrifice in the land of Moriah. The patriarch traveled three days until he came to the mount that God told him of. He then commanded the servants to remain while he and Isaac proceeded alone into the mount. Isaac carried the wood upon which he would be sacrificed. Along the way, Isaac asked his father where the animal for the burnt offering was, to which Abraham replied "God will provide himself a lamb for a burnt offering". Just as Abraham was about to sacrifice his son, he was interrupted by the angel of the Lord, and he saw behind him a "ram caught in a thicket by his horns", which he sacrificed instead of his son. For his obedience he received another promise of numerous descendants and abundant prosperity. After this event, Abraham went to Beersheba.
Sarah died, and Abraham buried her in the Cave of the Patriarchs (the "cave of Machpelah") near Hebron, which he had purchased along with the adjoining field from Ephron the Hittite. After the death of Sarah, Abraham took another wife, a concubine named Keturah, by whom he had six sons: Zimran, Jokshan, Medan, Midian, Ishbak, and Shuah. According to the Bible, reflecting the change of his name to "Abraham", meaning "a father of many nations", Abraham is considered to be the progenitor of many nations mentioned in the Bible, among others the Israelites, Ishmaelites, Edomites, Amalekites, Kenizzites, Midianites and Assyrians, and through his nephew Lot he was also related to the Moabites and Ammonites. Abraham lived to see his son marry Rebekah (and to see the birth of his twin grandsons Jacob and Esau). He died at age 175, and was buried in the cave of Machpelah by his sons Isaac and Ishmael.
In the early and middle 20th century, leading archaeologists such as William F. Albright and biblical scholars such as Albrecht Alt believed that the patriarchs and matriarchs were either real individuals or believable composites of people who lived in the "patriarchal age", the 2nd millennium BCE. But, in the 1970s, new arguments concerning Israel's past and the biblical texts challenged these views; these arguments can be found in Thomas L. Thompson's "The Historicity of the Patriarchal Narratives" (1974), and John Van Seters' "Abraham in History and Tradition" (1975). Thompson, a literary scholar, based his argument on archaeology and ancient texts. His thesis centered on the lack of compelling evidence that the patriarchs lived in the 2nd millennium BCE, and noted how certain biblical texts reflected first millennium conditions and concerns. Van Seters examined the patriarchal stories and argued that their names, social milieu, and messages strongly suggested that they were Iron Age creations. By the beginning of the 21st century, archaeologists had given up hope of recovering any context that would make Abraham, Isaac or Jacob credible historical figures.
According to the "Zondervan Illustrated Bible Dictionary", the descent of Abraham into Egypt as recorded in Genesis 12:10-20 should correspond to the early years of the 2nd millennium BCE, before the time the Hyksos ruled in Egypt, but would coincide with the Semitic parties known to have visited the Egyptians circa 1900 BCE, as documented in the paintings of the tomb of Khnumhotep II at Beni Hasan. It might be possible to associate Abraham with such known Semitic visitors to Egypt, as they would have been ethnically connected.
Abraham's name is apparently very ancient, as the tradition found in Genesis no longer understands its original meaning (probably "Father is exalted" – the meaning offered in Genesis 17:5, "Father of a multitude", is a popular etymology). The story, like those of the other patriarchs, most likely had a substantial oral prehistory. At some stage the oral traditions became part of the written tradition of the Pentateuch; a majority of scholars believe this stage belongs to the Persian period, roughly 520–320 BCE. The mechanisms by which this came about remain unknown, but there are currently two important hypotheses. The first, called Persian Imperial authorisation, is that the post-Exilic community devised the Torah as a legal basis on which to function within the Persian Imperial system; the second is that the Pentateuch was written to provide the criteria for determining who would belong to the post-Exilic Jewish community and to establish the power structures and relative positions of its various groups, notably the priesthood and the lay "elders".
Nevertheless, the completion of the Torah and its elevation to the centre of post-Exilic Judaism was as much or more about combining older texts as writing new ones – the final Pentateuch was based on existing traditions. In Ezekiel, written during the Exile (i.e., in the first half of the 6th century BCE), Ezekiel, an exile in Babylon, tells how those who remained in Judah are claiming ownership of the land based on inheritance from Abraham; but the prophet tells them they have no claim because they do not observe Torah. Isaiah similarly testifies of tension between the people of Judah and the returning post-Exilic Jews (the "gôlâ"), stating that God is the father of Israel and that Israel's history begins with the Exodus and not with Abraham. The conclusion to be inferred from this and similar evidence (e.g., Ezra–Nehemiah) is that the figure of Abraham must have been preeminent among the great landowners of Judah at the time of the Exile and after, serving to support their claims to the land in opposition to those of the returning exiles.
Abraham is given a high position of respect in three major world faiths, Judaism, Christianity and Islam. In Judaism he is the founding father of the Covenant, the special relationship between the Jewish people and God – leading to the belief that the Jews are the Chosen People of God. In Christianity, the Apostle Paul taught that Abraham's faith in God – preceding the Mosaic law – made him the prototype of all believers, circumcised and uncircumcised. In Islam, the prophet Muhammad claimed Abraham, whose submission to God constituted "Islam", was a "believer before the fact" and undercut Jewish claims to an exclusive relationship with God and the Covenant.
In Jewish tradition, Abraham is called "Avraham Avinu" (אברהם אבינו), "our father Abraham," signifying that he is both the biological progenitor of the Jews and the father of Judaism, the first Jew. His story is read in the weekly Torah reading portions, predominantly in the parashot: Lech-Lecha (לֶךְ-לְךָ), Vayeira (וַיֵּרָא), Chayei Sarah (חַיֵּי שָׂרָה), and Toledot (תּוֹלְדֹת).
In Jewish legend, God created heaven and earth for the sake of the merits of Abraham. After the deluge, Abraham was the only one among the pious who solemnly swore never to forsake God. He studied in the house of Noah and Shem to learn the "Ways of God," and continued the line of the High Priesthood from Noah and Shem, later passing the office down to Levi and his seed forever. Before leaving his fathers' land, Abraham was miraculously saved from the fiery furnace of Nimrod after his brave act of breaking the idols of the Chaldeans into pieces. During his sojourn in Canaan, Abraham was accustomed to extending hospitality to travelers and strangers, and taught the praise and knowledge of God to those who had received his kindness.
Besides Isaac and Jacob, he is the one whose name appears united with God's, as God in Judaism is called "Elohei Abraham, Elohei Yitzchaq ve Elohei Ya`aqob" ("God of Abraham, God of Isaac, and God of Jacob") and never the God of anyone else. He is also mentioned as the father of thirty nations.
Abraham is generally credited as the author of the "Sefer Yetzirah" ("The Book of Creation"), one of the earliest extant books on Jewish mysticism.
Abraham does not loom so large in Christianity as he does in Judaism and Islam. It is Jesus as the Jewish Messiah who is central to Christianity, and the idea of a divine Messiah is what separates Christianity from the other two religions. In Romans 4, Abraham's merit is less his obedience to the divine will than his faith in God's ultimate grace; this faith provides him the merit for God having chosen him for the covenant, and the covenant becomes one of faith, not obedience.
The Roman Catholic Church calls Abraham "our father in Faith" in the Eucharistic prayer of the Roman Canon, recited during the Mass (see "Abraham in the Catholic liturgy"). He is also commemorated in the calendars of saints of several denominations: on 20 August by the Maronite Church, 28 August in the Coptic Church and the Assyrian Church of the East (with the full office for the latter), and on 9 October by the Roman Catholic Church and the Lutheran Church–Missouri Synod. In the introduction to his 15th-century translation of the Golden Legend's account of Abraham, William Caxton noted that this patriarch's life was read in church on Quinquagesima Sunday.
He is the patron saint of those in the hospitality industry. The Eastern Orthodox Church commemorates him as the "Righteous Forefather Abraham", with two feast days in its liturgical calendar. The first is on 9 October (for those churches which follow the traditional Julian Calendar, 9 October falls on 22 October of the modern Gregorian Calendar), when he is commemorated together with his nephew "Righteous Lot". The other is on the "Sunday of the Forefathers" (two Sundays before Christmas), when he is commemorated together with other ancestors of Jesus. Abraham is also mentioned in the Divine Liturgy of Saint Basil the Great, just before the Anaphora, and Abraham and Sarah are invoked in the prayers said by the priest over a newly married couple.
Islam regards Abraham as a link in the chain of prophets that begins with Adam and culminates in Muhammad.
Ibrāhīm is mentioned in 35 chapters of the Quran, more often than any other biblical personage apart from Moses. He is called both a "hanif" (monotheist) and "muslim" (one who submits), and Muslims regard him as a prophet and patriarch, the archetype of the perfect Muslim, and the revered reformer of the Kaaba in Mecca. Islamic traditions consider Ibrāhīm (Abraham) the first Pioneer of Islam (which is also called "millat Ibrahim", the "religion of Abraham"), and that his purpose and mission throughout his life was to proclaim the Oneness of God. In Islam, Abraham holds an exalted position among the major prophets and he is referred to as "Ibrahim Khalilullah", meaning "Abraham the Beloved of Allah".
Besides Ishaq and Yaqub, Ibrahim is among the most honorable and the most excellent men in the sight of God. He is also mentioned in the Quran as the "Father of Muslims" and a role model for the community.
Paintings on the life of Abraham tend to focus on only a few incidents: the sacrifice of Isaac; meeting Melchizedek; entertaining the three angels; Hagar in the desert; and a few others. Additionally, Martin O'Kane, a professor of Biblical Studies, writes that the parable of Lazarus resting in the "Bosom of Abraham", as described in the Gospel of Luke, became an iconic image in Christian works. According to O'Kane, artists often chose to divert from the common literary portrayal of Lazarus sitting next to Abraham at a banquet in Heaven and instead focus on the "somewhat incongruous notion of Abraham, the most venerated of patriarchs, holding a naked and vulnerable child in his bosom". Several artists have been inspired by the life of Abraham, including Albrecht Dürer (1471–1528), Caravaggio (1573–1610), Donatello, Raphael, Philip van Dyck (Dutch painter, 1680–1753), and Claude Lorrain (French painter, 1600–1682). Rembrandt (Dutch, 1606–1669) created at least seven works on Abraham, Peter Paul Rubens (1577–1640) did several, Marc Chagall did at least five on Abraham, Gustave Doré (French illustrator, 1832–1883) did six, and James Tissot (French painter and illustrator, 1836–1902) did over twenty works on the subject.
The Sarcophagus of Junius Bassus depicts a set of biblical stories, including Abraham about to sacrifice Isaac. These sculpted scenes are on the outside of a marble Early Christian sarcophagus used for the burial of Junius Bassus. He died in 359. This sarcophagus has been described as "probably the single most famous piece of early Christian relief sculpture." The sarcophagus was originally placed in or under Old St. Peter's Basilica, was rediscovered in 1597, and is now below the modern basilica in the Museo Storico del Tesoro della Basilica di San Pietro (Museum of St. Peter's Basilica) in the Vatican. The base is approximately 4 × 8 × 4 feet. The Old Testament scenes depicted were chosen as precursors of Christ's sacrifice in the New Testament, in an early form of typology. Just to the right of the middle is Daniel in the lion's den and on the left is Abraham about to sacrifice Isaac.
George Segal created figural sculptures by molding plastered gauze strips over live models in his 1987 work "Abraham's Farewell to Ishmael". The human condition was central to his concerns, and Segal used the Old Testament as a source for his imagery. This sculpture depicts the dilemma faced by Abraham when Sarah demanded that he expel Hagar and Ishmael. In the sculpture, the father's tenderness, Sarah's rage, and Hagar's resigned acceptance portray a range of human emotions. The sculpture was donated to the Miami Art Museum after the artist's death in 2000.
Usually Abraham can be identified by the context of the image, such as the meeting with Melchizedek. In solo portraits a sword or knife may be used as his attribute, as in works by Gian Maria Morlaiter or Lorenzo Monaco. He is always depicted with a gray or white beard.
As early as the beginning of the 3rd century, Christian art followed Christian typology in making the sacrifice of Isaac a foreshadowing of Christ's sacrifice on the cross and its memorial in the sacrifice of the Mass. See, for example, objects engraved with Abraham's and other sacrifices taken to prefigure that of Christ in the Eucharist.
Some early Christian writers interpreted the three visitors as the triune God. Thus a mosaic in Santa Maria Maggiore, Rome, portrays only the visitors against a gold ground and puts semitransparent copies of them in the "heavenly" space above the scene. In Eastern Orthodox art the visit is the chief means by which the Trinity is pictured. Some images do not include Abraham and Sarah, like Andrei Rublev's "Trinity", which shows only the three visitors as beardless youths at a table.
"Fear and Trembling" (original Danish title: "Frygt og Bæven") is an influential philosophical work by Søren Kierkegaard, published in 1843 under the pseudonym "Johannes de silentio" ("John the Silent"). Kierkegaard wanted to understand the anxiety that must have been present in Abraham when God asked him to sacrifice his son. W. G. Hardy's novel "Father Abraham" (1935), tells the fictionalized life of Abraham.
In 1681, Marc-Antoine Charpentier composed a dramatic motet, "Sacrificium Abrahae" (H 402, 402 a, 402 b), for soloists, chorus, doubling instruments and basso continuo. Sébastien de Brossard composed a cantata, "Abraham" (date unknown).
In 1994, Steve Reich released an opera named "The Cave". The title refers to the Cave of the Patriarchs. The narrative of the opera is based on the story of Abraham and his immediate family as it is recounted in the various religious texts, and as it is understood by individual people from different cultures and religious traditions.
Bob Dylan's "Highway 61 Revisited" is the title track for his 1965 album "Highway 61 Revisited". In 2004, "Rolling Stone" magazine ranked the song as number 364 in their 500 Greatest Songs of All Time. The song has five stanzas. In each stanza, someone describes an unusual problem that is ultimately resolved on Highway 61. In Stanza 1, God tells Abraham to "kill me a son". God wants the killing done on Highway 61. Abram, the original name of the biblical Abraham, is also the name of Dylan's own father. | https://en.wikipedia.org/wiki?curid=1436 |
Abraxas
Abraxas (, variant form Abrasax, ΑΒΡΑΣΑΞ) is a word of mystic meaning in the system of the Gnostic Basilides, being there applied to the "Great Archon" (Gk., "megas archōn"), the princeps of the 365 spheres (Gk., "ouranoi"). The word is found in Gnostic texts such as the "Holy Book of the Great Invisible Spirit", and also appears in the Greek Magical Papyri. It was engraved on certain antique gemstones, called on that account Abraxas stones, which were used as amulets or charms. As the initial spelling on stones was "Abrasax" (Αβρασαξ), the spelling "Abraxas" seen today probably originates in confusion between the Greek letters Sigma (Σ) and Xi (Ξ) in Latin transliteration.
The seven letters spelling its name may represent each of the seven classic planets. The word may be related to "Abracadabra", although other explanations exist.
There are similarities and differences between such figures in reports about Basilides's teaching, ancient Gnostic texts, the larger Greco-Roman magical traditions, and modern magical and esoteric writings. Speculation about Abraxas has proliferated in recent centuries; he has been claimed to be both an Egyptian god and a demon.
Gaius Julius Hyginus ("Fab". 183) gives "Abrax Aslo Therbeeo" as names of horses of the sun mentioned by 'Homerus.' The passage is miserably corrupt: but it may not be accidental that the first three syllables make Abraxas.
The proper form of the name is evidently "Abrasax", as with the Greek writers, Hippolytus, Epiphanius, Didymus ("De Trin". iii. 42), and Theodoret; also Augustine and 'Praedestinatus'; and in nearly all the legends on gems. By a probably euphonic inversion the translator of Irenaeus and the other Latin authors have "Abraxas", which is found in the magical papyri, and even, though most sparingly, on engraved stones.
The attempts to discover a derivation for the name, Greek, Hebrew, Coptic, or other, have not been entirely successful:
Perhaps the word may be included among those mysterious expressions discussed by Adolf von Harnack, “which belong to no known speech, and by their singular collocation of vowels and consonants give evidence that they belong to some mystic dialect, or take their origin from some supposed divine inspiration.”
The Egyptian author of the book "De Mysteriis" in reply to Porphyry (vii. 4) admits a preference of 'barbarous' to vernacular names in sacred things, urging a peculiar sanctity in the languages of certain nations, as the Egyptians and Assyrians; and Origen ("Contra Cels". i. 24) refers to the 'potent names' used by Egyptian sages, Persian Magi, and Indian Brahmins, signifying deities in the several languages.
It is uncertain what the actual role and function of Abraxas was in the Basilidian system, as our authorities (see below) often show no direct acquaintance with the doctrines of Basilides himself.
In the system described by Irenaeus, "the Unbegotten Father" is the progenitor of "Nous", and from "Nous Logos", from "Logos Phronesis", from "Phronesis Sophia" and "Dynamis", from "Sophia" and "Dynamis" principalities, powers, and angels, the last of whom create "the first heaven." They in turn originate a second series, who create a second heaven. The process continues in like manner until 365 heavens are in existence, the angels of the last or visible heaven being the authors of our world. "The ruler" ["principem, i.e.", probably "ton archonta"] of the 365 heavens "is Abraxas, and for this reason he contains within himself 365 numbers."
The name occurs in the "Refutation of all Heresies" (vii. 26) by Hippolytus, who appears in these chapters to have followed the "Exegetica" of Basilides. After describing the manifestation of the Gospel in the Ogdoad and Hebdomad, he adds that the Basilidians have a long account of the innumerable creations and powers in the several 'stages' of the upper world ("diastemata"), in which they speak of 365 heavens and say that "their great archon" is Abrasax, because his name contains the number 365, the number of the days in the year; i.e. the sum of the numbers denoted by the Greek letters in ΑΒΡΑΣΑΞ according to the rules of isopsephy is 365: Α = 1, Β = 2, Ρ = 100, Α = 1, Σ = 200, Α = 1, Ξ = 60, and 1 + 2 + 100 + 1 + 200 + 1 + 60 = 365.
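The letter-sum behind this claim can be checked with a short Python sketch (an illustration only, not part of the ancient sources; the letter values used are the standard Greek numeral values for the letters that occur in the name):

```python
# Greek isopsephy assigns each letter a numeral value; a word's value
# is the sum of its letters. Only the letters of ΑΒΡΑΣΑΞ are mapped here.
GREEK_VALUES = {"Α": 1, "Β": 2, "Ρ": 100, "Σ": 200, "Ξ": 60}

def isopsephy(word: str) -> int:
    """Sum the numeral values of the Greek letters in `word`."""
    return sum(GREEK_VALUES[letter] for letter in word)

print(isopsephy("ΑΒΡΑΣΑΞ"))  # 1 + 2 + 100 + 1 + 200 + 1 + 60 = 365
```

The same arithmetic underlies the magical papyri's pairing of the name's seven letters with the seven planets and its value of 365 with the days of the year, discussed below.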
Epiphanius ("Haer". 69, 73 f.) appears to follow partly Irenaeus, partly the lost Compendium of Hippolytus. He designates Abraxas more distinctly as "the power above all, and First Principle," "the cause and first archetype" of all things; and mentions that the Basilidians referred to 365 as the number of parts ("mele") in the human body, as well as of days in the year.
The author of the appendix to Tertullian "De Praescr. Haer". (c. 4), who likewise follows Hippolytus's Compendium, adds some further particulars; that 'Abraxas' gave birth to Mind ("nous"), the first in the series of primary powers enumerated likewise by Irenaeus and Epiphanius; that the world, as well as the 365 heavens, was created in honour of 'Abraxas;' and that Christ was sent not by the Maker of the world but by 'Abraxas.'
Nothing can be built on the vague allusions of Jerome, according to whom 'Abraxas' meant for Basilides "the greatest God" ("De vir. ill". 21), "the highest God" ("Dial. adv. Lucif". 23), "the Almighty God" ("Comm. in Amos" iii. 9), and "the Lord the Creator" ("Comm. in Nah". i. 11). The notices in Theodoret ("Haer. fab". i. 4), Augustine ("Haer". 4), and 'Praedestinatus' (i. 3), have no independent value.
It is evident from these particulars that Abrasax was the name of the first of the 365 Archons, and accordingly stood below Sophia and Dynamis and their progenitors; but his position is not expressly stated, so that the writer of the supplement to Tertullian had some excuse for confusing him with "the Supreme God."
With the availability of primary sources, such as those in the Nag Hammadi library, the identity of Abrasax remains unclear. The "Holy Book of the Great Invisible Spirit", for instance, refers to Abrasax as an Aeon dwelling with Sophia and other Aeons of the Pleroma in the light of the luminary Eleleth. In several texts, the luminary Eleleth is the last of the luminaries (Spiritual Lights) that come forward, and it is the Aeon Sophia, associated with Eleleth, who encounters darkness and becomes involved in the chain of events that leads to the Demiurge's rule of this world, and the salvage effort that ensues. As such, the role of Aeons of Eleleth, including Abraxas, Sophia, and others, pertains to this outer border of the Pleroma that encounters the ignorance of the world of Lack and interacts to rectify the error of ignorance in the world of materiality.
The Catholic Church later deemed Abraxas a pagan god and ultimately branded him a demon. In J. Collin de Plancy's "Infernal Dictionary", Abraxas (or Abracax) is labeled the "supreme God" of the Basilidians, whom de Plancy describes as "heretics of the second century." He further indicates that the Basilidians attributed to Abraxas the rule over "365 skies" and "365 virtues". In a final statement on the Basilidians, de Plancy states that their view was that Jesus Christ was merely a "benevolent ghost sent on Earth by Abrasax."
A vast number of engraved stones are in existence, to which the name "Abrasax-stones" has long been given. One particularly fine example was included as part of the Thetford treasure from fourth century Norfolk, UK. The subjects are mythological, and chiefly grotesque, with various inscriptions, in which ΑΒΡΑΣΑΞ often occurs, alone or with other words. Sometimes the whole space is taken up with the inscription. In certain obscure magical writings of Egyptian origin ἀβραξάς or ἀβρασάξ is found associated with other names which frequently accompany it on gems; it is also found on the Greek metal "tesseræ" among other mystic words. The meaning of the legends is seldom intelligible: but some of the gems are amulets; and the same may be the case with nearly all.
In a great majority of instances the name Abrasax is associated with a singular composite figure, having a Chimera-like appearance somewhat resembling a basilisk or the Greek primordial god Chronos (not to be confused with the Greek titan Cronus). According to E. A. Wallis Budge, "as a Pantheus, i.e. All-God, he appears on the amulets with the head of a cock (Phœbus) or of a lion (Ra or Mithras), the body of a man, and his legs are serpents which terminate in scorpions, types of the Agathodaimon. In his right hand he grasps a club, or a flail, and in his left is a round or oval shield." This form was also referred to as the Anguipede. Budge surmised that Abrasax was "a form of the Adam Kadmon of the Kabbalists and the Primal Man whom God made in His own image."
Some parts at least of the figure mentioned above are solar symbols, and the Basilidian Abrasax is manifestly connected with the sun. J. J. Bellermann has speculated that "the whole represents the Supreme Being, with his Five great Emanations, each one pointed out by means of an expressive emblem. Thus, from the human body, the usual form assigned to the Deity, forasmuch as it is written that God created man in his own image, issue the two supporters, "Nous" and "Logos", symbols of the inner sense and the quickening understanding, as typified by the serpents, for the same reason that had induced the old Greeks to assign this reptile for an attribute to Pallas. His head—a cock's—represents "Phronesis", the fowl being emblematical of foresight and vigilance. His two hands bear the badges of "Sophia" and "Dynamis", the shield of Wisdom, and the scourge of Power."
In the absence of other evidence to show the origin of these curious relics of antiquity the occurrence of a name known as Basilidian on patristic authority has not unnaturally been taken as a sufficient mark of origin, and the early collectors and critics assumed this whole group to be the work of Gnostics. During the last three centuries attempts have been made to sift away successively those gems that had no claim to be considered in any sense Gnostic, or specially Basilidian, or connected with Abrasax. The subject is one which has exercised the ingenuity of many savants, but it may be said that all the engraved stones fall into three classes:
While it would be rash to assert positively that no existing gems were the work of Gnostics, there is no valid reason for attributing all of them to such an origin. The fact that the name occurs on these gems in connection with representations of figures with the head of a cock, a lion, or an ass, and the tail of a serpent was formerly taken in the light of what Irenaeus says about the followers of Basilides:
Incantations by mystic names were characteristic of the hybrid Gnosticism planted in Spain and southern Gaul at the end of the fourth century and at the beginning of the fifth, which Jerome connects with Basilides and which (according to his "Epist"., lxxv.) used the name Abrasax.
It is therefore not unlikely that some Gnostics used amulets, though the confident assertions of modern writers to this effect rest on no authority. Isaac de Beausobre properly calls attention to the significant silence of Clement in the two passages in which he instructs the Christians of Alexandria on the right use of rings and gems, and the figures which may legitimately be engraved on them ("Paed". 241 ff.; 287 ff.). But no attempt to identify the figures on existing gems with the personages of Gnostic mythology has had any success, and "Abrasax" is the only Gnostic term found in the accompanying legends that is not known to belong to other religions or mythologies. The present state of the evidence therefore suggests that their engravers and the Basilidians received the mystic name from a common source now unknown.
Having due regard to the magic papyri, in which many of the unintelligible names of the Abrasax-stones reappear, besides directions for making and using gems with similar figures and formulas for magical purposes, it can scarcely be doubted that many of these stones are pagan amulets and instruments of magic.
The magic papyri reflect the same ideas as the Abrasax-gems and often bear Hebraic names of God. The following example is illustrative: "I conjure you by Iaō Sabaōth Adōnai Abrasax, and by the great god, Iaeō". The patriarchs are sometimes addressed as deities; for which fact many instances may be adduced. In the group "Iakoubia, Iaōsabaōth Adōnai Abrasax," the first name seems to be composed of Jacob and Ya. Similarly, entities considered angels in Judaism are invoked as gods alongside Abrasax: thus "I conjure you... by the god Michaēl, by the god Souriēl, by the god Gabriēl, by the god Raphaēl, by the god Abrasax Ablathanalba Akrammachari...".
In text PGM V. 96-172, Abrasax is identified as part of the "true name which has been transmitted to the prophets of Israel" of the "Headless One, who created heaven and earth, who created night and day... Osoronnophris whom none has ever seen... awesome and invisible god with an empty spirit"; the name also includes Iaō and Adōnai. "Osoronnophris" represents Egyptian "Wsir Wn-nfr", "Osiris the Perfect Being". Another identification with Osiris is made in PGM VII. 643-51: "you are not wine, but the guts of Osiris, the guts of... Ablanathanalba Akrammachamarei Eee, who has been stationed over necessity, Iakoub Ia Iaō Sabaōth Adōnai Abrasax." PGM VIII. 1-63, on the other hand, identifies Abrasax as a name of "Hermes" (i.e. Thoth). Here the numerological properties of the name are invoked, with its seven letters corresponding to the seven planets and its isopsephic value of 365 corresponding to the days of the year. Thoth is also identified with Abrasax in PGM LXXIX. 1-7: "I am the soul of darkness, Abrasax, the eternal one, Michaēl, but my true name is Thōouth, Thōouth."
One papyrus titled the "Monad" or the "Eighth Book of Moses" (PGM XIII. 1-343) contains an invocation to a supreme creator God; Abrasax is given as being the name of this God in the language of the baboons. The papyrus goes on to describe a cosmogonic myth about Abrasax, describing how he created the Ogdoad by laughing. His first laughter created light; his second divided the primordial waters; his third created the mind; his fourth created fertility and procreation; his fifth created fate; his sixth created time (as the sun and moon); and his seventh and final laughter created the soul. Then, from various sounds made by Abrasax, there arose the serpent Python who "foreknew all things", the first man (or Fear), and the god Iaō, "who is lord of all". The man fought with Iaō, and Abrasax declared that Iaō's power would derive from both of the others, and that Iaō would take precedence over all the other gods. This text also describes Helios as an archangel of God/Abrasax.
The Leyden Papyrus recommends that this invocation be pronounced to the moon:
The magic word "Ablanathanalba," which reads the same backward as forward in Greek, occurs on Abrasax stones as well as in the magic papyri. The word is usually held to derive from the Aramaic meaning "Thou art our father" (אב לן את), and also occurs in connection with Abrasax; the following inscription is found upon a metal plate in the Carlsruhe Museum:
ΑΒΡΑΣΑΞ
ΑΒΛΑΝΑΘ
ΑΝΑΛΒΑ
Absalom
Absalom ("Aḇšālōm", "father of peace"), according to the Hebrew Bible, was the third son of David, King of Israel, with Maacah, daughter of Talmai, King of Geshur.
Absalom, David's third son, by Maacah, was born in Hebron. He moved at an early age along with the transfer of the capital to Jerusalem, where he spent most of his life. He was a great favorite of his father, and of the people. His charming manners, personal beauty, insinuating ways, love of pomp, and royal pretensions, captivated the hearts of the people from the beginning. He lived in great style, drove in a magnificent chariot, and had fifty men run before him.
Little is known of Absalom's family life, but the biblical narrative states that he had three sons and one daughter, named Tamar, who is described as a beautiful woman. From the words "I have no son to keep my name in remembrance", it is implied that his sons died at an early age.
Although he had no surviving sons, another biblical passage states that Absalom had a daughter or granddaughter named Maacah, who later became the favorite wife of Rehoboam. Maacah was the mother of Abijah of Judah and grandmother of Asa of Judah. She served as queen mother for Asa, until he deposed her for idolatry.
Absalom's sister, who was also called Tamar, was raped by Amnon, their half-brother and David's eldest son. After the rape, Absalom waited two years, and then avenged Tamar by sending his servants to murder a drunken Amnon at a feast to which Absalom had invited all the king's sons.
After this murder Absalom fled to Talmai, who was the king of Geshur and Absalom's maternal grandfather. It was not until three years later that Absalom was fully reinstated in his father's favour and finally returned to Jerusalem (see Joab).
While at Jerusalem, Absalom built support for himself by speaking to those who came to King David for justice, saying, "See, your claims are good and right; but there is no one deputed by the king to hear you", perhaps reflecting flaws in the judicial system of the united monarchy. "If only I were the judge of the land! Then all who had a suit or cause might come to me, and I would give them justice." He made gestures of flattery by kissing those who bowed before him instead of accepting supplication. He "stole the hearts of the people of Israel".
After four years he declared himself king, raised a revolt at Hebron, the former capital, and slept with his father's concubines. All Israel and Judah flocked to him, and David, attended only by the Cherethites and Pelethites and his former bodyguard, which had followed him from Gath, found it expedient to flee. The priests Zadok and Abiathar remained in Jerusalem, and their sons Jonathan and Ahimaaz served as David's spies. Absalom reached the capital and consulted with the renowned Ahithophel (sometimes spelled Achitophel).
David took refuge from Absalom's forces beyond the Jordan River. However, he took the precaution of instructing a servant, Hushai, to infiltrate Absalom's court and subvert it. Hushai convinced Absalom to ignore Ahithophel's advice to attack his father while he was on the run, and instead to prepare his forces for a major attack. This gave David critical time to prepare his own troops for the battle.
A fateful battle was fought in the Wood of Ephraim (the name suggests a locality west of the Jordan) and Absalom's army was completely routed. Absalom's head was caught in the boughs of an oak tree as the mule he was riding ran beneath it. He was discovered there still alive by one of David's men, who reported this to Joab, the king's commander. Joab, accustomed to avenging himself, took this opportunity to even the score with Absalom. Absalom had once set Joab's field on fire and then made Amasa Captain of the Host instead of Joab. Killing Absalom was against David's explicit command, "Beware that none touch the young man Absalom". Joab killed Absalom with three darts through the heart.
When David heard that Absalom was killed, although not how he was killed, he greatly sorrowed.
David withdrew to the city (Mahanaim) in mourning, until Joab roused him from "the extravagance of his grief" and called on him to fulfill his duty to his people.
Absalom had erected a monument near Jerusalem to perpetuate his name:
Now Absalom in his lifetime had taken and reared up for himself a pillar, which is in the king's dale: for he said, I have no son to keep my name in remembrance: and he called the pillar after his own name: and it is called unto this day, Absalom's place.
An ancient monument in the Kidron Valley near the Old City of Jerusalem, known as the Tomb of Absalom or Absalom's Pillar and traditionally identified as the monument of the biblical narrative, is now dated by modern archeologists to the first century AD. The Jewish Encyclopedia reports: "A tomb twenty feet high and twenty-four feet square, which late tradition points out as the resting-place of Absalom. It is situated in the eastern part of the valley of Kidron, to the east of Jerusalem. In all probability it is the tomb of Alexander Jannæus (Conder, in Hastings' "Dict. Bible", article "Jerusalem", p. 597). It existed in the days of Josephus ("Antiquities" vii. 10, § 3)." However, archaeologists have now dated the tomb to the 1st century AD. In a 2013 conference, Professor Gabriel Barkay suggested that it could be the tomb of Herod Agrippa, the grandson of Herod the Great, based in part on the similarity to Herod's newly discovered tomb at Herodium. For centuries, it was the custom among passers-by—Jews, Christians and Muslims—to throw stones at the monument. Residents of Jerusalem would bring their unruly children to the site to teach them what became of a rebellious son.
The life and death of Absalom offered to the rabbis a welcome theme wherewith to warn the people against false ambition, vainglory, and unfilial conduct. The vanity with which he displayed his beautiful hair, the rabbis say, became his snare and his stumbling-block. "By his long hair the Nazirite entangled the people to rebel against his father, and by it he himself became entangled, to fall a victim to his pursuers" (Mishnah Soṭah, i. 8). And again, elsewhere: "By his vile stratagem he deceived and stole three hearts, that of his father, of the elders, and finally of the whole nation of Israel, and for this reason three darts were thrust into his heart to end his treacherous life" (Tosef., Soṭah, iii. 17). More striking is the following: "Did one ever hear of an oak-tree having a heart? And yet in the oak-tree in whose branches Absalom was caught, we read that upon its heart he was held up still alive while the darts were thrust through him [Mek., Shirah, § 6]. This is to show that when a man becomes so heartless as to make war against his own father, nature itself takes on a heart to avenge the deed."
Popular legend states that the eye of Absalom was of immense size, signifying his insatiable greed (Niddah, 24b). Indeed, "hell itself opened beneath him, and David, his father, cried seven times: 'My son! my son!' while bewailing his death, praying at the same time for his redemption from the seventh section of Gehenna, to which he was consigned" (Soṭah, 10b). According to R. Meir (Sanh. 103b), "he has no share in the life to come". And according to the description of Gehenna by Joshua ben Levi, who, like Dante, wandered through hell under the guidance of the angel Duma, Absalom still dwells there, having the rebellious heathen in charge; and when the angels with their fiery rods run also against Absalom to smite him like the rest, a heavenly voice says: "Spare Absalom, the son of David, My servant."
Abydos, Egypt
Abydos is one of the oldest cities of ancient Egypt, and also of the eighth nome in Upper Egypt. It is located about west of the Nile at latitude 26° 10' N, near the modern Egyptian towns of el-'Araba el Madfuna and al-Balyana. In the ancient Egyptian language, the city was called "Abdju" ("ꜣbḏw" or "AbDw"). The English name "Abydos" comes from Greek, a name borrowed by Greek geographers from the unrelated city of Abydos on the Hellespont.
Considered one of the most important archaeological sites in Egypt, the sacred city of Abydos was the site of many ancient temples, including Umm el-Qa'ab, a royal necropolis where early pharaohs were entombed. These tombs began to be seen as extremely significant burials and in later times it became desirable to be buried in the area, leading to the growth of the town's importance as a cult site.
Today, Abydos is notable for the memorial temple of Seti I, which contains an inscription from the nineteenth dynasty known to the modern world as the Abydos King List. It is a chronological list showing cartouches of most dynastic pharaohs of Egypt from Menes until Seti I's father, Ramesses I.
The Great Temple and most of the ancient town are buried under the modern buildings to the north of the Seti temple. Many of the original structures and the artifacts within them are considered irretrievable and lost; many may have been destroyed by the new construction.
Abydos was occupied by the rulers of the Predynastic period, whose town, temple and tombs have been found there. The temple and town continued to be rebuilt at intervals down to the times of the Thirtieth Dynasty, and the cemetery was in continuous use.
The pharaohs of the First Dynasty were buried in Abydos, including Narmer, who is regarded as the founder of the First Dynasty, and his successor, Aha. It was in this time period that the Abydos boats were constructed. Some pharaohs of the Second Dynasty were also buried in Abydos. The temple was renewed and enlarged by these pharaohs as well. Funerary enclosures, misinterpreted in modern times as great 'forts', were built on the desert behind the town by three kings of the Second Dynasty; the most complete is that of Khasekhemwy.
From the Fifth Dynasty, the deity Khentiamentiu, "foremost of the Westerners", came to be seen as a manifestation of the dead pharaoh in the underworld. Pepi I (Sixth Dynasty) constructed a funerary chapel which evolved over the years into the Great Temple of Osiris, the ruins of which still exist within the town enclosure. Abydos became the centre of the worship of the Isis and Osiris cult.
During the First Intermediate Period, the principal deity of the area, Khentiamentiu, began to be seen as an aspect of Osiris, and the deities gradually merged and came to be regarded as one. Khentiamentiu's name became an epithet of Osiris. King Mentuhotep II was the first to build a royal chapel. In the Twelfth Dynasty a gigantic tomb was cut into the rock by Senusret III. Associated with this tomb was a "cenotaph", a cult temple and a small town known as "Wah-Sut", that was used by the workers for these structures. Next to the cenotaph at least two kings of the Thirteenth Dynasty were buried (in tombs S9 and S10) as well as some rulers of the Second Intermediate Period, such as Senebkay. An indigenous line of kings, the Abydos Dynasty, may have ruled the region from Abydos at the time.
New construction during the Eighteenth Dynasty began with a large chapel of Ahmose I. The Pyramid of Ahmose I was also constructed at Abydos—the only pyramid in the area; very little of it remains today.
Thutmose III built a far larger temple, about . He also made a processional way leading past the side of the temple to the cemetery beyond, featuring a great gateway of granite.
Seti I, during the Nineteenth Dynasty, founded a temple to the south of the town in honor of the ancestral pharaohs of the early dynasties; this was finished by Ramesses II, who also built a lesser temple of his own. Merneptah added the Osireion, just to the north of the temple of Seti.
Ahmose II in the Twenty-sixth Dynasty rebuilt the temple again, and placed in it a large monolith shrine of red granite, finely wrought. The foundations of the successive temples were comprised within approximately . depth of the ruins discovered in modern times; these needed the closest examination to discriminate the various buildings, and were recorded by more than 4,000 measurements and 1,000 levellings.
The last building added was a new temple of Nectanebo I, built in the Thirtieth Dynasty. From Ptolemaic times, when the Greek occupation of Egypt began three hundred years before the Roman occupation that followed, the structures began to decay, and no later works are known.
From earliest times, Abydos was a cult centre, first of the local deity, Khentiamentiu, and from the end of the Old Kingdom, the rising cult of Osiris. A tradition developed that the Early Dynastic cemetery was the burial place of Osiris and the tomb of Djer was reinterpreted as that of Osiris.
Decorations in tombs throughout Egypt, such as the one displayed to the right, record pilgrimages to Abydos by wealthy families.
From the First Dynasty to the Twenty-sixth Dynasty, nine or ten temples were successively built on one site at Abydos. The first was an enclosure, about , enclosed by a thin wall of unbaked bricks. Incorporating one wall of this first structure, the second temple of about square was built with walls about thick. An outer "temenos" (enclosure) wall surrounded the grounds. This outer wall was made wider some time around the Second or Third Dynasty. The old temple entirely vanished in the Fourth Dynasty, and a smaller building was erected behind it, enclosing a wide hearth of black ashes. Pottery models of offerings are found in these ashes and were probably the substitutes for live sacrifices decreed by Khufu (or Cheops) in his temple reforms.
At an undetermined date, a great clearance of temple offerings had been made and the modern discovery of a chamber into which they were gathered yielded the fine ivory carvings and the glazed figures and tiles that demonstrate the splendid work of the First Dynasty. A vase of Menes with purple hieroglyphs inlaid into a green glaze and tiles with relief figures are the most important pieces found. The noble statuette of Cheops in ivory, found in the stone chamber of the temple, gives the only portrait of this great pharaoh.
The temple was entirely rebuilt on a larger scale by Pepi I in the Sixth Dynasty. He placed a great stone gateway to the temenos, an outer temenos wall and gateway, with a colonnade between the gates. His temple was about inside, with stone gateways front and back, showing that it was of the processional type. In the Eleventh Dynasty Mentuhotep I added a colonnade and altars. Soon after, Mentuhotep II entirely rebuilt the temple, laying a stone pavement over the area, about square. He also added subsidiary chambers. Soon thereafter, in the Twelfth Dynasty, Senusret I laid massive foundations of stone over the pavement of his predecessor. A great temenos was laid out enclosing a much larger area and the new temple itself was about three times the earlier size.
The temple of Seti I was built on entirely new ground half a mile to the south of the long series of temples just described. This surviving building is best known as the Great Temple of Abydos, being nearly complete and an impressive sight. A principal purpose of the temple was to serve as a memorial to king Seti I, as well as to show reverence for the early pharaohs, which is incorporated within as part of the "Rite of the Ancestors".
The long list of the pharaohs of the principal dynasties recognized by Seti is carved on a wall and known as the "Abydos King List", showing the cartouche name of many dynastic pharaohs of Egypt from the first, Narmer or Menes, until Seti's time. Significant names were deliberately left off the list. As an almost complete list of pharaohs' names, the Table of Abydos, rediscovered by William John Bankes, has been called the "Rosetta Stone" of Egyptian archaeology, analogous to the Rosetta Stone for Egyptian writing and the Narmer Palette.
There were also seven chapels built for the worship of the pharaoh and principal deities. These included three chapels for the "state" deities Ptah, Re-Horakhty, and (centrally positioned) Amun-Re, and chapels for the Abydos triad of Osiris, Isis and Horus. The rites recorded in the deity chapels represent the first complete form known of the Daily Ritual, which was performed daily in temples across Egypt throughout the pharaonic period. At the back of the temple is an enigmatic structure known as the Osireion, which served as a cenotaph for Seti-Osiris, and is thought to be connected with the worship of Osiris as an "Osiris tomb". It is possible that from those chambers was led out the great Hypogeum for the celebration of the Osiris mysteries, built by Merenptah. The temple was originally long, but the forecourts are scarcely recognizable, and the part still in good condition is about
Except for the list of pharaohs and a panegyric on Ramesses II, the subjects are not historical, but religious in nature, dedicated to the transformation of the king after his death. The temple reliefs are celebrated for their delicacy and artistic refinement, combining the archaism of earlier dynasties with the vibrancy of late 18th Dynasty reliefs. The sculptures had been published mostly in hand copy, not facsimile, by Auguste Mariette in his "Abydos", I. The temple has been partially recorded epigraphically by Amice Calverley and Myrtle Broome in their 4-volume publication of "The Temple of King Sethos I at Abydos" (1933–1958).
The Osirion or Osireon is an ancient Egyptian temple. It is located to the rear of the temple of Seti I. It is an integral part of Seti I's funeral complex and is built to resemble an 18th Dynasty Valley of the Kings tomb.
The adjacent temple of Ramesses II was much smaller and simpler in plan; but it had a fine historical series of scenes around the outside that lauded his achievements, of which the lower parts remain. The outside of the temple was decorated with scenes of the Battle of Kadesh. His list of pharaohs, similar to that of Seti I, formerly stood here; but the fragments were removed by the French consul and sold to the British Museum.
The royal necropolises of the earliest dynasties were placed about a mile into the great desert plain, in a place now known as Umm El Qa'ab "The Mother of Pots" because of the shards remaining from all of the devotional objects left by religious pilgrims.
The earliest burial is about inside, a pit lined with brick walls, and originally roofed with timber and matting. Other tombs also built before Menes are .
Afterward, the tombs increased in size and complexity. The tomb-pit was surrounded by chambers to hold offerings, the sepulchre being a great wooden chamber in the midst of the brick-lined pit. Rows of small pits, tombs for the servants of the pharaoh, surrounded the royal chamber, many dozens of such burials being usual. Some of the offerings included sacrificed animals, such as the asses found in the tomb of Merneith. Evidence of human sacrifice exists in the early tombs, such as the 118 servants in the tomb of Merneith, but this practice was changed into symbolic offerings later.
By the end of the Second Dynasty the type of tomb constructed changed to a long passage with chambers on either side, the royal burial being in the middle of the length. The greatest of these tombs with its dependencies covered a space of over ; however, it is possible that this was several tombs which abutted one another during construction, as the Egyptians had no means of mapping the positioning of the tombs. The contents of the tombs have been nearly destroyed by successive plunderers; but enough remained to show that rich jewellery was placed on the mummies, a profusion of vases of hard and valuable stones from the royal table service stood about the body, the store-rooms were filled with great jars of wine, perfumed ointments, and other supplies, and tablets of ivory and of ebony were engraved with a record of the yearly annals of the reigns. The seals of various officials, of which over 200 varieties have been found, give an insight into the public arrangements.
A cemetery for private persons was put into use during the First Dynasty, with some pit-tombs in the town. It was extensive in the Twelfth and Thirteenth Dynasties and contained many rich tombs. A large number of fine tombs were made in the Eighteenth to Twentieth Dynasties, and members of later dynasties continued to bury their dead here until the Roman period. Many hundreds of funeral steles were removed by Auguste Mariette's workmen, without any details of the burials being noted. Later excavations have been recorded by Edward R. Ayrton, Abydos, iii.; Maclver, "El Amrah and Abydos"; and Garstang, "El Arabah".
Some of the tomb structures, referred to as "forts" by modern researchers, lay behind the town. Known as Shunet ez Zebib, it is about over all, and one wall still stands high. It was built by Khasekhemwy, the last pharaoh of the Second Dynasty. Another structure nearly as large adjoined it, and probably is older than that of Khasekhemwy. A third "fort" of a squarer form is now occupied by a convent of the Coptic Orthodox Church of Alexandria; its age cannot be ascertained.
The area now known as Kom El Sultan is a big mudbrick structure whose purpose is not clear; it is thought to have stood at the original settlement area, dated to the Early Dynastic Period. The structure includes the early temple of Osiris.
Some of the hieroglyphs carved over an arch on the site have been interpreted in esoteric and "ufological" circles as depicting modern technology.
The "helicopter" image is the result of carved stone being re-used over time. The initial carving was made during the reign of Seti I and translates to "He who repulses the nine [enemies of Egypt]". This carving was later filled in with plaster and re-carved during the reign of Ramesses II with the title "He who protects Egypt and overthrows the foreign countries". Over time, the plaster has eroded away, leaving both inscriptions partially visible and creating a palimpsest-like effect of overlapping hieroglyphs. | https://en.wikipedia.org/wiki?curid=1440 |
Abydos (Hellespont)
Abydos (, ) was an ancient city and bishopric in Mysia. It was located at the Nara Burnu promontory on the Asian coast of the Hellespont, opposite the ancient city of Sestos, and near the city of Çanakkale in Turkey. Abydos was founded in c. 670 BC at the most narrow point in the straits, and thus was one of the main crossing points between Europe and Asia, until its replacement by the crossing between Lampsacus and Kallipolis in the 13th century, and the abandonment of Abydos in the early 14th century.
In Greek mythology, Abydos is presented in the myth of Hero and Leander as the home of Leander. The city is also mentioned in "Rodanthe and Dosikles", a novel written by Theodore Prodromos, a 12th-century writer, in which Dosikles kidnaps Rodanthe at Abydos.
In 1675, the site of Abydos was first identified, and was subsequently visited by numerous classicists and travellers, such as Robert Wood, Richard Chandler, and Lord Byron. The city's acropolis is known in Turkish as Mal Tepe.
Following the city's abandonment, the ruins of Abydos were scavenged for building materials from the 14th to the 19th century, and remains of walls and buildings continued to be reported until at least the 19th century; however, little remains today, and the area was declared a restricted military zone in the early 20th century, so little to no excavation has taken place.
Abydos is mentioned in the "Iliad" as a Trojan ally, and, according to Strabo, was occupied by Bebryces and later Thracians after the Trojan War. It has been suggested that the city was originally a Phoenician colony as there was a temple of Aphrodite Porne (Aphrodite the Harlot) within Abydos. Abydos was settled by Milesian colonists contemporaneously with the foundation of the cities of Priapos and Prokonnesos in . Strabo related that Gyges, King of Lydia, granted his consent to the Milesians to settle Abydos; it is argued that this was carried out by Milesian mercenaries to act as a garrison to prevent Thracian raids into Asia Minor. The city became a thriving centre for tuna exportation as a result of the high yield of tuna in the Hellespont.
Abydos was ruled by Daphnis, a pro-Persian tyrant, in the 520s BC, but was occupied by the Persian Empire in 514. Darius I destroyed the city following his Scythian campaign in 512. Abydos participated in the Ionian Revolt in the early 5th century BC, however, the city returned briefly to Persian control as, in 480, at the onset of the Second Persian invasion of Greece, Xerxes I and the Persian army passed through Abydos on their march to Greece. After the failed Persian invasion, Abydos became a member of the Athenian-led Delian League, and was part of the Hellespontine district. Ostensibly an ally, Abydos was hostile to Athens throughout this time, and contributed a "phoros" of 4-6 talents. Xenophon documented that Abydos possessed gold mines at Astyra or Kremaste at the time of his writing.
During the Second Peloponnesian War, a Spartan expedition led by Dercylidas arrived at Abydos in early May 411 BC and successfully convinced the city to defect from the Delian League and fight against Athens, at which time he was made harmost (commander/governor) of Abydos. A Spartan fleet was defeated by Athens at Abydos in the autumn of 411 BC. Abydos was attacked by the Athenians in the winter of 409/408 BC, but was repelled by a Persian force led by Pharnabazus, satrap (governor) of Hellespontine Phrygia. Dercylidas held the office of harmost of Abydos until at least . According to Aristotle, Abydos had an oligarchic constitution at this time. At the beginning of the Corinthian War in 394 BC, Agesilaus II, King of Sparta, passed through Abydos into Thrace. Abydos remained an ally of Sparta throughout the war and Dercylidas served as harmost of the city from 394 until he was replaced by Anaxibius in ; the latter was killed in an ambush near Abydos by the Athenian general Iphicrates in . At the conclusion of the Corinthian War, under the terms of the Peace of Antalcidas in 387 BC, Abydos was annexed to the Persian Empire. Within the Persian Empire, Abydos was administered as part of the satrapy of Hellespontine Phrygia, and was ruled by the tyrant Philiscus in 368. In , the city came under the control of the tyrant Iphiades.
Abydos remained under Persian control until it was seized by a Macedonian army led by Parmenion, a general of Philip II, in the spring of 336 BC. In 335, whilst Parmenion besieged the city of Pitane, Abydos was besieged by a Persian army led by Memnon of Rhodes, forcing Parmenion to abandon his siege of Pitane and march north to relieve Abydos. Alexander ferried across from Sestos to Abydos in 334 and travelled south to the city of Troy, after which he returned to Abydos. The following day, Alexander left Abydos and led his army north to Percote. Alexander later established a royal mint at Abydos, as well as at other cities in Asia Minor.
After the death of Alexander the Great in 323 BC, Abydos, as part of the satrapy of Hellespontine Phrygia, came under the control of Leonnatus as a result of the Partition of Babylon. At the Partition of Triparadisus in 321 BC, Arrhidaeus succeeded Leonnatus as satrap of Hellespontine Phrygia.
In 302, during the Fourth War of the Diadochi, Lysimachus, King of Thrace, crossed over into Asia Minor and invaded the kingdom of Antigonus I. Unlike the neighbouring cities of Parium and Lampsacus which surrendered, Abydos resisted Lysimachus and was besieged. Lysimachus was forced to abandon the siege, however, after the arrival of a relief force sent by Demetrius, son of King Antigonus I. According to Polybius, by the third century BC, the neighbouring city of Arisbe had become subordinate to Abydos. The city of Dardanus also came under the control of Abydos at some point in the Hellenistic period. Abydos became part of the Seleucid Empire after 281 BC. The city was conquered by Ptolemy III Euergetes, King of Egypt, in 245 BC, and remained under Ptolemaic control until at least 241, as Abydos had become part of the Kingdom of Pergamon by c. 200 BC.
During the Second Macedonian War, Abydos was besieged by Philip V, King of Macedonia, in 200 BC, during which many of its citizens chose to commit suicide rather than surrender. Marcus Aemilius Lepidus met with Philip V during the siege to deliver an ultimatum on behalf of the Roman senate. Ultimately, the city was forced to surrender to Philip V due to a lack of reinforcements. The Macedonian occupation ended after the Peace of Flamininus at the end of the war in 196 BC. At this time, Abydos was substantially depopulated and partially ruined as a result of the Macedonian occupation.
In the spring of 196 BC, Abydos was seized by Antiochus III, "Megas Basileus" of the Seleucid Empire, who refortified the city in 192/191 BC. Antiochus III later withdrew from Abydos during the Roman-Seleucid War, thus allowing for the transportation of the Roman army into Asia Minor by October 190 BC. Dardanus was subsequently liberated from Abydene control, and the Treaty of Apamea of 188 BC returned Abydos to the Kingdom of Pergamon. A gymnasium was active at Abydos in the 2nd century BC.
Attalus III, King of Pergamon, bequeathed his kingdom to Rome upon his death in 133 BC, and thus Abydos became part of the province of Asia. The gold mines of Abydos at Astyra or Kremaste were near exhaustion at the time Strabo was writing. The city was counted amongst the "telonia" (custom houses) of the province of Asia in the "lex portorii Asiae" of 62 AD, and formed part of the "conventus iuridicus Adramytteum". Abydos is mentioned in the "Tabula Peutingeriana" and Antonine Itinerary. The mint of Abydos ceased to function in the mid-3rd century AD.
It is believed that Abydos, with Sestos and Lampsacus, is referred to as one of the "three large capital cities" of the Roman Empire in "Weilüe", a 3rd-century AD Chinese text. The city was the centre for customs collection at the southern entrance of the Sea of Marmara, and was administered by a "komes ton Stenon" (count of the Straits) or an "archon" from the 3rd century to the 5th century AD. In the 6th century AD, Emperor Justinian I introduced the office of "komes Abydou" with responsibility for collecting customs duty in Abydos.
Pope Martin I rested at Abydos in the summer of 653 whilst en route to Constantinople. As a result of the administrative reforms of the 7th century, Abydos came to be administered as part of the theme of Opsikion. The office of "kommerkiarios" of Abydos is first attested in the mid-7th century, and was later sometimes combined with the office of "paraphylax", the military governor of the fort, introduced in the 8th century, at which time the office of "komes ton stenon" is last mentioned.
After the 7th century AD, Abydos became a major seaport. Maslama ibn Abd al-Malik, during his campaign against Constantinople, crossed over into Thrace at Abydos in July 717. The office of "archon" at Abydos was restored in the late 8th century and endured until the early 9th century. In 801, Empress Irene reduced commercial tariffs collected at Abydos. Emperor Nikephoros I, Irene's successor, introduced a tax on slaves purchased beyond the city. The city later also became part of the theme of the Aegean Sea and was the seat of a "tourmarches".
Abydos was sacked by an Arab fleet led by Leo of Tripoli in 904 AD whilst en route to Constantinople. The revolt of Bardas Phokas was defeated by Emperor Basil II at Abydos in 989 AD. In 992, the Venetians were granted reduced commercial tariffs at Abydos as a special privilege. In the early 11th century, Abydos became the seat of a separate command and the office of "strategos" (governor) of Abydos is first mentioned in 1004 with authority over the northern shore of the Hellespont and the islands of the Sea of Marmara.
In 1024, a Rus' raid led by a certain Chrysocheir defeated the local commander at Abydos and proceeded to travel south through the Hellespont. Following the Battle of Manzikert, Abydos was seized by the Seljuk Turks, but was recovered in 1086 AD, in which year Leo Kephalas was appointed "katepano" of Abydos. Abydos' population likely increased at this time as a result of the arrival of refugees from northwestern Anatolia who had fled the advance of the Turks. In 1092/1093, the city was attacked by Tzachas, a Turkish pirate. Emperor Manuel I Komnenos repaired Abydos' fortifications in the late 12th century.
By the 13th century AD, the crossing from Lampsacus to Kallipolis had become more common and largely replaced the crossing from Abydos to Sestos. During the Fourth Crusade, in 1204, the Venetians seized Abydos, and, following the Sack of Constantinople and the formation of the Latin Empire later that year, Emperor Baldwin granted the land between Abydos and Adramyttium to his brother Henry of Flanders. Henry of Flanders passed through Abydos on 11 November 1204 and continued his march to Adramyttium. Abydos was seized by the Empire of Nicaea, a successor state of the Eastern Roman Empire, during its offensive in 1206–1207, but was reconquered by the Latin Empire in 1212–1213. The city was later recovered by Emperor John III Vatatzes. Abydos declined in the 13th century, and was eventually abandoned between 1304 and 1310/1318 due to the threat of Turkish tribes and disintegration of Roman control over the region.
The bishopric of Abydus appears in all the "Notitiae Episcopatuum" of the Patriarchate of Constantinople from the mid-7th century until the time of Andronikos III Palaiologos (1341), first as a suffragan of Cyzicus and then from 1084 as a metropolitan see without suffragans. The earliest bishop mentioned in extant documents is Marcian, who signed the joint letter of the bishops of Hellespontus to Emperor Leo I the Thracian in 458, protesting about the murder of Proterius of Alexandria. A letter of Peter the Fuller (471–488) mentions a bishop of Abydus called Pamphilus. Ammonius signed the decretal letter of the Council of Constantinople in 518 against Severus of Antioch and others. Isidore was at the Third Council of Constantinople (680–681), John at the Trullan Council (692), Theodore at the Second Council of Nicaea (787). An unnamed bishop of Abydus was a counsellor of Emperor Nikephoros II in 969.
Seals attest Theodosius as bishop of Abydos in the 11th century, and John as metropolitan bishop of Abydos in the 11/12th century. Abydos remained a metropolitan see until the city fell to the Turks in the 14th century. The diocese is currently a titular see of the Patriarchate of Constantinople, and Gerasimos Papadopoulos was titular Bishop of Abydos from 1962 until his death in 1995. Simeon Kruzhkov was bishop of Abydos from May to September 1998. Kyrillos Katerelos was consecrated bishop of Abydos in 2008.
In 1222, during the Latin occupation, the papal legate Giovanni Colonna united the dioceses of Abydos and Madytos and placed the see under direct Papal authority. No longer a residential bishopric, Abydus is today listed by the Catholic Church as a titular see.
Notes
Citations
Acacia sensu lato
Acacia s.l. (pronounced or ), known commonly as mimosa, acacia, thorntree or wattle, is a polyphyletic genus of shrubs and trees belonging to the subfamily Mimosoideae of the family Fabaceae. It was described by the Swedish botanist Carl Linnaeus in 1773 based on the African species "Acacia nilotica". Many non-Australian species tend to be thorny, whereas the majority of Australian acacias are not. All species are pod-bearing, with sap and leaves often bearing large amounts of tannins and condensed tannins that historically found use as pharmaceuticals and preservatives.
The genus "Acacia" constitutes, in its traditional circumscription, the second largest genus in Fabaceae ("Astragalus" being the largest), with roughly 1,300 species, about 960 of them native to Australia, with the remainder spread around the tropical to warm-temperate regions of both hemispheres, including Europe, Africa, southern Asia, and the Americas (see List of "Acacia" species). The genus was divided into five separate genera under the tribe "Acacieae". The genus now called "Acacia" represents the majority of the Australian species and a few native to southeast Asia, Réunion, and Pacific Islands. Most of the species outside Australia, and a small number of Australian species, are classified into "Vachellia" and "Senegalia". The two final genera, "Acaciella" and "Mariosousa", each contain about a dozen species from the Americas (but see "Classification" below for the ongoing debate concerning their taxonomy).
English botanist and gardener Philip Miller adopted the name "Acacia" in 1754. The generic name is derived from (), the name given by early Greek botanist-physician Pedanius Dioscorides (middle to late first century) to the medicinal tree "A. nilotica" in his book "Materia Medica". This name derives from the Ancient Greek word for its characteristic thorns, (; "thorn"). The species name "nilotica" was given by Linnaeus from this tree's best-known range along the Nile river. This became the type species of the genus.
The traditional circumscription of "Acacia" eventually contained approximately 1,300 species. However, evidence began to accumulate that the genus as described was not monophyletic. Queensland botanist Les Pedley proposed the subgenus "Phyllodineae" be renamed "Racosperma" and published the binomial names. This was taken up in New Zealand but generally not followed in Australia, where botanists declared more study was needed.
Eventually, consensus emerged that "Acacia" needed to be split as it was not monophyletic. This led to Australian botanists Bruce Maslin and Tony Orchard pushing for the retypification of the genus with an Australian species instead of the original African type species, an exception to traditional rules of priority that required ratification by the International Botanical Congress. That decision has been controversial, and debate continued, with some taxonomists (and many other biologists) deciding to continue to use the traditional "Acacia sensu lato" circumscription of the genus, in defiance of decisions by an International Botanical Congress. However, a second International Botanical Congress has now confirmed the decision to apply the name "Acacia" to the mostly Australian plants, which some had been calling "Racosperma", and which had formed the overwhelming majority of "Acacia sensu lato". Debate continues regarding the traditional acacias of Africa, possibly placed in "Senegalia" and "Vachellia", and some of the American species, possibly placed in "Acaciella" and "Mariosousa".
Acacias belong to the subfamily Mimosoideae, the major clades of which may have formed in response to drying trends and fire regimes that accompanied increased seasonality during the late Oligocene to early Miocene (∼25 mya). Pedley (1978), following Vassal (1972), viewed Acacia as comprising three large subgenera, but subsequently (1986) raised the rank of these groups to genera Acacia, "Senegalia" ("s.l.") and "Racosperma", which was underpinned by later genetic studies.
In common parlance, the term "acacia" is occasionally applied to species of the genus "Robinia", which also belongs in the pea family. "Robinia pseudoacacia", an American species locally known as black locust, is sometimes called "false acacia" in cultivation in the United Kingdom and throughout Europe.
The leaves of acacias are compound pinnate in general. In some species, however, more especially in the Australian and Pacific Islands species, the leaflets are suppressed, and the leaf-stalks (petioles) become vertically flattened in order to serve the purpose of leaves. These are known as "phyllodes". The vertical orientation of the phyllodes protects them from intense sunlight since with their edges towards the sky and earth they do not intercept light as fully as horizontally placed leaves. A few species (such as "Acacia glaucoptera") lack leaves or phyllodes altogether but instead possess cladodes, modified leaf-like photosynthetic stems functioning as leaves.
The small flowers have five very small petals, almost hidden by the long stamens, and are arranged in dense, globular or cylindrical clusters; they are yellow or cream-colored in most species, whitish in some, or even purple ("Acacia purpureopetala") or red ("Acacia leprosa" 'Scarlet Blaze'). "Acacia" flowers can be distinguished from those of a large related genus, "Albizia", by their stamens, which are not joined at the base. Also, unlike individual "Mimosa" flowers, those of "Acacia" have more than ten stamens.
The plants often bear spines, especially those species growing in arid regions. These sometimes represent branches that have become short, hard, and pungent, though they sometimes represent leaf-stipules. "Acacia armata" is the kangaroo-thorn of Australia, and "Acacia erioloba" (syn. "Acacia eriolobata") is the camelthorn of Africa.
Acacia seeds can be difficult to germinate. Research has found that immersing the seeds in water at various temperatures (usually around 80 °C (176 °F)) and manually chipping the seed coat can improve germination rates to around 80%.
In the Central American bullthorn acacias—"Acacia sphaerocephala", "Acacia cornigera" and "Acacia collinsii" — some of the spiny stipules are large, swollen and hollow. These afford shelter for several species of "Pseudomyrmex" ants, which feed on extrafloral nectaries on the leaf-stalk and small lipid-rich food-bodies at the tips of the leaflets called Beltian bodies. In return, the ants add protection to the plant against herbivores. Some species of ants will also remove competing plants around the acacia, cutting off the offending plants' leaves with their jaws and ultimately killing them. Other associated ant species appear to do nothing to benefit their hosts.
Similar mutualisms with ants occur on "Acacia" trees in Africa, such as the whistling thorn acacia. The acacias provide shelter for ants in similar swollen stipules and nectar in extrafloral nectaries for their symbiotic ants, such as "Crematogaster mimosae". In turn, the ants protect the plant by attacking large mammalian herbivores and stem-boring beetles that damage the plant.
The predominantly herbivorous spider "Bagheera kiplingi", which is found in Central America and Mexico, feeds on nubs at the tips of the acacia leaves, known as Beltian bodies, which contain high concentrations of protein. These nubs are produced by the acacia as part of a symbiotic relationship with certain species of ant, which also eat them.
In Australia, "Acacia" species are sometimes used as food plants by the larvae of hepialid moths of the genus "Aenetus" including "A. ligniveren". These burrow horizontally into the trunk then vertically down. Other Lepidoptera larvae which have been recorded feeding on "Acacia" include brown-tail, "Endoclita malabaricus" and turnip moth. The leaf-mining larvae of some bucculatricid moths also feed on "Acacia"; "Bucculatrix agilis" feeds exclusively on "Acacia horrida" and "Bucculatrix flexuosa" feeds exclusively on "Acacia nilotica".
Acacias contain a number of organic compounds that defend them from pests and grazing animals.
Acacia seeds are often used for food and a variety of other products.
In Myanmar, Laos, and Thailand, the feathery shoots of "Acacia pennata" (common name "cha-om", ชะอม and "su pout ywet" in Burmese) are used in soups, curries, omelettes, and stir-fries.
Various species of acacia yield gum. True gum arabic is the product of "Acacia senegal", abundant in dry tropical West Africa from Senegal to northern Nigeria.
"Acacia nilotica" (syn. "Acacia arabica") is the gum arabic tree of India, but yields a gum inferior to the true gum arabic. Gum arabic is used in a wide variety of food products, including some soft drinks and confections.
The ancient Egyptians used acacia gum in paints.
The gum of "Acacia xanthophloea" and "Acacia karroo" has a high sugar content and is sought out by the lesser bushbaby. "Acacia karroo" gum was once used for making confectionery and traded under the name "Cape Gum". It was also used medicinally to treat cattle suffering poisoning by "Moraea" species.
"Acacia" species have possible uses in folk medicine. A 19th-century Ethiopian medical text describes a potion made from an Ethiopian species (known as "grar") mixed with the root of the "tacha", then boiled, as a cure for rabies.
An astringent medicine high in tannins, called catechu or cutch, is procured from several species, but more especially from "Senegalia catechu" (syn. "Acacia catechu"), by boiling down the wood and evaporating the solution so as to get an extract. The catechu extract from "A. catechu" figures in the history of chemistry in giving its name to the catechin, catechol, and catecholamine chemical families ultimately derived from it.
A few species are widely grown as ornamentals in gardens; the most popular perhaps is "A. dealbata" (silver wattle), with its attractive glaucous to silvery leaves and bright yellow flowers; it is erroneously known as "mimosa" in some areas where it is cultivated, through confusion with the related genus "Mimosa".
Another ornamental acacia is the fever tree. Southern European florists use "A. baileyana", "A. dealbata", "A. pycnantha" and "A. retinodes" as cut flowers and the common name there for them is mimosa.
Ornamental species of acacias are also used by homeowners and landscape architects for home security. The sharp thorns of some species are a deterrent to trespassing, and may prevent break-ins if planted under windows and near drainpipes. The aesthetic characteristics of acacia plants, in conjunction with their home security qualities, make them a reasonable alternative to constructed fences and walls.
"Acacia farnesiana" is used in the perfume industry due to its strong fragrance. The use of acacia as a fragrance dates back centuries.
Egyptian mythology has associated the acacia tree with characteristics of the tree of life, such as in the Myth of Osiris and Isis.
Several parts (mainly bark, root, and resin) of "Acacia" species are used to make incense for rituals. Acacia is used in incense mainly in India, Nepal, and China, including its Tibet region. Smoke from acacia bark is thought to keep demons and ghosts away and to put the gods in a good mood. Roots and resin from acacia are combined with rhododendron, acorus, cytisus, salvia, and some other components of incense. Both people and elephants like an alcoholic beverage made from acacia fruit.
According to Easton's Bible Dictionary, the acacia tree may be the “burning bush” (Exodus 3:2) which Moses encountered in the desert. Also, when God gave Moses the instructions for building the Tabernacle, he said to "make an ark" and "a table of acacia wood" (Exodus 25:10 & 23, Revised Standard Version). Also, in the Christian tradition, Christ's crown of thorns is thought to have been woven from acacia.
Acacia was used for Zulu warriors' iziQu (or isiKu) beads, which passed on through Robert Baden-Powell to the Scout movement's Wood Badge training award.
In Russia, Italy, and other countries, it is customary to present women with yellow mimosas (among other flowers) on International Women's Day (March 8). These "mimosas" may be from "A. dealbata" (silver wattle).
In 1918, May Gibbs, the popular Australian children's author, wrote the book 'Wattle Babies', in which a third-person narrator describes the lives of imaginary inhabitants of the Australian forests (the 'bush'). The main characters are the Wattle Babies, who are tiny people that look like acacia flowers and who interact with various forest creatures. Gibbs wrote "Wattle Babies are the sunshine of the Bush. In Winter, when the sky is grey and all the world seems cold, they put on their yellowest clothes and come out, for they have such cheerful hearts." Gibbs was referring to the fact that an abundance of acacias flower in August in Australia, in the midst of the southern hemisphere winter.
The bark of various Australian species, known as wattles, is very rich in tannin and forms an important article of export; important species include "A. pycnantha" (golden wattle), "A. decurrens" (tan wattle), "A. dealbata" (silver wattle) and "A. mearnsii" (black wattle).
Black wattle is grown in plantations in South Africa and South America. Most Australian "Acacia" species introduced to South Africa have become an enormous problem, due to their naturally aggressive propagation. The pods of "A. nilotica" (under the name of "neb-neb"), and of other African species, are also rich in tannin and used by tanners. In Yemen, the principal tannin substance was derived from the leaves of the salam-tree ("Acacia etbaica"), a tree known locally by the name "qaraẓ" ("garadh"). A bath solution of the crushed leaves of this tree, into which raw leather had been inserted for prolonged soaking, would take only 15 days for curing. The water and leaves, however, required changing after seven or eight days, and the leather needed to be turned over daily.
Some "Acacia" species are valuable as timber, such as "A. melanoxylon" (blackwood) from Australia, which attains a great size; its wood is used for furniture, and takes a high polish; and "A. omalophylla" (myall wood, also Australian), which yields a fragrant timber used for ornaments. "A. seyal" is thought to be the shittah-tree of the Bible, which supplied shittim-wood. According to the Book of Exodus, this was used in the construction of the Ark of the Covenant. "A. koa" from the Hawaiian Islands and "A. heterophylla" from Réunion are both excellent timber trees. Depending on abundance and regional culture, some "Acacia" species (e.g. "A. fumosa") are traditionally used locally as firewoods. It is also used to make homes for different animals.
In Indonesia (mainly in Sumatra) and in Malaysia (mainly in Sabah), plantations of "A. mangium" are being established to supply pulpwood to the paper industry.
Acacia wood pulp gives high opacity and below average bulk paper. This is suitable in lightweight offset papers used for Bibles and dictionaries. It is also used in paper tissue where it improves softness.
Acacias can be planted for erosion control, especially after mining or construction damage.
For the same reasons it is favored as an erosion-control plant, with its easy spreading and resilience, some varieties of acacia are potentially invasive species. One of the most globally significant invasive acacias is black wattle "A. mearnsii", which is taking over grasslands and abandoned agricultural areas worldwide, especially in moderate coastal and island regions where mild climate promotes its spread. Australian/New Zealand Weed Risk Assessment gives it a "high risk, score of 15" rating and it is considered one of the world's 100 most invasive species.
Extensive ecological studies should be performed before further introduction of acacia varieties, as this fast-growing genus, once introduced, spreads quickly and is extremely difficult to eradicate.
Nineteen different species of "Acacia" in the Americas contain cyanogenic glycosides, which, if exposed to an enzyme which specifically splits glycosides, can release hydrogen cyanide (HCN) in the "leaves". This sometimes results in the poisoning death of livestock.
If fresh plant material spontaneously produces 200 ppm or more HCN, then it is potentially toxic. This corresponds to about 7.5 μmol HCN per gram of fresh plant material. It turns out that, if acacia "leaves" lack the specific glycoside-splitting enzyme, then they may be less toxic than otherwise, even those containing significant quantities of cyanic glycosides.
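The quoted equivalence between the 200 ppm threshold and roughly 7.5 μmol of HCN per gram of fresh material can be checked with a short unit conversion (ppm here meaning mg of HCN per kg of plant material). This is an illustrative sketch, not from the source; the function name is hypothetical and the HCN molar mass (about 27.03 g/mol) is a standard chemical constant.

```python
# Convert a mass-based ppm figure (mg HCN per kg fresh material)
# to micromoles of HCN per gram of fresh material.
HCN_MOLAR_MASS = 27.03  # g/mol, standard value for hydrogen cyanide

def ppm_to_umol_per_gram(ppm: float, molar_mass: float = HCN_MOLAR_MASS) -> float:
    grams_per_gram = ppm * 1e-6            # 200 ppm -> 2e-4 g HCN per g plant
    moles_per_gram = grams_per_gram / molar_mass
    return moles_per_gram * 1e6            # express as micromoles

print(round(ppm_to_umol_per_gram(200), 1))  # prints 7.4
```

The result, about 7.4 μmol/g, matches the rounded "about 7.5 μmol" figure quoted in the text.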
Some "Acacia" species containing cyanogens include "Acacia erioloba", "A. cunninghamii", "A. obtusifolia", "A. sieberiana", and "A. sieberiana" var. "woodii"
The Arbre du Ténéré in Niger was the most isolated tree in the world, standing hundreds of kilometres from any other tree. The tree was knocked down by a truck driver in 1973.
In Nairobi, Kenya, the Thorn Tree Café is named after a Naivasha thorn tree ("Acacia xanthophloea") in its centre. Travelers used to pin notes addressed to other travelers to the thorns of the tree. The current tree is the third of the same variety.
Acapulco
Acapulco de Juárez (), commonly called Acapulco ( , also ), is a city, municipality and major seaport in the state of Guerrero on the Pacific coast of Mexico, south of Mexico City. Acapulco is located on a deep, semicircular bay and has been a port since the early colonial period of Mexico's history. It is a port of call for shipping and cruise lines running between Panama and San Francisco, California, United States. The city of Acapulco is the largest in the state, far larger than the state capital Chilpancingo. Acapulco is also Mexico's largest beach and balneario resort city.
The city is one of Mexico's oldest beach resorts, which came into prominence in the 1940s through the 1960s as a getaway for Hollywood stars and millionaires. Acapulco was once a popular tourist resort, but due to a massive upsurge in gang violence and murder since 2014 it no longer attracts many foreign tourists, and most now come only from Mexico itself. It is both the second-deadliest city in Mexico and the second-deadliest city in the world, and the US government has warned its citizens not to travel there. In 2016 there were 918 murders, and the homicide rate was one of the highest in the world: 103 per 100,000 inhabitants. In September 2018 the city's entire police force was disarmed by the military, due to suspicions that it had been infiltrated by drug gangs.
The resort area is divided into three parts. The north end of the bay and beyond is the "traditional" area, which encompasses the area from "Parque Papagayo" through the Zócalo to the beaches of "Caleta" and "Caletilla". The main part of the bay, known as the "Zona Dorada" ('golden zone' in Spanish), is where the famous vacationed in the mid-20th century. The south end, "Diamante" ('diamond' in Spanish), is dominated by newer luxury high-rise hotels and condominiums.
The name "Acapulco" comes from the Nahuatl "Aca-pōl-co", and means "where the reeds were destroyed or washed away". The "de Juárez" was added to the official name in 1885 to honor Benito Juárez, former President of Mexico (1806–1872). The seal for the city shows broken reeds or cane. The island and municipality of Capul, in the Philippines, derives its name from Acapulco. Acapulco was the eastern end of the trans-Pacific sailing route to Manila, in what was then a Spanish colony.
By the 8th century around the Acapulco Bay area, there was a small culture which would first be dominated by the Olmecs, then by a number of others during the pre-Hispanic period before it ended in the 1520s. At Acapulco Bay itself, there were two Olmec sites, one by Playa Larga and the other on a hill known as "El Guitarrón". Olmec influence caused the small spread-out villages here to coalesce into larger entities and build ceremonial centers.
Later, Teotihuacan influence made its way here via Cuernavaca and Chilpancingo. Then Mayan influence arrived from the Isthmus of Tehuantepec and through what is now Oaxaca. This history is known through the archaeological artifacts that have been found here, especially at "Playa Hornos, Pie de la Cuesta", and "Tambuco".
In the 11th century, new waves of migration of Nahuas and "Coixas" came through here. These people were the antecedents of the Aztecs. In the later 15th century, after four years of military struggle, Acapulco became part of the Aztec Empire during the reign of Ahuizotl (1486–1502). It was annexed to a tributary province named "Tepecuacuilco". However, this was only transitory, as the Aztecs could only establish an unorganized military post at the city's outskirts. The city was in territory under control of the "Yopes", who continued defending it and living there until the arrival of the Spanish in the 1520s.
There are two stories about how Acapulco bay was discovered by Europeans. The first states that two years after the Spanish conquest of the Aztec Empire, Hernán Cortés sent explorers west to find gold. The explorers had subdued this area after 1523, and Captain Saavedra Cerón was authorized by Cortés to found a settlement here. The other states that the bay was discovered on December 13, 1526 by a small ship named the El Tepache Santiago captained by Santiago Guevara.
The first encomendero was established in 1525 at "Cacahuatepec", which is part of the modern Acapulco municipality. In 1531, a number of Spaniards, most notably Juan Rodriguez de Villafuerte, left the Oaxaca coast and founded the village of Villafuerte where the city of Acapulco now stands. Villafuerte was unable to subdue the local native peoples, and this eventually resulted in the Yopa Rebellion in the region of "Cuautepec". Hernán Cortés was obligated to send Vasco Porcayo to negotiate with the indigenous people, granting them concessions. The province of Acapulco became the encomendero of Rodriguez de Villafuerte, who received taxes in the form of cocoa, cotton and corn.
Cortés established Acapulco as a major port by the early 1530s, with the first major road between Mexico City and the port constructed by 1531. The wharf, named Marqués, was constructed by 1533 between Bruja Point and Diamond Point. Soon after, the area was made an "alcadia" (major province or town).
Spanish trade in the Far East would give Acapulco a prominent position in the economy of New Spain. Galleons started arriving here from Asia by 1550, and in that year thirty Spanish families were sent to live here from Mexico City to have a permanent base of European residents. Acapulco would become the second most important port, after Veracruz, due to its direct trade with the Philippines. This trade would focus on the yearly Manila-Acapulco Galleon trade, which was the nexus of all kinds of communications between New Spain, Europe and Asia. In 1573, the port was granted the monopoly of the Manila trade.
The galleon trade made its yearly run from the mid-16th century until the early 19th. The luxury items it brought to New Spain attracted the attention of English and Dutch pirates, such as Francis Drake, Henry Morgan and Thomas Cavendish, who called it "The Black Ship". A Dutch fleet invaded Acapulco in 1615, destroying much of the town before being driven off. The Fort of San Diego was built the following year to protect the port and the cargo of arriving ships. The fort was destroyed by an earthquake in 1776 and was rebuilt between 1778 and 1783.
At the beginning of the 19th century, King Charles IV declared Acapulco a Ciudad Official and it became an essential part of the Spanish Crown. However, not long after, the Mexican War of Independence began. In 1810, José María Morelos y Pavón attacked and burnt down the city, after he defeated royalist commander Francisco Parés at the Battle of Tres Palos. The independence of Mexico in 1821 ended the run of the Manila Galleon.
Acapulco's importance as a port recovered during the California Gold Rush in the mid-19th century, with ships going to and coming from Panama stopping here. The city was besieged on 19 April 1854 by Antonio López de Santa Anna after Guerrero's leadership had rebelled by issuing the Plan de Ayutla. After an unsuccessful week of fighting, Santa Anna retreated.
In 1911, revolutionary forces took over the main plaza of Acapulco.
In 1920, the Prince of Wales (the future King Edward VIII) visited the area. Impressed by what he saw, he recommended the place to his compatriots in Europe, making it popular with the elite there. Much of the original hotel and trading infrastructure was built by a businessman named Albert B. Pullen from Corrigan, Texas, in the area now known as Old Acapulco. In 1933 Carlos Barnard started the first section of "Hotel El Mirador", with 12 rooms on the cliffs of La Quebrada. Wolf Schoenborn purchased large amounts of undeveloped land and Albert Pullen built the "Las Americas Hotel".
In the mid-1940s, the first commercial wharf and warehouses were built. In the early 1950s, President Miguel Alemán Valdés upgraded the port's infrastructure, installing electrical lines, drainage systems, roads and the first highway to connect the port with Mexico City.
The economy grew and foreign investment increased with it. During the 1950s, Acapulco became the fashionable place for millionaire Hollywood stars such as Elizabeth Taylor, Frank Sinatra, Eddie Fisher and Brigitte Bardot. The 1963 Hollywood movie "Fun in Acapulco", starring Elvis Presley, is set in Acapulco although the filming took place in the United States. Former swing musician Teddy Stauffer, the so-called "Mister Acapulco", was a hotel manager ("Villa Vera", "Casablanca"), who attracted many celebrities to Acapulco.
From a population of only 4,000 or 5,000 in the 1940s, by the early 1960s, Acapulco had a population of about 50,000. In 1958, the Diocese of Acapulco was created by Pope Pius XII. It became an archdiocese in 1983.
During the 1960s and 1970s, new hotel resorts were built, and accommodation and transport were made cheaper. It was no longer necessary to be a millionaire to spend a holiday in Acapulco; the foreign and Mexican middle class could now afford to travel here. However, as more hotels were built in the south part of the bay, the old hotels of the 1950s lost their grandeur. For the 1968 Summer Olympics in neighboring Mexico City, Acapulco hosted the sailing (then yachting) events.
In the 1970s, there was a significant expansion of the port.
The Miss Universe 1978 pageant took place in the city. In 1983, singer-songwriter Juan Gabriel wrote the song "Amor eterno", which pays homage to Acapulco. The song was first and most famously recorded by Rocio Durcal. Additionally, Acapulco is the hometown of actress, singer, and comedian Aída Pierce, who found fame during the 1980s, 1990s and the first decade of the 21st century.
The tollway known as the "Ruta del Sol" was built during the 1990s, crossing the mountains between Mexico City and Acapulco. The journey takes only about three-and-a-half hours, making Acapulco a favorite weekend destination for Mexico City inhabitants. It was in that time period that the economic impact of Acapulco as a tourist destination increased positively, and as a result new types of services emerged, such as the Colegio Nautilus. This educational project, backed by the state government, was created for the families of local and foreign investors and businessmen living in Acapulco who were in need of a bilingual and international education for their children.
The port continued to grow and in 1996, a new private company, API Acapulco, was created to manage operations. This consolidated operations and now Acapulco is the major port for car exports to the Pacific.
The city was devastated by Hurricane Pauline in 1997. The storm stranded tourists and left more than 100 dead in the city. Most of the victims were from the shantytowns built on steep hillsides that surround the city. Other victims were swept away by thirty-foot waves and winds. The main road, Avenida Costera, became a fast-moving river of sludge three feet in depth.
In the 21st century, the Mexican Drug War has had a negative effect on tourism in Acapulco, as rival drug traffickers fight each other for the Guerrero coast route that brings drugs from South America, and clash with the soldiers who have been fighting the cartels since 2006.
A major gun battle between 18 gunmen and soldiers took place in the summer of 2009 in the Old Acapulco seaside area, lasting hours and killing 16 of the gunmen and two soldiers. This came after the swine flu outbreak earlier in the year nearly paralyzed the Mexican economy, forcing hotels to give discounts to bring tourists back. However, hotel occupancy for 2009 was down five percent from the year before. The death of Arturo Beltran Leyva in December 2009 resulted in infighting among different groups within the Beltran Leyva cartel.
Gang violence continued to plague Acapulco through 2010 and into 2011, most notably with at least 15 dying in drug-related violence on March 13, 2010, and another 15 deaths on January 8, 2011. Among the first incident's dead were six members of the city police and the brother of an ex-mayor. In the second incident, the headless bodies of 15 young men were found dumped near the Plaza Senderos shopping center. On August 20, 2011, Mexican authorities reported that five headless bodies were found in Acapulco, three of which were placed in the city's main tourist area and two of which were cut into multiple pieces.
On February 4, 2013, six Spanish men were tied up and robbed, and the six Spanish women with them were gang-raped, by five masked gunmen who stormed a beach house on the outskirts of Acapulco; despite these accusations, none of the victims decided to press charges. On September 28, 2014, Mexican politician Braulio Zaragoza, leader of the conservative opposition National Action Party (PAN) in southern Guerrero state, was gunned down at the "El Mirador" hotel in the city. Several politicians have been targeted by drug cartels operating in the area; investigations are under way, but no arrests have yet been made. The insecurity caused by individuals involved with drug cartels has cost the city of Acapulco its popularity among national and international tourists. According to the "Dirección General de Aeronáutica Civil", the number of international air passengers arriving in Acapulco fell from 355,760 in 2006 to 52,684 in 2015, a drop of 85 percent over nine years.
The city, located on the Pacific coast of Mexico in the state of Guerrero, is classified as one of the state's seven regions, dividing the rest of the Guerrero coast into the Costa Grande and the Costa Chica. Forty percent of the municipality is mountainous terrain; another forty percent is semi-flat; and the remaining twenty percent is flat. Altitude varies from sea level to . The highest peaks are "Potrero", "San Nicolás", and "Alto Camarón". One major river runs through the municipality, the "Papagayo", along with a number of "arroyos" (streams). There are also two small lagoons, Tres Palos and Coyuca, along with a number of thermal springs.
Acapulco features a tropical wet and dry climate (Köppen: Aw): hot with distinct wet and dry seasons, with more even temperatures between seasons than resorts farther north in Mexico, but this varies depending on altitude. The warmest areas are next to the sea where the city is. Tropical storms and hurricanes are threats from May through November. The forested area tends to lose leaves during the winter dry season, with evergreen pines in the highest elevations. Fauna consists mostly of deer, small mammals, a wide variety of both land and seabirds, and marine animals such as turtles. Oddly enough, January, its coolest month, also features its all-time record high.
The temperature of the sea is quite stable, with lows of between January – March, and a high of in August.
As the seat of a municipality, the city of Acapulco is the government authority for over 700 other communities, which together have a territory of 1,880.60 km2. The municipality borders the municipalities of Chilpancingo, Juan R. Escudero (Tierra Colorada), San Marcos, and Coyuca de Benítez, with the Pacific Ocean to the south.
The metropolitan area is made up of the municipalities of Acapulco de Juárez and Coyuca de Benitez. The area has a population () of 786,830.
Acapulco is the most populous city in the state of Guerrero, according to the results of the 2010 Population and Housing Census carried out by the National Institute of Statistics and Geography (INEGI), with a census date of June 12, 2010. The city then had a total population of 673,479 inhabitants, of whom 324,746 were men and 348,733 were women. It is considered the twenty-second most populous city in Mexico and the center of the tenth most populous metropolitan area in the country. It also has the highest concentration of population in the municipality of the same name, representing 85.25 percent of its 789,971 inhabitants.
The metropolitan area of Acapulco is made up of six towns in the municipality of Acapulco de Juárez and four in the municipality of Coyuca de Benítez. According to the most recent count and official delimitation, carried out in 2010 jointly by the National Institute of Statistics and Geography, the National Population Council, and the Secretariat of Social Development, the metropolitan area of Acapulco comprised a total of 863,431 inhabitants over an area of 3,538.5 km², making it the tenth most populous metropolitan area in Mexico. A 2002 study on climate and geography by the National Autonomous University of Mexico estimated that the city of Acapulco would exceed one million inhabitants between 2015 and 2020.
Source: Instituto Nacional de Estadística y Geografía
Tourism is the main economic activity of the municipality and most of this is centered on Acapulco Bay. About seventy-three percent of the municipality's population is involved in commerce, most of it related to tourism and the port. Mining and manufacturing employ less than twenty percent and only about five percent is dedicated to agriculture. Industrial production is limited mostly to bottling, milk products, cement products, and ice and energy production. Agricultural products include tomatoes, corn, watermelon, beans, green chili peppers, and melons.
Acapulco is one of Mexico's oldest coastal tourist destinations, reaching prominence in the 1950s as the place where Hollywood stars and millionaires vacationed on the beach in an exotic locale. In modern times, tourists in Acapulco have been facing problems with corrupt local police who steal money by extortion and intimidate visitors with threats of jail.
The city is divided into three tourist areas.
Traditional Acapulco, the old part of the port at the northern end of the bay, is where hotels such as Hotel Los Flamingos, once owned by personalities Johnny Weissmuller and John Wayne, are located. It is anchored by attractions such as the beaches of Caleta and Caletilla, the cliff divers of La Quebrada, and the city square, known as "El Zocalo". The heyday of this part of Acapulco ran from the late 1930s until the 1960s, with development continuing through the 1980s. This older section of town now caters to a mostly middle-class, almost exclusively Mexican clientele, while the glitzier newer section caters to the Mexican upper classes, many of whom never venture into the older, traditional part of town.
Acapulco Dorado was developed between the 1950s and the 1970s and is about 25 minutes from the Acapulco International Airport. The area receives the largest tourist influx in the port and runs along much of Acapulco Bay, from Icacos through Costera Miguel Aleman Avenue, the main thoroughfare, to Papagayo Park. It has several hotels.
Acapulco Diamante, also known as Punta Diamante, is the newest and most developed part of the port, with investment having created one of the greatest concentrations of luxury facilities in Mexico, including exclusive hotels and resorts of international chains, residential complexes, luxury condominiums and private villas, spas, restaurants, shopping areas and a golf course. Starting at the Scenic Highway in Las Brisas, it includes Puerto Marqués and Punta Diamante and extends to Barra Vieja Beach. It is 10 minutes from the Acapulco International Airport. In this area, all along "Boulevard de las Naciones", almost all transportation is by car, limousine or golf cart.
Acapulco's reputation as a high-energy party town and its nightlife have long drawn tourists to the city. From November to April, luxury liners stop here daily, including ships such as the MS "Queen Victoria", the MS "Rotterdam", the "Crystal Harmony", and all the Princess line ships. Despite Acapulco's international fame, most of its visitors are from central Mexico, especially the affluent from Mexico City. Acapulco is one of the embarkation ports for the Mexican cruise line Ocean Star Cruises.
For the Christmas season of 2009, Acapulco received 470,000 visitors, most of whom were Mexican nationals, adding 785 million pesos to the economy. Eighty percent arrived by land and eighteen percent by air. The area has over 25,000 condominiums, most of which function as second homes for their Mexican owners. Acapulco is still popular with Mexican celebrities and the wealthy, such as Luis Miguel and Plácido Domingo, who maintain homes there.
While much of the glitz and glamour that made Acapulco famous still remains, from the latter 20th century on, the city has also taken on other less-positive reputations. Some consider it a "passé" resort, eclipsed by the newer Cancún and Cabo San Lucas. Over the years, a number of problems have developed here, especially in the bay and the older sections of the city. The large number of wandering vendors on the beaches, who offer everything from newspapers to massages, are a recognized problem. They are a bother to tourists who simply want to relax on the beach, but the government says the practice is difficult to eradicate, as there is much unemployment and poverty in the city. Around the city are many small shantytowns that cling to the mountainsides, populated by migrants who have come to the city looking for work. In the last decade, drug-related violence has caused massive problems for the local tourism trade.
Another problem is the garbage that has accumulated in the bay. Although 60.65 tons have recently been extracted from the bays of Acapulco and nearby Zihuatanejo, more needs to be done. Most trash removal during the off seasons is done on the beaches and in the waters closest to them; the center of the bay is not touched. Trash winds up in the bay because it is commonly thrown into streets, rivers and the bay itself. The most common items cleaned out of the bay are beer bottles and car tires. Acapulco has seen some success in this area, with several beaches receiving the high "blue flag" certifications for cleanliness and water quality.
Acapulco's gastronomy is very rich. The following are typical dishes from the region:
Relleno is baked pork with a variety of vegetables and fruits such as potatoes, raisins, carrots and chiles. It is eaten with bread called "bolillo".
Pozole is a soup with a salsa base (it can be white, red or green), corn, meat that can be either pork or chicken and it is accompanied with "antojitos" (snacks) like tostadas, tacos and tamales. This dish is served as part of a weekly Thursday event in the city and the state, with many restaurants offering the meal with special entertainment, from bands to dancers to celebrity impersonators.
Acapulco's main attraction is its nightlife, as it has been for many decades. Nightclubs change names and owners frequently.
For example, Baby 'O has been open to the national and international public since 1976, and celebrities such as Mexican singer Luis Miguel, Bono of U2, and Sylvester Stallone have visited it. Another nightclub is Palladium, located on the Escénica Avenue; its location gives the nightclub a beautiful view of Santa Lucia Bay at night. Various DJs have performed at Palladium, among them DVBBS, Tom Swoon, Nervo and Junkie KID.
Informal lobby or poolside cocktail bars often offer free live entertainment. In addition, there is the beach bar zone, where younger crowds go. These are located along the Costera road, face the ocean and feature techno or alternative rock. Most are concentrated between the Fiesta Americana and Continental Plaza hotels. These places tend to open earlier and have more informal dress. There is a bungee jump in this area as well.
Another attraction at Acapulco is the La Quebrada Cliff Divers. The tradition started in the 1930s when young men casually competed against each other to see who could dive from the highest point into the sea below. Eventually, locals began to ask those coming to see the men dive for tips. Today the divers are professionals, diving from heights of into an inlet that is only wide and deep, after praying first at a shrine to the Virgin of Guadalupe. On December 12, the feast day of this Virgin, freestyle cliff divers jump into the sea to honor her. Dives range from the simple to the complicated and end with the "Ocean of Fire", when the sea is lit with gasoline, making a circle of flames which the diver aims for. The spectacle can be seen from a public area which charges a small fee or from the bar or restaurant terrace of the Hotel Plaza Las Glorias/El Mirador.
There are a number of beaches in the Acapulco Bay and the immediate coastline. In the bay proper there are the La Angosta (in the Quebrada), Caleta, Caletilla, Dominguillo, Tlacopanocha, Hornos, Hornitos, Honda, Tamarindo, Condesa, Guitarrón, Icacos, Playuela, Playuelilla and Playa del Secreto. In the adjoining, smaller Bay of Puerto Marqués there is Pichilingue, Las Brisas, and Playa Roqueta. Facing open ocean just northwest of the bays is Pie de la Cuesta and southeast are Playa Revolcadero, Playa Aeromar, Playa Encantada and Barra Vieja. Two lagoons are in the area, Coyuca to the northwest of Acapulco Bay and Tres Palos to the southeast. Both lagoons have mangroves and offer boat tours. Tres Palos also has sea turtle nesting areas which are protected.
In addition to sunbathing, the beaches around the bay offer a number of services, such as boat rentals, boat tours, horseback riding, scuba diving and other aquatic sports. One popular cruise is from Caletilla Beach to Roqueta Island, which has places to snorkel, have lunch, and a lighthouse. There is also an underwater statue of the Virgin of Guadalupe here, created in 1958 by Armando Quesado in memory of a group of divers who died here. Many of the scuba-diving tours come to this area as well, where there are sunken ships, sea mountains, and cave rock formations. Another popular activity is deep-sea fishing. The major attraction is sail fishing. Fish caught here have weighed between 89 and 200 pounds. Sailfish are so plentiful that boat captains have been known to bet with a potential customer that if he does not catch anything, the trip is free.
In the old part of the city, there is a traditional main square called the Zócalo, lined with shade trees, cafés and shops. At the north end of the square is "Nuestra Señora de la Soledad" cathedral, with blue onion-shaped domes and Byzantine towers. The building was originally constructed as a movie set, but was later adapted into a church. Acapulco's most historic building is the Fort of San Diego, located east of the main square and originally built in 1616 to protect the city from pirate attacks. The fort was partially destroyed by the Dutch in the mid-17th century, rebuilt, then destroyed again in 1776 by an earthquake. It was rebuilt again by 1783 and this is the building that can be seen today, unchanged except for renovations done to it in 2000. Parts of the moats remain as well as the five bulwarks and the battlements. Today the fort serves as the Museo Histórico de Acapulco (Acapulco Historical Museum), which shows the port's history from the pre-Hispanic period until independence. There are temporary exhibits as well. For many years tourists could ride around the city in colorful horse-drawn carriages known as "calandrias", but the practice ended in February 2020 due to concerns about mistreatment of the animals.
The "Centro Internacional de Convivencia Infantil" or CICI is a sea-life and aquatic park located on Costera Miguel Aleman. It offers wave pools, water slides and water toboggans. There are also dolphin shows daily and a swim with dolphins program. The center mostly caters to children. Another place that is popular with children is the "Parque Papagayo": a large family park which has life-sized replicas of a Spanish galleon and the space shuttle Columbia, three artificial lakes, an aviary, a skating rink, rides, go-karts and more.
The Dolores Olmedo House is located in the traditional downtown of Acapulco and is noted for the murals by Diego Rivera that adorn it. Olmedo and Rivera had been friends since Olmedo was a child, and Rivera spent the last two years of his life here. During that time, he painted nearly nonstop, covering the outside walls with tile mosaics featuring Aztec deities such as Quetzalcoatl. The interior of the home is covered in murals. The home is not a museum, so only the outside murals can be seen by the public.
There is a small museum called "Casa de la Máscara" (House of Masks) which is dedicated to masks, most of them from Mexico, but there are examples from many parts of the world. The collection contains about one thousand examples and is divided into seven rooms called Masks of the World, Mexico across History, The Huichols and the Jaguar, Alebrijes and Dances of Guerrero, Devils and Death, Identity and Fantasy, and Afro-Indian Masks.
The Botanical Garden of Acapulco is a tropical garden located on lands owned by the Universidad Loyola del Pacífico. Most of the plants here are native to the region, and many, such as the Peltogyne mexicana or purple stick tree, are in danger of extinction.
One cultural event held yearly in Acapulco is the "Festival Internacional de la Nao", which takes place in the Fort of San Diego, near the Zócalo in the downtown of the city. The festival commemorates the city's interaction and trade with Asia, which began in the sixteenth century. The Nao Festival consists of cultural activities supported by organizations and embassies from India, China, Japan, the Philippines, Thailand, Indonesia and South Korea. Events range from film screenings, musical performances and theatre to cooking classes, and some are aimed specifically at children.
The annual French Festival takes place throughout Acapulco city and offers a multitude of events that cement cultural links between Mexico and France. The main features are a fashion show and a gourmet food fair. The Cinépolis Galerías Diana and the Teatro Juan Ruíz de Alarcón present French films and French literary figures who give talks on their specialised subjects. Even some of the local nightclubs feature French DJs. Other festivals celebrated here include Carnival, the feast of San Isidro Labrador on 15 May, and in November, a crafts and livestock fair called the Nao de China.
There are a number of golf courses in Acapulco including the Acapulco Princess and the Pierre Marqués course, the latter designed by Robert Trent Jones in 1972 for the World Cup Golf Tournament. The Mayan Palace course was designed by Pedro Guericia and an economical course called the Club de Golf Acapulco is near the convention center. The most exclusive course is that of the Tres Vidas Golf Club, designed by Robert von Hagge. It is located next to the ocean and is home to flocks of ducks and other birds.
Another famous sporting event held in Acapulco since 1993 is the Abierto Mexicano Telcel tennis tournament, an ATP 500 event that takes place on the tennis courts of the Princess Mundo Imperial, a resort located in the Diamante zone of Acapulco. Initially it was played on clay courts, but it later changed to hard courts. The event has gained popularity over the years, attracting some of the top tennis players in the world, including Novak Djokovic, Rafael Nadal and Marin Cilic. The total prize money is US$250,000.00 for the WTA (women) and US$1,200,000.00 for the ATP (men).
Acapulco also has a bullring, called the Plaza de Toros, near Caletilla Beach. The season runs during the winter and is called the Fiesta Brava.
Before 2010, over 100,000 American teenagers and young adults traveled to resort areas and balnearios throughout Mexico during spring break each year. The main reason students head to Mexico is the drinking age of 18 years (versus 21 for the United States), something that has been marketed by tour operators along with the sun and ocean. This has become attractive since the 1990s, especially since more traditional spring break places such as Daytona Beach, Florida, have enacted restrictions on drinking and other behaviors. This legislation has pushed spring break tourism to various parts of Mexico, with Acapulco as one of the top destinations.
In the late 1990s and early 2000s, Cancún had been favored as the spring break destination of choice. However, Cancún has taken some steps to control the reckless behavior associated with the event, and students have been looking for someplace new. This led many more to choose Acapulco, in spite of the fact that for many travelers, the flight is longer and more expensive than to Cancún. Many were attracted by the glitzy hotels on the south side and Acapulco's famous nightlife. In 2008, 22,500 students came to Acapulco for spring break. Hotels did not get that many in 2009, due mostly to the economic situation in the United States, and partially because of scares of drug-related violence.
In February 2009, the US State Department issued a travel alert directed at college students planning spring break trips to Acapulco. The warning—a result of violent activity springing from Mexico's drug cartel débâcle—took college campuses by storm, with some schools going so far as to warn their students about the risks of travel to Mexico over spring break. "The New York Times" tracked the travels of a Penn student on spring break in Acapulco just a week after the warning was disseminated, while Bill O'Reilly devoted a segment of his show, "The O'Reilly Factor", to urge students to stay away from Acapulco. In June 2009, a number of incidents occurred between the drug cartel and the government. These included coordinated attacks on police headquarters and open battles in the streets, involving large-caliber weapons and grenades. However, no incidents of violence against travelers on spring break were reported.
Many airlines fly to Acapulco International Airport. In the city, there are many buses and taxi services one can take to get from place to place, but most of the locals choose to walk to their destinations. However, an important mode of transportation is the government-subsidized 'Colectivo' cab system. These cabs cost 13 pesos per person to ride, but they are not private. The driver will pick up more passengers as long as seats are available, and will transport them to their destinations based on first-come, first-served rules. The colectivos each travel a designated area of the city, the three main ones being Costera, Colosio, and Coloso, or a mixture of the three. Coloso cabs travel mainly to old Acapulco. Colosio cabs travel through most of the tourist area of Acapulco. Costera cabs drive up and down the coast of Acapulco, where most of the hotels for visitors are located, but which includes some of old Acapulco. Where a driver will take a passenger is partly his choice. Some are willing to travel to the other designated areas, especially during slow periods of the day.
The bus system is highly complex and can be rather confusing to an outsider. It is the cheapest form of transportation in Acapulco other than walking. The most expensive buses have air conditioning, while the cheaper buses do not. For tourists, the Acapulco city government has established a system of yellow buses with "Acapulco" painted on the side. These buses are not for tourists only, but are certainly the nicest and most uniform of the bus systems. They travel the tourist section of Acapulco, driving up and down the coast. There are buses with specific routes and destinations, generally written on their windshields or shouted out by a barker riding in the front seat. Perhaps the most unusual thing about the privately operated buses is that they are all highly decorated and personalized, with decals and home-made interior designs that range from comic book scenes, to pornography, and even to "Hello Kitty" themes.
The troubled public transportation system was upgraded on June 25, 2016, with the implementation of the Acabus. The Acabus infrastructure has a length of , with 16 stations spread through the city of Acapulco and 5 routes. The project helps organize traffic because the buses now have a dedicated lane on the roads, allowing more control over transportation and passengers.
In 2014, the idea to nominate the Manila-Acapulco Galleon Trade Route was initiated by the Mexican ambassador to UNESCO with the Filipino ambassador to UNESCO.
An Experts' Roundtable Meeting was held at the University of Santo Tomas (UST) on April 23, 2015, as part of the Philippines' preparation for the possible transnational nomination of the Manila-Acapulco Galleon Trade Route to the World Heritage List. The nomination will be made jointly with Mexico.
The following are the experts and the topics they discussed during the roundtable meeting: Dr. Celestina Boncan on the Tornaviaje; Dr. Mary Jane A. Bolunia on Shipyards in the Bicol Region; Mr. Sheldon Clyde Jago-on, Bobby Orillaneda, and Ligaya Lacsina on Underwater Archaeology; Dr. Leovino Garcia on Maps and Cartography; Fr. Rene Javellana, S.J. on Fortifications in the Philippines; Felice Sta. Maria on Food; Dr. Fernando Zialcita on Textile; and Regalado Trota Jose on Historical Dimension. The papers presented and discussed during the roundtable meeting will be synthesized into a working document to establish the route's Outstanding Universal Value.
The Mexican side reiterated that they will also follow suit with the preparations for the route's nomination.
Spain has also backed the nomination of the Manila-Acapulco Galleon Trade Route to the UNESCO World Heritage Site list and has also suggested that the Archives of the Manila-Acapulco Galleons be nominated as part of a separate UNESCO list, the UNESCO Memory of the World Register.
Alan Kay
Alan Curtis Kay (born May 17, 1940) is an American computer scientist. He has been elected a Fellow of the American Academy of Arts and Sciences, the National Academy of Engineering, and the Royal Society of Arts. He is best known for his pioneering work on object-oriented programming and windowing graphical user interface (GUI) design.
He was the president of the Viewpoints Research Institute before its closure in 2018, and an adjunct professor of computer science at the University of California, Los Angeles. He is also on the advisory board of TTI/Vanguard. Until mid-2005, he was a senior fellow at HP Labs, a visiting professor at Kyoto University, and an adjunct professor at the Massachusetts Institute of Technology (MIT).
Kay is also a former professional jazz guitarist, composer, and theatrical designer, and an amateur classical pipe organist.
In an interview on education in America with the Davis Group Ltd., Kay said:
Originally from Springfield, Massachusetts, Kay's family relocated several times due to his father's career in physiology before ultimately settling in the New York metropolitan area when he was nine.
He attended the prestigious Brooklyn Technical High School, where he was suspended due to insubordination in his senior year. Having already accumulated enough credits to graduate, Kay then attended Bethany College in Bethany, West Virginia. He majored in biology and minored in mathematics before he was asked to leave by the administration for protesting the institution's Jewish quota.
Thereafter, Kay taught guitar in Denver, Colorado for a year and hastily enlisted in the United States Air Force when the local draft board inquired about his nonstudent status. Assigned as a computer programmer (a rare billet dominated by women due to the secretarial connotations of the field in the era) after passing an aptitude test, he devised an early cross-platform file transfer system.
Following his discharge, Kay enrolled at the University of Colorado Boulder, earning a bachelor's degree in mathematics and molecular biology in 1966. Before and during this time, he worked as a professional jazz guitarist. During his studies at CU, he wrote the music for an adaptation of "The Hobbit" and other campus theatricals.
In the autumn of 1966, he began graduate school at the University of Utah College of Engineering. He earned a Master of Science (M.S.) in electrical engineering in 1968 before taking his Doctor of Philosophy (Ph.D.) in computer science in 1969. His doctoral dissertation, "FLEX: A Flexible Extendable Language", described the invention of a computer language known as FLEX. While there, he worked with "fathers of computer graphics" David C. Evans (who had been recently recruited from the University of California, Berkeley to start Utah's computer science department) and Ivan Sutherland (best known for writing such pioneering programs as Sketchpad). Their mentorship greatly inspired Kay's evolving views on objects and programming. As he grew busier with research for the Defense Advanced Research Projects Agency (DARPA), he ended his musical career.
In 1968, he met Seymour Papert and learned of the programming language Logo, a dialect of Lisp optimized for educational purposes. This led him to learn of the work of Jean Piaget, Jerome Bruner, Lev Vygotsky, and of constructionist learning, further influencing his professional orientation.
Leaving Utah as an associate professor of computer science in 1969, Kay became a visiting researcher at the Stanford Artificial Intelligence Laboratory in anticipation of accepting a professorship at Carnegie Mellon University. Instead, in 1970, he joined the Xerox PARC research staff in Palo Alto, California. Throughout the decade, he developed prototypes of networked workstations using the programming language Smalltalk. These inventions were later commercialized by Apple in their Lisa and Macintosh computers.
Kay is one of the fathers of the idea of object-oriented programming, which he named, along with some colleagues at PARC. Some of the original object-oriented concepts, including the use of the words 'object' and 'class', had been developed for Simula 67 at the Norwegian Computing Center. Later he said:
I'm sorry that I long ago coined the term "objects" for this topic because it gets many people to focus on the lesser idea. The big idea is "messaging".
While at PARC, Kay conceived the Dynabook concept, a key progenitor of laptop and tablet computers and the e-book. He is also the architect of the modern overlapping windowing graphical user interface (GUI). Because the Dynabook was conceived as an educational platform, Kay is considered to be one of the first researchers into mobile learning; many features of the Dynabook concept have been adopted in the design of the One Laptop Per Child educational platform, with which Kay is actively involved.
According to Kay, the field of computing is awaiting a new revolution, in which educational communities, parents, and children will see the computer not as a set of tools invented by Douglas Engelbart, but as a medium in the Marshall McLuhan sense. He wrote:
As with Simulas leading to OOP, this encounter finally hit me with what the destiny of personal computing really was going to be. Not a personal dynamic vehicle, as in Engelbart's metaphor opposed to the IBM "railroads", but something much more profound: a personal dynamic medium. With a vehicle one could wait until high school and give "drivers ed", but if it was a medium, it had to extend into the world of childhood.
From 1981 to 1984, Kay was Atari's Chief Scientist. In 1984, he became an Apple Fellow. Following the closure of the Apple Advanced Technology Group in 1997, he was recruited by his friend Bran Ferren, head of research and development at Disney, to join Walt Disney Imagineering as a Disney Fellow. He remained there until Ferren left to start Applied Minds Inc with Imagineer Danny Hillis, leading to the cessation of the Fellows program. In 2001, he founded Viewpoints Research Institute, a non-profit organization dedicated to children, learning, and advanced software development. For its first ten years, Kay and his Viewpoints group were based at Applied Minds in Glendale, California, where he and Ferren continued to work together on various projects. Kay was also a Senior Fellow at Hewlett-Packard until HP disbanded the Advanced Software Research Team on July 20, 2005.
Kay taught a Fall 2011 class, "Powerful Ideas: Useful Tools to Understand the World", at New York University's Interactive Telecommunications Program (ITP) along with full-time ITP faculty member Nancy Hechinger. The goal of the class was to devise new forms of teaching/learning based on fundamental, powerful concepts rather than traditional rote learning.
In December 1995, while still at Apple, Kay collaborated with many others to start the open source Squeak version of Smalltalk, and he continues to work on it. As part of this effort, in November 1996, his team began research on what became the Etoys system. More recently he started, along with David A. Smith, David P. Reed, Andreas Raab, Rick McGeer, Julian Lombardi and Mark McCahill, the Croquet Project, an open source networked 2D and 3D environment for collaborative work.
In 2001, it became clear that the Etoys architecture in Squeak had reached the limits of what the Morphic interface infrastructure could do. Andreas Raab, then a researcher in Kay's group at Hewlett-Packard, proposed defining a "script process" and providing a default scheduling mechanism that avoids several more general problems. The result was a new user interface, proposed to replace the Squeak Morphic user interface in the future. Tweak added mechanisms of islands, asynchronous messaging, players and costumes, language extensions, projects, and tile scripting. Its underlying object system is class-based, but to users (during programming) it acts as if it were prototype-based. Tweak objects are created and run in Tweak project windows.
In November 2005, at the World Summit on the Information Society, the MIT research laboratories unveiled a new laptop computer, for educational use around the world. It has many names: the $100 Laptop, the One Laptop per Child program, the Children's Machine, and the XO-1. The program was begun and is sustained by Kay's friend Nicholas Negroponte, and is based on Kay's Dynabook ideal. Kay is a prominent co-developer of the computer, focusing on its educational software using Squeak and Etoys.
Kay has lectured extensively on the idea that the computer revolution is very new, and that not all of its good ideas have been universally implemented. His lectures at the OOPSLA 1997 conference and his ACM Turing Award talk, entitled "The Computer Revolution Hasn't Happened Yet", were informed by his experiences with Sketchpad, Simula, Smalltalk, and the bloated code of commercial software.
On August 31, 2006, Kay's proposal to the United States National Science Foundation (NSF) was granted, funding Viewpoints Research Institute for several years. The proposal was titled "STEPS Toward the Reinvention of Programming: A Compact and Practical Model of Personal Computing as a Self-exploratorium". A sense of what Kay is trying to do comes from this quote, from the abstract of a seminar on the work given at Intel Research Labs, Berkeley: "The conglomeration of commercial and most open source software consumes in the neighborhood of several hundreds of millions of lines of code these days. We wonder: how small could be an understandable practical "Model T" design that covers this functionality? 1M lines of code? 200K LOC? 100K LOC? 20K LOC?"
Alan Kay has received many awards and honors. Among them are the J-D Warnier Prix d'Informatique, the ACM Software System Award, the NEC Computers & Communication Foundation Prize, the Funai Foundation Prize, the Lewis Branscomb Technology Award, and the ACM SIGCSE Award for Outstanding Contributions to Computer Science Education.
APL (programming language)
APL (named after the book "A Programming Language") is a programming language developed in the 1960s by Kenneth E. Iverson. Its central datatype is the multidimensional array. It uses a large range of special graphic symbols to represent most functions and operators, leading to very concise code. It has been an important influence on the development of concept modeling, spreadsheets, functional programming, and computer math packages. It has also inspired several other programming languages.
A mathematical notation for manipulating arrays was developed by Kenneth E. Iverson, starting in 1957 at Harvard University. In 1960, he began work for IBM where he developed this notation with Adin Falkoff and published it in his book "A Programming Language" in 1962. The preface states its premise:
This notation was used inside IBM for short research reports on computer systems, such as the Burroughs B5000 and its stack mechanism when stack machines versus register machines were being evaluated by IBM for upcoming computers.
Iverson also used his notation in a draft of the chapter "A Programming Language", written for a book he was writing with Fred Brooks, "Automatic Data Processing", which would be published in 1963.
In 1979, Iverson received the Turing Award for his work on APL.
The first attempt to use the notation to describe a complete computer system came as early as 1962, after Falkoff discussed with William C. Carter his work to standardize the instruction set for the machines that later became the IBM System/360 family.
In 1963, Herbert Hellerman, working at the IBM Systems Research Institute, implemented a part of the notation on an IBM 1620 computer, and it was used by students in a special high school course on calculating transcendental functions by series summation. Students tested their code in Hellerman's lab. This implementation of a part of the notation was called Personalized Array Translator (PAT).
In 1963, Falkoff, Iverson, and Edward H. Sussenguth Jr., all working at IBM, used the notation for a formal description of the IBM System/360 series machine architecture and functionality, which resulted in a paper published in "IBM Systems Journal" in 1964. After this was published, the team turned their attention to an implementation of the notation on a computer system. One of the motivations for this focus of implementation was the interest of John L. Lawrence who had new duties with Science Research Associates, an educational company bought by IBM in 1964. Lawrence asked Iverson and his group to help use the language as a tool to develop and use computers in education.
After Lawrence M. Breed and Philip S. Abrams of Stanford University joined the team at IBM Research, they continued their prior work on an implementation programmed in FORTRAN IV for a part of the notation, which had been done for the IBM 7090 computer running the IBSYS operating system. This work was finished in late 1965 and later named IVSYS (for Iverson system). The basis of this implementation was described in detail by Abrams in a 1966 Stanford University Technical Report, "An Interpreter for Iverson Notation"; the academic aspect of the work was formally supervised by Niklaus Wirth. Like Hellerman's PAT system earlier, this implementation did not include the APL character set but used special English reserved words for functions and operators. The system was later adapted for a time-sharing system and, by November 1966, it had been reprogrammed for the IBM System/360 Model 50 computer running in a time-sharing mode and was used internally at IBM.
A key development in the ability to use APL effectively, before the wide use of cathode ray tube (CRT) terminals, was the development of a special IBM Selectric typewriter interchangeable typing element with all the special APL characters on it. This was used on paper printing terminal workstations using the Selectric typewriter and typing element mechanism, such as the IBM 1050 and IBM 2741 terminal. Keycaps could be placed over the normal keys to show which APL characters would be entered and typed when that key was struck. For the first time, a programmer could type in and see proper APL characters as used in Iverson's notation and not be forced to use awkward English keyword representations of them. Falkoff and Iverson had the special APL Selectric typing elements, 987 and 988, designed in late 1964, although no APL computer system was available to use them. Iverson cited Falkoff as the inspiration for the idea of using an IBM Selectric typing element for the APL character set.
Many APL symbols, even with the APL characters on the Selectric typing element, still had to be typed in by over-striking two extant element characters. An example is the "grade up" character, which had to be made from a "delta" (shift-H) and a "Sheffer stroke" (shift-M). This was necessary because the APL character set was much larger than the 88 characters allowed on the typing element, even when letters were restricted to upper-case (capitals).
The first APL interactive login and creation of an APL workspace was in 1966 by Larry Breed using an IBM 1050 terminal at the IBM Mohansic Labs near Thomas J. Watson Research Center, the home of APL, in Yorktown Heights, New York.
IBM was chiefly responsible for introducing APL to the marketplace. APL was first available in 1967 for the IBM 1130 as "APL\1130". It would run in as little as 8k 16-bit words of memory, and used a dedicated 1 megabyte hard disk.
APL gained its foothold on mainframe timesharing systems from the late 1960s through the early 1980s, in part because it could support multiple users on lower-specification systems that had no dynamic address translation hardware. Additional improvements in performance for selected IBM System/370 mainframe systems included the "APL Assist Microcode", in which some support for APL execution was included in the processor's firmware rather than implemented entirely by higher-level software. Somewhat later, as suitably performing hardware finally became available in the mid- to late 1980s, many users migrated their applications to the personal computer environment.
Early IBM APL interpreters for IBM 360 and IBM 370 hardware implemented their own multi-user management instead of relying on the host services, thus they were their own timesharing systems. First introduced in 1966, the "APL\360" system was a multi-user interpreter. The ability to programmatically communicate with the operating system for information and setting interpreter system variables was done through special privileged "I-beam" functions, using both monadic and dyadic operations.
In 1973, IBM released "APL.SV", which was a continuation of the same product, but which offered shared variables as a means to access facilities outside of the APL system, such as operating system files. In the mid-1970s, the IBM mainframe interpreter was even adapted for use on the IBM 5100 desktop computer, which had a small CRT and an APL keyboard, when most other small computers of the time only offered BASIC. In the 1980s, the "VSAPL" program product enjoyed wide use with Conversational Monitor System (CMS), Time Sharing Option (TSO), VSPC, MUSIC/SP, and CICS users.
In 1973–1974, Patrick E. Hagerty directed the implementation of the University of Maryland APL interpreter for the 1100 line of the Sperry UNIVAC 1100/2200 series mainframe computers. At the time, Sperry had no APL of its own. In 1974, student Alan Stebbens was assigned the task of implementing an internal function. Xerox APL was available from June 1975 for Xerox 560 and Sigma 6, 7, and 9 mainframes running CP-V, and for Honeywell CP-6.
In the 1960s and 1970s, several timesharing firms arose that sold APL services using modified versions of the IBM APL\360 interpreter. In North America, the better-known ones were I. P. Sharp Associates, Scientific Time Sharing Corporation (STSC), Time Sharing Resources (TSR), and The Computer Company (TCC). CompuServe also entered the market in 1978 with an APL interpreter based on a modified version of Digital Equipment Corporation and Carnegie Mellon's implementation, which ran on DEC's KI and KL 36-bit machines. CompuServe's APL was available both to its commercial market and the consumer information service. With the advent first of less expensive mainframes such as the IBM 4300, and later the personal computer, by the mid-1980s the timesharing industry was all but gone.
"Sharp APL" was available from I. P. Sharp Associates, first as a timesharing service in the 1960s, and later as a program product starting around 1979. "Sharp APL" was an advanced APL implementation with many language extensions, such as "packages" (the ability to put one or more objects into a single variable), file system, nested arrays, and shared variables.
APL interpreters were available from other mainframe and mini-computer manufacturers also, notably Burroughs, Control Data Corporation (CDC), Data General, Digital Equipment Corporation (DEC), Harris, Hewlett-Packard (HP), Siemens AG, Xerox, and others.
Garth Foster of Syracuse University sponsored regular meetings of the APL implementers' community at Syracuse's Minnowbrook Conference Center in Blue Mountain Lake, New York. In later years, Eugene McDonnell organized similar meetings at the Asilomar Conference Grounds near Monterey, California, and at Pajaro Dunes near Watsonville, California. The SIGAPL special interest group of the Association for Computing Machinery continues to support the APL community.
On microcomputers, which became available from the mid-1970s onwards, BASIC became the dominant programming language. Nevertheless, some microcomputers provided APL instead; the first was the Intel 8008-based MCM/70, released in 1974 and used primarily in education. Another machine of this era was the VideoBrain Family Computer, released in 1977, which was supplied with its own dialect of APL called APL/S.
The Commodore SuperPET, introduced in 1981, included an APL interpreter developed by the University of Waterloo.
In 1976, Bill Gates claimed in his Open Letter to Hobbyists that Microsoft Corporation was implementing APL for the Intel 8080 and Motorola 6800 but had "very little incentive to make [it] available to hobbyists" because of software piracy. It was never released.
Starting in the early 1980s, IBM APL development, under the leadership of Jim Brown, implemented a new version of the APL language that contained as its primary enhancement the concept of "nested arrays", where an array can contain other arrays, and new language features which facilitated integrating nested arrays into program workflow. Ken Iverson, no longer in control of the development of the APL language, left IBM and joined I. P. Sharp Associates, where one of his major contributions was directing the evolution of Sharp APL to be more in accord with his vision.
As other vendors developed APL interpreters for new hardware, notably Unix-based microcomputers, APL2 was almost always the standard chosen for new APL interpreter developments. Even today, most APL vendors or their users cite APL2 compatibility as a selling point for those products.
"APL2" for IBM mainframe computers is still available. IBM cites its use for problem solving, system design, prototyping, engineering and scientific computations, expert systems, teaching mathematics and other subjects, visualization, and database access. APL2 was first available for CMS and TSO in 1984; the APL2 Workstation edition (Windows, OS/2, AIX, Linux, and Solaris) followed much later, in the early 1990s.
Various implementations of APL by APLX, Dyalog, et al., include extensions for object-oriented programming, support for .NET Framework, XML-array conversion primitives, graphing, operating system interfaces, and lambda calculus expressions.
APL has formed the basis of, or influenced, the following languages:
APL has been both criticized and praised for its choice of a unique, non-standard character set. Some who learn it become ardent adherents, suggesting that there is some weight behind Iverson's idea that the notation used does make a difference. In the 1960s and 1970s, few terminal devices and even display monitors could reproduce the APL character set. The most popular ones employed the IBM Selectric print mechanism used with a special APL type element. One of the early APL line terminals (line-mode operation only, "not" full screen) was the Texas Instruments TI Model 745 (circa 1977) with the full APL character set; it featured half- and full-duplex telecommunications modes for interacting with an APL time-sharing service or with a remote mainframe to run a remote job entry (RJE) batch job.
Over time, with the universal use of high-quality graphic displays, printing devices and Unicode support, the APL character font problem has largely been eliminated. However, entering APL characters requires the use of input method editors, keyboard mappings, virtual/on-screen APL symbol sets, or easy-reference printed keyboard cards which can frustrate beginners accustomed to other programming languages. With beginners who have no prior experience with other programming languages, a study involving high school students found that typing and using APL characters did not hinder the students in any measurable way.
In defense of APL, its advocates note that it requires less typing and that keyboard mappings are memorized over time. Also, special APL keyboards are manufactured and in use today, as are freely available downloadable fonts for operating systems such as Microsoft Windows. The reported productivity gains assume that one will spend enough time working in APL to make it worthwhile to memorize the symbols, their semantics, and keyboard mappings, not to mention a substantial number of idioms for common tasks.
Unlike traditionally structured programming languages, APL code is typically structured as chains of monadic or dyadic functions, and operators acting on arrays. APL has many nonstandard "primitives" (functions and operators) that are indicated by a single symbol or a combination of a few symbols. All primitives are defined to have the same precedence, and always associate to the right. Thus, APL is "read" or best understood from right-to-left.
Early APL implementations (circa 1970 or so) had no programming loop-flow control structures, such as "do" or "while" loops and "if-then-else" constructs. Instead, they used array operations, and use of structured programming constructs was often not necessary, since an operation could be performed on a full array in one statement. For example, the "iota" function (ι) can replace for-loop iteration: ιN, when applied to a scalar positive integer, yields a one-dimensional array (vector) 1 2 3 ... N. More recent implementations of APL generally include comprehensive control structures, so that data structure and program control flow can be clearly and cleanly separated.
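The loop-free style described above can be sketched in plain Python (an illustrative analogue written for this article, not APL itself):

```python
# APL's iota: ιN yields the vector 1 2 3 ... N (with index origin 1).
# A rough stdlib-Python analogue, with no explicit loop in user code:
N = 5
v = list(range(1, N + 1))   # plays the role of ιN → 1 2 3 4 5

# A whole-array operation then replaces the body of a for-loop,
# e.g. APL's +/ιN (sum of the first N integers):
total = sum(v)              # 15
```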
The APL environment is called a "workspace". In a workspace the user can define programs and data, i.e., the data values exist also outside the programs, and the user can also manipulate the data without having to define a program. In the examples below, the APL interpreter first types six spaces before awaiting the user's input. Its own output starts in column one.
The user can save the workspace with all values, programs, and execution status.
APL uses a set of non-ASCII symbols, which are an extension of traditional arithmetic and algebraic notation. Having single character names for single instruction, multiple data (SIMD) vector functions is one way that APL enables compact formulation of algorithms for data transformation such as computing Conway's Game of Life in one line of code. In nearly all versions of APL, it is theoretically possible to express any computable function in one expression, that is, in one line of code.
Because of the unusual character set, many programmers use special keyboards with APL keytops to write APL code. Although there are various ways to write APL code using only ASCII characters, in practice it is almost never done. (This may be thought to support Iverson's thesis about notation as a tool of thought.) Most if not all modern implementations use standard keyboard layouts, with special mappings or input method editors to access non-ASCII characters. Historically, the APL font has been distinctive, with uppercase italic alphabetic characters and upright numerals and symbols. Most vendors continue to display the APL character set in a custom font.
Advocates of APL claim that the examples of so-called "write-only code" (badly written and almost incomprehensible code) are almost invariably examples of poor programming practice or novice mistakes, which can occur in any language. Advocates also claim that they are far more productive with APL than with more conventional computer languages, and that working software can be implemented in far less time and with far fewer programmers than using other technology.
They also may claim that because it is compact and terse, APL lends itself well to larger-scale software development and complexity, because the number of lines of code can be reduced greatly. Many APL advocates and practitioners also view standard programming languages such as COBOL and Java as being comparatively tedious. APL is often found where time-to-market is important, such as with trading systems.
APL makes a clear distinction between "functions" and "operators". Functions take arrays (variables or constants or expressions) as arguments, and return arrays as results. Operators (similar to higher-order functions) take functions or arrays as arguments, and derive related functions. For example, the "sum" function is derived by applying the "reduction" operator to the "addition" function. Applying the same reduction operator to the "maximum" function (which returns the larger of two numbers) derives a function which returns the largest of a group (vector) of numbers. In the J language, Iverson substituted the terms "verb" for "function" and "adverb" or "conjunction" for "operator".
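As a rough illustration of how an operator derives new functions from a dyadic function, APL's reduction can be mimicked in Python with a higher-order function (the names reduction, sum_ and max_ are invented for this sketch):

```python
from functools import reduce
from operator import add

def reduction(f):
    """Mimic APL's / operator: derive a vector function
    from a dyadic (two-argument) function f."""
    return lambda xs: reduce(f, xs)

sum_ = reduction(add)                              # analogue of +/
max_ = reduction(lambda a, b: a if a > b else b)   # analogue of ⌈/

sum_([1, 2, 3, 4])     # 10
max_([3, 1, 4, 1, 5])  # 5
```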
APL also identifies those features built into the language, and represented by a symbol or a fixed combination of symbols, as "primitives". Most primitives are either functions or operators. Coding APL is largely a process of writing non-primitive functions and (in some versions of APL) operators. However, a few primitives are considered to be neither functions nor operators, most notably assignment.
Some words used in APL literature have meanings that differ from those in both mathematics and the generality of computer science.
APL has explicit representations of functions, operators, and syntax, thus providing a basis for the clear and explicit statement of extended facilities in the language, and tools to experiment on them.
This displays "Hello, world":
'Hello, world'
A design theme in APL is to define default actions in some cases that would produce syntax errors in most other programming languages.
The 'Hello, world' string constant above displays, because display is the default action on any expression for which no action is specified explicitly (e.g. assignment, function parameter).
Another example of this theme is that exponentiation in APL is written as "2*3", which indicates raising 2 to the power 3 (this would be written as "2^3" in some other languages, and "2**3" in FORTRAN and Python). Many languages use * to signify multiplication, as in 2*3, but APL uses 2×3 for that. However, if no base is specified (as with the statement "*3" in APL, or "^3" in other languages), most other programming languages would flag a syntax error. APL, however, assumes the missing base to be the natural logarithm constant e (2.71828...), and so interprets "*3" as e raised to the power 3.
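The same behaviour can be imitated in Python, where exponentiation and the constant e are spelled differently (a sketch, not APL):

```python
import math

# APL writes exponentiation with *, so 2*3 means 2 to the power 3;
# monadic * supplies the missing base e, so *3 means e to the power 3.
p = 2 ** 3         # Python analogue of APL's 2*3, i.e. 8
q = math.exp(3)    # Python analogue of APL's *3, i.e. e cubed
```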
Suppose that X is an array of numbers. Then (+/X)÷⍴X gives its average. Reading "right-to-left", ⍴X gives the number of elements in X, and since ÷ is a dyadic operator, the term to its left is required as well. It is in parentheses since otherwise X would be taken (so that the summation would be of X÷⍴X, each element of X divided by the number of elements in X); +/X adds all the elements of X. Building on this, ((+/((X - (+/X)÷⍴X)*2))÷⍴X)*0.5 calculates the standard deviation. Further, since assignment is an operator, it can appear within an expression, so
SD←((+/((X - AV←(T←+/X)÷⍴X)*2))÷⍴X)*0.5
would place suitable values into T, AV and SD. Naturally, one would make this expression into a function for repeated use rather than retyping it each time.
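A line-by-line Python transcription of the average and standard-deviation expressions may make the right-to-left reading concrete (sample data invented for the sketch; the SD here is the population form, dividing by the number of elements):

```python
X = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]   # sample data

T = sum(X)                  # +/X      : sum of the elements
AV = T / len(X)             # (+/X)÷⍴X : the average
# ((+/((X-AV)*2))÷⍴X)*0.5  : population standard deviation
SD = (sum((x - AV) ** 2 for x in X) / len(X)) ** 0.5
```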
This following immediate-mode expression generates a typical set of "Pick 6" lottery numbers: six pseudo-random integers ranging from 1 to 40, "guaranteed non-repeating", and displays them sorted in ascending order:
x[⍋x←6?40]
The above does a lot, concisely, although it may seem complex to a new APLer. It combines the following APL "functions" (also called "primitives" and "glyphs"): "deal" (dyadic ?), assignment (←), "grade up" (⍋), and bracket indexing ([]).
Since there is no function to the left of the left-most x to tell APL what to do with the result, it simply outputs it to the display (on a single line, separated by spaces) without needing any explicit instruction to do that.
The dyadic ? function ("deal") also has a monadic equivalent called "roll", which simply returns one random integer between 1 and its sole operand (to the right of it), inclusive. Thus, a role-playing game program might use the expression ?20 to roll a twenty-sided die.
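Python's standard library offers close analogues of both forms of ? (a sketch for comparison, not a claim about any APL implementation):

```python
import random

# Dyadic ? ("deal"): 6?40 deals six distinct pseudo-random integers
# from 1..40; grade up (⍋) then sorts them, as in x[⍋x←6?40].
x = sorted(random.sample(range(1, 41), 6))

# Monadic ? ("roll"): ?20 yields one integer in 1..20, e.g. a d20 roll.
roll = random.randint(1, 20)
```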
The following expression finds all prime numbers from 1 to R. In both time and space, the calculation complexity is O(R²) (in Big O notation).
(~R∊R∘.×R)/R←1↓ιR
Executed from right to left, this means:
(Note: this assumes the APL origin is 1, i.e., indices start with 1. APL can be set to use 0 as the origin, so that ι6 is 0 1 2 3 4 5, which is convenient for some calculations.)
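A Python transcription of the prime-finding expression, evaluated for R = 20, walks through the same right-to-left steps (illustrative only):

```python
R_MAX = 20
R = list(range(2, R_MAX + 1))                 # R←1↓ιR : drop 1 from ιR
products = {a * b for a in R for b in R}      # R∘.×R  : outer product
primes = [n for n in R if n not in products]  # (~R∊R∘.×R)/R : compress
# primes is now [2, 3, 5, 7, 11, 13, 17, 19]
```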
The following expression sorts a word list stored in matrix X according to word length:
X[⍋X+.≠' ';]
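The row-length computation X+.≠' ' and the grade-up reordering can be sketched in Python over a list of blank-padded strings (data invented for the sketch):

```python
X = ["lemon ", "fig   ", "apple ", "kiwi  "]             # padded "rows"

lengths = [sum(c != ' ' for c in row) for row in X]      # X+.≠' '
order = sorted(range(len(X)), key=lambda i: lengths[i])  # ⍋ (grade up)
result = [X[i] for i in order]                           # X[order;]
# result: ["fig   ", "kiwi  ", "lemon ", "apple "]
```

Like APL's grade up, this yields a permutation of row indices rather than sorting the data directly, so rows of equal length keep their original order.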
The following function "life", written in Dyalog APL, takes a Boolean matrix and calculates the new generation according to Conway's Game of Life:
life←{↑1 ⍵∨.∧3 4=+/,¯1 0 1∘.⊖¯1 0 1∘.⌽⊂⍵}
It demonstrates the power of APL to implement a complex algorithm in very little code, but it is also very hard to follow unless one has advanced knowledge of APL.
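The rule can be restated in plain Python as a hedged analogue (not a transcription of the APL): a cell is alive in the next generation exactly when its 3×3 neighbourhood, counting the cell itself, holds 3 live cells, or 4 with the cell itself alive; edges wrap around, matching the behaviour of APL's rotate primitives.

```python
def life(grid):
    """One Game of Life step on a toroidal 0/1 matrix."""
    rows, cols = len(grid), len(grid[0])

    def nbhd(r, c):
        # Live cells in the 3x3 neighbourhood centred on (r, c),
        # including the cell itself; indices wrap (torus).
        return sum(grid[(r + dr) % rows][(c + dc) % cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1))

    return [[1 if nbhd(r, c) == 3 or (nbhd(r, c) == 4 and grid[r][c])
             else 0
             for c in range(cols)]
            for r in range(rows)]
```

The counting trick is the same one the APL exploits: "neighbourhood total is 3" covers both birth and survival-with-two-neighbours, while "total is 4 and the cell is alive" covers survival with three neighbours.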
In the following example, also Dyalog, the first line assigns some HTML code to a variable txt and then uses an APL expression to remove all the HTML tags, yielding the plain text:
This is emphasized text.
APL is used for many purposes, including financial and insurance applications, artificial intelligence, neural networks, and robotics. It has been argued that APL is a calculation tool and not a programming language; its symbolic nature and array capabilities have made it popular with domain experts and data scientists who do not have or require the skills of a computer programmer.
APL is well suited to image manipulation and computer animation, where graphic transformations can be encoded as matrix multiplications. One of the first commercial computer graphics houses, Digital Effects, produced an APL graphics product named "Visions", which was used to create television commercials and animation for the 1982 film "Tron". Latterly, the Stormwind boating simulator uses APL to implement its core logic, its interfacing to the rendering pipeline middleware and a major part of its physics engine.
Today, APL remains in use in a wide range of commercial and scientific applications, for example investment management, asset management, health care, and DNA profiling, as well as by hobbyists.
The first implementation of APL using recognizable APL symbols was APL\360 which ran on the IBM System/360, and was completed in November 1966 though at that time remained in use only within IBM. In 1973 its implementors, Larry Breed, Dick Lathwell and Roger Moore, were awarded the Grace Murray Hopper Award from the Association for Computing Machinery (ACM). It was given "for their work in the design and implementation of APL\360, setting new standards in simplicity, efficiency, reliability and response time for interactive systems."
In 1975, the IBM 5100 microcomputer offered APL\360 as one of two built-in ROM-based interpreted languages for the computer, complete with a keyboard and display that supported all the special symbols used in the language.
Significant developments to APL\360 included CMS/APL, which made use of the virtual storage capabilities of CMS and APLSV, which introduced shared variables, system variables and system functions. It was subsequently ported to the IBM System/370 and VSPC platforms until its final release in 1983, after which it was replaced by APL2.
In 1968, APL\1130 became the first publicly available APL system, created by IBM for the IBM 1130. It became the most popular IBM Type-III Library software that IBM released.
APL*Plus and Sharp APL are versions of APL\360 with added business-oriented extensions such as data formatting and facilities to store APL arrays in external files. They were jointly developed by two companies, employing various members of the original IBM APL\360 development team.
The two companies were I. P. Sharp Associates (IPSA), an APL\360 services company formed in 1964 by Ian Sharp, Roger Moore and others, and STSC, a time-sharing and consulting service company formed in 1969 by Lawrence Breed and others. Together the two developed APL*Plus, and thereafter they continued to work together while developing their APLs separately as APL*Plus and Sharp APL. STSC ported APL*Plus to many platforms, with versions being made for the VAX 11, PC and UNIX, whereas IPSA took a different approach to the arrival of the personal computer and made Sharp APL available on this platform using additional PC-XT/360 hardware. In 1993, Soliton Incorporated was formed to support Sharp APL, and it developed Sharp APL into SAX (Sharp APL for Unix). APL*Plus later continued as APL2000 APL+Win.
In 1985, Ian Sharp, and Dan Dyer of STSC, jointly received the Kenneth E. Iverson Award for Outstanding Contribution to APL.
APL2 was a significant re-implementation of APL by IBM which was developed from 1971 and first released in 1984. It provides many additions to the language, of which the most notable is nested (non-rectangular) array support. It is available for mainframe computers running z/OS or z/VM and for workstations running AIX, Linux, Sun Solaris, and Microsoft Windows.
The entire APL2 Products and Services Team was awarded the Iverson Award in 2007.
Dyalog APL was first released by the British company Dyalog Ltd. in 1983 and is available for AIX, Linux (including on the Raspberry Pi), macOS and Microsoft Windows platforms. It is based on APL2, with extensions to support object-oriented programming and functional programming. Licences are free for personal and non-commercial use.
In 1995, two of the development team, John Scholes and Peter Donnelly, were awarded the Iverson Award for their work on the interpreter. Gitte Christensen and Morten Kromberg were joint recipients of the Iverson Award in 2016.
NARS2000 is an open-source APL interpreter written by Bob Smith, a prominent APL developer and implementor from STSC in the 1970s and 1980s. NARS2000 contains advanced features and new datatypes and runs natively on Microsoft Windows, and other platforms under Wine.
APLX is a cross-platform dialect of APL, based on APL2 and with several extensions, which was first released by British company MicroAPL in 2002. Although no longer in development or on commercial sale it is now available free of charge from Dyalog.
GNU APL is a free implementation of Extended APL as specified in ISO/IEC 13751:2001 and is thus an implementation of APL2. It runs on GNU/Linux and on Windows using Cygwin, and uses Unicode internally. It was written by Jürgen Sauermann.
Richard Stallman, founder of the GNU Project, was an early adopter of APL, using it to write a text editor as a high school student in the summer of 1969.
APL is traditionally an interpreted language, having language characteristics such as weak variable typing not well suited to compilation. However, with arrays as its core data structure it provides opportunities for performance gains through parallelism, parallel computing, massively parallel applications, and very-large-scale integration (VLSI), and from the outset APL has been regarded as a high-performance language - for example, it was noted for the speed with which it could perform complicated matrix operations "because it operates on arrays and performs operations like matrix inversion internally".
Nevertheless, APL is rarely purely interpreted and compilation or partial compilation techniques that are, or have been, used include the following:
Most APL interpreters support idiom recognition and evaluate common idioms as single operations. For example, by evaluating the idiom codice_48 as a single operation (where codice_49 is a Boolean vector and codice_50 is an array), the creation of two intermediate arrays is avoided.
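As a loose analogy in Python (not APL, and not any particular interpreter's idiom table), idiom recognition replaces a literal, multi-pass evaluation that materializes intermediate arrays with a fused single pass:

```python
def compress_literal(mask, values):
    # Literal left-to-right evaluation: each step materializes a result.
    flags = [bool(m) for m in mask]      # intermediate array 1
    pairs = list(zip(flags, values))     # intermediate array 2
    return [v for keep, v in pairs if keep]

def compress_fused(mask, values):
    # "Idiom-recognized" form: a single pass, no intermediate arrays.
    return [v for m, v in zip(mask, values) if m]

print(compress_fused([1, 0, 1, 1], [10, 20, 30, 40]))  # [10, 30, 40]
```

Both forms compute the same compression; the fused form is what the interpreter substitutes when it recognizes the idiom.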
Weak typing in APL means that a name may reference an array (of any datatype), a function or an operator. In general, the interpreter cannot know in advance which form it will be and must therefore perform analysis, syntax checking etc. at run-time. However, in certain circumstances, it is possible to deduce in advance what type a name is expected to reference and then generate bytecode which can be executed with reduced run-time overhead. This bytecode can also be optimised using compilation techniques such as constant folding or common subexpression elimination. The interpreter will execute the bytecode when present and when any assumptions which have been made are met. Dyalog APL includes support for optimised bytecode.
Compilation of APL has been the subject of research and experiment since the language first became available; the first compiler is considered to be the Burroughs APL-700, which was released around 1971. In order to be able to compile APL, language limitations have to be imposed. APEX is a research APL compiler which was written by Robert Bernecky and is available under the GNU General Public License.
The STSC APL Compiler is a hybrid of a bytecode optimiser and a compiler - it enables compilation of functions to machine code provided that its sub-functions and globals are declared, but the interpreter is still used as a runtime library and to execute functions which do not meet the compilation requirements.
APL has been standardized by the American National Standards Institute (ANSI) working group X3J10 and International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC), ISO/IEC Joint Technical Committee 1 Subcommittee 22 Working Group 3. The Core APL language is specified in ISO 8485:1989, and the Extended APL language is specified in ISO/IEC 13751:2001. | https://en.wikipedia.org/wiki?curid=1451 |
ALGOL
ALGOL (short for "Algorithmic Language") is a family of imperative computer programming languages originally developed in 1958. ALGOL heavily influenced many other languages and was, for more than thirty years, the standard method for algorithm description used by the Association for Computing Machinery (ACM) in textbooks and academic sources, until object-oriented languages emerged.
In the sense that the syntax of most modern languages is "Algol-like", it was arguably the most influential of the four high-level programming languages with which it was roughly contemporary: FORTRAN, Lisp, and COBOL. It was designed to avoid some of the perceived problems with FORTRAN and eventually gave rise to many other programming languages, including PL/I, Simula, BCPL, B, Pascal, and C.
ALGOL introduced code blocks and the codice_1...codice_2 pairs for delimiting them. It was also the first language implementing nested function definitions with lexical scope. Moreover, it was the first programming language which gave detailed attention to formal language definition and through the "Algol 60 Report" introduced Backus–Naur form, a principal formal grammar notation for language design.
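The lexically scoped nested functions that ALGOL pioneered are now commonplace; a minimal illustration in Python (not ALGOL):

```python
def outer():
    x = 10
    def inner():        # nested function definition
        return x + 1    # x is resolved lexically, in outer(), not in the caller
    return inner()

print(outer())  # 11
```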
There were three major specifications, named after the years they were first published:
ALGOL 68 is substantially different from ALGOL 60 and was not well received, so that in general "Algol" means ALGOL 60 and dialects thereof.
The International Algebraic Language (IAL), renamed ALGOL 58, was highly influential and generally considered the ancestor of most of the modern programming languages (the so-called Algol-like languages). Further, "ALGOL object code" was a simple, compact, and stack-based instruction set architecture commonly used in teaching compiler construction and other high order languages, of which Algol is generally considered the first.
ALGOL was developed jointly by a committee of European and American computer scientists in a meeting in 1958 at the Swiss Federal Institute of Technology in Zurich (ETH Zurich; cf. ALGOL 58). It specified three different syntaxes: a reference syntax, a publication syntax, and an implementation syntax. The different syntaxes permitted it to use different keyword names and conventions for decimal points (commas vs periods) for different languages.
ALGOL was used mostly by research computer scientists in the United States and in Europe. Its use in commercial applications was hindered by the absence of standard input/output facilities in its description and the lack of interest in the language by large computer vendors other than Burroughs Corporation. ALGOL 60 did however become the standard for the publication of algorithms and had a profound effect on future language development.
John Backus developed the "Backus normal form" method of describing programming languages specifically for ALGOL 58. It was revised and expanded by Peter Naur for ALGOL 60, and at Donald Knuth's suggestion renamed Backus–Naur form.
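For illustration, two productions in the style of the "Algol 60 Report"'s Backus–Naur form (an abridged sketch, not a quotation of the full grammar):

```text
<digit>            ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
<unsigned integer> ::= <digit> | <unsigned integer> <digit>
```

The recursive second production is how BNF expresses repetition: an unsigned integer is a digit, or a shorter unsigned integer followed by one more digit.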
Peter Naur: "As editor of the ALGOL Bulletin I was drawn into the international discussions of the language and was selected to be member of the European language design group in November 1959. In this capacity I was the editor of the ALGOL 60 report, produced as the result of the ALGOL 60 meeting in Paris in January 1960."
The following people attended the meeting in Paris (from 1 to 16 January):
Alan Perlis gave a vivid description of the meeting: "The meetings were exhausting, interminable, and exhilarating. One became aggravated when one's good ideas were discarded along with the bad ones of others. Nevertheless, diligence persisted during the entire period. The chemistry of the 13 was excellent."
ALGOL 60 inspired many languages that followed it. Tony Hoare remarked: "Here is a language so far ahead of its time that it was not only an improvement on its predecessors but also on nearly all its successors." The Scheme programming language, a variant of Lisp that adopted the block structure and lexical scope of ALGOL, also adopted the wording "Revised Report on the Algorithmic Language Scheme" for its standards documents in homage to ALGOL.
As Peter Landin noted, the language Algol was the first language to combine seamlessly imperative effects with the (call-by-name) lambda calculus. Perhaps the most elegant formulation of the language is due to John C. Reynolds, and it best exhibits its syntactic and semantic purity. Reynolds's idealized Algol also made a convincing methodological argument regarding the suitability of local effects in the context of call-by-name languages, to be contrasted with the global effects used by call-by-value languages such as ML. The conceptual integrity of the language made it one of the main objects of semantic research, along with Programming Computable Functions (PCF) and ML.
To date there have been at least 70 augmentations, extensions, derivations and sublanguages of Algol 60.
The Burroughs dialects included special Bootstrapping dialects such as ESPOL and NEWP. The latter is still used for Unisys MCP system software.
ALGOL 60 as officially defined had no I/O facilities; implementations defined their own in ways that were rarely compatible with each other. In contrast, ALGOL 68 offered an extensive library of "transput" (input/output) facilities.
ALGOL 60 allowed for two evaluation strategies for parameter passing: the common call-by-value, and call-by-name. Call-by-name has certain effects in contrast to call-by-reference. For example, without specifying the parameters as "value" or "reference", it is impossible to develop a procedure that will swap the values of two parameters if the actual parameters that are passed in are an integer variable and an array that is indexed by that same integer variable. Consider a call swap(i, A[i]) with i = 1 and A[1] = 2: because a by-name argument expression is re-evaluated at every use inside the procedure, once swap assigns to i, the name A[i] denotes a different array element than it did before, so the two values cannot be reliably exchanged. A similar situation occurs when a random function is passed as an actual argument.
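A sketch of the failure in Python (not ALGOL), simulating by-name parameters with explicit thunks that re-evaluate the argument expression on each use; all names here are illustrative:

```python
# Simulate ALGOL call-by-name: each parameter is a pair of thunks
# (getter, setter) that re-evaluate the argument expression on access.
def swap(get_a, set_a, get_b, set_b):
    temp = get_a()
    set_a(get_b())
    set_b(temp)          # by now i has changed, so "A[i]" means A[2]

i = 1
A = {1: 2, 2: 99}

def get_i():  return i
def set_i(v):
    global i
    i = v
def get_Ai(): return A[i]     # re-evaluated with the *current* i
def set_Ai(v): A[i] = v

swap(get_i, set_i, get_Ai, set_Ai)
print(i, A)   # 2 {1: 2, 2: 1} -- A[1] was never touched: the swap failed
```

A call-by-reference swap would have produced i = 2 and A[1] = 1; under call-by-name, the second assignment lands on A[2] instead.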
Call-by-name is known by many compiler designers for the interesting "thunks" that are used to implement it. Donald Knuth devised the "man or boy test" to separate compilers that correctly implemented "recursion and non-local references." This test contains an example of call-by-name.
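Knuth's test translates naturally into any language with closures; below is a Python transcription in which the by-name parameters become zero-argument functions (thunks). The published reference value is A(10) = −67:

```python
import sys
sys.setrecursionlimit(4000)  # the test recurses deeply even for k = 10

def A(k, x1, x2, x3, x4, x5):
    def B():
        nonlocal k       # B updates A's local k, as in the ALGOL original
        k -= 1
        return A(k, B, x1, x2, x3, x4)
    return x4() + x5() if k <= 0 else B()

print(A(10, lambda: 1, lambda: -1, lambda: -1, lambda: 1, lambda: 0))  # -67
```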
ALGOL 68 was defined using a two-level grammar formalism invented by Adriaan van Wijngaarden and which bears his name. Van Wijngaarden grammars use a context-free grammar to generate an infinite set of productions that will recognize a particular ALGOL 68 program; notably, they are able to express the kind of requirements that in many other programming language standards are labelled "semantics" and have to be expressed in ambiguity-prone natural language prose, and then implemented in compilers as "ad hoc" code attached to the formal language parser.
A well-known example from the "ALGOL 60 Report" is the procedure Absmax, whose heading reads:
procedure Absmax(a) Size:(n, m) Result:(y) Subscripts:(i, k);
Here is an example of how to produce a table using Elliott 803 ALGOL.
PUNCH(3) sends output to the teleprinter rather than the tape punch.
SAMELINE suppresses the carriage return + line feed normally printed between arguments.
ALIGNED(1,6) controls the format of the output with 1 digit before and 6 after the decimal point.
The following code samples are ALGOL 68 versions of the above ALGOL 60 code samples.
ALGOL 68 implementations used ALGOL 60's approaches to stropping. In ALGOL 68's case tokens with the bold typeface are reserved words, types (modes) or operators.
Note: lower (⌊) and upper (⌈) bounds of an array, and array slicing, are directly available to the programmer.
The variations and lack of portability of the programs from one implementation to another is easily demonstrated by the classic hello world program.
ALGOL 58 had no I/O facilities.
Since ALGOL 60 had no I/O facilities, there is no portable hello world program in ALGOL.
The next three examples are in Burroughs Extended Algol. The first two direct output at the interactive terminal they are run on. The first uses a character array, similar to C. The language allows the array identifier to be used as a pointer to the array, and hence in a REPLACE statement.
A simpler program using an inline format:
An even simpler program using the Display statement. Note that its output would end up at the system console ('SPO'):
An alternative example, using Elliott Algol I/O is as follows. Elliott Algol used different characters for "open-string-quote" and "close-string-quote":
Here is a version for the Elliott 803 Algol (A104). The standard Elliott 803 used 5 hole paper tape and thus only had upper case. The code lacked any quote characters, so £ (UK pound sign) was used for open quote and ? (question mark) for close quote. Special sequences were placed in double quotes (e.g. ££L?? produced a new line on the teleprinter).
The ICT 1900 series Algol I/O version allowed input from paper tape or punched card. Paper tape 'full' mode allowed lower case. Output was to a line printer. The open and close quote characters were represented using '(' and ')' and spaces by %.
ALGOL 68 code was published with reserved words typically in lowercase, but bolded or underlined.
In the language of the "Algol 68 Report" the input/output facilities were collectively called the "Transput".
The ALGOLs were conceived at a time when character sets were diverse and evolving rapidly; also, the ALGOLs were defined so that only "uppercase" letters were required.
1960: IFIP – The Algol 60 language and report included several mathematical symbols which are available on modern computers and operating systems, but, unfortunately, were unsupported on most computing systems at the time. For instance: ×, ÷, ≤, ≥, ≠, ¬, ∨, ∧, ⊂, ≡, ␣ and ⏨.
1961 September: ASCII – The ASCII character set, then in an early stage of development, had the \ (Back slash) character added to it in order to support ALGOL's boolean operators /\ and \/.
1962: ALCOR – This character set included the unusual "᛭" runic cross character for multiplication and the "⏨" Decimal Exponent Symbol for floating point notation.
1964: GOST – The 1964 Soviet standard GOST 10859 allowed the encoding of 4-bit, 5-bit, 6-bit and 7-bit characters in ALGOL.
1968: The "Algol 68 Report" – used extant ALGOL characters, and further adopted →, ↓, ↑, □, ⌊, ⌈, ⎩, ⎧, ○, ⊥, and ¢ characters which can be found on the IBM 2741 keyboard with "typeball" (or "golf ball") print heads inserted (such as the APL golf ball). These became available in the mid-1960s while ALGOL 68 was being drafted. The report was translated into Russian, German, French, and Bulgarian, and allowed programming in languages with larger character sets, e.g., Cyrillic alphabet of the Soviet BESM-4. All ALGOL's characters are also part of the Unicode standard and most of them are available in several popular fonts.
2009 October: Unicode – The codice_3 (Decimal Exponent Symbol) for floating point notation was added to Unicode 5.2 for backward compatibility with historic Buran programme ALGOL software. | https://en.wikipedia.org/wiki?curid=1453 |
AWK
AWK is a domain-specific language designed for text processing and typically used as a data extraction and reporting tool. It is a standard feature of most Unix-like operating systems.
The AWK language is a data-driven scripting language consisting of a set of actions to be taken against streams of textual data – either run directly on files or used as part of a pipeline – for purposes of extracting or transforming text, such as producing formatted reports. The language extensively uses the string datatype, associative arrays (that is, arrays indexed by key strings), and regular expressions. While AWK has a limited intended application domain and was especially designed to support one-liner programs, the language is Turing-complete, and even the early Bell Labs users of AWK often wrote well-structured large AWK programs.
AWK was created at Bell Labs in the 1970s, and its name is derived from the surnames of its authors: Alfred Aho, Peter Weinberger, and Brian Kernighan. The acronym is pronounced the same as the bird auk, which is on the cover of "The AWK Programming Language". When written in all lowercase letters, as codice_1, it refers to the Unix or Plan 9 program that runs scripts written in the AWK programming language.
AWK was initially developed in 1977 by Alfred Aho (author of egrep), Peter J. Weinberger (who worked on tiny relational databases), and Brian Kernighan; it takes its name from their respective initials. According to Kernighan, one of the goals of AWK was to have a tool that would easily manipulate both numbers and strings.
AWK was also inspired by Marc Rochkind's programming language that was used to search for patterns in input data, and was implemented using yacc.
As one of the early tools to appear in Version 7 Unix, AWK added computational features to a Unix pipeline besides the Bourne shell, the only scripting language available in a standard Unix environment. It is one of the mandatory utilities of the Single UNIX Specification, and is required by the Linux Standard Base specification.
AWK was significantly revised and expanded in 1985–88, resulting in the GNU AWK implementation written by Paul Rubin, Jay Fenlason, and Richard Stallman, released in 1988. GNU AWK may be the most widely deployed version because it is included with GNU-based Linux packages. GNU AWK has been maintained solely by Arnold Robbins since 1994. Brian Kernighan's nawk (New AWK) source was first released in 1993 without publicity, and has been publicly available since the late 1990s; many BSD systems use it to avoid the GPL license.
AWK was preceded by sed (1974). Both were designed for text processing. They share the line-oriented, data-driven paradigm, and are particularly suited to writing one-liner programs, due to the implicit main loop and current line variables. The power and terseness of early AWK programs – notably the powerful regular expression handling and conciseness due to implicit variables, which facilitate one-liners – together with the limitations of AWK at the time, were important inspirations for the Perl language (1987). In the 1990s, Perl became very popular, competing with AWK in the niche of Unix text-processing languages.
An AWK program is a series of pattern action pairs, written as:

condition { action }
condition { action }
...
where "condition" is typically an expression and "action" is a series of commands. The input is split into records, where by default records are separated by newline characters so that the input is split into lines. The program tests each record against each of the conditions in turn, and executes the "action" for each expression that is true. Either the condition or the action may be omitted. The condition defaults to matching every record. The default action is to print the record. This is the same pattern-action structure as sed.
In addition to a simple AWK expression, such as codice_2 or codice_3, the condition can be codice_4 or codice_5 causing the action to be executed before or after all records have been read, or "pattern1, pattern2" which matches the range of records starting with a record that matches "pattern1" up to and including the record that matches "pattern2" before again trying to match against "pattern1" on future lines.
In addition to normal arithmetic and logical operators, AWK expressions include the tilde operator, codice_6, which matches a regular expression against a string. As handy syntactic sugar, "/regexp/" without using the tilde operator matches against the current record; this syntax derives from sed, which in turn inherited it from the ed editor, where codice_7 is used for searching. This syntax of using slashes as delimiters for regular expressions was subsequently adopted by Perl and ECMAScript, and is now common. The tilde operator was also adopted by Perl.
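For instance, a bare regular expression pattern selects matching records (the input text here is illustrative):

```shell
printf 'apple\nbanana\ncherry\n' | awk '/an/ { print NR ": " $0 }'
# prints: 2: banana
```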
AWK commands are the statements that are substituted for "action" in the examples above. AWK commands can include function calls, variable assignments, calculations, or any combination thereof. AWK contains built-in support for many functions; many more are provided by the various flavors of AWK. Also, some flavors support the inclusion of dynamically linked libraries, which can also provide more functions.
The "print" command is used to output text. The output text is always terminated with a predefined string called the output record separator (ORS) whose default value is a newline. The simplest form of this command is:

print
Although these fields ("$X") may bear resemblance to variables (the $ symbol indicates variables in Perl), they actually refer to the fields of the current record. A special case, "$0", refers to the entire record. In fact, the commands "codice_8" and "codice_12" are identical in functionality.
The "print" command can also display the results of calculations and/or function calls:
/regex_pattern/ {
Output may be sent to a file:
/regex_pattern/ {
or through a pipe:
/regex_pattern/ {
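The bodies of the three snippets above are truncated in this excerpt; a runnable sketch of the same three forms (the patterns, file names, and piped command are illustrative):

```shell
printf 'alpha\nbeta\ngamma\n' | awk '
/a/  { print "matched: " $0 }                       # print to standard output
/^b/ { print $0 > "/tmp/awk_file_demo.txt" }        # redirect output to a file
/^g/ { print $0 | "cat > /tmp/awk_pipe_demo.txt" }  # pipe output through a command
'
```

On exit, awk closes open files and pipes, so the redirected and piped output is complete once the program finishes.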
Awk's built-in variables include the field variables: $1, $2, $3, and so on ($0 represents the entire record). They hold the text or values in the individual text-fields in a record.
Other variables include:
Variable names can use any of the characters [A-Za-z0-9_], with the exception of language keywords. The operators "+ - * /" represent addition, subtraction, multiplication, and division, respectively. For string concatenation, simply place two variables (or string constants) next to each other. It is optional to use a space in between if string constants are involved, but two variable names placed adjacent to each other require a space in between. Double quotes delimit string constants. Statements need not end with semicolons. Finally, comments can be added to programs by using "#" as the first character on a line.
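A short demonstration of adjacency as concatenation:

```shell
awk 'BEGIN { a = "foo"; b = "bar"; print a b; print a " & " b }'
# prints:
# foobar
# foo & bar
```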
In a format similar to C, function definitions consist of the keyword codice_22, the function name, argument names and the function body. Here is an example of a function.
function add_three(number) {
    return number + 3
}
This statement can be invoked as follows:
Functions can have variables that are in the local scope. The names of these are added to the end of the argument list, though values for these should be omitted when calling the function. It is convention to add some whitespace in the argument list before the local variables, to indicate where the parameters end and the local variables begin.
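Putting the pieces together, the add_three function above can be defined and invoked in one self-contained program (the argument 36 is illustrative):

```shell
awk 'function add_three(number) { return number + 3 }
     BEGIN { print add_three(36) }'
# prints: 39
```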
Here is the customary "Hello, world" program written in AWK:
Note that an explicit codice_23 statement is not needed here; since the only pattern is codice_4, no command-line arguments are processed.
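The program itself is missing from this excerpt; the customary form is a single BEGIN rule:

```shell
awk 'BEGIN { print "Hello, world!" }'
# prints: Hello, world!
```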
Print all lines longer than 80 characters. Note that the default action is to print the current line.
length($0) > 80
Count words in the input and print the number of lines, words, and characters (like wc):
As there is no pattern for the first line of the program, every line of input matches by default, so the increment actions are executed for every line. Note that codice_25 is shorthand for codice_26.
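The program described above is not shown in this excerpt; a reconstruction consistent with the description (the sample input is illustrative):

```shell
printf 'hello world\nfoo bar baz\n' |
awk '{ w += NF; c += length($0) + 1 }
     END { print NR, w, c }'
# prints: 2 5 24   (2 lines, 5 words, 24 characters counting newlines)
```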
"s" is incremented by the numeric value of "$NF", which is the last word on the line as defined by AWK's field separator (by default, white-space). "NF" is the number of fields in the current line, e.g. 4. Since "$4" is the value of the fourth field, "$NF" is the value of the last field in the line regardless of how many fields this line has, or whether it has more or fewer fields than surrounding lines. $ is actually a unary operator with the highest operator precedence. (If the line has no fields, then "NF" is 0, "$0" is the whole line, which in this case is empty apart from possible white-space, and so has the numeric value 0.)
At the end of the input the "END" pattern matches, so "s" is printed. However, since there may have been no lines of input at all, in which case no value has ever been assigned to "s", it will by default be an empty string. Adding zero to a variable is an AWK idiom for coercing it from a string to a numeric value. (Conversely, concatenating an empty string, e.g. s "", coerces a number to a string. Note that there is no operator to concatenate strings; they are simply placed adjacently.) With the coercion the program prints "0" on an empty input; without it, an empty line is printed.
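The one-liner discussed in the last two paragraphs is likewise missing from this excerpt; a reconstruction matching the description (sum the last field of every line, coercing with + 0):

```shell
printf '1 2 3\n4 5 6\n' | awk '{ s += $NF } END { print s + 0 }'
# prints: 9   (3 + 6); on empty input the coercion makes it print 0
```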
The action statement prints each line numbered. The printf function emulates the standard C printf and works similarly to the print command described above. The pattern to match, however, works as follows: "NR" is the number of records, typically lines of input, AWK has so far read, i.e. the current line number, starting at 1 for the first line of input. "%" is the modulo operator. "NR % 4 == 1" is true for the 1st, 5th, 9th, etc., lines of input. Likewise, "NR % 4 == 3" is true for the 3rd, 7th, 11th, etc., lines of input. The range pattern is false until the first part matches, on line 1, and then remains true up to and including when the second part matches, on line 3. It then stays false until the first part matches again on line 5.
Thus, the program prints lines 1,2,3, skips line 4, and then 5,6,7, and so on. For each line, it prints the line number (on a 6 character-wide field) and then the line contents. For example, when executed on this input:
The previous program prints:
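The sample input and output are missing from this excerpt; a runnable sketch of the program as described (eight lettered input lines are illustrative):

```shell
printf '%s\n' a b c d e f g h |
awk 'NR % 4 == 1, NR % 4 == 3 { printf "%6d  %s\n", NR, $0 }'
# prints lines 1-3 and 5-7, each prefixed with its line number
# in a 6-character field; lines 4 and 8 are skipped
```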
As a special case, when the first part of a range pattern is constantly true, e.g. "1", the range will start at the beginning of the input. Similarly, if the second part is constantly false, e.g. "0", the range will continue until the end of input. For example,
prints lines of input from the first line matching the regular expression "^--cut here--$", that is, a line containing only the phrase "--cut here--", to the end.
Word frequency using associative arrays:
BEGIN {
END {
The BEGIN block sets the field separator to any sequence of non-alphabetic characters. Note that separators can be regular expressions. After that, we get to a bare action, which performs the action on every input line. In this case, for every field on the line, we add one to the number of times that word, first converted to lowercase, appears. Finally, in the END block, we print the words with their frequencies. The line
creates a loop that goes through the array "words", setting "i" to each "subscript" of the array. This is different from most languages, where such a loop goes through each "value" in the array. The loop thus prints out each word followed by its frequency count. codice_27 was an addition to the One True awk (see below) made after the book was published.
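The BEGIN and END bodies above are truncated in this excerpt; a reconstruction consistent with the description, with the output piped through sort since for (i in words) visits subscripts in unspecified order (the input text is illustrative):

```shell
printf 'the cat and the hat\n' |
awk 'BEGIN { FS = "[^a-zA-Z]+" }
     { for (i = 1; i <= NF; i++) words[tolower($i)]++ }
     END { for (i in words) print i, words[i] }' | sort
# prints:
# and 1
# cat 1
# hat 1
# the 2
```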
This program can be represented in several ways. The first one uses the Bourne shell to make a shell script that does everything. It is the shortest of these methods:
pattern="$1"
shift
awk '/'"$pattern"'/ { print FILENAME ":" $0 }' "$@"
The codice_28 in the awk command is not protected by single quotes so that the shell does expand the variable, but it needs to be put in double quotes to properly handle patterns containing spaces. A pattern by itself in the usual way checks to see if the whole line (codice_29) matches. codice_16 contains the current filename. awk has no explicit concatenation operator; two adjacent strings are concatenated. codice_29 expands to the original unchanged input line.
There are alternate ways of writing this. This shell script accesses the environment directly from within awk:
export pattern="$1"
shift
awk '$0 ~ ENVIRON["pattern"] { print FILENAME ":" $0 }' "$@"
This is a shell script that uses codice_32, an array introduced in a newer version of the One True awk after the book was published. The subscript of codice_32 is the name of an environment variable; its result is the variable's value. This is like the getenv function in various standard libraries and POSIX. The shell script makes an environment variable codice_34 containing the first argument, then drops that argument and has awk look for the pattern in each file.
codice_6 checks to see if its left operand matches its right operand; codice_36 is its inverse. Note that a regular expression is just a string and can be stored in variables.
The next way uses command-line variable assignment, in which an argument to awk can be seen as an assignment to a variable:
pattern="$1"
shift
awk '$0 ~ pattern { print FILENAME ":" $0 }' "pattern=$pattern" "$@"
Or you can use the "-v var=value" command line option (e.g. "awk -v pattern="$pattern" ...").
Finally, this is written in pure awk, without help from a shell or without the need to know too much about the implementation of the awk script (as the variable assignment on command line one does), but is a bit lengthy:
BEGIN {
The codice_4 is necessary not only to extract the first argument, but also to prevent it from being interpreted as a filename after the codice_4 block ends. codice_39, the number of arguments, is always guaranteed to be ≥1, as codice_40 is the name of the command that executed the script, most often the string codice_41. Also note that codice_42 is the empty string, codice_43. codice_44 initiates a comment that expands to the end of the line.
Note the codice_45 block. awk only checks to see if it should read from standard input before it runs the command. This means that
only works because the fact that there are no filenames is only checked before codice_46 is run! If you explicitly set codice_39 to 1 so that there are no arguments, awk will simply quit because it feels there are no more input files. Therefore, you need to explicitly say to read from standard input with the special filename codice_48.
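The BEGIN block described above is truncated in this excerpt; a hedged reconstruction of the whole script, exercised against a throwaway file (all file names are illustrative):

```shell
cat > /tmp/awk_selfgrep.awk <<'EOF'
BEGIN {
    pattern = ARGV[1]
    for (i = 1; i < ARGC; i++)  # shift the arguments left so the pattern
        ARGV[i] = ARGV[i + 1]   # is not later treated as an input filename
    ARGC--
    if (ARGC == 1) {            # no files remain: read standard input
        ARGC = 2
        ARGV[1] = "-"
    }
}
$0 ~ pattern { print FILENAME ":" $0 }
EOF
printf 'foo\nbar\nfoobar\n' > /tmp/awk_selfgrep_in.txt
awk -f /tmp/awk_selfgrep.awk foo /tmp/awk_selfgrep_in.txt
# prints:
# /tmp/awk_selfgrep_in.txt:foo
# /tmp/awk_selfgrep_in.txt:foobar
```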
On Unix-like operating systems self-contained AWK scripts can be constructed using the shebang syntax.
For example, a script that prints the content of a given file may be built by creating a file named codice_49 with the following content:
#!/usr/bin/awk -f
{ print $0 }
It can be invoked with: codice_50
The codice_51 tells AWK that the argument that follows is the file to read the AWK program from, which is the same flag that is used in sed. Since they are often used for one-liners, both these programs default to executing a program given as a command-line argument, rather than a separate file.
AWK was originally written in 1977 and distributed with Version 7 Unix.
In 1985 its authors started expanding the language, most significantly by adding user-defined functions. The language is described in the book "The AWK Programming Language", published 1988, and its implementation was made available in releases of UNIX System V. To avoid confusion with the incompatible older version, this version was sometimes called "new awk" or "nawk". This implementation was released under a free software license in 1996 and is still maintained by Brian Kernighan (see external links below).
Old versions of Unix, such as UNIX/32V, included codice_52, which converted AWK to C. Kernighan wrote a program to turn awk into C++; its state is not known. | https://en.wikipedia.org/wiki?curid=1456 |
Asgard
Asgard (Old Norse: Ásgarðr; “Enclosure of the Aesir”) is a location associated with gods. It is depicted in a multitude of Old Norse sagas and mythological texts. Some researchers have suggested Asgard to be one of the Nine Worlds surrounding the tree Yggdrasil. In Norse mythology, Asgard is a fortified home to the Aesir tribe of gods, located in the sky. Asgard consists of smaller realms that do not have as many depictions in mythological poems and prose. Asgard is prophesied to be destroyed during Ragnarök and later restored after the world's renewal.
The word Asgard consists of the Old Norse words āss (god) and garðr (“enclosure”). The latter is crucial to understanding the cultural and religious underpinnings of Norse mythology. It originates from the Germanic terms innangard (“inside the fence”) and utangard (“beyond the fence”). Innangard represents the ordered state of existence, while utangard embodies chaos and disorder. The walls surrounding Asgard signify its orderly nature and Aesir gods’ organised way of living.
Historians refer to three principal sources that depict Asgard. They include the Poetic Edda, the Prose Edda, and Heimskringla, which consists of several sagas.
The Poetic Edda consists of several Old Norse poems of unknown authorship that date back to the 13th century. The majority of these poems come from the medieval text Codex Regius, also known as Konungsbók.
Völuspá, the first poem in the Poetic Edda, provides some of the most complete and accurate depictions of the 12 lesser realms of Asgard, which include Breidablik, Valhalla, and Thrudheim. It also describes the Yggdrasil, a mythical tree that connects all Nine Worlds with Asgard located beneath one of its three roots. Finally, Völuspá provides a vague description of the location of Iðavöllr, one of the most common meeting places of Aesir gods.
Grímnismál is one of the shorter poems in the Poetic Edda. It contains a brief depiction of Bifröst, one of the 12 realms of Asgard that connects it to Midgard.
The Prose Edda, also referred to as the Younger Edda, is often attributed to the 13th-century historian Snorri Sturluson. As one of the most detailed descriptions of Norse mythology, the Prose Edda provides a thorough history of Asgard and its inhabitants. It consists of four parts: Prologue, Gylfaginning, Skáldskaparmál, and Háttatal.
In the Prologue, Snorri Sturluson shares his interpretation of the Skaldic poems and legends. His analysis corresponds to the belief of some modern historians that the Aesir gods were, in fact, real clans that travelled from the East to northern territories. According to Snorri, Asgard represented the town of Troy before Greek warriors took it. After the defeat, the Trojans moved to northern Europe, where they became a dominant group due to their “advanced technologies and culture”. Eventually, other tribes began to perceive the Trojans and their leader Trór (Thor in Old Norse) as gods.
Gylfaginning, the second part of the Prose Edda, contains mythological depictions of world creation, in chronological order. In this section, Snorri establishes the fundamentals of Norse mythology, such as the creation and fortification of Asgard, and introduces the main Aesir gods such as Thor, Odin, and Baldur. Gylfaginning also describes Ragnarök, an event that would bring destruction to the Nine Worlds and cause their subsequent rebirth.
In Skáldskaparmál, Snorri shifts focus to language and the nature of poetry. Through a dialogue between Aegir, a god associated with the sea, and Bragi, the god of poetry, it illustrates how various aspects of poetry and nature are intertwined. This part of the Prose Edda also recounts the war between the Aesir and Vanir gods, including the fortification of Asgard.
Heimskringla is a collection of sagas written by Snorri Sturluson that contains accounts of the Swedish and Norwegian royal dynasties. The name of the collection comes from kringla heimsins (“the circle of the world”).
The first saga in the manuscript further develops Snorri’s historical interpretation of the Old Norse mythos. In the Ynglinga Saga, he rejects his earlier notion of Troy as the historical location of Asgard. Snorri then provides an overview of Norse kings and their dynasties based on earlier sagas and poems. In his texts, he gives short depictions of the Aesir gods, often drawing parallels between them and Norse kings.
While many sources mention Asgard as consisting of numerous distinct realms, only a handful of sagas provide their descriptions.
Ruled by Odin, Valhalla contains a golden hall where the souls of mighty warriors arrive after their deaths in battle. It also serves as a home to the Valkyries, who oversee the souls of the dead and guide them to Valhalla. As attested in the Poetic Edda, Odin amasses an army, the einherjar, for Ragnarök, when his warriors are expected to join him in battle. They train daily against each other to hone their combat skills. However, only half of those who have fallen in combat reach Valhalla. The others arrive at another realm, Fólkvangr, where the goddess Freyja resides.
Bifröst differs from other realms, as it connects Asgard, the world of gods, with Midgard, the world of people. In the Prose Edda, Snorri describes it as a rainbow bridge that starts in Himinbjörg. The Poetic Edda ultimately predicts its destruction in Ragnarök during the attack of the Muspelheim forces.
Fólkvangr is a rarely depicted realm of Asgard. Besides receiving half of those slain in battle, Fólkvangr’s principal inhabitants include Freyja and her two daughters, Gersemi and Hnoss. They reside in the main hall, Sessrúmnir, which is decorated with natural ornaments. Sagas in the Poetic Edda mention Fólkvangr’s rich flora and fauna, which corresponds with Freyja’s love for nature and wild creatures.
Located on the border of Asgard, Himinbjörg is home to the god Heimdallr, who watches over Midgard and humanity. The Poetic Edda depicts Heimdallr as “drinking fine mead” in Himinbjörg while protecting the rainbow bridge, Bifröst. When enemies from Muspelheim destroy Bifröst, Heimdallr will blow his horn Gjallarhorn to announce the beginning of Ragnarök.
According to Grímnismál, Bilskírnir is the largest building and one of the most significant realms of Asgard. It contains 540 rooms and serves as the residence of Thor, his wife Sif, and their many children. In the Prose Edda, Snorri predicts the partial destruction of Bilskírnir during the battle between Thor and the World Serpent Jörmungandr when Ragnarök comes.
Upon arrival in Asgard, Aesir gods make it their home, as attested by Snorri in the Prose Edda. After counselling with the head of Mimir, Odin assigns other gods to rule separate parts of the land and build palaces. However, their territories remain open to attacks from enemies, forcing Aesir to protect their lands.
One day, an unnamed giant, claiming to be a skillful smith, arrives at Asgard on his stallion, Svadilfari. He offers help in erecting a protective wall around Asgard in a mere three winters. In return for this favour, he asks for the sun, moon, and marriage with Freyja. Despite Freyja’s opposition, the gods agree to fulfill his request if he builds a wall in just one winter. As part of the deal, they guarantee the giant’s safety.
As time goes on, the gods grow desperate, as it becomes apparent that the giant will complete the wall on time. To their surprise, his stallion accounts for much of the progress, swiftly moving boulders and rocks. To save Freyja and keep the sun and moon, one of the gods, Loki, comes up with a plan. He changes his appearance to that of a mare and distracts Svadilfari to slow down construction. Without the help of his stallion, the giant cannot complete his task in time, and Thor breaks his skull with his hammer. Several months later, Loki gives birth to an eight-legged stallion, Sleipnir, who later becomes Odin’s steed. The Aesir gods later finish the wall and fully fortify Asgard for future battles.
Ragnarök consists of a series of foretold events that ultimately lead to the destruction and subsequent renewal of the world.
Ragnarök begins after the invasion of fire giants from Muspelheim, who destroy the Bifröst. This causes Heimdallr to blow the Gjallarhorn, announcing the upcoming doom of the gods. Odin swiftly consults the head of Mimir, who foretells the destruction of Asgard and Odin’s death.
The Aesir gods decide to march into battle, gathering their forces on the battlefield Vigrid (“Plain Where Battle Surges”). Their enemies, led by the fire giant Surt, march through Asgard, destroying many of the palaces and fortifications. Odin, Thor, Loki, Heimdallr, and other gods die in the battle. As the Vigrid grounds become soaked with blood, the world is submerged underwater, ending everything that ever existed.
As attested in the Völuspá, after the destruction of the old world, a new one emerges. Several gods survive and restore Asgard, bringing it to the highest-ever levels of prosperity.
Thor first appeared in the Marvel Universe in issue #83 of the comic series Journey into Mystery (August 1962). Following this release, he became one of the central figures in the comics, along with Loki and Odin. In the Marvel film franchise, Thor and Loki make their first appearance together in the 2011 film Thor. After that, Thor becomes a regular character in the Marvel Cinematic Universe and reappears in several films, including the Avengers series. Asgard becomes a central element in a later film, where it is destroyed in keeping with the Old Norse mythos. These and other Norse mythology elements also appear in video games, TV series, and books based on the Marvel Universe.
These depictions do not follow the Old Norse sagas and poems closely. However, philologists have noted increased interest in Norse mythology from the general public due to their popularity. | https://en.wikipedia.org/wiki?curid=1460 |
Apollo program
The Apollo program, also known as Project Apollo, was the third United States human spaceflight program carried out by the National Aeronautics and Space Administration (NASA), which succeeded in landing the first men on the Moon from 1969 to 1972. It was first conceived during Dwight D. Eisenhower's administration as a three-person spacecraft to follow the one-person Project Mercury, which put the first Americans in space. Apollo was later dedicated to President John F. Kennedy's national goal of "landing a man on the Moon by the end of this decade and returning him safely to the Earth" in an address to Congress on May 25, 1961. It was the third US human spaceflight program to fly, preceded by the two-person Project Gemini conceived in 1961 to extend spaceflight capability in support of Apollo.
Kennedy's goal was accomplished on the Apollo 11 mission when astronauts Neil Armstrong and Buzz Aldrin landed their Apollo Lunar Module (LM) on July 20, 1969, and walked on the lunar surface, while Michael Collins remained in lunar orbit in the command and service module (CSM), and all three landed safely on Earth on July 24. Five subsequent Apollo missions also landed astronauts on the Moon, the last, Apollo 17, in December 1972. In these six spaceflights, twelve men walked on the Moon.
Apollo ran from 1961 to 1972, with the first crewed flight in 1968. It encountered a major setback in 1967 when an Apollo 1 cabin fire killed the entire crew during a prelaunch test. After the first successful landing, sufficient flight hardware remained for nine follow-on landings with a plan for extended lunar geological and astrophysical exploration. Budget cuts forced the cancellation of three of these. Five of the remaining six missions achieved successful landings, but the Apollo 13 landing was prevented by an oxygen tank explosion in transit to the Moon, which destroyed the service module's capability to provide electrical power, crippling the CSM's propulsion and life support systems. The crew returned to Earth safely by using the lunar module as a "lifeboat" for these functions. Apollo used Saturn family rockets as launch vehicles, which were also used for an Apollo Applications Program, which consisted of Skylab, a space station that supported three crewed missions in 1973–74, and the Apollo–Soyuz Test Project, a joint US-Soviet Union Earth-orbit mission in 1975.
Apollo set several major human spaceflight milestones. It stands alone in sending crewed missions beyond low Earth orbit. Apollo 8 was the first crewed spacecraft to orbit another celestial body, and Apollo 11 was the first crewed spacecraft to land humans on one.
Overall, the Apollo program returned lunar rocks and soil to Earth, greatly contributing to the understanding of the Moon's composition and geological history. The program laid the foundation for NASA's subsequent human spaceflight capability, and funded construction of its Johnson Space Center and Kennedy Space Center. Apollo also spurred advances in many areas of technology incidental to rocketry and human spaceflight, including avionics, telecommunications, and computers.
The Apollo program was conceived during the Eisenhower administration in early 1960, as a follow-up to Project Mercury. While the Mercury capsule could support only one astronaut on a limited Earth orbital mission, Apollo would carry three. Possible missions included ferrying crews to a space station, circumlunar flights, and eventual crewed lunar landings.
The program was named after Apollo, the Greek god of light, music, and the Sun, by NASA manager Abe Silverstein, who later said, "I was naming the spacecraft like I'd name my baby." Silverstein chose the name at home one evening, early in 1960, because he felt "Apollo riding his chariot across the Sun was appropriate to the grand scale of the proposed program."
In July 1960, NASA Deputy Administrator Hugh L. Dryden announced the Apollo program to industry representatives at a series of Space Task Group conferences. Preliminary specifications were laid out for a spacecraft with a "mission module" cabin separate from the "command module" (piloting and reentry cabin), and a "propulsion and equipment module". On August 30, a feasibility study competition was announced, and on October 25, three study contracts were awarded to General Dynamics/Convair, General Electric, and the Glenn L. Martin Company. Meanwhile, NASA performed its own in-house spacecraft design studies led by Maxime Faget, to serve as a gauge to judge and monitor the three industry designs.
In November 1960, John F. Kennedy was elected president after a campaign that promised American superiority over the Soviet Union in the fields of space exploration and missile defense. Up to the election of 1960, Kennedy had been speaking out against the "missile gap" that he and many other senators felt had developed between the Soviet Union and the United States due to the inaction of President Eisenhower. Beyond military power, Kennedy used aerospace technology as a symbol of national prestige, pledging to make the US not "first but, first and, first if, but first period". Despite Kennedy's rhetoric, he did not immediately come to a decision on the status of the Apollo program once he became president. He knew little about the technical details of the space program, and was put off by the massive financial commitment required by a crewed Moon landing. When Kennedy's newly appointed NASA Administrator James E. Webb requested a 30 percent budget increase for his agency, Kennedy supported an acceleration of NASA's large booster program but deferred a decision on the broader issue.
On April 12, 1961, Soviet cosmonaut Yuri Gagarin became the first person to fly in space, reinforcing American fears about being left behind in a technological competition with the Soviet Union. At a meeting of the US House Committee on Science and Astronautics one day after Gagarin's flight, many congressmen pledged their support for a crash program aimed at ensuring that America would catch up. Kennedy was circumspect in his response to the news, refusing to make a commitment on America's response to the Soviets.
On April 20, Kennedy sent a memo to Vice President Lyndon B. Johnson, asking Johnson to look into the status of America's space program, and into programs that could offer NASA the opportunity to catch up. Johnson responded approximately one week later, concluding that "we are neither making maximum effort nor achieving results necessary if this country is to reach a position of leadership." His memo concluded that a crewed Moon landing was far enough in the future that it was likely the United States would achieve it first.
On May 25, 1961, twenty days after the first US crewed spaceflight "Freedom 7", Kennedy proposed the crewed Moon landing in a "Special Message to the Congress on Urgent National Needs":
At the time of Kennedy's proposal, only one American had flown in space—less than a month earlier—and NASA had not yet sent an astronaut into orbit. Even some NASA employees doubted whether Kennedy's ambitious goal could be met. By 1963, Kennedy even came close to agreeing to a joint US-USSR Moon mission, to eliminate duplication of effort.
With the clear goal of a crewed landing replacing the more nebulous goals of space stations and circumlunar flights, NASA decided that, in order to make progress quickly, it would discard the feasibility study designs of Convair, GE, and Martin, and proceed with Faget's command and service module design. The mission module was determined to be useful only as an extra room, and therefore unnecessary. They used Faget's design as the specification for another competition for spacecraft procurement bids in October 1961. On November 28, 1961, it was announced that North American Aviation had won the contract, although its bid was not rated as good as Martin's. Webb, Dryden and Robert Seamans chose it in preference due to North American's longer association with NASA and its predecessor.
Landing men on the Moon by the end of 1969 required the most sudden burst of technological creativity, and the largest commitment of resources (about $25 billion) ever made by any nation in peacetime. At its peak, the Apollo program employed 400,000 people and required the support of over 20,000 industrial firms and universities.
On July 1, 1960, NASA established the Marshall Space Flight Center (MSFC) in Huntsville, Alabama. MSFC designed the heavy lift-class Saturn launch vehicles, which would be required for Apollo.
It became clear that managing the Apollo program would exceed the capabilities of Robert R. Gilruth's Space Task Group, which had been directing the nation's crewed space program from NASA's Langley Research Center. So Gilruth was given authority to grow his organization into a new NASA center, the Manned Spacecraft Center (MSC). A site was chosen in Houston, Texas, on land donated by Rice University, and Administrator Webb announced the conversion on September 19, 1961. It was also clear NASA would soon outgrow its practice of controlling missions from its Cape Canaveral Air Force Station launch facilities in Florida, so a new Mission Control Center would be included in the MSC.
By September 1962, two Project Mercury astronauts had orbited the Earth, Gilruth had moved his organization to rented space in Houston, and construction of the MSC facility was under way. That month, Kennedy visited Rice to reiterate his challenge in a famous speech:
The MSC was completed in September 1963. It was renamed by the US Congress in honor of Lyndon Johnson soon after his death in 1973.
It also became clear that Apollo would outgrow the Canaveral launch facilities in Florida. The two newest launch complexes were already being built for the Saturn I and IB rockets at the northernmost end: LC-34 and LC-37. But an even bigger facility would be needed for the mammoth rocket required for the crewed lunar mission, so land acquisition was started in July 1961 for a Launch Operations Center (LOC) immediately north of Canaveral at Merritt Island. The design, development and construction of the center were conducted by Kurt H. Debus, a member of Dr. Wernher von Braun's original V-2 rocket engineering team. Debus was named the LOC's first Director. Construction began in November 1962. Upon Kennedy's death, President Johnson issued an executive order on November 29, 1963, to rename the LOC and Cape Canaveral in honor of Kennedy.
The LOC included Launch Complex 39, a Launch Control Center, and a Vertical Assembly Building (VAB) in which the space vehicle (launch vehicle and spacecraft) would be assembled on a mobile launcher platform and then moved by a crawler-transporter to one of several launch pads. Although at least three pads were planned, only two, designated A and B, were completed in October 1965. The LOC also included an Operations and Checkout Building (OCB), to which Gemini and Apollo spacecraft were initially delivered prior to being mated to their launch vehicles. The Apollo spacecraft could be tested in two vacuum chambers capable of simulating the near-vacuum of very high altitudes.
Administrator Webb realized that in order to keep Apollo costs under control, he had to develop greater project management skills in his organization, so he recruited Dr. George E. Mueller for a high management job. Mueller accepted, on the condition that he have a say in NASA reorganization necessary to effectively administer Apollo. Webb then worked with Associate Administrator (later Deputy Administrator) Seamans to reorganize the Office of Manned Space Flight (OMSF). On July 23, 1963, Webb announced Mueller's appointment as Deputy Associate Administrator for Manned Space Flight, to replace then Associate Administrator D. Brainerd Holmes on his retirement effective September 1. Under Webb's reorganization, the directors of the Manned Spacecraft Center (Gilruth), Marshall Space Flight Center (von Braun), and the Launch Operations Center (Debus) reported to Mueller.
Based on his industry experience on Air Force missile projects, Mueller realized some skilled managers could be found among high-ranking officers in the United States Air Force, so he got Webb's permission to recruit General Samuel C. Phillips, who had gained a reputation for his effective management of the Minuteman program, as OMSF program controller. Phillips' superior officer Bernard A. Schriever agreed to loan Phillips to NASA, along with a staff of officers under him, on the condition that Phillips be made Apollo Program Director. Mueller agreed, and Phillips managed Apollo from January 1964, until it achieved the first human landing in July 1969, after which he returned to Air Force duty.
Once Kennedy had defined a goal, the Apollo mission planners were faced with the challenge of designing a spacecraft that could meet it while minimizing risk to human life, cost, and demands on technology and astronaut skill. Four possible mission modes were considered:
In early 1961, direct ascent was generally the mission mode in favor at NASA. Many engineers feared that rendezvous and docking, maneuvers which had not been attempted in Earth orbit, would be nearly impossible in lunar orbit. LOR advocates including John Houbolt at Langley Research Center emphasized the important weight reductions that were offered by the LOR approach. Throughout 1960 and 1961, Houbolt campaigned for the recognition of LOR as a viable and practical option. Bypassing the NASA hierarchy, he sent a series of memos and reports on the issue to Associate Administrator Robert Seamans; while acknowledging that he spoke "somewhat as a voice in the wilderness", Houbolt pleaded that LOR should not be discounted in studies of the question.
Seamans' establishment of an ad-hoc committee headed by his special technical assistant Nicholas E. Golovin in July 1961, to recommend a launch vehicle to be used in the Apollo program, represented a turning point in NASA's mission mode decision. This committee recognized that the chosen mode was an important part of the launch vehicle choice, and recommended in favor of a hybrid EOR-LOR mode. Its consideration of LOR—as well as Houbolt's ceaseless work—played an important role in publicizing the workability of the approach. In late 1961 and early 1962, members of the Manned Spacecraft Center began to come around to support LOR, including the newly hired deputy director of the Office of Manned Space Flight, Joseph Shea, who became a champion of LOR. The engineers at Marshall Space Flight Center (MSFC), which had much to lose from the decision, took longer to become convinced of its merits, but their conversion was announced by Wernher von Braun at a briefing on June 7, 1962.
But even after NASA reached internal agreement, it was far from smooth sailing. Kennedy's science advisor Jerome Wiesner, who had expressed his opposition to human spaceflight to Kennedy before the President took office, and had opposed the decision to land men on the Moon, hired Golovin, who had left NASA, to chair his own "Space Vehicle Panel", ostensibly to monitor, but actually to second-guess, NASA's decisions on the Saturn V launch vehicle and LOR. The panel forced Shea, Seamans, and even Webb to defend themselves, delaying the formal announcement to the press until July 11, 1962, and forcing Webb to hedge the decision as "tentative".
Wiesner kept up the pressure, even making the disagreement public during a two-day September visit by the President to Marshall Space Flight Center. Wiesner blurted out "No, that's no good" in front of the press, during a presentation by von Braun. Webb jumped in and defended von Braun, until Kennedy ended the squabble by stating that the matter was "still subject to final review". Webb held firm and issued a request for proposal to candidate Lunar Excursion Module (LEM) contractors. Wiesner finally relented, unwilling to settle the dispute once and for all in Kennedy's office, because of the President's involvement with the October Cuban Missile Crisis, and fear of Kennedy's support for Webb. NASA announced the selection of Grumman as the LEM contractor in November 1962.
Space historian James Hansen concludes that:
The LOR method had the advantage of allowing the lander spacecraft to be used as a "lifeboat" in the event of a failure of the command ship. Documents show this possibility was discussed before and after the method was chosen. In 1964 an MSC study concluded, "The LM [as lifeboat]... was finally dropped, because no single reasonable CSM failure could be identified that would prohibit use of the SPS." Ironically, just such a failure happened on Apollo 13 when an oxygen tank explosion left the CSM without electrical power. The lunar module provided propulsion, electrical power and life support to get the crew home safely.
Faget's preliminary Apollo design employed a cone-shaped command module, supported by one of several service modules providing propulsion and electrical power, sized appropriately for the space station, cislunar, and lunar landing missions. Once Kennedy's Moon landing goal became official, detailed design began of a "command and service module" (CSM) in which the crew would spend the entire direct-ascent mission and lift off from the lunar surface for the return trip, after being soft-landed by a larger landing propulsion module. The final choice of lunar orbit rendezvous changed the CSM's role to the translunar ferry used to transport the crew, along with a new spacecraft, the "Lunar Excursion Module" (LEM, later shortened to "Lunar Module", LM, but still pronounced "lem") which would take two men to the lunar surface and return them to the CSM.
The command module (CM) was the conical crew cabin, designed to carry three astronauts from launch to lunar orbit and back to an Earth ocean landing. It was the only component of the Apollo spacecraft to survive without major configuration changes as the program evolved from the early Apollo study designs. Its exterior was covered with an ablative heat shield, and it had its own reaction control system (RCS) engines to control its attitude and steer its atmospheric entry path. Parachutes were carried to slow its descent to splashdown.
A cylindrical service module (SM) supported the command module, with a service propulsion engine and an RCS with propellants, and a fuel cell power generation system with liquid hydrogen and liquid oxygen reactants. A high-gain S-band antenna was used for long-distance communications on the lunar flights. On the extended lunar missions, an orbital scientific instrument package was carried. The service module was discarded just before reentry.
North American Aviation won the contract to build the CSM, and also the second stage of the Saturn V launch vehicle for NASA. Because the CSM design began before the selection of lunar orbit rendezvous, the service propulsion engine was sized to lift the CSM off the Moon, and thus was oversized to about twice the thrust required for translunar flight. Also, there was no provision for docking with the lunar module. A 1964 program definition study concluded that the initial design should be continued as Block I, which would be used for early testing, while Block II, the actual lunar spacecraft, would incorporate the docking equipment and take advantage of the lessons learned in Block I development.
The Apollo Lunar Module (LM) was designed to descend from lunar orbit to land two astronauts on the Moon and take them back to orbit to rendezvous with the command module. Not designed to fly through the Earth's atmosphere or return to Earth, its fuselage was designed totally without aerodynamic considerations and was of an extremely lightweight construction. It consisted of separate descent and ascent stages, each with its own engine. The descent stage contained storage for the descent propellant, surface stay consumables, and surface exploration equipment. The ascent stage contained the crew cabin, ascent propellant, and a reaction control system. The initial LM model allowed surface stays of up to around 34 hours, while an extended lunar module allowed surface stays of more than three days. The contract for design and construction of the lunar module was awarded to Grumman Aircraft Engineering Corporation, and the project was overseen by Thomas J. Kelly.
Before the Apollo program began, Wernher von Braun and his team of rocket engineers had started work on plans for very large launch vehicles, the Saturn series, and the even larger Nova series. In the midst of these plans, von Braun was transferred from the Army to NASA and was made Director of the Marshall Space Flight Center. The initial direct ascent plan to send the three-person Apollo command and service module directly to the lunar surface, on top of a large descent rocket stage, would require a Nova-class launcher. The June 11, 1962, decision to use lunar orbit rendezvous enabled the Saturn V to replace the Nova, and the MSFC proceeded to develop the Saturn rocket family for Apollo.
Since Apollo, like Mercury, used more than one launch vehicle for space missions, NASA used spacecraft-launch vehicle combination series numbers: AS-10x for Saturn I, AS-20x for Saturn IB, and AS-50x for Saturn V (compare Mercury-Redstone 3, Mercury-Atlas 6) to designate and plan all missions, rather than numbering them sequentially as in Project Gemini. This was changed by the time human flights began.
Since Apollo, like Mercury, would require a launch escape system (LES) in case of a launch failure, a relatively small rocket was required for qualification flight testing of this system. A rocket bigger than the Little Joe used by Mercury would be required, so the Little Joe II was built by General Dynamics/Convair. After an August 1963 qualification test flight, four LES test flights (A-001 through 004) were made at the White Sands Missile Range between May 1964 and January 1966.
Saturn I, the first US heavy-lift launch vehicle, was initially planned to launch partially equipped CSMs in low Earth orbit tests. The S-I first stage burned RP-1 with liquid oxygen (LOX) oxidizer in eight clustered Rocketdyne H-1 engines. The S-IV second stage used six liquid hydrogen-fueled Pratt & Whitney RL-10 engines. A planned Centaur (S-V) third stage with two RL-10 engines never flew on Saturn I.
The first four Saturn I test flights were launched from LC-34, with only the first stage live, carrying dummy upper stages filled with water. The first flight with a live S-IV was launched from LC-37. This was followed by five launches of boilerplate CSMs (designated AS-101 through AS-105) into orbit in 1964 and 1965. The last three of these further supported the Apollo program by also carrying Pegasus satellites, which verified the safety of the translunar environment by measuring the frequency and severity of micrometeorite impacts.
In September 1962, NASA planned to launch four crewed CSM flights on the Saturn I from late 1965 through 1966, concurrent with Project Gemini. The payload capacity would have severely limited the systems which could be included, so the decision was made in October 1963 to use the uprated Saturn IB for all crewed Earth orbital flights.
The Saturn IB was an upgraded version of the Saturn I. The S-IB first stage increased thrust by uprating the H-1 engines. The second stage replaced the S-IV with the S-IVB-200, powered by a single J-2 engine burning liquid hydrogen fuel with LOX. A restartable version of the S-IVB was used as the third stage of the Saturn V. The Saturn IB could send a partially fueled CSM or the LM into low Earth orbit. Saturn IB launch vehicles and flights were designated with an AS-200 series number, "AS" indicating "Apollo Saturn" and the "2" indicating the second member of the Saturn rocket family.
Saturn V launch vehicles and flights were designated with an AS-500 series number, "AS" indicating "Apollo Saturn" and the "5" indicating Saturn V. The three-stage Saturn V was designed to send a fully fueled CSM and LM to the Moon. It was in diameter and stood tall with its lunar payload. Its capability grew to for the later advanced lunar landings. The S-IC first stage burned RP-1/LOX for a rated thrust of , which was upgraded to . The second and third stages burned liquid hydrogen, and the third stage was a modified version of the S-IVB, with thrust increased to and capability to restart the engine for translunar injection after reaching a parking orbit.
NASA's director of flight crew operations during the Apollo program was Donald K. "Deke" Slayton, one of the original Mercury Seven astronauts who was medically grounded in September 1962 due to a heart murmur. Slayton was responsible for making all Gemini and Apollo crew assignments.
Thirty-two astronauts were assigned to fly missions in the Apollo program. Twenty-four of these left Earth's orbit and flew around the Moon between December 1968 and December 1972 (three of them twice). Half of the 24 walked on the Moon's surface, though none of them returned to it after landing once. One of the moonwalkers was a trained geologist. Of the 32, Gus Grissom, Ed White, and Roger Chaffee were killed during a ground test in preparation for the Apollo 1 mission.
The Apollo astronauts were chosen from the Project Mercury and Gemini veterans, plus from two later astronaut groups. All missions were commanded by Gemini or Mercury veterans. Crews on all development flights (except the Earth orbit CSM development flights) through the first two landings on Apollo 11 and Apollo 12, included at least two (sometimes three) Gemini veterans. Dr. Harrison Schmitt, a geologist, was the first NASA scientist astronaut to fly in space, and landed on the Moon on the last mission, Apollo 17. Schmitt participated in the lunar geology training of all of the Apollo landing crews.
NASA awarded all 32 of these astronauts its highest honor, the Distinguished Service Medal, given for "distinguished service, ability, or courage", and personal "contribution representing substantial progress to the NASA mission". The medals were awarded posthumously to Grissom, White, and Chaffee in 1969, then to the crews of all missions from Apollo 8 onward. The crew that flew the first Earth orbital test mission, Apollo 7 (Walter M. Schirra, Donn Eisele, and Walter Cunningham), was awarded the lesser NASA Exceptional Service Medal because of discipline problems in following the flight director's orders during their flight. In October 2008, the NASA Administrator decided to award them the Distinguished Service Medal, by that time posthumously in the cases of Schirra and Eisele.
The first lunar landing mission was planned to proceed as follows:
File:Apollo unmanned launches.png|thumb|right|upright=1.15|Apollo uncrewed development mission launches, in chronological sequence: AS-201 (first uncrewed CSM test), AS-203 (S-IVB stage development test), AS-202 (second uncrewed CSM test), Apollo 4 (first uncrewed Saturn V test), Apollo 5 (uncrewed LM test), and Apollo 6 (second uncrewed Saturn V test)|alt=Composite image of uncrewed development Apollo mission launches in chronological sequence.
Two Block I CSMs were launched from LC-34 on suborbital flights in 1966 with the Saturn IB. The first, AS-201 launched on February 26, reached an altitude of and splashed down downrange in the Atlantic Ocean. The second, AS-202 on August 25, reached altitude and was recovered downrange in the Pacific Ocean. These flights validated the service module engine and the command module heat shield.
A third Saturn IB test, AS-203 launched from pad 37, went into orbit to support design of the S-IVB upper stage restart capability needed for the Saturn V. It carried a nose cone instead of the Apollo spacecraft, and its payload was the unburned liquid hydrogen fuel, the behavior of which engineers measured with temperature and pressure sensors, and a TV camera. This flight occurred on July 5, before AS-202, which was delayed because of problems getting the Apollo spacecraft ready for flight.
Two crewed orbital Block I CSM missions were planned: AS-204 and AS-205. The Block I crew positions were titled Command Pilot, Senior Pilot, and Pilot. The Senior Pilot would assume navigation duties, while the Pilot would function as a systems engineer. The astronauts would wear a modified version of the Gemini spacesuit.
After an uncrewed LM test flight AS-206, a crew would fly the first Block II CSM and LM in a dual mission known as AS-207/208, or AS-278 (each spacecraft would be launched on a separate Saturn IB). The Block II crew positions were titled Commander, Command Module Pilot, and Lunar Module Pilot. The astronauts would begin wearing a new Apollo A6L spacesuit, designed to accommodate lunar extravehicular activity (EVA). The traditional visor helmet was replaced with a clear "fishbowl" type for greater visibility, and the lunar surface EVA suit would include a water-cooled undergarment.
Deke Slayton, the grounded Mercury astronaut who became director of flight crew operations for the Gemini and Apollo programs, selected the first Apollo crew in January 1966, with Grissom as Command Pilot, White as Senior Pilot, and rookie Donn F. Eisele as Pilot. But Eisele dislocated his shoulder twice aboard the KC-135 weightlessness training aircraft, and had to undergo surgery on January 27. Slayton replaced him with Chaffee. NASA announced the final crew selection for AS-204 on March 21, 1966, with the backup crew consisting of Gemini veterans James McDivitt and David Scott, with rookie Russell L. "Rusty" Schweickart. Mercury/Gemini veteran Wally Schirra, Eisele, and rookie Walter Cunningham were announced on September 29 as the prime crew for AS-205.
In December 1966, the AS-205 mission was canceled, since the validation of the CSM would be accomplished on the 14-day first flight, and AS-205 would have been devoted to space experiments and contribute no new engineering knowledge about the spacecraft. Its Saturn IB was allocated to the dual mission, now redesignated AS-205/208 or AS-258, planned for August 1967. McDivitt, Scott and Schweickart were promoted to the prime AS-258 crew, and Schirra, Eisele and Cunningham were reassigned as the Apollo 1 backup crew.
The spacecraft for the AS-202 and AS-204 missions were delivered by North American Aviation to the Kennedy Space Center with long lists of equipment problems which had to be corrected before flight; these delays caused the launch of AS-202 to slip behind AS-203, and eliminated hopes the first crewed mission might be ready to launch as soon as November 1966, concurrently with the last Gemini mission. Eventually, the planned AS-204 flight date was pushed to February 21, 1967.
North American Aviation was prime contractor not only for the Apollo CSM, but for the Saturn V S-II second stage as well, and delays in this stage pushed the first uncrewed Saturn V flight AS-501 from late 1966 to November 1967. (The initial assembly of AS-501 had to use a dummy spacer spool in place of the stage.)
The problems with North American were severe enough in late 1965 to cause Manned Space Flight Administrator George Mueller to appoint program director Samuel Phillips to head a "tiger team" to investigate North American's problems and identify corrections. Phillips documented his findings in a December 19 letter to NAA president Lee Atwood, with a strongly worded letter by Mueller, and also gave a presentation of the results to Mueller and Deputy Administrator Robert Seamans. Meanwhile, Grumman was also encountering problems with the Lunar Module, eliminating hopes it would be ready for crewed flight in 1967, not long after the first crewed CSM flights.
Grissom, White, and Chaffee decided to name their flight Apollo 1 as a motivational focus on the first crewed flight. They trained and conducted tests of their spacecraft at North American, and in the altitude chamber at the Kennedy Space Center. A "plugs-out" test was planned for January, which would simulate a launch countdown on LC-34 with the spacecraft transferring from pad-supplied to internal power. If successful, this would be followed by a more rigorous countdown simulation test closer to the February 21 launch, with both spacecraft and launch vehicle fueled.
The plugs-out test began on the morning of January 27, 1967, and immediately was plagued with problems. First, the crew noticed a strange odor in their spacesuits which delayed the sealing of the hatch. Then, communications problems frustrated the astronauts and forced a hold in the simulated countdown. During this hold, an electrical fire began in the cabin and spread quickly in the high-pressure, 100% oxygen atmosphere. Pressure rose high enough from the fire that the cabin inner wall burst, allowing the fire to erupt onto the pad area and frustrating attempts to rescue the crew. The astronauts were asphyxiated before the hatch could be opened.
NASA immediately convened an accident review board, overseen by both houses of Congress. While the determination of responsibility for the accident was complex, the review board concluded that "deficiencies existed in command module design, workmanship and quality control". At the insistence of NASA Administrator Webb, North American removed Harrison Storms as command module program manager. Webb also reassigned Apollo Spacecraft Program Office (ASPO) Manager Joseph Francis Shea, replacing him with George Low.
To remedy the causes of the fire, changes were made in the Block II spacecraft and operational procedures, the most important of which were use of a nitrogen/oxygen mixture instead of pure oxygen before and during launch, and removal of flammable cabin and space suit materials. The Block II design already called for replacement of the Block I plug-type hatch cover with a quick-release, outward opening door. NASA discontinued the crewed Block I program, using the Block I spacecraft only for uncrewed Saturn V flights. Crew members would also exclusively wear modified, fire-resistant A7L Block II space suits, and would be designated by the Block II titles, regardless of whether a LM was present on the flight or not.
On April 24, 1967, Mueller published an official Apollo mission numbering scheme, using sequential numbers for all flights, crewed or uncrewed. The sequence would start with Apollo 4 to cover the first three uncrewed flights while retiring the Apollo 1 designation to honor the crew, per their widows' wishes.
In September 1967, Mueller approved a sequence of mission types which had to be successfully accomplished in order to achieve the crewed lunar landing. Each step had to be successfully accomplished before the next ones could be performed, and it was unknown how many tries of each mission would be necessary; therefore letters were used instead of numbers. The A missions were uncrewed Saturn V validation; B was uncrewed LM validation using the Saturn IB; C was crewed CSM Earth orbit validation using the Saturn IB; D was the first crewed CSM/LM flight (this replaced AS-258, using a single Saturn V launch); E would be a higher Earth orbit CSM/LM flight; F would be the first lunar mission, testing the LM in lunar orbit but without landing (a "dress rehearsal"); and G would be the first crewed landing. The list of types covered follow-on lunar exploration to include H lunar landings, I for lunar orbital survey missions, and J for extended-stay lunar landings.
The delay in the CSM caused by the fire enabled NASA to catch up on human-rating the LM and Saturn V. Apollo 4 (AS-501) was the first uncrewed flight of the Saturn V, carrying a Block I CSM on November 9, 1967. The capability of the command module's heat shield to survive a trans-lunar reentry was demonstrated by using the service module engine to ram it into the atmosphere at higher than the usual Earth-orbital reentry speed.
Apollo 5 (AS-204) was the first uncrewed test flight of the LM in Earth orbit, launched from pad 37 on January 22, 1968, by the Saturn IB that would have been used for Apollo 1. The LM engines were successfully test-fired and restarted, despite a computer programming error which cut short the first descent stage firing. The ascent engine was fired in abort mode, known as a "fire-in-the-hole" test, where it was lit simultaneously with jettison of the descent stage. Although Grumman wanted a second uncrewed test, George Low decided the next LM flight would be crewed.
This was followed on April 4, 1968, by Apollo 6 (AS-502) which carried a CSM and a LM Test Article as ballast. The intent of this mission was to achieve trans-lunar injection, followed closely by a simulated direct-return abort, using the service module engine to achieve another high-speed reentry. The Saturn V experienced pogo oscillation, a problem caused by non-steady engine combustion, which damaged fuel lines in the second and third stages. Two S-II engines shut down prematurely, but the remaining engines were able to compensate. The damage to the third stage engine was more severe, preventing it from restarting for trans-lunar injection. Mission controllers were able to use the service module engine to essentially repeat the flight profile of Apollo 4. Based on the good performance of Apollo 6 and identification of satisfactory fixes to the Apollo 6 problems, NASA declared the Saturn V ready to fly men, canceling a third uncrewed test.
File:Apollo manned development missions insignia.png|thumb|right|upright=1.15|Apollo crewed development mission patches: Apollo 1 (unsuccessful first crewed CSM test), Apollo 7 (first crewed CSM test), Apollo 8 (first crewed flight to the Moon), Apollo 9 (crewed Earth orbital LM test), Apollo 10 (crewed lunar orbital LM test), and Apollo 11 (first crewed Moon landing)|alt=Composite image of six crewed Apollo development mission patches, from Apollo 1 to Apollo 11.
Apollo 7, launched from LC-34 on October 11, 1968, was the C mission, crewed by Schirra, Eisele, and Cunningham. It was an 11-day Earth-orbital flight which tested the CSM systems.
Apollo 8 was planned to be the D mission in December 1968, crewed by McDivitt, Scott and Schweickart, launched on a Saturn V instead of two Saturn IBs. In the summer it had become clear that the LM would not be ready in time. Rather than waste the Saturn V on another simple Earth-orbiting mission, ASPO Manager George Low suggested the bold step of sending Apollo 8 to orbit the Moon instead, deferring the D mission to the next mission in March 1969, and eliminating the E mission. This would keep the program on track. The Soviet Union had sent two tortoises, mealworms, wine flies, and other lifeforms around the Moon on September 15, 1968, aboard Zond 5, and it was believed they might soon repeat the feat with human cosmonauts. The decision was not announced publicly until successful completion of Apollo 7. Gemini veterans Frank Borman and Jim Lovell, and rookie William Anders captured the world's attention by making ten lunar orbits in 20 hours, transmitting television pictures of the lunar surface on Christmas Eve, and returning safely to Earth.
The following March, LM flight, rendezvous and docking were successfully demonstrated in Earth orbit on Apollo 9, and Schweickart tested the full lunar EVA suit with its portable life support system (PLSS) outside the LM. The F mission was successfully carried out on Apollo 10 in May 1969 by Gemini veterans Thomas P. Stafford, John Young and Eugene Cernan. Stafford and Cernan took the LM to within of the lunar surface.
The G mission was achieved on Apollo 11 in July 1969 by an all-Gemini veteran crew consisting of Neil Armstrong, Michael Collins and Buzz Aldrin. Armstrong and Aldrin performed the first landing at the Sea of Tranquility at 20:17:40 UTC on July 20, 1969. They spent a total of 21 hours, 36 minutes on the surface, including 2 hours, 31 minutes outside the spacecraft, walking on the surface, taking photographs, collecting material samples, and deploying automated scientific instruments, while continuously sending black-and-white television back to Earth. The astronauts returned safely on July 24.
File:Apollo lunar landing missions insignia.png|thumb|right|upright=1.15|Apollo production crewed lunar landing mission patches: Apollo 12 (second crewed Moon landing), Apollo 13 (unsuccessful Moon landing attempt), Apollo 14 (third crewed Moon landing), Apollo 15 (fourth crewed Moon landing), Apollo 16 (fifth crewed Moon landing), and Apollo 17 (sixth crewed Moon landing)|alt=Composite image of six production crewed Apollo lunar landing mission patches, from Apollo 12 to Apollo 17.
In November 1969, Gemini veteran Charles "Pete" Conrad and rookie Alan L. Bean made a precision landing on Apollo 12 within walking distance of the Surveyor 3 uncrewed lunar probe, which had landed in April 1967 on the Ocean of Storms. The command module pilot was Gemini veteran Richard F. Gordon Jr. Conrad and Bean carried the first lunar surface color television camera, but it was damaged when accidentally pointed into the Sun. They made two EVAs totaling 7 hours and 45 minutes. On one, they walked to the Surveyor, photographed it, and removed some parts which they returned to Earth.
The success of the first two landings allowed the remaining missions to be crewed with a single veteran as commander, with two rookies. Apollo 13 launched Lovell, Jack Swigert, and Fred Haise in April 1970, headed for the Fra Mauro formation. But two days out, a liquid oxygen tank exploded, disabling the service module and forcing the crew to use the LM as a "lifeboat" to return to Earth. Another NASA review board was convened to determine the cause, which turned out to be a combination of damage of the tank in the factory, and a subcontractor not making a tank component according to updated design specifications. Apollo was grounded again, for the remainder of 1970 while the oxygen tank was redesigned and an extra one was added.
The contracted batch of 15 Saturn Vs was enough for lunar landing missions through Apollo 20. NASA publicized a preliminary list of eight more planned landing sites, with plans to increase the mass of the CSM and LM for the last five missions, along with the payload capacity of the Saturn V. These final missions would combine the I and J types in the 1967 list, allowing the CMP to operate a package of lunar orbital sensors and cameras while his companions were on the surface, and allowing them to stay on the Moon for over three days. These missions would also carry the Lunar Roving Vehicle (LRV) increasing the exploration area and allowing televised liftoff of the LM. Also, the Block II spacesuit was revised for the extended missions to allow greater flexibility and visibility for driving the LRV.
About the time of the first landing in 1969, it was decided to use an existing Saturn V to launch the Skylab orbital laboratory pre-built on the ground, replacing the original plan to construct it in orbit from several Saturn IB launches; this eliminated Apollo 20. NASA's yearly budget also began to shrink in light of the successful landing, and NASA also had to make funds available for the development of the upcoming Space Shuttle. By 1971, the decision was made to also cancel missions 18 and 19. The two unused Saturn Vs became museum exhibits at the John F. Kennedy Space Center on Merritt Island, Florida, George C. Marshall Space Flight Center in Huntsville, Alabama, Michoud Assembly Facility in New Orleans, Louisiana, and Lyndon B. Johnson Space Center in Houston, Texas.
The cutbacks forced mission planners to reassess the original planned landing sites in order to achieve the most effective geological sample and data collection from the remaining four missions. Apollo 15 had been planned to be the last of the H series missions, but since there would be only two subsequent missions left, it was changed to the first of three J missions.
Apollo 13's Fra Mauro mission was reassigned to Apollo 14, commanded in February 1971 by Mercury veteran Alan Shepard, with Stuart Roosa and Edgar Mitchell. This time the mission was successful. Shepard and Mitchell spent 33 hours and 31 minutes on the surface, and completed two EVAs totaling 9 hours 24 minutes, which was a record for the longest EVA by a lunar crew at the time.
In August 1971, just after conclusion of the Apollo 15 mission, President Richard Nixon proposed canceling the two remaining lunar landing missions, Apollo 16 and 17. Office of Management and Budget Deputy Director Caspar Weinberger was opposed to this, and persuaded Nixon to keep the remaining missions.
Apollo 15 was launched on July 26, 1971, with David Scott, Alfred Worden and James Irwin. Scott and Irwin landed on July 30 near Hadley Rille, and spent just under two days, 19 hours on the surface. In over 18 hours of EVA, they collected about of lunar material.
Apollo 16 landed in the Descartes Highlands on April 20, 1972. The crew was commanded by John Young, with Ken Mattingly and Charles Duke. Young and Duke spent just under three days on the surface, with a total of over 20 hours EVA.
Apollo 17 was the last of the Apollo program, landing in the Taurus–Littrow region in December 1972. Eugene Cernan commanded Ronald E. Evans and NASA's first scientist-astronaut, geologist Dr. Harrison H. Schmitt. Schmitt was originally scheduled for Apollo 18, but the lunar geological community lobbied for his inclusion on the final lunar landing. Cernan and Schmitt stayed on the surface for just over three days and spent just over 23 hours of total EVA.
Source: "Apollo by the Numbers: A Statistical Reference" (Orloff 2004)
The Apollo program returned over of lunar rocks and soil to the Lunar Receiving Laboratory in Houston. Today, 75% of the samples are stored at the Lunar Sample Laboratory Facility built in 1979.
The rocks collected from the Moon are extremely old compared to rocks found on Earth, as measured by radiometric dating techniques. They range in age from about 3.2 billion years for the basaltic samples derived from the lunar maria, to about 4.6 billion years for samples derived from the highlands crust. As such, they represent samples from a very early period in the development of the Solar System that are largely absent on Earth. One important rock found during the Apollo program is dubbed the Genesis Rock, retrieved by astronauts David Scott and James Irwin during the Apollo 15 mission. This anorthosite rock is composed almost exclusively of the calcium-rich feldspar mineral anorthite, and is believed to be representative of the highland crust. A geochemical component called KREEP was discovered by Apollo 12, which has no known terrestrial counterpart. KREEP and the anorthositic samples have been used to infer that the outer portion of the Moon was once completely molten (see lunar magma ocean).
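The ages quoted above follow from standard radioactive-decay arithmetic: a sample's age is computed from the measured ratio of daughter to parent isotope atoms. A minimal sketch, assuming a simple closed system with no initial daughter isotope (the Rb-Sr half-life is real, but the ratio below is chosen purely to illustrate the calculation, not taken from any Apollo sample analysis):

```python
import math

def radiometric_age(daughter_to_parent_ratio, half_life_years):
    """Age from the decay relation t = (1/lambda) * ln(1 + D/P),
    where lambda = ln(2) / half-life."""
    decay_constant = math.log(2) / half_life_years
    return math.log(1 + daughter_to_parent_ratio) / decay_constant

# Rb-87 decays to Sr-87 with a half-life of about 48.8 billion years,
# one of the systems used to date lunar samples. An illustrative
# daughter-to-parent ratio of 0.047 yields an age near the ~3.2 billion
# years quoted for the mare basalts.
age = radiometric_age(0.047, 48.8e9)
print(f"{age / 1e9:.1f} billion years")  # → 3.2 billion years
```

Real sample dating uses isochron methods across several minerals to correct for initial daughter content, but the exponential-decay relation above is the core of the technique.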
Almost all the rocks show evidence of impact process effects. Many samples appear to be pitted with micrometeoroid impact craters, something never seen in Earth rocks because micrometeoroids burn up in Earth's thick atmosphere. Many show signs of being subjected to high-pressure shock waves that are generated during impact events. Some of the returned samples are of "impact melt" (materials melted near an impact crater). All samples returned from the Moon are highly brecciated as a result of being subjected to multiple impact events.
Analysis of the composition of the lunar samples supports the giant impact hypothesis, that the Moon was created through impact of a large astronomical body with the Earth.
Apollo cost $25.4 billion (or approximately $ in dollars when adjusted for inflation via the GDP deflator index).
Of this amount, $20.2 billion ($ adjusted) was spent on the design, development, and production of the Saturn family of launch vehicles, the Apollo spacecraft, space suits, scientific experiments, and mission operations. The cost of constructing and operating Apollo-related ground facilities, such as the NASA human spaceflight centers and the global tracking and data acquisition network, added an additional $5.2 billion ($ adjusted).
The amount grows to $28 billion ($ adjusted) if the costs for related projects such as Project Gemini and the robotic Ranger, Surveyor, and Lunar Orbiter programs are included.
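Adjusting these figures for inflation via a deflator index is a simple ratio scaling: multiply the nominal amount by (index now / index then). A minimal sketch, with hypothetical index values chosen only to illustrate the arithmetic (the real GDP deflator series would be looked up for the years in question):

```python
def adjust_for_inflation(nominal_dollars, deflator_then, deflator_now):
    """Scale a historical dollar amount by the ratio of price-index values."""
    return nominal_dollars * (deflator_now / deflator_then)

# Hypothetical deflator values: if the index rose from 20.0 (early 1970s)
# to 120.0 (today), overall prices rose six-fold, and the $25.4 billion
# program cost scales accordingly.
adjusted = adjust_for_inflation(25.4e9, 20.0, 120.0)
print(f"${adjusted / 1e9:.1f} billion")  # → $152.4 billion
```

The same scaling applies to the per-mission and facilities figures; only the index values differ by base year.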
NASA's official cost breakdown, as reported to Congress in the Spring of 1973, is as follows:
Accurate estimates of human spaceflight costs were difficult in the early 1960s, as the capability was new and management experience was lacking. Preliminary cost analysis by NASA estimated $7 billion to $12 billion for a crewed lunar landing effort. NASA Administrator James Webb increased this estimate to $20 billion before reporting it to Vice President Johnson in April 1961.
Project Apollo was a massive undertaking, representing the largest research and development project in peacetime. At its peak, it employed over 400,000 employees and contractors around the country and accounted for more than half of NASA's total spending in the 1960s. It proved unsustainable.
After the first Moon landing, public and political interest waned, including that of President Nixon, who wanted to rein in federal spending. NASA's budget could not sustain Apollo missions which cost, on average, $445 million ($ adjusted) each while simultaneously developing the Space Shuttle. The final fiscal year of Apollo funding was 1973.
Looking beyond the crewed lunar landings, NASA investigated several post-lunar applications for Apollo hardware. The Apollo Extension Series ("Apollo X") proposed up to 30 flights to Earth orbit, using the space in the Spacecraft Lunar Module Adapter (SLA) to house a small orbital laboratory (workshop). Astronauts would continue to use the CSM as a ferry to the station. This study was followed by design of a larger orbital workshop to be built in orbit from an empty S-IVB Saturn upper stage, and grew into the Apollo Applications Program (AAP). The workshop was to be supplemented by the Apollo Telescope Mount, which could be attached to the ascent stage of the lunar module via a rack. The most ambitious plan called for using an empty S-IVB as an interplanetary spacecraft for a Venus fly-by mission.
The S-IVB orbital workshop was the only one of these plans to make it off the drawing board. Dubbed Skylab, it was assembled on the ground rather than in space, and launched in 1973 using the two lower stages of a Saturn V. It was equipped with an Apollo Telescope Mount. Skylab's last crew departed the station on February 8, 1974, and the station itself re-entered the atmosphere in 1979.
The Apollo-Soyuz Test Project also used Apollo hardware for the first joint nation space flight, paving the way for future cooperation with other nations in the Space Shuttle and International Space Station programs.
In 2008, Japan Aerospace Exploration Agency's SELENE probe observed evidence of the halo surrounding the Apollo 15 Lunar Module blast crater while orbiting above the lunar surface.
Beginning in 2009, NASA's robotic Lunar Reconnaissance Orbiter, while orbiting above the Moon, photographed the remnants of the Apollo program left on the lunar surface, including each site where crewed Apollo flights landed. All of the U.S. flags left on the Moon during the Apollo missions were found to still be standing, except for the one left during the Apollo 11 mission, which was blown over during that mission's liftoff from the lunar surface. The degree to which these flags retain their original colors remains unknown.
In a November 16, 2009, editorial, "The New York Times" opined:
The Apollo program has been called the greatest technological achievement in human history. Apollo stimulated many areas of technology, leading to over 1,800 spinoff products as of 2015. The flight computer design used in both the lunar and command modules was, along with the Polaris and Minuteman missile systems, the driving force behind early research into integrated circuits (ICs). By 1963, Apollo was using 60 percent of the United States' production of ICs. The crucial difference between the requirements of Apollo and the missile programs was Apollo's much greater need for reliability. While the Navy and Air Force could work around reliability problems by deploying more missiles, the political and financial cost of failure of an Apollo mission was unacceptably high.
Technologies and techniques required for Apollo were developed by Project Gemini. The Apollo project was enabled by NASA's adoption of new advances in semiconductor electronic technology, including metal-oxide-semiconductor field-effect transistors (MOSFETs) in the Interplanetary Monitoring Platform (IMP) and silicon integrated circuit chips in the Apollo Guidance Computer (AGC).
The crew of Apollo 8 sent the first live televised pictures of the Earth and the Moon back to Earth, and read from the creation story in the Book of Genesis, on Christmas Eve 1968. An estimated one-quarter of the population of the world saw—either live or delayed—the Christmas Eve transmission during the ninth orbit of the Moon, and an estimated one-fifth of the population of the world watched the live transmission of the Apollo 11 moonwalk.
The Apollo program also affected environmental activism in the 1970s due to photos taken by the astronauts. The most well known include "Earthrise", taken by William Anders on Apollo 8, and "The Blue Marble", taken by the Apollo 17 astronauts. "The Blue Marble" was released during a surge in environmentalism, and became a symbol of the environmental movement as a depiction of Earth's frailty, vulnerability, and isolation amid the vast expanse of space.
According to "The Economist", Apollo succeeded in accomplishing President Kennedy's goal of taking on the Soviet Union in the Space Race by accomplishing a singular and significant achievement, to demonstrate the superiority of the free-market system. The publication noted the irony that in order to achieve the goal, the program required the organization of tremendous public resources within a vast, centralized government bureaucracy.
Prior to Apollo 11's 40th anniversary in 2009, NASA searched for the original videotapes of the mission's live televised moonwalk. After an exhaustive three-year search, it was concluded that the tapes had probably been erased and reused. A new digitally remastered version of the best available broadcast television footage was released instead.
Numerous documentary films cover the Apollo program and the Space Race, including:
The Apollo program, or certain missions, have been dramatized in "Apollo 13" (1995), "Apollo 11" (1996), "From the Earth to the Moon" (1998), "The Dish" (2000), "Space Race" (2005), "Moonshot" (2009), and "First Man" (2018).
The Apollo program has been the focus of several works of fiction, including:
Assault
An assault is the act of inflicting physical harm or unwanted physical contact upon a person or, in some specific legal definitions, a threat or attempt to commit such an action. It is both a crime and a tort and, therefore, may result in criminal prosecution, civil liability, or both. Generally, the common law definition is the same in criminal and tort law.
Traditionally, common law legal systems had separate definitions for assault and battery. When this distinction is observed, battery refers to the actual bodily contact, whereas assault refers to a credible threat or attempt to cause battery. Some jurisdictions combined the two offences into assault and battery, which then became widely referred to as "assault". The result is that in many of these jurisdictions, assault has taken on a definition that is more in line with the traditional definition of battery. The legal systems of civil law and Scots law have never distinguished assault from battery.
Legal systems generally acknowledge that assaults can vary greatly in severity. In the United States, an assault can be charged as either a misdemeanor or a felony. In England and Wales and Australia, it can be charged as either common assault, assault occasioning actual bodily harm (ABH) or grievous bodily harm (GBH). Canada also has a three-tier system: assault, assault causing bodily harm and aggravated assault. Separate charges typically exist for sexual assaults, affray and assaulting a police officer. Assault may overlap with an attempted crime; for example, an assault may be charged as an attempted murder if it was done with intent to kill.
In jurisdictions that make a distinction between the two, assault usually accompanies battery if the assailant both threatens to make unwanted contact and then carries through with this threat. See common assault. The elements of battery are that it is a volitional act, done for the purpose of causing a harmful or offensive contact with another person or under circumstances that make such contact substantially certain to occur, and which causes such contact.
Aggravated assault is, in some jurisdictions, a stronger form of assault, usually using a deadly weapon. A person has committed an aggravated assault when that person attempts to:
Aggravated assault can also be charged in cases of attempted harm against police officers or other public servants.
Although the range and precise application of defenses varies between jurisdictions, the following represents a list of the defenses that may apply to all levels of assault:
Exceptions exist to cover unsolicited physical contact that amounts to normal social behavior, known as de minimis harm. Assault may also be charged in cases involving spitting on another person or otherwise exposing them to unwanted bodily fluids.
Consent may be a complete or partial defense to assault. In some jurisdictions, most notably England, it is not a defense where the degree of injury is severe, as long as there is no legally recognized good reason for the assault. This can have important consequences when dealing with issues such as consensual sadomasochistic sexual activity, the most notable case being the Operation Spanner case. Legally recognized good reasons for consent include surgery, activities within the rules of a game (mixed martial arts, wrestling, boxing, or contact sports), bodily adornment ("R v Wilson" [1996] Crim LR 573), or horseplay ("R v Jones" [1987] Crim LR 123). However, any activity outside the rules of the game is not legally recognized as a defense of consent. In Scottish law, consent is not a defense for assault.
Police officers and court officials have a general power to use force for the purpose of performing an arrest or generally carrying out their official duties. Thus, a court officer taking possession of goods under a court order may use force if reasonably necessary.
In some jurisdictions such as Singapore, judicial corporal punishment is part of the legal system. The officers who administer the punishment have immunity from prosecution for assault.
In the United States, the United Kingdom, Australia and Canada, corporal punishment administered to children by their parent or legal guardian is not legally considered to be assault unless it is deemed to be excessive or unreasonable. What constitutes "reasonable" varies in both statutory law and case law. Unreasonable physical punishment may be charged as assault or under a separate statute for child abuse.
Many countries, including some US states, also permit the use of corporal punishment for children in school. In English law, s. 58 Children Act 2004 limits the availability of the lawful correction defense to common assault under s. 39 Criminal Justice Act 1988.
This may or may not involve self-defense in that, using a reasonable degree of force to prevent another from committing a crime could involve preventing an assault, but it could be preventing a crime not involving the use of personal violence.
Some jurisdictions allow force to be used in defense of property, to prevent damage either in its own right or under one or both of the preceding classes of defense, in that a threat or attempt to damage property might be considered a crime. In English law, under s5 Criminal Damage Act 1971, it may be argued that the defendant has a "lawful excuse" for damaging property during the defense, along with a defense under s3 Criminal Law Act 1967; both are subject to the need to deter vigilantes and excessive self-help. Furthermore, some jurisdictions, such as Ohio, allow residents in their homes to use force when ejecting an intruder. The resident merely needs to assert to the court that they felt threatened by the intruder's presence.
This defense is not universal: in New Zealand (for example) homeowners have been convicted of assault for attacking burglars.
Assault is an offence under s. 265 of the Canadian Criminal Code. There is a wide range of the types of assault that can occur. Generally, an assault occurs when a person directly or indirectly applies force intentionally to another person without their consent. It can also occur when a person attempts to apply such force, or threatens to do so, without the consent of the other person. An injury need not occur for an assault to be committed, but the force used in the assault must be offensive in nature with an intention to apply force. It can be an assault to "tap", "pinch", "push", or direct another such minor action toward another, but an accidental application of force is not an assault.
The potential punishment for an assault in Canada varies depending on the manner in which the charge proceeds through the court system and the type of assault that is committed. The Criminal Code defines assault as a dual offence (indictable or summary offence). Police officers can arrest someone without a warrant for an assault if it is in the public's interest to do so notwithstanding S.495(2)(d) of the Code. This public interest is usually satisfied by preventing a continuation or repetition of the offence on the same victim.
Some variations on the ordinary crime of assault include:
An individual cannot consent to an assault with a weapon, assault causing bodily harm, aggravated assault, or any sexual assault. Consent will also be vitiated if two people consent to fight but serious bodily harm is intended and caused (R v Paice; R v Jobidon). A person cannot consent to serious bodily harm.
The Indian Penal Code covers the punishments and types of assault in Chapter 16, sections 351 through 358.
The Code further explains that "mere words do not amount to an assault. But the words which a person uses may give to their gestures or preparation such a meaning as may make those gestures or preparations amount to an assault". In Indian criminal law, assault is an attempt to use criminal force (criminal force being described in s.350). The attempt itself has been made an offence in India, as in other states.
The Criminal Code Act (chapter 29 of Part V; sections 351 to 365) creates a number of offences of assault. Assault is defined by section 252 of that Act. Assault is a misdemeanor punishable by one year's imprisonment; a person who assaults with "intent to have carnal knowledge of him or her", who indecently assaults another, or who commits other more-serious variants of assault (as defined in the Act) is guilty of a felony, and longer prison terms are provided for.
Marshall Islands
The offence of assault is created by section 113 of the Criminal Code. A person is guilty of this offence if they unlawfully offer or attempt, with force or violence, to strike, beat, wound, or do bodily harm to, another.
Section 2 of the Non-Fatal Offences against the Person Act 1997 creates the offence of assault, and section 3 of that Act creates the offence of assault causing harm.
South African law does not draw the distinction between assault and battery. "Assault" is a common law crime defined as "unlawfully and intentionally applying force to the person of another, or inspiring a belief in that other that force is immediately to be applied to him". The law also recognises the crime of "assault with intent to cause grievous bodily harm", where grievous bodily harm is defined as "harm which in itself is such as seriously to interfere with health". The common law crime of "indecent assault" was repealed by the Criminal Law (Sexual Offences and Related Matters) Amendment Act, 2007, and replaced by a statutory crime of "sexual assault".
Abolished offences:
English law provides for two offences of assault: common assault and battery. Assault (or common assault) is committed if one intentionally or recklessly causes another person to apprehend immediate and unlawful personal violence. "Violence" in this context means any unlawful touching, though there is some debate over whether the touching must also be hostile. The terms "assault" and "common assault" often encompass the separate offence of battery, even in statutory settings such as s 40(3)(a) of the Criminal Justice Act 1988.
A common assault is an assault that lacks any of the aggravating features which Parliament has deemed serious enough to deserve a higher penalty. Section 39 of the Criminal Justice Act 1988 provides that common assault, like battery, is triable only in the magistrates' court in England and Wales (unless it is linked to a more serious offence, which is triable in the Crown Court). Additionally, if a defendant has been charged on an indictment with assault occasioning actual bodily harm (ABH), or racially/religiously aggravated assault, then a jury in the Crown Court may acquit the defendant of the more serious offence, but still convict of common assault if it finds common assault has been committed.
An assault which is aggravated by the scale of the injuries inflicted may be charged as offences causing "actual bodily harm" (ABH) or, in the severest cases, "grievous bodily harm" (GBH).
Other aggravated assault charges refer to assaults carried out against a specific target or with a specific intent:
In Scots Law, assault is defined as an "attack upon the person of another". There is no distinction made in Scotland between assault and battery (which is not a term used in Scots law), although, as in England and Wales, assault can be occasioned without a "physical" attack on another's person, as demonstrated in "Atkinson v. HM Advocate" wherein the accused was found guilty of assaulting a shop assistant by simply jumping over a counter wearing a ski mask. The court said:
Scottish law also provides for a more serious charge of aggravated assault on the basis of such factors as severity of injury, the use of a weapon, or "Hamesucken" (to assault a person in their own home). The "mens rea" for assault is simply "evil intent", although this has been held to mean no more than that assault "cannot be committed accidentally or recklessly or negligently" as upheld in "Lord Advocate's Reference No 2 of 1992" where it was found that a "hold-up" in a shop justified as a joke would still constitute an offence.
It is a separate offence to assault a constable in the execution of their duty, under Section 90 of the Police and Fire Reform (Scotland) Act 2012 (previously Section 41 of the Police (Scotland) Act 1967), which provides that it is an offence for a person to, amongst other things, assault a constable in the execution of their duty or a person assisting a constable in the execution of their duty.
Several offences of assault exist in Northern Ireland. The Offences against the Person Act 1861 creates the offences of:
The Criminal Justice (Miscellaneous Provisions) Act (Northern Ireland) 1968 creates the offences of:
That Act formerly created the offence of 'assault on a constable in the execution of his duty' under section 7(1)(a), but that section has been superseded by section 66(1) of the Police (Northern Ireland) Act 1998 (c.32), which now provides that it is an offence for a person to, amongst other things, assault a constable in the execution of his duty, or a person assisting a constable in the execution of his duty.
The term 'assault', when used in legislation, commonly refers to both common assault and battery, even though the two offences remain distinct. Common assault involves intentionally or recklessly causing a person to apprehend the imminent infliction of unlawful force, whilst battery refers to the actual infliction of force.
Each state has legislation relating to the act of assault. Summary offences against that legislation are heard in the Magistrates Court of the state, while indictable offences are heard in a District or Supreme Court of that state. The legislation that defines assault in each state outlines the elements that make up the assault, where the assault is sectioned in legislation or criminal codes, and the penalties that apply for the offence of assault.
In New South Wales, the Crimes Act 1900 defines a range of assault offences deemed more serious than common assault and which attract heavier penalties. These include:
In the United States, assault may be defined as an attempt to commit a battery. However, the crime of assault can encompass acts in which no battery is intended, but the defendant's act nonetheless creates reasonable fear in others that a battery will occur.
Four elements were required at common law:
As the criminal law evolved, element one was weakened in most jurisdictions so that a reasonable fear of bodily injury would suffice. These four elements were eventually codified in most states.
The crime of assault generally requires that both the perpetrator and the victim of an assault be a natural person. Thus, unless the attack is directed by a person, an animal attack does not constitute an assault. However, the Unborn Victims of Violence Act of 2004 treats a fetus as a separate person for the purposes of assault and other violent crimes, under certain limited circumstances. See H.R. 1997/P.L. 108-212.
Possible examples of defenses, mitigating circumstances, or failures of proof that may be raised in response to an assault charge include:
Laws on assault vary by state. Since each state has its own criminal laws, there is no universal assault law. Acts classified as assault in one state may be classified as battery, menacing, intimidation, reckless endangerment, etc. in another state. Assault is often subdivided into two categories, simple assault and aggravated assault.
Modern American statutes may define assault as including:
In some states, consent is a complete defense to assault. In other jurisdictions, mutual consent is an incomplete defense to an assault charge such that an assault charge is prosecuted as a less significant offense such as a "petty misdemeanor".
States vary on whether it is possible to commit an "attempted assault" since it can be considered a double inchoate offense.
In Kansas the law on assault states:
In New York State, assault (as defined in the New York State Penal Code Article 120) requires an actual injury. Other states define this as battery; there is no crime of battery in New York. However, in New York, if a person threatens another person with imminent injury without engaging in physical contact, that is called "menacing". A person who engages in that behavior is guilty of aggravated harassment in the second degree (a Class A misdemeanor; punishable with up to one year incarceration, probation for an extended time, and a permanent criminal record) when they threaten to cause physical harm to another person, and guilty of aggravated harassment in the first degree (a Class E felony) if they have a previous conviction for the same offense. New York also has specific laws against hazing, when such threats are made as a requirement to join an organization.
North Dakota law states:
In Tennessee assault is defined as follows:
Assault in Ancient Greece was normally termed hubris. Contrary to modern usage, the term did not have the extended connotation of overweening pride, self-confidence or arrogance, often resulting in fatal retribution. In Ancient Greece, "hubris" referred to actions which, intentionally or not, shamed and humiliated the victim, and frequently the perpetrator as well. It was most evident in the public and private actions of the powerful and rich.
Violations of the law against hubris included what would today be termed assault and battery; sexual crimes ranging from forcible rape of women or children to consensual but improper activities; and the theft of public or sacred property. Two well-known cases are found in the speeches of Demosthenes, a prominent statesman and orator in ancient Greece. These two examples occurred when first, Meidias punched Demosthenes in the face in the theater (Against Meidias), and second, when (in Against Konon) a defendant allegedly assaulted a man and crowed over the victim.
Hubris, though not specifically defined, was a legal term and was considered a crime in classical Athens. It was also considered the greatest sin of the ancient Greek world. That was so because it not only was proof of excessive pride, but also resulted in violent acts by or to those involved. The category of acts constituting hubris for the ancient Greeks apparently broadened from the original specific reference to mutilation of a corpse, or a humiliation of a defeated foe, or irreverent, "outrageous treatment", in general.
The meaning was eventually further generalized in its modern English usage to apply to any outrageous act or exhibition of pride or disregard for basic moral laws. Such an act may be referred to as an "act of hubris", or the person committing the act may be said to be hubristic. Atë, Greek for 'ruin, folly, delusion', is the action performed by the hero, usually because of their hubris, or great pride, that leads to their death or downfall.
Crucial to this definition are the ancient Greek concepts of honor (timē) and shame. The concept of timē included not only the exaltation of the one receiving honor, but also the shaming of the one overcome by the act of hubris. This concept of honor is akin to a zero-sum game. Rush Rehm simplifies this definition to the contemporary concept of "insolence, contempt, and excessive violence". | https://en.wikipedia.org/wiki?curid=1466 |
Álfheimr
In Norse cosmology, Alfheim (Old Norse "Álfheimr", "Land of the Elves" or "Elfland"), also called Ljosalfheim ("Ljósálf[a]heimr", "home of the light-elves"), is the home of the Light Elves.
Álfheim as an abode of the Elves is mentioned only twice in Old Norse texts.
The eddic poem "Grímnismál" describes twelve divine dwellings beginning in stanza 5 with:
Ýdalir call they the place where Ull
A hall for himself hath set;
And Álfheim the gods to Frey once gave
As a tooth-gift in ancient times.
A tooth-gift was a gift given to an infant on the cutting of the first tooth.
In the 13th-century eddic prose "Gylfaginning", Snorri Sturluson relates it as the first of a series of abodes in heaven:
That which is called Álfheim is one, where dwell the peoples called "ljósálfar" [Light Elves]; but the "dökkálfar" [Dark Elves] dwell down in the earth, and they are unlike in appearance, but by far more unlike in nature. The Light-elves are fairer to look upon than the sun, but the Dark-elves are blacker than pitch.
The account later, in speaking of a hall in the Highest Heaven called Gimlé that shall survive when heaven and earth have died, explains:
It is said that another heaven is to the southward and upward of this one, and it is called Andlang ["Andlangr" 'Endlong'] but the third heaven is yet above that, and it is called Vídbláin ["Vídbláinn" 'Wide-blue'] and in that heaven we think this abode is. But we believe that none but Light-Elves inhabit these mansions now.
It is not indicated whether these heavens are identical to Álfheim or distinct. Some texts read Vindbláin ("Vindbláinn" 'Wind-blue') instead of Vídbláin.
Modern commentators speculate (or sometimes state as fact) that Álfheim was one of the nine worlds ("heima") mentioned in stanza 2 of the eddic poem "Völuspá". | https://en.wikipedia.org/wiki?curid=1478 |
Ask and Embla
In Norse mythology, Ask and Embla (from )—male and female respectively—were the first two humans, created by the gods. The pair are attested in both the "Poetic Edda", compiled in the 13th century from earlier traditional sources, and the "Prose Edda", written in the 13th century by Snorri Sturluson. In both sources, three gods, one of whom is Odin, find Ask and Embla and bestow upon them various corporeal and spiritual gifts. A number of theories have been proposed to explain the two figures, and there are occasional references to them in popular culture.
Old Norse "askr" literally means "ash tree", but the etymology of "embla" is uncertain, and two possibilities of the meaning of "embla" are generally proposed. The first meaning, "elm tree", is problematic, and is reached by deriving "*Elm-la" from "*Almilōn" and subsequently to "almr" ("elm"). The second suggestion is "vine", which is reached through "*Ambilō", which may be related to the Greek term ("ámpelos"), itself meaning "vine, liana". The latter etymology has resulted in a number of theories.
According to Benjamin Thorpe "Grimm says the word embla, emla, signifies a busy woman, from amr, ambr, aml, ambl, assiduous labour; the same relation as Meshia and Meshiane, the ancient Persian names of the first man and woman, who were also formed from trees."
In stanza 17 of the "Poetic Edda" poem "Völuspá", the seeress reciting the poem states that Hœnir, Lóðurr and Odin once found Ask and Embla on land. The seeress says that the two were capable of very little, lacking in "ørlög" and says that they were given three gifts by the three gods:
The meaning of these gifts has been a matter of scholarly disagreement and translations therefore vary.
According to chapter 9 of the "Prose Edda" book "Gylfaginning", the three brothers Vili, Vé, and Odin, are the creators of the first man and woman. The brothers were once walking along a beach and found two trees there. They took the wood and from it created the first human beings; Ask and Embla. One of the three gave them the breath of life, the second gave them movement and intelligence, and the third gave them shape, speech, hearing and sight. Further, the three gods gave them clothing and names. Ask and Embla go on to become the progenitors of all humanity and were given a home within the walls of Midgard.
A Proto-Indo-European basis has been theorized for the duo based on the etymology of "embla" meaning "vine". In Indo-European societies, an analogy is drawn between the drilling of fire and sexual intercourse. Vines were used as flammable wood: placed beneath a drill made of harder wood, they yielded fire. Further evidence of ritual fire-making in Scandinavia has been theorized from a depiction on a stone plate on a Bronze Age grave in Kivik, Scania, Sweden.
Jaan Puhvel comments that "ancient myths teem with trite 'first couples' of the type of Adam and his by-product Eve. In Indo-European tradition, these range from the Vedic Yama and Yamī and the Iranian Mašya and Mašyānag to the Icelandic Askr and Embla, with trees or rocks as preferred raw material, and dragon's teeth or other bony substance occasionally thrown in for good measure".
In his study of the comparative evidence for an origin of mankind from trees in Indo-European society, Anders Hultgård observes that "myths of the origin of mankind from trees or wood seem to be particularly connected with ancient Europe and Indo-European-speaking peoples of Asia Minor and Iran. By contrast the cultures of the Near East show almost exclusively the type of anthropogonic stories that derive man's origin from clay, earth or blood by means of a divine creation act".
Two wooden figures—the Braak Bog Figures—of "more than human height" were unearthed from a peat bog at Braak in Schleswig, Germany. The figures depict a nude male and a nude female. Hilda Ellis Davidson comments that these figures may represent a "Lord and Lady" of the Vanir, a group of Norse gods, and that "another memory of [these wooden deities] may survive in the tradition of the creation of Ask and Embla, the man and woman who founded the human race, created by the gods from trees on the seashore".
A figure named Æsc (Old English "ash tree") appears as the son of Hengest in the Anglo-Saxon genealogy for the kings of Kent. This has resulted in a number of theories that the figures may have had an earlier basis in pre-Norse Germanic mythology.
Connections have been proposed between Ask and Embla and the Vandal kings Assi and Ambri, attested in Paul the Deacon's 7th century AD work "Origo Gentis Langobardorum". There, the two ask the god Godan (Odin) for victory. The name "Ambri", like Embla, likely derives from "*Ambilō".
A stanza preceding the account of the creation of Ask and Embla in "Völuspá" provides a catalog of dwarfs, and stanza 10 has been considered as describing the creation of human forms from the earth. This may potentially mean that dwarfs formed humans, and that the three gods gave them life. Carolyne Larrington theorizes that humans are metaphorically designated as trees in Old Norse works (examples include "trees of jewellery" for women and "trees of battle" for men) due to the origin of humankind stemming from trees; Ask and Embla.
Ask and Embla have been the subject of a number of references and artistic depictions. A sculpture depicting the two stands in the southern Swedish city of Sölvesborg, created in 1948 by Stig Blomberg. Ask and Embla are depicted on two of the sixteen wooden panels found on the Oslo City Hall in Oslo, Norway, by Dagfin Werenskiold. | https://en.wikipedia.org/wiki?curid=1482 |
Alabama River
The Alabama River, in the U.S. state of Alabama, is formed by the Tallapoosa and Coosa rivers, which unite about north of Montgomery, near the suburb of Wetumpka.
The river flows west to Selma, then southwest until, about from Mobile, it unites with the Tombigbee, forming the Mobile and Tensaw rivers, which discharge into Mobile Bay.
The run of the Alabama is highly meandering. Its width varies from , and its depth from . Its length as measured by the United States Geological Survey is , and by steamboat measurement, .
The river crosses the richest agricultural and timber districts of the state. Railways connect it with the mineral regions of north-central Alabama.
After the Coosa and Tallapoosa rivers, the principal tributary of the Alabama is the Cahaba River, which is about long and joins the Alabama River about below Selma. The Coosa River crosses the mineral region of Alabama and is navigable for light-draft boats from Rome, Georgia, to about above Wetumpka (about below Rome and below Greensport), and from Wetumpka to its junction with the Tallapoosa. The channel of the river has been considerably improved by the federal government.
The navigation of the Tallapoosa River – which has its source in Paulding County, Georgia, and is about long – is prevented by shoals and a fall at Tallassee, a few miles north of its junction with the Coosa. The Alabama is navigable throughout the year.
The river played an important role in the growth of the economy in the region during the 19th century as a source of transportation of goods. The river is still used for transportation of farming produce; however, it is not as important as it once was due to the construction of roads and railways.
Documented by Europeans first in 1701, the Alabama, Coosa, and Tallapoosa rivers were central to the homeland of the Creek Indians before their removal by United States forces to the Indian Territory in the 1830s.
The Edmund Pettus Bridge crosses the Alabama River near Selma. The bridge was the site of the famous marches for voting rights in 1965; the first became known as "Bloody Sunday" because the state and county police beat protesters after they crossed out of the city.
The Alabama River has three lock and dams between Montgomery and the Mobile River. The Robert F. Henry Lock & Dam is located at river mile 236.2, the Millers Ferry Lock & Dam is located at river mile 133.0, and the Claiborne Lock & Dam is located at river mile 72.5. | https://en.wikipedia.org/wiki?curid=1484 |
Alain de Lille
Alain de Lille (Alan of Lille) (Latin: "Alanus ab Insulis"; c. 1128 – 1202/03) was a French theologian and poet. He was born in Lille some time before 1128. His exact date of death also remains unclear, with most research pointing toward it being between 14 April 1202 and 5 April 1203.
Little is known of his life. Alain entered the schools no earlier than the late 1140s, first attending the school at Paris and then at Chartres. He probably studied under masters such as Peter Abelard, Gilbert of Poitiers, and Thierry of Chartres. This is known through the writings of John of Salisbury, who is thought to have been a contemporary student of Alain of Lille. His earliest writings were probably written in the 1150s, and probably in Paris. Alain spent many years as a professor of Theology at the University of Paris and he attended the Lateran Council in 1179. Though the only accounts of his lectures seem to show a sort of eccentric style and approach, he was said to have been good friends with many other masters at the school in Paris, and taught there, as well as some time in southern France, into his old age. He afterwards inhabited Montpellier (he is sometimes called "Alanus de Montepessulano"), lived for a time outside the walls of any cloister, and finally retired to Cîteaux, where he died in 1202.
He had a very widespread reputation during his lifetime, and his knowledge caused him to be called "Doctor Universalis". Many of Alain's writings cannot be exactly dated, and the circumstances and details surrounding his writing are often unknown as well. However, it does seem clear that his first notable work, "Summa Quoniam Homines", was completed somewhere between 1155 and 1165, with the most probable date being 1160, and was likely developed through his lectures at the school in Paris. Among his very numerous works two poems entitle him to a distinguished place in the Latin literature of the Middle Ages; one of these, the "De planctu naturae", is an ingenious satire on the vices of humanity. He created the allegory of grammatical "conjugation" which was to have its successors throughout the Middle Ages. The "Anticlaudianus", a treatise on morals as allegory, the form of which recalls the pamphlet of Claudian against Rufinus, is agreeably versified and relatively pure in its latinity.
As a theologian Alain de Lille shared in the mystic reaction of the second half of the 12th century against the scholastic philosophy. His mysticism, however, is far from being as absolute as that of the Victorines. In the "Anticlaudianus" he sums up as follows: Reason, guided by prudence, can unaided discover most of the truths of the physical order; for the apprehension of religious truths it must trust to faith. This rule is completed in his treatise, "Ars catholicae fidei", as follows: Theology itself may be demonstrated by reason. Alain even ventures an immediate application of this principle, and tries to prove geometrically the dogmas defined in the Creed. This bold attempt is entirely factitious and verbal, and it is only his employment of various terms not generally used in such a connection (axiom, theorem, corollary, etc.) that gives his treatise its apparent originality.
Alan's philosophy was a sort of mixture of Aristotelian logic and Neoplatonic philosophy. The Platonist seemed to outweigh the Aristotelian in Alan, but he felt strongly that the divine is all intelligibility and argued this notion through much Aristotelian logic combined with Pythagorean mathematics.
One of Alain's most notable works was one he modeled after Boethius’ "Consolation of Philosophy", to which he gave the title "De Planctu Naturae", or "The Plaint of Nature", and which was most likely written in the late 1160s. In this work, Alan uses prose and verse to illustrate the way in which nature defines its own position as inferior to that of God. He also attempts to illustrate the way in which humanity, through sexual perversion and specifically homosexuality, has defiled itself from nature and God. In "Anticlaudianus", another of his notable works, Alan uses a poetical dialogue to illustrate the way in which nature comes to the realization of her failure in producing the perfect man. She has only the ability to create a soulless body, and thus she is "persuaded to undertake the journey to heaven to ask for a soul," and "the Seven Liberal Arts produce a chariot for her... the Five Senses are the horses". The "Anticlaudianus" was translated into French and German in the following century, and toward 1280 was re-worked into a musical anthology by Adam de la Bassée. One of Alan's most popular and widely distributed works is his manual on preaching, "Ars Praedicandi", or "The Art of Preaching". This work shows how Alan saw theological education as being a fundamental preliminary step in preaching and strove to give clergymen a manuscript to be "used as a practical manual" when it came to the formation of sermons and the art of preaching.
Alain wrote three very large theological textbooks, one being his first work, "Summa Quoniam Homines". Another of his theological textbooks, narrower in its focus, is "De Fide Catholica", dated somewhere between 1185 and 1200, in which Alan sets out to refute heretical views, specifically those of the Waldensians and Cathars. In his third theological textbook, "Regulae Caelestis Iuris", he presents a set of what seem to be theological rules; this was typical of the followers of Gilbert of Poitiers, with whom Alan may be associated. Beyond these theological textbooks and the aforementioned works mixing prose and poetry, Alan of Lille produced numerous other works on numerous subjects, including speculative theology, theoretical moral theology, practical moral theology, and various collections of poems.
Alain de Lille has often been confounded with other persons named Alain, in particular with another Alanus (Alain, bishop of Auxerre), Alan, abbot of Tewkesbury, Alain de Podio, etc. Certain facts of their lives have been attributed to him, as well as some of their works: thus the "Life of St Bernard" should be ascribed to Alain of Auxerre and the "Commentary upon Merlin" to Alan of Tewkesbury. Alan of Lille was not the author of a "Memoriale rerum difficilium", published under his name, nor of "Moralium dogma philosophorum", nor of the satirical "Apocalypse of Golias" once attributed to him; and it is exceedingly doubtful whether the "Dicta Alani de lapide philosophico" really issued from his pen. On the other hand, it now seems practically demonstrated that Alain de Lille was the author of the "Ars catholicae fidei" and the treatise "Contra haereticos".
In his sermons on capital sins, Alain argued that sodomy and homicide are the most serious sins, since they call forth the wrath of God, which led to the destruction of Sodom and Gomorrah. His chief work on penance, the "Liber poenitentialis", dedicated to Henry de Sully, exercised great influence on the many manuals of penance produced as a result of the Fourth Lateran Council. Alain's identification of the sins against nature included bestiality, masturbation, oral and anal intercourse, incest, adultery and rape. In addition to his battle against moral decay, Alan wrote a work against Islam, Judaism and Christian heretics dedicated to William VIII of Montpellier.
Alemanni
The Alemanni (also "Alamanni"; "Suebi", "Swabians") were a confederation of Germanic tribes on the Upper Rhine River. First mentioned by Cassius Dio in the context of the campaign of Caracalla of 213, the Alemanni captured the Agri Decumates in 260, and later expanded into present-day Alsace and northern Switzerland, leading to the establishment of the Old High German language in those regions, by the eighth century named "Alamannia".
In 496, the Alemanni were conquered by Frankish leader Clovis and incorporated into his dominions. Mentioned as still pagan allies of the Christian Franks, the Alemanni were gradually Christianized during the seventh century. The "Lex Alamannorum" is a record of their customary law during this period. Until the eighth century, Frankish suzerainty over Alemannia was mostly nominal. After an uprising by Theudebald, Duke of Alamannia, though, Carloman executed the Alamannic nobility and installed Frankish dukes.
During the later and weaker years of the Carolingian Empire, the Alemannic counts became almost independent, and a struggle for supremacy took place between them and the Bishopric of Constance. The chief family in Alamannia was that of the counts of , who were sometimes called margraves, and one of whom, Burchard II, established the Duchy of Swabia, which was recognized by Henry the Fowler in 919 and became a stem duchy of the Holy Roman Empire.
The area settled by the Alemanni corresponds roughly to the area where Alemannic German dialects remain spoken, including German Swabia and Baden, French Alsace, German-speaking Switzerland, Liechtenstein and Austrian Vorarlberg.
The French language name of Germany, "Allemagne", is derived from their name, from Old French "aleman(t)", and was loaned from French into a number of other languages, including Middle English, which commonly used the term "Almains" for Germans. Likewise, the Arabic name for Germany is المانيا ("Almania"), the Spanish is "Alemania", the Portuguese is "Alemanha", the Welsh is "Yr Almaen" and the Persian is "Alman".
According to Gaius Asinius Quadratus (quoted in the mid-sixth century by the Byzantine historian Agathias), the name "Alamanni" (Ἀλαμανοι) means "all men". It indicates that they were a conglomeration drawn from various Germanic tribes, and the Romans and the Greeks knew them by this name. This derivation was accepted by Edward Gibbon, in his "Decline and Fall of the Roman Empire", and by the anonymous contributor of notes assembled from the papers of Nicolas Fréret, published in 1753.
This etymology has remained the standard derivation of the name.
An alternative suggestion proposes derivation from "*alah" "sanctuary".
Walafrid Strabo in the 9th century remarked, in discussing the people of Switzerland and the surrounding regions, that only foreigners called them the Alemanni, but that they gave themselves the name of "Suebi".
The Suebi are given the alternative name of "Ziuwari" (as "Cyuuari") in an Old High German gloss, interpreted by Jacob Grimm as "Martem colentes" ("worshippers of Mars").
The Alemanni were first mentioned by Cassius Dio describing the campaign of Caracalla in 213. At that time, they apparently dwelt in the basin of the Main, to the south of the Chatti.
Cassius Dio portrays the Alemanni as victims of this treacherous emperor. They had asked for his help, according to Dio, but instead he colonized their country, changed their place names, and executed their warriors under a pretext of coming to their aid. When he became ill, the Alemanni claimed to have put a hex on him. Caracalla, it was claimed, tried to counter this influence by invoking his ancestral spirits.
In retribution, Caracalla then led the Legio II "Traiana Fortis" against the Alemanni, who lost and were pacified for a time. The legion was as a result honored with the name "Germanica." The fourth-century fictional Historia Augusta, "Life of Antoninus Caracalla", relates (10.5) that Caracalla then assumed the name "Alemannicus," at which Helvius Pertinax jested that he should really be called "Geticus Maximus," because in the year before he had murdered his brother, Geta.
Through much of his short reign, Caracalla was known for unpredictable and arbitrary operations launched by surprise after a pretext of peace negotiations. If he had any reasons of state for such actions, they remained unknown to his contemporaries. Whether or not the Alemanni had been previously neutral, they were certainly further influenced by Caracalla to become thereafter notoriously implacable enemies of Rome.
This mutually antagonistic relationship is perhaps the reason why the Roman writers persisted in calling the Alemanni "barbari", meaning "savages". The archaeology, however, shows that they were largely Romanized, lived in Roman-style houses and used Roman artifacts, the Alemannic women having adopted the Roman fashion of the "tunica" even earlier than the men.
Most of the Alemanni were probably at the time, in fact, resident in or close to the borders of Germania Superior. Although Dio is the earliest writer to mention them, Ammianus Marcellinus used the name to refer to Germans on the Limes Germanicus in the time of Trajan's governorship of the province shortly after it was formed, around 98-99 AD. At that time, the entire frontier was being fortified for the first time. Trees from the earliest fortifications found in Germania Inferior are dated by dendrochronology to 99-100 AD.
Ammianus relates (xvii.1.11) that much later the Emperor Julian undertook a punitive expedition against the Alemanni, who by then were in Alsace, and crossed the Main (Latin "Menus"), entering the forest, where the trails were blocked by felled trees. As winter was upon them, they reoccupied a "fortification which was founded on the soil of the Alemanni that Trajan wished to be called with his own name".
In this context, the use of Alemanni is possibly an anachronism, but it reveals that Ammianus believed they were the same people, which is consistent with the location of the Alemanni of Caracalla's campaigns.
"Germania" by Tacitus (AD 90) in Chapter 42 states that the Hermunduri were a tribe certainly located in the region that later became Thuringia. Tacitus states that they traded with Rhaetia, which in Ptolemy is located across the Danube from Germania Superior, suggesting that the Alemanni originally in part derived from the Hermunduri.
However, no Hermunduri appear in Ptolemy, though after the time of Ptolemy, the Hermunduri joined with the Marcomanni in the wars of 166–180 against the empire.
Tacitus says that the source of the Elbe is among the Hermunduri, somewhat to the east of the upper Main. He places them also between the Naristi (Varisti), whose location was at the very edge of the Black Forest, and the Marcomanni and Quadi. Moreover, the Hermunduri were broken in the Marcomannic Wars and made a separate peace with Rome.
The Alemanni thus were probably not primarily the Hermunduri, although some elements of them may have been present.
Before the mention of Alemanni in the time of Caracalla, one would search in vain for Alemanni in the moderately detailed geography of southern Germany in Claudius Ptolemy, written in Greek in the mid-second century; at that time, the people who later used that name likely were known by other designations.
Nevertheless, some conclusions can be drawn from Ptolemy. Germania Superior is easily identified. Following up the Rhine one comes to a town, Mattiacum, which must be at the border of the Roman Germany (vicinity of Wiesbaden). Upstream from it and between the Rhine and Abnoba (in the Black Forest) are the Ingriones, Intuergi, Vangiones, Caritni and Vispi, some of whom were there since the days of the early empire or before. On the other side of the northern Black Forest were the Chatti about where Hesse is today, on the lower Main.
Historic Swabia was eventually replaced by today's Baden-Württemberg, but it had been the most significant territory of mediaeval Alamannia, comprising all Germania Superior and territory east to Bavaria. It did not include the upper Main, but that is where Caracalla campaigned. Moreover, the territory of Germania Superior was not originally included among the Alemanni's possessions.
However, if one looks for the peoples in the region from the upper Main in the north, south to the Danube and east to the Czech Republic where the Quadi and Marcomanni were located, Ptolemy does not give any tribes. The Tubanti are just south of the Chatti, and at the other end of what was then the Black Forest were the Varisti, whose location is known. One possible reason for this distribution is that the population preferred not to live in the forest except in troubled times. The region between the forest and the Danube, though, included about a dozen settlements, or "cantons".
Ptolemy's view of Germans in the region indicates that the tribal structure had lost its grip in the Black Forest region and was replaced by a canton structure. The tribes stayed in the Roman province, perhaps because the Romans offered stability. Also, Caracalla perhaps felt more comfortable about campaigning in the upper Main because he was not declaring war on any specific historic tribe, such as the Chatti or Cherusci, against whom Rome had suffered grievous losses. By Caracalla's time, the name Alemanni was being used by cantons themselves banding together for purposes of supporting a citizen army (the "war bands").
The term Suebi has a double meaning in the sources. On the one hand Tacitus' "Germania" tells us (Chapters 38, 39) that they occupy more than half of Germany, use a distinctive hair style, and are spiritually centered on the Semnones. On the other hand, the Suebi of the upper Danube are described as though they were a tribe.
The solution to the puzzle as well as explaining the historical circumstances leading to the choice of the Agri Decumates as a defensive point and the concentration of Germans there are probably to be found in the German attack on the Gallic fortified town of Vesontio in 58 BC. The upper Rhine and Danube appear to form a funnel pointing straight at Vesontio.
Julius Caesar in "Gallic Wars" tells us (1.51) that Ariovistus had gathered an army from a wide region of Germany, but especially the Harudes, Marcomanni, Triboci, Vangiones, Nemetes and Sedusii. The Suebi were being invited to join. They lived in 100 cantons (4.1) from which 1000 young men per year were chosen for military service, a citizen-army by our standards and by comparison with the Roman professional army.
Ariovistus had become involved in an invasion of Gaul, which the German wished to settle. Intending to take the strategic town of Vesontio, he concentrated his forces on the Rhine near Lake Constance, and when the Suebi arrived, he crossed. The Gauls had called to Rome for military aid. Caesar occupied the town first and defeated the Germans before its walls, slaughtering most of the German army as it tried to flee across the river (1.36ff). He did not pursue the retreating remnants, leaving what was left of the German army and their dependents intact on the other side of the Rhine.
The Gauls were ambivalent in their policies toward the Romans. In 53 BC the Treveri broke their alliance and attempted to break free of Rome. Caesar foresaw that they would now attempt to ally themselves with the Germans. He crossed the Rhine to forestall that event, a successful strategy. Remembering their expensive defeat at the Battle of Vesontio, the Germans withdrew to the Black Forest, concentrating there a mixed population dominated by Suebi. As they had left their tribal homes behind, they probably took over all the former Celtic cantons along the Danube.
The Alemanni were continually engaged in conflicts with the Roman Empire in the 3rd and 4th centuries. They launched a major invasion of Gaul and northern Italy in 268, when the Romans were forced to denude much of their German frontier of troops in response to a massive invasion of the Goths from the east. Their raids throughout the three parts of Gaul were traumatic: Gregory of Tours (died ca 594) mentions their destructive force at the time of Valerian and Gallienus (253–260), when the Alemanni assembled under their "king", whom he calls Chrocus, who "by the advice, it is said, of his wicked mother, overran the whole of the Gauls, and destroyed from their foundations all the temples which had been built in ancient times. And coming to Clermont he set on fire, overthrew and destroyed that shrine which they call "Vasso Galatae" in the Gallic tongue," martyring many Christians ("Historia Francorum" Book I.32–34). Thus 6th-century Gallo-Romans of Gregory's class, surrounded by the ruins of Roman temples and public buildings, attributed the destruction they saw to the plundering raids of the Alemanni.
In the early summer of 268, the Emperor Gallienus halted their advance into Italy, but then had to deal with the Goths. When the Gothic campaign ended in Roman victory at the Battle of Naissus in September, Gallienus' successor Claudius Gothicus turned north to deal with the Alemanni, who were swarming over all Italy north of the Po River.
After efforts to secure a peaceful withdrawal failed, Claudius forced the Alemanni into the Battle of Lake Benacus in November. The Alemanni were routed, forced back into Germany, and did not threaten Roman territory for many years afterwards.
Their most famous battle against Rome took place in Argentoratum (Strasbourg), in 357, where they were defeated by Julian, later Emperor of Rome, and their king Chnodomarius was taken prisoner to Rome.
On January 2, 366, the Alemanni yet again crossed the frozen Rhine in large numbers, to invade the Gallic provinces, this time being defeated by Valentinian (see Battle of Solicinium). In the great mixed invasion of 406, the Alemanni appear to have crossed the Rhine river a final time, conquering and then settling what is today Alsace and a large part of the Swiss Plateau. The crossing is described in Wallace Breem's historical novel "Eagle in the Snow". The Chronicle of Fredegar gives the account. At "Alba Augusta" (Alba-la-Romaine) the devastation was so complete, that the Christian bishop retired to Viviers, but in Gregory's account at Mende in Lozère, also deep in the heart of Gaul, bishop Privatus was forced to sacrifice to idols in the very cave where he was later venerated. It is thought this detail may be a generic literary ploy to epitomize the horrors of barbarian violence.
The kingdom of Alamannia between Strasbourg and Augsburg lasted until 496, when the Alemanni were conquered by Clovis I at the Battle of Tolbiac. The war of Clovis with the Alemanni forms the setting for the conversion of Clovis, briefly treated by Gregory of Tours. (Book II.31) Subsequently, the Alemanni formed part of the Frankish dominions and were governed by a Frankish duke.
In 746, Carloman ended an uprising by summarily executing all Alemannic nobility at the blood court at Cannstatt, and for the following century, Alemannia was ruled by Frankish dukes. Following the treaty of Verdun of 843, Alemannia became a province of the eastern kingdom of Louis the German, the precursor of the Holy Roman Empire. The duchy persisted until 1268.
The German spoken today over the range of the former Alemanni is termed Alemannic German, and is recognised among the subgroups of the High German languages. Alemannic runic inscriptions such as those on the Pforzen buckle are among the earliest testimonies of Old High German.
The High German consonant shift is thought to have originated around the 5th century either in Alemannia or among the Lombards; before that the dialect spoken by Alemannic tribes was little different from that of other West Germanic peoples.
"Alemannia" lost its distinct jurisdictional identity when Charles Martel absorbed it into the Frankish empire, early in the 8th century. Today, "Alemannic" is a linguistic term, referring to Alemannic German, encompassing the dialects of the southern two thirds of Baden-Württemberg (German State), in western Bavaria (German State), in Vorarlberg (Austrian State), Swiss German in Switzerland and the Alsatian language of the Alsace (France).
The Alemanni established a series of territorially defined "pagi" (cantons) on the east bank of the Rhine. The exact number and extent of these "pagi" is unclear and probably changed over time.
"Pagi", usually pairs of "pagi" combined, formed kingdoms ("regna") which, it is generally believed, were permanent and hereditary. Ammianus describes Alemanni rulers with various terms: "reges excelsiores ante alios" ("paramount kings"), "reges proximi" ("neighbouring kings"), "reguli" ("petty kings") and "regales" ("princes"). This may be a formal hierarchy, or they may be vague, overlapping terms, or a combination of both. In 357, there appear to have been two paramount kings (Chnodomar and Westralp) who probably acted as presidents of the confederation and seven other kings ("reges"). Their territories were small and mostly strung along the Rhine (although a few were in the hinterland). It is possible that the "reguli" were the rulers of the two "pagi" in each kingdom. Underneath the royal class were the nobles (called "optimates" by the Romans) and warriors (called "armati" by the Romans). The warriors consisted of professional warbands and levies of free men. Each nobleman could raise an average of c. 50 warriors.
The Christianization of the Alemanni took place during Merovingian times (6th to 8th centuries). We know that in the 6th century, the Alemanni were predominantly pagan, and in the 8th century, they were predominantly Christian. The intervening 7th century was a period of genuine syncretism during which Christian symbolism and doctrine gradually grew in influence.
Some scholars have speculated that members of the Alemannic elite, such as king Gibuld, may have been converted to Arianism even in the later 5th century, owing to Visigothic influence.
In the mid-6th century, the Byzantine historian Agathias records, in the context of the wars of the Goths and Franks against Byzantium, that the Alemanni fighting among the troops of Frankish king Theudebald were like the Franks in all respects except religion, since they still remained pagan.
He also spoke of the particular ruthlessness of the Alemanni in destroying Christian sanctuaries and plundering churches while the genuine Franks were respectful towards those sanctuaries. Agathias expresses his hope that the Alemanni would assume better manners through prolonged contact with the Franks, which is by all appearances, in a manner of speaking, what eventually happened.
Apostles of the Alemanni were Columbanus and his disciple Saint Gall. Jonas of Bobbio records that Columbanus was active in Bregenz, where he disrupted a beer sacrifice to Wodan. Despite these activities, for some time, the Alemanni seem to have continued their pagan cult activities, with only superficial or syncretistic Christian elements. In particular, there is no change in burial practice, and tumulus warrior graves continued to be erected throughout Merovingian times. Syncretism of traditional Germanic animal-style with Christian symbolism is also present in artwork, but Christian symbolism becomes more and more prevalent during the 7th century. Unlike the later Christianization of the Saxons and of the Slavs, the Alemanni seem to have adopted Christianity gradually, and voluntarily, spread in emulation of the Merovingian elite.
From c. the 520s to the 620s, there was a surge of Alemannic Elder Futhark inscriptions. About 70 specimens have survived, roughly half of them on fibulae, others on belt buckles (see Pforzen buckle, Bülach fibula) and other jewelry and weapon parts. Use of runes subsides with the advance of Christianity.
The Nordendorf fibula (early 7th century) clearly records pagan theonyms, "logaþore wodan wigiþonar", read as "Wodan and Donar are magicians/sorcerers", but this may be interpreted as either a pagan invocation of the powers of these deities, or a Christian protective charm against them.
A runic inscription on a fibula found at Bad Ems reflects Christian pious sentiment (and is also explicitly marked with a Christian cross), reading "god fura dih deofile ᛭" ("God for/before you, Theophilus!", or alternatively "God before you, Devil!"). Dated to between AD 660 and 690, it marks the end of the native Alemannic tradition of runic literacy. Bad Ems is in Rhineland-Palatinate, on the northwestern boundary of Alemannic settlement, where Frankish influence would have been strongest.
The establishment of the bishopric of Konstanz cannot be dated exactly and was possibly undertaken by Columbanus himself (before 612). In any case, it existed by 635, when Gunzo appointed John of Grab bishop. Constance was a missionary bishopric in newly converted lands and, unlike the Raetian bishopric of Chur (established 451) and Basel (an episcopal seat from 740, which continued the line of Bishops of Augusta Raurica; see Bishop of Basel), did not look back on late Roman church history. The establishment of the church as an institution recognized by worldly rulers is also visible in legal history. The early 7th century "Pactus Alamannorum" hardly ever mentions the special privileges of the church, while Lantfrid's "Lex Alamannorum" of 720 has an entire chapter reserved for ecclesial matters alone.
A genetic study published in "Science Advances" in September 2018 examined the remains of eight individuals buried at a 7th century Alemannic graveyard in Niederstotzingen, Germany. This is the richest and most complete Alemannic graveyard ever found. The highest ranking individual at the graveyard was a male with Frankish grave goods. Four males were found to be closely related to him. They were all carriers of types of the paternal haplogroup R1b1a2a1a1c2b2b. A sixth male was a carrier of the paternal haplogroup R1b1a2a1a1c2b2b1a1 and the maternal haplogroup U5a1a1. Along with the five closely related individuals, he displayed close genetic links to northern and eastern Europe, particularly Lithuania and Iceland. Two individuals buried at the cemetery were found to be genetically different from both the others and each other, displaying genetic links to Southern Europe, particularly northern Spain. Along with the sixth male, they might have been adoptees.
NYSE American
NYSE American, formerly known as the American Stock Exchange (AMEX), and more recently as NYSE MKT, is an American stock exchange situated in New York City. AMEX was previously a mutual organization, owned by its members. Until 1953, it was known as the New York Curb Exchange.
NYSE Euronext acquired AMEX on October 1, 2008, with AMEX integrated with the Alternext European small-cap exchange and renamed the NYSE Alternext U.S. In March 2009, NYSE Alternext U.S. was changed to NYSE Amex Equities. On May 10, 2012, NYSE Amex Equities changed its name to NYSE MKT LLC.
Following the SEC approval of competing stock exchange IEX in 2016, NYSE MKT rebranded as NYSE American and introduced a 350-microsecond delay in trading, referred to as a "speed bump", which is also present on the IEX.
The exchange grew out of the loosely organized curb market of curbstone brokers on Broad Street in Manhattan. Efforts to organize and standardize the market started early in the 20th century under Emanuel S. Mendels and Carl H. Pforzheimer. The curb brokers had been kicked out of the Mills Building front by 1907, and had moved to the pavement outside the Blair Building where cabbies lined up. There they were given a "little domain of asphalt" fenced off by the police on Broad Street between Exchange Place and Beaver Street. As of 1907, the curb market operated starting at 10 AM, each day except Sundays, until a gong at 3 PM. Orders for the purchase and sale of securities were shouted down from the windows of nearby brokerages, with the execution of the sale then shouted back up to the brokerage.
As of 1907, E. S. Mendels gave the brokers rules "by right of seniority", but the curb brokers intentionally avoided organizing. According to the "Times", this came from a general belief that if a curb exchange was organized, the exchange authorities would force members to sell their other exchange memberships. However, in 1908 the New York Curb Market Agency was established, which developed appropriate trading rules for curbstone brokers, organized by Mendels. The informal Curb Association formed in 1910 to weed out undesirables. The curb exchange was for years at odds with the New York Stock Exchange (NYSE), or "Big Board", operating several buildings away. Explained the "New York Times" in 1910, the Big Board looked at the curb as "a trading place for 'cats and dogs.'" On April 1, 1910, however, when the NYSE abolished its unlisted department, the NYSE stocks "made homeless by the abolition" were "refused domicile" by the curb brokers on Broad Street until they had complied with the "Curb list" of requirements. In 1911, Mendels and his advisers drew up a constitution and formed the New York Curb Market Association, which can be considered the first formal constitution of American Stock Exchange.
In 1920, journalist Edwin C. Hill wrote that the curb exchange on lower Broad Street was a "roaring, swirling whirlpool" that "tears control of a gold-mine from an unlucky operator, and pauses to auction a puppy-dog. It is like nothing else under the astonished sky that is its only roof." After a group of Curb brokers formed a real estate company to design a building, Starrett & Van Vleck designed the new exchange building on Greenwich Street in Lower Manhattan between Thames and Rector, at 86 Trinity Place. It opened in 1921, and the curbstone brokers moved indoors on June 27, 1921. In 1929, the New York Curb Market changed its name to the New York Curb Exchange. The Curb Exchange soon became the leading international stock market, and according to historian Robert Sobel, "had more individual foreign issues on its list than [...] all other American securities markets combined."
Edward Reid McCormick was the first president of the New York Curb Market Association and is credited with moving the market indoors. George Rea was approached about the position of president of the New York Curb Exchange in 1939. He was unanimously elected as the first paid president in the history of the Curb Exchange. He was paid $25,000 per year and held the position for three years before offering his resignation in 1942. He left the position having "done such a good job that there is virtually no need for a full-time successor."
In 1953 the Curb Exchange was renamed the American Stock Exchange. The exchange was shaken by a scandal in 1961, and in 1962 began a reorganization. Its reputation recently damaged by charges of mismanagement, in 1962 the American Stock Exchange named Edwin Etherington its president. Writes CNN, he and executive vice president Paul Kolton were "tapped in 1962 to clean up and reinvigorate the scandal-plagued American Stock Exchange." At AMEX for five years, he was credited with improving opportunities for minorities and women. In 1971, Johnson Products Company became the first African American-owned company to be listed on the American Stock Exchange.
As of 1971, it was the second largest stock exchange in the United States. Paul Kolton succeeded Ralph S. Saul as AMEX president on June 17, 1971, making him the first person to be selected from within the exchange to serve as its leader; Saul had announced his resignation in March 1971. In November 1972, Kolton was named as the exchange's first chief executive officer and its first salaried top executive. As chairman, Kolton oversaw the introduction of options trading. Kolton opposed the idea of a merger with the New York Stock Exchange while he headed the exchange, saying that "two independent, viable exchanges are much more likely to be responsive to new pressures and public needs than a single institution". Kolton announced in July 1977 that he would be leaving his position at the American Exchange in November of that year.
In 1977, Thomas Peterffy purchased a seat on the American Stock Exchange and played a role in developing Interactive Brokers, an electronic trading platform. Peterffy created a major stir among traders by introducing handheld computers onto the trading floor in the early 1980s.
As of 2003, AMEX was the only U.S. stock market to permit the transmission of buy and sell orders through hand signals.
In October 2008, NYSE Euronext completed its acquisition of the AMEX for $260 million in stock, with the merger taking effect on October 1, 2008. Post merger, the Amex equities business was branded "NYSE Alternext US" and integrated with the Alternext European small-cap exchange. On December 1, 2008, the Curb Exchange building at 86 Trinity Place was closed, and the Amex Equities trading floor was moved to the NYSE trading floor at 11 Wall Street. Ninety years after its 1921 opening, the old New York Curb Market building was empty but remained standing. In March 2009, NYSE Alternext U.S. was renamed NYSE Amex Equities, and on May 10, 2012, NYSE Amex Equities changed its name to NYSE MKT LLC.
In June 2016, a competing stock exchange IEX (which operated with a 350-microsecond delay in trading), gained approval from the SEC, despite lobbying protests by the NYSE and other exchanges and trading firms.
On July 24, 2017, the NYSE renamed NYSE MKT to NYSE American, and announced plans to introduce its own 350-microsecond "speed bump" in trading on the small and mid-cap company exchange.
Alfred Russel Wallace
Alfred Russel Wallace (8 January 1823 – 7 November 1913) was a British naturalist, explorer, geographer, anthropologist, biologist and illustrator. He is best known for independently conceiving the theory of evolution through natural selection; his paper on the subject was jointly published with some of Charles Darwin's writings in 1858. This prompted Darwin to publish some of his own ideas in "On the Origin of Species".
Like Darwin, Wallace did extensive fieldwork, first in the Amazon River basin and then in the Malay Archipelago, where he identified the faunal divide now termed the Wallace Line, which separates the Indonesian archipelago into two distinct parts: a western portion in which the animals are largely of Asian origin, and an eastern portion where the fauna reflect Australasia.
He was considered the 19th century's leading expert on the geographical distribution of animal species and is sometimes called the "father of biogeography". Wallace was one of the leading evolutionary thinkers of the 19th century and made many other contributions to the development of evolutionary theory besides being co-discoverer of natural selection. These included the concept of warning colouration in animals, and the Wallace effect, a hypothesis on how natural selection could contribute to speciation by encouraging the development of barriers against hybridisation. Wallace's 1904 book "Man's Place in the Universe" was the first serious attempt by a biologist to evaluate the likelihood of life on other planets. He was also one of the first scientists to write a serious exploration of the subject of whether there was life on Mars.
Wallace was strongly attracted to unconventional ideas (such as evolution). His advocacy of spiritualism and his belief in a non-material origin for the higher mental faculties of humans strained his relationship with some members of the scientific establishment.
Aside from scientific work, he was a social activist who was critical of what he considered to be an unjust social and economic system (capitalism) in 19th-century Britain. His interest in natural history resulted in his being one of the first prominent scientists to raise concerns over the environmental impact of human activity. He was also a prolific author who wrote on both scientific and social issues; his account of his adventures and observations during his explorations in Singapore, Indonesia and Malaysia, "The Malay Archipelago", was both popular and highly regarded. Since its publication in 1869, it has never been out of print.
Wallace had financial difficulties throughout much of his life. His Amazon and Far Eastern trips were supported by the sale of specimens he collected and, after he lost most of the considerable money he made from those sales in unsuccessful investments, he had to support himself mostly from the publications he produced. Unlike some of his contemporaries in the British scientific community, such as Darwin and Charles Lyell, he had no family wealth to fall back on, and he was unsuccessful in finding a long-term salaried position, receiving no regular income until he was awarded a small government pension, through Darwin's efforts, in 1881.
Alfred Wallace was born in the Welsh village of Llanbadoc, near Usk, Monmouthshire. He was the eighth of nine children of Thomas Vere Wallace and Mary Anne Greenell. Mary Anne was English; Thomas Wallace was probably of Scottish ancestry. His family, like many Wallaces, claimed a connection to William Wallace, a leader of Scottish forces during the Wars of Scottish Independence in the 13th century. Thomas Wallace graduated in law but never practised law. He owned some income-generating property, but bad investments and failed business ventures resulted in a steady deterioration of the family's financial position. His mother was from a middle-class English family from Hertford, north of London. When Wallace was five years old, his family moved to Hertford. There he attended Hertford Grammar School until financial difficulties forced his family to withdraw him in 1836 when he was aged 14.
Wallace then moved to London to board with his older brother John, a 19-year-old apprentice builder. This was a stopgap measure until William, his oldest brother, was ready to take him on as an apprentice surveyor. While in London, Alfred attended lectures and read books at the London Mechanics Institute (now Birkbeck, University of London). Here he was exposed to the radical political ideas of the Welsh social reformer Robert Owen and of Thomas Paine. He left London in 1837 to live with William and work as his apprentice for six years.
At the end of 1839, they moved to Kington, Hereford, near the Welsh border, before eventually settling at Neath in Glamorgan in Wales. Between 1840 and 1843, Wallace did land surveying work in the countryside of the west of England and Wales. By the end of 1843, William's business had declined due to difficult economic conditions, and Wallace, at the age of 20, left in January.
One result of Wallace's early travels is a modern controversy about his nationality. Since Wallace was born in Monmouthshire, some sources have considered him to be Welsh. However, some historians have questioned this because neither of his parents was Welsh, his family only briefly lived in Monmouthshire, the Welsh people Wallace knew in his childhood considered him to be English, and because Wallace himself consistently referred to himself as English rather than Welsh (even when writing about his time in Wales). One Wallace scholar has stated that the most reasonable interpretation is therefore that he was an Englishman born in Wales.
After a brief period of unemployment, he was hired as a master at the Collegiate School in Leicester to teach drawing, mapmaking, and surveying. Wallace spent many hours at the library in Leicester: he read "An Essay on the Principle of Population" by Thomas Robert Malthus, and one evening he met the entomologist Henry Bates. Bates was 19 years old, and in 1843 he had published a paper on beetles in the journal "Zoologist". He befriended Wallace and started him collecting insects. His brother William died in March 1845, and Wallace left his teaching position to assume control of his brother's firm in Neath, but his brother John and he were unable to make the business work. After a few months, Wallace found work as a civil engineer for a nearby firm that was working on a survey for a proposed railway in the Vale of Neath.
Wallace's work on the survey involved spending a lot of time outdoors in the countryside, allowing him to indulge his new passion for collecting insects. Wallace persuaded his brother John to join him in starting another architecture and civil engineering firm, which carried out a number of projects, including the design of a building for the Neath Mechanics' Institute, founded in 1843. William Jevons, the founder of that institute, was impressed by Wallace and persuaded him to give lectures there on science and engineering. In the autumn of 1846, John and he purchased a cottage near Neath, where they lived with their mother and sister Fanny (his father had died in 1843).
During this period, he read avidly, exchanging letters with Bates about Robert Chambers' anonymously published evolutionary treatise "Vestiges of the Natural History of Creation", Charles Darwin's "The Voyage of the Beagle", and Charles Lyell's "Principles of Geology".
Inspired by the chronicles of earlier and contemporary travelling naturalists, including Alexander von Humboldt, Ida Laura Pfeiffer, Charles Darwin and especially William Henry Edwards, Wallace decided that he too wanted to travel abroad as a naturalist. In 1848, Wallace and Henry Bates left for Brazil aboard the "Mischief". Their intention was to collect insects and other animal specimens in the Amazon Rainforest for their private collections, selling the duplicates to museums and collectors back in Britain in order to fund the trip. Wallace also hoped to gather evidence of the transmutation of species.
Wallace and Bates spent most of their first year collecting near Belém, then explored inland separately, occasionally meeting to discuss their findings. In 1849, they were briefly joined by another young explorer, botanist Richard Spruce, along with Wallace's younger brother Herbert. Herbert left soon thereafter (dying two years later from yellow fever), but Spruce, like Bates, would spend over ten years collecting in South America.
Wallace continued charting the Rio Negro for four years, collecting specimens and making notes on the peoples and languages he encountered as well as the geography, flora, and fauna. On 12 July 1852, Wallace embarked for the UK on the brig "Helen". After 26 days at sea, the ship's cargo caught fire and the crew was forced to abandon ship. All of the specimens Wallace had on the ship, mostly collected during the last, and most interesting, two years of his trip, were lost. He managed to save a few notes and pencil sketches and little else.
Wallace and the crew spent ten days in an open boat before being picked up by the brig "Jordeson", which was sailing from Cuba to London. The "Jordeson"'s provisions were strained by the unexpected passengers, but after a difficult passage on very short rations the ship finally reached its destination on 1 October 1852.
After his return to the UK, Wallace spent 18 months in London, living on the insurance payment for his lost collection and selling the few specimens that had been shipped back to Britain before he began his exploration of the Rio Negro. That exploration had taken him as far as the Indian town of Jativa in the Orinoco River basin and as far west as Micúru (Mitú) on the Vaupés River. He was deeply impressed by the grandeur of the virgin forest, by the variety and beauty of the butterflies and birds, and by his first encounter with Indians in the Uaupés River area, an experience he never forgot. During this period, despite having lost almost all of the notes from his South American expedition, he wrote six academic papers (including "On the Monkeys of the Amazon") and two books, "Palm Trees of the Amazon and Their Uses" and "Travels on the Amazon". He also made connections with a number of other British naturalists.
From 1854 to 1862, age 31 to 39, Wallace travelled through the Malay Archipelago or East Indies (now Singapore, Malaysia and Indonesia), to collect specimens for sale and to study natural history. A set of 80 bird skeletons he collected in Indonesia and associated documentation can be found in the Cambridge University Museum of Zoology. Wallace had as many as a hundred assistants who collected on his behalf. Among these, his most trusted assistant was a Malay named Ali, who later called himself Ali Wallace. While Wallace collected insects, many of the bird specimens were collected by his assistants, including around 5,000 collected and prepared by Ali. Wallace's observations of the marked zoological differences across a narrow strait in the archipelago led to his proposing the zoogeographical boundary now known as the Wallace Line.
Wallace collected more than 126,000 specimens in the Malay Archipelago (more than 80,000 beetles alone). Several thousand of them represented species new to science. One of his better-known species descriptions during this trip is that of the gliding tree frog "Rhacophorus nigropalmatus", known as Wallace's flying frog. While he was exploring the archipelago, he refined his thoughts about evolution and had his famous insight on natural selection. In 1858 he sent an article outlining his theory to Darwin; it was published, along with a description of Darwin's own theory, in the same year.
Accounts of his studies and adventures there were eventually published in 1869 as "The Malay Archipelago", which became one of the most popular books of scientific exploration of the 19th century, and has never been out of print. It was praised by scientists such as Darwin (to whom the book was dedicated), and Charles Lyell, and by non-scientists such as the novelist Joseph Conrad, who called it his "favorite bedside companion" and used it as source of information for several of his novels, especially "Lord Jim".
In 1862, Wallace returned to England, where he moved in with his sister Fanny Sims and her husband Thomas. While recovering from his travels, Wallace organised his collections and gave numerous lectures about his adventures and discoveries to scientific societies such as the Zoological Society of London. Later that year, he visited Darwin at Down House, and became friendly with both Charles Lyell and Herbert Spencer. During the 1860s, Wallace wrote papers and gave lectures defending natural selection. He also corresponded with Darwin about a variety of topics, including sexual selection, warning colouration, and the possible effect of natural selection on hybridisation and the divergence of species. In 1865, he began investigating spiritualism.
After a year of courtship, Wallace became engaged in 1864 to a young woman whom, in his autobiography, he would only identify as Miss L. Miss L. was the daughter of Lewis Leslie, who played chess with Wallace. However, to Wallace's great dismay, she broke off the engagement. In 1866, Wallace married Annie Mitten. Wallace had been introduced to Mitten through the botanist Richard Spruce, who had befriended Wallace in Brazil and who was also a good friend of Annie Mitten's father, William Mitten, an expert on mosses. In 1872, Wallace built the Dell, a house of concrete, on land he leased in Grays in Essex, where he lived until 1876. The Wallaces had three children: Herbert (1867–1874), Violet (1869–1945), and William (1871–1951).
In the late 1860s and 1870s, Wallace was very concerned about the financial security of his family. While he was in the Malay Archipelago, the sale of specimens had brought in a considerable amount of money, which had been carefully invested by the agent who sold the specimens for Wallace. However, on his return to the UK, Wallace made a series of bad investments in railways and mines that squandered most of the money, and he found himself badly in need of the proceeds from the publication of "The Malay Archipelago".
Despite assistance from his friends, he was never able to secure a permanent salaried position such as a curatorship in a museum. To remain financially solvent, Wallace worked grading government examinations, wrote 25 papers for publication between 1872 and 1876 for various modest sums, and was paid by Lyell and Darwin to help edit some of their own works.
In 1876, Wallace needed a £500 advance from the publisher of "The Geographical Distribution of Animals" to avoid having to sell some of his personal property. Darwin was very aware of Wallace's financial difficulties and lobbied long and hard to get Wallace awarded a government pension for his lifetime contributions to science. When the £200 annual pension was awarded in 1881, it helped to stabilise Wallace's financial position by supplementing the income from his writings.
John Stuart Mill was impressed by remarks criticising English society that Wallace had included in "The Malay Archipelago". Mill asked him to join the general committee of his Land Tenure Reform Association, but the association dissolved after Mill's death in 1873. Wallace had written only a handful of articles on political and social issues between 1873 and 1879 when, at the age of 56, he entered the debates over trade policy and land reform in earnest. He believed that rural land should be owned by the state and leased to people who would make whatever use of it that would benefit the largest number of people, thus breaking the often-abused power of wealthy landowners in British society.
In 1881, Wallace was elected as the first president of the newly formed Land Nationalisation Society. The following year, he published a book on the subject, "Land Nationalisation; Its Necessity and Its Aims". He criticised the UK's free trade policies for the negative impact they had on working-class people. In 1889, Wallace read "Looking Backward" by Edward Bellamy and declared himself a socialist, despite his earlier foray as a speculative investor. After reading "Progress and Poverty", the best-selling book by the progressive land reformist Henry George, Wallace described it as "Undoubtedly the most remarkable and important book of the present century."
Wallace opposed eugenics, an idea supported by other prominent 19th-century evolutionary thinkers, on the grounds that contemporary society was too corrupt and unjust to allow any reasonable determination of who was fit or unfit. In the 1890 article "Human Selection" he wrote, "Those who succeed in the race for wealth are by no means the best or the most intelligent ..." In 1898, Wallace wrote a paper advocating a pure paper money system, not backed by silver or gold, which impressed the economist Irving Fisher so much that he dedicated his 1920 book "Stabilizing the Dollar" to Wallace.
Wallace wrote on other social and political topics including his support for women's suffrage, and repeatedly on the dangers and wastefulness of militarism. In an essay published in 1899 Wallace called for popular opinion to be rallied against warfare by showing people: "...that all modern wars are dynastic; that they are caused by the ambition, the interests, the jealousies, and the insatiable greed of power of their rulers, or of the great mercantile and financial classes which have power and influence over their rulers; and that the results of war are never good for the people, who yet bear all its burthens". In a letter published by the Daily Mail in 1909, with aviation in its infancy, he advocated an international treaty to ban the military use of aircraft, arguing against the idea "...that this new horror is "inevitable," and that all we can do is to be sure and be in the front rank of the aerial assassins—for surely no other term can so fitly describe the dropping of, say, ten thousand bombs at midnight into an enemy's capital from an invisible flight of airships."
In 1898, Wallace published a book entitled "The Wonderful Century: Its Successes and Its Failures" about developments in the 19th century. The first part of the book covered the major scientific and technical advances of the century; the second part covered what Wallace considered to be its social failures including: the destruction and waste of wars and arms races, the rise of the urban poor and the dangerous conditions in which they lived and worked, a harsh criminal justice system that failed to reform criminals, abuses in a mental health system based on privately owned sanatoriums, the environmental damage caused by capitalism, and the evils of European colonialism. Wallace continued his social activism for the rest of his life, publishing the book "The Revolt of Democracy" just weeks before his death.
Wallace continued his scientific work in parallel with his social commentary. In 1880, he published "Island Life" as a sequel to "The Geographic Distribution of Animals". In November 1886, Wallace began a ten-month trip to the United States to give a series of popular lectures. Most of the lectures were on Darwinism (evolution through natural selection), but he also gave speeches on biogeography, spiritualism, and socio-economic reform. During the trip, he was reunited with his brother John who had emigrated to California years before. He also spent a week in Colorado, with the American botanist Alice Eastwood as his guide, exploring the flora of the Rocky Mountains and gathering evidence that would lead him to a theory on how glaciation might explain certain commonalities between the mountain flora of Europe, Asia and North America, which he published in 1891 in the paper "English and American Flowers". He met many other prominent American naturalists and viewed their collections. His 1889 book "Darwinism" used information he collected on his American trip and information he had compiled for the lectures.
On 7 November 1913, Wallace died at home in the country house he called Old Orchard, which he had built a decade earlier. He was 90 years old. His death was widely reported in the press. "The New York Times" called him "the last of the giants belonging to that wonderful group of intellectuals that included, among others, Darwin, Huxley, Spencer, Lyell, and Owen, whose daring investigations revolutionised and evolutionised the thought of the century." Another commentator in the same edition said: "No apology need be made for the few literary or scientific follies of the author of that great book on the 'Malay Archipelago'."
Some of Wallace's friends suggested that he be buried in Westminster Abbey, but his wife followed his wishes and had him buried in the small cemetery at Broadstone, Dorset. Several prominent British scientists formed a committee to have a medallion of Wallace placed in Westminster Abbey near where Darwin had been buried. The medallion was unveiled on 1 November 1915.
Unlike Darwin, Wallace began his career as a travelling naturalist already believing in the transmutation of species. The concept had been advocated by Jean-Baptiste Lamarck, Geoffroy Saint-Hilaire, Erasmus Darwin, and Robert Grant, among others. It was widely discussed, but not generally accepted by leading naturalists, and was considered to have radical, even revolutionary connotations.
Prominent anatomists and geologists such as Georges Cuvier, Richard Owen, Adam Sedgwick, and Charles Lyell attacked it vigorously. It has been suggested that Wallace accepted the idea of the transmutation of species in part because he was always inclined to favour radical ideas in politics, religion and science, and because he was unusually open to marginal, even fringe, ideas in science.
He was also profoundly influenced by Robert Chambers' work, "Vestiges of the Natural History of Creation", a highly controversial work of popular science published anonymously in 1844 that advocated an evolutionary origin for the solar system, the earth, and living things. Wallace wrote to Henry Bates in 1845:
In 1847, he wrote to Bates:
Wallace deliberately planned some of his fieldwork to test the hypothesis that under an evolutionary scenario closely related species should inhabit neighbouring territories. During his work in the Amazon basin, he came to realise that geographical barriers—such as the Amazon and its major tributaries—often separated the ranges of closely allied species, and he included these observations in his 1853 paper "On the Monkeys of the Amazon". Near the end of the paper he asks the question, "Are very closely allied species ever separated by a wide interval of country?"
In February 1855, while working in Sarawak on the island of Borneo, Wallace wrote "On the Law which has Regulated the Introduction of New Species", a paper which was published in the "Annals and Magazine of Natural History" in September 1855. In this paper, he discussed observations regarding the geographic and geologic distribution of both living and fossil species, what would become known as biogeography. His conclusion that "Every species has come into existence coincident both in space and time with a closely allied species" has come to be known as the "Sarawak Law". Wallace thus answered the question he had posed in his earlier paper on the monkeys of the Amazon river basin. Although it contained no mention of any possible mechanisms for evolution, this paper foreshadowed the momentous paper he would write three years later.
The paper shook Charles Lyell's belief that species were immutable. Although his friend Charles Darwin had written to him in 1842 expressing support for transmutation, Lyell had continued to be strongly opposed to the idea. Around the start of 1856, he told Darwin about Wallace's paper, as did Edward Blyth who thought it "Good! Upon the whole! ... Wallace has, I think put the matter well; and according to his theory the various domestic races of animals have been fairly developed into "species"." Despite this hint, Darwin mistook Wallace's conclusion for the progressive creationism of the time and wrote that it was "nothing very new ... Uses my simile of tree [but] it seems all creation with him." Lyell was more impressed and opened a notebook on species, in which he grappled with the consequences, particularly for human ancestry. Darwin had already shown his theory to their mutual friend Joseph Hooker and now, for the first time, he spelt out the full details of natural selection to Lyell. Although Lyell could not agree, he urged Darwin to publish to establish priority. Darwin demurred at first, then began writing up a "species sketch" of his continuing work in May 1856.
By February 1858, Wallace had been convinced by his biogeographical research in the Malay Archipelago that evolution was real. He later wrote in his autobiography:
According to his autobiography, it was while he was in bed with a fever that Wallace thought about Malthus's idea of positive checks on human population and had the idea of natural selection. His autobiography says that he was on the island of Ternate at the time; but historians have said that based on his journal he was on the island of Gilolo. From 1858 to 1861, he rented a house on Ternate from the Dutchman Maarten Dirk van Renesse van Duivenbode, which he used as a base for expeditions to other islands such as Gilolo.
Wallace describes how he discovered natural selection as follows:
Wallace had once briefly met Darwin, and was one of the correspondents whose observations Darwin used to support his own theories. Although Wallace's first letter to Darwin has been lost, Wallace carefully kept the letters he received. In the first letter, dated 1 May 1857, Darwin commented that Wallace's letter of 10 October, which he had recently received, as well as Wallace's 1855 paper "On the Law which has regulated the Introduction of New Species", showed that they thought alike, with similar conclusions, and said that he was preparing his own work for publication in about two years' time. The second letter, dated 22 December 1857, said how glad he was that Wallace was theorising about distribution, adding that "without speculation there is no good and original observation", but commented that "I believe I go much further than you". Wallace believed this and sent Darwin his February 1858 essay, "On the Tendency of Varieties to Depart Indefinitely From the Original Type", asking Darwin to review it and pass it to Charles Lyell if he thought it worthwhile. Although Wallace had sent several articles for journal publication during his travels through the Malay archipelago, the Ternate essay was in a private letter. Darwin received the essay on 18 June 1858. Although the essay did not use Darwin's term "natural selection", it did outline the mechanics of an evolutionary divergence of species from similar ones due to environmental pressures. In this sense, it was very similar to the theory that Darwin had worked on for 20 years but had yet to publish. Darwin sent the manuscript to Charles Lyell with a letter saying "he could not have made a better short abstract! Even his terms now stand as heads of my chapters ... he does not say he wishes me to publish, but I shall, of course, at once write and offer to send to any journal."
Distraught about the illness of his baby son, Darwin put the problem to Charles Lyell and Joseph Hooker, who decided to publish the essay in a joint presentation together with unpublished writings which highlighted Darwin's priority. Wallace's essay was presented to the Linnean Society of London on 1 July 1858, along with excerpts from an essay which Darwin had disclosed privately to Hooker in 1847 and a letter Darwin had written to Asa Gray in 1857.
Communication with Wallace in the far-off Malay Archipelago involved months of delay, so he was not part of this rapid publication. Wallace accepted the arrangement after the fact, happy that he had been included at all, and never expressed bitterness in public or in private. Darwin's social and scientific status was far greater than Wallace's, and it was unlikely that, without Darwin, Wallace's views on evolution would have been taken seriously. Lyell and Hooker's arrangement relegated Wallace to the position of co-discoverer, and he was not the social equal of Darwin or the other prominent British natural scientists. However, the joint reading of their papers on natural selection associated Wallace with the more famous Darwin. This, combined with Darwin's (as well as Hooker's and Lyell's) advocacy on his behalf, would give Wallace greater access to the highest levels of the scientific community. The reaction to the reading was muted, with the president of the Linnean Society remarking in May 1859 that the year had not been marked by any striking discoveries; but, with Darwin's publication of "On the Origin of Species" later in 1859, its significance became apparent. When Wallace returned to the UK, he met Darwin. Although some of Wallace's iconoclastic opinions in the ensuing years would test Darwin's patience, they remained on friendly terms for the rest of Darwin's life.
Over the years, a few people have questioned this version of events. In the early 1980s, two books, one written by Arnold Brackman and another by John Langdon Brooks, even suggested not only that there had been a conspiracy to rob Wallace of his proper credit, but that Darwin had actually stolen a key idea from Wallace to finish his own theory. These claims have been examined in detail by a number of scholars who have not found them convincing. Shipping schedules show that, contrary to these accusations, Wallace's letter could not have been delivered earlier than the date shown in Darwin's letter to Lyell.
After Wallace returned to England in 1862, he became one of the staunchest defenders of Darwin's "On the Origin of Species". In one incident in 1863 that particularly pleased Darwin, Wallace published the short paper "Remarks on the Rev. S. Haughton's Paper on the Bee's Cell, And on the Origin of Species" to rebut a paper by a professor of geology at the University of Dublin that had sharply criticised Darwin's comments in the "Origin" on how hexagonal honey bee cells could have evolved through natural selection.
An even longer defence was a 1867 article in the "Quarterly Journal of Science" called "Creation by Law". It reviewed the book "The Reign of Law" by George Campbell, the 8th Duke of Argyll which aimed to refute natural selection.
After an 1870 meeting of the British Science Association, Wallace wrote to Darwin complaining that there were "no opponents left who know anything of natural history, so that there are none of the good discussions we used to have."
Historians of science have noted that, while Darwin considered the ideas in Wallace's paper to be essentially the same as his own, there were differences. Darwin emphasised competition between individuals of the same species to survive and reproduce, whereas Wallace emphasised environmental pressures on varieties and species forcing them to become adapted to their local conditions, leading populations in different locations to diverge. Some historians, notably Peter J. Bowler, have suggested the possibility that in the paper he mailed to Darwin, Wallace did not discuss selection of individual variations but group selection. However, Malcolm Kottler showed that Wallace was indeed discussing individual variations.
Others have noted that another difference was that Wallace appeared to have envisioned natural selection as a kind of feedback mechanism keeping species and varieties adapted to their environment (now called 'stabilizing', as opposed to 'directional', selection). They point to a largely overlooked passage of Wallace's famous 1858 paper:
The cybernetician and anthropologist Gregory Bateson observed in the 1970s that, although writing it only as an example, Wallace had "probably said the most powerful thing that'd been said in the 19th Century". Bateson revisited the topic in his 1979 book "Mind and Nature: A Necessary Unity", and other scholars have continued to explore the connection between natural selection and systems theory.
Warning coloration was one of a number of contributions by Wallace in the area of the evolution of animal coloration and in particular protective coloration. It was also a lifelong disagreement with Darwin about the importance of sexual selection.
In 1867, Darwin wrote to Wallace about a problem in explaining how some caterpillars could have evolved conspicuous colour schemes. Darwin had come to believe that many conspicuous animal colour schemes were due to sexual selection. However, this could not apply to caterpillars. Wallace responded that he and Henry Bates had observed that many of the most spectacular butterflies had a peculiar odour and taste, and that he had been told by John Jenner Weir that birds would not eat a certain kind of common white moth because they found it unpalatable. "Now, as the white moth is as conspicuous at dusk as a coloured caterpillar in the daylight", it seemed likely that the conspicuous colours served as a warning to predators and thus could have evolved through natural selection. Darwin was impressed by the idea. At a later meeting of the Entomological Society, Wallace asked for any evidence anyone might have on the topic. In 1869, Weir published data from experiments and observations involving brightly coloured caterpillars that supported Wallace's idea.
Wallace attributed less importance than Darwin to sexual selection. In his 1878 book "Tropical Nature and Other Essays", he wrote extensively about the coloration of animals and plants and proposed alternative explanations for a number of cases Darwin had attributed to sexual selection. He revisited the topic at length in his 1889 book "Darwinism". In 1890, he wrote a critical review in "Nature" of his friend Edward Bagnall Poulton's "The Colours of Animals" which supported Darwin on sexual selection, attacking especially Poulton's claims on the "aesthetic preferences of the insect world".
In 1889, Wallace wrote the book "Darwinism", which explained and defended natural selection. In it, he proposed the hypothesis that natural selection could drive the reproductive isolation of two varieties by encouraging the development of barriers against hybridisation. Thus it might contribute to the development of new species. He suggested the following scenario: When two populations of a species had diverged beyond a certain point, each adapted to particular conditions, hybrid offspring would be less adapted than either parent form and so natural selection would tend to eliminate the hybrids. Furthermore, under such conditions, natural selection would favour the development of barriers to hybridisation, as individuals that avoided hybrid matings would tend to have more fit offspring, and thus contribute to the reproductive isolation of the two incipient species.
This idea came to be known as the Wallace effect, later referred to as reinforcement. Wallace had suggested to Darwin that natural selection could play a role in preventing hybridisation in private correspondence as early as 1868, but had not worked it out to this level of detail. It continues to be a topic of research in evolutionary biology today, with both computer simulation and empirical results supporting its validity.
In 1864, Wallace published a paper, "The Origin of Human Races and the Antiquity of Man Deduced from the Theory of 'Natural Selection'", applying the theory to humankind. Darwin had not yet publicly addressed the subject, although Thomas Huxley had in "Evidence as to Man's Place in Nature". He explained the apparent stability of the human stock by pointing to the vast gap in cranial capacities between humans and the great apes. Unlike some other Darwinists, including Darwin himself, he did not "regard modern primitives as almost filling the gap between man and ape".
He saw the evolution of humans in two stages: achieving a bipedal posture freeing the hands to carry out the dictates of the brain, and the "recognition of the human brain as a totally new factor in the history of life. Wallace was apparently the first evolutionist to recognize clearly that ... with the emergence of that bodily specialization which constitutes the human brain, bodily specialization itself might be said to be outmoded." For this paper he won Darwin's praise.
Shortly afterwards, Wallace became a spiritualist. At about the same time, he began to maintain that natural selection could not account for mathematical, artistic, or musical genius, nor for metaphysical musings, wit, or humour. He eventually said that something in "the unseen universe of Spirit" had interceded at least three times in history. The first was the creation of life from inorganic matter. The second was the introduction of consciousness in the higher animals. And the third was the generation of the higher mental faculties in humankind. He also believed that the raison d'être of the universe was the development of the human spirit. These views greatly disturbed Darwin, who argued that spiritual appeals were not necessary and that sexual selection could easily explain apparently non-adaptive mental phenomena.
While some historians have concluded that Wallace's belief that natural selection was insufficient to explain the development of consciousness and the human mind was directly caused by his adoption of spiritualism, other Wallace scholars have disagreed, and some maintain that Wallace never believed natural selection applied to those areas. Reaction to Wallace's ideas on this topic among leading naturalists at the time varied. Charles Lyell endorsed Wallace's views on human evolution rather than Darwin's. Wallace's belief that human consciousness could not be entirely a product of purely material causes was shared by a number of prominent intellectuals in the late 19th and early 20th centuries. However, many, including Huxley, Hooker, and Darwin himself, were critical of Wallace.
As the historian of science Michael Shermer has stated, Wallace's views in this area were at odds with two major tenets of the emerging Darwinian philosophy, which were that evolution was not teleological (purpose driven) and that it was not anthropocentric (human-centred). Much later in his life Wallace returned to these themes, that evolution suggested that the universe might have a purpose and that certain aspects of living organisms might not be explainable in terms of purely materialistic processes, in a 1909 magazine article entitled "The World of Life", which he later expanded into a book of the same name; a work that Shermer said anticipated some ideas about design in nature and directed evolution that would arise from various religious traditions throughout the 20th century.
In many accounts of the development of evolutionary theory, Wallace is mentioned only in passing as simply being the stimulus to the publication of Darwin's own theory. In reality, Wallace developed his own distinct evolutionary views which diverged from Darwin's, and was considered by many (especially Darwin) to be a leading thinker on evolution in his day, whose ideas could not be ignored. One historian of science has pointed out that, through both private correspondence and published works, Darwin and Wallace exchanged knowledge and stimulated each other's ideas and theories over an extended period. Wallace is the most-cited naturalist in Darwin's "Descent of Man", occasionally in strong disagreement.
Both Darwin and Wallace agreed on the importance of natural selection, and on some of the factors responsible for it: competition between species and geographical isolation. But Wallace believed that evolution had a purpose ("teleology") in maintaining species' fitness to their environment, whereas Darwin hesitated to attribute any purpose to a random natural process. Scientific discoveries since the 19th century, by identifying several additional mechanisms and triggers, have supported Darwin's viewpoint.
Wallace remained an ardent defender of natural selection for the rest of his life. By the 1880s, evolution was widely accepted in scientific circles, but natural selection less so. In 1889, Wallace published the book "Darwinism" as a response to the scientific critics of natural selection. Of all Wallace's books, it is the most cited by scholarly publications.
In 1872, at the urging of many of his friends, including Darwin, Philip Sclater, and Alfred Newton, Wallace began research for a general review of the geographic distribution of animals. Initial progress was slow, in part because classification systems for many types of animals were in flux. He resumed the work in earnest in 1874 after the publication of a number of new works on classification. Extending the system developed by Sclater for birds—which divided the earth into six separate geographic regions for describing species distribution—to cover mammals, reptiles and insects as well, Wallace created the basis for the zoogeographic regions still in use today. He discussed all of the factors then known to influence the current and past geographic distribution of animals within each geographic region.
These factors included the effects of the appearance and disappearance of land bridges (such as the one currently connecting North America and South America) and the effects of periods of increased glaciation. He provided maps showing factors, such as elevation of mountains, depths of oceans, and the character of regional vegetation, that affected the distribution of animals. He also summarised all the known families and genera of the higher animals and listed their known geographic distributions. The text was organised so that it would be easy for a traveller to learn what animals could be found in a particular location. The resulting two-volume work, "The Geographical Distribution of Animals", was published in 1876 and served as the definitive text on zoogeography for the next 80 years.
The book included evidence from the fossil record to discuss the processes of evolution and migration that had led to the geographical distribution of modern species. For example, he discussed how fossil evidence showed that tapirs had originated in the Northern Hemisphere, migrating between North America and Eurasia and then, much more recently, to South America after which the northern species became extinct, leaving the modern distribution of two isolated groups of tapir species in South America and Southeast Asia. Wallace was very aware of, and interested in, the mass extinction of megafauna in the late Pleistocene. In "The Geographical Distribution of Animals" (1876) he wrote, "We live in a zoologically impoverished world, from which all the hugest, and fiercest, and strangest forms have recently disappeared". He added that he believed the most likely cause for the rapid extinctions was glaciation, but by the time he wrote "World of Life" (1911) he had come to believe those extinctions were "due to man's agency".
In 1880, Wallace published the book "Island Life" as a sequel to "The Geographical Distribution of Animals". It surveyed the distribution of both animal and plant species on islands. Wallace classified islands into oceanic and two types of continental islands.
Oceanic islands, such as the Galapagos and the Hawaiian Islands (then called the Sandwich Islands), formed in mid-ocean and were never part of any large continent. Such islands were characterised by a complete lack of terrestrial mammals and amphibians, and their inhabitants (except migratory birds and species introduced by humans) were typically the result of accidental colonisation and subsequent evolution.
Continental islands were divided into those that had recently been separated from a continent (like Britain) and those separated much longer ago (like Madagascar). Wallace discussed how that difference affected flora and fauna. He discussed how isolation affected evolution and how that could result in the preservation of classes of animals, such as the lemurs of Madagascar that were remnants of once widespread continental faunas. He extensively discussed how changes of climate, particularly periods of increased glaciation, may have affected the distribution of flora and fauna on some islands, and the first portion of the book discusses possible causes of these great ice ages. "Island Life" was considered a very important work at the time of its publication. It was discussed extensively in scientific circles both in published reviews and in private correspondence.
Wallace's extensive work in biogeography made him aware of the impact of human activities on the natural world. In "Tropical Nature and Other Essays" (1878), he warned about the dangers of deforestation and soil erosion, especially in tropical climates prone to heavy rainfall. Noting the complex interactions between vegetation and climate, he warned that the extensive clearing of rainforest for coffee cultivation in Ceylon (now called Sri Lanka) and India would adversely impact the climate in those countries and lead to their impoverishment due to soil erosion. In "Island Life", Wallace again mentioned deforestation and invasive species. On the impact of European colonisation on the island of Saint Helena, he wrote:
Wallace's comments on environment grew more strident later in his career. In "The World of Life" (1911) he wrote:
Wallace's 1904 book "Man's Place in the Universe" was the first serious attempt by a biologist to evaluate the likelihood of life on other planets. He concluded that the Earth was the only planet in the solar system that could possibly support life, mainly because it was the only one in which water could exist in the liquid phase. More controversially he maintained that it was unlikely that other stars in the galaxy could have planets with the necessary properties (the existence of other galaxies not having been proved at the time).
His treatment of Mars in this book was brief, and in 1907, Wallace returned to the subject with a book "Is Mars Habitable?" to criticise the claims made by Percival Lowell that there were Martian canals built by intelligent beings. Wallace did months of research, consulted various experts, and produced his own scientific analysis of the Martian climate and atmospheric conditions. Among other things, Wallace pointed out that spectroscopic analysis had shown no signs of water vapour in the Martian atmosphere, that Lowell's analysis of Mars's climate was seriously flawed and badly overestimated the surface temperature, and that low atmospheric pressure would make liquid water, let alone a planet-girding irrigation system, impossible. Richard Milner comments: "It was the brilliant and eccentric evolutionist Alfred Russel Wallace ... who effectively debunked Lowell's illusionary network of Martian canals." Wallace originally became interested in the topic because his anthropocentric philosophy inclined him to believe that man would likely be unique in the universe.
Wallace also wrote poetic verse, an example being 'A Description of Javita' from his book "Travels on the Amazon".
The poem begins:
'Tis where the streams divide, to swell the floods
Of the two mighty rivers of our globe;
Where gushing brooklets in their narrow beds'
There is an Indian village; all around,
The dark, eternal, boundless forest spreads
Its varied foliage. Stately palm-trees rise
On every side, and numerous trees unknown
Save by strange names uncouth to English ears.
Here I dwelt awhile, the one white man
Among perhaps two hundred living souls.
They pass a peaceful and contented life'
I'd be an Indian here, and live content
To fish, and hunt, and paddle my canoe,
And see my children grow, like young wild fawns,
In health of body and in peace of mind,
Rich without wealth, and happy without gold!
The poem is referenced and partially recited in the BBC television series 'The Ascent of Man'.
In a letter to his brother-in-law in 1861, Wallace wrote:
Wallace was an enthusiast of phrenology. Early in his career, he experimented with hypnosis, then known as mesmerism. He used some of his students in Leicester as subjects, with considerable success. When he began his experiments with mesmerism, the topic was very controversial and early experimenters, such as John Elliotson, had been harshly criticised by the medical and scientific establishment. Wallace drew a connection between his experiences with mesmerism and his later investigations into spiritualism. In 1893, he wrote:
Wallace began investigating spiritualism in the summer of 1865, possibly at the urging of his older sister Fanny Sims, who had been involved with it for some time. After reviewing the literature on the topic and attempting to test the phenomena he witnessed at séances, he came to accept that the belief was connected to a natural reality. For the rest of his life, he remained convinced that at least some séance phenomena were genuine, no matter how many accusations of fraud sceptics made or how much evidence of trickery was produced. Historians and biographers have disagreed about which factors most influenced his adoption of spiritualism. It has been suggested by one biographer that the emotional shock he had received a few months earlier, when his first fiancée broke their engagement, contributed to his receptiveness to spiritualism. Other scholars have preferred to emphasise instead Wallace's desire to find rational and scientific explanations for all phenomena, both material and non-material, of the natural world and of human society.
Spiritualism appealed to many educated Victorians who no longer found traditional religious doctrine, such as that of the Church of England, acceptable yet were unsatisfied with the completely materialistic and mechanical view of the world that was increasingly emerging from 19th-century science. However, several scholars who have researched Wallace's views in depth have emphasised that, for him, spiritualism was a matter of science and philosophy rather than religious belief. Among other prominent 19th-century intellectuals involved with spiritualism were the social reformer Robert Owen, who was one of Wallace's early idols, the physicists William Crookes and Lord Rayleigh, the mathematician Augustus De Morgan, and the Scottish publisher Robert Chambers.
During the 1860s, the stage magician John Nevil Maskelyne exposed the trickery of the Davenport brothers. Wallace was unable to accept that Maskelyne had replicated their feats using natural methods, and stated that Maskelyne possessed supernatural powers. However, in one of his writings Wallace dismissed Maskelyne, referring to a lecture exposing his tricks.
In 1874, Wallace visited the spirit photographer Frederick Hudson. A photograph of him with his deceased mother was produced and Wallace declared the photograph genuine, declaring "even if he had by some means obtained possession of all the photographs ever taken of my mother, they would not have been of the slightest use to him in the manufacture of these pictures. I see no escape from the conclusion that some spiritual being, acquainted with my mother's various aspects during life, produced these recognisable impressions on the plate." However, Hudson's photographs had previously been exposed as fraudulent in 1872.
Wallace's very public advocacy of spiritualism and his repeated defence of spiritualist mediums against allegations of fraud in the 1870s damaged his scientific reputation. In 1875, Wallace published the evidence he believed proved his position in his book "On Miracles and Modern Spiritualism", a compilation of essays written over a number of years. In his chapter entitled 'Modern Spiritualism: Evidence of Men of Science', Wallace refers to "three men of the highest eminence in their respective departments" – Professor De Morgan, Professor Hare and Judge Edmonds – who all investigated spiritualist phenomena. However, Wallace was only quoting their results and was not present at any of their investigations. His vehement defence of spiritualism strained his relationships with previously friendly scientists such as Henry Bates, Thomas Huxley, and even Darwin, who felt he was overly credulous. Evidence of this can be seen in Wallace's letters of 22 November and 1 December 1866 to Thomas Huxley, asking him if he would be interested in getting involved in scientific spiritualist investigations, which Huxley, politely but emphatically, declined on the basis that he had neither the time nor the inclination. Others, such as the physiologist William Benjamin Carpenter and the zoologist E. Ray Lankester, became openly and publicly hostile to Wallace over the issue. Wallace and other scientists who defended spiritualism, notably William Crookes, were subject to much criticism from the press, with "The Lancet", the leading English medical journal of the time, being particularly harsh. The controversy affected the public perception of Wallace's work for the rest of his career. When, in 1879, Darwin first tried to rally support among naturalists to get a civil pension awarded to Wallace, Joseph Hooker responded:
Hooker eventually relented and agreed to support the pension request.
In 1870, a flat-Earth proponent named John Hampden offered a £500 wager in a magazine advertisement to anyone who could demonstrate a convex curvature in a body of water such as a river, canal, or lake. Wallace, intrigued by the challenge and short of money at the time, designed an experiment in which he set up two objects along a six-mile (10 km) stretch of canal. Both objects were at the same height above the water, and he mounted a telescope on a bridge at the same height above the water as well. When seen through the telescope, one object appeared higher than the other, showing the curvature of the earth.
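The geometry behind Wallace's setup can be sketched numerically. For a straight sight line of length d between two points on a sphere of radius R, the water surface at the midpoint stands roughly d²/(8R) above that line. The figures below (a mean Earth radius of 6371 km and the six-mile stretch taken as 9.66 km) are illustrative assumptions, not measurements from Wallace's own account:

```python
def midpoint_bulge_m(chord_km: float, radius_km: float = 6371.0) -> float:
    """Height (in metres) by which a spherical water surface rises above
    the midpoint of a straight sight line of length chord_km, using the
    small-angle approximation h = d^2 / (8R)."""
    return (chord_km ** 2) / (8.0 * radius_km) * 1000.0  # km -> m

# A six-mile canal stretch is about 9.66 km.
print(f"expected midpoint bulge: {midpoint_bulge_m(9.66):.2f} m")
```

A bulge of nearly two metres at the midpoint is easily resolved through a telescope at equal heights above the water, which is consistent with the result Wallace reported.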
The judge for the wager, the editor of "Field" magazine, declared Wallace the winner, but Hampden refused to accept the result. He sued Wallace and launched a campaign, which persisted for several years, of writing letters to various publications and to organisations of which Wallace was a member denouncing him as a swindler and a thief. Wallace won multiple libel suits against Hampden, but the resulting litigation cost Wallace more than the amount of the wager, and the controversy frustrated him for years.
In the early 1880s, Wallace was drawn into the debate over mandatory smallpox vaccination. Wallace originally saw the issue as a matter of personal liberty; but, after studying some of the statistics provided by anti-vaccination activists, he began to question the efficacy of vaccination. At the time, the germ theory of disease was very new and far from universally accepted. Moreover, no one knew enough about the human immune system to understand why vaccination worked. When Wallace did some research, he discovered instances where supporters of vaccination had used questionable, and in a few cases entirely fabricated, statistics to support their arguments. Always suspicious of authority, Wallace suspected that physicians had a vested interest in promoting vaccination, and became convinced that reductions in the incidence of smallpox that had been attributed to vaccination were, in fact, due to better hygiene and improvements in public sanitation.
Another factor in Wallace's thinking was his belief that, because of the action of natural selection, organisms were in a state of balance with their environment, and that everything in nature, even disease-causing organisms, served a useful purpose in the natural order of things; he feared vaccination might upset that natural balance with unfortunate results. Wallace and other anti-vaccinationists pointed out that vaccination, which at the time was often done in a sloppy and unsanitary manner, could be dangerous.
In 1890, Wallace gave evidence before a Royal Commission investigating the controversy. When the commission examined the material he had submitted to support his testimony, they found errors, including some questionable statistics. "The Lancet" averred that Wallace and the other anti-vaccination activists were being selective in their choice of statistics, ignoring large quantities of data inconsistent with their position. The commission found that smallpox vaccination was effective and should remain compulsory, though they did recommend some changes in procedures to improve safety, and that the penalties for people who refused to comply be made less severe. Years later, in 1898, Wallace wrote a pamphlet, "Vaccination a Delusion; Its Penal Enforcement a Crime", attacking the commission's findings. It, in turn, was attacked by "The Lancet", which stated that it contained many of the same errors as his evidence given to the commission.
As a result of his writing, at the time of his death Wallace had been for many years a well-known figure both as a scientist and as a social activist. He was often sought out by journalists and others for his views on a variety of topics. He received honorary doctorates and a number of professional honours, such as the Royal Society's Royal Medal and Darwin Medal in 1868 and 1890 respectively, and the Order of Merit in 1908. Above all, his role as the co-discoverer of natural selection and his work on zoogeography marked him out as an exceptional figure.
He was undoubtedly one of the greatest natural history explorers of the 19th century. Despite this, his fame faded quickly after his death. For a long time, he was treated as a relatively obscure figure in the history of science. A number of reasons have been suggested for this lack of attention, including his modesty, his willingness to champion unpopular causes without regard for his own reputation, and the discomfort of much of the scientific community with some of his unconventional ideas.
Recently, he has become a less obscure figure with the publication of several book-length biographies of him, as well as anthologies of his writings. In 2007, a literary critic for "The New Yorker" magazine observed that five such biographies and two such anthologies had been published since 2000. A web page dedicated to Wallace scholarship has also been created.
In a 2010 book, the environmentalist Tim Flannery claimed that Wallace was 'the first modern scientist to comprehend how essential cooperation is to our survival,' and suggested that Wallace's understanding of natural selection and his later work on the atmosphere be seen as a forerunner to modern ecological thinking.
The Natural History Museum, London, co-ordinated commemorative events for the Wallace centenary worldwide in the 'Wallace100' project in 2013. On 24 January, his portrait was unveiled in the Main Hall of the museum by Bill Bailey, a fervent admirer. On the BBC Two programme "Bill Bailey's Jungle Hero", first broadcast on 21 April 2013, Bailey retraced Wallace's travels, revisiting the places where Wallace discovered exotic species to show how he arrived at his theory of evolution. Episode one featured orangutans and flying frogs in Bailey's journey through Borneo. Episode two featured birds of paradise. On 7 November 2013, the 100th anniversary of Wallace's death, Sir David Attenborough unveiled a statue of Wallace at the museum. The statue was donated by the A. R. Wallace Memorial Fund, and was sculpted by Anthony Smith. It depicts Wallace as a young man, collecting in the jungle. November 2013 also marked the debut of "The Animated Life of A. R. Wallace", a paper-puppet animation film dedicated to Wallace's centennial.
Wallace was a prolific author. In 2002, a historian of science published a quantitative analysis of Wallace's publications. He found that Wallace had published 22 full-length books and at least 747 shorter pieces, 508 of which were scientific papers (191 of them published in "Nature"). He further broke down the 747 short pieces by their primary subjects as follows: 29% were on biogeography and natural history, 27% were on evolutionary theory, 25% were social commentary, 12% were on anthropology, and 7% were on spiritualism and phrenology. An online bibliography of Wallace's writings has more than 750 entries.
A more comprehensive list of Wallace's publications that are available online, as well as a full bibliography of all of Wallace's writings, has been compiled by the historian Charles H. Smith at The Alfred Russel Wallace Page.
Australian Labor Party
The Australian Labor Party (ALP), also simply known as Labor and historically spelt Labour, is a major centre-left political party in Australia. The party has been in opposition at the federal level since the 2013 federal election. The party is a federal party with branches in each state and territory. Labor is in government in the states of Victoria, Queensland and Western Australia and also in the Australian Capital Territory and the Northern Territory. The party competes against the Liberal/National Coalition for political office at the federal, state and sometimes local levels. It is the oldest political party in Australia.
Labor's constitution has long stated: "The Australian Labor Party is a democratic socialist party and has the objective of the democratic socialisation of industry, production, distribution and exchange, to the extent necessary to eliminate exploitation and other anti-social features in these fields". This "socialist objective" was introduced in 1921, but was later qualified by two further objectives: "maintenance of and support for a competitive non-monopolistic private sector" and "the right to own private property". Labor governments have not attempted the "democratic socialisation" of any industry since the 1940s, when the Chifley Government failed to nationalise the private banks, and in fact have privatised several industries such as aviation and banking. Labor's current National Platform describes the party as "a modern social democratic party".
The ALP was not founded as a federal party until after the first sitting of the Australian Parliament in 1901. Nevertheless, it is regarded as descended from labour parties founded in the various Australian colonies by the emerging labour movement in Australia, formally beginning in 1891. Colonial labour parties contested seats from 1891, and federal seats following Federation at the 1901 federal election. The ALP formed the world's first labour party government as well as the world's first social democratic government at a national level. Labor was the first party in Australia to win a majority in either house of the Australian Parliament, at the 1910 federal election. At federal and state/colony level, the Australian Labor Party predates, among others, both the British Labour Party and the New Zealand Labour Party in party formation, government, and policy implementation. Internationally, the ALP is a member of the Progressive Alliance network of social-democratic parties, having previously been a member of the Socialist International.
In standard Australian English, the word "labour" is spelled with a "u" (much of Australian English is based on British English). However, the political party uses the spelling "Labor", without a "u". There was originally no standardised spelling of the party's name, with "Labor" and "Labour" both in common usage. According to Ross McMullin, who wrote an official history of the Labor Party, the title page of the proceedings of Federal Conference used the spelling "Labor" in 1902, "Labour" in 1905 and 1908, and then "Labor" from 1912 onwards. In 1908, James Catts put forward a motion at Federal Conference that "the name of the party be the Australian Labour Party", which was carried by 22 votes to two. A separate motion recommending state branches to adopt the name was defeated. There was no uniformity of party names until 1918, when Federal Conference resolved that state branches should adopt the name "Australian Labor Party" – now spelled without a "u". Each state branch had previously used a different name, due to their different origins.
Despite the ALP officially adopting the spelling without a "u", it took decades for the official spelling to achieve widespread acceptance. According to McMullin, "the way the spelling of 'Labor Party' was consolidated had more to do with the chap who ended up being in charge of printing the federal conference report than any other reason". Some sources have attributed the official choice of "Labor" to influence from King O'Malley, who was born in the United States and was reputedly an advocate of spelling reform; the spelling without a "u" is the standard form in American English. It has been suggested that the adoption of the spelling without a "u" "signified one of the ALP's earliest attempts at modernisation", and served the purpose of differentiating the party from the Australian labour movement as a whole and distinguishing it from other British Empire labour parties. The decision to include the word "Australian" in the party's name – rather than just "Labour Party" as in the United Kingdom – has been attributed to "the greater importance of nationalism for the founders of the colonial parties".
The Australian Labor Party has its origins in the Labour parties founded in the 1890s in the Australian colonies prior to federation. Labor tradition ascribes the founding of Queensland Labour to a meeting of striking pastoral workers under a ghost gum tree (the "Tree of Knowledge") in Barcaldine, Queensland in 1891. The Balmain, New South Wales branch of the party claims to be the oldest in Australia. Labour as a parliamentary party dates from 1891 in New South Wales and South Australia, 1893 in Queensland, and later in the other colonies.
The first election contested by Labour candidates was the 1891 New South Wales election, when Labour candidates (then called the Labor Electoral League of New South Wales) won 35 of 141 seats. The major parties were the Protectionist and Free Trade parties, and Labour held the balance of power. It offered parliamentary support in exchange for policy concessions. The United Labor Party (ULP) of South Australia was founded in 1891, and three candidates were elected that year to the South Australian Legislative Council. The first successful South Australian House of Assembly candidate was John McPherson at the 1892 East Adelaide by-election. Richard Hooper, however, was elected as an Independent Labor candidate at the 1891 Wallaroo by-election; while he was the first "labor" member of the House of Assembly, he was not a member of the newly formed ULP.
At the 1893 South Australian elections the ULP was immediately elevated to balance of power status with 10 of 54 lower house seats. The liberal government of Charles Kingston was formed with the support of the ULP, ousting the conservative government of John Downer. Building on this success, less than a decade later Thomas Price formed the world's first stable Labor government at the 1905 state election. John Verran led Labor to form the state's first of many majority governments at the 1910 state election.
In 1899, Anderson Dawson formed a minority Labour government in Queensland, the first in the world, which lasted one week while the conservatives regrouped after a split.
The colonial Labour parties and the trade unions were mixed in their support for the Federation of Australia. Some Labour representatives argued against the proposed constitution, claiming that the Senate as proposed was too powerful, similar to the anti-reformist colonial upper houses and the British House of Lords. They feared that federation would further entrench the power of the conservative forces. However, the first Labour leader and Prime Minister Chris Watson was a supporter of federation.
Historian Celia Hamilton, examining New South Wales, argues for the central role of Irish Catholics. Before 1890, they opposed Henry Parkes, the main Liberal leader, and free trade, seeing both as the ideals of Protestant Englishmen who represented landholding and large business interests. In the strike of 1890 the leading Catholic, Sydney's Archbishop Patrick Francis Moran, was sympathetic toward unions, but Catholic newspapers were negative. After 1900, says Hamilton, Irish Catholics were drawn to the Labour Party because its stress on equality and social welfare fitted with their status as manual labourers and small farmers. In the 1910 elections Labour gained in the more Catholic areas and the representation of Catholics increased in Labour's parliamentary ranks.
The first federal election in 1901 was contested by each state Labour Party. In total, they won 14 of the 75 seats in the House of Representatives, collectively holding the balance of power, and the Labour members first met as the Federal Parliamentary Labour Party (informally known as the caucus) on 8 May 1901 at Parliament House, Melbourne, the meeting place of the first federal Parliament. The caucus decided to support the incumbent Protectionist Party in minority government, while the Free Trade Party formed the opposition. It was some years before there was any significant structure or organisation at a national level. Labour under Chris Watson doubled its vote at the 1903 federal election and continued to hold the balance of power. In April 1904, however, Watson and Alfred Deakin fell out over the issue of extending the scope of industrial relations laws concerning the Conciliation and Arbitration Bill to cover state public servants, the fallout causing Deakin to resign. Free Trade leader George Reid declined to take office, which saw Watson become the first Labour Prime Minister of Australia, and the world's first Labour head of government at a national level (Anderson Dawson had led a short-lived Labour government in Queensland in December 1899), though his was a minority government that lasted only four months. He was aged only 37, and is still the youngest Prime Minister in Australia's history.
George Reid of the Free Trade Party adopted a strategy of trying to reorient the party system along Labour vs. non-Labour lines prior to the 1906 federal election and renamed his Free Trade Party to the Anti-Socialist Party. Reid envisaged a spectrum running from socialist to anti-socialist, with the Protectionist Party in the middle. This attempt struck a chord with politicians who were steeped in the Westminster tradition and regarded a two-party system as very much the norm.
Although Watson further strengthened Labour's position in 1906, he stepped down from the leadership the following year, to be succeeded by Andrew Fisher who formed a minority government lasting seven months from late 1908 to mid 1909. At the 1910 federal election, Fisher led Labor to victory, forming Australia's first elected federal majority government, Australia's first elected Senate majority, the world's first Labour Party majority government at a national level, and after the 1904 Chris Watson minority government the world's second Labour Party government at a national level. It was the first time a Labour Party had controlled any house of a legislature, and the first time the party controlled both houses of a bicameral legislature. The state branches were also successful, except in Victoria, where the strength of Deakinite liberalism inhibited the party's growth. The state branches formed their first majority governments in New South Wales and South Australia in 1910, Western Australia in 1911, Queensland in 1915 and Tasmania in 1925. Such success eluded equivalent social democratic and labour parties in other countries for many years.
Analysis of the early NSW Labor caucus reveals "a band of unhappy amateurs", made up of blue collar workers, a squatter, a doctor, and even a mine owner, indicating that the idea that only the socialist working class formed Labor is untrue. In addition, many members from the working class supported the liberal notion of free trade between the colonies; in the first grouping of state MPs, 17 of the 35 were free-traders.
In the aftermath of World War I and the Russian Revolution of 1917, support for socialism grew in trade union ranks, and at the 1921 All-Australian Trades Union Congress a resolution was passed calling for "the socialisation of industry, production, distribution and exchange." The 1922 Labor Party National Conference adopted a similarly worded "socialist objective," which remained official policy for many years. The resolution was immediately qualified, however, by the "Blackburn amendment," which said that "socialisation" was desirable only where it was necessary to "eliminate exploitation and other anti-social features." In practice the socialist objective was a dead letter. Only once has a federal Labor government attempted to nationalise any industry (Ben Chifley's bank nationalisation of 1947), and that was held by the High Court to be unconstitutional. The commitment to nationalisation was dropped by Gough Whitlam, and Bob Hawke's government carried out many free market reforms including the floating of the dollar and privatisation of state enterprises such as Qantas airways and the Commonwealth Bank.
The Labor Party is commonly described as a social democratic party, and its constitution stipulates that it is a democratic socialist party. The party was created by, and has always been influenced by, the trade unions, and in practice its policy at any given time has usually been the policy of the broader labour movement. Thus at the first federal election in 1901, Labor's platform called for a White Australia policy, a citizen army and compulsory arbitration of industrial disputes. Labor has at various times supported high tariffs and low tariffs, conscription and pacifism, White Australia and multiculturalism, nationalisation and privatisation, isolationism and internationalism.
Historically, Labor and its affiliated unions were strong defenders of the White Australia policy, which banned all non-European migration to Australia. This policy was partly motivated by 19th century theories about "racial purity" and by fears of economic competition from low-wage overseas workers, which were shared by the vast majority of Australians and all major political parties. In practice the Labor party opposed all migration, on the grounds that immigrants competed with Australian workers and drove down wages, until after World War II, when the Chifley Government launched a major immigration program. The party's opposition to non-European immigration did not change until after the retirement of Arthur Calwell as leader in 1967. Subsequently, Labor has become an advocate of multiculturalism, although some of its trade union base and some of its members continue to oppose high immigration levels.
The Curtin and Chifley governments governed Australia through the latter half of the Second World War and initial stages of transition to peace. Labor leader John Curtin became prime minister in October 1941 when two independents crossed the floor of Parliament. Labor, led by Curtin, then led Australia through the years of the Pacific War. In December 1941, Curtin announced that "Australia looks to America, free of any pangs as to our traditional links or kinship with the United Kingdom", thus helping to establish the Australian-American alliance (later formalised as ANZUS by the Menzies Government). Remembered as a strong war time leader and for a landslide win at the 1943 federal election, Curtin died in office just prior to the end of the war and was succeeded by Ben Chifley. Chifley Labor won the 1946 federal election and oversaw Australia's initial transition to a peacetime economy.
Labor was defeated at the 1949 federal election. At the conference of the New South Wales Labor Party in June 1949, Chifley sought to define the labour movement as follows:
To a large extent, Chifley saw centralisation of the economy as the means to achieve such ambitions. With an increasingly uncertain economic outlook, after his attempt to nationalise the banks and a strike by the Communist-dominated Miners' Federation, Chifley lost office in 1949 to Robert Menzies' Liberal-National Coalition. Labor commenced a 23-year period in opposition. The party was primarily led during this time by H. V. Evatt and Arthur Calwell.
Various ideological beliefs were factionalised under reforms to the ALP under Gough Whitlam, resulting in what is now known as the Socialist Left, who tend to favour a more interventionist economic policy and more socially progressive ideals, and Labor Right, the now dominant faction that tends to be more economically liberal and to focus to a lesser extent on social issues. The Whitlam Labor government, marking a break with Labor's socialist tradition, pursued social-democratic policies rather than democratic socialist policies. In contrast to earlier Labor leaders, Whitlam also cut tariffs by 25 percent. Whitlam led the Federal Labor Party back to office at the 1972 federal election, retained office at the 1974 federal election, and passed a large amount of legislation. The Whitlam Government lost office following the 1975 Australian constitutional crisis, in which Governor-General John Kerr dismissed the government after the Coalition blocked supply in the Senate amid a series of political scandals; Labor was then defeated at the 1975 federal election. Whitlam remains the only Prime Minister to have his commission terminated in that manner. Whitlam also lost the 1977 federal election and subsequently resigned as leader.
Bill Hayden succeeded Whitlam as leader. At the 1980 federal election the party gained more seats but still lost. In 1983, Bob Hawke became leader of the party after Hayden resigned to avoid a leadership spill.
Bob Hawke led Labor back to office at the 1983 federal election, and the party won four elections under Hawke. In December 1991, Paul Keating defeated Hawke in a leadership spill, and the party went on to win the 1993 federal election under Keating. The Hawke–Keating Government was in power for 13 years over five terms until defeated by John Howard at the 1996 federal election – the longest period the party has been in government.
Kim Beazley led the party to the 1998 federal election, winning 51 percent of the two-party-preferred vote but falling short on seats, and lost ground at the 2001 federal election. Mark Latham led Labor to the 2004 federal election but lost further ground. Beazley replaced Latham in 2005. Beazley in turn was challenged by Kevin Rudd.
Rudd went on to defeat John Howard at the 2007 federal election with 52.7 percent of the two-party vote. The Rudd Government ended prior to the 2010 federal election with the replacement of Rudd as leader of the Party by deputy leader Julia Gillard. The Gillard Government was commissioned to govern in a hung parliament following the election with a one-seat parliamentary majority and 50.12 percent of the two-party vote. The Gillard government lasted until 2013 when Gillard lost a leadership spill with Rudd becoming leader once again. The party subsequently lost the 2013 federal election.
After the 2013 election, Rudd resigned as leader and Bill Shorten became leader of the party. The party narrowly lost the 2016 federal election; however, it gained 14 seats, leaving it seven seats short of majority government. It remained in opposition after the 2019 federal election despite having been ahead in opinion polls for two years, and lost some of the seats it had gained at the previous election. After the 2019 election, Shorten stood down as leader and Anthony Albanese was elected as leader unopposed.
Between the 2007 federal election and the 2008 Western Australian state election, Labor was in government nationally and in all eight state and territory legislatures. This was the first time any single party or any coalition had achieved this since the ACT and the NT gained self-government. Labor narrowly lost government in Western Australia at the 2008 state election and Victoria at the 2010 state election. These losses were further compounded by landslide defeats in New South Wales in 2011, Queensland in 2012, the Northern Territory in 2012, Federally in 2013 and Tasmania in 2014. Labor secured a good result in the Australian Capital Territory in 2012 and, despite losing its majority, the party retained government in South Australia in 2014.
However, most of these reversals proved only temporary with Labor returning to government in Victoria in 2014 and in Queensland in 2015 after spending only one term in opposition in both states. Furthermore, after winning the 2014 Fisher by-election by nine votes from a 7.3 percent swing, the Labor government in South Australia went from minority to majority government. Labor won landslide victories in the 2016 Northern Territory election, the 2017 Western Australian election and the 2018 Victorian state election. However, Labor lost the 2018 South Australian state election after 16 years in government. Despite favourable polling, the party also did not return to government in the 2019 New South Wales state election or the 2019 federal election. The latter has been considered a historic upset due to Labor's consistent and significant polling lead; the result has been likened to the Coalition's loss in the 1993 federal election, with 2019 retrospectively referred to as the "unloseable election".
The policy of the Australian Labor Party is contained in its National Platform, which is approved by delegates to Labor's National Conference, held every three years. According to the Labor Party's website, "The Platform is the result of a rigorous and constructive process of consultation, spanning the nation and including the cooperation and input of state and territory policy committees, local branches, unions, state and territory governments, and individual Party members. The Platform provides the policy foundation from which we can continue to work towards the election of a federal Labor Government."
The platform gives a general indication of the policy direction which a future Labor government would follow, but does not commit the party to specific policies. It maintains that "Labor's traditional values will remain a constant on which all Australians can rely." While making it clear that Labor is fully committed to a market economy, it says that: "Labor believes in a strong role for national government – the one institution all Australians truly own and control through our right to vote." Labor "will not allow the benefits of change to be concentrated in fewer and fewer hands, or located only in privileged communities. The benefits must be shared by all Australians and all our regions." The platform and Labor "believe that all people are created equal in their entitlement to dignity and respect, and should have an equal chance to achieve their potential." For Labor, "government has a critical role in ensuring fairness by: ensuring equal opportunity; removing unjustifiable discrimination; and achieving a more equitable distribution of wealth, income and status." Further sections of the platform stress Labor's support for equality and human rights, labour rights and democracy.
In practice, the platform provides only general policy guidelines to Labor's federal, state and territory parliamentary leaderships. The policy Labor takes into an election campaign is determined by the Cabinet (if the party is in office) or the Shadow Cabinet (if it is in opposition), in consultation with key interest groups within the party, and is contained in the parliamentary Leader's policy speech delivered during the election campaign. When Labor is in office, the policies it implements are determined by the Cabinet, subject to the platform. Generally, it is accepted that while the platform binds Labor governments, how and when it is implemented remains the prerogative of the parliamentary caucus. It is now rare for the platform to conflict with government policy, as the content of the platform is usually developed in close collaboration with the party's parliamentary leadership as well as the factions. However, where there is a direct contradiction with the platform, Labor governments have sought to change the platform as a prerequisite for a change in policy. For example, privatisation legislation under the Hawke government occurred only after holding a special national conference to debate changing the platform.
The Australian Labor Party National Executive is the party's chief administrative authority, subject only to Labor's national conference. The executive is responsible for organising the triennial national conference; carrying out the decisions of the conference; interpreting the national constitution, the national platform and decisions of the national conference; and directing federal members.
The party holds a national conference every three years, which consists of delegates representing the state and territory branches (many coming from affiliated trade unions, although there is no formal requirement for unions to be represented at the national conference). The national conference decides the party's platform, elects the national executive and appoints office-bearers such as the national secretary, who also serves as national campaign director during elections. The current national secretary is Paul Erickson. The most recent national conference was the 48th conference held in December 2018.
The head office of the ALP, the national secretariat, is managed by the national secretary. It plays a dual role of administration and a national campaign strategy. It acts as a permanent secretariat to the national executive by managing and assisting in all administrative affairs of the party. As the national secretary also serves as national campaign director during elections, it is also responsible for the national campaign strategy and organisation.
The elected members of the Labor party in both houses of the national Parliament meet as the Federal Parliamentary Labor Party, also known as the Australian Labor Party Caucus (see also caucus). Besides discussing parliamentary business and tactics, the Caucus also is involved in the election of the federal parliamentary leaders.
Until 2013, the parliamentary leaders were elected by the Caucus from among its members. The leader has historically been a member of the House of Representatives. Since October 2013, the party leader and deputy leader have been determined by a ballot of both the Caucus and the Labor Party's rank-and-file members. When the Labor Party is in government, the party leader is the Prime Minister and the deputy leader is the Deputy Prime Minister. If a Labor prime minister resigns or dies in office, the deputy leader acts as prime minister and party leader until a successor is elected. The deputy prime minister also acts as prime minister when the prime minister is on leave or out of the country. Members of the Ministry are also chosen by Caucus, though the leader may allocate portfolios to the ministers.
The Australian Labor Party is a federal party, consisting of eight branches, one from each state and territory. While the National Executive is responsible for national campaign strategy, each state and territory branch is autonomous and responsible for campaigning in its own jurisdiction for federal, state and local elections. State and territory branches consist of both individual members and affiliated trade unions, who between them decide the party's policies, elect its governing bodies and choose its candidates for public office.
Members join a state branch and pay a membership fee, which is graduated according to income. The majority of trade unions in Australia are affiliated to the party at a state level. Union affiliation is direct and not through the Australian Council of Trade Unions. Affiliated unions pay an affiliation fee based on the size of their membership. Union affiliation fees make up a large part of the party's income. Another source of funds for the party are political donations and public funding.
Members are generally expected to attend at least one meeting of their local branch each year, although there are differences in the rules from state to state. In practice only a dedicated minority regularly attend meetings. Many members are only active during election campaigns.
The members and unions elect delegates to state and territory conferences (usually held annually, although more frequent conferences are often held). These conferences decide policy, and elect state or territory executives, a state or territory president (an honorary position usually held for a one-year term), and a state or territory secretary (a full-time professional position). However, ACT Labor directly elects its president. The larger branches also have full-time assistant secretaries and organisers. In the past the ratio of conference delegates coming from the branches and affiliated unions has varied from state to state, however under recent national reforms at least 50% of delegates at all state and territory conferences must be elected by branches.
In some states it also contests local government elections or endorses local candidates. In others it does not, preferring to allow its members to run as non-endorsed candidates. The process of choosing candidates is called preselection. Candidates are preselected by different methods in the various states and territories. In some they are chosen by ballots of all party members, in others by panels or committees elected by the state conference, in still others by a combination of these two.
Country Labor is a subsection of the ALP, and is used as a designation by candidates contesting elections in rural areas. It functions as a sort of ginger group within the party, and is somewhat analogous to its youth wing. The Country Labor Party is registered as a separate party in New South Wales, and is also registered with the Australian Electoral Commission (AEC) for federal elections. It does not have the same status in other states and, consequently, that designation cannot be used on the ballot paper.
The creation of a separate designation for rural candidates was first suggested at the June 1999 ALP state conference in New South Wales. In May 2000, following Labor's success at the 2000 Benalla by-election in Victoria, Kim Beazley announced that the ALP intended to register a separate "Country Labor Party" with the AEC; this occurred in October 2000. The Country Labor designation is most frequently used in New South Wales. According to the ALP's financial statements for the 2015–16 financial year, NSW Country Labor had around 2,600 members (around 17 percent of the party total), but almost no assets. It recorded a severe funding shortfall at the 2015 New South Wales election, and had to rely on a $1.68-million loan from the party proper to remain solvent. It had been initially assumed that the party proper could provide the money from its own resources, but the NSW Electoral Commission ruled that this was impermissible because the parties were registered separately. Instead the party proper had to loan Country Labor the required funds at a commercial interest rate.
Australian Young Labor is the youth wing of the Australian Labor Party, to which all party members under the age of 26 automatically belong. It is the peak youth body within the ALP. Former presidents of AYL have included former NSW Premier Bob Carr, Federal Manager of Opposition Business Tony Burke, former Special Minister of State Senator John Faulkner, and former Australian Workers Union National Secretary, current Member for Maribyrnong and former Federal Labor Leader Bill Shorten, as well as dozens of State Ministers and MPs. The current National President is Jason Byrne from South Australia.
The Australian Labor Party is beginning to formally recognise single interest groups within the party. The national platform currently encourages state branches to formally establish these groups known as policy action caucuses. Examples of such groups include the Labor Environment Action Network, Rainbow Labor, and Labor for Refugees. The Tasmanian Branch of the Australian Labor Party recently gave these groups voting and speaking rights at their state conference.
The Labor Party has always had a left wing and a right wing, but since the 1970s it has been organised into formal factions, to which party members may belong and often pay an additional membership fee. The two largest factions are Labor Unity (National Right) and the Socialist Left (National Left). Labor Unity generally supports free-market policies and the US alliance and tends to be conservative on some social issues. The Socialist Left, although it seldom openly espouses socialism, favours more state intervention in the economy, is generally less enthusiastic about the US alliance and is often more liberal on social issues. The national factions are themselves divided into sub-factions, primarily state-based such as Centre Unity in New South Wales and Labor Forum in Queensland.
Some trade unions are affiliated with the Labor Party and are also factionally aligned. The largest unions supporting the right faction are the Australian Workers' Union (AWU), the Shop, Distributive and Allied Employees' Association (SDA) and the Transport Workers Union (TWU). Important unions supporting the left include the Australian Manufacturing Workers Union (AMWU), United Workers Union, the Construction, Forestry, Maritime, Mining and Energy Union (CFMMEU) and the Community and Public Sector Union (CPSU).
Preselections are usually conducted along factional lines, although sometimes a non-factional candidate will be given preferential treatment (this happened with Cheryl Kernot in 1998 and again with Peter Garrett in 2004). Deals between the factions to divide up the safe seats between them often take place. Preselections, particularly for safe Labor seats, can sometimes be strongly contested. A particularly fierce preselection sometimes gives rise to accusations of branch stacking (signing up large numbers of nominal party members to vote in preselection ballots), personation, multiple voting and, on occasions, fraudulent electoral enrolment. Trade unions were in the past accused of giving inflated membership figures to increase their influence over preselections, but party rules changes have stamped out this practice. Preselection results are sometimes challenged, and the National Executive is sometimes called on to arbitrate these disputes.
Anthony Albanese is the leader of the federal Labor party, serving since 30 May 2019. The deputy leader is Richard Marles, also serving since 30 May 2019.
The current leaders of state and territory Labor branches are the following:
For the 2015–2016 financial year, the top ten disclosed donors to the ALP were the Health Services Union NSW ($389,000), Village Roadshow ($257,000), Electrical Trades Union of Australia ($171,000), National Automotive Leasing and Salary Packaging Association ($153,000), Westfield Corporation ($150,000), Randazzo C&G Developments ($120,000), Macquarie Telecom ($113,000), Woodside Energy ($110,000), ANZ Bank ($100,000) and Ying Zhou ($100,000).
The Labor Party also receives undisclosed funding through several methods, such as "associated entities". John Curtin House, Industry 2020, IR21 and the Happy Wanderers Club are entities which have been used to funnel donations to the Labor Party without disclosing the source.
A 2019 report found that the Labor Party received $33,000 from pro-gun groups during the 2011–2018 period, threatening to undermine Australian gun control laws. However, the Coalition received over $82,000 in donations from pro-gun groups, more than double the amount received by Labor.
Animal Farm
Animal Farm is an allegorical novella by George Orwell, first published in England on 17 August 1945. The book tells the story of a group of farm animals who rebel against their human farmer, hoping to create a society where the animals can be equal, free, and happy. Ultimately, however, the rebellion is betrayed, and the farm ends up in a state as bad as it was before, under the dictatorship of a pig named Napoleon.
According to Orwell, the fable reflects events leading up to the Russian Revolution of 1917 and then on into the Stalinist era of the Soviet Union. Orwell, a democratic socialist, was a critic of Joseph Stalin and hostile to Moscow-directed Stalinism, an attitude that was critically shaped by his experiences during the Spanish Civil War. The Soviet Union had become a brutal dictatorship built upon a cult of personality and enforced by a reign of terror. In a letter to Yvonne Davet, Orwell described "Animal Farm" as a satirical tale against Stalin, and in his essay "Why I Write" (1946), wrote that "Animal Farm" was the first book in which he tried, with full consciousness of what he was doing, "to fuse political purpose and artistic purpose into one whole".
The original title was "Animal Farm: A Fairy Story," but U.S. publishers dropped the subtitle when it was published in 1946, and only one of the translations during Orwell's lifetime kept it. Other titular variations include subtitles like "A Satire" and "A Contemporary Satire". Orwell suggested the title "Union des républiques socialistes animales" for the French translation, which abbreviates to URSA, the Latin word for "bear", a symbol of Russia. It also played on the French name of the Soviet Union, "Union des républiques socialistes soviétiques".
Orwell wrote the book between November 1943 and February 1944, when the United Kingdom was in its wartime alliance with the Soviet Union against Nazi Germany, and the British intelligentsia held Stalin in high esteem, a phenomenon Orwell hated. The manuscript was initially rejected by a number of British and American publishers, including one of Orwell's own, Victor Gollancz, which delayed its publication. It became a great commercial success when it did appear partly because international relations were transformed as the wartime alliance gave way to the Cold War.
"Time" magazine chose the book as one of the 100 best English-language novels (1923 to 2005); it also featured at number 31 on the Modern Library List of Best 20th-Century Novels, and number 46 on the BBC's The Big Read poll. It won a Retrospective Hugo Award in 1996 and is included in the Great Books of the Western World selection.
The poorly-run Manor Farm near Willingdon, England, is ripe for rebellion, its animal populace neglected at the hands of the irresponsible and alcoholic farmer, Mr. Jones. One night, the exalted boar, Old Major, holds a meeting at which he calls for the overthrow of humans and teaches the animals a revolutionary song called "Beasts of England". When Old Major dies, two young pigs, Snowball and Napoleon, assume command and stage a revolt, driving Mr. Jones off the farm and renaming the property "Animal Farm". They adopt the Seven Commandments of Animalism, the most important of which is, "All animals are equal". The decree is painted in large letters on one side of the barn. Snowball teaches the animals to read and write, while Napoleon educates young puppies on the principles of Animalism. Food is plentiful, and the farm runs smoothly. The pigs elevate themselves to positions of leadership and set aside special food items, ostensibly for their personal health. Following an unsuccessful attempt by Mr. Jones and his associates to retake the farm (later dubbed the "Battle of the Cowshed"), Snowball announces his plans to modernize the farm by building a windmill. Napoleon argues against this idea, and matters come to a head, culminating in Napoleon's dogs chasing Snowball away and Napoleon declaring himself supreme commander.
Napoleon enacts changes to the governance structure of the farm, replacing meetings with a committee of pigs who will run the farm. Through a young pig named Squealer, Napoleon takes credit for the windmill idea, claiming that Snowball had only been trying to win the animals to his side. The animals work harder with the promise of easier lives with the windmill. When the animals find the windmill collapsed after a violent storm, Napoleon and Squealer convince the animals that Snowball is trying to sabotage their project and begin to purge the farm of animals Napoleon accuses of consorting with his old rival. When some animals recall the Battle of the Cowshed, Napoleon (who was nowhere to be found during the battle) gradually smears Snowball to the point of saying he is a collaborator of Mr. Jones, even dismissing the fact that Snowball was given an award of courage while falsely representing himself as the main hero of the battle. "Beasts of England" is replaced with "Animal Farm", while an anthem glorifying Napoleon, who appears to be adopting the lifestyle of a man ("Comrade Napoleon"), is composed and sung. Many animals who later claim to have aided Snowball in his plots are executed by Napoleon's dogs, which troubles the rest of the animals. Despite their hardships, the animals are easily placated by Napoleon's retort that they are better off than they were under Mr. Jones, as well as by the sheep's continual bleating of "four legs good, two legs bad".
Mr. Frederick, a neighbouring farmer, attacks the farm, using blasting powder to blow up the restored windmill. Although the animals win the battle, they do so at great cost, as many, including Boxer the workhorse, are wounded. Although he recovers from this, Boxer eventually collapses while working on the windmill (being almost 12 years old at that point). He is taken away in a knacker's van, and a donkey called Benjamin alerts the animals of this, but Squealer quickly waves off their alarm by persuading the animals that the van had been purchased from the knacker by an animal hospital and that the previous owner's signboard had not been repainted. Squealer subsequently reports Boxer's death and honours him with a festival the following day. (However, Napoleon had in fact engineered the sale of Boxer to the knacker, allowing him and his inner circle to acquire money to buy whisky for themselves.)
Years pass, the windmill is rebuilt, and another windmill is constructed, which earns the farm a good amount of income. However, the ideals that Snowball discussed, including stalls with electric lighting, heating, and running water, are forgotten, with Napoleon advocating that the happiest animals live simple lives. In addition to Boxer, many of the animals who participated in the rebellion are dead or old. Mr. Jones, having moved away after giving up on reclaiming his farm, has also died. The pigs start to resemble humans, as they walk upright, carry whips, drink alcohol, and wear clothes. The Seven Commandments are abridged to just one phrase: "All animals are equal, but some animals are more equal than others." The maxim "Four legs good, two legs bad" is changed to "Four legs good, two legs better." Napoleon holds a dinner party for the pigs and local farmers, with whom he celebrates a new alliance. He abolishes the practice of the revolutionary traditions and restores the name "The Manor Farm". The men and pigs start playing cards, flattering and praising each other while cheating at the game. Both Napoleon and Mr. Pilkington, one of the farmers, play the Ace of Spades at the same time and both sides begin fighting loudly over who cheated first. When the animals outside look at the pigs and men, they can no longer distinguish between the two.
George Orwell's "Animal Farm" is an example of a political satire that was intended, according to Orwell himself, to have a "wider application" in terms of its relevance. Stylistically, the work shares many similarities with some of Orwell's other works, most notably "1984," as both have been considered works of Swiftian satire. Furthermore, these two prominent works seem to suggest Orwell's bleak view of the future for humanity; he seems to stress the potential or current threat of dystopias similar to those in "Animal Farm" and "1984". In these kinds of works, Orwell distinctly references the disarray and traumatic conditions of Europe following the Second World War. Orwell's style and writing philosophy as a whole were very concerned with the pursuit of truth in writing. Orwell was committed to communicating in a way that was straightforward, given the way that he felt words were commonly used in politics to deceive and confuse. For this reason, he is careful, in "Animal Farm", to make sure the narrator speaks in an unbiased and uncomplicated fashion. The difference is seen in the way that the animals speak and interact, as the generally moral animals seem to speak their minds clearly, while the wicked animals on the farm, such as Napoleon, twist language in such a way that it meets their own insidious desires. This style reflects Orwell's close proximity to the issues facing Europe at the time and his determination to comment critically on Stalin's Soviet Russia.
George Orwell wrote the manuscript in 1943 and 1944 after his experiences during the Spanish Civil War, which he described in "Homage to Catalonia" (1938). In the preface of a 1947 Ukrainian edition of "Animal Farm", he explained how escaping the communist purges in Spain taught him "how easily totalitarian propaganda can control the opinion of enlightened people in democratic countries." This motivated Orwell to expose and strongly condemn what he saw as the Stalinist corruption of the original socialist ideals. "Homage to Catalonia" sold poorly; after seeing Arthur Koestler's best-selling "Darkness at Noon", about the Stalinist purges, Orwell decided that fiction was the best way to describe totalitarianism.
Immediately prior to writing the book, Orwell had quit the BBC. He was also upset about a booklet for propagandists the Ministry of Information had put out. The booklet included instructions on how to quell ideological fears of the Soviet Union, such as directions to claim that the Red Terror was a figment of Nazi imagination.
In the preface, Orwell described the source of the idea of setting the book on a farm:
Orwell initially encountered difficulty getting the manuscript published, largely due to fears that the book might upset the alliance between Britain, the United States, and the Soviet Union. Four publishers refused "Animal Farm"; one had initially accepted the work but declined it after consulting the Ministry of Information. Eventually, Secker and Warburg published the first edition in 1945.
During the Second World War, it became clear to Orwell that anti-Soviet literature was not something which most major publishing houses would touch—including his regular publisher Gollancz. He also submitted the manuscript to Faber and Faber, where the poet T. S. Eliot (who was a director of the firm) rejected it; Eliot wrote back to Orwell praising the book's "good writing" and "fundamental integrity", but declared that they would only accept it for publication if they had some sympathy for the viewpoint "which I take to be generally Trotskyite". Eliot said he found the view "not convincing", and contended that the pigs were made out to be the best to run the farm; he posited that someone might argue "what was needed... was not more communism but more public-spirited pigs". Orwell let André Deutsch, who was working for Nicholson & Watson in 1944, read the typescript, and Deutsch was convinced that Nicholson & Watson would want to publish it; however, they did not, and "lectured Orwell on what they perceived to be errors in "Animal Farm"." In his "London Letter" on 17 April 1944 for "Partisan Review", Orwell wrote that it was "now next door to impossible to get anything overtly anti-Russian printed. Anti-Russian books do appear, but mostly from Catholic publishing firms and always from a religious or frankly reactionary angle."
The publisher Jonathan Cape, who had initially accepted "Animal Farm", subsequently rejected the book after an official at the British Ministry of Information warned him off. Writing to Leonard Moore, a partner in the literary agency of Christy & Moore, Cape explained that the decision had been taken on the advice of a senior official in the Ministry of Information: such flagrant anti-Soviet bias was unacceptable, and the choice of pigs as the dominant class was thought to be especially offensive. It may reasonably be assumed that the "important official" was Peter Smollett, who was later unmasked as a Soviet agent. Orwell was suspicious of Smollett/Smolka, and he would be one of the names Orwell included in his list of Crypto-Communists and Fellow-Travellers sent to the Information Research Department in 1949. The publisher wrote to Orwell, saying:
Frederic Warburg also faced pressures against publication, even from people in his own office and from his wife Pamela, who felt that it was not the moment for ingratitude towards Stalin and the heroic Red Army, which had played a major part in defeating Adolf Hitler. A Russian translation was printed in the paper "Posev", and in giving permission for a Russian translation of "Animal Farm", Orwell refused in advance all royalties. A translation in Ukrainian, which was produced in Germany, was confiscated in large part by the American wartime authorities and handed over to the Soviet repatriation commission.
In October 1945, Orwell wrote to Frederic Warburg expressing interest in pursuing the possibility that the political cartoonist David Low might illustrate "Animal Farm". Low had written a letter saying that he had had "a good time with "ANIMAL FARM"—an excellent bit of satire—it would illustrate perfectly." Nothing came of this, and a trial issue produced by Secker & Warburg in 1956 illustrated by John Driver was abandoned, but the Folio Society published an edition in 1984 illustrated by Quentin Blake and an edition illustrated by the cartoonist Ralph Steadman was published by Secker & Warburg in 1995 to celebrate the fiftieth anniversary of the first edition of "Animal Farm".
Orwell originally wrote a preface complaining about British self-censorship and how the British people were suppressing criticism of the USSR, their World War II ally:
Although the first edition allowed space for the preface, it was not included, and as of June 2009 most editions of the book have not included it.
Secker and Warburg published the first edition of "Animal Farm" in 1945 without an introduction. However, the publisher had provided space for a preface in the author's proof composited from the manuscript. For reasons unknown, no preface was supplied, and the page numbers had to be renumbered at the last minute.
In 1972, Ian Angus found the original typescript titled "The Freedom of the Press", and Bernard Crick published it, together with his own introduction, in "The Times Literary Supplement" on 15 September 1972 as "How the essay came to be written". Orwell's essay criticised British self-censorship by the press, specifically the suppression of unflattering descriptions of Stalin and the Soviet government. The same essay also appeared in the Italian 1976 edition of "Animal Farm" with another introduction by Crick, claiming to be the first edition with the preface. Other publishers were still declining to publish it.
Contemporary reviews of the work were not universally positive. Writing in the American "New Republic" magazine, George Soule expressed his disappointment in the book, writing that it "puzzled and saddened me. It seemed on the whole dull. The allegory turned out to be a creaking machine for saying in a clumsy way things that have been said better directly." Soule believed that the animals were not consistent enough with their real-world inspirations, and said, "It seems to me that the failure of this book (commercially it is already assured of tremendous success) arises from the fact that the satire deals not with something the author has experienced, but rather with stereotyped ideas about a country which he probably does not know very well".
"The Guardian" on 24 August 1945 called "Animal Farm" "a delightfully humorous and caustic satire on the rule of the many by the few". Tosco Fyvel, writing in "Tribune" on the same day, called the book "a gentle satire on a certain State and on the illusions of an age which may already be behind us." Julian Symons responded, on 7 September, "Should we not expect, in "Tribune" at least, acknowledgement of the fact that it is a satire not at all gentle upon a particular State—Soviet Russia? It seems to me that a reviewer should have the courage to identify Napoleon with Stalin, and Snowball with Trotsky, and express an opinion favourable or unfavourable to the author, upon a political ground. In a hundred years time perhaps, "Animal Farm" may be simply a fairy story, today it is a political satire with a good deal of point." "Animal Farm" has been subject to much comment in the decades since these early remarks.
The CIA from 1952 to 1957 in Operation Aedinosaur sent millions of balloons carrying copies of the novel into Poland, Hungary, and Czechoslovakia, whose air forces tried to shoot the balloons down.
"Time" magazine chose "Animal Farm" as one of the 100 best English-language novels (1923 to 2005); it also featured at number 31 on the Modern Library List of Best 20th-Century Novels. It won a Retrospective Hugo Award in 1996 and is included in the Great Books of the Western World selection.
A popular reading in schools, "Animal Farm" was ranked the nation's favourite book from school in a 2016 UK poll.
"Animal Farm" has also faced an array of challenges in school settings around the US. The following are examples of this controversy that has existed around Orwell's work:
"Animal Farm" has also faced similar forms of resistance in other countries. The ALA also mentions the way that the book was prevented from being featured at the International Book Fair in Moscow, Russia, in 1977 and banned from schools in the United Arab Emirates for references to practices or actions that defy Arab or Islamic beliefs, such as pigs or alcohol. In the same manner, "Animal Farm" has also faced relatively recent issues in China. In 2018, the government made the decision to censor all online posts about or referring to "Animal Farm".
The pigs Snowball, Napoleon, and Squealer adapt Old Major's ideas into "a complete system of thought", which they formally name Animalism, an allegoric reference to Communism, not to be confused with the philosophy Animalism. Soon after, Napoleon and Squealer partake in activities associated with the humans (drinking alcohol, sleeping in beds, trading), which were explicitly prohibited by the Seven Commandments. Squealer is employed to alter the Seven Commandments to account for this humanisation, an allusion to the Soviet government's revising of history in order to exercise control of the people's beliefs about themselves and their society.
The original commandments are:
These commandments are also distilled into the maxim "Four legs good, two legs bad!" which is primarily used by the sheep on the farm, often to disrupt discussions and disagreements between animals on the nature of Animalism.
Later, Napoleon and his pigs secretly revise some commandments to clear themselves of accusations of law-breaking. The changed commandments are as follows, each with a qualifying phrase added at the end:
No animal shall sleep in a bed with sheets.
No animal shall drink alcohol to excess.
No animal shall kill any other animal without cause.
Eventually, these are replaced with the maxims "All animals are equal, but some animals are more equal than others" and "Four legs good, two legs better" as the pigs become more human. This is an ironic twist to the original purpose of the Seven Commandments, which were supposed to keep order within Animal Farm by uniting the animals against the humans and preventing animals from following the humans' evil habits. Through the revision of the commandments, Orwell demonstrates how easily political dogma can be turned into malleable propaganda.
Orwell biographer Jeffrey Meyers has written, "virtually every detail has political significance in this allegory." Orwell himself wrote in 1946, "Of course I intended it primarily as a satire on the Russian revolution... [and] "that kind" of revolution (violent conspiratorial revolution, led by unconsciously power-hungry people) can only lead to a change of masters [...] revolutions only effect a radical improvement when the masses are alert." In a preface for a 1947 Ukrainian edition, he stated, "... for the past ten years I have been convinced that the destruction of the Soviet myth was essential if we wanted a revival of the socialist movement. On my return from Spain [in 1937] I thought of exposing the Soviet myth in a story that could be easily understood by almost anyone and which could be easily translated into other languages."
The revolt of the animals against Farmer Jones is Orwell's analogy with the October 1917 Bolshevik Revolution. The "Battle of the Cowshed" has been said to represent the allied invasion of Soviet Russia in 1918, and the defeat of the White Russians in the Russian Civil War. The pigs' rise to preeminence mirrors the rise of a Stalinist bureaucracy in the USSR, just as Napoleon's emergence as the farm's sole leader reflects Stalin's emergence. The pigs' appropriation of milk and apples for their own use, "the turning point of the story" as Orwell termed it in a letter to Dwight Macdonald, stands as an analogy for the crushing of the left-wing 1921 Kronstadt revolt against the Bolsheviks, and the difficult efforts of the animals to build the windmill suggest the various Five Year Plans. The puppies controlled by Napoleon parallel the nurture of the secret police in the Stalinist structure, and the pigs' treatment of the other animals on the farm recalls the internal terror faced by the populace in the 1930s. In chapter seven, when the animals confess their nonexistent crimes and are killed, Orwell directly alludes to the purges, confessions and show trials of the late 1930s. These contributed to Orwell's conviction that the Bolshevik revolution had been corrupted and the Soviet system become rotten.
Peter Edgerly Firchow and Peter Davison contend that the "Battle of the Windmill," specifically referencing "the Battle of Stalingrad and the Battle of Moscow," represents World War II. During the battle, Orwell first wrote, "All the animals, including Napoleon" took cover. Orwell had the publisher alter this to "All the animals except Napoleon" in recognition of Stalin's decision to remain in Moscow during the German advance. Orwell requested the change after he met Józef Czapski in Paris in March 1945. Czapski, a survivor of the Katyn Massacre and an opponent of the Soviet regime, told Orwell, as Orwell wrote to Arthur Koestler, that it had been "the character [and] greatness of Stalin" that saved Russia from the German invasion.
Other connections that writers have suggested illustrate Orwell's telescoping of Russian history from 1917 to 1943 include the wave of rebelliousness that ran through the countryside after the Rebellion, which stands for the abortive revolutions in Hungary and in Germany (Ch IV); the conflict between Napoleon and Snowball (Ch V), paralleling "the two rival and quasi-Messianic beliefs that seemed pitted against one another: Trotskyism, with its faith in the revolutionary vocation of the proletariat of the West; and Stalinism with its glorification of Russia's socialist destiny"; Napoleon's dealings with Whymper and the Willingdon markets (Ch VI), paralleling the Treaty of Rapallo; and Frederick's forged bank notes, paralleling the Hitler-Stalin pact of August 1939, after which Frederick attacks Animal Farm without warning and destroys the windmill.
The book's close, with the pigs and men in a kind of rapprochement, reflected Orwell's view of the 1943 Tehran Conference that seemed to display the establishment of "the best possible relations between the USSR and the West" — but in reality were destined, as Orwell presciently predicted, to continue to unravel. The disagreement between the allies and the start of the Cold War is suggested when Napoleon and Pilkington, both suspicious, "played an ace of spades simultaneously".
Similarly, the music in the novel, starting with "Beasts of England" and the later anthems, parallels "The Internationale" and its adoption and repudiation by the Soviet authorities as the anthem of the USSR in the 1920s and 1930s.
"Animal Farm" has been adapted to film twice. Both differ from the novel and have been accused of taking significant liberties, including sanitising some aspects.
In 2012, an HFR-3D version of "Animal Farm", potentially directed by Andy Serkis, was announced.
A BBC radio version, produced by Rayner Heppenstall, was broadcast in January 1947. Orwell listened to the production at his home in Canonbury Square, London, with Hugh Gordon Porteous, amongst others. Orwell later wrote to Heppenstall that Porteous, "who had not read the book, grasped what was happening after a few minutes."
A further radio production, again using Orwell's own dramatisation of the book, was broadcast in January 2013 on BBC Radio 4. Tamsin Greig narrated, and the cast included Nicky Henson as Napoleon, Toby Jones as the propagandist Squealer, and Ralph Ineson as Boxer.
A theatrical version, with music by Richard Peaslee and lyrics by Adrian Mitchell, was staged at the National Theatre London on 25 April 1984, directed by Peter Hall. It toured nine cities in 1985.
A solo version, adapted and performed by Guy Masterson, premièred at the Traverse Theatre Edinburgh in January 1995 and has toured worldwide since.
In 1950 Norman Pett and his writing partner Don Freeman were secretly hired by the British Foreign Office to adapt "Animal Farm" into a comic strip. This comic was not published in the U.K. but ran in Brazilian and Burmese newspapers.
Amphibian
Amphibians are ectothermic, tetrapod vertebrates of the class Amphibia. Modern amphibians are all Lissamphibia. They inhabit a wide variety of habitats, with most species living within terrestrial, fossorial, arboreal or freshwater aquatic ecosystems. Amphibians typically start out as larvae living in water, but some species have developed behavioural adaptations to bypass this.
The young generally undergo metamorphosis from larva with gills to an adult air-breathing form with lungs. Amphibians use their skin as a secondary respiratory surface, and some small terrestrial salamanders and frogs lack lungs and rely entirely on their skin. They are superficially similar to reptiles such as lizards, but reptiles, along with mammals and birds, are amniotes and do not require water bodies in which to breed. With their complex reproductive needs and permeable skins, amphibians are often ecological indicators; in recent decades there has been a dramatic decline in amphibian populations for many species around the globe.
The earliest amphibians evolved in the Devonian period from sarcopterygian fish with lungs and bony-limbed fins, features that were helpful in adapting to dry land. They diversified and became dominant during the Carboniferous and Permian periods, but were later displaced by reptiles and other vertebrates. Over time, amphibians shrank in size and decreased in diversity, leaving only the modern subclass Lissamphibia.
The three modern orders of amphibians are Anura (the frogs and toads), Urodela (the salamanders), and Apoda (the caecilians). The number of known amphibian species is approximately 8,000, of which nearly 90% are frogs. The smallest amphibian (and vertebrate) in the world is a frog from New Guinea ("Paedophryne amauensis") with a length of just 7.7 mm (0.30 in). The largest living amphibian is the South China giant salamander ("Andrias sligoi"), but this is dwarfed by the extinct "Prionosuchus" from the middle Permian of Brazil. The study of amphibians is called batrachology, while the study of both reptiles and amphibians is called herpetology.
The word "amphibian" is derived from the Ancient Greek term ἀμφίβιος ("amphíbios"), which means "both kinds of life", "ἀμφί" meaning "of both kinds" and "βιος" meaning "life". The term was initially used as a general adjective for animals that could live on land or in water, including seals and otters. Traditionally, the class Amphibia includes all tetrapod vertebrates that are not amniotes. Amphibia in its widest sense ("sensu lato") was divided into three subclasses, two of which are extinct:
The actual number of species in each group depends on the taxonomic classification followed. The two most common systems are the classification adopted by the website AmphibiaWeb, University of California, Berkeley and the classification by herpetologist Darrel Frost and the American Museum of Natural History, available as the online reference database "Amphibian Species of the World". The numbers of species cited above follows Frost and the total number of known amphibian species as of March 31, 2019 is exactly 8,000, of which nearly 90% are frogs.
With the phylogenetic classification, the taxon Labyrinthodontia has been discarded as it is a paraphyletic group without unique defining features apart from shared primitive characteristics. Classification varies according to the preferred phylogeny of the author and whether they use a stem-based or a node-based classification. Traditionally, amphibians as a class are defined as all tetrapods with a larval stage, while the group that includes the common ancestors of all living amphibians (frogs, salamanders and caecilians) and all their descendants is called Lissamphibia. The phylogeny of Paleozoic amphibians is uncertain, and Lissamphibia may possibly fall within extinct groups, like the Temnospondyli (traditionally placed in the subclass Labyrinthodontia) or the Lepospondyli, and in some analyses even in the amniotes. This means that advocates of phylogenetic nomenclature have removed a large number of basal Devonian and Carboniferous amphibian-type tetrapod groups that were formerly placed in Amphibia in Linnaean taxonomy, and included them elsewhere under cladistic taxonomy. If the common ancestor of amphibians and amniotes is included in Amphibia, it becomes a paraphyletic group.
All modern amphibians are included in the subclass Lissamphibia, which is usually considered a clade, a group of species that have evolved from a common ancestor. The three modern orders are Anura (the frogs and toads), Caudata (or Urodela, the salamanders), and Gymnophiona (or Apoda, the caecilians). It has been suggested that salamanders arose separately from a Temnospondyl-like ancestor, and even that caecilians are the sister group of the advanced reptiliomorph amphibians, and thus of amniotes. Although the fossils of several older proto-frogs with primitive characteristics are known, the oldest "true frog" is "Prosalirus bitis", from the Early Jurassic Kayenta Formation of Arizona. It is anatomically very similar to modern frogs. The oldest known caecilian is another Early Jurassic species, "Eocaecilia micropodia", also from Arizona. The earliest salamander is "Beiyanerpeton jianpingensis" from the Late Jurassic of northeastern China.
Authorities disagree as to whether Salientia is a superorder that includes the order Anura, or whether Anura is a sub-order of the order Salientia. The Lissamphibia are traditionally divided into three orders, but an extinct salamander-like family, the Albanerpetontidae, is now considered part of Lissamphibia alongside the superorder Salientia. Furthermore, Salientia includes all three recent orders plus the Triassic proto-frog, "Triadobatrachus".
The first major groups of amphibians developed in the Devonian period, around 370 million years ago, from lobe-finned fish which were similar to the modern coelacanth and lungfish. These ancient lobe-finned fish had evolved multi-jointed leg-like fins with digits that enabled them to crawl along the sea bottom. Some fish had developed primitive lungs that helped them breathe air when the stagnant pools of the Devonian swamps were low in oxygen. They could also use their strong fins to hoist themselves out of the water and onto dry land if circumstances so required. Eventually, their bony fins would evolve into limbs and they would become the ancestors to all tetrapods, including modern amphibians, reptiles, birds, and mammals. Despite being able to crawl on land, many of these prehistoric tetrapodomorph fish still spent most of their time in the water. They had started to develop lungs, but still breathed predominantly with gills.
Many examples of species showing transitional features have been discovered. "Ichthyostega" was one of the first primitive amphibians, with nostrils and more efficient lungs. It had four sturdy limbs, a neck, a tail with fins and a skull very similar to that of the lobe-finned fish, "Eusthenopteron". Amphibians evolved adaptations that allowed them to stay out of the water for longer periods. Their lungs improved and their skeletons became heavier and stronger, better able to support the weight of their bodies on land. They developed "hands" and "feet" with five or more digits; the skin became more capable of retaining body fluids and resisting desiccation. The fish's hyomandibula bone in the hyoid region behind the gills diminished in size and became the stapes of the amphibian ear, an adaptation necessary for hearing on dry land. An affinity between the amphibians and the teleost fish is the multi-folded structure of the teeth and the paired supra-occipital bones at the back of the head, neither of these features being found elsewhere in the animal kingdom.
At the end of the Devonian period (360 million years ago), the seas, rivers and lakes were teeming with life while the land was the realm of early plants and devoid of vertebrates, though some, such as "Ichthyostega", may have sometimes hauled themselves out of the water. It is thought they may have propelled themselves with their forelimbs, dragging their hindquarters in a similar manner to that used by the elephant seal. In the early Carboniferous (360 to 345 million years ago), the climate became wet and warm. Extensive swamps developed with mosses, ferns, horsetails and calamites. Air-breathing arthropods evolved and invaded the land where they provided food for the carnivorous amphibians that began to adapt to the terrestrial environment. There were no other tetrapods on the land and the amphibians were at the top of the food chain, occupying the ecological position currently held by the crocodile. Though equipped with limbs and the ability to breathe air, most still had a long tapering body and strong tail. They were the top land predators, sometimes reaching several metres in length, preying on the large insects of the period and the many types of fish in the water. They still needed to return to water to lay their shell-less eggs, and even most modern amphibians have a fully aquatic larval stage with gills like their fish ancestors. It was the development of the amniotic egg, which prevents the developing embryo from drying out, that enabled the reptiles to reproduce on land and which led to their dominance in the period that followed.
After the Carboniferous rainforest collapse, amphibian dominance gave way to reptiles, and amphibians were further devastated by the Permian–Triassic extinction event. During the Triassic Period (250 to 200 million years ago), the reptiles continued to out-compete the amphibians, leading to a reduction in both the amphibians' size and their importance in the biosphere. According to the fossil record, Lissamphibia, which includes all modern amphibians and is the only surviving lineage, may have branched off from the extinct groups Temnospondyli and Lepospondyli at some period between the Late Carboniferous and the Early Triassic. The relative scarcity of fossil evidence precludes precise dating, but the most recent molecular study, based on multilocus sequence typing, suggests a Late Carboniferous/Early Permian origin for extant amphibians.
The origins and evolutionary relationships between the three main groups of amphibians are a matter of debate. A 2005 molecular phylogeny, based on rDNA analysis, suggests that salamanders and caecilians are more closely related to each other than they are to frogs. It also appears that the divergence of the three groups took place in the Paleozoic or early Mesozoic (around 250 million years ago), before the breakup of the supercontinent Pangaea and soon after their divergence from the lobe-finned fish. The briefness of this period, and the swiftness with which radiation took place, would help account for the relative scarcity of primitive amphibian fossils. There are large gaps in the fossil record, but the discovery of "Gerobatrachus hottoni" from the Early Permian in Texas in 2008 provided a missing link with many of the characteristics of modern frogs. Molecular analysis suggests that the frog–salamander divergence took place considerably earlier than the palaeontological evidence indicates. Newer research indicates that the common ancestor of all lissamphibians lived about 315 million years ago, and that stereospondyls are the closest relatives of the caecilians.
As they evolved from lunged fish, amphibians had to make certain adaptations for living on land, including the need to develop new means of locomotion. In the water, the sideways thrusts of their tails had propelled them forward, but on land, quite different mechanisms were required. Their vertebral columns, limbs, limb girdles and musculature needed to be strong enough to raise them off the ground for locomotion and feeding. Terrestrial adults discarded their lateral line systems and adapted their sensory systems to receive stimuli via the medium of the air. They needed to develop new methods to regulate their body heat to cope with fluctuations in ambient temperature. They developed behaviours suitable for reproduction in a terrestrial environment. Their skins were exposed to harmful ultraviolet rays that had previously been absorbed by the water. The skin changed to become more protective and prevent excessive water loss.
The superclass Tetrapoda is divided into four classes of vertebrate animals with four limbs. Reptiles, birds and mammals are amniotes, the eggs of which are either laid or carried by the female and are surrounded by several membranes, some of which are impervious. Lacking these membranes, amphibians require water bodies for reproduction, although some species have developed various strategies for protecting or bypassing the vulnerable aquatic larval stage. They are not found in the sea with the exception of one or two frogs that live in brackish water in mangrove swamps; Anderson's salamander, meanwhile, occurs in brackish or salt water lakes. On land, amphibians are restricted to moist habitats because of the need to keep their skin damp.
Modern amphibians have a simplified anatomy compared to their ancestors due to paedomorphosis, caused by two evolutionary trends: miniaturization and an unusually large genome, which give them slower metabolic, growth and development rates compared to other vertebrates.
The smallest amphibian (and vertebrate) in the world is a microhylid frog from New Guinea ("Paedophryne amauensis") first discovered in 2012. It has an average length of and is part of a genus that contains four of the world's ten smallest frog species. The largest living amphibian is the Chinese giant salamander ("Andrias davidianus") but this is a great deal smaller than the largest amphibian that ever existed—the extinct "Prionosuchus", a crocodile-like temnospondyl dating to 270 million years ago from the middle Permian of Brazil. The largest frog is the African Goliath frog ("Conraua goliath"), which can reach and weigh .
Amphibians are ectothermic (cold-blooded) vertebrates that do not maintain their body temperature through internal physiological processes. Their metabolic rate is low and as a result, their food and energy requirements are limited. In the adult state, they have tear ducts and movable eyelids, and most species have ears that can detect airborne or ground vibrations. They have muscular tongues, which in many species can be protruded. Modern amphibians have fully ossified vertebrae with articular processes. Their ribs are usually short and may be fused to the vertebrae. Their skulls are mostly broad and short, and are often incompletely ossified. Their skin contains little keratin and lacks scales, apart from a few fish-like scales in certain caecilians. The skin contains many mucous glands and in some species, poison glands (a type of granular gland). The hearts of amphibians have three chambers, two atria and one ventricle. They have a urinary bladder and nitrogenous waste products are excreted primarily as urea. Most amphibians lay their eggs in water and have aquatic larvae that undergo metamorphosis to become terrestrial adults. Amphibians breathe by means of a pump action in which air is first drawn into the buccopharyngeal region through the nostrils. These are then closed and the air is forced into the lungs by contraction of the throat. They supplement this with gas exchange through the skin.
The order Anura (from the Ancient Greek "a(n)-" meaning "without" and "oura" meaning "tail") comprises the frogs and toads. They usually have long hind limbs that fold underneath them, shorter forelimbs, webbed toes with no claws, no tails, large eyes and glandular moist skin. Members of this order with smooth skins are commonly referred to as frogs, while those with warty skins are known as toads. The difference is not a formal one taxonomically and there are numerous exceptions to this rule. Members of the family Bufonidae are known as the "true toads". Frogs range in size from the Goliath frog ("Conraua goliath") of West Africa to the "Paedophryne amauensis", first described in Papua New Guinea in 2012, which is also the smallest known vertebrate. Although most species are associated with water and damp habitats, some are specialised to live in trees or in deserts. They are found worldwide except for polar areas.
Anura is divided into three suborders that are broadly accepted by the scientific community, but the relationships between some families remain unclear. Future molecular studies should provide further insights into their evolutionary relationships. The suborder Archaeobatrachia contains four families of primitive frogs. These are Ascaphidae, Bombinatoridae, Discoglossidae and Leiopelmatidae, which have few derived features and are probably paraphyletic with regard to other frog lineages. The six families in the more evolutionarily advanced suborder Mesobatrachia are the fossorial Megophryidae, Pelobatidae, Pelodytidae, Scaphiopodidae and Rhinophrynidae, and the obligatorily aquatic Pipidae. These have certain characteristics that are intermediate between the two other suborders. Neobatrachia is by far the largest suborder and includes the remaining families of modern frogs, including most common species. Ninety-six percent of the over 5,000 extant species of frog are neobatrachians.
The order Caudata (from the Latin "cauda" meaning "tail") consists of the salamanders—elongated, low-slung animals that mostly resemble lizards in form. This is a symplesiomorphic trait and they are no more closely related to lizards than they are to mammals. Salamanders lack claws, have scale-free skins, either smooth or covered with tubercles, and tails that are usually flattened from side to side and often finned. They range in size from the Chinese giant salamander ("Andrias davidianus"), which has been reported to grow to a length of , to the diminutive "Thorius pennatulus" from Mexico which seldom exceeds in length. Salamanders have a mostly Laurasian distribution, being present in much of the Holarctic region of the northern hemisphere. The family Plethodontidae is also found in Central America and South America north of the Amazon basin; South America was apparently invaded from Central America by about the start of the Miocene, 23 million years ago. Urodela is a name sometimes used for all the extant species of salamanders. Members of several salamander families have become paedomorphic and either fail to complete their metamorphosis or retain some larval characteristics as adults. Most salamanders are under long. They may be terrestrial or aquatic and many spend part of the year in each habitat. When on land, they mostly spend the day hidden under stones or logs or in dense vegetation, emerging in the evening and night to forage for worms, insects and other invertebrates.
The suborder Cryptobranchoidea contains the primitive salamanders. A number of fossil cryptobranchids have been found, but there are only three living species, the Chinese giant salamander ("Andrias davidianus"), the Japanese giant salamander ("Andrias japonicus") and the hellbender ("Cryptobranchus alleganiensis") from North America. These large amphibians retain several larval characteristics in their adult state; gill slits are present and the eyes are unlidded. A unique feature is their ability to feed by suction, depressing either the left side of their lower jaw or the right. The males excavate nests, persuade females to lay their egg strings inside them, and guard them. As well as breathing with lungs, they respire through the many folds in their thin skin, which has capillaries close to the surface.
The suborder Salamandroidea contains the advanced salamanders. They differ from the cryptobranchids by having fused prearticular bones in the lower jaw, and by using internal fertilisation. In salamandrids, the male deposits a bundle of sperm, the spermatophore, and the female picks it up and inserts it into her cloaca where the sperm is stored until the eggs are laid. The largest family in this group is Plethodontidae, the lungless salamanders, which includes 60% of all salamander species. The family Salamandridae includes the true salamanders and the name "newt" is given to members of its subfamily Pleurodelinae.
The third suborder, Sirenoidea, contains the four species of sirens, which are in a single family, Sirenidae. Members of this suborder are eel-like aquatic salamanders with much reduced forelimbs and no hind limbs. Some of their features are primitive while others are derived. Fertilisation is likely to be external as sirenids lack the cloacal glands used by male salamandrids to produce spermatophores and the females lack spermathecae for sperm storage. Despite this, the eggs are laid singly, a behaviour not conducive to external fertilisation.
The order Gymnophiona (from the Greek "gymnos" meaning "naked" and "ophis" meaning "serpent") or Apoda comprises the caecilians. These are long, cylindrical, limbless animals with a snake- or worm-like form. The adults vary in length from 8 to 75 centimetres (3 to 30 inches) with the exception of Thomson's caecilian ("Caecilia thompsoni"), which can reach . A caecilian's skin has a large number of transverse folds and in some species contains tiny embedded dermal scales. It has rudimentary eyes covered in skin, which are probably limited to discerning differences in light intensity. It also has a pair of short tentacles near the eye that can be extended and which have tactile and olfactory functions. Most caecilians live underground in burrows in damp soil, in rotten wood and under plant debris, but some are aquatic. Most species lay their eggs underground and when the larvae hatch, they make their way to adjacent bodies of water. Others brood their eggs and the larvae undergo metamorphosis before the eggs hatch. A few species give birth to live young, nourishing them with glandular secretions while they are in the oviduct. Caecilians have a mostly Gondwanan distribution, being found in tropical regions of Africa, Asia and Central and South America.
The skin shares some typical characteristics with that of other terrestrial vertebrates, such as the presence of highly cornified outer layers, renewed periodically through a moulting process controlled by the pituitary and thyroid glands. Local thickenings (often called warts) are common, such as those found on toads. The outside of the skin is shed periodically, mostly in one piece, in contrast to mammals and birds where it is shed in flakes. Amphibians often eat the sloughed skin. Caecilians are unique among amphibians in having mineralized dermal scales embedded in the dermis between the furrows in the skin. The similarity of these to the scales of bony fish is largely superficial. Lizards and some frogs have somewhat similar osteoderms forming bony deposits in the dermis, but this is an example of convergent evolution, with similar structures having arisen independently in diverse vertebrate lineages.
Amphibian skin is permeable to water. Gas exchange can take place through the skin (cutaneous respiration) and this allows adult amphibians to respire without rising to the surface of water and to hibernate at the bottom of ponds. To compensate for their thin and delicate skin, amphibians have evolved mucous glands, principally on their heads, backs and tails. The secretions produced by these help keep the skin moist. In addition, most species of amphibian have granular glands that secrete distasteful or poisonous substances. Some amphibian toxins can be lethal to humans while others have little effect. The main poison-producing glands, the parotoids, produce the neurotoxin bufotoxin and are located behind the ears of toads, along the backs of frogs, behind the eyes of salamanders and on the upper surface of caecilians.
The skin colour of amphibians is produced by three layers of pigment cells called chromatophores. These three cell layers consist of the melanophores (occupying the deepest layer), the guanophores (forming an intermediate layer and containing many granules, producing a blue-green colour) and the lipophores (yellow, the most superficial layer). The colour change displayed by many species is initiated by hormones secreted by the pituitary gland. Unlike bony fish, there is no direct control of the pigment cells by the nervous system, and this results in the colour change taking place more slowly than happens in fish. A vividly coloured skin usually indicates that the species is toxic and is a warning sign to predators.
Amphibians have a skeletal system that is structurally homologous to other tetrapods, though with a number of variations. They all have four limbs except for the legless caecilians and a few species of salamander with reduced or no limbs. The bones are hollow and lightweight. The musculoskeletal system is strong to enable it to support the head and body. The bones are fully ossified and the vertebrae interlock with each other by means of overlapping processes. The pectoral girdle is supported by muscle, and the well-developed pelvic girdle is attached to the backbone by a pair of sacral ribs. The ilium slopes forward and the body is held closer to the ground than is the case in mammals.
In most amphibians, there are four digits on the fore foot and five on the hind foot, but no claws on either. Some salamanders have fewer digits and the amphiumas are eel-like in appearance with tiny, stubby legs. The sirens are aquatic salamanders with stumpy forelimbs and no hind limbs. The caecilians are limbless. They burrow in the manner of earthworms with zones of muscle contractions moving along the body. On the surface of the ground or in water they move by undulating their body from side to side.
In frogs, the hind legs are larger than the fore legs, especially so in those species that principally move by jumping or swimming. In the walkers and runners the hind limbs are not so large, and the burrowers mostly have short limbs and broad bodies. The feet have adaptations for the way of life, with webbing between the toes for swimming, broad adhesive toe pads for climbing, and keratinised tubercles on the hind feet for digging (frogs usually dig backwards into the soil). In most salamanders, the limbs are short and more or less the same length and project at right angles from the body. Locomotion on land is by walking and the tail often swings from side to side or is used as a prop, particularly when climbing. In their normal gait, only one leg is advanced at a time in the manner adopted by their ancestors, the lobe-finned fish. Some salamanders in the genus "Aneides" and certain plethodontids climb trees and have long limbs, large toepads and prehensile tails. In aquatic salamanders and in frog tadpoles, the tail has dorsal and ventral fins and is moved from side to side as a means of propulsion. Adult frogs do not have tails and caecilians have only very short ones.
Salamanders use their tails in defence and some are prepared to jettison them to save their lives in a process known as autotomy. Certain species in the Plethodontidae have a weak zone at the base of the tail and use this strategy readily. The tail often continues to twitch after separation which may distract the attacker and allow the salamander to escape. Both tails and limbs can be regenerated. Adult frogs are unable to regrow limbs but tadpoles can do so.
Amphibians have a juvenile stage and an adult stage, and the circulatory systems of the two are distinct. In the juvenile (or tadpole) stage, the circulation is similar to that of a fish; the two-chambered heart pumps the blood through the gills where it is oxygenated, and is spread around the body and back to the heart in a single loop. In the adult stage, amphibians (especially frogs) lose their gills and develop lungs. They have a heart that consists of a single ventricle and two atria. When the ventricle starts contracting, deoxygenated blood is pumped through the pulmonary artery to the lungs. Continued contraction then pumps oxygenated blood around the rest of the body. Mixing of the two bloodstreams is minimized by the anatomy of the chambers.
The nervous system is basically the same as in other vertebrates, with a central brain, a spinal cord, and nerves throughout the body. The amphibian brain is less well developed than that of reptiles, birds and mammals but is similar in morphology and function to that of a fish. It is believed amphibians are capable of perceiving pain. The brain consists of equal parts, cerebrum, midbrain and cerebellum. Various parts of the cerebrum process sensory input, such as smell in the olfactory lobe and sight in the optic lobe, and it is additionally the centre of behaviour and learning. The cerebellum is the center of muscular coordination and the medulla oblongata controls some organ functions including heartbeat and respiration. The brain sends signals through the spinal cord and nerves to regulate activity in the rest of the body. The pineal body, known to regulate sleep patterns in humans, is thought to produce the hormones involved in hibernation and aestivation in amphibians.
Tadpoles retain the lateral line system of their ancestral fishes, but this is lost in terrestrial adult amphibians. Some caecilians possess electroreceptors that allow them to locate objects around them when submerged in water. The ears are well developed in frogs. There is no external ear, but the large circular eardrum lies on the surface of the head just behind the eye. This vibrates and sound is transmitted through a single bone, the stapes, to the inner ear. Only high-frequency sounds like mating calls are heard in this way, but low-frequency noises can be detected through another mechanism. There is a patch of specialized hair cells, called the "papilla amphibiorum", in the inner ear capable of detecting deeper sounds. Another feature, unique to frogs and salamanders, is the columella-operculum complex adjoining the auditory capsule which is involved in the transmission of both airborne and seismic signals. The ears of salamanders and caecilians are less highly developed than those of frogs as they do not normally communicate with each other through the medium of sound.
The eyes of tadpoles lack lids, but at metamorphosis, the cornea becomes more dome-shaped, the lens becomes flatter, and eyelids and associated glands and ducts develop. The adult eyes are an improvement on invertebrate eyes and were a first step in the development of more advanced vertebrate eyes. They allow colour vision and depth of focus. In the retinas are green rods, which are receptive to a wide range of wavelengths.
Many amphibians catch their prey by flicking out an elongated tongue with a sticky tip and drawing it back into the mouth before seizing the item with their jaws. Some use inertial feeding to help them swallow the prey, repeatedly thrusting their head forward sharply causing the food to move backwards in their mouth by inertia. Most amphibians swallow their prey whole without much chewing so they possess voluminous stomachs. The short oesophagus is lined with cilia that help to move the food to the stomach and mucus produced by glands in the mouth and pharynx eases its passage. The enzyme chitinase produced in the stomach helps digest the chitinous cuticle of arthropod prey.
Amphibians possess a pancreas, liver and gall bladder. The liver is usually large with two lobes. Its size is determined by its function as a glycogen and fat storage unit, and may change with the seasons as these reserves are built or used up. Adipose tissue is another important means of storing energy and this occurs in the abdomen (in internal structures called fat bodies), under the skin and, in some salamanders, in the tail.
There are two kidneys located dorsally, near the roof of the body cavity. Their job is to filter the blood of metabolic waste and transport the urine via ureters to the urinary bladder where it is stored before being passed out periodically through the cloacal vent. Larvae and most aquatic adult amphibians excrete the nitrogen as ammonia in large quantities of dilute urine, while terrestrial species, with a greater need to conserve water, excrete the less toxic product urea. Some tree frogs with limited access to water excrete most of their metabolic waste as uric acid.
The lungs in amphibians are primitive compared to those of amniotes, possessing few internal septa and large alveoli, and consequently having a comparatively slow diffusion rate for oxygen entering the blood. Ventilation is accomplished by buccal pumping. Most amphibians, however, are able to exchange gases with the water or air via their skin. To enable sufficient cutaneous respiration, the surface of their highly vascularised skin must remain moist to allow the oxygen to diffuse at a sufficiently high rate. Because oxygen concentration in the water increases at both low temperatures and high flow rates, aquatic amphibians in these situations can rely primarily on cutaneous respiration, as in the Titicaca water frog and the hellbender salamander. In air, where oxygen is more concentrated, some small species can rely solely on cutaneous gas exchange, most famously the plethodontid salamanders, which have neither lungs nor gills. Many aquatic salamanders and all tadpoles have gills in their larval stage, with some (such as the axolotl) retaining gills as aquatic adults.
For the purpose of reproduction most amphibians require fresh water although some lay their eggs on land and have developed various means of keeping them moist. A few (e.g. "Fejervarya raja") can inhabit brackish water, but there are no true marine amphibians. There are reports, however, of particular amphibian populations unexpectedly invading marine waters. Such was the case with the Black Sea invasion of the natural hybrid "Pelophylax esculentus" reported in 2010.
Several hundred frog species in adaptive radiations (e.g., "Eleutherodactylus", the Pacific "Platymantis", the Australo-Papuan microhylids, and many other tropical frogs), however, do not need any water for breeding in the wild. They reproduce via direct development, an ecological and evolutionary adaptation that has allowed them to be completely independent from free-standing water. Almost all of these frogs live in wet tropical rainforests and their eggs hatch directly into miniature versions of the adult, passing through the tadpole stage within the egg. Reproductive success of many amphibians is dependent not only on the quantity of rainfall, but the seasonal timing.
In the tropics, many amphibians breed continuously or at any time of year. In temperate regions, breeding is mostly seasonal, usually in the spring, and is triggered by increasing day length, rising temperatures or rainfall. Experiments have shown the importance of temperature, but the trigger event, especially in arid regions, is often a storm. In anurans, males usually arrive at the breeding sites before females and the vocal chorus they produce may stimulate ovulation in females and the endocrine activity of males that are not yet reproductively active.
In caecilians, fertilisation is internal, the male extruding an intromittent organ, the phallodeum, and inserting it into the female cloaca. The paired Müllerian glands inside the male cloaca secrete a fluid which resembles that produced by mammalian prostate glands and which may transport and nourish the sperm. Fertilisation probably takes place in the oviduct.
The majority of salamanders also engage in internal fertilisation. In most of these, the male deposits a spermatophore, a small packet of sperm on top of a gelatinous cone, on the substrate either on land or in the water. The female takes up the sperm packet by grasping it with the lips of the cloaca and pushing it into the vent. The spermatozoa move to the spermatheca in the roof of the cloaca where they remain until ovulation which may be many months later. Courtship rituals and methods of transfer of the spermatophore vary between species. In some, the spermatophore may be placed directly into the female cloaca while in others, the female may be guided to the spermatophore or restrained with an embrace called amplexus. Certain primitive salamanders in the families Sirenidae, Hynobiidae and Cryptobranchidae practice external fertilisation in a similar manner to frogs, with the female laying the eggs in water and the male releasing sperm onto the egg mass.
With a few exceptions, frogs use external fertilisation. The male grasps the female tightly with his forelimbs either behind the arms or in front of the back legs, or in the case of "Epipedobates tricolor", around the neck. They remain in amplexus with their cloacae positioned close together while the female lays the eggs and the male covers them with sperm. Roughened nuptial pads on the male's hands aid in retaining grip. Often the male collects and retains the egg mass, forming a sort of basket with the hind feet. An exception is the granular poison frog ("Oophaga granulifera") where the male and female place their cloacae in close proximity while facing in opposite directions and then release eggs and sperm simultaneously. The tailed frog ("Ascaphus truei") exhibits internal fertilisation. The "tail" is only possessed by the male and is an extension of the cloaca and used to inseminate the female. This frog lives in fast-flowing streams and internal fertilisation prevents the sperm from being washed away before fertilisation occurs. The sperm may be retained in storage tubes attached to the oviduct until the following spring.
Most frogs can be classified as either prolonged or explosive breeders. Typically, prolonged breeders congregate at a breeding site, the males usually arriving first, calling and setting up territories. Other satellite males remain quietly nearby, waiting for their opportunity to take over a territory. The females arrive sporadically, mate selection takes place and eggs are laid. The females depart and territories may change hands. More females appear and in due course, the breeding season comes to an end. Explosive breeders on the other hand are found where temporary pools appear in dry regions after rainfall. These frogs are typically fossorial species that emerge after heavy rains and congregate at a breeding site. They are attracted there by the calling of the first male to find a suitable place, perhaps a pool that forms in the same place each rainy season. The assembled frogs may call in unison and frenzied activity ensues, the males scrambling to mate with the usually smaller number of females.
In salamanders and newts, males compete directly for the attention of females, using elaborate courtship displays to hold a female's interest long enough for her to choose to mate with him. Some species store sperm through long breeding seasons, as the extra time may allow for interactions with rival sperm.
Most amphibians go through metamorphosis, a process of significant morphological change after birth. In typical amphibian development, eggs are laid in water and larvae are adapted to an aquatic lifestyle. Frogs, toads and salamanders all hatch from the egg as larvae with external gills. Metamorphosis in amphibians is regulated by thyroxine concentration in the blood, which stimulates metamorphosis, and prolactin, which counteracts thyroxine's effect. Specific events are dependent on threshold values for different tissues. Because most embryonic development is outside the parental body, it is subject to many adaptations due to specific environmental circumstances. For this reason tadpoles can have horny ridges instead of teeth, whisker-like skin extensions or fins. They also make use of a sensory lateral line organ similar to that of fish. After metamorphosis, these organs become redundant and will be reabsorbed by controlled cell death, called apoptosis. The variety of adaptations to specific environmental circumstances among amphibians is wide, with many discoveries still being made.
The egg of an amphibian is typically surrounded by a transparent gelatinous covering secreted by the oviducts and containing mucoproteins and mucopolysaccharides. This capsule is permeable to water and gases, and swells considerably as it absorbs water. The ovum is at first rigidly held, but in fertilised eggs the innermost layer liquefies and allows the embryo to move freely. This also happens in salamander eggs, even when they are unfertilised. Eggs of some salamanders and frogs contain unicellular green algae. These penetrate the jelly envelope after the eggs are laid and may increase the supply of oxygen to the embryo through photosynthesis. They seem to both speed up the development of the larvae and reduce mortality. Most eggs contain the pigment melanin which raises their temperature through the absorption of light and also protects them against ultraviolet radiation. Caecilians, some plethodontid salamanders and certain frogs lay eggs underground that are unpigmented. In the wood frog ("Rana sylvatica"), the interior of the globular egg cluster has been found to be warmer than its surroundings, which is an advantage in its cool northern habitat.
The eggs may be deposited singly or in small groups, or may take the form of spherical egg masses, rafts or long strings. In terrestrial caecilians, the eggs are laid in grape-like clusters in burrows near streams. The amphibious salamander "Ensatina" attaches its similar clusters by stalks to underwater stems and roots. The greenhouse frog ("Eleutherodactylus planirostris") lays eggs in small groups in the soil where they develop in about two weeks directly into juvenile frogs without an intervening larval stage. The tungara frog ("Physalaemus pustulosus") builds a floating nest from foam to protect its eggs. First a raft is built, then eggs are laid in the centre, and finally a foam cap is overlaid. The foam has anti-microbial properties. It contains no detergents but is created by whipping up proteins and lectins secreted by the female.
The eggs of amphibians are typically laid in water and hatch into free-living larvae that complete their development in water and later transform into either aquatic or terrestrial adults. In many species of frog and in most lungless salamanders (Plethodontidae), direct development takes place, the larvae growing within the eggs and emerging as miniature adults. Many caecilians and some other amphibians lay their eggs on land, and the newly hatched larvae wriggle or are transported to water bodies. Some caecilians, the alpine salamander ("Salamandra atra") and some of the African live-bearing toads ("Nectophrynoides spp.") are viviparous. Their larvae feed on glandular secretions and develop within the female's oviduct, often for long periods. Other amphibians, but not caecilians, are ovoviviparous. The eggs are retained in or on the parent's body, but the larvae subsist on the yolks of their eggs and receive no nourishment from the adult. The larvae emerge at varying stages of their growth, either before or after metamorphosis, according to their species. The toad genus "Nectophrynoides" exhibits all of these developmental patterns among its dozen or so members.
Frog larvae are known as tadpoles and typically have oval bodies and long, vertically flattened tails with fins. The free-living larvae are normally fully aquatic, but the tadpoles of some species (such as "Nannophrys ceylonensis") are semi-terrestrial and live among wet rocks. Tadpoles have cartilaginous skeletons, gills for respiration (external gills at first, internal gills later), lateral line systems and large tails that they use for swimming. Newly hatched tadpoles soon develop gill pouches that cover the gills. The lungs develop early and are used as accessory breathing organs, the tadpoles rising to the water surface to gulp air. Some species complete their development inside the egg and hatch directly into small frogs. These larvae do not have gills but instead have specialised areas of skin through which respiration takes place. While tadpoles do not have true teeth, in most species, the jaws have long, parallel rows of small keratinized structures called keradonts surrounded by a horny beak. Front legs are formed under the gill sac and hind legs become visible a few days later.
Iodine and T4 overstimulate apoptosis (programmed cell death) in the cells of the larval gills, tail and fins, and also stimulate the development of the nervous system, transforming the aquatic, vegetarian tadpole into the terrestrial, carnivorous frog with better neurological, visuospatial, olfactory and cognitive abilities for hunting.
In fact, tadpoles developing in ponds and streams are typically herbivorous. Pond tadpoles tend to have deep bodies, large caudal fins and small mouths; they swim in the quiet waters feeding on growing or loose fragments of vegetation. Stream dwellers mostly have larger mouths, shallow bodies and caudal fins; they attach themselves to plants and stones and feed on the surface films of algae and bacteria. They also feed on diatoms, filtered from the water through the gills, and stir up the sediment at the bottom of the pond, ingesting edible fragments. They have a relatively long, spiral-shaped gut to enable them to digest this diet. Some species are carnivorous at the tadpole stage, eating insects, smaller tadpoles and fish. Young of the Cuban tree frog ("Osteopilus septentrionalis") can occasionally be cannibalistic, the younger tadpoles attacking a larger, more developed tadpole when it is undergoing metamorphosis.
At metamorphosis, rapid changes in the body take place as the lifestyle of the frog changes completely. The spiral-shaped mouth with horny tooth ridges is reabsorbed together with the spiral gut. The animal develops a large jaw, and its gills disappear along with its gill sac. Eyes and legs grow quickly, and a tongue is formed. There are associated changes in the neural networks such as development of stereoscopic vision and loss of the lateral line system. All this can happen in about a day. A few days later, the tail is reabsorbed, due to the higher thyroxine concentration required for this to take place.
At hatching, a typical salamander larva has eyes without lids, teeth in both upper and lower jaws, three pairs of feathery external gills, a somewhat laterally flattened body and a long tail with dorsal and ventral fins. The forelimbs may be partially developed and the hind limbs are rudimentary in pond-living species but may be rather more developed in species that reproduce in moving water. Pond-type larvae often have a pair of balancers, rod-like structures on either side of the head that may prevent the gills from becoming clogged up with sediment. Some members of the genera "Ambystoma" and "Dicamptodon" have larvae that never fully develop into the adult form, but this varies with species and with populations. The northwestern salamander ("Ambystoma gracile") is one of these and, depending on environmental factors, either remains permanently in the larval state, a condition known as neoteny, or transforms into an adult. Both of these are able to breed. Neoteny occurs when the animal's growth rate is very low and is usually linked to adverse conditions such as low water temperatures that may change the response of the tissues to the hormone thyroxine. Other factors that may inhibit metamorphosis include lack of food, lack of trace elements and competition from conspecifics. The tiger salamander ("Ambystoma tigrinum") also sometimes behaves in this way and may grow particularly large in the process. The adult tiger salamander is terrestrial, but the larva is aquatic and able to breed while still in the larval state. When conditions are particularly inhospitable on land, larval breeding may allow continuation of a population that would otherwise die out. There are fifteen species of obligate neotenic salamanders, including species of "Necturus", "Proteus" and "Amphiuma", and many examples of facultative ones that adopt this strategy under appropriate environmental circumstances.
Lungless salamanders in the family Plethodontidae are terrestrial and lay a small number of unpigmented eggs in a cluster among damp leaf litter. Each egg has a large yolk sac and the larva feeds on this while it develops inside the egg, emerging fully formed as a juvenile salamander. The female salamander often broods the eggs. In the genus "Ensatina", the female has been observed to coil around them and press her throat area against them, effectively massaging them with a mucous secretion.
In newts and salamanders, metamorphosis is less dramatic than in frogs. This is because the larvae are already carnivorous and continue to feed as predators when they are adults, so few changes are needed to their digestive systems. Their lungs are functional early, but the larvae do not make as much use of them as do tadpoles. Their gills are never covered by gill sacs and are reabsorbed just before the animals leave the water. Other changes include the reduction in size or loss of tail fins, the closure of gill slits, thickening of the skin, the development of eyelids, and certain changes in dentition and tongue structure. Salamanders are at their most vulnerable at metamorphosis as swimming speeds are reduced and transforming tails are encumbrances on land. Adult salamanders often have an aquatic phase in spring and summer, and a land phase in winter. For adaptation to a water phase, prolactin is the required hormone, and for adaptation to the land phase, thyroxine. External gills do not return in subsequent aquatic phases because these are completely absorbed upon leaving the water for the first time.
Most terrestrial caecilians that lay eggs do so in burrows or moist places on land near bodies of water. The development of the young of "Ichthyophis glutinosus", a species from Sri Lanka, has been much studied. The eel-like larvae hatch out of the eggs and make their way to water. They have three pairs of external red feathery gills, a blunt head with two rudimentary eyes, a lateral line system and a short tail with fins. They swim by undulating their body from side to side. They are mostly active at night, soon lose their gills and make sorties onto land. Metamorphosis is gradual. By the age of about ten months they have developed a pointed head with sensory tentacles near the mouth and lost their eyes, lateral line systems and tails. The skin thickens, embedded scales develop and the body divides into segments. By this time, the caecilian has constructed a burrow and is living on land.
In the majority of species of caecilians, the young are produced by viviparity. "Typhlonectes compressicauda", a species from South America, is typical of these. Up to nine larvae can develop in the oviduct at any one time. They are elongated and have paired sac-like gills, small eyes and specialised scraping teeth. At first, they feed on the yolks of the eggs, but as this source of nourishment declines they begin to rasp at the ciliated epithelial cells that line the oviduct. This stimulates the secretion of fluids rich in lipids and mucoproteins on which they feed along with scrapings from the oviduct wall. They may increase their length sixfold and be two-fifths as long as their mother before being born. By this time they have undergone metamorphosis, lost their eyes and gills, developed a thicker skin and mouth tentacles, and reabsorbed their teeth. A permanent set of teeth grows through soon after birth.
The ringed caecilian ("Siphonops annulatus") has developed a unique adaptation for the purposes of reproduction. The progeny feed on a skin layer that is specially developed by the adult in a phenomenon known as maternal dermatophagy. The brood feed as a batch for about seven minutes at intervals of approximately three days which gives the skin an opportunity to regenerate. Meanwhile, they have been observed to ingest fluid exuded from the maternal cloaca.
The care of offspring among amphibians has been little studied but, in general, the larger the number of eggs in a batch, the less likely it is that any degree of parental care takes place. Nevertheless, it is estimated that in up to 20% of amphibian species, one or both adults play some role in the care of the young. Those species that breed in smaller water bodies or other specialised habitats tend to have complex patterns of behaviour in the care of their young.
Many woodland salamanders lay clutches of eggs under dead logs or stones on land. The black mountain salamander ("Desmognathus welteri") does this, the mother brooding the eggs and guarding them from predation as the embryos feed on the yolks of their eggs. When fully developed, they break their way out of the egg capsules and disperse as juvenile salamanders. The male hellbender, a primitive salamander, excavates an underwater nest and encourages females to lay there. The male then guards the site for the two or three months before the eggs hatch, using body undulations to fan the eggs and increase their supply of oxygen.
The male "Colostethus subpunctatus", a tiny frog, protects the egg cluster which is hidden under a stone or log. When the eggs hatch, the male transports the tadpoles on his back, stuck there by a mucous secretion, to a temporary pool where he dips himself into the water and the tadpoles drop off. The male midwife toad ("Alytes obstetricans") winds egg strings round his thighs and carries the eggs around for up to eight weeks. He keeps them moist and when they are ready to hatch, he visits a pond or ditch and releases the tadpoles. The female gastric-brooding frog ("Rheobatrachus spp.") reared larvae in her stomach after swallowing either the eggs or hatchlings; however, this stage was never observed before the species became extinct. The tadpoles secrete a hormone that inhibits digestion in the mother whilst they develop by consuming their very large yolk supply. The pouched frog ("Assa darlingtoni") lays eggs on the ground. When they hatch, the male carries the tadpoles around in brood pouches on his hind legs. The aquatic Surinam toad ("Pipa pipa") raises its young in pores on its back where they remain until metamorphosis. The granular poison frog ("Oophaga granulifera") is typical of a number of tree frogs in the poison dart frog family Dendrobatidae. Its eggs are laid on the forest floor and when they hatch, the tadpoles are carried one by one on the back of an adult to a suitable water-filled crevice such as the axil of a leaf or the rosette of a bromeliad. The female visits the nursery sites regularly and deposits unfertilised eggs in the water and these are consumed by the tadpoles.
With a few exceptions, adult amphibians are predators, feeding on virtually anything that moves that they can swallow. The diet mostly consists of small prey that do not move too fast such as beetles, caterpillars, earthworms and spiders. The sirens ("Siren spp.") often ingest aquatic plant material with the invertebrates on which they feed and a Brazilian tree frog ("Xenohyla truncata") includes a large quantity of fruit in its diet. The Mexican burrowing toad ("Rhinophrynus dorsalis") has a specially adapted tongue for picking up ants and termites. It projects it with the tip foremost whereas other frogs flick out the rear part first, their tongues being hinged at the front.
Food is mostly selected by sight, even in conditions of dim light. Movement of the prey triggers a feeding response. Frogs have been caught on fish hooks baited with red flannel and green frogs ("Rana clamitans") have been found with stomachs full of elm seeds that they had seen floating past. Toads, salamanders and caecilians also use smell to detect prey. This response is mostly secondary because salamanders have been observed to remain stationary near odoriferous prey but only feed if it moves. Cave-dwelling amphibians normally hunt by smell. Some salamanders seem to have learned to recognize immobile prey when it has no smell, even in complete darkness.
Amphibians usually swallow food whole but may chew it lightly first to subdue it. They typically have small hinged pedicellate teeth, a feature unique to amphibians. The base and crown of these are composed of dentine separated by an uncalcified layer and they are replaced at intervals. Salamanders, caecilians and some frogs have one or two rows of teeth in both jaws, but some frogs ("Rana spp.") lack teeth in the lower jaw, and toads ("Bufo spp.") have no teeth. In many amphibians there are also vomerine teeth attached to a facial bone in the roof of the mouth.
The tiger salamander ("Ambystoma tigrinum") is typical of the frogs and salamanders that hide under cover ready to ambush unwary invertebrates. Other amphibians, such as the "Bufo spp." toads, actively search for prey, while the Argentine horned frog ("Ceratophrys ornata") lures inquisitive prey closer by raising its hind feet over its back and vibrating its yellow toes. Among leaf litter frogs in Panama, frogs that actively hunt prey have narrow mouths and are slim, often brightly coloured and toxic, while ambushers have wide mouths and are broad and well-camouflaged. Caecilians do not flick their tongues, but catch their prey by grabbing it with their slightly backward-pointing teeth. The struggles of the prey and further jaw movements work it inwards and the caecilian usually retreats into its burrow. The subdued prey is gulped down whole.
When they are newly hatched, frog larvae feed on the yolk of the egg. When this is exhausted some move on to feed on bacteria, algal crusts, detritus and raspings from submerged plants. Water is drawn in through their mouths, which are usually at the bottom of their heads, and passes through branchial food traps between their mouths and their gills where fine particles are trapped in mucus and filtered out. Others have specialised mouthparts consisting of a horny beak edged by several rows of labial teeth. They scrape and bite food of many kinds as well as stirring up the bottom sediment, filtering out larger particles with the papillae around their mouths. Some, such as the spadefoot toads, have strong biting jaws and are carnivorous or even cannibalistic.
The calls made by caecilians and salamanders are limited to occasional soft squeaks, grunts or hisses and have not been much studied. A clicking sound sometimes produced by caecilians may be a means of orientation, as in bats, or a form of communication. Most salamanders are considered voiceless, but the California giant salamander ("Dicamptodon ensatus") has vocal cords and can produce a rattling or barking sound. Some species of salamander emit a quiet squeak or yelp if attacked.
Frogs are much more vocal, especially during the breeding season when they use their voices to attract mates. The presence of a particular species in an area may be more easily discerned by its characteristic call than by a fleeting glimpse of the animal itself. In most species, the sound is produced by expelling air from the lungs over the vocal cords into an air sac or sacs in the throat or at the corner of the mouth. This may distend like a balloon and acts as a resonator, helping to transfer the sound to the atmosphere, or the water at times when the animal is submerged. The main vocalisation is the male's loud advertisement call which seeks to both encourage a female to approach and discourage other males from intruding on its territory. This call is modified to a quieter courtship call on the approach of a female or to a more aggressive version if a male intruder draws near. Calling carries the risk of attracting predators and involves the expenditure of much energy. Other calls include those given by a female in response to the advertisement call and a release call given by a male or female during unwanted attempts at amplexus. When a frog is attacked, a distress or fright call is emitted, often resembling a scream. The usually nocturnal Cuban tree frog ("Osteopilus septentrionalis") produces a rain call when there is rainfall during daylight hours.
Little is known of the territorial behaviour of caecilians, but some frogs and salamanders defend home ranges. These are usually feeding, breeding or sheltering sites. Males normally exhibit such behaviour though in some species, females and even juveniles are also involved. Although in many frog species, females are larger than males, this is not the case in most species where males are actively involved in territorial defence. Some of these have specific adaptations such as enlarged teeth for biting or spines on the chest, arms or thumbs.
In salamanders, defence of a territory involves adopting an aggressive posture and, if necessary, attacking the intruder. This may involve snapping, chasing and sometimes biting, occasionally causing the loss of a tail. The behaviour of red back salamanders ("Plethodon cinereus") has been much studied: 91% of marked individuals that were later recaptured were within a metre of their original daytime retreat under a log or rock. A similar proportion, when moved experimentally a short distance away, found their way back to their home base. The salamanders left odour marks around their territories, which were sometimes inhabited by a male and female pair. These deterred the intrusion of others and delineated the boundaries between neighbouring areas. Much of their behaviour seemed stereotyped and did not involve any actual contact between individuals. An aggressive posture involved raising the body off the ground and glaring at the opponent, who often turned away submissively. If the intruder persisted, a biting lunge was usually launched at either the tail region or the naso-labial grooves. Damage to either of these areas can reduce the fitness of the rival, either because of the need to regenerate tissue or because it impairs its ability to detect food.
In frogs, male territorial behaviour is often observed at breeding locations; calling is both an announcement of ownership of part of this resource and an advertisement call to potential mates. In general, a deeper voice represents a heavier and more powerful individual, and this may be sufficient to prevent intrusion by smaller males. Much energy is used in the vocalization and it takes a toll on the territory holder who may be displaced by a fitter rival if he tires. There is a tendency for males to tolerate the holders of neighbouring territories while vigorously attacking unknown intruders. Holders of territories have a "home advantage" and usually come off better in an encounter between two similar-sized frogs. If threats are insufficient, chest to chest tussles may take place. Fighting methods include pushing and shoving, deflating the opponent's vocal sac, seizing him by the head, jumping on his back, biting, chasing, splashing, and ducking him under the water.
Amphibians have soft bodies with thin skins, and lack claws, defensive armour, or spines. Nevertheless, they have evolved various defence mechanisms to keep themselves alive. The first line of defence in salamanders and frogs is the mucous secretion that they produce. This keeps their skin moist and makes them slippery and difficult to grip. The secretion is often sticky and distasteful or toxic. Snakes have been observed yawning and gaping when trying to swallow African clawed frogs ("Xenopus laevis"), which gives the frogs an opportunity to escape. Caecilians have been little studied in this respect, but the Cayenne caecilian ("Typhlonectes compressicauda") produces toxic mucus that has killed predatory fish in a feeding experiment in Brazil. In some salamanders, the skin is poisonous. The rough-skinned newt ("Taricha granulosa") from North America and other members of its genus contain the neurotoxin tetrodotoxin (TTX), the most toxic non-protein substance known and almost identical to that produced by pufferfish. Handling the newts does not cause harm, but ingestion of even the most minute amounts of the skin is deadly. In feeding trials, fish, frogs, reptiles, birds and mammals were all found to be susceptible. The only predators with some tolerance to the poison are certain populations of common garter snake ("Thamnophis sirtalis").
In locations where both snake and newt co-exist, the snakes have developed immunity through genetic changes and they feed on the amphibians with impunity. Coevolution occurs with the newt increasing its toxic capabilities at the same rate as the snake further develops its immunity. Some frogs and toads are toxic, the main poison glands being at the side of the neck and under the warts on the back. These regions are presented to the attacking animal and their secretions may be foul-tasting or cause various physical or neurological symptoms. Altogether, over 200 toxins have been isolated from the limited number of amphibian species that have been investigated.
Poisonous species often use bright colouring to warn potential predators of their toxicity. These warning colours tend to be red or yellow combined with black, with the fire salamander ("Salamandra salamandra") being an example. Once a predator has sampled one of these, it is likely to remember the colouration next time it encounters a similar animal. In some species, such as the fire-bellied toad ("Bombina spp."), the warning colouration is on the belly and these animals adopt a defensive pose when attacked, exhibiting their bright colours to the predator. The frog "Allobates zaparo" is not poisonous, but mimics the appearance of other toxic species in its locality, a strategy that may deceive predators.
Many amphibians are nocturnal and hide during the day, thereby avoiding diurnal predators that hunt by sight. Other amphibians use camouflage to avoid being detected. They have various colourings such as mottled browns, greys and olives to blend into the background. Some salamanders adopt defensive poses when faced by a potential predator such as the North American northern short-tailed shrew ("Blarina brevicauda"). Their bodies writhe and they raise and lash their tails which makes it difficult for the predator to avoid contact with their poison-producing granular glands. A few salamanders will autotomise their tails when attacked, sacrificing this part of their anatomy to enable them to escape. The tail may have a constriction at its base to allow it to be easily detached. The tail is regenerated later, but the energy cost to the animal of replacing it is significant.
Some frogs and toads inflate themselves to make themselves look large and fierce, and some spadefoot toads ("Pelobates spp.") scream and leap towards the attacker. Giant salamanders of the genus "Andrias", as well as Ceratophrine and "Pyxicephalus" frogs possess sharp teeth and are capable of drawing blood with a defensive bite. The blackbelly salamander ("Desmognathus quadramaculatus") can bite an attacking common garter snake ("Thamnophis sirtalis") two or three times its size on the head and often manages to escape.
In amphibians, there is evidence of habituation, associative learning through both classical and instrumental learning, and discrimination abilities.
In one experiment, when offered live fruit flies ("Drosophila virilis"), salamanders chose the larger of 1 vs 2 and 2 vs 3. Frogs can distinguish between low numbers (1 vs 2, 2 vs 3, but not 3 vs 4) and large numbers (3 vs 6, 4 vs 8, but not 4 vs 6) of prey. This is irrespective of other characteristics, i.e. surface area, volume, weight and movement, although discrimination among large numbers may be based on surface area.
Dramatic declines in amphibian populations, including population crashes and mass localized extinctions, have been noted since the late 1980s at locations all over the world, and amphibian declines are thus perceived to be one of the most critical threats to global biodiversity. In 2004, the International Union for Conservation of Nature (IUCN) reported that the extinction rates of birds, mammals and amphibians were at least 48 times greater than natural background extinction rates, and possibly as much as 1,024 times higher.
In 2006 there were believed to be 4,035 species of amphibians that depended on water at some stage during their life cycle. Of these, 1,356 (33.6%) were considered to be threatened and this figure is likely to be an underestimate because it excludes 1,427 species for which there was insufficient data to assess their status. A number of causes are believed to be involved, including habitat destruction and modification, over-exploitation, pollution, introduced species, global warming, endocrine-disrupting pollutants, destruction of the ozone layer (ultraviolet radiation has been shown to be especially damaging to the skin, eyes, and eggs of amphibians), and diseases like chytridiomycosis. However, many of the causes of amphibian declines are still poorly understood, and are a topic of ongoing discussion.
With their complex reproductive needs and permeable skins, amphibians are often considered to be ecological indicators. In many terrestrial ecosystems, they constitute one of the largest parts of the vertebrate biomass. Any decline in amphibian numbers will affect the patterns of predation. The loss of carnivorous species near the top of the food chain will upset the delicate ecosystem balance and may cause dramatic increases in opportunistic species. In the Middle East, a growing appetite for eating frog legs and the consequent gathering of them for food was linked to an increase in mosquitoes. Predators that feed on amphibians are affected by their decline. The western terrestrial garter snake ("Thamnophis elegans") in California is largely aquatic and depends heavily on two species of frog that are decreasing in numbers, the Yosemite toad ("Bufo canorus") and the mountain yellow-legged frog ("Rana muscosa"), putting the snake's future at risk. If the snake were to become scarce, this would affect birds of prey and other predators that feed on it. Meanwhile, in the ponds and lakes, fewer frogs means fewer tadpoles. These normally play an important role in controlling the growth of algae and also forage on detritus that accumulates as sediment on the bottom. A reduction in the number of tadpoles may lead to an overgrowth of algae, resulting in depletion of oxygen in the water when the algae later die and decompose. Aquatic invertebrates and fish might then die and there would be unpredictable ecological consequences.
A global strategy to stem the crisis was released in 2005 in the form of the Amphibian Conservation Action Plan. Developed by over eighty leading experts in the field, this call to action details what would be required to curtail amphibian declines and extinctions over the following five years and how much this would cost. The Amphibian Specialist Group of the IUCN is spearheading efforts to implement a comprehensive global strategy for amphibian conservation. Amphibian Ark is an organization that was formed to implement the ex-situ conservation recommendations of this plan, and they have been working with zoos and aquaria around the world, encouraging them to create assurance colonies of threatened amphibians. One such project is the Panama Amphibian Rescue and Conservation Project that built on existing conservation efforts in Panama to create a country-wide response to the threat of chytridiomycosis.
Alaska
Alaska (; ; ; Alutiiq: "Alas'kaaq;" Tlingit: "Anáaski;" ) is a state located in the northwest extremity of the United States West Coast, just across the Bering Strait from Asia. An exclave of the U.S., it borders the Canadian province of British Columbia and territory of Yukon to the east and southeast and has a maritime border with Russia's Chukotka Autonomous Okrug to the west. To the north are the Chukchi and Beaufort seas of the Arctic Ocean, while the Pacific Ocean lies to the south and southwest.
Alaska is the largest U.S. state by area and the seventh-largest subnational division in the world. It is the third-least populous and the most sparsely populated state, but by far the continent's most populous territory located mostly north of the 60th parallel, with an estimated population of 738,432 as of 2015—more than quadruple the combined populations of Northern Canada and Greenland. Approximately half of Alaska's residents live within the Anchorage metropolitan area. The state capital of Juneau is the second-largest city in the United States by area, comprising more territory than the states of Rhode Island and Delaware.
Alaska was occupied by various indigenous peoples for thousands of years before the arrival of Europeans. The state is considered the entry point for the settlement of North America by way of the Bering land bridge. The Russians were the first Europeans to settle the area beginning in the 18th century, eventually establishing the colony of Alaska that spanned most of the current state. The expense and difficulty of maintaining this distant possession prompted its sale to the U.S. in 1867 for US$7.2 million, or approximately two cents per acre ($4.74/km2). The area went through several administrative changes before becoming organized as a territory on May 11, 1912. It was admitted as the 49th state of the U.S. on January 3, 1959.
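The "approximately two cents per acre" figure above can be checked with simple arithmetic. The sketch below is illustrative only; the land-area value (586,412 square miles) is an assumption taken from common reference figures, not from this article:

```python
# Checking the Alaska Purchase price per unit of land.
# AREA_SQ_MI is an assumed reference value for Alaska's total area.
PURCHASE_PRICE_USD = 7_200_000
AREA_SQ_MI = 586_412
ACRES_PER_SQ_MI = 640
SQ_KM_PER_SQ_MI = 2.58999

acres = AREA_SQ_MI * ACRES_PER_SQ_MI
price_per_acre = PURCHASE_PRICE_USD / acres
price_per_km2 = PURCHASE_PRICE_USD / (AREA_SQ_MI * SQ_KM_PER_SQ_MI)

print(f"${price_per_acre:.3f} per acre")   # roughly two cents per acre
print(f"${price_per_km2:.2f} per km^2")    # roughly $4.74 per km^2
```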
While it has one of the smallest state economies in the country, Alaska's per capita income is among the highest, owing to a diversified economy dominated by fishing, natural gas, and oil, all of which it has in abundance. United States armed forces bases and tourism are also a significant part of the economy; more than half the state is federally owned public land, including a multitude of national forests, parks, and wildlife refuges.
Alaska's indigenous population is proportionally the highest of any U.S. state, at over 15 percent. Close to two dozen native languages are spoken, and Alaskan Natives exercise considerable influence in local and state politics.
The name "Alaska" () was introduced in the Russian colonial period when it was used to refer to the Alaska Peninsula. It was derived from an Aleut-language idiom, which figuratively refers to the mainland. Literally, it means "object to which the action of the sea is directed".
Alaska is the northernmost and westernmost state in the United States and has the most easterly longitude in the United States because the Aleutian Islands extend into the Eastern Hemisphere. Alaska is the only non-contiguous U.S. state on continental North America; about of British Columbia (Canada) separates Alaska from Washington. It is technically part of the continental U.S., but is sometimes not included in colloquial use; Alaska is not part of the contiguous U.S., often called "the Lower 48". The capital city, Juneau, is situated on the mainland of the North American continent but is not connected by road to the rest of the North American highway system.
The state is bordered by Canada's Yukon and British Columbia to the east, the Gulf of Alaska and the Pacific Ocean to the south and southwest, the Bering Sea, Bering Strait, and Chukchi Sea to the west and the Arctic Ocean to the north. Alaska's territorial waters touch Russia's territorial waters in the Bering Strait, as the Russian Big Diomede Island and Alaskan Little Diomede Island are only apart. Alaska has a longer coastline than all the other U.S. states combined.
Alaska is the largest state in the United States by total area. At it is more than twice the size of Texas and—counting territorial waters—larger than Texas, California, and Montana combined. (Alaska is larger than all but 18 of the world's "countries".)
There are no officially defined borders demarcating the various regions of Alaska, but there are six widely accepted regions:
The most populous region of Alaska, containing Anchorage, the Matanuska-Susitna Valley and the Kenai Peninsula. Rural, mostly unpopulated areas south of the Alaska Range and west of the Wrangell Mountains also fall within the definition of South Central, as do the Prince William Sound area and the communities of Cordova and Valdez.
Also referred to as the Panhandle or Inside Passage, this is the region of Alaska closest to the rest of the United States. As such, this was where most of the initial non-indigenous settlement occurred in the years following the Alaska Purchase. The region is dominated by the Alexander Archipelago as well as the Tongass National Forest, the largest national forest in the United States. It contains the state capital Juneau, the former capital Sitka, and Ketchikan, at one time Alaska's largest city. The Alaska Marine Highway provides a vital surface transportation link throughout the area, as only three communities (Haines, Hyder and Skagway) enjoy direct connections to the contiguous North American road system. Officially designated in 1963.
The Interior is the largest region of Alaska; much of it is uninhabited wilderness. Fairbanks is the only large city in the region. Denali National Park and Preserve is located here. "Denali" is the highest mountain in North America.
Southwest Alaska is a sparsely inhabited region stretching some inland from the Bering Sea. Most of the population lives along the coast. Kodiak Island is also located in Southwest. The massive Yukon–Kuskokwim Delta, one of the largest river deltas in the world, is here. Portions of the Alaska Peninsula are considered part of Southwest, with the remaining portions included with the Aleutian Islands (see below).
The North Slope is mostly tundra peppered with small villages. The area is known for its massive reserves of crude oil, and contains both the National Petroleum Reserve–Alaska and the Prudhoe Bay Oil Field. The city of Utqiagvik, formerly known as Barrow, is the northernmost city in the United States and is located here. The Northwest Arctic area, anchored by Kotzebue and also containing the Kobuk River valley, is often regarded as being part of this region. However, the respective Inupiat of the North Slope and of the Northwest Arctic seldom consider themselves to be one people.
More than 300 small volcanic islands make up this chain, which stretches more than into the Pacific Ocean. Some of these islands fall in the Eastern Hemisphere, but the International Date Line was drawn west of 180° to keep the whole state, and thus the entire North American continent, within the same legal day. Two of the islands, Attu and Kiska, were occupied by Japanese forces during World War II.
With its myriad islands, Alaska has nearly of tidal shoreline. The Aleutian Islands chain extends west from the southern tip of the Alaska Peninsula. Many active volcanoes are found in the Aleutians and in coastal regions. Unimak Island, for example, is home to Mount Shishaldin, which is an occasionally smoldering volcano that rises to above the North Pacific. It is the most perfect volcanic cone on Earth, even more symmetrical than Japan's Mount Fuji. The chain of volcanoes extends to Mount Spurr, west of Anchorage on the mainland. Geologists have identified Alaska as part of Wrangellia, a large region consisting of multiple states and Canadian provinces in the Pacific Northwest, which is actively undergoing continent building.
One of the world's largest tides occurs in Turnagain Arm, just south of Anchorage, where tidal differences can be more than .
Alaska has more than three million lakes. Marshlands and wetland permafrost cover (mostly in northern, western and southwest flatlands). Glacier ice covers about of Alaska. The Bering Glacier is the largest glacier in North America, covering alone.
According to an October 1998 report by the United States Bureau of Land Management, approximately 65% of Alaska is owned and managed by the U.S. federal government as public lands, including a multitude of national forests, national parks, and national wildlife refuges. Of these, the Bureau of Land Management manages , or 23.8% of the state. The Arctic National Wildlife Refuge is managed by the United States Fish and Wildlife Service. It is the world's largest wildlife refuge, comprising .
Of the remaining land area, the state of Alaska owns , its entitlement under the Alaska Statehood Act. A portion of that acreage is occasionally ceded to organized boroughs, under the statutory provisions pertaining to newly formed boroughs. Smaller portions are set aside for rural subdivisions and other homesteading-related opportunities. These are not very popular due to the often remote and roadless locations. The University of Alaska, as a land grant university, also owns substantial acreage which it manages independently.
Another are owned by 12 regional, and scores of local, Native corporations created under the Alaska Native Claims Settlement Act (ANCSA) of 1971. Regional Native corporation Doyon, Limited often promotes itself as the largest private landowner in Alaska in advertisements and other communications. Provisions of ANCSA allowing the corporations' land holdings to be sold on the open market starting in 1991 were repealed before they could take effect. Effectively, the corporations hold title (including subsurface title in many cases, a privilege denied to individual Alaskans) but cannot sell the land. Individual Native allotments can be and are sold on the open market, however.
Various private interests own the remaining land, totaling about one percent of the state. Alaska is, by a large margin, the state with the smallest percentage of private land ownership when Native corporation holdings are excluded.
The climate in Southeast Alaska is a mid-latitude oceanic climate (Köppen climate classification: "Cfb") in the southern sections and a subarctic oceanic climate (Köppen "Cfc") in the northern parts. On an annual basis, Southeast is both the wettest and warmest part of Alaska with milder temperatures in the winter and high precipitation throughout the year. Juneau averages over of precipitation a year, and Ketchikan averages over . This is also the only region in Alaska in which the average daytime high temperature is above freezing during the winter months.
The climate of Anchorage and south central Alaska is mild by Alaskan standards due to the region's proximity to the seacoast. While the area gets less rain than southeast Alaska, it gets more snow, and days tend to be clearer. On average, Anchorage receives of precipitation a year, with around of snow, although there are areas in the south central which receive far more snow. It is a subarctic climate () due to its brief, cool summers.
The climate of Western Alaska is determined in large part by the Bering Sea and the Gulf of Alaska. It is a subarctic oceanic climate in the southwest and a continental subarctic climate farther north. The temperature is somewhat moderate considering how far north the area is. This region has a tremendous amount of variety in precipitation. An area stretching from the northern side of the Seward Peninsula to the Kobuk River valley (i.e., the region around Kotzebue Sound) is technically a desert, with portions receiving less than of precipitation annually. On the other extreme, some locations between Dillingham and Bethel average around of precipitation.
The climate of the interior of Alaska is subarctic. Some of the highest and lowest temperatures in Alaska occur around the area near Fairbanks. The summers may have temperatures reaching into the 90s °F (the low-to-mid 30s °C), while in the winter, the temperature can fall below . Precipitation is sparse in the Interior, often less than a year, but what precipitation falls in the winter tends to stay the entire winter.
The highest and lowest recorded temperatures in Alaska are both in the Interior. The highest is in Fort Yukon (which is just inside the arctic circle) on June 27, 1915, making Alaska tied with Hawaii as the state with the lowest high temperature in the United States. The lowest official Alaska temperature is in Prospect Creek on January 23, 1971, one degree above the lowest temperature recorded in continental North America (in Snag, Yukon, Canada).
The climate in the extreme north of Alaska is Arctic () with long, very cold winters and short, cool summers. Even in July, the average low temperature in Utqiagvik is . Precipitation is light in this part of Alaska, with many places averaging less than per year, mostly as snow which stays on the ground almost the entire year.
Numerous indigenous peoples occupied Alaska for thousands of years before the arrival of European peoples to the area. Linguistic and DNA studies done here have provided evidence for the settlement of North America by way of the Bering land bridge. At the Upward Sun River site in the Tanana River Valley in Alaska, remains of a six-week-old infant were found. The baby's DNA showed that she belonged to a population that was genetically separate from other native groups present elsewhere in the New World at the end of the Pleistocene. Ben Potter, the University of Alaska Fairbanks archaeologist who unearthed the remains at the Upward Sun River site in 2013, named this new group Ancient Beringians. The Tlingit people developed a society with a matrilineal kinship system of property inheritance and descent in what is today Southeast Alaska, along with parts of British Columbia and the Yukon. Also in Southeast were the Haida, now well known for their unique arts. The Tsimshian people came to Alaska from British Columbia in 1887, when President Grover Cleveland, and later the U.S. Congress, granted them permission to settle on Annette Island and found the town of Metlakatla. All three of these peoples, as well as other indigenous peoples of the Pacific Northwest Coast, experienced smallpox outbreaks from the late 18th through the mid-19th century, with the most devastating epidemics occurring in the 1830s and 1860s, resulting in high fatalities and social disruption.
The Aleutian Islands are still home to the Aleut people's seafaring society, although they were the first Native Alaskans to be exploited by the Russians. Western and Southwestern Alaska are home to the Yup'ik, while their cousins the Alutiiq ~ Sugpiaq lived in what is now Southcentral Alaska. The Gwich'in people of the northern Interior region are Athabaskan and primarily known today for their dependence on the caribou within the much-contested Arctic National Wildlife Refuge. The North Slope and Little Diomede Island are occupied by the widespread Inupiat people.
Some researchers believe the first Russian settlement in Alaska was established in the 17th century. According to this hypothesis, in 1648 several koches of Semyon Dezhnyov's expedition came ashore in Alaska by storm and founded this settlement. This hypothesis is based on the testimony of Chukchi geographer Nikolai Daurkin, who had visited Alaska in 1764–1765 and who had reported on a village on the Kheuveren River, populated by "bearded men" who "pray to the icons". Some modern researchers associate Kheuveren with Koyuk River.
The first European vessel to reach Alaska is generally held to be the "St. Gabriel" under the authority of the surveyor M. S. Gvozdev and assistant navigator I. Fyodorov on August 21, 1732, during an expedition of Siberian cossack A. F. Shestakov and Russian explorer Dmitry Pavlutsky (1729–1735).
Another European contact with Alaska occurred in 1741, when Vitus Bering led an expedition for the Russian Navy aboard the "St. Peter". After his crew returned to Russia with sea otter pelts judged to be the finest fur in the world, small associations of fur traders began to sail from the shores of Siberia toward the Aleutian Islands. The first permanent European settlement was founded in 1784.
Between 1774 and 1800, Spain sent several expeditions to Alaska to assert its claim over the Pacific Northwest. In 1789 a Spanish settlement and fort were built in Nootka Sound. These expeditions gave names to places such as Valdez, Bucareli Sound, and Cordova. Later, the Russian-American Company carried out an expanded colonization program during the early-to-mid-19th century.
Sitka, renamed New Archangel from 1804 to 1867, on Baranof Island in the Alexander Archipelago in what is now Southeast Alaska, became the capital of Russian America. It remained the capital after the colony was transferred to the United States. The Russians never fully colonized Alaska, and the colony was never very profitable. Evidence of Russian settlement in names and churches survive throughout southeast Alaska.
William H. Seward, the United States Secretary of State, negotiated the Alaska Purchase (also known as Seward's Folly) with the Russians in 1867 for $7.2 million; the purchase was made on March 30, 1867. Six months later the commissioners arrived in Sitka and the formal transfer was arranged; the formal flag-raising took place at Fort Sitka on October 18, 1867. In the ceremony 250 uniformed U.S. soldiers marched to the governor's house at "Castle Hill", where the Russian troops lowered the Russian flag and the U.S. flag was raised. This event is celebrated as Alaska Day, a legal holiday on October 18.
Alaska was loosely governed by the military initially, and was administered as a district starting in 1884, with a governor appointed by the President of the United States. A federal district court was headquartered in Sitka.
For most of Alaska's first decade under the United States flag, Sitka was the only community inhabited by American settlers. They organized a "provisional city government", which was Alaska's first municipal government, but not in a legal sense. Legislation allowing Alaskan communities to legally incorporate as cities did not come about until 1900, and home rule for cities was extremely limited or unavailable until statehood took effect in 1959.
Starting in the 1890s and stretching in some places to the early 1910s, gold rushes in Alaska and the nearby Yukon Territory brought thousands of miners and settlers to Alaska. Alaska was officially incorporated as an organized territory in 1912. Alaska's capital, which had been in Sitka until 1906, was moved north to Juneau. Construction of the Alaska Governor's Mansion began that same year. European immigrants from Norway and Sweden also settled in southeast Alaska, where they entered the fishing and logging industries.
During World War II, the Aleutian Islands Campaign focused on Attu, Agattu and Kiska, all of which were occupied by the Empire of Japan. During the Japanese occupation, a white American civilian and two United States Navy personnel were killed at Attu and Kiska respectively, and nearly 50 Aleut civilians and eight sailors were interned in Japan. About half of the Aleuts died during the period of internment. Unalaska/Dutch Harbor and Adak became significant bases for the United States Army, United States Army Air Forces and United States Navy. The United States Lend-Lease program involved flying American warplanes through Canada to Fairbanks and then Nome; Soviet pilots took possession of these aircraft, ferrying them to fight the German invasion of the Soviet Union. The construction of military bases contributed to the population growth of some Alaskan cities.
Statehood for Alaska was an important cause of James Wickersham early in his tenure as a congressional delegate. Decades later, the statehood movement gained its first real momentum following a territorial referendum in 1946. The Alaska Statehood Committee and Alaska's Constitutional Convention would soon follow. Statehood supporters also found themselves fighting major battles against political foes, mostly in the U.S. Congress but also within Alaska. Statehood was approved by Congress on July 7, 1958. Alaska was officially proclaimed a state on January 3, 1959.
In 1960, the Census Bureau reported Alaska's population as 77.2% White, 3% Black, and 18.8% American Indian and Alaska Native.
On March 27, 1964, the massive Good Friday earthquake killed 133 people and destroyed several villages and portions of large coastal communities, mainly by the resultant tsunamis and landslides. It was the second-most-powerful earthquake in recorded history, with a moment magnitude of 9.2 (more than a thousand times as powerful as the 1989 San Francisco earthquake). The time of day (5:36 pm), time of year (spring) and location of the epicenter were all cited as factors in potentially sparing thousands of lives, particularly in Anchorage.
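The "more than a thousand times as powerful" comparison follows from how the moment magnitude scale relates to energy release. A quick check, assuming the commonly cited magnitude of 6.9 for the 1989 San Francisco (Loma Prieta) earthquake, a figure not given in this article:

```python
# Energy released by an earthquake scales as 10^(1.5 * M) on the
# moment magnitude scale, so the energy ratio between two events is
# 10^(1.5 * (M1 - M2)). The 6.9 value is an assumed reference figure.
M_ALASKA_1964 = 9.2
M_SF_1989 = 6.9

energy_ratio = 10 ** (1.5 * (M_ALASKA_1964 - M_SF_1989))
print(f"~{energy_ratio:.0f} times the energy")  # well over a thousand times
```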
The 1968 discovery of oil at Prudhoe Bay and the 1977 completion of the Trans-Alaska Pipeline System led to an oil boom. Royalty revenues from oil have funded large state budgets from 1980 onward. That same year, not coincidentally, Alaska repealed its state income tax.
In 1989, the "Exxon Valdez" hit a reef in the Prince William Sound, spilling more than of crude oil over of coastline. Today, the battle between philosophies of development and conservation is seen in the contentious debate over oil drilling in the Arctic National Wildlife Refuge and the proposed Pebble Mine.
The Alaska Heritage Resources Survey (AHRS) is a restricted inventory of all reported historic and prehistoric sites within the state of Alaska; it is maintained by the Office of History and Archaeology. The survey's inventory of cultural resources includes objects, structures, buildings, sites, districts, and travel ways, with a general provision that they are more than fifty years old. More than 35,000 sites have been reported.
The United States Census Bureau estimates that the population of Alaska was 731,545 on July 1, 2019, a 3.00% increase since the 2010 United States Census.
In 2010, Alaska ranked as the 47th state by population, ahead of North Dakota, Vermont, and Wyoming (and Washington, D.C.). More recent estimates show North Dakota ahead. Alaska is the least densely populated state, and one of the most sparsely populated areas in the world, at , with the next state, Wyoming, at . Alaska is by far the largest U.S. state by area, and the tenth wealthiest (per capita income). The state's unemployment rate was 6.6%.
It is one of 14 U.S. states that still have only one telephone area code.
According to the 2010 United States Census, Alaska had a population of 710,231. In terms of race and ethnicity, the state was 66.7% White (64.1% Non-Hispanic White), 14.8% American Indian and Alaska Native, 5.4% Asian, 3.3% Black or African American, 1.0% Native Hawaiian and Other Pacific Islander, 1.6% from Some Other Race, and 7.3% from Two or More Races. Hispanics or Latinos of any race made up 5.5% of the population.
Just over half (50.7%) of Alaska's population younger than one year of age belonged to minority groups (i.e., did not have two parents of non-Hispanic white ancestry).
According to the 2011 American Community Survey, 83.4% of people over the age of five spoke only English at home. About 3.5% spoke Spanish at home, 2.2% spoke another Indo-European language, about 4.3% spoke an Asian language (including Tagalog), and about 5.3% spoke other languages at home.
The Alaska Native Language Center at the University of Alaska Fairbanks claims that at least 20 Alaskan native languages exist and there are also some languages with different dialects. Most of Alaska's native languages belong to either the Eskimo–Aleut or Na-Dene language families; however, some languages are thought to be isolates (e.g. Haida) or have not yet been classified (e.g. Tsimshianic). Nearly all of Alaska's native languages were classified as either threatened, shifting, moribund, nearly extinct, or dormant languages.
A total of 5.2% of Alaskans speak one of the state's 20 indigenous languages, known locally as "native languages".
In October 2014, the governor of Alaska signed a bill declaring the state's 20 indigenous languages to have official status. This bill gave them symbolic recognition as official languages, though they have not been adopted for official use within the government. The 20 languages that were included in the bill are:
According to statistics collected by the Association of Religion Data Archives from 2010, about 34% of Alaska residents were members of religious congregations. 100,960 people identified as Evangelical Protestants, 50,866 as Roman Catholic, and 32,550 as mainline Protestants. Roughly 4% are Mormon, 0.5% are Jewish, 1% are Muslim, 0.5% are Buddhist, 0.2% are Bahá'í, and 0.5% are Hindu. The largest religious denominations in Alaska were the Catholic Church with 50,866 adherents, non-denominational Evangelical Protestants with 38,070 adherents, The Church of Jesus Christ of Latter-day Saints with 32,170 adherents, and the Southern Baptist Convention with 19,891 adherents. Alaska has been identified, along with the Pacific Northwest states Washington and Oregon, as among the least religious states of the USA in terms of church membership.
In 1795, the first Russian Orthodox church was established in Kodiak. Intermarriage with Alaskan Natives helped the Russian immigrants integrate into society. As a result, an increasing number of Russian Orthodox churches gradually became established within Alaska. Alaska also has the largest Quaker population (by percentage) of any state. In 2009 there were 6,000 Jews in Alaska (for whom observance of halakha may pose special problems). Alaskan Hindus often share venues and celebrations with members of other Asian religious communities, including Sikhs and Jains. In 2010, Alaskan Hindus established the Sri Ganesha Temple of Alaska, making it the first Hindu temple in Alaska and the northernmost Hindu temple in the world. There are an estimated 2,000–3,000 Hindus in Alaska. The vast majority of Hindus live in Anchorage or Fairbanks.
Estimates for the number of Muslims in Alaska range from 2,000 to 5,000. The Islamic Community Center of Anchorage began efforts in the late 1990s to construct a mosque in Anchorage. They broke ground on a building in south Anchorage in 2010 and were nearing completion in late 2014. When completed, the mosque will be the first in the state and one of the northernmost mosques in the world. There is also a Bahá'í center.
The 2018 gross state product was $55 billion, 48th in the nation. Its per capita personal income for 2018 was $73,000, ranking 7th in the nation. According to a 2013 study by Phoenix Marketing International, Alaska had the fifth-largest number of millionaires per capita in the United States, with a ratio of 6.75 percent. The oil and gas industry dominates the Alaskan economy, with more than 80% of the state's revenues derived from petroleum extraction. Alaska's main export product (excluding oil and natural gas) is seafood, primarily salmon, cod, pollock and crab.
Agriculture represents a very small fraction of the Alaskan economy. Agricultural production is primarily for consumption within the state and includes nursery stock, dairy products, vegetables, and livestock. Manufacturing is limited, with most foodstuffs and general goods imported from elsewhere.
Employment is primarily in government and industries such as natural resource extraction, shipping, and transportation. Military bases are a significant component of the economy in the Fairbanks North Star, Anchorage and Kodiak Island boroughs, as well as Kodiak. Federal subsidies are also an important part of the economy, allowing the state to keep taxes low. Its industrial outputs are crude petroleum, natural gas, coal, gold, precious metals, zinc and other mining, seafood processing, timber and wood products. There is also a growing service and tourism sector. Tourists have contributed to the economy by supporting local lodging.
Alaska has vast energy resources, although its oil reserves have been largely depleted. Major oil and gas reserves were found in the Alaska North Slope (ANS) and Cook Inlet basins, but according to the Energy Information Administration, by February 2014 Alaska had fallen to fourth place in the nation in crude oil production after Texas, North Dakota, and California. Prudhoe Bay on Alaska's North Slope is still the second highest-yielding oil field in the United States, typically producing about , although by early 2014 North Dakota's Bakken Formation was producing over . Prudhoe Bay was the largest conventional oil field ever discovered in North America, but was much smaller than Canada's enormous Athabasca oil sands field, which by 2014 was producing about of unconventional oil, and had hundreds of years of producible reserves at that rate.
The Trans-Alaska Pipeline can transport and pump up to of crude oil per day, more than any other crude oil pipeline in the United States. Additionally, substantial coal deposits are found in Alaska's bituminous, sub-bituminous, and lignite coal basins. The United States Geological Survey estimates that there are of undiscovered, technically recoverable gas from natural gas hydrates on the Alaskan North Slope. Alaska also offers some of the highest hydroelectric power potential in the country from its numerous rivers. Large swaths of the Alaskan coastline offer wind and geothermal energy potential as well.
Alaska's economy depends heavily on increasingly expensive diesel fuel for heating, transportation, electric power and light. Although wind and hydroelectric power are abundant and underdeveloped, proposals for statewide energy systems (e.g. with special low-cost electric interties) were judged uneconomical (at the time of the report, 2001) due to low (less than 50¢/gal) fuel prices, long distances and low population. The cost of a gallon of gas in urban Alaska today is usually thirty to sixty cents higher than the national average; prices in rural areas are generally significantly higher but vary widely depending on transportation costs, seasonal usage peaks, nearby petroleum development infrastructure and many other factors.
The Alaska Permanent Fund is a constitutionally authorized appropriation of oil revenues, established by voters in 1976 to manage a surplus in state petroleum revenues from oil, largely in anticipation of the then recently constructed Trans-Alaska Pipeline System. The fund was originally proposed by Governor Keith Miller on the eve of the 1969 Prudhoe Bay lease sale, out of fear that the legislature would spend the entire proceeds of the sale (which amounted to $900 million) at once. It was later championed by Governor Jay Hammond and Kenai state representative Hugh Malone. It has served as an attractive political prospect ever since, diverting revenues which would normally be deposited into the general fund.
The Alaska Constitution was written so as to discourage dedicating state funds for a particular purpose. The Permanent Fund has become the rare exception to this, mostly due to the political climate of distrust existing during the time of its creation. From its initial principal of $734,000, the fund has grown to $50 billion as a result of oil royalties and capital investment programs. Most if not all the principal is invested conservatively outside Alaska. This has led to frequent calls by Alaskan politicians for the Fund to make investments within Alaska, though such a stance has never gained momentum.
Starting in 1982, dividends from the fund's annual growth have been paid out each year to eligible Alaskans, ranging from an initial $1,000 in 1982 (equal to three years' payout, as the distribution of payments was held up in a lawsuit over the distribution scheme) to $3,269 in 2008 (which included a one-time $1,200 "Resource Rebate"). Every year, the state legislature draws 8% of the fund's earnings, returns 3% to the principal for inflation proofing, and distributes the remaining 5% to all qualifying Alaskans. To qualify for the Permanent Fund Dividend, one must have lived in the state for a minimum of 12 months, maintain constant residency subject to allowable absences, and not be subject to court judgments or criminal convictions which fall under various disqualifying classifications or may subject the payment amount to civil garnishment.
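The 8%/3%/5% division described above is simple arithmetic; the sketch below restates it in code. The function name and the sample earnings figure are illustrative only, not an official formula.

```python
def pfd_split(earnings):
    """Illustrative split of Permanent Fund earnings as described above:
    8% of earnings is drawn; 3% (of earnings) is returned to the principal
    for inflation proofing and the remaining 5% is paid out as dividends."""
    draw = 0.08 * earnings
    inflation_proofing = 0.03 * earnings
    dividends = 0.05 * earnings
    # The two uses together should account for the whole draw.
    assert abs(draw - (inflation_proofing + dividends)) < 1e-6
    return inflation_proofing, dividends

# For example, on a hypothetical $1 billion of annual earnings:
proofing, dividends = pfd_split(1_000_000_000)
```

On that hypothetical figure, $30 million would go back to the principal and $50 million would be distributed.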
The Permanent Fund is often considered to be one of the leading examples of a "Basic income" policy in the world.
The cost of goods in Alaska has long been higher than in the contiguous 48 states. Federal government employees, particularly United States Postal Service (USPS) workers and active-duty military members, receive a Cost of Living Allowance, usually set at 25% of base pay, because the cost of living, although it has declined somewhat, remains one of the highest in the country.
Rural Alaska suffers from extremely high prices for food and consumer goods compared to the rest of the country, due to the relatively limited transportation infrastructure.
Due to the northern climate and short growing season, relatively little farming occurs in Alaska. Most farms are in either the Matanuska Valley, about northeast of Anchorage, or on the Kenai Peninsula, about southwest of Anchorage. The short 100-day growing season limits the crops that can be grown, but the long sunny summer days make for productive growing seasons. The primary crops are potatoes, carrots, lettuce, and cabbage.
The Tanana Valley is another notable agricultural locus, especially the Delta Junction area, about southeast of Fairbanks, with a sizable concentration of farms growing agronomic crops; these farms mostly lie north and east of Fort Greely. This area was largely set aside and developed under a state program spearheaded by Hammond during his second term as governor. Delta-area crops consist predominantly of barley and hay. West of Fairbanks lies another concentration of small farms catering to restaurants, the hotel and tourist industry, and community-supported agriculture.
Alaskan agriculture has experienced a surge in growth of market gardeners, small farms and farmers' markets in recent years, with the highest percentage increase (46%) in the nation in growth in farmers' markets in 2011, compared to 17% nationwide. The peony industry has also taken off, as the growing season allows farmers to harvest during a gap in supply elsewhere in the world, thereby filling a niche in the flower market.
Alaska, with no counties, lacks county fairs. However, a small assortment of state and local fairs (with the Alaska State Fair in Palmer the largest) is held mostly in the late summer. The fairs are mostly located in communities with historic or current agricultural activity, and feature local farmers exhibiting produce in addition to more high-profile commercial activities such as carnival rides, concerts and food. "Alaska Grown" is used as an agricultural slogan.
Alaska has an abundance of seafood, with the primary fisheries in the Bering Sea and the North Pacific. Seafood is one of the few food items that is often cheaper within the state than outside it. Many Alaskans take advantage of salmon seasons to harvest portions of their household diet while fishing for subsistence, as well as sport. This includes fish taken by hook, net or wheel.
Hunting for subsistence, primarily caribou, moose, and Dall sheep is still common in the state, particularly in remote Bush communities. An example of a traditional native food is Akutaq, the Eskimo ice cream, which can consist of reindeer fat, seal oil, dried fish meat and local berries.
Alaska's reindeer herding is concentrated on Seward Peninsula, where wild caribou can be prevented from mingling and migrating with the domesticated reindeer.
Most food in Alaska is transported into the state from "Outside", and shipping costs make food in the cities relatively expensive. In rural areas, subsistence hunting and gathering is an essential activity because imported food is prohibitively expensive. Although most small towns and villages in Alaska lie along the coastline, the cost of importing food to remote villages can be high because of difficult terrain and road conditions, which vary dramatically with climate and precipitation. The cost of transport can reach as high as 50¢ per pound ($1.10/kg) or more in some remote areas during the most difficult times, if these locations can be reached at all in such weather and terrain conditions. The cost of delivering a of milk is about $3.50 in many villages where per capita income can be $20,000 or less. Fuel cost per gallon is routinely twenty to thirty cents higher than the continental United States average, with only Hawaii having higher prices.
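The per-weight freight figure quoted above is a unit conversion; the short check below (helper name is illustrative) confirms that 50¢ per pound corresponds to roughly $1.10 per kilogram.

```python
LB_PER_KG = 2.20462  # pounds in one kilogram

def price_per_kg(price_per_lb):
    """Convert a per-pound price to the equivalent per-kilogram price."""
    return price_per_lb * LB_PER_KG

# 50 cents per pound comes out to about $1.10 per kilogram, as quoted above.
print(round(price_per_kg(0.50), 2))  # → 1.1
```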
Alaska has few road connections compared to the rest of the U.S. The state's road system covers a relatively small area of the state, linking the central population centers and the Alaska Highway, the principal route out of the state through Canada. The state capital, Juneau, is not accessible by road, only by car ferry; this has spurred debate over decades about moving the capital to a city on the road system, or building a road connection from Haines. The western part of Alaska has no road system connecting the communities with the rest of Alaska.
The Interstate Highways in Alaska consist of a total of 1,082 miles. One unique feature of the Alaska Highway system is the Anton Anderson Memorial Tunnel, an active Alaska Railroad tunnel recently upgraded to provide a paved roadway link between the isolated community of Whittier on Prince William Sound and the Seward Highway about southeast of Anchorage at Portage. At , the tunnel was the longest road tunnel in North America until 2007; it remains the longest combination road and rail tunnel in North America.
Built around 1915, the Alaska Railroad (ARR) played a key role in the development of Alaska through the 20th century. It links north Pacific shipping to Interior Alaska, providing critical infrastructure with tracks that run from Seward to Interior Alaska by way of South Central Alaska, passing through Anchorage, Eklutna, Wasilla, Talkeetna, Denali, and Fairbanks, with spurs to Whittier, Palmer and North Pole. The cities, towns, villages, and region served by ARR tracks are known statewide as "The Railbelt". In recent years, the ever-improving paved highway system began to eclipse the railroad's importance in Alaska's economy.
The railroad played a vital role in Alaska's development, moving freight into Alaska while transporting natural resources southward (i.e., coal from the Usibelli coal mine near Healy to Seward and gravel from the Matanuska Valley to Anchorage). It is well known for its summertime tour passenger service.
The Alaska Railroad was one of the last railroads in North America to use cabooses in regular service and still uses them on some gravel trains. It continues to offer one of the last flag stop routes in the country. A stretch of about of track along an area north of Talkeetna remains inaccessible by road; the railroad provides the only transportation to rural homes and cabins in the area. Until construction of the Parks Highway in the 1970s, the railroad provided the only land access to most of the region along its entire route.
In northern Southeast Alaska, the White Pass and Yukon Route also partly runs through the state from Skagway northwards into Canada (British Columbia and Yukon Territory), crossing the border at White Pass Summit. This line is now mainly used by tourists, often arriving by cruise liner at Skagway. It was featured in the 1983 BBC television series "Great Little Railways."
The Alaska Rail network is not connected to Outside. (The nearest link to the North American railway network is the northwest terminus of the Canadian National Railway at Prince Rupert, British Columbia, several hundred miles to the southeast.) In 2000, the U.S. Congress authorized $6 million to study the feasibility of a rail link between Alaska, Canada, and the lower 48.
Alaska Rail Marine provides car float service between Whittier and Seattle.
Many cities, towns and villages in the state do not have road or highway access; the only modes of access involve travel by air, river, or the sea.
Alaska's well-developed state-owned ferry system (known as the Alaska Marine Highway) serves the cities of southeast, the Gulf Coast and the Alaska Peninsula. The ferries transport vehicles as well as passengers. The system also operates a ferry service from Bellingham, Washington and Prince Rupert, British Columbia, in Canada through the Inside Passage to Skagway. The Inter-Island Ferry Authority also serves as an important marine link for many communities in the Prince of Wales Island region of Southeast and works in concert with the Alaska Marine Highway.
In recent years, cruise lines have created a summertime tourism market, mainly connecting the Pacific Northwest to Southeast Alaska and, to a lesser degree, towns along Alaska's gulf coast. The population of Ketchikan for example fluctuates dramatically on many days—up to four large cruise ships can dock there at the same time.
Cities not served by road, sea, or river can be reached only by air, foot, dogsled, or snowmachine, accounting for Alaska's extremely well developed bush air services, an Alaskan novelty. Anchorage and, to a lesser extent, Fairbanks are served by many major airlines. Because of limited highway access, air travel remains the most efficient form of transportation in and out of the state. Anchorage recently completed extensive remodeling and construction at Ted Stevens Anchorage International Airport to help accommodate the upsurge in tourism (in 2012–2013, Alaska received almost two million visitors).
Regular flights to most villages and towns within the state are rarely commercially viable, so they are heavily subsidized by the federal government through the Essential Air Service program. Alaska Airlines is the only major airline offering in-state travel with jet service (sometimes in combination cargo and passenger Boeing 737-400s) from Anchorage and Fairbanks to regional hubs like Bethel, Nome, Kotzebue, Dillingham, Kodiak, and other larger communities as well as to major Southeast and Alaska Peninsula communities.
The bulk of remaining commercial flight offerings come from small regional commuter airlines such as Ravn Alaska, PenAir, and Frontier Flying Service. The smallest towns and villages must rely on scheduled or chartered bush flying services using general aviation aircraft such as the Cessna Caravan, the most popular aircraft in use in the state. Much of this service can be attributed to the Alaska bypass mail program which subsidizes bulk mail delivery to Alaskan rural communities. The program requires 70% of that subsidy to go to carriers who offer passenger service to the communities.
Many communities have small air taxi services. These operations originated from the demand for customized transport to remote areas. Perhaps the most quintessentially Alaskan plane is the bush seaplane. The world's busiest seaplane base is Lake Hood, located next to Ted Stevens Anchorage International Airport, where flights bound for remote villages without an airstrip carry passengers, cargo, and many items from stores and warehouse clubs. In 2006 Alaska had the highest number of pilots per capita of any U.S. state.
Another Alaskan transportation method is the dogsled. In modern times (that is, any time after the mid-late 1920s), dog mushing is more of a sport than a true means of transportation. Various races are held around the state, but the best known is the Iditarod Trail Sled Dog Race, a trail from Anchorage to Nome (although the distance varies from year to year, the official distance is set at ). The race commemorates the famous 1925 serum run to Nome in which mushers and dogs like Togo and Balto took much-needed medicine to the diphtheria-stricken community of Nome when all other means of transportation had failed. Mushers from all over the world come to Anchorage each March to compete for cash, prizes, and prestige. The "Serum Run" is another sled dog race that more accurately follows the route of the famous 1925 relay, leaving from the community of Nenana (southwest of Fairbanks) to Nome.
In areas not served by road or rail, primary transportation in summer is by all-terrain vehicle and in winter by snowmobile or "snow machine", as it is commonly referred to in Alaska.
Alaska's internet and other data transport systems are provided largely through the two major telecommunications companies: GCI and Alaska Communications. GCI owns and operates what it calls the Alaska United Fiber Optic system and, as of late 2011, Alaska Communications advertised that it has "two fiber optic paths to the lower 48 and two more across Alaska". In January 2011, it was reported that a $1 billion project to connect Asia and rural Alaska was being planned, aided in part by $350 million in stimulus funds from the federal government.
Like all other U.S. states, Alaska is governed as a republic, with three branches of government: an executive branch consisting of the governor of Alaska and his or her appointees who head executive departments; a legislative branch consisting of the Alaska House of Representatives and Alaska Senate; and a judicial branch consisting of the Alaska Supreme Court and lower courts.
The state of Alaska employs approximately 16,000 people statewide.
The Alaska Legislature consists of a 40-member House of Representatives and a 20-member Senate. Senators serve four-year terms and House members two. The governor of Alaska serves four-year terms. The lieutenant governor runs separately from the governor in the primaries, but during the general election, the nominee for governor and nominee for lieutenant governor run together on the same ticket.
Alaska's court system has four levels: the Alaska Supreme Court, the Alaska Court of Appeals, the superior courts and the district courts. The superior and district courts are trial courts. Superior courts are courts of general jurisdiction, while district courts hear only certain types of cases, including misdemeanor criminal cases and civil cases valued up to $100,000.
The Supreme Court and the Court of Appeals are appellate courts. The Court of Appeals is required to hear appeals from certain lower-court decisions, including those regarding criminal prosecutions, juvenile delinquency, and habeas corpus. The Supreme Court hears civil appeals and may in its discretion hear criminal appeals.
Although in its early years of statehood Alaska was a Democratic state, since the early 1970s it has been characterized as Republican-leaning. Local political communities have often worked on issues related to land use development, fishing, tourism, and individual rights. Alaska Natives, while organized in and around their communities, have been active within the Native corporations. These have been given ownership over large tracts of land, which require stewardship.
Alaska was formerly the only state in which possession of one ounce or less of marijuana in one's home was completely legal under state law, though the federal law remains in force.
The state has an independence movement favoring a vote on secession from the United States, with the Alaskan Independence Party.
Six Republicans and four Democrats have served as governor of Alaska. In addition, Republican governor Wally Hickel was elected to the office for a second term in 1990 after leaving the Republican party and briefly joining the Alaskan Independence Party ticket just long enough to be reelected. He officially rejoined the Republican party in 1994.
Alaska's voter initiative making marijuana legal took effect on February 24, 2015, placing Alaska alongside Colorado and Washington as the first three U.S. states where recreational marijuana is legal. The new law means people over 21 can consume small amounts of pot—if they can find it. (It is still illegal to sell.) The first legal marijuana store opened in Valdez in October 2016.
To finance state government operations, Alaska depends primarily on petroleum revenues and federal subsidies. This allows it to have the lowest individual tax burden in the United States. It is one of five states with no sales tax, one of seven states with no individual income tax, and, along with New Hampshire, one of two that have neither. The Department of Revenue Tax Division reports regularly on the state's revenue sources. The Department also issues an annual summary of its operations, including new state laws that directly affect the tax division.
While Alaska has no state sales tax, 89 municipalities collect a local sales tax, from 1.0–7.5%, typically 3–5%. Other local taxes levied include raw fish taxes, hotel, motel, and bed-and-breakfast 'bed' taxes, severance taxes, liquor and tobacco taxes, gaming (pull tabs) taxes, tire taxes and fuel transfer taxes. A part of the revenue collected from certain state taxes and license fees (such as petroleum, aviation motor fuel, telephone cooperative) is shared with municipalities in Alaska.
Fairbanks has one of the highest property taxes in the state as no sales or income taxes are assessed in the Fairbanks North Star Borough (FNSB). A sales tax for the FNSB has been voted on many times, but has yet to be approved, leading lawmakers to increase taxes dramatically on goods such as liquor and tobacco.
In 2014 the Tax Foundation ranked Alaska as having the fourth most "business friendly" tax policy, behind only Wyoming, South Dakota, and Nevada.
Alaska regularly supports Republicans in presidential elections and has done so since statehood. Republicans have won the state's electoral college votes in all but one election that it has participated in (1964). No state has voted for a Democratic presidential candidate fewer times. Alaska was carried by Democratic nominee Lyndon B. Johnson during his landslide election in 1964, while the 1960 and 1968 elections were close. Since 1972, however, Republicans have carried the state by large margins. In 2008, Republican John McCain defeated Democrat Barack Obama in Alaska, 59.49% to 37.83%. McCain's running mate was Sarah Palin, the state's governor and the first Alaskan on a major party ticket. Obama lost Alaska again in 2012, but he captured 40% of the state's vote in that election, making him the first Democrat to do so since 1968.
The Alaska Bush, central Juneau, midtown and downtown Anchorage, and the areas surrounding the University of Alaska Fairbanks campus and Ester have been strongholds of the Democratic Party. The Matanuska-Susitna Borough, the majority of Fairbanks (including North Pole and the military base), and South Anchorage typically have the strongest Republican showing. Well over half of all registered voters have chosen "non-partisan" or "undeclared" as their affiliation, despite recent attempts to close primaries to unaffiliated voters.
Because of its population relative to other U.S. states, Alaska has only one member in the U.S. House of Representatives. This seat is held by Republican Don Young, who was re-elected to his 21st consecutive term in 2012. Alaska's at-large congressional district is one of the largest parliamentary constituencies in the world by area.
In 2008, Governor Sarah Palin became the first Republican woman to run on a national ticket when she became John McCain's running mate. She continued to be a prominent national figure even after resigning from the governor's job in July 2009.
Alaska's United States senators belong to Class 2 and Class 3. In 2008, Democrat Mark Begich, mayor of Anchorage, defeated long-time Republican senator Ted Stevens. Stevens had been convicted on seven felony counts of failing to report gifts on Senate financial disclosure forms one week before the election. The conviction was set aside in April 2009 after evidence of prosecutorial misconduct emerged.
Republican Frank Murkowski held the state's other senatorial position. After being elected governor in 2002, he resigned from the Senate and appointed his daughter, State representative Lisa Murkowski, as his successor. She won full six-year terms in 2004, 2010 and 2016.
Alaska is not divided into counties, as most of the other U.S. states are, but into "boroughs". Many of the more densely populated parts of the state are part of Alaska's 16 boroughs, which function somewhat similarly to counties in other states. However, unlike county-equivalents in the other 49 states, the boroughs do not cover the entire land area of the state. The area not part of any borough is referred to as the Unorganized Borough.
The Unorganized Borough has no government of its own, but the U.S. Census Bureau in cooperation with the state divided the Unorganized Borough into 11 census areas solely for the purposes of statistical analysis and presentation. A "recording district" is a mechanism for administration of the public record in Alaska. The state is divided into 34 recording districts which are centrally administered under a State Recorder. All recording districts use the same acceptance criteria, fee schedule, etc., for accepting documents into the public record.
Whereas many U.S. states use a three-tiered system of decentralization (state/county/township), most of Alaska uses only two tiers: state/borough. Owing to the low population density, most of the land is located in the Unorganized Borough. As the name implies, it has no intermediate borough government but is administered directly by the state government. In 2000, 57.71% of Alaska's area had this status, with 13.05% of the population.
Anchorage merged the city government with the Greater Anchorage Area Borough in 1975 to form the Municipality of Anchorage, containing the city proper and the communities of Eagle River, Chugiak, Peters Creek, Girdwood, Bird, and Indian. Fairbanks has a separate borough (the Fairbanks North Star Borough) and municipality (the City of Fairbanks).
The state's most populous city is Anchorage, home to 278,700 people in 2006, 225,744 of whom live in the urbanized area. The richest location in Alaska by per capita income is Halibut Cove ($89,895). Yakutat City, Sitka, Juneau, and Anchorage are the four largest cities in the U.S. by area.
As reflected in the 2010 United States Census, Alaska has a total of 355 incorporated cities and census-designated places (CDPs). The tally of cities includes four unified municipalities, essentially the equivalent of a consolidated city–county. The majority of these communities are located in the rural expanse of Alaska known as "The Bush" and are unconnected to the contiguous North American road network. The table at the bottom of this section lists the 100 largest cities and census-designated places in Alaska, in population order.
Of Alaska's 2010 Census population figure of 710,231, 20,429 people, or 2.88% of the population, did not live in an incorporated city or census-designated place. Approximately three-quarters of that figure were people who live in urban and suburban neighborhoods on the outskirts of the city limits of Ketchikan, Kodiak, Palmer and Wasilla. CDPs have not been established for these areas by the United States Census Bureau, except that seven CDPs were established for the Ketchikan-area neighborhoods in the 1980 Census (Clover Pass, Herring Cove, Ketchikan East, Mountain Point, North Tongass Highway, Pennock Island and Saxman East), but have not been used since. The remaining population was scattered throughout Alaska, both within organized boroughs and in the Unorganized Borough, in largely remote areas.
The Alaska Department of Education and Early Development administers many school districts in Alaska. In addition, the state operates a boarding school, Mt. Edgecumbe High School in Sitka, and provides partial funding for other boarding schools, including Nenana Student Living Center in Nenana and The Galena Interior Learning Academy in Galena.
There are more than a dozen colleges and universities in Alaska. Accredited universities in Alaska include the University of Alaska Anchorage, University of Alaska Fairbanks, University of Alaska Southeast, and Alaska Pacific University. Alaska is the only state that has no institutions that are part of NCAA Division I.
The Alaska Department of Labor and Workforce Development operates AVTEC, Alaska's Institute of Technology. Campuses in Seward and Anchorage offer one-week to 11-month training programs in areas as diverse as Information Technology, Welding, Nursing, and Mechanics.
Alaska has had a problem with a "brain drain". Many of its young people, including most of the highest academic achievers, leave the state after high school graduation and do not return. Alaska has neither a law school nor a medical school. The University of Alaska has attempted to combat this by offering partial four-year scholarships to the top 10% of Alaska high school graduates, via the Alaska Scholars Program.
The Alaska State Troopers are Alaska's statewide police force. They have a long and storied history, but were not an official organization until 1941. Before the force was officially organized, law enforcement in Alaska was handled by various federal agencies. Larger towns usually have their own local police and some villages rely on "Public Safety Officers" who have police training but do not carry firearms. In much of the state, the troopers serve as the only police force available. In addition to enforcing traffic and criminal law, wildlife Troopers enforce hunting and fishing regulations. Due to the varied terrain and wide scope of the Troopers' duties, they employ a wide variety of land, air, and water patrol vehicles.
Many rural communities in Alaska are considered "dry", having outlawed the importation of alcoholic beverages. Suicide rates for rural residents are higher than for urban residents.
Domestic abuse and other violent crimes are also at high levels in the state; this is in part linked to alcohol abuse. Alaska has the highest rate of sexual assault in the nation, especially in rural areas. The average age of sexual assault victims is 16 years old. In four out of five cases, the suspects were relatives, friends or acquaintances.
Some of Alaska's popular annual events are the Iditarod Trail Sled Dog Race from Anchorage to Nome, World Ice Art Championships in Fairbanks, the Blueberry Festival and Alaska Hummingbird Festival in Ketchikan, the Sitka Whale Fest, and the Stikine River Garnet Fest in Wrangell. The Stikine River attracts the largest springtime concentration of American bald eagles in the world.
The Alaska Native Heritage Center celebrates the rich heritage of Alaska's 11 cultural groups. Their purpose is to encourage cross-cultural exchanges among all people and enhance self-esteem among Native people. The Alaska Native Arts Foundation promotes and markets Native art from all regions and cultures in the State, using the internet.
Influences on music in Alaska include the traditional music of Alaska Natives as well as folk music brought by later immigrants from Russia and Europe. Prominent musicians from Alaska include singer Jewel, traditional Aleut flautist Mary Youngblood, folk singer-songwriter Libby Roderick, Christian music singer-songwriter Lincoln Brewster, metal/post hardcore band 36 Crazyfists and the groups Pamyua and Portugal. The Man.
There are many established music festivals in Alaska, including the Alaska Folk Festival, the Fairbanks Summer Arts Festival, the Anchorage Folk Festival, the Athabascan Old-Time Fiddling Festival, the Sitka Jazz Festival, and the Sitka Summer Music Festival. The most prominent orchestra in Alaska is the Anchorage Symphony Orchestra, though the Fairbanks Symphony Orchestra and Juneau Symphony are also notable. The Anchorage Opera is currently the state's only professional opera company, though there are several volunteer and semi-professional organizations in the state as well.
The official state song of Alaska is "Alaska's Flag", which was adopted in 1955; it celebrates the flag of Alaska.
Alaska's first independent picture entirely made in Alaska was "The Chechahcos", produced by Alaskan businessman Austin E. Lathrop and filmed in and around Anchorage. Released in 1924 by the Alaska Moving Picture Corporation, it was the only film the company made.
One of the most prominent movies filmed in Alaska is MGM's "Eskimo/Mala The Magnificent", starring Alaska Native Ray Mala. In 1932 an expedition set out from MGM's studios in Hollywood to Alaska to film what was then billed as "The Biggest Picture Ever Made". Upon arriving in Alaska, they set up "Camp Hollywood" in Northwest Alaska, where they lived during the duration of the filming. Louis B. Mayer spared no expense in spite of the remote location, going so far as to hire the chef from the Hotel Roosevelt in Hollywood to prepare meals.
When "Eskimo" premiered at the Astor Theatre in New York City, the studio received the largest amount of feedback in its history. "Eskimo" was critically acclaimed and released worldwide; as a result, Mala became an international movie star. "Eskimo" won the first Oscar for Best Film Editing at the Academy Awards, and showcased and preserved aspects of Inupiat culture on film.
The 1983 Disney movie "Never Cry Wolf" was at least partially shot in Alaska. The 1991 film "White Fang", based on Jack London's novel and starring Ethan Hawke, was filmed in and around Haines. Steven Seagal's 1994 "On Deadly Ground", starring Michael Caine, was filmed in part at the Worthington Glacier near Valdez. The 1999 John Sayles film "Limbo", starring David Strathairn, Mary Elizabeth Mastrantonio, and Kris Kristofferson, was filmed in Juneau.
The psychological thriller "Insomnia", starring Al Pacino and Robin Williams, was shot in Canada, but was set in Alaska. The 2007 film directed by Sean Penn, "Into The Wild", was partially filmed and set in Alaska. The film, which is based on the novel of the same name, follows the adventures of Christopher McCandless, who died in a remote abandoned bus along the Stampede Trail west of Healy in 1992.
Many films and television shows set in Alaska are not filmed there; for example, "Northern Exposure", set in the fictional town of Cicely, Alaska, was filmed in Roslyn, Washington. The 2007 horror feature "30 Days of Night" is set in Barrow, Alaska, but was filmed in New Zealand.
Many reality television shows are filmed in Alaska. In 2011 the "Anchorage Daily News" found ten set in the state.
Agriculture
Agriculture is the science and art of cultivating plants and livestock. Agriculture was the key development in the rise of sedentary human civilization, whereby farming of domesticated species created food surpluses that enabled people to live in cities. The history of agriculture began thousands of years ago. After gathering wild grains beginning at least 105,000 years ago, nascent farmers began to plant them around 11,500 years ago. Pigs, sheep and cattle were domesticated over 10,000 years ago. Plants were independently cultivated in at least 11 regions of the world. Industrial agriculture based on large-scale monoculture in the twentieth century came to dominate agricultural output, though about 2 billion people still depended on subsistence agriculture into the twenty-first.
Modern agronomy, plant breeding, agrochemicals such as pesticides and fertilizers, and technological developments have sharply increased yields, while causing widespread ecological and environmental damage. Selective breeding and modern practices in animal husbandry have similarly increased the output of meat, but have raised concerns about animal welfare and environmental damage. Environmental issues include contributions to global warming, depletion of aquifers, deforestation, antibiotic resistance, and growth hormones in industrial meat production. Genetically modified organisms are widely used, although some are banned in certain countries.
The major agricultural products can be broadly grouped into foods, fibers, fuels and raw materials (such as rubber). Food classes include cereals (grains), vegetables, fruits, oils, meat, milk, fungi and eggs. Over one-third of the world's workers are employed in agriculture, second only to the service sector, although the number of agricultural workers in developed countries has decreased significantly over the centuries.
The word "agriculture" is a late Middle English adaptation of Latin "agricultūra", from "ager", "field", and "cultūra", "cultivation" or "growing". While agriculture usually refers to human activities, certain species of ant, termite and ambrosia beetle also cultivate crops. Agriculture is defined with varying scopes, in its broadest sense using natural resources to "produce commodities which maintain life, including food, fiber, forest products, horticultural crops, and their related services". Thus defined, it includes arable farming, horticulture, animal husbandry and forestry, but horticulture and forestry are in practice often excluded.
The development of agriculture enabled the human population to grow many times larger than could be sustained by hunting and gathering. Agriculture began independently in different parts of the globe, and included a diverse range of taxa, in at least 11 separate centres of origin. Wild grains were collected and eaten from at least 105,000 years ago. From around 11,500 years ago, the eight Neolithic founder crops (emmer and einkorn wheat, hulled barley, peas, lentils, bitter vetch, chickpeas and flax) were cultivated in the Levant. Rice was domesticated in China between 11,500 and 6,200 BC with the earliest known cultivation from 5,700 BC, followed by mung, soy and azuki beans. Sheep were domesticated in Mesopotamia between 13,000 and 11,000 years ago. Cattle were domesticated from the wild aurochs in the areas of modern Turkey and Pakistan some 10,500 years ago. Pig production emerged in Eurasia, including Europe, East Asia and Southwest Asia, where wild boar were first domesticated about 10,500 years ago. In the Andes of South America, the potato was domesticated between 10,000 and 7,000 years ago, along with beans, coca, llamas, alpacas, and guinea pigs. Sugarcane and some root vegetables were domesticated in New Guinea around 9,000 years ago. Sorghum was domesticated in the Sahel region of Africa by 7,000 years ago. Cotton was domesticated in Peru by 5,600 years ago, and was independently domesticated in Eurasia. In Mesoamerica, wild teosinte was bred into maize by 6,000 years ago.
Scholars have offered multiple hypotheses to explain the historical origins of agriculture. Studies of the transition from hunter-gatherer to agricultural societies indicate an initial period of intensification and increasing sedentism; examples are the Natufian culture in the Levant, and the Early Chinese Neolithic in China. Then, wild stands that had previously been harvested started to be planted, and gradually came to be domesticated.
In Eurasia, the Sumerians started to live in villages from about 8,000 BC, relying on the Tigris and Euphrates rivers and a canal system for irrigation. Ploughs appear in pictographs around 3,000 BC; seed-ploughs around 2,300 BC. Farmers grew wheat, barley, vegetables such as lentils and onions, and fruits including dates, grapes, and figs. Ancient Egyptian agriculture relied on the Nile River and its seasonal flooding. Farming started in the predynastic period at the end of the Paleolithic, after 10,000 BC. Staple food crops were grains such as wheat and barley, alongside industrial crops such as flax and papyrus. In India, wheat, barley and jujube were domesticated by 9,000 BC, soon followed by sheep and goats. Cattle, sheep and goats were domesticated in Mehrgarh culture by 8,000–6,000 BC. Cotton was cultivated by the 5th–4th millennium BC. Archeological evidence indicates an animal-drawn plough from 2,500 BC in the Indus Valley Civilisation.
In China, from the 5th century BC there was a nationwide granary system and widespread silk farming. Water-powered grain mills were in use by the 1st century BC, followed by irrigation. By the late 2nd century, heavy ploughs had been developed with iron ploughshares and mouldboards. These spread westwards across Eurasia. Asian rice was domesticated 8,200–13,500 years ago – depending on the molecular clock estimate that is used – on the Pearl River in southern China with a single genetic origin from the wild rice "Oryza rufipogon". In Greece and Rome, the major cereals were wheat, emmer, and barley, alongside vegetables including peas, beans, and olives. Sheep and goats were kept mainly for dairy products.
In the Americas, crops domesticated in Mesoamerica (apart from teosinte) include squash, beans, and cocoa. Cocoa was being domesticated by the Mayo Chinchipe of the upper Amazon around 3,000 BC.
The turkey was probably domesticated in Mexico or the American Southwest. The Aztecs developed irrigation systems, formed terraced hillsides, fertilized their soil, and developed chinampas or artificial islands. The Mayas used extensive canal and raised field systems to farm swampland from 400 BC. Coca was domesticated in the Andes, as were the peanut, tomato, tobacco, and pineapple. Cotton was domesticated in Peru by 3,600 BC. Animals including llamas, alpacas, and guinea pigs were domesticated there. In North America, the indigenous people of the East domesticated crops such as sunflower, tobacco, squash and "Chenopodium". Wild foods including wild rice and maple sugar were harvested. The domesticated strawberry is a hybrid of a Chilean and a North American species, developed by breeding in Europe and North America. The indigenous people of the Southwest and the Pacific Northwest practiced forest gardening and fire-stick farming. The natives controlled fire on a regional scale to create a low-intensity fire ecology that sustained a low-density agriculture in loose rotation; a sort of "wild" permaculture. A system of companion planting called the Three Sisters was developed on the Great Plains. The three crops were winter squash, maize, and climbing beans.
Indigenous Australians, long supposed to have been nomadic hunter-gatherers, practised systematic burning to enhance natural productivity in fire-stick farming. The Gunditjmara and other groups developed eel farming and fish trapping systems from some 5,000 years ago. There is evidence of 'intensification' across the whole continent over that period. In two regions of Australia, the central west coast and eastern central, early farmers cultivated yams, native millet, and bush onions, possibly in permanent settlements.
In the Middle Ages, both in the Islamic world and in Europe, agriculture transformed with improved techniques and the diffusion of crop plants, including the introduction of sugar, rice, cotton and fruit trees (such as the orange) to Europe by way of Al-Andalus. After 1492 the Columbian exchange brought New World crops such as maize, potatoes, tomatoes, sweet potatoes and manioc to Europe, and Old World crops such as wheat, barley, rice and turnips, and livestock (including horses, cattle, sheep and goats) to the Americas.
Irrigation, crop rotation, and fertilizers advanced from the 17th century with the British Agricultural Revolution, allowing global population to rise significantly. Since 1900 agriculture in developed nations, and to a lesser extent in the developing world, has seen large rises in productivity as mechanization replaced human labor, assisted by synthetic fertilizers, pesticides, and selective breeding. The Haber-Bosch method allowed the synthesis of ammonium nitrate fertilizer on an industrial scale, greatly increasing crop yields and sustaining a further increase in global population. Modern agriculture has raised or encountered ecological, political, and economic issues including water pollution, biofuels, genetically modified organisms, tariffs and farm subsidies, leading to alternative approaches such as the organic movement.
Pastoralism involves managing domesticated animals. In nomadic pastoralism, herds of livestock are moved from place to place in search of pasture, fodder, and water. This type of farming is practised in arid and semi-arid regions of the Sahara, Central Asia and some parts of India.
In shifting cultivation, a small area of forest is cleared by cutting and burning the trees. The cleared land is used for growing crops for a few years until the soil becomes too infertile, and the area is abandoned. Another patch of land is selected and the process is repeated. This type of farming is practiced mainly in areas with abundant rainfall where the forest regenerates quickly. This practice is used in Northeast India, Southeast Asia, and the Amazon Basin.
Subsistence farming is practiced to satisfy family or local needs alone, with little left over for transport elsewhere. It is intensively practiced in Monsoon Asia and South-East Asia. An estimated 2.5 billion subsistence farmers worked in 2018, cultivating about 60% of the earth's arable land.
Intensive farming is cultivation to maximise productivity, with a low fallow ratio and a high use of inputs (water, fertilizer, pesticide and automation). It is practiced mainly in developed countries.
From the twentieth century, intensive agriculture increased productivity. It substituted synthetic fertilizers and pesticides for labor, but caused increased water pollution, and often involved farm subsidies. In recent years there has been a backlash against the environmental effects of conventional agriculture, resulting in the organic, regenerative, and sustainable agriculture movements. One of the major forces behind this movement has been the European Union, which first certified organic food in 1991 and began reform of its Common Agricultural Policy (CAP) in 2005 to phase out commodity-linked farm subsidies, also known as decoupling. The growth of organic farming has renewed research in alternative technologies such as integrated pest management, selective breeding, and controlled-environment agriculture. Recent mainstream technological developments include genetically modified food. Demand for non-food biofuel crops, development of former farm lands, rising transportation costs, climate change, growing consumer demand in China and India, and population growth, are threatening food security in many parts of the world. The International Fund for Agricultural Development posits that an increase in smallholder agriculture may be part of the solution to concerns about food prices and overall food security, given the favorable experience of Vietnam. Soil degradation and diseases such as stem rust are major concerns globally; approximately 40% of the world's agricultural land is seriously degraded. By 2015, the agricultural output of China was the largest in the world, followed by the European Union, India and the United States. Economists measure the total factor productivity of agriculture and by this measure agriculture in the United States is roughly 1.7 times more productive than it was in 1948.
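The productivity figure above implies a compound growth rate that can be checked with a line of arithmetic. The sketch below assumes the 1.7× ratio is measured from 1948 to 2015 (the end year is an assumption, taken from the nearby sentence on 2015 output rankings):

```python
# Implied average annual growth from the article's figure that US farm
# total factor productivity is roughly 1.7x its 1948 level.
# Assumption (not stated in the text): the comparison ends in 2015.
ratio = 1.7
years = 2015 - 1948  # 67 years

annual_growth = ratio ** (1 / years) - 1  # compound annual growth rate
print(f"{annual_growth:.2%}")  # roughly 0.8% per year
```

A steady growth rate under one percent per year is enough to compound into the 70% cumulative gain the text reports.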
Following the three-sector theory, the number of people employed in agriculture and other primary activities (such as fishing) can be more than 80% in the least developed countries, and less than 2% in the most highly developed countries. Since the Industrial Revolution, many countries have made the transition to developed economies, and the proportion of people working in agriculture has steadily fallen. During the 16th century in Europe, for example, between 55 and 75% of the population was engaged in agriculture; by the 19th century, this had dropped to between 35 and 65%. In the same countries today, the figure is less than 10%.
At the start of the 21st century, some one billion people, or over 1/3 of the available work force, were employed in agriculture. It constitutes approximately 70% of the global employment of children, and in many countries employs the largest percentage of women of any industry. The service sector overtook the agricultural sector as the largest global employer in 2007.
Agriculture, specifically farming, remains a hazardous industry, and farmers worldwide remain at high risk of work-related injuries, lung disease, noise-induced hearing loss, skin diseases, as well as certain cancers related to chemical use and prolonged sun exposure. On industrialized farms, injuries frequently involve the use of agricultural machinery, and a common cause of fatal agricultural injuries in developed countries is tractor rollovers. Pesticides and other chemicals used in farming can also be hazardous to worker health, and workers exposed to pesticides may experience illness or have children with birth defects. As an industry in which families commonly share in work and live on the farm itself, entire families can be at risk for injuries, illness, and death. Ages 0–6 may be an especially vulnerable population in agriculture; common causes of fatal injuries among young farm workers include drowning, machinery and motor accidents, including with all-terrain vehicles.
The International Labour Organization considers agriculture "one of the most hazardous of all economic sectors". It estimates that the annual work-related death toll among agricultural employees is at least 170,000, twice the average rate of other jobs. In addition, incidences of death, injury and illness related to agricultural activities often go unreported. The organization has developed the Safety and Health in Agriculture Convention, 2001, which covers the range of risks in the agriculture occupation, the prevention of these risks and the role that individuals and organizations engaged in agriculture should play.
In the United States, agriculture has been identified by the National Institute for Occupational Safety and Health as a priority industry sector in the National Occupational Research Agenda to identify and provide intervention strategies for occupational health and safety issues.
In the European Union, the European Agency for Safety and Health at Work has issued guidelines on implementing health and safety directives in agriculture, livestock farming, horticulture, and forestry. The Agricultural Safety and Health Council of America (ASHCA) also holds a yearly summit to discuss safety.
Overall production varies by country as listed.
Cropping systems vary among farms depending on the available resources and constraints; geography and climate of the farm; government policy; economic, social and political pressures; and the philosophy and culture of the farmer.
Shifting cultivation (or slash and burn) is a system in which forests are burnt, releasing nutrients to support cultivation of annual and then perennial crops for a period of several years. Then the plot is left fallow to regrow forest, and the farmer moves to a new plot, returning after many more years (10–20). This fallow period is shortened if population density grows, requiring the input of nutrients (fertilizer or manure) and some manual pest control. Annual cultivation is the next phase of intensity in which there is no fallow period. This requires even greater nutrient and pest control inputs.
Further industrialization led to the use of monocultures, when one cultivar is planted on a large acreage. Because of the low biodiversity, nutrient use is uniform and pests tend to build up, necessitating the greater use of pesticides and fertilizers. Multiple cropping, in which several crops are grown sequentially in one year, and intercropping, when several crops are grown at the same time, are other kinds of annual cropping systems known as polycultures.
In subtropical and arid environments, the timing and extent of agriculture may be limited by rainfall, either not allowing multiple annual crops in a year, or requiring irrigation. In all of these environments perennial crops are grown (coffee, chocolate) and systems are practiced such as agroforestry. In temperate environments, where ecosystems were predominantly grassland or prairie, highly productive annual farming is the dominant agricultural system.
Important categories of food crops include cereals, legumes, forage, fruits and vegetables. Natural fibers include cotton, wool, hemp, silk and flax. Specific crops are cultivated in distinct growing regions throughout the world. Production is listed in millions of metric tons, based on FAO estimates.
Animal husbandry is the breeding and raising of animals for meat, milk, eggs, or wool, and for work and transport. Working animals, including horses, mules, oxen, water buffalo, camels, llamas, alpacas, donkeys, and dogs, have for centuries been used to help cultivate fields, harvest crops, wrangle other animals, and transport farm products to buyers.
Livestock production systems can be defined based on feed source, as grassland-based, mixed, and landless. Some 30% of Earth's ice- and water-free area is used for producing livestock, with the sector employing approximately 1.3 billion people. Between the 1960s and the 2000s, there was a significant increase in livestock production, both by numbers and by carcass weight, especially among beef, pigs and chickens, the latter of which had production increased by almost a factor of 10. Non-meat animals, such as milk cows and egg-producing chickens, also showed significant production increases. Global cattle, sheep and goat populations are expected to continue to increase sharply through 2050. Aquaculture or fish farming, the production of fish for human consumption in confined operations, is one of the fastest growing sectors of food production, growing at an average of 9% a year between 1975 and 2007.
During the second half of the 20th century, producers using selective breeding focused on creating livestock breeds and crossbreeds that increased production, while mostly disregarding the need to preserve genetic diversity. This trend has led to a significant decrease in genetic diversity and resources among livestock breeds, leading to a corresponding decrease in disease resistance and local adaptations previously found among traditional breeds.
Grassland-based livestock production relies upon plant material such as shrubland, rangeland, and pastures for feeding ruminant animals. Outside nutrient inputs may be used; however, manure is returned directly to the grassland as a major nutrient source. This system is particularly important in areas where crop production is not feasible because of climate or soil, representing 30–40 million pastoralists. Mixed production systems use grassland, fodder crops and grain feed crops as feed for ruminant and monogastric (one stomach; mainly chickens and pigs) livestock. Manure is typically recycled in mixed systems as a fertilizer for crops.
Landless systems rely upon feed from outside the farm, representing the de-linking of crop and livestock production found more prevalently in Organisation for Economic Co-operation and Development member countries. Synthetic fertilizers are more heavily relied upon for crop production and manure utilization becomes a challenge as well as a source for pollution. Industrialized countries use these operations to produce much of the global supplies of poultry and pork. Scientists estimate that 75% of the growth in livestock production between 2003 and 2030 will be in confined animal feeding operations, sometimes called factory farming. Much of this growth is happening in developing countries in Asia, with much smaller amounts of growth in Africa. Some of the practices used in commercial livestock production, including the usage of growth hormones, are controversial.
Tillage is the practice of breaking up the soil with tools such as the plow or harrow to prepare for planting, for nutrient incorporation, or for pest control. Tillage varies in intensity from conventional to no-till. It may improve productivity by warming the soil, incorporating fertilizer and controlling weeds, but also renders soil more prone to erosion, triggers the decomposition of organic matter releasing CO2, and reduces the abundance and diversity of soil organisms.
Pest control includes the management of weeds, insects, mites, and diseases. Chemical (pesticides), biological (biocontrol), mechanical (tillage), and cultural practices are used. Cultural practices include crop rotation, culling, cover crops, intercropping, composting, avoidance, and resistance. Integrated pest management attempts to use all of these methods to keep pest populations below the number which would cause economic loss, and recommends pesticides as a last resort.
Nutrient management includes both the source of nutrient inputs for crop and livestock production, and the method of utilization of manure produced by livestock. Nutrient inputs can be chemical inorganic fertilizers, manure, green manure, compost and minerals. Crop nutrient use may also be managed using cultural techniques such as crop rotation or a fallow period. Manure is used either by holding livestock where the feed crop is growing, such as in managed intensive rotational grazing, or by spreading either dry or liquid formulations of manure on cropland or pastures.
Water management is needed where rainfall is insufficient or variable, which occurs to some degree in most regions of the world. Some farmers use irrigation to supplement rainfall. In other areas such as the Great Plains in the U.S. and Canada, farmers use a fallow year to conserve soil moisture to use for growing a crop in the following year. Agriculture represents 70% of freshwater use worldwide.
According to a report by the International Food Policy Research Institute, agricultural technologies will have the greatest impact on food production if adopted in combination with each other; using a model that assessed how eleven technologies could impact agricultural productivity, food security and trade by 2050, the International Food Policy Research Institute found that the number of people at risk from hunger could be reduced by as much as 40% and food prices could be reduced by almost half.
Payment for ecosystem services is a method of providing additional incentives to encourage farmers to conserve some aspects of the environment. Measures might include paying for reforestation upstream of a city, to improve the supply of fresh water.
Crop alteration has been practiced by humankind for thousands of years, since the beginning of civilization. Altering crops through breeding practices changes the genetic make-up of a plant to develop crops with more beneficial characteristics for humans, for example, larger fruits or seeds, drought-tolerance, or resistance to pests. Significant advances in plant breeding ensued after the work of geneticist Gregor Mendel. His work on dominant and recessive alleles, although initially largely ignored for almost 50 years, gave plant breeders a better understanding of genetics and breeding techniques. Crop breeding includes techniques such as plant selection with desirable traits, self-pollination and cross-pollination, and molecular techniques that genetically modify the organism.
Domestication of plants has, over the centuries, increased yield, improved disease resistance and drought tolerance, eased harvest and improved the taste and nutritional value of crop plants. Careful selection and breeding have had enormous effects on the characteristics of crop plants. Plant selection and breeding in the 1920s and 1930s improved pasture (grasses and clover) in New Zealand. Extensive X-ray and ultraviolet induced mutagenesis efforts (i.e. primitive genetic engineering) during the 1950s produced the modern commercial varieties of grains such as wheat, corn (maize) and barley.
The Green Revolution popularized the use of conventional hybridization to sharply increase yield by creating "high-yielding varieties". For example, average yields of corn (maize) in the US have increased from around 2.5 tons per hectare (t/ha) (40 bushels per acre) in 1900 to about 9.4 t/ha (150 bushels per acre) in 2001. Similarly, worldwide average wheat yields have increased from less than 1 t/ha in 1900 to more than 2.5 t/ha in 1990. South American average wheat yields are around 2 t/ha, African under 1 t/ha, and Egypt and Arabia up to 3.5 to 4 t/ha with irrigation. In contrast, the average wheat yield in countries such as France is over 8 t/ha. Variations in yields are due mainly to variation in climate, genetics, and the level of intensive farming techniques (use of fertilizers, chemical pest control, growth control to avoid lodging).
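The paragraph above mixes two yield units, bushels per acre and metric tons per hectare. The conversion is simple arithmetic; the sketch below assumes the standard US weight of 56 lb per bushel of shelled corn (a convention, not stated in the text):

```python
# Convert US corn yields from bushels per acre to metric tons per hectare.
# Assumptions: 1 US bushel of shelled corn = 56 lb (standard trade weight);
# 1 lb = 0.45359237 kg; 1 acre = 0.40468564 ha.
BUSHEL_CORN_LB = 56
LB_TO_KG = 0.45359237
ACRE_TO_HA = 0.40468564

def corn_bu_per_acre_to_t_per_ha(bu_per_acre: float) -> float:
    """Return the equivalent yield in metric tons per hectare."""
    kg_per_acre = bu_per_acre * BUSHEL_CORN_LB * LB_TO_KG
    return kg_per_acre / ACRE_TO_HA / 1000  # kg -> t, per acre -> per ha

print(round(corn_bu_per_acre_to_t_per_ha(40), 1))   # 1900 US yield: 2.5
print(round(corn_bu_per_acre_to_t_per_ha(150), 1))  # 2001 US yield: 9.4
```

Both results reproduce the paired figures quoted in the text, confirming the two unit systems describe the same yields.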
Genetically modified organisms (GMO) are organisms whose genetic material has been altered by genetic engineering techniques generally known as recombinant DNA technology. Genetic engineering has expanded the genes available to breeders to utilize in creating desired germlines for new crops. Increased durability, nutritional content, insect and virus resistance and herbicide tolerance are a few of the attributes bred into crops through genetic engineering. For some, GMO crops cause food safety and food labeling concerns. Numerous countries have placed restrictions on the production, import or use of GMO foods and crops. Currently a global treaty, the Biosafety Protocol, regulates the trade of GMOs. There is ongoing discussion regarding the labeling of foods made from GMOs, and while the EU currently requires all GMO foods to be labeled, the US does not.
Herbicide-resistant seed has a gene implanted into its genome that allows the plants to tolerate exposure to herbicides, including glyphosate. These seeds allow the farmer to grow a crop that can be sprayed with herbicides to control weeds without harming the resistant crop. Herbicide-tolerant crops are used by farmers worldwide. With the increasing use of herbicide-tolerant crops comes an increase in the use of glyphosate-based herbicide sprays. In some areas glyphosate-resistant weeds have developed, causing farmers to switch to other herbicides. Some studies also link widespread glyphosate usage to iron deficiencies in some crops, which is both a crop production and a nutritional quality concern, with potential economic and health implications.
Other GMO crops used by growers include insect-resistant crops, which have a gene from the soil bacterium "Bacillus thuringiensis" (Bt), which produces a toxin specific to insects. These crops resist damage by insects. Some believe that similar or better pest-resistance traits can be acquired through traditional breeding practices, and resistance to various pests can be gained through hybridization or cross-pollination with wild species. In some cases, wild species are the primary source of resistance traits; some tomato cultivars that have gained resistance to at least 19 diseases did so through crossing with wild populations of tomatoes.
Agriculture imposes multiple external costs upon society through effects such as pesticide damage to nature (especially herbicides and insecticides), nutrient runoff, excessive water usage, and loss of natural environment. A 2000 assessment of agriculture in the UK determined total external costs for 1996 of £2,343 million, or £208 per hectare. A 2005 analysis of these costs in the US concluded that cropland imposes approximately $5 to $16 billion ($30 to $96 per hectare), while livestock production imposes $714 million. Both studies, which focused solely on the fiscal impacts, concluded that more should be done to internalize external costs. Neither included subsidies in their analysis, but they noted that subsidies also influence the cost of agriculture to society.
Agriculture seeks to increase yield and to reduce costs. Yield increases with inputs such as fertilisers and removal of pathogens, predators, and competitors (such as weeds). Costs decrease with increasing scale of farm units, such as making fields larger; this means removing hedges, ditches and other areas of habitat. Pesticides kill insects, plants and fungi. These and other measures have cut biodiversity to very low levels on intensively farmed land.
In 2010, the International Resource Panel of the United Nations Environment Programme assessed the environmental impacts of consumption and production. It found that agriculture and food consumption are two of the most important drivers of environmental pressures, particularly habitat change, climate change, water use and toxic emissions. Agriculture is the main source of toxins released into the environment, including insecticides, especially those used on cotton. The 2011 UNEP Green Economy report states that "[a]gricultural operations, excluding land use changes, produce approximately 13 per cent of anthropogenic global GHG emissions. This includes GHGs emitted by the use of inorganic fertilizers, agro-chemical pesticides and herbicides (GHG emissions resulting from production of these inputs are included in industrial emissions); and fossil fuel-energy inputs. "On average we find that the total amount of fresh residues from agricultural and forestry production for second-generation biofuel production amounts to 3.8 billion tonnes per year between 2011 and 2050 (with an average annual growth rate of 11 per cent throughout the period analysed, accounting for higher growth during early years, 48 per cent for 2011–2020 and an average 2 per cent annual expansion after 2020)."
A senior UN official, Henning Steinfeld, said that "Livestock are one of the most significant contributors to today's most serious environmental problems". Livestock production occupies 70% of all land used for agriculture, or 30% of the land surface of the planet. It is one of the largest sources of greenhouse gases, responsible for 18% of the world's greenhouse gas emissions as measured in CO2 equivalents. By comparison, all transportation emits 13.5% of the CO2. It produces 65% of human-related nitrous oxide (which has 296 times the global warming potential of CO2) and 37% of all human-induced methane (which is 23 times as warming as CO2). It also generates 64% of the ammonia emission. Livestock expansion is cited as a key factor driving deforestation; in the Amazon basin 70% of previously forested area is now occupied by pastures and the remainder used for feedcrops. Through deforestation and land degradation, livestock is also driving reductions in biodiversity. Furthermore, the UNEP states that "methane emissions from global livestock are projected to increase by 60 per cent by 2030 under current practices and consumption patterns."
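"CO2 equivalents" in the paragraph above means weighting each gas by its global warming potential (GWP) before summing. A minimal sketch using the GWP factors quoted in the text (methane 23, nitrous oxide 296); the example herd quantities are illustrative, not real data:

```python
# CO2-equivalent accounting: weight each gas by its global warming
# potential (GWP) and sum. Factors are those quoted in the article;
# the example emission quantities are hypothetical.
GWP = {"co2": 1, "ch4": 23, "n2o": 296}

def co2_equivalent(emissions_t: dict) -> float:
    """Sum emissions (tonnes of each gas) weighted by GWP, in tonnes CO2e."""
    return sum(GWP[gas] * tonnes for gas, tonnes in emissions_t.items())

# e.g. a (hypothetical) herd emitting 100 t CH4 and 1 t N2O per year:
print(co2_equivalent({"ch4": 100, "n2o": 1}))  # 2300 + 296 = 2596 t CO2e
```

This weighting is why livestock's share of warming (18% in CO2e) exceeds its share of raw CO2 emissions: most of its impact comes from methane and nitrous oxide.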
Land transformation, the use of land to yield goods and services, is the most substantial way humans alter the Earth's ecosystems, and is considered the driving force in the loss of biodiversity. Estimates of the amount of land transformed by humans vary from 39 to 50%. Land degradation, the long-term decline in ecosystem function and productivity, is estimated to be occurring on 24% of land worldwide, with cropland overrepresented. The UN-FAO report cites land management as the driving factor behind degradation and reports that 1.5 billion people rely upon the degrading land. Degradation can be deforestation, desertification, soil erosion, mineral depletion, or chemical degradation (acidification and salinization).
Agriculture contributes to the rise of zoonotic diseases, such as coronavirus disease 2019, by degrading natural buffers between humans and animals, reducing biodiversity, and creating large groups of genetically similar animals.
Eutrophication, excessive nutrients in aquatic ecosystems resulting in algal bloom and anoxia, leads to fish kills, loss of biodiversity, and renders water unfit for drinking and other industrial uses. Excessive fertilization and manure application to cropland, as well as high livestock stocking densities cause nutrient (mainly nitrogen and phosphorus) runoff and leaching from agricultural land. These nutrients are major nonpoint pollutants contributing to eutrophication of aquatic ecosystems and pollution of groundwater, with harmful effects on human populations. Fertilisers also reduce terrestrial biodiversity by increasing competition for light, favouring those species that are able to benefit from the added nutrients.
Agriculture accounts for 70 percent of withdrawals of freshwater resources. Agriculture is a major draw on water from aquifers, and currently draws from those underground water sources at an unsustainable rate. It has long been known that aquifers in areas as diverse as northern China, the Upper Ganges and the western US are being depleted, and new research extends these problems to aquifers in Iran, Mexico and Saudi Arabia. Increasing pressure is being placed on water resources by industry and urban areas, meaning that water scarcity is increasing and agriculture is facing the challenge of producing more food for the world's growing population with reduced water resources. Agricultural water usage can also cause major environmental problems, including the destruction of natural wetlands, the spread of water-borne diseases, and land degradation through salinization and waterlogging, when irrigation is performed incorrectly.
Pesticide use has increased since 1950 to 2.5 million short tons annually worldwide, yet crop loss from pests has remained relatively constant. The World Health Organization estimated in 1992 that three million pesticide poisonings occur annually, causing 220,000 deaths. Pesticides select for pesticide resistance in the pest population, leading to a condition termed the "pesticide treadmill" in which pest resistance warrants the development of a new pesticide.
An alternative argument is that the way to "save the environment" and prevent famine is by using pesticides and intensive high yield farming, a view exemplified by a quote heading the Center for Global Food Issues website: 'Growing more per acre leaves more land for nature'. However, critics argue that a trade-off between the environment and a need for food is not inevitable, and that pesticides simply replace good agronomic practices such as crop rotation. The Push–pull agricultural pest management technique involves intercropping, using plant aromas to repel pests from crops (push) and to lure them to a place from which they can then be removed (pull).
Global warming and agriculture are interrelated on a global scale. Global warming affects agriculture through changes in average temperatures, rainfall, and weather extremes (like storms and heat waves); changes in pests and diseases; changes in atmospheric carbon dioxide and ground-level ozone concentrations; changes in the nutritional quality of some foods; and changes in sea level. Global warming is already affecting agriculture, with effects unevenly distributed across the world. Future climate change will probably negatively affect crop production in low latitude countries, while effects in northern latitudes may be positive or negative. Global warming will probably increase the risk of food insecurity for some vulnerable groups, such as the poor.
Animal husbandry is also responsible for greenhouse gas emissions, including a large share of the world's methane, as well as future land infertility and the displacement of wildlife. Agriculture contributes to climate change by anthropogenic emissions of greenhouse gases, and by the conversion of non-agricultural land such as forest for agricultural use. Agriculture, forestry and land-use change contributed around 20 to 25% to global annual emissions in 2010. A range of policies can reduce the risk of negative climate change impacts on agriculture, and greenhouse gas emissions from the agriculture sector.
Current farming methods have resulted in over-stretched water resources, high levels of erosion and reduced soil fertility. There is not enough water to continue farming using current practices; therefore how critical water, land, and ecosystem resources are used to boost crop yields must be reconsidered. A solution would be to give value to ecosystems, recognizing environmental and livelihood tradeoffs, and balancing the rights of a variety of users and interests. Inequities that result when such measures are adopted would need to be addressed, such as the reallocation of water from poor to rich, the clearing of land to make way for more productive farmland, or the preservation of a wetland system that limits fishing rights.
Technological advancements help provide farmers with tools and resources to make farming more sustainable. Technology permits innovations like conservation tillage, a farming process which helps prevent land loss to erosion, reduces water pollution, and enhances carbon sequestration. Other potential practices include conservation agriculture, agroforestry, improved grazing, avoided grassland conversion, and biochar. Current mono-crop farming practices in the United States preclude widespread adoption of sustainable practices, such as 2-3 crop rotations that incorporate grass or hay with annual crops, unless negative emission goals such as soil carbon sequestration become policy.
According to a report by the International Food Policy Research Institute (IFPRI), agricultural technologies will have the greatest impact on food production if adopted in combination with each other; using a model that assessed how eleven technologies could impact agricultural productivity, food security and trade by 2050, IFPRI found that the number of people at risk from hunger could be reduced by as much as 40% and food prices could be reduced by almost half. The caloric demand of Earth's projected population, with current climate change predictions, can be satisfied by additional improvement of agricultural methods, expansion of agricultural areas, and a sustainability-oriented consumer mindset.
Since the 1940s, agricultural productivity has increased dramatically, due largely to the increased use of energy-intensive mechanization, fertilizers and pesticides. The vast majority of this energy input comes from fossil fuel sources. Between the 1960s and the 1980s, the Green Revolution transformed agriculture around the globe, with world grain production increasing significantly (between 70% and 390% for wheat and 60% to 150% for rice, depending on geographic area) as world population doubled. Heavy reliance on petrochemicals has raised concerns that oil shortages could increase costs and reduce agricultural output.
Industrialized agriculture depends on fossil fuels in two fundamental ways: direct consumption on the farm and manufacture of inputs used on the farm. Direct consumption includes the use of lubricants and fuels to operate farm vehicles and machinery.
Indirect consumption includes the manufacture of fertilizers, pesticides, and farm machinery. In particular, the production of nitrogen fertilizer can account for over half of agricultural energy usage. Together, direct and indirect consumption by US farms accounts for about 2% of the nation's energy use. Direct and indirect energy consumption by U.S. farms peaked in 1979, and has since gradually declined. Food systems encompass not just agriculture but off-farm processing, packaging, transporting, marketing, consumption, and disposal of food and food-related items. Agriculture accounts for less than one-fifth of food system energy use in the US.
Agricultural economics is economics as it relates to the "production, distribution and consumption of [agricultural] goods and services". Combining agricultural production with general theories of marketing and business as a discipline of study began in the late 1800s, and grew significantly through the 20th century. Although the study of agricultural economics is relatively recent, major trends in agriculture have significantly affected national and international economies throughout history, ranging from tenant farmers and sharecropping in the post-American Civil War Southern United States to the European feudal system of manorialism. In the United States, and elsewhere, food costs attributed to food processing, distribution, and agricultural marketing, sometimes referred to as the value chain, have risen while the costs attributed to farming have declined. This is related to the greater efficiency of farming, combined with the increased level of value addition (e.g. more highly processed products) provided by the supply chain. Market concentration has increased in the sector as well, and although the total effect of the increased market concentration is likely increased efficiency, the changes redistribute economic surplus from producers (farmers) and consumers, and may have negative implications for rural communities.
National government policies can significantly change the economic marketplace for agricultural products, in the form of taxation, subsidies, tariffs and other measures. Since at least the 1960s, a combination of trade restrictions, exchange rate policies and subsidies have affected farmers in both the developing and the developed world. In the 1980s, non-subsidized farmers in developing countries experienced adverse effects from national policies that created artificially low global prices for farm products. Between the mid-1980s and the early 2000s, several international agreements limited agricultural tariffs, subsidies and other trade restrictions.
However, there was still a significant amount of policy-driven distortion in global agricultural product prices. The three agricultural products with the greatest amount of trade distortion were sugar, milk and rice, mainly due to taxation. Among the oilseeds, sesame had the greatest amount of taxation, but overall, feed grains and oilseeds had much lower levels of taxation than livestock products. Since the 1980s, policy-driven distortions have seen a greater decrease among livestock products than crops during the worldwide reforms in agricultural policy. Despite this progress, certain crops, such as cotton, still see subsidies in developed countries artificially deflating global prices, causing hardship in developing countries with non-subsidized farmers. Unprocessed commodities such as corn, soybeans, and cattle are generally graded to indicate quality, affecting the price the producer receives. Commodities are generally reported by production quantities, such as volume, number or weight.
Agricultural science is a broad multidisciplinary field of biology that encompasses the parts of exact, natural, economic and social sciences used in the practice and understanding of agriculture. It covers topics such as agronomy, plant breeding and genetics, plant pathology, crop modelling, soil science, entomology, production techniques and improvement, study of pests and their management, and study of adverse environmental effects such as soil degradation, waste management, and bioremediation.
The scientific study of agriculture began in the 18th century, when Johann Friedrich Mayer conducted experiments on the use of gypsum (hydrated calcium sulphate) as a fertilizer. Research became more systematic when in 1843, John Lawes and Henry Gilbert began a set of long-term agronomy field experiments at Rothamsted Research Station in England; some of them, such as the Park Grass Experiment, are still running. In America, the Hatch Act of 1887 provided funding for what it was the first to call "agricultural science", driven by farmers' interest in fertilizers. In agricultural entomology, the USDA began to research biological control in 1881; it instituted its first large program in 1905, searching Europe and Japan for natural enemies of the gypsy moth and brown-tail moth, establishing parasitoids (such as solitary wasps) and predators of both pests in the USA.
Agricultural policy is the set of government decisions and actions relating to domestic agriculture and imports of foreign agricultural products. Governments usually implement agricultural policies with the goal of achieving a specific outcome in the domestic agricultural product markets. Some overarching themes include risk management and adjustment (including policies related to climate change, food safety and natural disasters), economic stability (including policies related to taxes), natural resources and environmental sustainability (especially water policy), research and development, and market access for domestic commodities (including relations with global organizations and agreements with other countries). Agricultural policy can also touch on food quality, ensuring that the food supply is of a consistent and known quality, food security, ensuring that the food supply meets the population's needs, and conservation. Policy programs can range from financial programs, such as subsidies, to encouraging producers to enroll in voluntary quality assurance programs.
There are many influences on the creation of agricultural policy, including consumers, agribusiness, trade lobbies and other groups. Agribusiness interests hold a large amount of influence over policy making, in the form of lobbying and campaign contributions. Political action groups, including those interested in environmental issues and labor unions, also provide influence, as do lobbying organizations representing individual agricultural commodities. The Food and Agriculture Organization of the United Nations (FAO) leads international efforts to defeat hunger and provides a forum for the negotiation of global agricultural regulations and agreements. Dr. Samuel Jutzi, director of FAO's animal production and health division, states that lobbying by large corporations has stopped reforms that would improve human health and the environment. For example, proposals in 2010 for a voluntary code of conduct for the livestock industry that would have provided incentives for improving standards for health, and environmental regulations, such as the number of animals an area of land can support without long-term damage, were successfully defeated due to large food company pressure.
Aldous Huxley
Aldous Leonard Huxley (26 July 1894 – 22 November 1963) was an English writer and philosopher. He wrote nearly fifty books—both novels and non-fiction works—as well as wide-ranging essays, narratives, and poems.
Born into the prominent Huxley family, he graduated from Balliol College, Oxford with an undergraduate degree in English literature. Early in his career, he published short stories and poetry and edited the literary magazine "Oxford Poetry", before going on to publish travel writing, satire, and screenplays. He spent the latter part of his life in the United States, living in Los Angeles from 1937 until his death. By the end of his life, Huxley was widely acknowledged as one of the foremost intellectuals of his time. He was nominated for the Nobel Prize in Literature seven times and was elected Companion of Literature by the Royal Society of Literature in 1962.
Huxley was a pacifist. He grew interested in philosophical mysticism and universalism, addressing these subjects with works such as "The Perennial Philosophy" (1945)—which illustrates commonalities between Western and Eastern mysticism—and "The Doors of Perception" (1954)—which interprets his own psychedelic experience with mescaline. In his most famous novel "Brave New World" (1932) and his final novel "Island" (1962), he presented his vision of dystopia and utopia, respectively.
Huxley was born in Godalming, Surrey, England, in 1894. He was the third son of the writer and schoolmaster Leonard Huxley, who edited "Cornhill Magazine", and his first wife, Julia Arnold, who founded Prior's Field School. Julia was the niece of poet and critic Matthew Arnold and the sister of Mrs. Humphry Ward. Aldous was the grandson of Thomas Henry Huxley, the zoologist, agnostic, and controversialist ("Darwin's Bulldog"). His brother Julian Huxley and half-brother Andrew Huxley also became outstanding biologists. Aldous had another brother, Noel Trevenen Huxley (1889–1914), who took his own life after a period of clinical depression.
As a child, Huxley's nickname was "Ogie", short for "Ogre". He was described by his brother, Julian, as someone who frequently "[contemplated] the strangeness of things". According to his cousin and contemporary, Gervas Huxley, he had an early interest in drawing.
Huxley's education began in his father's well-equipped botanical laboratory, after which he enrolled at Hillside School near Godalming. He was taught there by his own mother for several years until she became terminally ill. After Hillside he went on to Eton College. His mother died in 1908, when he was 14 (his father later remarried). He contracted the eye disease Keratitis punctata in 1911; this "left [him] practically blind for two to three years." This "ended his early dreams of becoming a doctor." In October 1913, Huxley entered Balliol College, Oxford, where he studied English literature. He volunteered for the British Army in January 1916, for the Great War; however, he was rejected on health grounds, being half-blind in one eye. His eyesight later partly recovered. He edited "Oxford Poetry" in 1916, and in June of that year graduated BA with first class honours. His brother Julian wrote:
Following his years at Balliol, Huxley, being financially indebted to his father, decided to find employment. He taught French for a year at Eton College, where Eric Blair (who was to take the pen name George Orwell) and Steven Runciman were among his pupils. He was mainly remembered as being an incompetent schoolmaster unable to keep order in class. Nevertheless, Blair and others spoke highly of his excellent command of language.
Significantly, Huxley also worked for a time during the 1920s at Brunner and Mond, an advanced chemical plant in Billingham in County Durham, northeast England. According to the introduction to the latest edition of his science fiction novel "Brave New World" (1932), the experience he had there of "an ordered universe in a world of planless incoherence" was an important source for the novel.
Huxley completed his first (unpublished) novel at the age of 17 and began writing seriously in his early twenties, establishing himself as a successful writer and social satirist. His first published novels were social satires, "Crome Yellow" (1921), "Antic Hay" (1923), "Those Barren Leaves" (1925), and "Point Counter Point" (1928). "Brave New World" was his fifth novel and first dystopian work. In the 1920s he was also a contributor to "Vanity Fair" and British "Vogue" magazines.
During the First World War, Huxley spent much of his time at Garsington Manor near Oxford, home of Lady Ottoline Morrell, working as a farm labourer. There he met several Bloomsbury Group figures, including Bertrand Russell, Alfred North Whitehead, and Clive Bell. Later, in "Crome Yellow" (1921) he caricatured the Garsington lifestyle. Jobs were very scarce, but in 1919 John Middleton Murry was reorganising the "Athenaeum" and invited Huxley to join the staff. He accepted immediately, and quickly married the Belgian refugee Maria Nys, also at Garsington. They lived with their young son in Italy part of the time during the 1920s, where Huxley would visit his friend D. H. Lawrence. Following Lawrence's death in 1930, Huxley edited Lawrence's letters (1932).
Works of this period included important novels on the dehumanising aspects of scientific progress, most famously "Brave New World", and on pacifist themes (for example, "Eyeless in Gaza"). In "Brave New World", set in a dystopian London, Huxley portrays a society operating on the principles of mass production and Pavlovian conditioning. Huxley was strongly influenced by F. Matthias Alexander, and included him as a character in "Eyeless in Gaza".
Beginning in this period, Huxley began to write and edit non-fiction works on pacifist issues, including "Ends and Means", "An Encyclopedia of Pacifism", and "Pacifism and Philosophy", and was an active member of the Peace Pledge Union.
In 1937 Huxley moved to Hollywood with his wife Maria, son Matthew Huxley, and friend Gerald Heard. He lived in the U.S., mainly in southern California, until his death, and also for a time in Taos, New Mexico, where he wrote "Ends and Means" (published in 1937). The book contains tracts on war, religion, nationalism and ethics.
Heard introduced Huxley to Vedanta (Upanishad-centered philosophy), meditation, and vegetarianism through the principle of ahimsa. In 1938, Huxley befriended Jiddu Krishnamurti, whose teachings he greatly admired. Huxley and Krishnamurti entered into an enduring exchange (sometimes edging on debate) over many years, with Krishnamurti representing the more rarefied, detached, ivory-tower perspective and Huxley, with his pragmatic concerns, the more socially and historically informed position. Huxley provided an introduction to Krishnamurti's quintessential statement, "The First and Last Freedom" (1954).
Huxley also became a Vedantist in the circle of Hindu Swami Prabhavananda, and introduced Christopher Isherwood to this circle. Not long afterward, Huxley wrote his book on widely held spiritual values and ideas, "The Perennial Philosophy", which discussed the teachings of renowned mystics of the world. Huxley's book affirmed a sensibility that insists there are realities beyond the generally accepted "five senses" and that there is genuine meaning for humans beyond both sensual satisfactions and sentimentalities.
Huxley became a close friend of Remsen Bird, president of Occidental College. He spent much time at the college, which is in the Eagle Rock neighbourhood of Los Angeles. The college appears as "Tarzana College" in his satirical novel "After Many a Summer" (1939). The novel won Huxley a British literary award, the 1939 James Tait Black Memorial Prize for fiction. Huxley also incorporated Bird into the novel.
During this period, Huxley earned a substantial income as a Hollywood screenwriter; Christopher Isherwood, in his autobiography "My Guru and His Disciple", states that Huxley earned more than $3,000 per week (approximately $50,000 in 2020 dollars) as a screenwriter, and that he used much of it to transport Jewish and left-wing writer and artist refugees from Hitler's Germany to the US. In March 1938, Huxley's friend Anita Loos, a novelist and screenwriter, put him in touch with Metro-Goldwyn-Mayer (MGM), which hired him for "Madame Curie" which was originally to star Greta Garbo and be directed by George Cukor. (Eventually, the film was completed by MGM in 1943 with a different director and cast.) Huxley received screen credit for "Pride and Prejudice" (1940) and was paid for his work on a number of other films, including "Jane Eyre" (1944). He was commissioned by Walt Disney in 1945 to write a script based on "Alice's Adventures in Wonderland" and the biography of the story's author, Lewis Carroll. The script was not used, however.
Huxley wrote an introduction to the posthumous publication of J. D. Unwin's 1940 book "Hopousia or The Sexual and Economic Foundations of a New Society".
On 21 October 1949, Huxley wrote to George Orwell, author of "Nineteen Eighty-Four", congratulating him on "how fine and how profoundly important the book is." In his letter to Orwell, he predicted:
Huxley had deeply felt apprehensions about the future the developed world might make for itself. From these, he made some warnings in his writings and talks. In a 1958 televised interview conducted by journalist Mike Wallace, Huxley outlined several major concerns: the difficulties and dangers of world overpopulation; the tendency toward distinctly hierarchical social organisation; the crucial importance of evaluating the use of technology in mass societies susceptible to persuasion; the tendency to promote modern politicians to a naive public as well-marketed commodities.
In the fall semester of 1960, Huxley was invited by Professor Huston Smith to be the Carnegie Visiting Professor of Humanities at the Massachusetts Institute of Technology (MIT). As part of the MIT centennial program of events organised by the Department of Humanities, Huxley presented a series of lectures titled, "What a Piece of Work is a Man" which concerned history, language, and art.
In 1953, Huxley and Maria applied for United States citizenship and presented themselves for examination. When Huxley refused to bear arms for the U.S. and would not state that his objections were based on religious ideals, the only excuse allowed under the McCarran Act, the judge had to adjourn the proceedings. He withdrew his application. Nevertheless, he remained in the U.S. In 1959 Huxley turned down an offer of a Knight Bachelor by the Macmillan government without putting forward a reason; his brother Julian had been knighted in 1958, while another brother Andrew would be knighted in 1974.
Beginning in 1939 and continuing until his death in 1963, Huxley had an extensive association with the Vedanta Society of Southern California, founded and headed by Swami Prabhavananda. Together with Gerald Heard, Christopher Isherwood and other followers, he was initiated by the Swami and was taught meditation and spiritual practices.
In 1944, Huxley wrote the introduction to the "Bhagavad Gita: The Song of God", translated by Swami Prabhavananda and Christopher Isherwood, which was published by the Vedanta Society of Southern California.
From 1941 until 1960, Huxley contributed 48 articles to "Vedanta and the West", published by the society. He also served on the editorial board with Isherwood, Heard, and playwright John Van Druten from 1951 through 1962.
Huxley also occasionally lectured at the Hollywood and Santa Barbara Vedanta temples. Two of those lectures have been released on CD: "Knowledge and Understanding" and "Who Are We?" from 1955. Nonetheless, Huxley's agnosticism, together with his speculative propensity, made it difficult for him to fully embrace any form of institutionalised religion.
In the spring of 1953, Huxley had his first experience with the psychedelic drug mescaline. Huxley had initiated a correspondence with Doctor Humphry Osmond, a British psychiatrist then employed in a Canadian institution, and eventually asked him to supply a dose of mescaline; Osmond obliged and supervised Huxley's session in southern California. After the publication of "The Doors of Perception", in which he recounted this experience, Huxley and Swami Prabhavananda disagreed about the meaning and importance of the psychedelic drug experience, which may have caused the relationship to cool, but Huxley continued to write articles for the society's journal, lecture at the temple, and attend social functions. Huxley later had an experience on mescaline that he considered more profound than those detailed in "The Doors of Perception".
Huxley wrote that "The mystical experience is doubly valuable; it is valuable because it gives the experiencer a better understanding of himself and the world and because it may help him to lead a less self-centered and more creative life."
Differing accounts exist about the details of the quality of Huxley's eyesight at specific points in his life. In about 1939 Huxley encountered the Bates method for better eyesight, and a teacher, Margaret Darst Corbett, who was able to teach the method to him. In 1940, Huxley relocated from Hollywood to a "ranchito" in the high desert hamlet of Llano, California, in northern Los Angeles County. Huxley then said that his sight improved dramatically with the Bates Method and the extreme and pure natural lighting of the southwestern American desert. He reported that, for the first time in more than 25 years, he was able to read without glasses and without strain. He even tried driving a car along the dirt road beside the ranch. He wrote a book about his successes with the Bates Method, "The Art of Seeing", which was published in 1942 (U.S.), 1943 (UK). The book contained some generally disputed theories, and its publication created a growing degree of popular controversy about Huxley's eyesight.
It was, and is, widely believed that Huxley was nearly blind since the illness in his teens, despite the partial recovery that had enabled him to study at Oxford. For example, some ten years after publication of "The Art of Seeing", in 1952, Bennett Cerf was present when Huxley spoke at a Hollywood banquet, wearing no glasses and apparently reading his paper from the lectern without difficulty: "Then suddenly he faltered—and the disturbing truth became obvious. He wasn't reading his address at all. He had learned it by heart. To refresh his memory he brought the paper closer and closer to his eyes. When it was only an inch or so away he still couldn't read it, and had to fish for a magnifying glass in his pocket to make the typing visible to him. It was an agonising moment".
Brazilian author João Ubaldo Ribeiro, who as a young journalist spent several evenings in the Huxleys' company in the late 1950s, wrote that Huxley had said to him, with a wry smile, "I can hardly see at all. And I don't give a damn, really".
On the other hand, Huxley's second wife, Laura Archera, later emphasised in her biographical account, "This Timeless Moment": "One of the great achievements of his life: that of having regained his sight". After revealing a letter she wrote to the "Los Angeles Times" disclaiming the label of Huxley as a "poor fellow who can hardly see" by Walter C. Alvarez, she tempered her statement with, "Although I feel it was an injustice to treat Aldous as though he were blind, it is true there were many indications of his impaired vision. For instance, although Aldous did not wear glasses, he would quite often use a magnifying lens". Laura Huxley proceeded to elaborate a few nuances of inconsistency peculiar to Huxley's vision. Her account, in this respect, agrees with the following sample of Huxley's own words from "The Art of Seeing": "The most characteristic fact about the functioning of the total organism, or any part of the organism, is that it is not constant, but highly variable". Nevertheless, the topic of Huxley's eyesight continues to endure similar, significant controversy.
American popular science author Steven Johnson, in his book "Mind Wide Open", quotes Huxley about his difficulties with visual encoding: "I am and, for as long as I can remember, I have always been a poor visualizer. Words, even the pregnant words of poets, do not evoke pictures in my mind. No hypnagogic visions greet me on the verge of sleep. When I recall something, the memory does not present itself to me as a vividly seen event or object. By an effort of the will, I can evoke a not very vivid image of what happened yesterday afternoon ...".
Huxley married Maria Nys (10 September 1899 – 12 February 1955), a Belgian he met at Garsington, Oxfordshire, in 1919. They had one child, Matthew Huxley (19 April 1920 – 10 February 2005), who had a career as an author, anthropologist, and prominent epidemiologist. In 1955, Maria Huxley died of cancer.
In 1956, Huxley married Laura Archera (1911–2007), also an author, as well as a violinist and psychotherapist. She wrote "This Timeless Moment", a biography of Huxley. She told the story of their marriage through Mary Ann Braubach's 2010 documentary, "Huxley on Huxley".
Huxley was diagnosed with laryngeal cancer in 1960; in the years that followed, with his health deteriorating, he wrote the Utopian novel "Island", and gave lectures on "Human Potentialities" both at the UCSF Medical Center and at the Esalen Institute. These lectures were fundamental to the beginning of the Human Potential Movement.
Huxley was a close friend of Jiddu Krishnamurti and Rosalind Rajagopal and was involved in the creation of the Happy Valley School, now Besant Hill School of Happy Valley, in Ojai, California.
The most substantial collection of Huxley's few remaining papers, following the destruction of most in a fire, is at the Library of the University of California, Los Angeles. Some are also at the Stanford University Libraries.
On 9 April 1962, Huxley was informed he was elected Companion of Literature by the Royal Society of Literature, the senior literary organisation in Britain, and he accepted the title via letter on 28 April 1962. The correspondence between Huxley and the society is kept at the Cambridge University Library. The society invited Huxley to appear at a banquet and give a lecture at Somerset House, London in June 1963. Huxley wrote a draft of the speech he intended to give at the society; however, his deteriorating health meant he was not able to attend.
On his deathbed, unable to speak owing to advanced laryngeal cancer, Huxley made a written request to his wife Laura for "LSD, 100 µg, intramuscular." According to her account of his death in "This Timeless Moment", she obliged with an injection at 11:20 a.m. and a second dose an hour later; Huxley died aged 69, at 5:20 p.m. (Los Angeles time), on 22 November 1963.
Media coverage of Huxley's death, along with that of fellow British author C. S. Lewis, was overshadowed by the assassination of American President John F. Kennedy on the same day, less than seven hours before Huxley's death. In an article for "New York" magazine titled "The Eclipsed Celebrity Death Club", Christopher Bonanos wrote,
This coincidence served as the basis for Peter Kreeft's book "Between Heaven and Hell: A Dialog Somewhere Beyond Death with John F. Kennedy, C. S. Lewis, & Aldous Huxley", which imagines a conversation among the three men taking place in Purgatory following their deaths.
Huxley's memorial service took place in London in December 1963; it was led by his elder brother Julian. On 27 October 1971 his ashes were interred in the family grave at the Watts Cemetery, home of the Watts Mortuary Chapel in Compton, Guildford, Surrey, England.
Huxley had been a long-time friend of Russian composer Igor Stravinsky, who later dedicated his last orchestral composition to Huxley. Stravinsky began "Variations" in Santa Fe, New Mexico, in July 1963, and completed the composition in Hollywood on 28 October 1964. It was first performed in Chicago on 17 April 1965, by the Chicago Symphony Orchestra conducted by Robert Craft. | https://en.wikipedia.org/wiki?curid=628 |
Algae
Algae (singular: alga) is an informal term for a large and diverse group of photosynthetic eukaryotic organisms. It is a polyphyletic grouping, including species from multiple distinct clades. Included organisms range from unicellular microalgae, such as "Chlorella" and the diatoms, to multicellular forms, such as the giant kelp, a large brown alga which may grow up to 50 m in length. Most are aquatic and autotrophic and lack many of the distinct cell and tissue types, such as stomata, xylem and phloem, which are found in land plants. The largest and most complex marine algae are called seaweeds, while the most complex freshwater forms are the Charophyta, a division of green algae which includes, for example, "Spirogyra" and stoneworts.
No definition of algae is generally accepted. One definition is that algae "have chlorophyll as their primary photosynthetic pigment and lack a sterile covering of cells around their reproductive cells". Although cyanobacteria are often referred to as "blue-green algae", most authorities exclude all prokaryotes from the definition of algae.
Algae constitute a polyphyletic group since they do not include a common ancestor, and although their plastids seem to have a single origin, from cyanobacteria, they were acquired in different ways. Green algae are examples of algae that have primary chloroplasts derived from endosymbiotic cyanobacteria. Diatoms and brown algae are examples of algae with secondary chloroplasts derived from an endosymbiotic red alga.
Algae exhibit a wide range of reproductive strategies, from simple asexual cell division to complex forms of sexual reproduction.
Algae lack the various structures that characterize land plants, such as the phyllids (leaf-like structures) of bryophytes, rhizoids in nonvascular plants, and the roots, leaves, and other organs found in tracheophytes (vascular plants). Most are phototrophic, although some are mixotrophic, deriving energy both from photosynthesis and uptake of organic carbon either by osmotrophy, myzotrophy, or phagotrophy. Some unicellular species of green algae, many golden algae, euglenids, dinoflagellates, and other algae have become heterotrophs (also called colorless or apochlorotic algae), sometimes parasitic, relying entirely on external energy sources and have limited or no photosynthetic apparatus. Some other heterotrophic organisms, such as the apicomplexans, are also derived from cells whose ancestors possessed plastids, but are not traditionally considered as algae. Algae have photosynthetic machinery ultimately derived from cyanobacteria that produce oxygen as a by-product of photosynthesis, unlike other photosynthetic bacteria such as purple and green sulfur bacteria. Fossilized filamentous algae from the Vindhya basin have been dated back to 1.6 to 1.7 billion years ago.
The singular "alga" is the Latin word for 'seaweed' and retains that meaning in English. The etymology is obscure. Although some speculate that it is related to a Latin word meaning 'be cold', no reason is known to associate seaweed with temperature. A more likely source is a word meaning 'binding, entwining'.
The Ancient Greek word for 'seaweed' could mean either the seaweed (probably red algae) or a red dye derived from it. Its Latinization, "fucus", meant primarily the cosmetic rouge. The etymology is uncertain, but a strong candidate has long been some word related to the Biblical Hebrew word for 'paint' (if not that word itself), a cosmetic eye-shadow used by the ancient Egyptians and other inhabitants of the eastern Mediterranean. It could be any color: black, red, green, or blue.
Accordingly, the modern study of marine and freshwater algae is called either phycology or algology, depending on whether the Greek or Latin root is used. The name "fucus" appears in a number of taxa.
The committee on the International Code of Botanical Nomenclature has recommended certain suffixes for use in the classification of algae. These are "-phyta" for division, "-phyceae" for class, "-phycidae" for subclass, "-ales" for order, "-ineae" for suborder, "-aceae" for family, "-oideae" for subfamily, a Greek-based name for genus, and a Latin-based name for species.
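The recommended suffixes amount to a rank-to-ending lookup table. The sketch below is purely illustrative (the stems, dictionary, and function name are invented for the example, and real algal nomenclature has many historical exceptions):

```python
# A toy lookup of the recommended rank suffixes described above.
# Illustrative only; names and function are invented for this example.
ALGAL_RANK_SUFFIXES = {
    "division": "phyta",
    "class": "phyceae",
    "order": "ales",
    "family": "aceae",
}

def rank_name(stem: str, rank: str) -> str:
    """Append the recommended suffix for `rank` to a Greek-based stem."""
    return stem.capitalize() + ALGAL_RANK_SUFFIXES[rank]

print(rank_name("chloro", "division"))  # Chlorophyta (the green algae)
print(rank_name("rhodo", "class"))      # Rhodophyceae
```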
The primary classification of algae is based on certain morphological features. The chief among these are (a) pigment constitution of the cell, (b) chemical nature of stored food materials, (c) kind, number, point of insertion and relative length of the flagella on the motile cell, (d) chemical composition of cell wall and (e) presence or absence of a definitely organized nucleus in the cell or any other significant details of cell structure.
Although Carolus Linnaeus (1754) included algae along with lichens in his 25th class Cryptogamia, he did not elaborate further on the classification of algae.
Jean Pierre Étienne Vaucher (1803) was perhaps the first to propose a system of classification of algae, and he recognized three groups, Conferves, Ulves, and Tremelles. While Johann Heinrich Friedrich Link (1820) classified algae on the basis of the colour of the pigment and structure, William Henry Harvey (1836) proposed a system of classification on the basis of the habitat and the pigment. J. G. Agardh (1849–1898) divided algae into six orders: Diatomaceae, Nostochineae, Confervoideae, Ulvaceae, Florideae and Fucoideae. Around 1880, algae along with fungi were grouped under Thallophyta, a division created by Eichler (1836). Encouraged by this, Adolf Engler and Karl A. E. Prantl (1912) proposed a revised scheme of classification of algae and included fungi in algae, as they were of the opinion that fungi had been derived from algae. The scheme proposed by Engler and Prantl is summarised as follows:
The algae contain chloroplasts that are similar in structure to cyanobacteria. Chloroplasts contain circular DNA like that in cyanobacteria and are interpreted as representing reduced endosymbiotic cyanobacteria. However, the exact origin of the chloroplasts is different among separate lineages of algae, reflecting their acquisition during different endosymbiotic events. The table below describes the composition of the three major groups of algae. Their lineage relationships are shown in the figure in the upper right. Many of these groups contain some members that are no longer photosynthetic. Some retain plastids, but not chloroplasts, while others have lost plastids entirely.
Phylogeny based on plastid not nucleocytoplasmic genealogy:
Linnaeus, in "Species Plantarum" (1753), the starting point for modern botanical nomenclature, recognized 14 genera of algae, of which only four are currently considered among algae. In "Systema Naturae", Linnaeus described the genera "Volvox" and "Corallina", and a species of "Acetabularia" (as "Madrepora"), among the animals.
In 1768, Samuel Gottlieb Gmelin (1744–1774) published the "Historia Fucorum", the first work dedicated to marine algae and the first book on marine biology to use the then new binomial nomenclature of Linnaeus. It included elaborate illustrations of seaweed and marine algae on folded leaves.
W. H. Harvey (1811–1866) and Lamouroux (1813) were the first to divide macroscopic algae into four divisions based on their pigmentation. This was the first use of a biochemical criterion in plant systematics. Harvey's four divisions are: red algae (Rhodospermae), brown algae (Melanospermae), green algae (Chlorospermae), and Diatomaceae.
At this time, microscopic algae were discovered and reported by a different group of workers (e.g., O. F. Müller and Ehrenberg) studying the Infusoria (microscopic organisms). Unlike macroalgae, which were clearly viewed as plants, microalgae were frequently considered animals because they are often motile. Even the nonmotile (coccoid) microalgae were sometimes merely seen as stages of the lifecycle of plants, macroalgae, or animals.
Although used as a taxonomic category in some pre-Darwinian classifications, e.g., Linnaeus (1753), de Jussieu (1789), Horaninow (1843), Agassiz (1859), Wilson & Cassin (1864), in further classifications, the "algae" are seen as an artificial, polyphyletic group.
Throughout the 20th century, most classifications treated the following groups as divisions or classes of algae: cyanophytes, rhodophytes, chrysophytes, xanthophytes, bacillariophytes, phaeophytes, pyrrhophytes (cryptophytes and dinophytes), euglenophytes, and chlorophytes. Later, many new groups were discovered (e.g., Bolidophyceae), and others were splintered from older groups: charophytes and glaucophytes (from chlorophytes), many heterokontophytes (e.g., synurophytes from chrysophytes, or eustigmatophytes from xanthophytes), haptophytes (from chrysophytes), and chlorarachniophytes (from xanthophytes).
With the abandonment of plant-animal dichotomous classification, most groups of algae (sometimes all) were included in Protista, later also abandoned in favour of Eukaryota. However, as a legacy of the older plant life scheme, some groups that were also treated as protozoans in the past still have duplicated classifications (see ambiregnal protists).
Some parasitic algae (e.g., the green algae "Prototheca" and "Helicosporidium", parasites of metazoans, or "Cephaleuros", parasites of plants) were originally classified as fungi, sporozoans, or protistans of "incertae sedis", while others (e.g., the green algae "Phyllosiphon" and "Rhodochytrium", parasites of plants, or the red algae "Pterocladiophila" and "Gelidiocolax mammillatus", parasites of other red algae, or the dinoflagellates "Oodinium", parasites of fish) had their relationship with algae conjectured early. In other cases, some groups were originally characterized as parasitic algae (e.g., "Chlorochytrium"), but later were seen as endophytic algae. Some filamentous bacteria (e.g., "Beggiatoa") were originally seen as algae. Furthermore, groups like the apicomplexans are also parasites derived from ancestors that possessed plastids, but are not included in any group traditionally seen as algae.
The first land plants probably evolved from shallow freshwater charophyte algae much like "Chara" almost 500 million years ago. These probably had an isomorphic alternation of generations and were probably filamentous. Fossils of isolated land plant spores suggest land plants may have been around as long as 475 million years ago.
A range of algal morphologies is exhibited, and convergence of features in unrelated groups is common. The only groups to exhibit three-dimensional multicellular thalli are the reds and browns, and some chlorophytes. Apical growth is constrained to subsets of these groups: the florideophyte reds, various browns, and the charophytes. The form of charophytes is quite different from those of reds and browns, because they have distinct nodes, separated by internode 'stems'; whorls of branches reminiscent of the horsetails occur at the nodes. Conceptacles are another polyphyletic trait; they appear in the coralline algae and the Hildenbrandiales, as well as the browns.
Most of the simpler algae are unicellular flagellates or amoeboids, but colonial and nonmotile forms have developed independently among several of the groups. Some of the more common organizational levels, more than one of which may occur in the lifecycle of a species, are
In three lines, even higher levels of organization have been reached, with full tissue differentiation. These are the brown algae (some of which may reach 50 m in length, the kelps), the red algae, and the green algae. The most complex forms are found among the charophyte algae (see Charales and Charophyta), in a lineage that eventually led to the higher land plants. The innovation that defines these nonalgal plants is the presence of female reproductive organs with protective cell layers that protect the zygote and developing embryo. Hence, the land plants are referred to as the Embryophytes.
Many algae, particularly members of the Characeae, have served as model experimental organisms to understand the mechanisms of the water permeability of membranes, osmoregulation, turgor regulation, salt tolerance, cytoplasmic streaming, and the generation of action potentials.
Phytohormones are found not only in higher plants, but in algae, too.
Some species of algae form symbiotic relationships with other organisms. In these symbioses, the algae supply photosynthates (organic substances) to a host organism, which in turn provides protection to the algal cells. The host organism derives some or all of its energy requirements from the algae. Examples are:
Lichens are defined by the International Association for Lichenology to be "an association of a fungus and a photosynthetic symbiont resulting in a stable vegetative body having a specific structure". The fungi, or mycobionts, are mainly from the Ascomycota with a few from the Basidiomycota. In nature they do not occur separate from lichens. It is unknown when they began to associate. One mycobiont associates with the same phycobiont species, rarely two, from the green algae, except that alternatively, the mycobiont may associate with a species of cyanobacteria (hence "photobiont" is the more accurate term). A photobiont may be associated with many different mycobionts or may live independently; accordingly, lichens are named and classified as fungal species. The association is termed a morphogenesis because the lichen has a form and capabilities not possessed by the symbiont species alone (they can be experimentally isolated). The photobiont possibly triggers otherwise latent genes in the mycobiont.
"Trentepohlia" is an example of a common green alga genus worldwide that can grow on its own or be lichenised. Lichens thus share some of the habitat and often a similar appearance with specialized species of algae ("aerophytes") growing on exposed surfaces such as tree trunks and rocks, sometimes discoloring them.
Coral reefs are accumulated from the calcareous exoskeletons of marine invertebrates of the order Scleractinia (stony corals). These animals metabolize sugar and oxygen to obtain energy for their cell-building processes, including secretion of the exoskeleton, with water and carbon dioxide as byproducts. Dinoflagellates (algal protists) are often endosymbionts in the cells of the coral-forming marine invertebrates, where they accelerate host-cell metabolism by generating sugar and oxygen immediately available through photosynthesis using incident light and the carbon dioxide produced by the host. Reef-building stony corals (hermatypic corals) require endosymbiotic algae from the genus "Symbiodinium" to be in a healthy condition. The loss of "Symbiodinium" from the host is known as coral bleaching, a condition which leads to the deterioration of a reef.
Endosymbiotic green algae live close to the surface of some sponges, for example, breadcrumb sponges ("Halichondria panicea"). The alga is thus protected from predators; the sponge is provided with oxygen and sugars which can account for 50 to 80% of sponge growth in some species.
Rhodophyta, Chlorophyta, and Heterokontophyta, the three main algal divisions, have lifecycles which show considerable variation and complexity. In general, an asexual phase exists where the seaweed's cells are diploid, a sexual phase where the cells are haploid, followed by fusion of the male and female gametes. Asexual reproduction permits efficient population increases, but less variation is possible. Commonly, in sexual reproduction of unicellular and colonial algae, two specialized, sexually compatible, haploid gametes make physical contact and fuse to form a zygote. To ensure a successful mating, the development and release of gametes is highly synchronized and regulated; pheromones may play a key role in these processes. Sexual reproduction allows for more variation and provides the benefit of efficient recombinational repair of DNA damages during meiosis, a key stage of the sexual cycle. However, sexual reproduction is more costly than asexual reproduction. Meiosis has been shown to occur in many different species of algae.
The "Algal Collection of the US National Herbarium" (located in the National Museum of Natural History) consists of approximately 320,500 dried specimens, which, although not exhaustive (no exhaustive collection exists), gives an idea of the order of magnitude of the number of algal species (that number remains unknown). Estimates vary widely. For example, according to one standard textbook, in the British Isles the "UK Biodiversity Steering Group Report" estimated there to be 20,000 algal species in the UK. Another checklist reports only about 5,000 species. Regarding the difference of about 15,000 species, the text concludes: "It will require many detailed field surveys before it is possible to provide a reliable estimate of the total number of species ..."
Regional and group estimates have been made, as well:
and so on, but lacking any scientific basis or reliable sources, these numbers have no more credibility than the British ones mentioned above. Most estimates also omit microscopic algae, such as phytoplankton.
The most recent estimate suggests 72,500 algal species worldwide.
The distribution of algal species has been fairly well studied since the founding of phytogeography in the mid-19th century. Algae spread mainly by the dispersal of spores analogously to the dispersal of Plantae by seeds and spores. This dispersal can be accomplished by air, water, or other organisms. Due to this, spores can be found in a variety of environments: fresh and marine waters, air, soil, and in or on other organisms. Whether a spore is to grow into an organism depends on the combination of the species and the environmental conditions where the spore lands.
The spores of freshwater algae are dispersed mainly by running water and wind, as well as by living carriers. However, not all bodies of water can carry all species of algae, as the chemical composition of certain water bodies limits the algae that can survive within them. Marine spores are often spread by ocean currents. Ocean water presents many vastly different habitats based on temperature and nutrient availability, resulting in phytogeographic zones, regions, and provinces.
To some degree, the distribution of algae is subject to floristic discontinuities caused by geographical features, such as Antarctica, long distances of ocean or general land masses. It is, therefore, possible to identify species occurring by locality, such as "Pacific algae" or "North Sea algae". When they occur out of their localities, hypothesizing a transport mechanism is usually possible, such as the hulls of ships. For example, "Ulva reticulata" and "U. fasciata" travelled from the mainland to Hawaii in this manner.
Mapping is possible for select species only: "there are many valid examples of confined distribution patterns." For example, "Clathromorphum" is an arctic genus and is not mapped far south of there. However, scientists regard the overall data as insufficient due to the "difficulties of undertaking such studies."
Algae are prominent in bodies of water, common in terrestrial environments, and are found in unusual environments, such as on snow and ice. Seaweeds grow mostly in shallow marine waters; however, some, such as "Navicula pennata", have been recorded at considerable depth.
The various sorts of algae play significant roles in aquatic ecology. Microscopic forms that live suspended in the water column (phytoplankton) provide the food base for most marine food chains. In very high densities (algal blooms), these algae may discolor the water and outcompete, poison, or asphyxiate other life forms.
Algae can be used as indicator organisms to monitor pollution in various aquatic systems. In many cases, algal metabolism is sensitive to various pollutants. Due to this, the species composition of algal populations may shift in the presence of chemical pollutants. To detect these changes, algae can be sampled from the environment and maintained in laboratories with relative ease.
On the basis of their habitat, algae can be categorized as: aquatic (planktonic, benthic, marine, freshwater, lentic, lotic), terrestrial, aerial (subaerial), lithophytic, halophytic (or euryhaline), psammon, thermophilic, cryophilic, epibiont (epiphytic, epizoic), endosymbiont (endophytic, endozoic), parasitic, calcifilic or lichenic (phycobiont).
In classical Chinese, the word "zao" is used both for "algae" and (in the modest tradition of the imperial scholars) for "literary talent". The third island in Kunming Lake beside the Summer Palace in Beijing is known as the Zaojian Tang Dao, which thus simultaneously means "Island of the Algae-Viewing Hall" and "Island of the Hall for Reflecting on Literary Talent".
Agar, a gelatinous substance derived from red algae, has a number of commercial uses. It is a good medium on which to grow bacteria and fungi, as most microorganisms cannot digest agar.
Alginic acid, or alginate, is extracted from brown algae. Its uses range from gelling agents in food to medical dressings. Alginic acid has also been used in biotechnology as a biocompatible medium for cell encapsulation and cell immobilization. Molecular cuisine also uses the substance for its gelling properties, which make it a delivery vehicle for flavours.
Between 100,000 and 170,000 wet tons of "Macrocystis" are harvested annually in New Mexico for alginate extraction and abalone feed.
To be competitive and, in the long run, independent of fluctuating support from (local) policy, biofuels should equal or beat the cost level of fossil fuels. Here, algae-based fuels hold great promise, directly related to their potential to produce more biomass per unit area per year than any other form of biomass. The break-even point for algae-based biofuels is estimated to occur by 2025.
For centuries, seaweed has been used as a fertilizer; George Owen of Henllys writing in the 16th century referring to drift weed in South Wales:
Today, algae are used by humans in many ways; for example, as fertilizers, soil conditioners, and livestock feed. Aquatic and microscopic species are cultured in clear tanks or ponds and are either harvested or used to treat effluents pumped through the ponds. Algaculture on a large scale is an important type of aquaculture in some places. Maerl is commonly used as a soil conditioner.
Naturally growing seaweeds are an important source of food, especially in Asia. They provide many vitamins including: A, B1, B2, B6, niacin, and C, and are rich in iodine, potassium, iron, magnesium, and calcium. In addition, commercially cultivated microalgae, including both algae and cyanobacteria, are marketed as nutritional supplements, such as spirulina, "Chlorella" and the vitamin-C supplement from "Dunaliella", high in beta-carotene.
Algae are national foods of many nations: China consumes more than 70 species, including "fat choy", a cyanobacterium considered a vegetable; Japan, over 20 species, such as "nori" and "aonori"; Ireland, dulse; Chile, cochayuyo. Laver is used to make laver bread in Wales and is also eaten in Korea, as well as along the west coast of North America from California to British Columbia, in Hawaii, and by the Māori of New Zealand. Sea lettuce and badderlocks are salad ingredients in Scotland, Ireland, Greenland, and Iceland. Algae are also being considered as a potential solution to the world hunger problem.
The oils from some algae have high levels of unsaturated fatty acids. For example, "Parietochloris incisa" is very high in arachidonic acid, where it reaches up to 47% of the triglyceride pool. Some varieties of algae favored by vegetarianism and veganism contain the long-chain, essential omega-3 fatty acids, docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA). Fish oil contains the omega-3 fatty acids, but the original source is algae (microalgae in particular), which are eaten by marine life such as copepods and are passed up the food chain. Algae have emerged in recent years as a popular source of omega-3 fatty acids for vegetarians who cannot get long-chain EPA and DHA from other vegetarian sources such as flaxseed oil, which only contains the short-chain alpha-linolenic acid (ALA).
Agricultural Research Service scientists found that 60–90% of nitrogen runoff and 70–100% of phosphorus runoff can be captured from manure effluents using a horizontal algae scrubber, also called an algal turf scrubber (ATS). Scientists developed the ATS, which consists of shallow, 100-foot raceways of nylon netting where algae colonies can form, and studied its efficacy for three years. They found that algae can readily be used to reduce the nutrient runoff from agricultural fields and increase the quality of water flowing into rivers, streams, and oceans. Researchers collected and dried the nutrient-rich algae from the ATS and studied its potential as an organic fertilizer. They found that cucumber and corn seedlings grew just as well using ATS organic fertilizer as they did with commercial fertilizers. Algae scrubbers, using bubbling upflow or vertical waterfall versions, are now also being used to filter aquaria and ponds.
Various polymers can be created from algae, which can be especially useful in the creation of bioplastics. These include hybrid plastics, cellulose based plastics, poly-lactic acid, and bio-polyethylene. Several companies have begun to produce algae polymers commercially, including for use in flip-flops and in surf boards.
The alga "Stichococcus bacillaris" has been seen to colonize silicone resins used at archaeological sites, biodegrading the synthetic substance.
The natural pigments (carotenoids and chlorophylls) produced by algae can be used as alternatives to chemical dyes and coloring agents.
The presence of some individual algal pigments, together with specific pigment concentration ratios, are taxon-specific: analysis of their concentrations with various analytical methods, particularly high-performance liquid chromatography, can therefore offer deep insight into the taxonomic composition and relative abundance of natural algae populations in sea water samples.
Carrageenan, from the red alga "Chondrus crispus", is used as a stabilizer in milk products. | https://en.wikipedia.org/wiki?curid=633 |
Analysis of variance
Analysis of variance (ANOVA) is a collection of statistical models and their associated estimation procedures (such as the "variation" among and between groups) used to analyze the differences among group means in a sample. ANOVA was developed by the statistician Ronald Fisher. The ANOVA is based on the law of total variance, where the observed variance in a particular variable is partitioned into components attributable to different sources of variation. In its simplest form, ANOVA provides a statistical test of whether two or more population means are equal, and therefore generalizes the "t"-test beyond two means.
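The partition of variation described above can be sketched numerically. The following one-way ANOVA computes the between-group and within-group sums of squares and the F statistic by hand; the three groups of observations are invented for the example:

```python
# One-way ANOVA from scratch: partition total variation into
# between-group and within-group sums of squares, then form F.
from statistics import mean

def one_way_anova(groups):
    all_obs = [x for g in groups for x in g]
    grand = mean(all_obs)
    k = len(groups)      # number of groups
    n = len(all_obs)     # total observations
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    f = (ss_between / (k - 1)) / (ss_within / (n - k))
    return ss_between, ss_within, f

# Invented data: three groups of six observations each.
groups = [[6.0, 8.0, 4.0, 5.0, 3.0, 4.0],
          [8.0, 12.0, 9.0, 11.0, 6.0, 8.0],
          [13.0, 9.0, 11.0, 8.0, 7.0, 12.0]]
ssb, ssw, f = one_way_anova(groups)
print(ssb, ssw, f)  # 84.0 68.0 and F ≈ 9.26
```

A large F (between-group mean square much bigger than within-group mean square) is evidence against the hypothesis that all group means are equal; with two groups, F reduces to the square of the usual t statistic.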
While the analysis of variance reached fruition in the 20th century, antecedents extend centuries into the past, according to Stigler. These include hypothesis testing, the partitioning of sums of squares, experimental techniques and the additive model. Laplace was performing hypothesis testing in the 1770s. The development of least-squares methods by Laplace and Gauss circa 1800 provided an improved method of combining observations (over the existing practices then used in astronomy and geodesy). It also initiated much study of the contributions to sums of squares. Laplace knew how to estimate a variance from a residual (rather than a total) sum of squares. By 1827, Laplace was using least squares methods to address ANOVA problems regarding measurements of atmospheric tides. Before 1800, astronomers had isolated observational errors resulting from reaction times (the "personal equation") and had developed methods of reducing the errors. The experimental methods used in the study of the personal equation were later accepted by the emerging field of psychology, which developed strong (full factorial) experimental methods, to which randomization and blinding were soon added. An eloquent non-mathematical explanation of the additive effects model was available in 1885.
Ronald Fisher introduced the term variance and proposed its formal analysis in a 1918 article "The Correlation Between Relatives on the Supposition of Mendelian Inheritance". His first application of the analysis of variance was published in 1921. Analysis of variance became widely known after being included in Fisher's 1925 book "Statistical Methods for Research Workers".
Randomization models were developed by several researchers. The first was published in Polish by Jerzy Neyman in 1923.
One of the attributes of ANOVA that ensured its early popularity was computational elegance. The structure of the additive model allows solution for the additive coefficients by simple algebra rather than by matrix calculations. In the era of mechanical calculators this simplicity was critical. The determination of statistical significance also required access to tables of the F function which were supplied by early statistics texts.
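The "simple algebra" noted above can be illustrated for a balanced two-way layout: under an additive model, the coefficient estimates are just differences of marginal means from the grand mean, with no matrix inversion required. The table of responses below is invented, and chosen to be exactly additive so the residuals vanish:

```python
# Additive model for a balanced two-way table: y_ij ≈ mu + a_i + b_j.
# With balanced data, every estimate is a simple mean; no matrices needed.
from statistics import mean

table = [            # rows: levels of factor A; columns: levels of factor B
    [10.0, 14.0, 12.0],
    [16.0, 20.0, 18.0],
]
mu = mean(x for row in table for x in row)   # grand mean -> 15.0
a = [mean(row) - mu for row in table]        # row effects -> [-3.0, 3.0]
b = [mean(col) - mu for col in zip(*table)]  # column effects -> [-2.0, 2.0, 0.0]

# Fitted values and residuals; this toy table happens to be exactly additive.
fitted = [[mu + ai + bj for bj in b] for ai in a]
residuals = [[y - fv for y, fv in zip(row, frow)]
             for row, frow in zip(table, fitted)]
```

In the era of mechanical calculators, reducing the whole fit to row and column means was exactly the computational shortcut that made ANOVA practical.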
The analysis of variance can be used as an exploratory tool to explain observations. A dog show provides an example. A dog show is not a random sampling of the breed: it is typically limited to dogs that are adult, pure-bred, and exemplary. A histogram of dog weights from a show might plausibly be rather complex, like the yellow-orange distribution shown in the illustrations. Suppose we wanted to predict the weight of a dog based on a certain set of characteristics of each dog. One way to do that is to "explain" the distribution of weights by dividing the dog population into groups based on those characteristics. A successful grouping will split dogs such that (a) each group has a low variance of dog weights (meaning the group is relatively homogeneous) and (b) the mean of each group is distinct (if two groups have the same mean, then it isn't reasonable to conclude that the groups are, in fact, separate in any meaningful way).
In the illustrations to the right, groups are identified as "X"1, "X"2, etc. In the first illustration, the dogs are divided according to the product (interaction) of two binary groupings: young vs old, and short-haired vs long-haired (e.g., group 1 is young, short-haired dogs, group 2 is young, long-haired dogs, etc.). Since the distributions of dog weight within each of the groups (shown in blue) have a relatively large variance, and since the means are very similar across groups, grouping dogs by these characteristics does not produce an effective way to explain the variation in dog weights: knowing which group a dog is in doesn't allow us to predict its weight much better than simply knowing the dog is in a dog show. Thus, this grouping fails to explain the variation in the overall distribution (yellow-orange).
An attempt to explain the weight distribution by grouping dogs as "pet vs working breed" and "less athletic vs more athletic" would probably be somewhat more successful (fair fit). The heaviest show dogs are likely to be big, strong, working breeds, while breeds kept as pets tend to be smaller and thus lighter. As shown by the second illustration, the distributions have variances that are considerably smaller than in the first case, and the means are more distinguishable. However, the significant overlap of distributions, for example, means that we cannot distinguish "X"1 and "X"2 reliably. Grouping dogs according to a coin flip might produce distributions that look similar.
An attempt to explain weight by breed is likely to produce a very good fit. All Chihuahuas are light and all St Bernards are heavy. The difference in weights between Setters and Pointers does not justify separate breeds. The analysis of variance provides the formal tools to justify these intuitive judgments. A common use of the method is the analysis of experimental data or the development of models. The method has some advantages over correlation: not all of the data must be numeric, and one result of the method is a judgment of the confidence in an explanatory relationship.
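The grouping criterion described above, low within-group variance together with distinct group means, can be illustrated with a toy computation. All weights below are invented for illustration, not real show data:

```python
def variance(xs):
    """Population variance (divide by n) of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def pooled_within_variance(groups):
    """Within-group variance, averaged with weights given by group size."""
    n = sum(len(g) for g in groups)
    return sum(len(g) * variance(g) for g in groups) / n

# A poor grouping: group means are similar and within-group spread stays large.
young_short = [12, 30, 25, 18]
young_long  = [14, 28, 22, 20]

# A good grouping: homogeneous groups with well-separated means.
chihuahua  = [2.0, 2.5, 2.2, 2.7]
st_bernard = [70, 75, 72, 78]

poor_ratio = pooled_within_variance([young_short, young_long]) / variance(young_short + young_long)
good_ratio = pooled_within_variance([chihuahua, st_bernard]) / variance(chihuahua + st_bernard)
# poor_ratio is close to 1: knowing the group tells us almost nothing new;
# good_ratio is close to 0: the grouping explains almost all the variation.
```

The ratio of pooled within-group variance to total variance is the quantity a successful grouping drives toward zero.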
ANOVA is a form of statistical hypothesis testing heavily used in the analysis of experimental data. A test result (calculated from the null hypothesis and the sample) is called statistically significant if it is deemed unlikely to have occurred by chance, "assuming the truth of the null hypothesis". A statistically significant result, when a probability ("p"-value) is less than a pre-specified threshold (significance level), justifies the rejection of the null hypothesis, but only if the a priori probability of the null hypothesis is not high.
In the typical application of ANOVA, the null hypothesis is that all groups are random samples from the same population. For example, when studying the effect of different treatments on similar samples of patients, the null hypothesis would be that all treatments have the same effect (perhaps none). Rejecting the null hypothesis is taken to mean that the differences in observed effects between treatment groups are unlikely to be due to random chance.
By construction, hypothesis testing limits the rate of Type I errors (false positives) to a significance level. Experimenters also wish to limit Type II errors (false negatives).
The rate of Type II errors depends largely on sample size (the rate is larger for smaller samples), significance
level (when the standard of proof is high, the chances of overlooking
a discovery are also high) and effect size (a smaller effect size is more prone to Type II error).
The terminology of ANOVA is largely from the statistical
design of experiments. The experimenter adjusts factors and
measures responses in an attempt to determine an effect. Factors are
assigned to experimental units by a combination of randomization and
blocking to ensure the validity of the results. Blinding keeps the
weighing impartial. Responses show a variability that is partially
the result of the effect and is partially random error.
ANOVA is the synthesis of several ideas and it is used for multiple
purposes. As a consequence, it is difficult to define concisely or precisely.
"Classical" ANOVA for balanced data does three things at once:
In short, ANOVA is a statistical tool used in several ways to develop and confirm an explanation for the observed data.
As a result:
ANOVA "has long enjoyed the status of being the most used (some would
say abused) statistical technique in psychological research."
ANOVA "is probably the most useful technique in the field of
statistical inference."
ANOVA is difficult to teach, particularly for complex experiments, with split-plot designs being notorious. In some cases the proper application of the method is best determined by problem pattern recognition followed by the consultation of a classic authoritative text.
(Condensed from the "NIST Engineering Statistics Handbook": Section 5.7. A
Glossary of DOE Terminology.)
material. "Randomization has three roles in applications: as a device
for eliminating biases, for example from unobserved explanatory
variables and selection effects; as a basis for estimating standard
errors; and as a foundation for formally exact significance tests."
(Cox 2006, page 192). Hinkelmann and Kempthorne use randomization both in experimental design and for statistical analysis.
There are three classes of models used in the analysis of variance, and these are outlined here.
The fixed-effects model (class I) of analysis of variance applies to situations in which the experimenter applies one or more treatments to the subjects of the experiment to see whether the response variable values change. This allows the experimenter to estimate the ranges of response variable values that the treatment would generate in the population as a whole.
The random-effects model (class II) is used when the treatments are not fixed. This occurs when the various factor levels are sampled from a larger population. Because the levels themselves are random variables, some assumptions and the method of contrasting the treatments (a multi-variable generalization of simple differences) differ from the fixed-effects model.
A mixed-effects model (class III) contains experimental factors of both fixed and random-effects types, with appropriately different interpretations and analysis for the two types.
Example:
Teaching experiments could be performed by a college or university department
to find a good introductory textbook, with each text considered a
treatment. The fixed-effects model would compare a list of candidate
texts. The random-effects model would determine whether important
differences exist among a list of randomly selected texts. The
mixed-effects model would compare the (fixed) incumbent texts to
randomly selected alternatives.
Defining fixed and random effects has proven elusive, with competing
definitions arguably leading toward a linguistic quagmire.
The analysis of variance has been studied from several approaches, the most common of which uses a linear model that relates the response to the treatments and blocks. Note that the model is linear in parameters but may be nonlinear across factor levels. Interpretation is easy when data is balanced across factors but much deeper understanding is needed for unbalanced data.
The analysis of variance can be presented in terms of a linear model, which makes the following assumptions about the probability distribution of the responses:
The separate assumptions of the textbook model imply that the errors are independently, identically, and normally distributed for fixed-effects models, that is, that the errors (ε) are independent and ε ~ N(0, σ²).
In a randomized controlled experiment, the treatments are randomly assigned to experimental units, following the experimental protocol. This randomization is objective and declared before the experiment is carried out. The objective random-assignment is used to test the significance of the null hypothesis, following the ideas of C. S. Peirce and Ronald Fisher. This design-based analysis was discussed and developed by Francis J. Anscombe at Rothamsted Experimental Station and by Oscar Kempthorne at Iowa State University. Kempthorne and his students make an assumption of "unit treatment additivity", which is discussed in the books of Kempthorne and David R. Cox.
In its simplest form, the assumption of unit-treatment additivity states that the observed response y(i,t) from experimental unit i when receiving treatment t can be written as the sum of the unit's response y(i) and the treatment effect t(t), that is y(i,t) = y(i) + t(t).
The assumption of unit-treatment additivity implies that, for every treatment t, the t-th treatment has exactly the same effect t(t) on every experimental unit.
The assumption of unit treatment additivity usually cannot be directly falsified, according to Cox and Kempthorne. However, many "consequences" of treatment-unit additivity can be falsified. For a randomized experiment, the assumption of unit-treatment additivity "implies" that the variance is constant for all treatments. Therefore, by contraposition, a necessary condition for unit-treatment additivity is that the variance is constant.
The use of unit treatment additivity and randomization is similar to the design-based inference that is standard in finite-population survey sampling.
Kempthorne uses the randomization-distribution and the assumption of "unit treatment additivity" to produce a "derived linear model", very similar to the textbook model discussed previously. The test statistics of this derived linear model are closely approximated by the test statistics of an appropriate normal linear model, according to approximation theorems and simulation studies. However, there are differences. For example, the randomization-based analysis results in a small but (strictly) negative correlation between the observations. In the randomization-based analysis, there is "no assumption" of a "normal" distribution and certainly "no assumption" of "independence". On the contrary, "the observations are dependent"!
The randomization-based analysis has the disadvantage that its exposition involves tedious algebra and extensive time. Since the randomization-based analysis is complicated and is closely approximated by the approach using a normal linear model, most teachers emphasize the normal linear model approach. Few statisticians object to model-based analysis of balanced randomized experiments.
However, when applied to data from non-randomized experiments or observational studies, model-based analysis lacks the warrant of randomization. For observational data, the derivation of confidence intervals must use "subjective" models, as emphasized by Ronald Fisher and his followers. In practice, the estimates of treatment effects from observational studies are often inconsistent; "statistical models" and observational data are useful for suggesting hypotheses, but these should be treated very cautiously by the public.
The normal-model based ANOVA analysis assumes the independence, normality and
homogeneity of variances of the residuals. The
randomization-based analysis assumes only the homogeneity of the
variances of the residuals (as a consequence of unit-treatment
additivity) and uses the randomization procedure of the experiment.
Both these analyses require homoscedasticity, as an assumption for the normal-model analysis and as a consequence of randomization and additivity for the randomization-based analysis.
However, studies of processes that
change variances rather than means (called dispersion effects) have
been successfully conducted using ANOVA. There are
"no" necessary assumptions for ANOVA in its full generality, but the
"F"-test used for ANOVA hypothesis testing has assumptions and practical
limitations which are of continuing interest.
Problems which do not satisfy the assumptions of ANOVA can often be transformed to satisfy the assumptions.
The property of unit-treatment additivity is not invariant under a "change of scale", so statisticians often use transformations to achieve unit-treatment additivity. If the response variable is expected to follow a parametric family of probability distributions, then the statistician may specify (in the protocol for the experiment or observational study) that the responses be transformed to stabilize the variance. Also, a statistician may specify that logarithmic transforms be applied to the responses, which are believed to follow a multiplicative model.
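The multiplicative-to-additive role of the log transform can be sketched numerically. The unit and treatment values below are invented for illustration:

```python
import math

# Hypothetical multiplicative model: response = unit_effect * treatment_effect.
units = [1.0, 2.0, 4.0]       # baseline responses of three experimental units
treatments = [1.0, 3.0]       # treatment multipliers

responses = [[u * t for t in treatments] for u in units]

# On the original scale the treatment "effect" (difference) varies by unit,
# so unit-treatment additivity fails:
diffs = [row[1] - row[0] for row in responses]        # 2.0, 4.0, 8.0

# After a log transform the treatment effect is the same constant, log(3),
# for every unit, so additivity holds on the log scale:
log_diffs = [math.log(row[1]) - math.log(row[0]) for row in responses]
```

This is why responses believed to follow a multiplicative model are routinely log-transformed before ANOVA.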
According to Cauchy's functional equation theorem, the logarithm is the only continuous transformation that transforms real multiplication to addition.
ANOVA is used in the analysis of comparative experiments, those in
which only the difference in outcomes is of interest. The statistical
significance of the experiment is determined by a ratio of two
variances. This ratio is independent of several possible alterations
to the experimental observations: Adding a constant to all
observations does not alter significance. Multiplying all
observations by a constant does not alter significance. So the ANOVA
statistical-significance result is independent of constant bias and
scaling errors, as well as of the units used in expressing observations.
In the era of mechanical calculation it was common to
subtract a constant from all observations (when equivalent to
dropping leading digits) to simplify data entry. This is an example of data
coding.
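This invariance under shifting and rescaling is easy to demonstrate. The data and the coding below are invented for illustration:

```python
def f_statistic(groups):
    """One-way ANOVA F ratio computed from first principles."""
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_treat = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_error = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_treat / (k - 1)) / (ss_error / (n - k))

data = [[31, 35, 34], [42, 44, 40], [38, 37, 39]]    # made-up observations
coded = [[(x - 30) * 10 for x in g] for g in data]   # shift and rescale ("data coding")

# The F ratio, and hence the significance verdict, is the same for both.
```

Subtracting 30 shifts every mean equally, and multiplying by 10 scales both sums of squares by the same factor, so the ratio is untouched.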
The calculations of ANOVA can be characterized as computing a number
of means and variances, dividing two variances and comparing the ratio
to a handbook value to determine statistical significance. Calculating
a treatment effect is then trivial: "the effect of any treatment is
estimated by taking the difference between the mean of the
observations which receive the treatment and the general mean".
ANOVA uses traditional standardized terminology. The definitional
equation of sample variance is
s² = Σ(yᵢ − ȳ)² / (n − 1), where the
divisor is called the degrees of freedom (DF), the summation is called
the sum of squares (SS), the result is called the mean square (MS) and
the squared terms are deviations from the sample mean. ANOVA
estimates 3 sample variances: a total variance based on all the
observation deviations from the grand mean, an error variance based on
all the observation deviations from their appropriate
treatment means, and a treatment variance. The treatment variance is
based on the deviations of treatment means from the grand mean, the
result being multiplied by the number of observations in each
treatment to account for the difference between the variance of
observations and the variance of means.
The fundamental technique is a partitioning of the total sum of squares "SS" into components related to the effects used in the model. For example, the model for a simplified ANOVA with one type of treatment at different levels partitions as SS_Total = SS_Error + SS_Treatments.
The number of degrees of freedom "DF" can be partitioned in a similar way, DF_Total = DF_Error + DF_Treatments: one of these components (that for error) specifies a chi-squared distribution which describes the associated sum of squares, while the same is true for "treatments" if there is no treatment effect.
See also Lack-of-fit sum of squares.
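The partition of the sums of squares and of the degrees of freedom can be verified numerically. The observations below are invented, with three treatment levels and four observations each:

```python
# Invented one-way layout: three treatment levels, four observations each.
groups = [[6.0, 8.0, 4.0, 5.0], [8.0, 12.0, 9.0, 11.0], [13.0, 9.0, 11.0, 8.0]]

n = sum(len(g) for g in groups)
k = len(groups)
grand_mean = sum(sum(g) for g in groups) / n
means = [sum(g) / len(g) for g in groups]

ss_total = sum((x - grand_mean) ** 2 for g in groups for x in g)
ss_treat = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
ss_error = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)

# The partition SS_Total = SS_Treatments + SS_Error holds exactly,
# and the degrees of freedom partition the same way: (n-1) = (k-1) + (n-k).
assert abs(ss_total - (ss_treat + ss_error)) < 1e-9
assert n - 1 == (k - 1) + (n - k)
```

The treatment sum of squares multiplies squared mean deviations by the group size, exactly as described above, to put it on the same footing as the observation-level sums.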
The "F"-test is used for comparing the factors of the total deviation. For example, in one-way, or single-factor ANOVA, statistical significance is tested for by comparing the F test statistic
where "MS" is mean square, formula_17 = number of treatments and
formula_18 = total number of cases
to the "F"-distribution with formula_19, formula_20 degrees of freedom. Using the "F"-distribution is a natural candidate because the test statistic is the ratio of two scaled sums of squares each of which follows a scaled chi-squared distribution.
The expected value of F is 1 + n·σ²_Treatment / σ²_Error (where n is the treatment sample size)
which is 1 for no treatment effect. As values of F increase above 1, the evidence is increasingly inconsistent with the null hypothesis. Two apparent experimental methods of increasing F are increasing the sample size and reducing the error variance by tight experimental controls.
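As a sketch of the full computation, with invented observations and the tabled 5% critical value F_0.05(2, 12) ≈ 3.89 taken as given:

```python
# Invented one-way layout: three treatments, five observations each.
groups = [[18, 20, 21, 19, 22], [24, 27, 25, 26, 23], [20, 22, 21, 23, 19]]

n = sum(len(g) for g in groups)          # 15 cases in total
k = len(groups)                          # 3 treatments
means = [sum(g) / len(g) for g in groups]
grand = sum(sum(g) for g in groups) / n

ms_treat = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means)) / (k - 1)
ms_error = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g) / (n - k)
F = ms_treat / ms_error                  # 14.0 for these numbers

# Handbook comparison: F_0.05(2, 12) is about 3.89, so the null
# hypothesis of equal treatment means is rejected at the 5% level.
reject = F > 3.89
```

Comparing F against a tabled critical value is the "handbook" route mentioned earlier; computing a p-value from the F-distribution gives the same verdict.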
There are two methods of concluding the ANOVA hypothesis test, both of which produce the same result: the textbook method compares the observed value of F with the critical value of F determined from tables (the null hypothesis is rejected if the observed value exceeds the critical value); the computer method calculates the probability (p-value) of a value of F greater than or equal to the observed value (the null hypothesis is rejected if this probability is less than or equal to the significance level).
The ANOVA "F"-test is known to be nearly optimal in the sense of minimizing false negative errors for a fixed rate of false positive errors (i.e. maximizing power for a fixed significance level). For example, to test the hypothesis that various medical treatments have exactly the same effect, the "F"-test's "p"-values closely approximate the permutation test's p-values: The approximation is particularly close when the design is balanced. Such permutation tests characterize tests with maximum power against all alternative hypotheses, as observed by Rosenbaum. The ANOVA "F"-test (of the null-hypothesis that all treatments have exactly the same effect) is recommended as a practical test, because of its robustness against many alternative distributions.
ANOVA consists of separable parts; partitioning sources of variance
and hypothesis testing can be used individually. ANOVA is used to
support other statistical tools. Regression is first used to fit more
complex models to data, then ANOVA is used to compare models with the
objective of selecting simple(r) models that adequately describe the
data. "Such models could be fit without any reference to ANOVA, but
ANOVA tools could then be used to make some sense of the fitted models,
and to test hypotheses about batches of coefficients."
"[W]e think of the analysis of variance as a way of understanding and structuring
multilevel models—not as an alternative to regression but as a tool
for summarizing complex high-dimensional inferences ..."
The simplest experiment suitable for ANOVA analysis is the completely
randomized experiment with a single factor. More complex experiments
with a single factor involve constraints on randomization and include
completely randomized blocks and Latin squares (and variants:
Graeco-Latin squares, etc.). The more complex experiments share many
of the complexities of multiple factors. A relatively complete
discussion of the analysis (models, data summaries, ANOVA table) of
the completely randomized experiment is
available.
ANOVA generalizes to the study of the effects of multiple factors.
When the experiment includes observations at all combinations of
levels of each factor, it is termed factorial.
Factorial experiments
are more efficient than a series of single factor experiments and the
efficiency grows as the number of factors increases. Consequently, factorial designs are heavily used.
The use of ANOVA to study the effects of multiple factors has a complication. In a 3-way ANOVA with factors x, y and z, the ANOVA model includes terms for the main effects (x, y, z) and terms for interactions (xy, xz, yz, xyz).
All terms require hypothesis tests. The proliferation of interaction terms increases the risk that some hypothesis test will produce a false positive by chance. Fortunately, experience says that high order interactions are rare.
The ability to detect interactions is a major advantage of multiple
factor ANOVA. Testing one factor at a time hides interactions, but
produces apparently inconsistent experimental results.
Caution is advised when encountering interactions; test
interaction terms first and expand the analysis beyond ANOVA if
interactions are found. Texts vary in their recommendations regarding
the continuation of the ANOVA procedure after encountering an
interaction. Interactions complicate the interpretation of
experimental data. Neither the calculations of significance nor the
estimated treatment effects can be taken at face value. "A
significant interaction will often mask the significance of main effects." Graphical methods are recommended
to enhance understanding. Regression is often useful. A lengthy discussion of interactions is available in Cox (1958). Some interactions can be removed (by transformations) while others cannot.
A variety of techniques are used with multiple factor ANOVA to reduce expense. One technique used in factorial designs is to minimize replication (possibly no replication with support of analytical trickery) and to combine groups when effects are found to be statistically (or practically) insignificant. An experiment with many insignificant factors may collapse into one with a few factors supported by many replications.
Numerous fully worked numerical examples are available in standard textbooks and online. A
simple case uses one-way (a single factor) analysis.
Some analysis is required in support of the "design" of the experiment while other analysis is performed after changes in the factors are formally found to produce statistically significant changes in the responses. Because experimentation is iterative, the results of one experiment alter plans for following experiments.
In the design of an experiment, the number of experimental units is planned to satisfy the goals of the experiment. Experimentation is often sequential.
Early experiments are often designed to provide mean-unbiased estimates of treatment effects and of experimental error. Later experiments are often designed to test a hypothesis that a treatment effect has an important magnitude; in this case, the number of experimental units is chosen so that the experiment is within budget and has adequate power, among other goals.
Reporting sample size analysis is generally required in psychology. "Provide information on sample size and the process that led to sample size decisions." The analysis, which is written in the experimental protocol before the experiment is conducted, is examined in grant applications and administrative review boards.
Besides the power analysis, there are less formal methods for selecting the number of experimental units. These include graphical methods based on limiting
the probability of false negative errors, graphical methods based on an expected variation increase (above the residuals) and methods based on achieving a desired confidence interval.
Power analysis is often applied in the context of ANOVA in order to assess the probability of successfully rejecting the null hypothesis if we assume a certain ANOVA design, effect size in the population, sample size and significance level. Power analysis can assist in study design by determining what sample size would be required in order to have a reasonable chance of rejecting the null hypothesis when the alternative hypothesis is true.
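One way to carry out such a power analysis is by Monte Carlo simulation. Everything below is an assumed example: the group means, the common standard deviation, and the tabled critical value F_0.05(2, 27) ≈ 3.35 are illustrative choices, not prescriptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def f_statistic(groups):
    """One-way ANOVA F ratio for a list of NumPy arrays."""
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = np.concatenate(groups).mean()
    ss_treat = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_treat / (k - 1)) / (ss_error / (n - k))

# Assumed design: 3 groups of 10, true means 0, 0.5, 1.0, sd 1, alpha = 0.05.
# F_0.05(2, 27) is about 3.35 (tabled value, an assumption of this sketch).
true_means, n_per, sd, crit = [0.0, 0.5, 1.0], 10, 1.0, 3.35

trials = 2000
rejections = sum(
    f_statistic([rng.normal(m, sd, n_per) for m in true_means]) > crit
    for _ in range(trials)
)
power = rejections / trials   # roughly one rejection in two for this design
```

Raising the sample size per group or the separation of the assumed means raises the estimated power, which is how such a simulation guides the choice of sample size.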
Several standardized measures of effect have been proposed for ANOVA to summarize the strength of the association between a predictor(s) and the dependent variable or the overall standardized difference of the complete model. Standardized effect-size estimates facilitate comparison of findings across studies and disciplines. However, while standardized effect sizes are commonly used in much of the professional literature, a non-standardized measure of effect size that has immediately "meaningful" units may be preferable for reporting purposes.
It is always appropriate to carefully consider outliers. They have a disproportionate impact on statistical conclusions and are often the result of errors.
It is prudent to verify that the assumptions of ANOVA have been met. Residuals are examined or analyzed to confirm homoscedasticity and gross normality. Residuals should have the appearance of (zero mean normal distribution) noise when plotted as a function of anything including time and
modeled data values. Trends hint at interactions among factors or among observations. One rule of thumb: "If the largest standard deviation is less than twice the smallest standard deviation, we can use methods based on the assumption of equal standard deviations and our results
will still be approximately correct."
A statistically significant effect in ANOVA is often followed up with one or more different follow-up tests. This can be done in order to assess which groups are different from which other groups or to test various other focused hypotheses. Follow-up tests are often distinguished in terms of whether they are planned (a priori) or post hoc. Planned tests are determined before looking at the data and post hoc tests are performed after looking at the data.
Often one of the "treatments" is none, so the treatment group can act as a control. Dunnett's test (a modification of the "t"-test) tests whether each of the other treatment groups has the same mean as the control.
Post hoc tests such as Tukey's range test most commonly compare every group mean with every other group mean and typically incorporate some method of controlling for Type I errors. Comparisons, which are most commonly planned, can be either simple or compound. Simple comparisons compare one group mean with one other group mean. Compound comparisons typically compare two sets of groups means where one set has two or more groups (e.g., compare average group means of group A, B and C with group D). Comparisons can also look at tests of trend, such as linear and quadratic relationships, when the independent variable involves ordered levels.
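The distinction between simple and compound comparisons can be made concrete as contrasts among group means. The group labels and mean values below are invented for illustration:

```python
# Hypothetical group means for four treatment groups.
means = {"A": 10.0, "B": 12.0, "C": 11.0, "D": 18.0}

# Simple comparison: one group mean against one other group mean.
simple = means["A"] - means["D"]

# Compound comparison: the average of groups A, B and C against group D,
# expressed as a contrast whose coefficients sum to zero.
coeffs = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3, "D": -1.0}
compound = sum(coeffs[g] * means[g] for g in means)

# Coefficients summing to zero is the defining property of a comparison.
assert abs(sum(coeffs.values())) < 1e-12
```

A trend test for ordered levels works the same way, with coefficients such as (−1, 0, 1) for a linear trend across three ordered groups.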
Following ANOVA with pair-wise multiple-comparison tests has been criticized on several grounds. There are many such tests (10 in one table) and recommendations regarding their use are vague or conflicting.
There are several types of ANOVA. Many statisticians base ANOVA on the design of the experiment, especially on the protocol that specifies the random assignment of treatments to subjects; the protocol's description of the assignment mechanism should include a specification of the structure of the treatments and of any blocking. It is also common to apply ANOVA to observational data using an appropriate statistical model.
Some popular designs use the following types of ANOVA: one-way ANOVA, used to test for differences among two or more independent groups; factorial ANOVA, used when the experimenter wants to study the interaction effects among the treatments; repeated measures ANOVA, used when the same subjects are used for each treatment; and multivariate analysis of variance (MANOVA), used when there is more than one response variable.
Balanced experiments (those with an equal sample size for each treatment) are relatively easy to interpret; Unbalanced
experiments offer more complexity. For single-factor (one-way) ANOVA, the adjustment for unbalanced data is easy, but the unbalanced analysis lacks both robustness and power. For more complex designs the lack of balance leads to further complications. "The orthogonality property of main effects and interactions present in balanced data does not carry over to the unbalanced case. This means that the usual analysis of variance techniques do not apply.
Consequently, the analysis of unbalanced factorials is much more difficult than that for balanced designs." In the general case, "The analysis of variance can also be applied to unbalanced data, but then the sums of squares, mean squares, and "F"-ratios will depend on the order in which the sources of variation
are considered." The simplest techniques for handling unbalanced data restore balance by either throwing out data or by synthesizing missing data. More complex techniques use regression.
ANOVA is (in part) a test of statistical significance. The American Psychological Association (and many other organisations) holds the view that simply reporting statistical significance is insufficient and that reporting confidence bounds is preferred.
While ANOVA is conservative (in maintaining a significance level) against multiple comparisons in one dimension, it is not conservative against comparisons in multiple dimensions.
A common mistake is to use an ANOVA (or Kruskal–Wallis) for analysis of ordered groups, e.g. in time sequence (changes over months), in disease severity (mild, moderate, severe), or in distance from a set point (10 km, 25 km, 50 km). Data in three or more ordered groups that are defined by the researcher should be analysed by Linear Trend Estimation.
ANOVA is considered to be a special case of linear regression which in turn is a special case of the general linear model. All consider the observations to be the sum of a model (fit) and a residual (error) to be minimized.
The Kruskal–Wallis test and the Friedman test are nonparametric tests, which do not rely on an assumption of normality.
Below we make clear the connection between multi-way ANOVA and linear regression.
Linearly re-order the data so that the k-th observation is associated with a response y(k) and factors Z(k,b), where b ∈ {1, 2, …, B} denotes the different factors and B is the total number of factors. In one-way ANOVA B = 1 and in two-way ANOVA B = 2. Furthermore, we assume the b-th factor has I_b levels, namely {1, …, I_b}. Now, we can one-hot encode the factors into a vector v(k) of dimension Σ_b I_b.
The one-hot encoding function g_b : {1, …, I_b} → {0, 1}^(I_b) is defined such that the i-th entry of g_b(Z(k,b)) is
1 if i = Z(k,b), and 0 otherwise.
The vector v(k) is the concatenation of all of the above vectors for all b. Thus, v(k) = [g_1(Z(k,1)), g_2(Z(k,2)), …, g_B(Z(k,B))]. In order to obtain a fully general B-way interaction ANOVA we must also concatenate every additional interaction term in the vector v(k) and then add an intercept term. Let that vector be X(k).
With this notation in place, we now have the exact connection with linear regression. We simply regress the response y(k) against the vector X(k). However, there is a concern about identifiability. In order to overcome such issues we assume that the sum of the parameters within each set of interactions is equal to zero. From here, one can use "F"-statistics or other methods to determine the relevance of the individual factors.
We can consider the 2-way interaction example where we assume that the first factor has 2 levels and the second factor has 3 levels.
Define a_i = 1 if Z(k,1) = i and b_j = 1 if Z(k,2) = j; i.e., a is the one-hot encoding of the first factor and b is the one-hot encoding of the second factor.
With that,
X(k) = [a_1, a_2, b_1, b_2, b_3, a_1·b_1, a_1·b_2, a_1·b_3, a_2·b_1, a_2·b_2, a_2·b_3, 1],
where the last term is an intercept term. For a more concrete example suppose that Z(k,1) = 2 and Z(k,2) = 1. Then,
X(k) = [0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1].
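The two-factor encoding just described (first factor with 2 levels, second with 3) can be sketched with NumPy. The invented responses and the use of a minimum-norm least-squares fit, rather than the sum-to-zero constraints mentioned above, are illustrative assumptions:

```python
import numpy as np

def encode(level1, level2):
    """One-hot main effects, all 6 interaction indicators, and an intercept:
    2 + 3 + 6 + 1 = 12 columns."""
    a = np.eye(2)[level1 - 1]        # one-hot for factor 1 (levels 1..2)
    b = np.eye(3)[level2 - 1]        # one-hot for factor 2 (levels 1..3)
    inter = np.outer(a, b).ravel()   # interaction indicators a_i * b_j
    return np.concatenate([a, b, inter, [1.0]])

# Factor 1 at level 2, factor 2 at level 1 gives the row
# [0,1, 1,0,0, 0,0,0,1,0,0, 1].
x = encode(2, 1)

# Regressing one invented response per cell on these rows reproduces the
# data exactly, because the interaction columns saturate the six cells.
X = np.vstack([encode(i, j) for i in (1, 2) for j in (1, 2, 3)])
y = np.array([1.0, 2.0, 3.0, 2.0, 3.0, 4.0])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

`lstsq` resolves the identifiability problem by returning the minimum-norm coefficient vector; imposing sum-to-zero constraints within each set of effects, as in the text, is the conventional ANOVA parameterization of the same fit.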
Alkane
In organic chemistry, an alkane, or paraffin (a historical name that also has other meanings), is an acyclic saturated hydrocarbon. In other words, an alkane consists of hydrogen and carbon atoms arranged in a tree structure in which all the carbon–carbon bonds are single. Alkanes have the general chemical formula C"n"H2"n"+2. The alkanes range in complexity from the simplest case of methane (CH4), where "n" = 1 (sometimes called the parent molecule), to arbitrarily large and complex molecules, like pentacontane (C50H102) or 6-ethyl-2-methyl-5-(1-methylethyl) octane, an isomer of tetradecane (C14H30).
IUPAC defines alkanes as "acyclic branched or unbranched hydrocarbons having the general formula C"n"H2"n"+2, and therefore consisting entirely of hydrogen atoms and saturated carbon atoms". However, some sources use the term to denote "any" saturated hydrocarbon, including those that are either monocyclic (i.e. the cycloalkanes) or polycyclic, despite their having a distinct general formula (i.e. cycloalkanes are C"n"H2"n").
In an alkane, each carbon atom is sp3-hybridized with 4 sigma bonds (either C–C or C–H), and each hydrogen atom is joined to one of the carbon atoms (in a C–H bond). The longest series of linked carbon atoms in a molecule is known as its carbon skeleton or carbon backbone. The number of carbon atoms may be considered as the size of the alkane.
One group of the higher alkanes are waxes, solids at standard ambient temperature and pressure (SATP), for which the number of carbon atoms in the carbon backbone is greater than about 17.
With their repeated –CH2 units, the alkanes constitute a homologous series of organic compounds in which the members differ in molecular mass by multiples of 14.03 u (the total mass of each such methylene-bridge unit, which comprises a single carbon atom of mass 12.01 u and two hydrogen atoms of mass ~1.01 u each).
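The 14.03 u spacing of the homologous series follows directly from the atomic masses; a minimal sketch:

```python
# Masses of the constituent atoms (unified atomic mass units)
C_MASS, H_MASS = 12.011, 1.008

def alkane_mass(n):
    """Molecular mass of the acyclic alkane CnH2n+2."""
    return n * C_MASS + (2 * n + 2) * H_MASS

# successive members of the series differ by one CH2 unit (~14.03 u)
for n in (1, 2, 3):
    print(f"C{n}H{2 * n + 2}: {alkane_mass(n):.2f} u")
print(f"step: {alkane_mass(2) - alkane_mass(1):.2f} u")
```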
Methane is produced by methanogenic bacteria and some long-chain alkanes function as pheromones in certain animal species or as protective waxes in plants and fungi. Nevertheless, most alkanes do not have much biological activity. They can be viewed as molecular trees upon which can be hung the more active/reactive functional groups of biological molecules.
The alkanes have two main commercial sources: petroleum (crude oil) and natural gas.
An alkyl group is an alkane-based molecular fragment that bears one open valence for bonding. They are generally abbreviated with the symbol for any organyl group, R, although Alk is sometimes used to specifically symbolize an alkyl group (as opposed to an alkenyl group or aryl group).
Saturated hydrocarbons are hydrocarbons having only single covalent bonds between their carbons. They can be:
According to the definition by IUPAC, the former two are alkanes, whereas the third group is called cycloalkanes. Saturated hydrocarbons can also combine any of the linear, cyclic (e.g., polycyclic) and branching structures; the general formula is C"n"H2("n"+1−"k"), where "k" is the number of independent loops. Alkanes are the acyclic (loopless) ones, corresponding to "k" = 0.
Alkanes with more than three carbon atoms can be arranged in various ways, forming structural isomers. The simplest isomer of an alkane is the one in which the carbon atoms are arranged in a single chain with no branches. This isomer is sometimes called the "n"-isomer ("n" for "normal", although it is not necessarily the most common). However, the chain of carbon atoms may also be branched at one or more points. The number of possible isomers increases rapidly with the number of carbon atoms. For example, for acyclic alkanes:
Branched alkanes can be chiral. For example, 3-methylhexane and its higher homologues are chiral due to their stereogenic center at carbon atom number 3. The above list only includes differences of connectivity, not stereochemistry. In addition to the alkane isomers, the chain of carbon atoms may form one or more rings. Such compounds are called cycloalkanes, and are also excluded from the above list because changing the number of rings changes the molecular formula. Cyclobutane and methylcyclopropane are isomers of each other, but are not isomers of butane.
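The rapid growth in isomer count can be tabulated for the first members of the series. The values below are the standard enumeration results for constitutional isomers of acyclic C"n"H2"n"+2 (stereoisomers and rings excluded); they are hard-coded reference values rather than computed:

```python
# Known counts of constitutional (structural) isomers of the acyclic
# alkanes CnH2n+2, ignoring stereochemistry (standard enumeration
# results; growth is roughly exponential in n).
ACYCLIC_ISOMERS = {1: 1, 2: 1, 3: 1, 4: 2, 5: 3,
                   6: 5, 7: 9, 8: 18, 9: 35, 10: 75}

for n, count in ACYCLIC_ISOMERS.items():
    print(f"C{n}H{2 * n + 2}: {count} isomer(s)")
```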
The IUPAC nomenclature (systematic way of naming compounds) for alkanes is based on identifying hydrocarbon chains. Unbranched, saturated hydrocarbon chains are named systematically with a Greek numerical prefix denoting the number of carbons and the suffix "-ane".
In 1866, August Wilhelm von Hofmann suggested systematizing nomenclature by using the whole sequence of vowels a, e, i, o and u to create suffixes -ane, -ene, -ine (or -yne), -one, -une, for the hydrocarbons C"n"H2"n"+2, C"n"H2"n", C"n"H2"n"−2, C"n"H2"n"−4, C"n"H2"n"−6. Now, the first three name hydrocarbons with single, double and triple bonds; "-one" represents a ketone; "-ol" represents an alcohol or OH group; "-oxy-" means an ether and refers to oxygen between two carbons, so that methoxymethane is the IUPAC name for dimethyl ether.
It is difficult or impossible to find compounds with more than one IUPAC name. This is because shorter chains attached to longer chains are prefixes and the convention includes brackets. Numbers in the name, referring to which carbon a group is attached to, should be as low as possible so that 1- is implied and usually omitted from names of organic compounds with only one side-group. Symmetric compounds will have two ways of arriving at the same name.
Straight-chain alkanes are sometimes indicated by the prefix "n-" or ""n"-"(for "normal") where a non-linear isomer exists. Although this is not strictly necessary, the usage is still common in cases where there is an important difference in properties between the straight-chain and branched-chain isomers, e.g., "n"-hexane or 2- or 3-methylpentane. Alternative names for this group are: linear paraffins or "n"-paraffins.
The members of the series (in terms of number of carbon atoms) are named as follows:
The first four names were derived from methanol, ether, propionic acid and butyric acid, respectively (hexadecane is also sometimes referred to as cetane). Alkanes with five or more carbon atoms are named by adding the suffix -ane to the appropriate numerical multiplier prefix with elision of any terminal vowel ("-a" or "-o") from the basic numerical term. Hence, pentane, C5H12; hexane, C6H14; heptane, C7H16; octane, C8H18; etc. The prefix is generally Greek, however alkanes with a carbon atom count ending in nine, for example nonane, use the Latin prefix non-. For a more complete list, see List of alkanes.
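The prefix-plus-"-ane" scheme is mechanical enough to sketch in code. The list below covers only the first ten members and is purely illustrative:

```python
# Names of the first ten unbranched alkanes: the first four are
# retained historical names, the rest use numerical multiplier
# prefixes (note the Latin prefix non- in nonane).
ALKANE_NAMES = ["methane", "ethane", "propane", "butane", "pentane",
                "hexane", "heptane", "octane", "nonane", "decane"]

def alkane_name(n):
    """Name and formula of the unbranched alkane with n carbons (n <= 10 here)."""
    return f"{ALKANE_NAMES[n - 1]} (C{n}H{2 * n + 2})"

print(alkane_name(5))  # pentane (C5H12)
print(alkane_name(9))  # nonane (C9H20)
```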
Simple branched alkanes often have a common name using a prefix to distinguish them from linear alkanes, for example "n"-pentane, isopentane, and neopentane.
IUPAC naming conventions can be used to produce a systematic name.
The key steps in the naming of more complicated branched alkanes are as follows: identify the longest continuous chain of carbon atoms; number that chain so that the substituents receive the lowest possible locants; name each side chain as an alkyl prefix; and combine the prefixes in alphabetical order, each with its locant, in front of the parent chain name.
Though technically distinct from the alkanes, this class of hydrocarbons is referred to by some as the "cyclic alkanes." As their description implies, they contain one or more rings.
Simple cycloalkanes have a prefix "cyclo-" to distinguish them from alkanes. Cycloalkanes are named as per their acyclic counterparts with respect to the number of carbon atoms in their backbones, e.g., cyclopentane (C5H10) is a cycloalkane with 5 carbon atoms just like pentane (C5H12), but they are joined up in a five-membered ring. In a similar manner, propane and cyclopropane, butane and cyclobutane, etc.
Substituted cycloalkanes are named similarly to substituted alkanes – the cycloalkane ring is stated, and the substituents are according to their position on the ring, with the numbering decided by the Cahn–Ingold–Prelog priority rules.
The trivial (non-systematic) name for alkanes is 'paraffins'. Together, alkanes are known as the 'paraffin series'. Trivial names for compounds are usually historical artifacts. They were coined before the development of systematic names, and have been retained due to familiar usage in industry. Cycloalkanes are also called naphthenes.
It is almost certain that the term 'paraffin' stems from the petrochemical industry. Branched-chain alkanes are called isoparaffins. 'Paraffin' is a general term and often does not distinguish between pure compounds and mixtures of isomers, i.e., compounds of the same chemical formula, e.g., pentane and isopentane.
The following trivial names are retained in the IUPAC system:
Some non-IUPAC trivial names are occasionally used:
All alkanes are colorless. Alkanes with the lowest molecular weights are gases, those of intermediate molecular weight are liquids, and the heaviest are waxy solids.
Alkanes experience intermolecular van der Waals forces. Stronger intermolecular van der Waals forces give rise to higher boiling points.
There are two determinants for the strength of the van der Waals forces: the number of electrons surrounding the molecule, which increases with the alkane's molecular weight, and the surface area of the molecule.
Under standard conditions, alkanes from CH4 to C4H10 are gaseous; from C5H12 to C17H36 they are liquids; and from C18H38 onwards they are solids. As the boiling point of alkanes is primarily determined by molecular weight, it is no surprise that the boiling point has an almost linear relationship with the size (molecular weight) of the molecule. As a rule of thumb, the boiling point rises 20–30 °C for each carbon added to the chain; this rule applies to other homologous series as well.
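The 20–30 °C-per-carbon rule of thumb can be sketched as a crude linear estimate. The anchor point and increment below are assumptions chosen for illustration, not fitted values:

```python
# Crude sketch of the boiling-point rule of thumb, anchored at
# n-pentane (bp ~36 degrees C) with an assumed 25 degrees C per
# added carbon (the midpoint of the quoted 20-30 degree range).
BP_PENTANE, PER_CARBON = 36.0, 25.0

def rough_bp(n):
    """Rough boiling-point estimate (degrees C) for the n-carbon
    straight-chain alkane; only sensible for mid-sized n."""
    return BP_PENTANE + PER_CARBON * (n - 5)

for n in (5, 6, 7, 8):
    print(f"C{n}: ~{rough_bp(n):.0f} C")
```

The estimate drifts from tabulated values for small and large n, which is expected of a linear rule of thumb.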
A straight-chain alkane will have a boiling point higher than a branched-chain alkane due to the greater surface area in contact, thus the greater van der Waals forces, between adjacent molecules. For example, compare isobutane (2-methylpropane) and n-butane (butane), which boil at −12 and 0 °C, and 2,2-dimethylbutane and 2,3-dimethylbutane which boil at 50 and 58 °C, respectively. For the latter case, two molecules 2,3-dimethylbutane can "lock" into each other better than the cross-shaped 2,2-dimethylbutane, hence the greater van der Waals forces.
On the other hand, cycloalkanes tend to have higher boiling points than their linear counterparts due to the locked conformations of the molecules, which give a plane of intermolecular contact.
The melting points of the alkanes follow a similar trend to boiling points for the same reason as outlined above: all other things being equal, the larger the molecule, the higher the melting point. There is, however, one significant difference between boiling points and melting points. Solids have a more rigid and fixed structure than liquids, and this rigid structure requires energy to break down, so more tightly packed solid structures require more energy to break apart. For alkanes, this can be seen from the graph above (the blue line): the odd-numbered alkanes have lower melting points than the even-numbered alkanes, because even-numbered alkanes pack well in the solid phase, forming a well-organized structure that requires more energy to break apart, whereas odd-numbered alkanes pack less well, so their more loosely organized solid structure requires less energy to break apart.
The melting points of branched-chain alkanes can be either higher or lower than those of the corresponding straight-chain alkanes, again depending on the ability of the alkane in question to pack well in the solid phase: This is particularly true for isoalkanes (2-methyl isomers), which often have melting points higher than those of the linear analogues.
Alkanes do not conduct electricity in any way, nor are they substantially polarized by an electric field. For this reason, they do not form hydrogen bonds and are insoluble in polar solvents such as water. Since the hydrogen bonds between individual water molecules are aligned away from an alkane molecule, the coexistence of an alkane and water leads to an increase in molecular order (a reduction in entropy). As there is no significant bonding between water molecules and alkane molecules, the second law of thermodynamics suggests that this reduction in entropy should be minimized by minimizing the contact between alkane and water: Alkanes are said to be hydrophobic as they repel water.
Their solubility in nonpolar solvents is relatively high, a property that is called lipophilicity. Alkanes are, for example, miscible in all proportions among themselves.
The density of the alkanes usually increases with the number of carbon atoms but remains less than that of water. Hence, alkanes form the upper layer in an alkane–water mixture.
The molecular structure of the alkanes directly affects their physical and chemical characteristics. It is derived from the electron configuration of carbon, which has four valence electrons. The carbon atoms in alkanes are always sp3-hybridized, that is to say that the valence electrons are in four equivalent orbitals derived from the combination of the 2s orbital and the three 2p orbitals. These orbitals, which have identical energies, are arranged spatially in the form of a tetrahedron, with an angle of cos−1(−1⁄3) ≈ 109.47° between them.
An alkane has only C–H and C–C single bonds. The former result from the overlap of an sp3 orbital of carbon with the 1s orbital of a hydrogen; the latter by the overlap of two sp3 orbitals on adjacent carbon atoms. The bond lengths amount to 1.09 × 10−10 m for a C–H bond and 1.54 × 10−10 m for a C–C bond.
The spatial arrangement of the bonds is similar to that of the four sp3 orbitals—they are tetrahedrally arranged, with an angle of 109.47° between them. Structural formulae that represent the bonds as being at right angles to one another, while both common and useful, do not correspond with the reality.
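The 109.47° figure quoted above is arccos(−1/3), which a one-line check confirms:

```python
import math

# The tetrahedral bond angle follows from the geometry of four
# equivalent sp3 orbitals: arccos(-1/3) in degrees.
angle = math.degrees(math.acos(-1.0 / 3.0))
print(f"{angle:.2f} degrees")  # 109.47

# Bond lengths quoted in the text, in metres
C_H_BOND, C_C_BOND = 1.09e-10, 1.54e-10
```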
The structural formula and the bond angles are not usually sufficient to completely describe the geometry of a molecule. There is a further degree of freedom for each carbon–carbon bond: the torsion angle between the atoms or groups bound to the atoms at each end of the bond. The spatial arrangement described by the torsion angles of the molecule is known as its conformation.
Ethane forms the simplest case for studying the conformation of alkanes, as there is only one C–C bond. If one looks down the axis of the C–C bond, one will see the so-called Newman projection. The hydrogen atoms on both the front and rear carbon atoms have an angle of 120° between them, resulting from the projection of the base of the tetrahedron onto a flat plane. However, the torsion angle between a given hydrogen atom attached to the front carbon and a given hydrogen atom attached to the rear carbon can vary freely between 0° and 360°. This is a consequence of the free rotation about a carbon–carbon single bond. Despite this apparent freedom, only two limiting conformations are important: eclipsed conformation and staggered conformation.
The two conformations differ in energy: the staggered conformation is 12.6 kJ/mol lower in energy (more stable) than the eclipsed conformation (the least stable).
This difference in energy between the two conformations, known as the torsion energy, is low compared to the thermal energy of an ethane molecule at ambient temperature. There is constant rotation about the C–C bond. The time taken for an ethane molecule to pass from one staggered conformation to the next, equivalent to the rotation of one CH3 group by 120° relative to the other, is of the order of 10−11 seconds.
The case of higher alkanes is more complex but based on similar principles, with the antiperiplanar conformation always being the most favored around each carbon–carbon bond. For this reason, alkanes are usually shown in a zigzag arrangement in diagrams or in models. The actual structure will always differ somewhat from these idealized forms, as the differences in energy between the conformations are small compared to the thermal energy of the molecules: Alkane molecules have no fixed structural form, whatever the models may suggest.
Virtually all organic compounds contain carbon–carbon, and carbon–hydrogen bonds, and so show some of the features of alkanes in their spectra. Alkanes are notable for having no other groups, and therefore for the "absence" of other characteristic spectroscopic features of a functional group like –OH, –CHO, –COOH etc.
The carbon–hydrogen stretching mode gives a strong absorption between 2850 and 2960 cm−1, while the carbon–carbon stretching mode absorbs between 800 and 1300 cm−1. The carbon–hydrogen bending modes depend on the nature of the group: methyl groups show bands at 1450 cm−1 and 1375 cm−1, while methylene groups show bands at 1465 cm−1 and 1450 cm−1. Carbon chains with more than four carbon atoms show a weak absorption at around 725 cm−1.
The proton resonances of alkanes are usually found at "δ"H = 0.5–1.5. The carbon-13 resonances depend on the number of hydrogen atoms attached to the carbon: "δ"C = 8–30 (primary, methyl, –CH3), 15–55 (secondary, methylene, –CH2–), 20–60 (tertiary, methine, C–H) and quaternary. The carbon-13 resonance of quaternary carbon atoms is characteristically weak, due to the lack of nuclear Overhauser effect and the long relaxation time, and can be missed in weak samples, or samples that have not been run for a sufficiently long time.
Alkanes have a high ionization energy, and the molecular ion is usually weak. The fragmentation pattern can be difficult to interpret, but, in the case of branched chain alkanes, the carbon chain is preferentially cleaved at tertiary or quaternary carbons due to the relative stability of the resulting free radicals. The fragment resulting from the loss of a single methyl group ("M" − 15) is often absent, and other fragments are often spaced by intervals of fourteen mass units, corresponding to sequential loss of CH2 groups.
Alkanes are only weakly reactive with most chemical compounds. The acid dissociation constant (p"K"a) values of all alkanes are estimated to range from 50 to 70, depending on the extrapolation method, hence they are extremely weak acids that are practically inert to bases (see: carbon acids). They are also extremely weak bases, undergoing no observable protonation in pure sulfuric acid ("H"0 ~ –12), although superacids that are at least millions of times stronger have been known to protonate them to give hypercoordinate alkanium ions (see: methanium ion). Similarly, they only show reactivity with the strongest of electrophilic reagents (e.g., dioxiranes and salts containing the NF4+ cation). By virtue of their strong C–H bonds (~100 kcal/mol) and C–C bonds (~90 kcal/mol, but usually less sterically accessible), they are also relatively unreactive toward free radicals, although many electron-deficient radicals will react with alkanes in the absence of other electron-rich bonds (see below). This inertness is the source of the term "paraffins" (with the meaning here of "lacking affinity"). In crude oil the alkane molecules have remained chemically unchanged for millions of years.
Free radicals, molecules with unpaired electrons, play a large role in most reactions of alkanes, such as cracking and reformation where long-chain alkanes are converted into shorter-chain alkanes and straight-chain alkanes into branched-chain isomers. Moreover, redox reactions of alkanes involving free radical intermediates, in particular with oxygen and the halogens, are possible as the carbon atoms are in a strongly reduced state; in the case of methane, carbon is in its lowest possible oxidation state (−4). Reaction with oxygen ("if" present in sufficient quantity to satisfy the reaction stoichiometry) leads to combustion without any smoke, producing carbon dioxide and water. Free radical halogenation reactions occur with halogens, leading to the production of haloalkanes. In addition, alkanes have been shown to interact with, and bind to, certain transition metal complexes in C–H bond activation reactions.
In highly branched alkanes, the bond angle may differ significantly from the optimal value (109.5°) to accommodate bulky groups. Such distortions introduce a tension in the molecule, known as steric hindrance or strain. Strain substantially increases reactivity.
However, in general and perhaps surprisingly, when branching is not extensive enough to make highly disfavorable 1,2- and 1,3-alkyl–alkyl steric interactions (worth ~3.1 kcal/mol and ~3.7 kcal/mol in the case of the eclipsing conformations of butane and pentane, respectively) unavoidable, the branched alkanes are actually more thermodynamically stable than their linear (or less branched) isomers. For example, the highly branched 2,2,3,3-tetramethylbutane is about 1.9 kcal/mol more stable than its linear isomer, "n"-octane. Due to the subtlety of this effect, the exact reasons for this rule have been vigorously debated in the chemical literature and is yet unsettled. Several explanations, including stabilization of branched alkanes by electron correlation, destabilization of linear alkanes by steric repulsion, stabilization by neutral hyperconjugation, and/or electrostatic effects have been advanced as possibilities. The controversy is related to the question of whether the traditional explanation of hyperconjugation is the primary factor governing the stability of alkyl radicals.
All alkanes react with oxygen in a combustion reaction, although they become increasingly difficult to ignite as the number of carbon atoms increases. The general equation for complete combustion is:
C"n"H2"n"+2 + (3"n" + 1)/2 O2 → "n" CO2 + ("n" + 1) H2O
In the absence of sufficient oxygen, carbon monoxide or even soot can be formed, as shown below:
C"n"H2"n"+2 + (2"n" + 1)/2 O2 → "n" CO + ("n" + 1) H2O
For example, methane:
CH4 + 2 O2 → CO2 + 2 H2O
See the alkane heat of formation table for detailed data.
The standard enthalpy change of combustion, Δc"H"⊖, for alkanes increases by about 650 kJ/mol per CH2 group. Branched-chain alkanes have lower values of Δc"H"⊖ than straight-chain alkanes of the same number of carbon atoms, and so can be seen to be somewhat more stable.
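The combustion stoichiometry of C"n"H2"n"+2 and the roughly 650 kJ/mol-per-CH2 increment quoted above can be sketched together. The methane anchor value (~890 kJ/mol) is a widely tabulated figure used here as an assumption:

```python
from fractions import Fraction

def combustion(n):
    """Balanced coefficients for complete combustion of CnH2n+2:
    CnH2n+2 + (3n+1)/2 O2 -> n CO2 + (n+1) H2O."""
    return {"O2": Fraction(3 * n + 1, 2), "CO2": n, "H2O": n + 1}

print(combustion(1))  # methane: 2 O2 -> 1 CO2 + 2 H2O

def rough_heat_of_combustion(n):
    """Rough enthalpy of combustion in kJ/mol, using the ~650 kJ/mol
    per CH2 increment from the text anchored at methane (~890 kJ/mol,
    an assumed tabulated value)."""
    return 890 + 650 * (n - 1)

print(rough_heat_of_combustion(3))  # propane: roughly 2190 kJ/mol
```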
Alkanes react with halogens in a so-called "free radical halogenation" reaction. The hydrogen atoms of the alkane are progressively replaced by halogen atoms. Free radicals are the reactive species that participate in the reaction, which usually leads to a mixture of products. The reaction is highly exothermic, and can lead to an explosion.
These reactions are an important industrial route to halogenated hydrocarbons. There are three steps: initiation, in which halogen radicals are formed by homolysis (usually driven by heat or ultraviolet light); propagation, in which a halogen radical abstracts a hydrogen atom from the alkane and the resulting alkyl radical attacks a halogen molecule, regenerating a halogen radical; and termination, in which two radicals recombine.
Experiments have shown that all halogenation produces a mixture of all possible isomers, indicating that all hydrogen atoms are susceptible to reaction. The mixture produced, however, is not a statistical mixture: Secondary and tertiary hydrogen atoms are preferentially replaced due to the greater stability of secondary and tertiary free-radicals. An example can be seen in the monobromination of propane:
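The non-statistical product mixture can be illustrated with propane by weighting each hydrogen class with an assumed relative reactivity. The 82:1 secondary-to-primary ratio below is an illustrative textbook-style value for bromination, not a figure from this text:

```python
# Sketch of why monobromination of propane is not statistical.
# Propane has 6 primary and 2 secondary hydrogens; secondary
# hydrogens are taken to be far more reactive toward bromine
# radicals (82:1 is an assumed illustrative selectivity).
N_PRIMARY, N_SECONDARY = 6, 2
REL_PRIMARY, REL_SECONDARY = 1.0, 82.0

w1 = N_PRIMARY * REL_PRIMARY        # weight -> 1-bromopropane
w2 = N_SECONDARY * REL_SECONDARY    # weight -> 2-bromopropane
total = w1 + w2
print(f"1-bromopropane: {100 * w1 / total:.1f}%")
print(f"2-bromopropane: {100 * w2 / total:.1f}%")
```

A purely statistical mixture would give 75% of the primary product (6 of 8 hydrogens); the assumed selectivity inverts that almost completely.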
Cracking breaks larger molecules into smaller ones. This can be done with a thermal or catalytic method. The thermal cracking process follows a homolytic mechanism with formation of free-radicals. The catalytic cracking process involves the presence of acid catalysts (usually solid acids such as silica-alumina and zeolites), which promote a heterolytic (asymmetric) breakage of bonds yielding pairs of ions of opposite charges, usually a carbocation and the very unstable hydride anion. Carbon-localized free radicals and cations are both highly unstable and undergo processes of chain rearrangement, C–C scission in position beta (i.e., cracking) and intra- and intermolecular hydrogen transfer or hydride transfer. In both types of processes, the corresponding reactive intermediates (radicals, ions) are permanently regenerated, and thus they proceed by a self-propagating chain mechanism. The chain of reactions is eventually terminated by radical or ion recombination.
Dragan and his colleague were the first to report on isomerization in alkanes. Isomerization and reformation are processes in which straight-chain alkanes are heated in the presence of a platinum catalyst. In isomerization, the alkanes become branched-chain isomers; the alkane does not lose any carbon or hydrogen atoms and keeps the same molecular weight. In reformation, the alkanes become cycloalkanes or aromatic hydrocarbons, giving off hydrogen as a by-product. Both of these processes raise the octane number of the substance. Butane is the most common alkane that is put under the process of isomerization, as it makes many branched alkanes with high octane numbers.
Alkanes will react with steam in the presence of a nickel catalyst to give hydrogen. Alkanes can be chlorosulfonated and nitrated, although both reactions require special conditions. The fermentation of alkanes to carboxylic acids is of some technical importance. In the Reed reaction, sulfur dioxide, chlorine and light convert hydrocarbons to sulfonyl chlorides. Nucleophilic Abstraction can be used to separate an alkane from a metal. Alkyl groups can be transferred from one compound to another by transmetalation reactions. A mixture of antimony pentafluoride (SbF5) and fluorosulfonic acid (HSO3F), called magic acid, can protonate alkanes.
Alkanes form a small portion of the atmospheres of the outer gas planets such as Jupiter (0.1% methane, 2 ppm ethane), Saturn (0.2% methane, 5 ppm ethane), Uranus (1.99% methane, 2.5 ppm ethane) and Neptune (1.5% methane, 1.5 ppm ethane). Titan (1.6% methane), a satellite of Saturn, was examined by the "Huygens" probe, which indicated that Titan's atmosphere periodically rains liquid methane onto the moon's surface. Also on Titan, the Cassini mission has imaged seasonal methane/ethane lakes near the polar regions. Methane and ethane have also been detected in the tail of the comet Hyakutake. Chemical analysis showed that the abundances of ethane and methane were roughly equal, which is thought to imply that its ices formed in interstellar space, far from the Sun, whose heat would have evaporated these volatile molecules. Alkanes have also been detected in meteorites such as carbonaceous chondrites.
Traces of methane gas (about 0.0002% or 1745 ppb) occur in the Earth's atmosphere, produced primarily by methanogenic microorganisms, such as Archaea in the gut of ruminants.
The most important commercial sources for alkanes are natural gas and oil. Natural gas contains primarily methane and ethane, with some propane and butane: oil is a mixture of liquid alkanes and other hydrocarbons. These hydrocarbons were formed when marine animals and plants (zooplankton and phytoplankton) died and sank to the bottom of ancient seas and were covered with sediments in an anoxic environment and converted over many millions of years at high temperatures and high pressure to their current form. Natural gas resulted thereby for example from the following reaction:
C6H12O6 → 3 CH4 + 3 CO2
These hydrocarbon deposits, collected in porous rocks trapped beneath impermeable cap rocks, comprise commercial oil fields. They have formed over millions of years and once exhausted cannot be readily replaced. The depletion of these hydrocarbon reserves is the basis for what is known as the energy crisis.
Methane is also present in what is called biogas, produced by animals and decaying matter, which is a possible renewable energy source.
Alkanes have a low solubility in water, so the content in the oceans is negligible; however, at high pressures and low temperatures (such as at the bottom of the oceans), methane can co-crystallize with water to form a solid methane clathrate (methane hydrate). Although this cannot be commercially exploited at the present time, the amount of combustible energy of the known methane clathrate fields exceeds the energy content of all the natural gas and oil deposits put together. Methane extracted from methane clathrate is, therefore, a candidate for future fuels.
Acyclic alkanes occur in nature in various ways.
Certain types of bacteria can metabolize alkanes: they prefer even-numbered carbon chains as they are easier to degrade than odd-numbered chains.
On the other hand, certain archaea, the methanogens, produce large quantities of methane by the metabolism of carbon dioxide or other oxidized organic compounds. The energy is released by the oxidation of hydrogen:
CO2 + 4 H2 → CH4 + 2 H2O
Methanogens are also the producers of marsh gas in wetlands, and release about two billion tonnes of methane per year—the atmospheric content of this gas is produced nearly exclusively by them. The methane output of cattle and other herbivores, which can release 30 to 50 gallons per day, and of termites, is also due to methanogens. They also produce this simplest of all alkanes in the intestines of humans. Methanogenic archaea are, hence, at the end of the carbon cycle, with carbon being released back into the atmosphere after having been fixed by photosynthesis. It is probable that our current deposits of natural gas were formed in a similar way.
Alkanes also play a role, if a minor role, in the biology of the three eukaryotic groups of organisms: fungi, plants and animals. Some specialized yeasts, e.g., "Candida tropicale", "Pichia" sp., "Rhodotorula" sp., can use alkanes as a source of carbon or energy. The fungus "Amorphotheca resinae" prefers the longer-chain alkanes in aviation fuel, and can cause serious problems for aircraft in tropical regions.
In plants, the solid long-chain alkanes are found in the plant cuticle and epicuticular wax of many species, but are only rarely major constituents. They protect the plant against water loss, prevent the leaching of important minerals by the rain, and protect against bacteria, fungi, and harmful insects. The carbon chains in plant alkanes are usually odd-numbered, between 27 and 33 carbon atoms in length and are made by the plants by decarboxylation of even-numbered fatty acids. The exact composition of the layer of wax is not only species-dependent but changes also with the season and such environmental factors as lighting conditions, temperature or humidity.
More volatile short-chain alkanes are also produced by and found in plant tissues. The Jeffrey pine is noted for producing exceptionally high levels of "n"-heptane in its resin, for which reason its distillate was designated as the zero point of the octane rating scale. Floral scents have also long been known to contain volatile alkane components, and "n"-nonane is a significant component in the scent of some roses. Emission of gaseous and volatile alkanes such as ethane, pentane, and hexane by plants has also been documented at low levels, though they are not generally considered to be a major component of biogenic air pollution.
Edible vegetable oils also typically contain small fractions of biogenic alkanes with a wide spectrum of carbon numbers, mainly 8 to 35, usually peaking in the low to upper 20s, with concentrations up to dozens of milligrams per kilogram (parts per million by weight) and sometimes over a hundred for the total alkane fraction.
Alkanes are found in animal products, although they are less important than unsaturated hydrocarbons. One example is the shark liver oil, which is approximately 14% pristane (2,6,10,14-tetramethylpentadecane, C19H40). They are important as pheromones, chemical messenger materials, on which insects depend for communication. In some species, e.g. the support beetle "Xylotrechus colonus", pentacosane (C25H52), 3-methylpentaicosane (C26H54) and 9-methylpentaicosane (C26H54) are transferred by body contact. With others like the tsetse fly "Glossina morsitans morsitans", the pheromone contains the four alkanes 2-methylheptadecane (C18H38), 17,21-dimethylheptatriacontane (C39H80), 15,19-dimethylheptatriacontane (C39H80) and 15,19,23-trimethylheptatriacontane (C40H82), and acts by smell over longer distances. Waggle-dancing honey bees produce and release two alkanes, tricosane and pentacosane.
One example, in which both plant and animal alkanes play a role, is the ecological relationship between the sand bee ("Andrena nigroaenea") and the early spider orchid ("Ophrys sphegodes"); the latter is dependent for pollination on the former. Sand bees use pheromones in order to identify a mate; in the case of "A. nigroaenea", the females emit a mixture of tricosane (C23H48), pentacosane (C25H52) and heptacosane (C27H56) in the ratio 3:3:1, and males are attracted by specifically this odor. The orchid takes advantage of this mating arrangement to get the male bee to collect and disseminate its pollen; parts of its flower not only resemble the appearance of sand bees but also produce large quantities of the three alkanes in the same ratio as female sand bees. As a result, numerous males are lured to the blooms and attempt to copulate with their imaginary partner: although this endeavor is not crowned with success for the bee, it allows the orchid to transfer its pollen, which will be dispersed after the departure of the frustrated male to other blooms.
As stated earlier, the most important source of alkanes is natural gas and crude oil. Alkanes are separated in an oil refinery by fractional distillation and processed into many products.
The Fischer–Tropsch process is a method to synthesize liquid hydrocarbons, including alkanes, from carbon monoxide and hydrogen. This method is used to produce substitutes for petroleum distillates.
There is usually little need for alkanes to be synthesized in the laboratory, since they are usually commercially available. Also, alkanes are generally unreactive chemically and biologically, and do not undergo clean functional group interconversions. When alkanes are produced in the laboratory, it is often as a side-product of a reaction. For example, the use of "n"-butyllithium as a strong base gives its conjugate acid, "n"-butane, as a side-product:
However, at times it may be desirable to make a section of a molecule into an alkane-like functionality (alkyl group) using the above or similar methods. For example, an ethyl group is an alkyl group; when it is attached to a hydroxy group, it gives ethanol, which is not an alkane. The best-known method for doing so is the hydrogenation of alkenes:
Alkanes or alkyl groups can also be prepared directly from alkyl halides in the Corey–House–Posner–Whitesides reaction. The Barton–McCombie deoxygenation removes hydroxyl groups from alcohols e.g.
and the Clemmensen reduction removes carbonyl groups from aldehydes and ketones to form alkanes or alkyl-substituted compounds e.g.:
The applications of alkanes depend on the number of carbon atoms. The first four alkanes are used mainly for heating and cooking, and in some countries for electricity generation. Methane and ethane are the main components of natural gas; they are normally stored as gases under pressure. It is, however, easier to transport them as liquids, which requires both compression and cooling of the gas.
Propane and butane are gases at atmospheric pressure that can be liquefied at fairly low pressures and are commonly known as liquefied petroleum gas (LPG). Propane is used in propane gas burners and as a fuel for road vehicles, butane in space heaters and disposable cigarette lighters. Both are used as propellants in aerosol sprays.
From pentane to octane the alkanes are highly volatile liquids. They are used as fuels in internal combustion engines, as they vaporize easily on entry into the combustion chamber without forming droplets, which would impair the uniformity of the combustion. Branched-chain alkanes are preferred as they are much less prone to premature ignition, which causes knocking, than their straight-chain homologues. This propensity to premature ignition is measured by the octane rating of the fuel, where 2,2,4-trimethylpentane ("isooctane") has an arbitrary value of 100, and heptane has a value of zero. Apart from their use as fuels, the middle alkanes are also good solvents for nonpolar substances.
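The octane scale described above is anchored by two reference fuels: 2,2,4-trimethylpentane (isooctane) at 100 and heptane at 0, so a reference blend of the two has, by definition, an octane number equal to its volume percent of isooctane. A minimal sketch (hypothetical helper name; Python, not from the source):

```python
def reference_blend_octane(isooctane_fraction: float) -> float:
    """Octane number of an isooctane/heptane reference blend.

    By definition, 2,2,4-trimethylpentane (isooctane) rates 100 and
    n-heptane rates 0, so a reference blend's octane number is simply
    the volume percent of isooctane it contains.
    """
    if not 0.0 <= isooctane_fraction <= 1.0:
        raise ValueError("fraction must be between 0 and 1")
    return 100.0 * isooctane_fraction

# Pure isooctane defines 100 octane; a 50:50 blend defines 50 octane.
print(reference_blend_octane(1.0))  # 100.0
print(reference_blend_octane(0.5))  # 50.0
```

Note that real fuels are rated by matching their knock behavior against such reference blends in a test engine; the linear relation above holds only for the reference blends themselves.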
Alkanes from nonane to, for instance, hexadecane (an alkane with sixteen carbon atoms) are liquids of higher viscosity, less and less suitable for use in gasoline. They form instead the major part of diesel and aviation fuel. Diesel fuels are characterized by their cetane number, cetane being an old name for hexadecane. However, the higher melting points of these alkanes can cause problems at low temperatures and in polar regions, where the fuel becomes too thick to flow correctly.
Alkanes from hexadecane upwards form the most important components of fuel oil and lubricating oil. In the latter function, they work at the same time as anti-corrosive agents, as their hydrophobic nature means that water cannot reach the metal surface. Many solid alkanes find use as paraffin wax, for example, in candles. This should not be confused however with true wax, which consists primarily of esters.
Alkanes with a chain length of approximately 35 or more carbon atoms are found in bitumen, used, for example, in road surfacing. However, the higher alkanes have little value and are usually split into lower alkanes by cracking.
Some synthetic polymers such as polyethylene and polypropylene are alkanes with chains containing hundreds or thousands of carbon atoms. These materials are used in innumerable applications, and billions of kilograms of these materials are made and used each year.
Alkanes are chemically very inert, apolar molecules, and are among the least reactive organic compounds. This inertness causes serious ecological problems when they are released into the environment. Due to their lack of functional groups and low water solubility, alkanes show poor bioavailability to microorganisms.
There are, however, some microorganisms possessing the metabolic capacity to utilize "n"-alkanes as both carbon and energy sources. Some bacterial species are highly specialised in degrading alkanes; these are referred to as hydrocarbonoclastic bacteria.
Methane is flammable, explosive and dangerous to inhale; because it is a colorless, odorless gas, special caution must be taken around it. Ethane is also extremely flammable, explosive and dangerous to inhale. Both may cause suffocation. Similarly, propane is flammable and explosive, and may cause drowsiness or unconsciousness if inhaled. Butane presents the same hazards as propane.
Alkanes also pose a threat to the environment. Branched alkanes have a lower biodegradability than unbranched alkanes. Methane is, moreover, a potent greenhouse gas: although its concentration in the atmosphere is low, it poses a real threat to the environment. | https://en.wikipedia.org/wiki?curid=639 
Appellate procedure in the United States
United States appellate procedure involves the rules and regulations for filing appeals in state and federal courts. The nature of an appeal can vary greatly depending on the type of case and the rules of the court in the jurisdiction where the case was heard. There are many standards of review for appeals, such as "de novo" and abuse of discretion. However, most appeals begin when a party files a petition for review with a higher court for the purpose of overturning the lower court's decision.
An appellate court is a court that hears cases on appeal from another court. Depending on the particular legal rules that apply to each circumstance, a party to a court case who is unhappy with the result might be able to challenge that result in an appellate court on specific grounds. These grounds typically could include errors of law, fact, procedure or due process. In different jurisdictions, appellate courts are also called appeals courts, courts of appeals, superior courts, or supreme courts.
The specific procedures for appealing, including even whether there is a right of appeal from a particular type of decision, can vary greatly from state to state. The right to file an appeal can also vary from state to state; for example, the New Jersey Constitution vests judicial power in a Supreme Court, a Superior Court, and other courts of limited jurisdiction, with an appellate court being part of the Superior Court.
A party who files an appeal is called an "appellant", "plaintiff in error", "petitioner" or "pursuer", and a party on the other side is called an "appellee". A "cross-appeal" is an appeal brought by the respondent. For example, suppose at trial the judge found for the plaintiff and ordered the defendant to pay $50,000. If the defendant files an appeal arguing that he should not have to pay any money, then the plaintiff might file a cross-appeal arguing that the defendant should have to pay $200,000 instead of $50,000.
The appellant is the party who, having lost part or all their claim in a lower court decision, is appealing to a higher court to have their case reconsidered. This is usually done on the basis that the lower court judge erred in the application of law, but it may also be possible to appeal on the basis of court misconduct, or that a finding of fact was entirely unreasonable to make on the evidence.
The appellant in the new case can be either the plaintiff (or claimant), defendant, third-party intervenor, or respondent (appellee) from the lower case, depending on who was the losing party. The winning party from the lower court, however, is now the respondent. In unusual cases the appellant can be the victor in the court below, but still appeal.
An appellee is the party to an appeal in which the lower court judgment was in its favor. The appellee is required to respond to the petition, oral arguments, and legal briefs of the appellant. In general, the appellee takes the procedural posture that the lower court's decision should be affirmed.
An appeal "as of right" is one that is guaranteed by statute or some underlying constitutional or legal principle; the appellate court cannot refuse to hear it. An appeal "by leave" or "permission" requires the appellant to obtain leave to appeal; in such a situation either or both of the lower court and the appellate court may have the discretion to grant or refuse the appellant's request to appeal the lower court's decision. In the United States Supreme Court, review in most cases is available only if the Court exercises its discretion and grants a writ of certiorari.
In tort, equity, or other civil matters either party to a previous case may file an appeal. In criminal matters, however, the state or prosecution generally has no appeal "as of right", and due to the double jeopardy principle, the state or prosecution may never appeal a jury or bench verdict of acquittal. But in some jurisdictions, the state or prosecution may appeal "as of right" from a trial court's dismissal of an indictment in whole or in part or from a trial court's granting of a defendant's suppression motion. Likewise, in some jurisdictions, the state or prosecution may appeal an issue of law "by leave" from the trial court or the appellate court. The ability of the prosecution to appeal a decision in favor of a defendant varies significantly internationally. All parties must present grounds to appeal, or the appeal will not be heard.
By convention in some law reports, the appellant is named first. This can mean that where it is the defendant who appeals, the name of the case in the law reports reverses (in some cases twice) as the appeals work their way up the court hierarchy. This is not always true, however. In the federal courts, the parties' names always stay in the same order as the lower court when an appeal is taken to the circuit courts of appeals, and are re-ordered only if the appeal reaches the Supreme Court.
Many jurisdictions recognize two types of appeals, particularly in the criminal context. The first is the traditional "direct" appeal in which the appellant files an appeal with the next higher court of review. The second is the collateral appeal or post-conviction petition, in which the petitioner-appellant files the appeal in a court of first instance—usually the court that tried the case.
The key distinguishing factor between direct and collateral appeals is that the former is taken to a higher court of review, while the latter begins as a new proceeding, typically in the court that originally tried the case.
Relief in post-conviction is rare and is most often found in capital or violent felony cases. The typical scenario involves an incarcerated defendant locating DNA evidence demonstrating the defendant's actual innocence.
"Appellate review" is the general term for the process by which courts with appellate jurisdiction take jurisdiction of matters decided by lower courts. It is distinguished from judicial review, which refers to the court's overriding constitutional or statutory right to determine if a legislative act or administrative decision is defective for jurisdictional or other reasons (which may vary by jurisdiction).
In most jurisdictions the normal and preferred way of seeking appellate review is by filing an appeal of the final judgment. Generally, an appeal of the judgment will also allow appeal of all other orders or rulings made by the trial court in the course of the case. This is because such orders cannot be appealed "as of right". However, certain critical interlocutory court orders, such as the denial of a request for an interim injunction, or an order holding a person in contempt of court, can be appealed immediately although the case may otherwise not have been fully disposed of.
There are two distinct forms of appellate review, "direct" and "collateral". For example, a criminal defendant may be convicted in state court, and lose on "direct appeal" to higher state appellate courts, and if unsuccessful, mount a "collateral" action such as filing for a writ of habeas corpus in the federal courts. Generally speaking, "[d]irect appeal statutes afford defendants the opportunity to challenge the merits of a judgment and allege errors of law or fact. ... [Collateral review], on the other hand, provide[s] an independent and civil inquiry into the validity of a conviction and sentence, and as such are generally limited to challenges to constitutional, jurisdictional, or other fundamental violations that occurred at trial." "Graham v. Borgen", 483 F 3d. 475 (7th Cir. 2007) (no. 04-4103) (slip op. at 7) (citation omitted).
In Anglo-American common law courts, appellate review of lower court decisions may also be obtained by filing a petition for review by prerogative writ in certain cases. There is no corresponding right to a writ in any pure or continental civil law legal systems, though some mixed systems such as Quebec recognize these prerogative writs.
After exhausting the first appeal as of right, defendants usually petition the highest state court to review the decision. This appeal is known as a direct appeal. The highest state court, generally known as the Supreme Court, exercises discretion over whether it will review the case. On direct appeal, a prisoner challenges the grounds of the conviction based on an error that occurred at trial or some other stage in the adjudicative process.
An appellant's claim(s) must usually be preserved at trial; that is, the defendant must have objected to the error when it occurred at trial. Because constitutional claims are of great magnitude, appellate courts might be more lenient in reviewing a claim even if it was not preserved. For example, Connecticut applies the following standard to review unpreserved claims: (1) the record is adequate to review the alleged claim of error; (2) the claim is of constitutional magnitude, alleging the violation of a fundamental right; (3) the alleged constitutional violation clearly exists and clearly deprived the defendant of a fair trial; and (4) if subject to harmless error analysis, the state has failed to demonstrate harmlessness of the alleged constitutional violation beyond a reasonable doubt.
All states have a post-conviction relief process. Similar to federal post-conviction relief, an appellant can petition the court to correct alleged fundamental errors that were not corrected on direct review. Typical claims might include ineffective assistance of counsel and actual innocence based on new evidence. These proceedings are normally separate from the direct appeal; however, some states allow collateral relief to be sought on direct appeal. After direct appeal, the conviction is considered final. An appeal from the post-conviction court proceeds just as a direct appeal: it goes to the intermediate appellate court, followed by the highest court. If the petition is granted, the appellant could be released from incarceration, the sentence could be modified, or a new trial could be ordered.
A "notice of appeal" is a form or document that in many cases is required to begin an appeal. The form is completed by the appellant or by the appellant's legal representative. The nature of this form can vary greatly from country to country and from court to court within a country.
The specific rules of the legal system will dictate exactly how the appeal is officially begun. For example, the appellant might have to file the notice of appeal with the appellate court, or with the court from which the appeal is taken, or both.
Some courts have samples of a notice of appeal on the court's own web site. In New Jersey, for example, the Administrative Office of the Court has promulgated a form of notice of appeal for use by appellants, though using this exact form is not mandatory and the failure to use it is not a jurisdictional defect provided that all pertinent information is set forth in whatever form of notice of appeal is used.
The deadline for beginning an appeal can often be very short: traditionally, it is measured in days, not months. This can vary from country to country, as well as within a country, depending on the specific rules in force. In the U.S. federal court system, criminal defendants must file a notice of appeal within 14 days of the entry of either the judgment or the order being appealed, or the right to appeal is forfeited.
Generally speaking the appellate court examines the record of evidence presented in the trial court and the law that the lower court applied and decides whether that decision was legally sound or not. The appellate court will typically be deferential to the lower court's findings of fact (such as whether a defendant committed a particular act), unless clearly erroneous, and so will focus on the court's application of the law to those facts (such as whether the act found by the court to have occurred fits a legal definition at issue).
If the appellate court finds no defect, it "affirms" the judgment. If the appellate court does find a legal defect in the decision "below" (i.e., in the lower court), it may "modify" the ruling to correct the defect, or it may nullify ("reverse" or "vacate") the whole decision or any part of it. It may, in addition, send the case back ("remand" or "remit") to the lower court for further proceedings to remedy the defect.
In some cases, an appellate court may review a lower court decision "de novo" (that is, anew), revisiting even the lower court's findings of fact. This might be the proper standard of review, for example, if the lower court resolved the case by granting a pre-trial motion to dismiss or a motion for summary judgment, which is usually based only upon written submissions to the trial court and not on any trial testimony.
Another situation is where appeal is by way of "re-hearing". Certain jurisdictions permit certain appeals to cause the trial to be heard afresh in the appellate court.
Sometimes, the appellate court finds a defect in the procedure the parties used in filing the appeal and dismisses the appeal without considering its merits, which has the same effect as affirming the judgment below. (This would happen, for example, if the appellant waited too long, under the appellate court's rules, to file the appeal.)
Generally, there is no trial in an appellate court; only the record of the evidence presented to the trial court, together with all the pre-trial and trial court proceedings, is reviewed. Unless the appeal is by way of re-hearing, new evidence will usually be considered on appeal only in "very" rare instances, for example if that material evidence was unavailable to a party for some very significant reason, such as prosecutorial misconduct.
In some systems, an appellate court will only consider the written decision of the lower court, together with any written evidence that was before that court and is relevant to the appeal. In other systems, the appellate court will normally consider the record of the lower court. In those cases the record will first be certified by the lower court.
The appellant has the opportunity to present arguments for the granting of the appeal and the appellee (or respondent) can present arguments against it. Arguments of the parties to the appeal are presented through their appellate lawyers, if represented, or "pro se" if the party has not engaged legal representation. Those arguments are presented in written briefs and sometimes in oral argument to the court at a hearing. At such hearings each party is allowed a brief presentation at which the appellate judges ask questions based on their review of the record below and the submitted briefs.
In an adversarial system, appellate courts do not have the power to review lower court decisions unless a party appeals them. Therefore, if a lower court has ruled in an improper manner, or against legal precedent, that judgment will stand if not appealed, even if it might have been overturned on appeal.
The United States legal system generally recognizes two types of appeals: a trial "de novo" or an appeal on the record.
A trial de novo is usually available for review of informal proceedings conducted by some minor judicial tribunals in proceedings that do not provide all the procedural attributes of a formal judicial trial. If unchallenged, these decisions have the power to settle more minor legal disputes once and for all. If a party is dissatisfied with the finding of such a tribunal, one generally has the power to request a trial "de novo" by a court of record. In such a proceeding, all issues and evidence may be developed anew, as though never heard before, and one is not restricted to the evidence heard in the lower proceeding. Sometimes, however, the decision of the lower proceeding is itself admissible as evidence, thus helping to curb frivolous appeals.
In some cases, an application for "trial de novo" effectively erases the prior trial as if it had never taken place. The Supreme Court of Virginia has stated: "This Court has repeatedly held that the effect of an appeal to circuit court is to 'annul the judgment of the inferior tribunal as completely as if there had been no previous trial.'" The only exception is that if a defendant appeals a conviction for a crime having multiple levels of offenses, and was convicted of a lesser offense, the appeal is of the lesser offense only; the conviction represents an acquittal of the more serious offenses. "[A] trial on the same charges in the circuit court does not violate double jeopardy principles, . . . subject only to the limitation that conviction in [the] district court for an offense lesser included in the one charged constitutes an acquittal of the greater offense, permitting trial de novo in the circuit court only for the lesser-included offense."
In an appeal on the record from a decision in a judicial proceeding, both appellant and respondent are bound to base their arguments wholly on the proceedings and body of evidence as they were presented in the lower tribunal. Each seeks to prove to the higher court that the result they desired was the just result. Precedent and case law figure prominently in the arguments. In order for the appeal to succeed, the appellant must prove that the lower court committed reversible error, that is, an impermissible action by the court that caused an unjust result which would not have occurred had the court acted properly. Some examples of reversible error would be erroneously instructing the jury on the law applicable to the case, permitting seriously improper argument by an attorney, admitting or excluding evidence improperly, acting outside the court's jurisdiction, injecting bias into the proceeding or appearing to do so, juror misconduct, etc. The failure to formally object at the time to what one views as improper action in the lower court may result in the affirmance of the lower court's judgment on the grounds that one did not "preserve the issue for appeal" by objecting.
In cases where a judge rather than a jury decided issues of fact, an appellate court will apply an "abuse of discretion" standard of review. Under this standard, the appellate court gives deference to the lower court's view of the evidence, and reverses its decision only if it finds a clear abuse of discretion. This is usually defined as a decision outside the bounds of reasonableness. On the other hand, the appellate court normally gives less deference to a lower court's decision on issues of law, and may reverse if it finds that the lower court applied the wrong legal standard.
In some cases, an appellant may successfully argue that the law under which the lower decision was rendered was unconstitutional or otherwise invalid, or may convince the higher court to order a new trial on the basis that evidence earlier sought was concealed or only recently discovered. In the case of new evidence, there must be a high probability that its presence or absence would have made a material difference in the trial. Another issue suitable for appeal in criminal cases is effective assistance of counsel. If a defendant has been convicted and can prove that his lawyer did not adequately handle his case and that there is a reasonable probability that the result of the trial would have been different had the lawyer given competent representation, he is entitled to a new trial.
A lawyer traditionally starts an oral argument to any appellate court with the words "May it please the court."
After an appeal is heard, the "mandate" is a formal notice of a decision by a court of appeal; this notice is transmitted to the trial court and, when filed by the clerk of the trial court, constitutes the final judgment on the case, unless the appeal court has directed further proceedings in the trial court. The mandate is distinguished from the appeal court's opinion, which sets out the legal reasoning for its decision. In some jurisdictions the mandate is known as the "remittitur".
The result of an appeal can be that the lower court's decision is affirmed, reversed, remanded, or modified.
There can be multiple outcomes, so that the reviewing court can affirm some rulings, reverse others and remand the case all at the same time. Remand is not required where there is nothing left to do in the case. "Generally speaking, an appellate court's judgment provides 'the final directive of the appeals courts as to the matter appealed, setting out with specificity the court's determination that the action appealed from should be affirmed, reversed, remanded or modified'".
Some reviewing courts that have discretionary review may send a case back without comment other than "review improvidently granted". In other words, after looking at the case, they chose not to say anything. The result for the case of "review improvidently granted" is effectively the same as affirmed, but without that extra higher court stamp of approval. | https://en.wikipedia.org/wiki?curid=640 
Answer (law)
In law, an Answer was originally a solemn assertion in opposition to someone or something, and thus generally any counter-statement or defense, a reply to a question or response, or objection, or a correct solution of a problem.
In the common law, an Answer is the first pleading by a defendant, usually filed and served upon the plaintiff within a certain strict time limit after a civil complaint or criminal information or indictment has been served upon the defendant. It may have been preceded by an "optional" "pre-answer" motion to dismiss or demurrer; if such a motion is unsuccessful, the defendant "must" file an answer to the complaint or risk an adverse default judgment.
In a criminal case, there is usually an arraignment or some other kind of appearance before the defendant comes to court. The plea in the criminal case, which is entered on the record in open court, is usually either guilty or not guilty. Generally speaking, in private civil cases there is no plea of guilt or innocence; there is only a judgment that grants money damages or some other kind of equitable remedy, such as restitution or a permanent injunction. Criminal cases may lead to fines or other punishment, such as imprisonment.
The famous Latin "Responsa Prudentium" ("answers of the learned ones") were the accumulated views of many successive generations of Roman lawyers, a body of legal opinion which gradually became authoritative.
During debates of a contentious nature, deflection, colloquially known as 'changing the topic', has been widely observed, and is often seen as a failure to answer a question. | https://en.wikipedia.org/wiki?curid=642 |
Appellate court
An appellate court, commonly called an appeals court, court of appeals (American English), appeal court (British English), court of second instance or second instance court, is any court of law that is empowered to hear an appeal of a trial court or other lower tribunal. In most jurisdictions, the court system is divided into at least three levels: the trial court, which initially hears cases and reviews evidence and testimony to determine the facts of the case; at least one intermediate appellate court; and a supreme court (or court of last resort) which primarily reviews the decisions of the intermediate courts. A jurisdiction's supreme court is that jurisdiction's highest appellate court. Appellate courts nationwide can operate under varying rules.
The authority of appellate courts to review the decisions of lower courts varies widely from one jurisdiction to another. In some areas, the appellate court has limited powers of review. Generally, an appellate court's judgment provides the final directive of the appeals courts as to the matter appealed, setting out with specificity the court's determination that the action appealed from should be affirmed, reversed, remanded or modified.
While in many jurisdictions appellate courts have jurisdiction over all cases decided by lower courts, some systems have appellate courts divided by the type of jurisdiction they exercise. Some jurisdictions have specialized appellate courts, such as the Texas Court of Criminal Appeals, which only hears appeals raised in criminal cases, and the U.S. Court of Appeals for the Federal Circuit, which has general jurisdiction but derives most of its caseload from patent cases, on one hand, and appeals from the Court of Federal Claims on the other. In the United States, Alabama, Tennessee, and Oklahoma also have separate courts of criminal appeals. Texas and Oklahoma have the final determination of criminal cases vested in their respective courts of criminal appeals, while Alabama and Tennessee allow decisions of their courts of criminal appeals to be finally appealed to the state supreme court.
Courts of Criminal Appeals include:
The Court of Appeal of New Zealand, located in Wellington, is New Zealand's principal intermediate appellate court. In practice, most appeals are resolved at this intermediate appellate level, rather than in the Supreme Court.
The Court of Appeal of Sri Lanka, located in Colombo, is the second senior court in the Sri Lankan legal system.
In the United States, both state and federal appellate courts are usually restricted to examining whether the lower court made the correct legal determinations, rather than hearing direct evidence and determining what the facts of the case were. Furthermore, U.S. appellate courts are usually restricted to hearing appeals based on matters that were originally brought up before the trial court. Hence, such an appellate court will not consider an appellant's argument if it is based on a theory that is raised for the first time in the appeal.
In most U.S. states, and in U.S. federal courts, parties before the court are allowed one appeal as of right. This means that a party who is unsatisfied with the outcome of a trial may bring an appeal to contest that outcome. However, appeals may be costly, and the appellate court must find an error on the part of the court below that justifies upsetting the verdict. Therefore, only a small proportion of trial court decisions result in appeals. Some appellate courts, particularly supreme courts, have the power of discretionary review, meaning that they can decide whether they will hear an appeal brought in a particular case.
Many U.S. jurisdictions title their appellate court a court of appeal or court of appeals. Historically, others have titled their appellate court a court of errors (or court of errors and appeals), on the premise that it was intended to correct errors made by lower courts. Examples of such courts include the New Jersey Court of Errors and Appeals (which existed from 1844 to 1947), the Connecticut Supreme Court of Errors (which has been renamed the Connecticut Supreme Court), the Kentucky Court of Errors (renamed the Kentucky Supreme Court), and the Mississippi High Court of Errors and Appeals (since renamed the Supreme Court of Mississippi). In some jurisdictions, a court able to hear appeals is known as an appellate division.
The phrase "court of appeals" most often refers to intermediate appellate courts. However, the Maryland and New York systems are different. The Maryland Court of Appeals and the New York Court of Appeals are the highest appellate courts in those states. The New York Supreme Court is a trial court of general jurisdiction. Depending on the system, certain courts may serve as both trial courts and appellate courts, hearing appeals of decisions made by courts with more limited jurisdiction.
Arraignment
Arraignment is a formal reading of a criminal charging document in the presence of the defendant, to inform the defendant of the charges against them. In response to arraignment, the accused is expected to enter a plea. Acceptable pleas vary among jurisdictions, but they generally include "guilty", "not guilty", and the peremptory pleas (or pleas in bar) setting out reasons why a trial cannot proceed. Pleas of "nolo contendere" (no contest) and the "Alford" plea are allowed in some circumstances.
In Australia, arraignment is the first of eleven stages in a criminal trial, and involves the clerk of the court reading out the indictment.
In every province in Canada except British Columbia, defendants are arraigned on the day of their trial. In British Columbia, arraignment takes place in one of the first few court appearances by the defendant or their lawyer. The defendant is asked whether he or she pleads guilty or not guilty to each charge.
In France, the general rule is that one cannot remain in police custody for more than 24 hours from the time of the arrest. However, police custody can last another 24 hours in specific circumstances, especially if the offence is punishable by at least one year's imprisonment, or if the investigation is deemed to require the extra time, and can last up to 96 hours in certain cases involving terrorism, drug trafficking or organised crime. The police must obtain the consent of the prosecutor, which is given in the vast majority of cases.
In Germany, a person who has been arrested and taken into custody by the police must be brought before a judge as soon as possible, and at the latest on the day after the arrest.
At the first appearance, the accused is read the charges and asked for a plea. The available pleas are guilty, not guilty, and no plea. Entering no plea allows the defendant to obtain legal advice on the plea, which must then be made at the second appearance.
In South Africa, arraignment is defined as the calling upon the accused to appear, the informing of the accused of the crime charged against him, the demanding of the accused whether he be guilty or not guilty, and the entering of his plea. His plea having been entered he is said to stand arraigned.
In England, Wales, and Northern Ireland, arraignment is the first of eleven stages in a criminal trial, and involves the clerk of the court reading out the indictment.
In England and Wales, the police cannot legally detain anyone for more than 24 hours without charging them unless an officer with the rank of superintendent (or above) authorises detention for a further 12 hours (36 hours total), or a judge (who will be a magistrate) authorises detention by the police before charge for up to a maximum of 96 hours, but for terrorism-related offences people can be held by the police for up to 28 days before charge. If they are not released after being charged, they should be brought before a court as soon as practicable.
Under the United States Federal Rules of Criminal Procedure, "arraignment shall [...] [consist of an] open [...] reading [of] the indictment [...] to the defendant [...] and call[] on him to plead thereto. He/she shall be given a copy of the indictment [...] before he/she is called upon to plead."
In federal courts, arraignment takes place in two stages. The first is called the initial arraignment and must take place within 48 hours of an individual's arrest, or within 72 hours if the individual was arrested on the weekend and not able to go before a judge until Monday. During this arraignment the defendant is informed of the pending legal charges and of his or her right to retain counsel. The presiding judge also decides at what amount, if any, to set bail. During the second arraignment, a post-indictment arraignment (PIA), the defendant is allowed to enter a plea.
In New York, most people arrested must be released if they are not arraigned within 24 hours.
In California, arraignments must be conducted without unnecessary delay and, in any event, within 48 hours of arrest, excluding weekends and holidays.
The wording of the arraignment varies from jurisdiction to jurisdiction, but it generally conforms to the following principles:
Video arraignment is the act of conducting the arraignment process using some form of videoconferencing technology. Use of a video arraignment system allows the courts to conduct the requisite arraignment process without the need to transport the defendant to the courtroom, by using an audio-visual link between the location where the defendant is being held and the courtroom.
Use of the video arraignment process addresses the problems associated with having to transport defendants. Transporting defendants requires time and places additional demands on public safety organizations, which must provide for the safety of the public and court personnel as well as the security of the population held in detention. Video arraignment also addresses the rising costs of transportation.
If the defendant pleads guilty, an evidentiary hearing usually follows. The court is not required to accept a guilty plea. During the hearing, the judge assesses the offense, the mitigating factors, and the defendant's character, and passes sentence.
If the defendant pleads not guilty, a date is set for a preliminary hearing or a trial.
In the past, a defendant who refused to plead (or "stood mute") was subject to peine forte et dure (Law French for "strong and hard punishment"). Today in common-law jurisdictions, the court enters a plea of not guilty for a defendant who refuses to enter a plea. The rationale for this is the defendant's right to silence.
This is also often the stage at which arguments for or against pre-trial release and bail may be made, depending on the alleged crime and jurisdiction.
America the Beautiful
"America the Beautiful" is an American patriotic song. The lyrics were written by Katharine Lee Bates, and the music was composed by church organist and choirmaster Samuel A. Ward at Grace Episcopal Church in Newark, New Jersey. The two never met.
Bates originally wrote the words as a poem, "Pikes Peak", first published in the Fourth of July edition of the church periodical "The Congregationalist" in 1895. At that time, the poem was titled "America" for publication. Ward had originally written the music, "Materna", for the hymn "O Mother dear, Jerusalem" in 1882, though it was not first published until 1892. Ward's music combined with the Bates poem was first published in 1910 and titled "America the Beautiful". The song is one of the most popular of the many U.S. patriotic songs.
In 1893, at the age of 33, Bates, an English professor at Wellesley College, had taken a train trip to Colorado Springs, Colorado, to teach a short summer school session at Colorado College. Several of the sights on her trip inspired her, and they found their way into her poem, including the World's Columbian Exposition in Chicago, the "White City" with its promise of the future contained within its gleaming white buildings; the wheat fields of America's heartland Kansas, through which her train was riding on July 16; and the majestic view of the Great Plains from high atop Pikes Peak.
On the pinnacle of that mountain, the words of the poem started to come to her, and she wrote them down upon returning to her hotel room at the original Antlers Hotel. The poem was initially published two years later in "The Congregationalist" to commemorate the Fourth of July. It quickly caught the public's fancy. An amended version was published in 1904.
The first known melody written for the song was sent in by Silas Pratt when the poem was published in "The Congregationalist". By 1900, at least 75 different melodies had been written. A hymn tune composed in 1882 by Samuel A. Ward, the organist and choir director at Grace Church, Newark, was generally considered the best music as early as 1910 and is still the popular tune today. Just as Bates had been inspired to write her poem, Ward, too, was inspired. The tune came to him while he was on a ferryboat trip from Coney Island back to his home in New York City after a leisurely summer day, and he immediately wrote it down. He composed the tune for the old hymn "O Mother Dear, Jerusalem", retitling the work "Materna". Ward's music combined with Bates's poem was first published in 1910 and titled "America the Beautiful".
Ward died in 1903, not knowing the national stature his music would attain. Bates was more fortunate, since the song's popularity was well established by the time of her death in 1929.
At various times in the more than one hundred years that have elapsed since the song was written, particularly during the John F. Kennedy administration, there have been efforts to give "America the Beautiful" legal status either as a national hymn or as a national anthem equal to, or in place of, "The Star-Spangled Banner", but so far this has not succeeded. Proponents prefer "America the Beautiful" for various reasons, saying it is easier to sing, more melodic, and more adaptable to new orchestrations while still remaining as easily recognizable as "The Star-Spangled Banner". Some prefer "America the Beautiful" over "The Star-Spangled Banner" due to the latter's war-oriented imagery; others prefer "The Star-Spangled Banner" for the same reason. While that national dichotomy has stymied any effort at changing the tradition of the national anthem, "America the Beautiful" continues to be held in high esteem by a large number of Americans, and was even being considered before 1931 as a candidate to become the national anthem of the United States.
This song was used as the background music of the television broadcast of the Tiangong-1 launch.
The song is often included in songbooks in a wide variety of religious congregations in the United States.
Bing Crosby included the song in a medley on his album "101 Gang Songs" (1961).
Frank Sinatra recorded the song with Nelson Riddle during the sessions for "The Concert Sinatra" in February 1963, for a projected 45 single release. The 45 was not commercially issued, however; the song was later added as a bonus track to the enhanced 2012 CD release of "The Concert Sinatra".
In 1976, while the United States celebrated its bicentennial, a soulful version popularized by Ray Charles peaked at number 98 on the US R&B chart.
Three different renditions of the song have entered the Hot Country Songs charts. The first was by Charlie Rich, which went to number 22 in 1976. A second, by Mickey Newbury, peaked at number 82 in 1980. An all-star version of "America the Beautiful" performed by country singers Trace Adkins, Sherrié Austin, Billy Dean, Vince Gill, Carolyn Dawn Johnson, Toby Keith, Brenda Lee, Lonestar, Lyle Lovett, Lila McCann, Lorrie Morgan, Jamie O'Neal, The Oak Ridge Boys, Collin Raye, Kenny Rogers, Keith Urban and Phil Vassar reached number 58 in July 2001. The song re-entered the chart following the September 11 attacks.
Popularity of the song increased greatly following the September 11 attacks; at some sporting events it was sung in addition to the traditional singing of the national anthem. During the first taping of the "Late Show with David Letterman" following the attacks, CBS newsman Dan Rather cried briefly as he quoted the fourth verse.
For Super Bowl XLVIII, The Coca-Cola Company aired a multilingual version of the song, sung in several different languages. The commercial received some criticism on social media sites, such as Twitter and Facebook, and from some conservatives, such as Glenn Beck. Despite the controversies, Coca-Cola later reused the Super Bowl ad during Super Bowl LI, the opening ceremonies of the 2014 Winter Olympics and 2016 Summer Olympics and for patriotic holidays.
"From sea to shining sea", originally used in the charters of some of the English Colonies in North America, is an American idiom meaning "from the Atlantic Ocean to the Pacific Ocean" (or vice versa). Other songs that have used this phrase include the American patriotic song "God Bless the U.S.A." and Schoolhouse Rock's "Elbow Room". The phrase and the song are also the namesake of the Shining Sea Bikeway, a bike path in Bates's hometown of Falmouth, Massachusetts. The phrase is similar to the Latin phrase "A mari usque ad mare" ("From sea to sea"), which is the official motto of Canada.
"Purple mountain majesties" refers to the shade of Pikes Peak near Colorado Springs, Colorado, which inspired Bates to write the poem.
In the 2003 Tori Amos song "Amber Waves," the "America the Beautiful" lyric "for amber waves of grain" is appropriated to create a personification; Amos imagines Amber Waves as an exotic dancer, like the character of the same name portrayed by Julianne Moore in "Boogie Nights".
Lynn Sherr's 2001 book "America the Beautiful" discusses the origins of the song and the backgrounds of its authors in depth. The book points out that the poem has the same meter as that of "Auld Lang Syne"; the songs can be sung interchangeably. Additionally, Sherr discusses the evolution of the lyrics, for instance, changes to the original third verse written by Bates.
Melinda M. Ponder, in her 2017 biography "Katharine Lee Bates: From Sea to Shining Sea", draws heavily on Bates's diaries and letters to trace the history of the poem and its place in American culture.
Assistive technology
Assistive technology (AT) refers to assistive, adaptive, and rehabilitative devices for people with disabilities or the elderly population. People who have disabilities often have difficulty performing activities of daily living (ADLs) independently, or even with assistance. ADLs are self-care activities that include toileting, mobility (ambulation), eating, bathing, dressing, grooming, and personal device care. Assistive technology can ameliorate the effects of disabilities that limit the ability to perform ADLs. Assistive technology promotes greater independence by enabling people to perform tasks they were formerly unable to accomplish, or had great difficulty accomplishing, by providing enhancements to, or changing methods of interacting with, the technology needed to accomplish such tasks. For example, wheelchairs provide independent mobility for those who cannot walk, while assistive eating devices can enable people who cannot feed themselves to do so. Due to assistive technology, people with disabilities have the opportunity for a more positive and easygoing lifestyle, with an increase in "social participation," "security and control," and a greater chance to "reduce institutional costs without significantly increasing household expenses."
Adaptive technology and assistive technology are different. "Assistive technology" is something that is used to help individuals with disabilities, while "adaptive technology" covers items that are specifically designed for people with disabilities and would seldom be used by a non-disabled person. In other words, assistive technology is any object or system that helps people with disabilities, while adaptive technology is specifically designed for people with disabilities. Consequently, adaptive technology is a subset of assistive technology. Adaptive technology often refers specifically to electronic and information technology access.
Occupational therapy (OT) is a healthcare profession that specializes in maintaining or improving the quality of life for individuals that experience challenges when independently performing life's occupations. According to the "Occupational Therapy Practice Framework: Domain and Process" (3rd ed.; AOTA, 2014), occupations include areas related to all basic and instrumental activities of daily living (ADLs), rest and sleep, education, work, play, leisure and social participation. Occupational therapists have the specialized skill of employing assistive technology (AT) in the improvement and maintenance of optimal, functional participation in occupations. The application of AT enables an individual to adapt aspects of the environment, that may otherwise be challenging, to the user in order to optimize functional participation in those occupations. As a result, occupational therapists may educate, recommend, and promote the use of AT to improve the quality of life for their clients.
Wheelchairs are devices that can be manually propelled or electrically propelled, and that include a seating system and are designed to be a substitute for the normal mobility that most people have. Wheelchairs and other mobility devices allow people to perform mobility-related activities of daily living which include feeding, toileting, dressing, grooming, and bathing. The devices come in a number of variations where they can be propelled either by hand or by motors where the occupant uses electrical controls to manage motors and seating control actuators through a joystick, sip-and-puff control, head switches or other input devices. Often there are handles behind the seat for someone else to do the pushing or input devices for caregivers. Wheelchairs are used by people for whom walking is difficult or impossible due to illness, injury, or disability. People with both sitting and walking disability often need to use a wheelchair or walker.
Newer advancements in wheelchair design include prototypes that enable wheelchairs to climb stairs, or propel using segway technology.
Patient transfer devices generally allow patients with impaired mobility to be moved by caregivers between beds, wheelchairs, commodes, toilets, chairs, stretchers, shower benches, automobiles, swimming pools, and other patient support systems (i.e., radiology, surgical, or examining tables). The most common devices are patient lifts (for vertical transfer), transfer benches, stretcher or convertible chairs (for lateral, supine transfer), sit-to-stand lifts (for moving patients from one seated position to another, i.e., from wheelchairs to commodes), air bearing inflatable mattresses (for supine transfer, i.e., transfer from a gurney to an operating room table), and sliding boards (usually used for transfer from a bed to a wheelchair). Highly dependent patients who cannot assist their caregiver in moving them often require a patient lift (a floor or ceiling-suspended sling lift), which, though invented in 1955 and in common use since the early 1960s, is still considered the state-of-the-art transfer device by OSHA and the American Nursing Association.
A walker or walking frame or Rollator is a tool for disabled people who need additional support to maintain balance or stability while walking. It consists of a frame that is about waist high, approximately twelve inches deep and slightly wider than the user. Walkers are also available in other sizes, such as for children, or for heavy people. Modern walkers are height-adjustable. The front two legs of the walker may or may not have wheels attached depending on the strength and abilities of the person using it. It is also common to see caster wheels or glides on the back legs of a walker with wheels on the front.
A prosthesis, prosthetic, or prosthetic limb is a device that replaces a missing body part. It is part of the field of biomechatronics, the science of using mechanical devices with human muscle, skeleton, and nervous systems to assist or enhance motor control lost by trauma, disease, or defect. Prostheses are typically used to replace parts lost by injury (traumatic) or missing from birth (congenital) or to supplement defective body parts. Inside the body, artificial heart valves are in common use with artificial hearts and lungs seeing less common use but under active technology development. Other medical devices and aids that can be considered prosthetics include hearing aids, artificial eyes, palatal obturator, gastric bands, and dentures.
Prostheses are specifically "not" orthoses, although given certain circumstances a prosthesis might end up performing some or all of the same functionary benefits as an orthosis. Prostheses are technically the complete finished item. For instance, a C-Leg knee alone is "not" a prosthesis, but only a prosthetic "component". The complete prosthesis would consist of the attachment system to the residual limb — usually a "socket", and all the attachment hardware components all the way down to and including the terminal device. Keep this in mind as nomenclature is often interchanged.
The terms "prosthetic" and "orthotic" are adjectives used to describe devices such as a prosthetic knee. The terms "prosthetics" and "orthotics" are used to describe the respective allied health fields.
A powered exoskeleton is a wearable mobile machine that is powered by a system of electric motors, pneumatics, levers, hydraulics, or a combination of technologies that allow for limb movement with increased strength and endurance. Its design aims to provide back support, sense the user's motion, and send a signal to motors which manage the gears. The exoskeleton supports the shoulder, waist and thigh, and assists movement for lifting and holding heavy items, while lowering back stress.
People with balance and motor function challenges often need specialized equipment to sit or stand safely and securely. This equipment is frequently specialized for specific settings such as in a classroom or nursing home. Positioning is often important in seating arrangements to ensure that user's body pressure is distributed equally without inhibiting movement in a desired way.
Positioning devices have been developed to aid in allowing people to stand and bear weight on their legs without risk of a fall. These standers are generally grouped into two categories based on the position of the occupant. Prone standers distribute the body weight to the front of the individual and usually have a tray in front of them. This makes them good for users who are actively trying to carry out some task. Supine standers distribute the body weight to the back and are good for cases where the user has more limited mobility or is recovering from injury.
Many people with serious visual impairments live independently, using a wide range of tools and techniques. Examples of assistive technology for visually impairment include screen readers, screen magnifiers, Braille embossers, desktop video magnifiers, and voice recorders.
Screen readers are used to help the visually impaired to easily access electronic information. These software programs run on a computer in order to convey the displayed information through voice (text-to-speech) or braille (refreshable braille displays) in combination with magnification for low vision users in some cases. There are a variety of platforms and applications available for a variety of costs with differing feature sets.
Some examples of screen readers are Apple VoiceOver, Google TalkBack and Microsoft Narrator. VoiceOver is provided free of charge on all Apple devices. It includes the option to magnify the screen, control the keyboard, and provide verbal descriptions of what is happening on the screen. There are thirty languages to select from. It also has the capacity to read aloud file content, as well as web pages, E-mail messages, and word processing files.
As mentioned above, screen readers may rely on the assistance of text-to-speech tools. To use these tools, a document must be in an electronic format. However, documents are often scanned into the computer from hard copies as images, which text-to-speech software cannot recognize. To solve this issue, optical character recognition (OCR) technology is used to convert the scanned images into text, which is then passed to the text-to-speech software.
Braille is a system of raised dots formed into units called braille cells. A full braille cell is made up of six dots, with two parallel rows of three dots, but other combinations and quantities of dots represent other letters, numbers, punctuation marks, or words. People can then use their fingers to read the code of raised dots.
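The six-dot cell lends itself to a compact binary encoding; the Unicode braille-patterns block (U+2800 onward) assigns one bit per dot, with dot n corresponding to bit n-1. The following is an illustrative sketch only, using the standard English braille dot patterns for the first three letters:

```python
def to_braille(dots):
    """Map a set of raised dots (numbered 1-6) to a Unicode braille character."""
    code = 0x2800              # base of the Unicode braille-patterns block
    for dot in dots:
        code |= 1 << (dot - 1)  # dot n sets bit n-1
    return chr(code)

# Standard English braille patterns: 'a' = dot 1, 'b' = dots 1,2, 'c' = dots 1,4
LETTERS = {"a": {1}, "b": {1, 2}, "c": {1, 4}}

print("".join(to_braille(d) for d in LETTERS.values()))  # prints: ⠁⠃⠉
```

A braille embosser or refreshable display works from the same cell-level representation, raising the physical pins or dots that the set bits describe.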
A braille embosser is, simply put, a printer for braille. Instead of a standard printer adding ink onto a page, the braille embosser imprints the raised dots of braille onto a page. Some braille embossers combine both braille and ink so the documents can be read with either sight or touch.
A refreshable braille display or braille terminal is an electro-mechanical device for displaying braille characters, usually by means of round-tipped pins raised through holes in a flat surface. Computer users who cannot use a computer monitor use it to read a braille output version of the displayed text.
Desktop video magnifiers are electronic devices that use a camera and a display screen to perform digital magnification of printed materials. They enlarge printed pages for those with low vision. A camera connects to a monitor that displays real-time images, and the user can control settings such as magnification, focus, contrast, underlining, highlighting, and other screen preferences. They come in a variety of sizes and styles; some are small and portable with handheld cameras, while others are much larger and mounted on a fixed stand.
A screen magnifier is software that interfaces with a computer's graphical output to present enlarged screen content. It allows users to enlarge the texts and graphics on their computer screens for easier viewing. Similar to desktop video magnifiers, this technology assists people with low vision. After the user loads the software into their computer's memory, it serves as a kind of "computer magnifying glass." Wherever the computer cursor moves, it enlarges the area around it. This allows greater computer accessibility for a wide range of visual abilities.
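The "computer magnifying glass" behavior can be illustrated with a toy model: treat the screen as a grid of pixels and enlarge the region around the cursor by an integer zoom factor using nearest-neighbour scaling. Everything below (the `magnify` helper and the stand-in framebuffer) is a hypothetical sketch, not any real magnifier's API:

```python
def magnify(screen, cx, cy, radius, zoom):
    """Return the region around (cx, cy) scaled up `zoom` times (nearest neighbour)."""
    # Crop the square region of half-width `radius` around the cursor.
    region = [row[max(0, cx - radius):cx + radius]
              for row in screen[max(0, cy - radius):cy + radius]]
    # Repeat each row `zoom` times vertically and each pixel `zoom` times horizontally.
    return [[pixel for pixel in row for _ in range(zoom)]
            for row in region
            for _ in range(zoom)]

screen = [list(range(8)) for _ in range(8)]   # stand-in for an 8x8 framebuffer
view = magnify(screen, cx=4, cy=4, radius=2, zoom=2)
print(len(view), len(view[0]))  # prints: 8 8
```

A real magnifier does the same crop-and-scale continuously on the live display, re-centering the enlarged view wherever the cursor moves.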
A large-print keyboard has large letters printed on the keys. On the keyboard shown, the round buttons at the top control software which can magnify the screen (zoom in), change the background color of the screen, or make the mouse cursor on the screen larger. The "bump dots" on the keys, installed in this case by the organization using the keyboards, help the user find the right keys in a tactile way.
Research on assistive technology for navigation has exploded on the IEEE Xplore database since 2000, with over 7,500 engineering articles written on assistive technologies and visual impairment in the past 25 years, and over 1,300 articles on solving the problem of navigation for people who are blind or visually impaired. As well, over 600 articles on augmented reality and visual impairment have appeared in the engineering literature since 2000. Most of these articles were published within the past five years, and the number of articles in this area is increasing every year. GPS, accelerometers, gyroscopes, and cameras can pinpoint the exact location of the user, provide information on what is in the immediate vicinity, and assist in getting to a destination.
Wearable technology consists of smart electronic devices that can be worn on the body as an implant or an accessory. New technologies are exploring how the visually impaired can receive visual information through wearable devices.
Some wearable devices for visual impairment include:
Personal emergency response systems (PERS), or Telecare (UK term), are a particular sort of assistive technology that use electronic sensors connected to an alarm system to help caregivers manage risk and help vulnerable people stay independent at home longer. An example would be the systems being put in place for senior people such as fall detectors, thermometers (for hypothermia risk), flooding and unlit gas sensors (for people with mild dementia). Notably, these alerts can be customized to the particular person's risks. When the alert is triggered, a message is sent to a caregiver or contact center who can respond appropriately.
In human–computer interaction, computer accessibility (also known as accessible computing) refers to the accessibility of a computer system to all people, regardless of disability or severity of impairment; examples include web accessibility guidelines. Another approach is for the user to present a token to the computer terminal, such as a smart card, that has configuration information to adjust the computer speed, text size, etc. to their particular needs. This is useful where users want to access public computer-based terminals in libraries, ATMs, information kiosks, etc. The concept is encompassed by the CEN EN 1332-4 Identification Card Systems – Man-Machine Interface. The development of this standard has been supported in Europe by SNAPI and has been successfully incorporated into the Lasseo specifications, but with limited success due to the lack of interest from public computer terminal suppliers.
People in the d/Deaf and hard of hearing community have a more difficult time receiving auditory information as compared to hearing individuals. These individuals often rely on visual and tactile mediums for receiving and communicating information. The use of assistive technology and devices provides this community with various solutions to auditory communication needs by providing higher sound (for those who are hard of hearing), tactile feedback, visual cues and improved technology access. Individuals who are deaf or hard of hearing utilize a variety of assistive technologies that provide them with different access to information in numerous environments. Most devices either provide amplified sound or alternate ways to access information through vision and/or vibration. These technologies can be grouped into three general categories: Hearing Technology, alerting devices, and communication support.
A hearing aid or deaf aid is an electro-acoustic device which is designed to amplify sound for the wearer, usually with the aim of making speech more intelligible, and to correct impaired hearing as measured by audiometry. This type of assistive technology helps people with hearing loss participate more fully in their hearing communities by allowing them to hear more clearly. They amplify any and all sound waves through use of a microphone, amplifier, and speaker. There is a wide variety of hearing aids available, including digital, in-the-ear, in-the-canal, behind-the-ear, and on-the-body aids.
Assistive listening devices include FM, infrared, and loop assistive listening devices. This type of technology allows people with hearing difficulties to focus on a speaker or subject by getting rid of extra background noises and distractions, making places like auditoriums, classrooms, and meetings much easier to participate in. The assistive listening device usually uses a microphone to capture an audio source near to its origin and broadcast it wirelessly over an FM (Frequency Modulation) transmission, IR (Infra Red) transmission, IL (Induction Loop) transmission, or other transmission methods. The person who is listening may use an FM/IR/IL Receiver to tune into the signal and listen at his/her preferred volume.
This type of assistive technology allows users to amplify the volume and clarity of their phone calls so that they can easily partake in this medium of communication. There are also options to adjust the frequency and tone of a call to suit their individual hearing needs. Additionally, there is a wide variety of amplified telephones to choose from, with different degrees of amplification. For example, a phone with 26 to 40 decibels of amplification is generally sufficient for mild hearing loss, while a phone with 71 to 90 decibels is better for more severe hearing loss.
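Decibel ratings are logarithmic, which makes the differences between amplification levels easy to underestimate. As a rough sketch (standard acoustics arithmetic, not taken from any particular product specification), a gain in dB maps to a sound-pressure ratio of 10^(dB/20):

```python
import math

def db_to_amplitude_ratio(db):
    """Convert a gain in decibels to a sound-pressure (amplitude) ratio."""
    return 10 ** (db / 20)

# The amplification ranges mentioned above, as multiplicative factors:
for db in (26, 40, 71, 90):
    print(f"{db} dB -> about {db_to_amplitude_ratio(db):,.0f}x sound pressure")
```

On this scale a 40 dB phone multiplies sound pressure about 100-fold, while a 90 dB phone multiplies it by a factor of more than 30,000, which is why the higher range suits more severe hearing loss.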
Augmentative and alternative communication (AAC) is an umbrella term that encompasses methods of communication for those with impairments or restrictions on the production or comprehension of spoken or written language. AAC systems are extremely diverse and depend on the capabilities of the user. They may be as basic as pictures on a board that are used to request food, drink, or other care; or they can be advanced speech generating devices, based on speech synthesis, that are capable of storing hundreds of phrases and words.
Assistive technology for cognition (ATC) is the use of technology (usually high tech) to augment and assist cognitive processes such as attention, memory, self-regulation, navigation, emotion recognition and management, planning, and sequencing activity. Systematic reviews of the field have found that the number of ATC devices is growing rapidly but that development has focused on memory and planning, that there is emerging evidence for efficacy, and that considerable scope exists to develop new ATC. Examples of ATC include: NeuroPage, which prompts users about meetings; Wakamaru, which provides companionship, reminds users to take medicine, and calls for help if something is wrong; and telephone reassurance systems.
Memory aids are any type of assistive technology that helps a user learn and remember certain information. Many memory aids are used for cognitive impairments such as reading, writing, or organizational difficulties. For example, a Smartpen records handwritten notes by creating both a digital copy and an audio recording of the text. Users simply tap certain parts of their notes, the pen saves it, and reads it back to them. From there, the user can also download their notes onto a computer for increased accessibility. Digital voice recorders are also used to record "in the moment" information for fast and easy recall at a later time.
Educational software is software that assists people with reading, learning, comprehension, and organizational difficulties. Any accommodation software such as text readers, notetakers, text enlargers, organization tools, word predictions, and talking word processors falls under the category of educational software.
Adaptive eating devices include items commonly used by the general population, like spoons, forks, and plates. However, they become assistive technology when they are modified to accommodate the needs of people who have difficulty using standard cutlery due to a disabling condition. Common modifications include increasing the size of the utensil handle to make it easier to grasp. Plates and bowls may have a guard on the edge that stops food being pushed off the dish when it is being scooped. More sophisticated equipment for eating includes manual and powered feeding devices. These devices support those who have little or no hand and arm function and enable them to eat independently.
Assistive technology in sports is an area of technology design that is growing. Assistive technology is the array of new devices created to enable sports enthusiasts who have disabilities to play. Assistive technology may be used in adaptive sports, where an existing sport is modified to enable players with a disability to participate; or, assistive technology may be used to invent completely new sports with athletes with disabilities exclusively in mind.
An increasing number of people with disabilities are participating in sports, leading to the development of new assistive technology. Assistive technology devices can be simple, or "low-tech", or they may use highly advanced technology. "Low-tech" devices can include velcro gloves and adaptive bands and tubes. "High-tech" devices can include all-terrain wheelchairs and adaptive bicycles. Accordingly, assistive technology can be found in sports ranging from local community recreation to the elite Paralympic Games. More complex assistive technology devices have been developed over time, and as a result, sports for people with disabilities "have changed from being a clinical therapeutic tool to an increasingly competition-oriented activity".
In the United States there are two major pieces of legislation that govern the use of assistive technology within the school system. The first is Section 504 of the Rehabilitation Act of 1973, and the second is the Individuals with Disabilities Education Act (IDEA), first enacted in 1975 as the Education for All Handicapped Children Act. In 2004, during the reauthorization period for IDEA, the National Instructional Material Access Center (NIMAC) was created, providing a repository of accessible text, including publishers' textbooks, to students with a qualifying disability. Files are provided in XML format and used as a starting platform for braille readers, screen readers, and other digital text software. IDEA defines assistive technology as follows: "any item, piece of equipment, or product system, whether acquired commercially off the shelf, modified, or customized, that is used to increase, maintain, or improve functional capabilities of a child with a disability. (B) Exception.--The term does not include a medical device that is surgically implanted, or the replacement of such device."
Assistive technology in this area is broken down into low, mid, and high tech categories. Low tech encompasses equipment that is often low cost and does not require batteries or charging. Examples include adapted paper and pencil grips for writing, or masks and color overlays for reading. Mid tech supports used in the school setting include handheld spelling dictionaries and portable word processors used for keyboarding written work. High tech supports involve the use of tablet devices and computers with accompanying software. Software supports for writing include auditory feedback while keyboarding, word prediction for spelling, and speech-to-text. Supports for reading include text-to-speech (TTS) software and font modification via access to digital text. Limited supports are available for math instruction and mostly consist of grid-based software that allows younger students to keyboard equations, plus auditory feedback for more complex equations using MathML and DAISY.
One of the largest problems that affect people with disabilities is discomfort with prostheses. An experiment performed in Massachusetts utilized 20 people with various sensors attached to their arms. The subjects tried different arm exercises, and the sensors recorded their movements. All of the data helped engineers develop new engineering concepts for prosthetics.
Assistive technology may attempt to improve the ergonomics of the devices themselves such as Dvorak and other alternative keyboard layouts, which offer more ergonomic layouts of the keys.
Assistive technology devices have been created to enable people with disabilities to use modern touch screen mobile computers such as the iPad, iPhone and iPod touch. The Pererro is a plug and play adapter for iOS devices which uses the built-in Apple VoiceOver feature in combination with a basic switch. This brings touch screen technology to those who were previously unable to use it. With the release of iOS 7, Apple introduced the ability to navigate apps using Switch Control. Switch access can be activated through an external Bluetooth-connected switch, a single touch of the screen, or right and left head turns detected by the device's camera. Additional accessibility features include AssistiveTouch, which allows a user to access multi-touch gestures through pre-programmed onscreen buttons.
For users with physical disabilities, a large variety of switches are available, customizable to the user's needs in size, shape, and the amount of pressure required for activation. A switch may be placed near any area of the body that has consistent and reliable mobility and is less subject to fatigue. Common sites include the hands, head, and feet. Eye gaze and head mouse systems can also be used as alternative mouse navigation. A user may utilize single or multiple switch sites; the process often involves scanning through items on a screen and activating the switch once the desired object is highlighted.
The form of home automation called assistive domotics focuses on making it possible for elderly and disabled people to live independently. Home automation is becoming a viable option for the elderly and disabled who would prefer to stay in their own homes rather than move to a healthcare facility. This field uses much of the same technology and equipment as home automation for security, entertainment, and energy conservation but tailors it towards elderly and disabled users. For example, automated prompts and reminders utilize motion sensors and pre-recorded audio messages; an automated prompt in the kitchen may remind the resident to turn off the oven, and one by the front door may remind the resident to lock the door.
Overall, assistive technology aims to allow people with disabilities to "participate more fully in all aspects of life (home, school, and community)" and increases their opportunities for "education, social interactions, and potential for meaningful employment". It creates greater independence and control for disabled individuals. For example, in one study of 1,342 infants, toddlers and preschoolers, all with some kind of developmental, physical, sensory, or cognitive disability, the use of assistive technology created improvements in child development. These included improvements in "cognitive, social, communication, literacy, motor, adaptive, and increases in engagement in learning activities". Additionally, it has been found to lighten caregiver load. Both family and professional caregivers benefit from assistive technology. Through its use, the time that a family member or friend would need to care for a patient significantly decreases. However, studies show that care time for a professional caregiver increases when assistive technology is used. Nonetheless, their work load is significantly easier as the assistive technology frees them of having to perform certain tasks. There are several platforms that use machine learning to identify the appropriate assistive device to suggest to patients, making assistive devices more accessible. | https://en.wikipedia.org/wiki?curid=653 |
Abacus
The abacus (plural: abaci or abacuses), also called a counting frame, is a calculating tool that was in use in the ancient Near East, Europe, China, and Russia centuries before the adoption of the written Arabic numeral system. The exact origin of the abacus is still unknown. Today, abacuses are often constructed as a bamboo frame with beads sliding on wires, but originally they were beans or stones moved in grooves of sand or on tablets of wood, stone, or metal.
Abacuses come in different designs. Some designs, like the bead frame consisting of beads divided into tens, are used mainly to teach arithmetic, although they remain popular in the post-Soviet states as a tool. Other designs, such as the Japanese soroban, have been used for practical calculations even involving several digits. For any particular abacus design, there are usually numerous different methods to perform a certain type of calculation, which may include basic operations like addition and multiplication, or even more complex ones, such as calculating square roots. Some of these methods may work with non-natural numbers (numbers such as and ).
Although today many use calculators and computers instead of abacuses to calculate, abacuses still remain in common use in some countries. Merchants, traders and clerks in some parts of Eastern Europe, Russia, China and Africa use abacuses, and they are still used to teach arithmetic to children. Some people who are unable to use a calculator because of visual impairment may use an abacus.
The use of the word "abacus" dates to before 1387 AD, when a Middle English work borrowed the word from Latin to describe a sandboard abacus. The Latin word came from the Greek ("abax"), which means something without a base and, improperly, any rectangular piece of board or plank.
Alternatively, without reference to ancient texts on etymology, it has been suggested that it means "a square tablet strewn with dust", or "drawing-board covered with dust (for the use of mathematics)" (the exact shape of the Latin perhaps reflects the genitive form of the Greek word, "abakos"). While the dust-strewn-table definition is popular, some scholars do not place credence in it at all and state that it is not proven. The Greek word itself is probably a borrowing of a Northwest Semitic, perhaps Phoenician, word cognate with the Hebrew "ʾābāq" (), or "dust" (in the post-Biblical sense of "sand used as a writing surface").
The preferred plural of "abacus" is a subject of disagreement, with both "abacuses" and "abaci" (soft or hard "c") in use. The user of an abacus is called an "abacist".
The period 2700–2300 BC saw the first appearance of the Sumerian abacus, a table of successive columns which delimited the successive orders of magnitude of their sexagesimal number system.
Some scholars point to a character from the Babylonian cuneiform which may have been derived from a representation of the abacus. It is the belief of Old Babylonian scholars such as Carruccio that Old Babylonians "may have used the abacus for the operations of addition and subtraction; however, this primitive device proved difficult to use for more complex calculations".
The use of the abacus in Ancient Egypt is mentioned by the Greek historian Herodotus, who writes that the Egyptians manipulated the pebbles from right to left, opposite in direction to the Greek left-to-right method. Archaeologists have found ancient disks of various sizes that are thought to have been used as counters. However, wall depictions of this instrument have not been discovered.
During the Achaemenid Empire, around 600 BC, the Persians first began to use the abacus. Under the Parthian, Sassanian, and Iranian empires, scholars concentrated on exchanging knowledge and inventions with the countries around them – India, China, and the Roman Empire – and in this way the abacus is thought to have been exported to other countries.
The earliest archaeological evidence for the use of the Greek abacus dates to the 5th century BC. Also Demosthenes (384 BC–322 BC) talked of the need to use pebbles for calculations too difficult for your head. A play by Alexis from the 4th century BC mentions an abacus and pebbles for accounting, and both Diogenes and Polybius mention men that sometimes stood for more and sometimes for less, like the pebbles on an abacus. The Greek abacus was a table of wood or marble, pre-set with small counters in wood or metal for mathematical calculations. This Greek abacus saw use in Achaemenid Persia, the Etruscan civilization, Ancient Rome and, until the French Revolution, the Western Christian world.
A tablet found on the Greek island Salamis in 1846 AD (the Salamis Tablet), dates back to 300 BC, making it the oldest counting board discovered so far. It is a slab of white marble long, wide, and thick, on which are 5 groups of markings. In the center of the tablet is a set of 5 parallel lines equally divided by a vertical line, capped with a semicircle at the intersection of the bottom-most horizontal line and the single vertical line. Below these lines is a wide space with a horizontal crack dividing it. Below this crack is another group of eleven parallel lines, again divided into two sections by a line perpendicular to them, but with the semicircle at the top of the intersection; the third, sixth and ninth of these lines are marked with a cross where they intersect with the vertical line. Also from this time frame the "Darius Vase" was unearthed in 1851. It was covered with pictures including a "treasurer" holding a wax tablet in one hand while manipulating counters on a table with the other.
The earliest known written documentation of the Chinese abacus dates to the 2nd century BC.
The Chinese abacus, known as the "suanpan" (算盤/算盘, lit. "calculating tray"), is typically tall and comes in various widths depending on the operator. It usually has more than seven rods. There are two beads on each rod in the upper deck and five beads each in the bottom. The beads are usually rounded and made of a hardwood. The beads are counted by moving them up or down towards the beam; beads moved toward the beam are counted, while those moved away from it are not. One of the top beads is 5, while one of the bottom beads is 1. Each rod has a number under it, showing the place value. The "suanpan" can be reset to the starting position instantly by a quick movement along the horizontal axis to spin all the beads away from the horizontal beam at the center.
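The 5-plus-2 arrangement means each rod encodes one decimal digit: upper-deck beads are worth five each and lower-deck beads one each. A minimal sketch in Python (the function names are illustrative, not from any existing library):

```python
def suanpan_digit(d):
    """Split a decimal digit into (upper, lower) bead counts on one rod.

    Upper-deck beads are worth 5 each; lower-deck beads are worth 1 each.
    A standard suanpan has 2 upper and 5 lower beads per rod, so every
    digit 0-9 fits with beads to spare.
    """
    if not 0 <= d <= 9:
        raise ValueError("a single rod shows one decimal digit")
    return d // 5, d % 5

def suanpan_number(n):
    """Represent a non-negative integer as bead counts, one rod per digit."""
    return [suanpan_digit(int(c)) for c in str(n)]

print(suanpan_number(1962))  # [(0, 1), (1, 4), (1, 1), (0, 2)]
```

The spare beads (a digit never needs the second upper bead or the fifth lower bead) are what make carries convenient during calculation, and what allow bases above ten, as noted below for hexadecimal weights.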
The prototype of the Chinese abacus appeared during the Han Dynasty, with oval beads. During the Song Dynasty and earlier, the 1:4 type or four-bead abacus was used, similar to the modern abacus in structure and bead shape and commonly known as the Japanese-style abacus.
In the early Ming Dynasty, the abacus began to appear in the form of 1:5 abacus. The upper deck had one bead and the bottom had five beads.
In the late Ming Dynasty, the abacus styles appeared in the form of 2:5. The upper deck had two beads, and the bottom had five beads.
Various calculation techniques were devised for the "suanpan", enabling efficient calculations. There are currently schools teaching students how to use it.
In the long scroll "Along the River During the Qingming Festival" painted by Zhang Zeduan during the Song dynasty (960–1279), a "suanpan" is clearly visible beside an account book and doctor's prescriptions on the counter of an apothecary's (Feibao).
The similarity of the Roman abacus to the Chinese one suggests that one could have inspired the other, as there is some evidence of a trade relationship between the Roman Empire and China. However, no direct connection can be demonstrated, and the similarity of the abacuses may be coincidental, both ultimately arising from counting with five fingers per hand. Where the Roman model (like most modern Korean and Japanese) has 4 plus 1 bead per decimal place, the standard "suanpan" has 5 plus 2. Incidentally, this allows use with a hexadecimal numeral system (or any base up to 18) which may have been used for traditional Chinese measures of weight. (Instead of running on wires as in the Chinese, Korean, and Japanese models, the beads of the Roman model run in grooves, presumably making arithmetic calculations much slower.)
Another possible source of the "suanpan" is Chinese counting rods, which operated with a decimal system but lacked the concept of zero as a place holder. The zero was probably introduced to the Chinese in the Tang dynasty (618–907) when travel in the Indian Ocean and the Middle East would have provided direct contact with India, allowing them to acquire the concept of zero and the decimal point from Indian merchants and mathematicians.
The normal method of calculation in ancient Rome, as in Greece, was by moving counters on a smooth table. Originally pebbles ("calculi") were used. Later, and in medieval Europe, jetons were manufactured. Marked lines indicated units, fives, tens etc. as in the Roman numeral system. This system of 'counter casting' continued into the late Roman empire and in medieval Europe, and persisted in limited use into the nineteenth century. Due to Pope Sylvester II's reintroduction of the abacus with modifications, it became widely used in Europe once again during the 11th century. This abacus used beads on wires, unlike the traditional Roman counting boards, which meant the abacus could be used much faster.
Writing in the 1st century BC, Horace refers to the wax abacus, a board covered with a thin layer of black wax on which columns and figures were inscribed using a stylus.
One example of archaeological evidence of the Roman abacus, shown here in reconstruction, dates to the 1st century AD. It has eight long grooves containing up to five beads in each and eight shorter grooves having either one or no beads in each. The groove marked I indicates units, X tens, and so on up to millions. The beads in the shorter grooves denote fives – five units, five tens, etc., essentially in a bi-quinary coded decimal system, related to the Roman numerals. The short grooves on the right may have been used for marking Roman "ounces" (i.e. fractions).
The "Abhidharmakośabhāṣya" of Vasubandhu (316-396), a Sanskrit work on Buddhist philosophy, says that the second-century CE philosopher Vasumitra said that "placing a wick (Sanskrit "vartikā") on the number one ("ekāṅka") means it is a one, while placing the wick on the number hundred means it is called a hundred, and on the number one thousand means it is a thousand". It is unclear exactly what this arrangement may have been. Around the 5th century, Indian clerks were already finding new ways of recording the contents of the abacus. Hindu texts used the term "śūnya" (zero) to indicate the empty column on the abacus.
In Japanese, the abacus is called "soroban" (, lit. "counting tray"); it was imported from China in the 14th century. It was probably in use by the working class a century or more before the ruling class adopted it, as the class structure did not allow devices used by the lower class to be taken up by the ruling class. The 1:4 abacus, which removes the seldom-used second and fifth beads, became popular in the 1940s.
Today's Japanese abacus is a 1:4 type, four-bead abacus, introduced from China in the Muromachi era. It adopts the form of one bead on the upper deck and four beads on the bottom. The top bead on the upper deck is equal to five and the bottom ones are equal to one, as in the Chinese or Korean abacus, and decimal numbers can be expressed, so the abacus is designed as a 1:4 device. The beads are always diamond-shaped. Quotient division is generally used instead of the standard division method, keeping the digit manipulations for multiplication and division consistent. Later, Japan had a 3:5 abacus called 天三算盤, now in the Ize Rongji collection of Shansi Village in Yamagata City. There were also 2:5 type abacuses.
As the four-bead abacus spread, the Japanese-style abacus came into common use around the world, and improved versions appeared in various places. One Japanese-designed abacus manufactured in China has an aluminum frame and plastic beads, with a clearing button beside the beads: pressing it immediately moves the upper beads to the upper position and the lower beads to the lower position, clearing the abacus instantly for easy use.
The abacus is still manufactured in Japan today even with the proliferation, practicality, and affordability of pocket electronic calculators. The use of the soroban is still taught in Japanese primary schools as part of mathematics, primarily as an aid to faster mental calculation. Using visual imagery of a soroban, one can arrive at the answer in the same time as, or even faster than, is possible with a physical instrument.
The Chinese abacus migrated from China to Korea around 1400 AD. Koreans call it "jupan" (주판), "supan" (수판) or "jusan" (주산).
The four-bead (1:4) abacus was introduced to Korea from China during the Goryeo Dynasty, contemporary with China's Song Dynasty; the five-bead (5:1) abacus was introduced to Korea from China later, during the Ming Dynasty.
Some sources mention the use of an abacus called a "nepohualtzintzin" in ancient Aztec culture. This Mesoamerican abacus used a 5-digit base-20 system.
The word Nepōhualtzintzin comes from Nahuatl and is formed from the roots "ne" (personal), "pōhual" or "pōhualli" (the account), and "tzintzin" (small similar elements). Its complete meaning was taken as: counting with small similar elements by somebody. Its use was taught in the Calmecac to the "temalpouhqueh", students dedicated from childhood to taking the accounts of the skies.
The Nepōhualtzintzin was divided into two main parts separated by a bar or intermediate cord. The left part held four beads, which in the first row have unitary values (1, 2, 3, and 4); the right side held three beads with values of 5, 10, and 15 respectively. To know the value of a bead in the upper rows, it is enough to multiply the value of the corresponding bead in the first row by 20 for each row.
Altogether, there were 13 rows with 7 beads in each one, making 91 beads in each Nepōhualtzintzin. This was a basic number to understand: 7 times 13, a close relation conceived between natural phenomena, the underworld, and the cycles of the heavens. One Nepōhualtzintzin (91) represented the number of days in a season of the year; two Nepōhualtzintzin (182) is the number of days of the corn's cycle, from sowing to harvest; three Nepōhualtzintzin (273) is the number of days of a baby's gestation; and four Nepōhualtzintzin (364) completed a cycle and approximated a year (1 day short). When translated into modern computer arithmetic, the Nepōhualtzintzin covered magnitudes up to 10 to the 18th power in floating point, which calculated stellar as well as infinitesimal amounts with absolute precision, meaning that no rounding off was allowed.
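The row arithmetic described above — first-row bead values of 1 through 4 on the left and 5, 10, 15 on the right, multiplied by 20 for each higher row — can be sketched as follows. This is a hypothetical model of the layout as described, not a reconstruction of actual Aztec calculating practice:

```python
# One row of a Nepohualtzintzin as described: four unit beads (1-4)
# on the left, three beads worth 5, 10, 15 on the right.
FIRST_ROW = [1, 2, 3, 4, 5, 10, 15]  # 7 beads per row
ROWS = 13

def bead_value(row, bead):
    """Value of a bead: its first-row value times 20 raised to its row."""
    return FIRST_ROW[bead] * 20 ** row

total_beads = ROWS * len(FIRST_ROW)
print(total_beads)       # 91, matching the count in the text
print(bead_value(1, 0))  # 20: a unit bead one row up
print(bead_value(12, 6)) # 15 * 20**12, the largest single bead
```

The base-20 scaling per row is what gives the instrument its enormous numeric range from a modest number of beads.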
The rediscovery of the Nepōhualtzintzin was due to the Mexican engineer David Esparza Hidalgo, who in his wanderings throughout Mexico found diverse engravings and paintings of this instrument and reconstructed several of them made in gold, jade, encrustations of shell, etc. There have also been found very old Nepōhualtzintzin attributed to the Olmec culture, and even some bracelets of Mayan origin, as well as a diversity of forms and materials in other cultures.
In "Arithmetic in Maya" (Austin, Texas, 1961), George I. Sanchez found another base 5, base 4 abacus in the Yucatán Peninsula that also computed calendar data. This was a finger abacus: on one hand 0, 1, 2, 3, and 4 were used, and on the other hand 0, 1, 2 and 3 were used. Note the use of zero at the beginning and end of the two cycles. Sanchez worked with Sylvanus Morley, a noted Mayanist.
The quipu of the Incas was a system of colored knotted cords used to record numerical data, like advanced tally sticks – but not used to perform calculations. Calculations were carried out using a yupana (Quechua for "counting tool"; see figure), which was still in use after the conquest of Peru. The working principle of the yupana is unknown, but in 2001 the Italian mathematician Nicolino De Pasquale proposed an explanation of the mathematical basis of these instruments. By comparing the form of several yupanas, researchers found that calculations were based on the Fibonacci sequence 1, 1, 2, 3, 5 and on powers of 10, 20 and 40 as place values for the different fields in the instrument. Using the Fibonacci sequence would keep the number of grains within any one field at a minimum.
The Russian abacus, the "schoty" (, plural from , counting), usually has a single slanted deck, with ten beads on each wire (except one wire, usually positioned near the user, with four beads for quarter-ruble fractions). Older models have another 4-bead wire for quarter-kopeks, which were minted until 1916. The Russian abacus is often used vertically, with each wire from left to right like lines in a book. The wires are usually bowed to bulge upward in the center, to keep the beads pinned to either of the two sides. It is cleared when all the beads are moved to the right. During manipulation, beads are moved to the left. For easy viewing, the middle 2 beads on each wire (the 5th and 6th bead) usually are of a different color from the other eight beads. Likewise, the left bead of the thousands wire (and the million wire, if present) may have a different color.
As a simple, cheap and reliable device, the Russian abacus was in use in all shops and markets throughout the former Soviet Union, and its usage was taught in most schools until the 1990s. Even the 1874 invention of the Odhner arithmometer, a mechanical calculator, had not replaced it in Russia; according to Yakov Perelman, even in his times, some businessmen attempting to import such devices into the Russian Empire were known to give up and leave in despair after being shown the work of a skilled abacus operator. Likewise, the mass production of Felix arithmometers from 1924 on did not significantly reduce abacus use in the Soviet Union. The Russian abacus began to lose popularity only after the mass production of microcalculators started in the Soviet Union in 1974. Today it is regarded as an archaism and has been replaced by the handheld calculator.
The Russian abacus was brought to France around 1820 by the mathematician Jean-Victor Poncelet, who served in Napoleon's army and had been a prisoner of war in Russia. The abacus had fallen out of use in western Europe in the 16th century with the rise of decimal notation and algorismic methods. To Poncelet's French contemporaries, it was something new. Poncelet used it, not for any applied purpose, but as a teaching and demonstration aid. The Turks and the Armenian people also used abacuses similar to the Russian schoty. It was named a "coulba" by the Turks and a "choreb" by the Armenians.
Around the world, abacuses have been used in pre-schools and elementary schools as an aid in teaching the numeral system and arithmetic.
In Western countries, a bead frame similar to the Russian abacus but with straight wires and a vertical frame has been common (see image). It is still often seen as a plastic or wooden toy.
The wire frame may be used either with positional notation like other abacuses (thus the 10-wire version may represent numbers up to 9,999,999,999), or each bead may represent one unit (so that e.g. 74 can be represented by shifting all beads on 7 wires and 4 beads on the 8th wire, meaning numbers up to 100 may be represented). For teaching multiplication, 6 times 7, for example, may be represented by shifting 7 beads on 6 wires. In the bead frame shown, the gap between the 5th and 6th wire, corresponding to the color change between the 5th and the 6th bead on each wire, suggests the latter use.
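The unit-counting mode described above can be sketched in a few lines of Python. The function names are illustrative only; the bead and wire counts match the 10×10 frame in the text:

```python
def represent_units(n, beads_per_wire=10, wires=10):
    """Unit-counting mode: shift whole wires of beads, then a partial wire.

    Returns (full_wires, beads_on_next_wire), e.g. 74 -> 7 full wires
    plus 4 beads on the 8th wire.
    """
    if n > beads_per_wire * wires:
        raise ValueError("number exceeds the frame's capacity")
    return divmod(n, beads_per_wire)

def multiply_on_frame(a, b):
    """Model 'a times b' as shifting b beads on each of a wires, then counting."""
    return sum(b for _ in range(a))

print(represent_units(74))     # (7, 4)
print(multiply_on_frame(6, 7)) # 42
```

This makes explicit why the unit-counting mode tops out at 100 on a 10-wire frame, while the positional mode on the same frame reaches ten decimal digits.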
The red-and-white abacus is used in contemporary primary schools for a wide range of number-related lessons. The twenty bead version, referred to by its Dutch name "rekenrek" ("calculating frame"), is often used, sometimes on a string of beads, sometimes on a rigid framework.
By learning how to calculate with an abacus, one can improve one's mental calculation, which becomes faster and more accurate for large-number calculations. Abacus-based mental calculation (AMC) is derived from the abacus: calculation, including addition, subtraction, multiplication, and division, is carried out in the mind with an imagined abacus. It is a high-level cognitive skill that runs through calculations with an effective algorithm. People with long-term AMC training show higher numerical memory capacity and more effectively connected neural pathways. They are able to retrieve memory to deal with complex processes of calculation. AMC involves both visuospatial and visuomotor processing, which generate the visual abacus and perform the movement of the imaginary beads. Since the only thing that needs to be remembered is the final position of the beads, it takes less memory and less computation time.
The binary abacus is used to explain how computers manipulate numbers. The abacus shows how numbers, letters, and signs can be stored in a binary system on a computer, or via ASCII. The device consists of a series of beads on parallel wires arranged in three separate rows. The beads represent a switch on the computer in either an "on" or "off" position.
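The on/off bead encoding of characters mentioned above can be illustrated in Python. This is a minimal sketch; the 8-bit row width is an assumption matching one byte of standard ASCII:

```python
def ascii_beads(ch, width=8):
    """Encode a character as one row of binary-abacus beads.

    Each position is one bead on a wire: True = 'on', False = 'off',
    most significant bit first.
    """
    code = ord(ch)
    return [bool(code >> i & 1) for i in reversed(range(width))]

# 'A' is ASCII 65, i.e. 01000001 in binary
print(ascii_beads('A'))
# [False, True, False, False, False, False, False, True]
```

Reading the bead row back as a binary number recovers the character code, which is exactly the demonstration the binary abacus is built for.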
An adapted abacus, invented by Tim Cranmer, called a Cranmer abacus is still commonly used by individuals who are blind. A piece of soft fabric or rubber is placed behind the beads so that they do not move inadvertently. This keeps the beads in place while the users feel or manipulate them. They use an abacus to perform the mathematical functions multiplication, division, addition, subtraction, square root and cube root.
Although blind students have benefited from talking calculators, the abacus is still very often taught to these students in early grades, both in public schools and state schools for the blind. Blind students also complete mathematical assignments using a braille-writer and Nemeth code (a type of braille code for mathematics) but large multiplication and long division problems can be long and difficult. The abacus gives blind and visually impaired students a tool to compute mathematical problems that equals the speed and mathematical knowledge required by their sighted peers using pencil and paper. Many blind people find this number machine a very useful tool throughout life.
Acid
An acid is a molecule or ion capable of donating a proton (hydrogen ion H+) (a Brønsted–Lowry acid), or, alternatively, capable of forming a covalent bond with an electron pair (a Lewis acid).
The first category of acids are the proton donors, or Brønsted–Lowry acids. In the special case of aqueous solutions, proton donors form the hydronium ion H3O+ and are known as Arrhenius acids. Brønsted and Lowry generalized the Arrhenius theory to include non-aqueous solvents. A Brønsted or Arrhenius acid usually contains a hydrogen atom bonded to a chemical structure that is still energetically favorable after loss of H+.
Aqueous Arrhenius acids have characteristic properties which provide a practical description of an acid. Acids form aqueous solutions with a sour taste, can turn blue litmus red, and react with bases and certain metals (like calcium) to form salts. The word "acid" is derived from the Latin "acidus/acēre", meaning 'sour'. An aqueous solution of an acid has a pH less than 7 and is colloquially also referred to as "acid" (as in "dissolved in acid"), while the strict definition refers only to the solute. A lower pH means a higher acidity, and thus a higher concentration of positive hydrogen ions in the solution. Chemicals or substances having the property of an acid are said to be acidic.
Common aqueous acids include hydrochloric acid (a solution of hydrogen chloride which is found in gastric acid in the stomach and activates digestive enzymes), acetic acid (vinegar is a dilute aqueous solution of this liquid), sulfuric acid (used in car batteries), and citric acid (found in citrus fruits). As these examples show, acids (in the colloquial sense) can be solutions or pure substances, and can be derived from acids (in the strict sense) that are solids, liquids, or gases. Strong acids and some concentrated weak acids are corrosive, but there are exceptions such as carboranes and boric acid.
The second category of acids are Lewis acids, which form a covalent bond with an electron pair. An example is boron trifluoride (BF3), whose boron atom has a vacant orbital which can form a covalent bond by sharing a lone pair of electrons on an atom in a base, for example the nitrogen atom in ammonia (NH3). Lewis considered this as a generalization of the Brønsted definition, so that an acid is a chemical species that accepts electron pairs either directly "or" by releasing protons (H+) into the solution, which then accept electron pairs. However, hydrogen chloride, acetic acid, and most other Brønsted–Lowry acids cannot form a covalent bond with an electron pair and are therefore not Lewis acids. Conversely, many Lewis acids are not Arrhenius or Brønsted–Lowry acids. In modern terminology, an "acid" is implicitly a Brønsted acid and not a Lewis acid, since chemists almost always refer to a Lewis acid explicitly as "a Lewis acid".
Modern definitions are concerned with the fundamental chemical reactions common to all acids.
Most acids encountered in everyday life are aqueous solutions, or can be dissolved in water, so the Arrhenius and Brønsted-Lowry definitions are the most relevant.
The Brønsted-Lowry definition is the most widely used definition; unless otherwise specified, acid-base reactions are assumed to involve the transfer of a proton (H+) from an acid to a base.
Hydronium ions are acids according to all three definitions. Although alcohols and amines can be Brønsted-Lowry acids, they can also function as Lewis bases due to the lone pairs of electrons on their oxygen and nitrogen atoms.
The Swedish chemist Svante Arrhenius attributed the properties of acidity to hydrogen ions (H+) or protons in 1884. An Arrhenius acid is a substance that, when added to water, increases the concentration of H+ ions in the water. Note that chemists often write H+("aq") and refer to the hydrogen ion when describing acid-base reactions, but the free hydrogen nucleus, a proton, does not exist alone in water; it exists as the hydronium ion, H3O+. Thus, an Arrhenius acid can also be described as a substance that increases the concentration of hydronium ions when added to water. Examples include molecular substances such as HCl and acetic acid.
An Arrhenius base, on the other hand, is a substance which increases the concentration of hydroxide (OH−) ions when dissolved in water. This decreases the concentration of hydronium because the ions react to form H2O molecules:
H3O+(aq) + OH−(aq) ⇌ H2O(l) + H2O(l)
Due to this equilibrium, any increase in the concentration of hydronium is accompanied by a decrease in the concentration of hydroxide. Thus, an Arrhenius acid could also be said to be one that decreases hydroxide concentration, while an Arrhenius base increases it.
In an acidic solution, the concentration of hydronium ions is greater than 10−7 moles per liter. Since pH is defined as the negative logarithm of the concentration of hydronium ions, acidic solutions thus have a pH of less than 7.
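The pH definition above translates directly into a one-line computation; the concentrations below are arbitrary illustrative values.

```python
import math

def pH(hydronium_molar):
    """pH is the negative base-10 logarithm of the hydronium
    concentration [H3O+] in moles per liter."""
    return -math.log10(hydronium_molar)

print(pH(1e-7))   # 7.0  (neutral: exactly 10^-7 mol/L)
print(pH(1e-3))   # 3.0  (acidic: [H3O+] > 10^-7 mol/L, pH < 7)
print(pH(0.025))  # ~1.60
```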
While the Arrhenius concept is useful for describing many reactions, it is also quite limited in its scope. In 1923 chemists Johannes Nicolaus Brønsted and Thomas Martin Lowry independently recognized that acid-base reactions involve the transfer of a proton. A Brønsted-Lowry acid (or simply Brønsted acid) is a species that donates a proton to a Brønsted-Lowry base. Brønsted-Lowry acid-base theory has several advantages over Arrhenius theory. Consider the following reactions of acetic acid (CH3COOH), the organic acid that gives vinegar its characteristic taste:
Both theories easily describe the first reaction: CH3COOH acts as an Arrhenius acid because it acts as a source of H3O+ when dissolved in water, and it acts as a Brønsted acid by donating a proton to water. In the second example CH3COOH undergoes the same transformation, in this case donating a proton to ammonia (NH3), but does not relate to the Arrhenius definition of an acid because the reaction does not produce hydronium. Nevertheless, CH3COOH is both an Arrhenius and a Brønsted-Lowry acid.
Brønsted-Lowry theory can be used to describe reactions of molecular compounds in nonaqueous solution or the gas phase. Hydrogen chloride (HCl) and ammonia combine under several different conditions to form ammonium chloride, NH4Cl. In aqueous solution HCl behaves as hydrochloric acid and exists as hydronium and chloride ions. The following reactions illustrate the limitations of Arrhenius's definition:
As with the acetic acid reactions, both definitions work for the first example, where water is the solvent and hydronium ion is formed by the HCl solute. The next two reactions do not involve the formation of ions but are still proton-transfer reactions. In the second reaction hydrogen chloride and ammonia (dissolved in benzene) react to form solid ammonium chloride in a benzene solvent and in the third gaseous HCl and NH3 combine to form the solid.
A third, only marginally related concept was proposed in 1923 by Gilbert N. Lewis, which includes reactions with acid-base characteristics that do not involve a proton transfer. A Lewis acid is a species that accepts a pair of electrons from another species; in other words, it is an electron pair acceptor. Brønsted acid-base reactions are proton transfer reactions while Lewis acid-base reactions are electron pair transfers. Many Lewis acids are not Brønsted-Lowry acids. Contrast how the following reactions are described in terms of acid-base chemistry:
In the first reaction a fluoride ion, F−, gives up an electron pair to boron trifluoride to form the product tetrafluoroborate. Fluoride "loses" a pair of valence electrons because the electrons shared in the B—F bond are located in the region of space between the two atomic nuclei and are therefore more distant from the fluoride nucleus than they are in the lone fluoride ion. BF3 is a Lewis acid because it accepts the electron pair from fluoride. This reaction cannot be described in terms of Brønsted theory because there is no proton transfer. The second reaction can be described using either theory. A proton is transferred from an unspecified Brønsted acid to ammonia, a Brønsted base; alternatively, ammonia acts as a Lewis base and transfers a lone pair of electrons to form a bond with a hydrogen ion. The species that gains the electron pair is the Lewis acid; for example, the oxygen atom in H3O+ gains a pair of electrons when one of the H—O bonds is broken and the electrons shared in the bond become localized on oxygen. Depending on the context, a Lewis acid may also be described as an oxidizer or an electrophile. Organic Brønsted acids, such as acetic, citric, or oxalic acid, are not Lewis acids. They dissociate in water to produce a Lewis acid, H+, but at the same time also yield an equal amount of a Lewis base (acetate, citrate, or oxalate, respectively, for the acids mentioned). Few, if any, of the acids discussed in the following are Lewis acids.
Reactions of acids are often generalized in the form HA ⇌ H+ + A−, where HA represents the acid and A− is the conjugate base. This reaction is referred to as protolysis. The protonated form (HA) of an acid is also sometimes referred to as the free acid.
Acid-base conjugate pairs differ by one proton, and can be interconverted by the addition or removal of a proton (protonation and deprotonation, respectively). Note that the acid can be the charged species and the conjugate base can be neutral, in which case the generalized reaction scheme could be written as HA+ ⇌ H+ + A. In solution there exists an equilibrium between the acid and its conjugate base. The equilibrium constant "K" is an expression of the equilibrium concentrations of the molecules or the ions in solution. Brackets indicate concentration, such that [H2O] means "the concentration of H2O". The acid dissociation constant "K"a is generally used in the context of acid-base reactions. The numerical value of "K"a is equal to the product of the concentrations of the products divided by the concentration of the reactants, where the reactant is the acid (HA) and the products are the conjugate base and H+.
The stronger of two acids will have a higher "K"a than the weaker acid; the ratio of hydrogen ions to acid will be higher for the stronger acid as the stronger acid has a greater tendency to lose its proton. Because the range of possible values for "K"a spans many orders of magnitude, a more manageable constant, p"K"a is more frequently used, where p"K"a = −log10 "K"a. Stronger acids have a smaller p"K"a than weaker acids. Experimentally determined p"K"a at 25 °C in aqueous solution are often quoted in textbooks and reference material.
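The Ka/pKa relationship can be sketched as follows; the Ka values used are common textbook figures for acetic and formic acid at 25 °C, quoted here for illustration.

```python
import math

def pKa(Ka):
    """pKa = -log10(Ka); stronger acids have larger Ka, smaller pKa."""
    return -math.log10(Ka)

def Ka_from_pKa(pka):
    """Invert the definition: Ka = 10^(-pKa)."""
    return 10 ** -pka

acids = {"acetic acid": 1.8e-5, "formic acid": 1.8e-4}
for name, ka in sorted(acids.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: Ka = {ka:.1e}, pKa = {pKa(ka):.2f}")
# The stronger acid (formic) is listed first, with the smaller pKa.
```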
Arrhenius acids are named according to their anions. In the classical naming system, the ionic suffix is dropped and replaced with a new suffix, according to the table following. The prefix "hydro-" is used when the acid is made up of just hydrogen and one other element. For example, HCl has chloride as its anion, so the hydro- prefix is used, and the -ide suffix makes the name take the form hydrochloric acid.
"Classical naming system:"
In the IUPAC naming system, "aqueous" is simply added to the name of the ionic compound. Thus, for hydrogen chloride, as an acid solution, the IUPAC name is aqueous hydrogen chloride.
The strength of an acid refers to its ability or tendency to lose a proton. A strong acid is one that completely dissociates in water; in other words, one mole of a strong acid HA dissolves in water yielding one mole of H+ and one mole of the conjugate base, A−, and none of the protonated acid HA. In contrast, a weak acid only partially dissociates and at equilibrium both the acid and the conjugate base are in solution. Examples of strong acids are hydrochloric acid (HCl), hydroiodic acid (HI), hydrobromic acid (HBr), perchloric acid (HClO4), nitric acid (HNO3) and sulfuric acid (H2SO4). In water each of these essentially ionizes 100%. The stronger an acid is, the more easily it loses a proton, H+. Two key factors that contribute to the ease of deprotonation are the polarity of the H—A bond and the size of atom A, which determines the strength of the H—A bond. Acid strengths are also often discussed in terms of the stability of the conjugate base.
Stronger acids have a larger "K"a and a more negative p"K"a than weaker acids.
Sulfonic acids, which are organic oxyacids, are a class of strong acids. A common example is toluenesulfonic acid (tosylic acid). Unlike sulfuric acid itself, sulfonic acids can be solids. In fact, polystyrene functionalized into polystyrene sulfonate is a solid strongly acidic plastic that is filterable.
Superacids are acids stronger than 100% sulfuric acid. Examples of superacids are fluoroantimonic acid, magic acid and perchloric acid. Superacids can permanently protonate water to give ionic, crystalline hydronium "salts". They can also quantitatively stabilize carbocations.
While "K"a measures the strength of an acid compound, the strength of an aqueous acid solution is measured by pH, which is an indication of the concentration of hydronium in the solution. The pH of a simple solution of an acid compound in water is determined by the dilution of the compound and the compound's "K"a.
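How pH follows from the dilution and the Ka of the compound can be sketched for a monoprotic weak acid by solving the dissociation equilibrium exactly for [H+]; this is the standard textbook approximation that neglects water autoionization, and the numeric Ka is an illustrative value for acetic acid.

```python
import math

def weak_acid_pH(C, Ka):
    """pH of a monoprotic weak acid HA at formal concentration C (mol/L).
    Solves Ka = x^2 / (C - x) for x = [H+], i.e. the positive root of
    x^2 + Ka*x - Ka*C = 0 (water autoionization neglected)."""
    x = (-Ka + math.sqrt(Ka * Ka + 4 * Ka * C)) / 2
    return -math.log10(x)

# 0.10 M acetic acid (Ka ~ 1.8e-5) comes out around pH 2.88
print(round(weak_acid_pH(0.10, 1.8e-5), 2))
```

Diluting the same acid raises the pH: the same function with C = 0.010 gives a value closer to 3.4.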
Monoprotic acids, also known as monobasic acids, are those acids that are able to donate one proton per molecule during the process of dissociation (sometimes called ionization) as shown below (symbolized by HA):
Common examples of monoprotic acids in mineral acids include hydrochloric acid (HCl) and nitric acid (HNO3). On the other hand, for organic acids the term mainly indicates the presence of one carboxylic acid group and sometimes these acids are known as monocarboxylic acid. Examples in organic acids include formic acid (HCOOH), acetic acid (CH3COOH) and benzoic acid (C6H5COOH).
Polyprotic acids, also known as polybasic acids, are able to donate more than one proton per acid molecule, in contrast to monoprotic acids that only donate one proton per molecule. Specific types of polyprotic acids have more specific names, such as diprotic (or dibasic) acid (two potential protons to donate), and triprotic (or tribasic) acid (three potential protons to donate).
A diprotic acid (here symbolized by H2A) can undergo one or two dissociations depending on the pH. Each dissociation has its own dissociation constant, Ka1 and Ka2.
The first dissociation constant is typically greater than the second; i.e., "K"a1 > "K"a2. For example, sulfuric acid (H2SO4) can donate one proton to form the bisulfate anion (HSO4−), for which "K"a1 is very large; then it can donate a second proton to form the sulfate anion (SO42−), for which "K"a2 is of intermediate strength. The large "K"a1 for the first dissociation makes sulfuric a strong acid. In a similar manner, the weak unstable carbonic acid can lose one proton to form the bicarbonate anion and lose a second to form the carbonate anion (CO32−). Both "K"a values are small, but "K"a1 > "K"a2.
A triprotic acid (H3A) can undergo one, two, or three dissociations and has three dissociation constants, where "K"a1 > "K"a2 > "K"a3.
An inorganic example of a triprotic acid is orthophosphoric acid (H3PO4), usually just called phosphoric acid. All three protons can be successively lost to yield H2PO4−, then HPO42−, and finally PO43−, the orthophosphate ion, usually just called phosphate. Even though the positions of the three protons on the original phosphoric acid molecule are equivalent, the successive "K"a values differ since it is energetically less favorable to lose a proton if the conjugate base is more negatively charged. An organic example of a triprotic acid is citric acid, which can successively lose three protons to finally form the citrate ion.
Although the subsequent loss of each hydrogen ion is less favorable, all of the conjugate bases are present in solution. The fractional concentration, "α" (alpha), for each species can be calculated. For example, a generic diprotic acid will generate 3 species in solution: H2A, HA−, and A2−. The fractional concentrations can be calculated as below when given either the pH (which can be converted to the [H+]) or the concentrations of the acid with all its conjugate bases:
A plot of these fractional concentrations against pH, for given "K"1 and "K"2, is known as a Bjerrum plot. A pattern is observed in the above equations and can be expanded to the general "n"-protic acid that has been deprotonated "i" times:
\alpha_{\ce{H_{n-i}A^{i-}}} = \frac{[\ce{H+}]^{n-i} \prod_{j=0}^{i} K_j}{\sum_{k=0}^{n} [\ce{H+}]^{n-k} \prod_{j=0}^{k} K_j}, \quad \text{where } K_0 = 1
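For the diprotic case (n = 2) the fractional concentrations reduce to three closed-form expressions, sketched below; the dissociation constants used are hypothetical round numbers chosen for illustration, not those of any particular acid.

```python
def diprotic_fractions(H, Ka1, Ka2):
    """Fractional concentrations alpha of H2A, HA-, and A2- at
    hydrogen-ion concentration H (mol/L), from the general n-protic
    speciation formula specialized to n = 2."""
    denom = H * H + Ka1 * H + Ka1 * Ka2
    return (H * H / denom,      # alpha(H2A)
            Ka1 * H / denom,    # alpha(HA-)
            Ka1 * Ka2 / denom)  # alpha(A2-)

# Hypothetical constants: Ka1 = 1e-3, Ka2 = 1e-7, evaluated at pH 5
a = diprotic_fractions(H=1e-5, Ka1=1e-3, Ka2=1e-7)
print([round(x, 2) for x in a])  # the three fractions sum to 1
```

Midway between the two pKa values the intermediate species HA− dominates, which is exactly the hump seen in a Bjerrum plot.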
Asphalt
Asphalt, also known as bitumen (, ), is a sticky, black, highly viscous liquid or semi-solid form of petroleum. It may be found in natural deposits or may be a refined product, and is classed as a pitch. Before the 20th century, the term asphaltum was also used. The word is derived from the Ancient Greek ἄσφαλτος "ásphaltos". The Pitch Lake is the largest natural deposit of asphalt in the world, estimated to contain 10 million tons. It is located in La Brea in southwest Trinidad, within the Siparia Regional Corporation.
The primary use (70%) of asphalt is in road construction, where it is used as the glue or binder mixed with aggregate particles to create asphalt concrete. Its other main uses are for bituminous waterproofing products, including production of roofing felt and for sealing flat roofs.
In material sciences and engineering, the terms "asphalt" and "bitumen" are often used interchangeably to mean both natural and manufactured forms of the substance, although there is regional variation as to which term is most common. Worldwide, geologists tend to favor the term "bitumen" for the naturally occurring material. For the manufactured material, which is a refined residue from the distillation process of selected crude oils, "bitumen" is the prevalent term in much of the world; however, in American English, "asphalt" is more commonly used. To help avoid confusion, the phrase "liquid asphalt", "asphalt binder", or "asphalt cement" is used in the U.S. Colloquially, various forms of asphalt are sometimes referred to as "tar", as in the name of the La Brea Tar Pits, although tar is a different material.
Naturally occurring asphalt is sometimes specified by the term "crude bitumen". Its viscosity is similar to that of cold molasses while the material obtained from the fractional distillation of crude oil boiling at is sometimes referred to as "refined bitumen". The Canadian province of Alberta has most of the world's reserves of natural asphalt in the Athabasca oil sands, which cover , an area larger than England.
Asphalt properties change with temperature, which means that there is a specific temperature range in which its viscosity permits adequate compaction by providing lubrication between particles during the compaction process. At low temperatures the aggregate particles cannot move, and the required density cannot be achieved.
The word "asphalt" is derived from the late Middle English, in turn from French "asphalte", based on Late Latin "asphalton", "asphaltum", which is the latinisation of the Greek ("ásphaltos", "ásphalton"), a word meaning "asphalt/bitumen/pitch", which perhaps derives from , "not, without", i.e. the alpha privative, and ("sphallein"), "to cause to fall, baffle, (in passive) err, (in passive) be balked of". The first use of asphalt by the ancients was in the nature of a cement for securing or joining together various objects, and it thus seems likely that the name itself was expressive of this application. Specifically, Herodotus mentioned that bitumen was brought to Babylon to build its gigantic fortification wall. From the Greek, the word passed into late Latin, and thence into French ("asphalte") and English ("asphaltum" and "asphalt"). In French, the term "asphalte" is used for naturally occurring asphalt-soaked limestone deposits, and for specialised manufactured products with fewer voids or greater bitumen content than the "asphaltic concrete" used to pave roads.
The expression "bitumen" originated in the Sanskrit words "jatu", meaning "pitch", and "jatu-krit", meaning "pitch creating" or "pitch producing" (referring to coniferous or resinous trees). The Latin equivalent is claimed by some to be originally "gwitu-men" (pertaining to pitch), and by others, "pixtumens" (exuding or bubbling pitch), which was subsequently shortened to "bitumen", thence passing via French into English. From the same root is derived the Anglo-Saxon word "cwidu" (mastix), the German word "Kitt" (cement or mastic) and the old Norse word "kvada".
In British English, "bitumen" is used instead of "asphalt". The word "asphalt" is instead used to refer to asphalt concrete, a mixture of construction aggregate and asphalt itself (also called "tarmac" in common parlance). Bitumen mixed with clay was usually called "asphaltum", but the term is less commonly used today.
In Australian English, the word "asphalt" is used to describe a mix of construction aggregate. "Bitumen" refers to the liquid derived from the heavy residues of crude oil distillation.
In American English, "asphalt" is equivalent to the British "bitumen". However, "asphalt" is also commonly used as a shortened form of "asphalt concrete" (therefore equivalent to the British "asphalt" or "tarmac").
In Canadian English, the word "bitumen" is used to refer to the vast Canadian deposits of extremely heavy crude oil, while "asphalt" is used for the oil refinery product. Diluted bitumen (diluted with naphtha to make it flow in pipelines) is known as "dilbit" in the Canadian petroleum industry, while bitumen "upgraded" to synthetic crude oil is known as "syncrude", and syncrude blended with bitumen is called "synbit".
"Bitumen" is still the preferred geological term for naturally occurring deposits of the solid or semi-solid form of petroleum. "Bituminous rock" is a form of sandstone impregnated with bitumen. The oil sands of Alberta, Canada are a similar material.
Neither "asphalt" nor "bitumen" should be confused with tar or coal tars. Tar is the thick liquid product of the dry distillation and pyrolysis of organic hydrocarbons primarily sourced from vegetation masses, whether fossilized as with coal, or freshly harvested. The majority of bitumen, on the other hand, was formed naturally when vast quantities of organic animal materials were deposited by water and buried hundreds of metres deep at the diagenetic point, where the disorganized fatty hydrocarbon molecules joined together in long chains in the absence of oxygen. Bitumen occurs as a solid or highly viscous liquid. It may even be mixed in with coal deposits. Bitumen, and coal using the Bergius process, can be refined into petrols such as gasoline, and bitumen may be distilled into tar, not the other way around.
The components of asphalt include four main classes of compounds:
The naphthene aromatics and polar aromatics are typically the majority components. Most natural bitumens also contain organosulfur compounds, resulting in an overall sulfur content of up to 4%. Nickel and vanadium are found at <10 parts per million, as is typical of some petroleum.
The substance is soluble in carbon disulfide. It is commonly modelled as a colloid, with asphaltenes as the dispersed phase and maltenes as the continuous phase. "It is almost impossible to separate and identify all the different molecules of asphalt, because the number of molecules with different chemical structure is extremely large".
Asphalt may be confused with coal tar, which is a visually similar black, thermoplastic material produced by the destructive distillation of coal. During the early and mid-20th century, when town gas was produced, coal tar was a readily available byproduct and extensively used as the binder for road aggregates. The addition of coal tar to macadam roads led to the word "tarmac", which is now used in common parlance to refer to road-making materials. However, since the 1970s, when natural gas succeeded town gas, asphalt has completely overtaken the use of coal tar in these applications. Other examples of this confusion include the La Brea Tar Pits and the Canadian oil sands, both of which actually contain natural bitumen rather than tar. "Pitch" is another term sometimes informally used to refer to asphalt, as in Pitch Lake.
For economic and other reasons, asphalt is sometimes sold combined with other materials, often without being labeled as anything other than simply "asphalt".
Of particular note is the use of re-refined engine oil bottoms – "REOB" or "REOBs"—the residue of recycled automotive engine oil collected from the bottoms of re-refining vacuum distillation towers, in the manufacture of asphalt. REOB contains various elements and compounds found in recycled engine oil: additives to the original oil and materials accumulating from its circulation in the engine (typically iron and copper). Some research has indicated a correlation between this adulteration of asphalt and poorer-performing pavement.
The majority of asphalt used commercially is obtained from petroleum. Nonetheless, large amounts of asphalt occur in concentrated form in nature. Naturally occurring deposits of bitumen are formed from the remains of ancient, microscopic algae (diatoms) and other once-living things. These remains were deposited in the mud on the bottom of the ocean or lake where the organisms lived. Under the heat (above 50 °C) and pressure of burial deep in the earth, the remains were transformed into materials such as bitumen, kerogen, or petroleum.
Natural deposits of bitumen include lakes such as the Pitch Lake in Trinidad and Tobago and Lake Bermudez in Venezuela. Natural seeps occur in the La Brea Tar Pits and in the Dead Sea.
Bitumen also occurs in unconsolidated sandstones known as "oil sands" in Alberta, Canada, and the similar "tar sands" in Utah, US.
The Canadian province of Alberta has most of the world's reserves, in three huge deposits covering , an area larger than England or New York state. These bituminous sands contain of commercially established oil reserves, giving Canada the third largest oil reserves in the world. Although historically it was used without refining to pave roads, nearly all of the output is now used as raw material for oil refineries in Canada and the United States.
The world's largest deposit of natural bitumen, known as the Athabasca oil sands, is located in the McMurray Formation of Northern Alberta. This formation is from the early Cretaceous, and is composed of numerous lenses of oil-bearing sand with up to 20% oil. Isotopic studies show the oil deposits to be about 110 million years old. Two smaller but still very large formations occur in the Peace River oil sands and the Cold Lake oil sands, to the west and southeast of the Athabasca oil sands, respectively. Of the Alberta deposits, only parts of the Athabasca oil sands are shallow enough to be suitable for surface mining. The other 80% has to be produced by oil wells using enhanced oil recovery techniques like steam-assisted gravity drainage.
Much smaller heavy oil or bitumen deposits also occur in the Uinta Basin in Utah, US. The Tar Sand Triangle deposit, for example, is roughly 6% bitumen.
Bitumen may occur in hydrothermal veins. An example of this is within the Uinta Basin of Utah, in the US, where there is a swarm of laterally and vertically extensive veins composed of a solid hydrocarbon termed Gilsonite. These veins formed by the polymerization and solidification of hydrocarbons that were mobilized from the deeper oil shales of the Green River Formation during burial and diagenesis.
Bitumen is similar to the organic matter in carbonaceous meteorites. However, detailed studies have shown these materials to be distinct. The vast Alberta bitumen resources are considered to have started out as living material from marine plants and animals, mainly algae, that died millions of years ago when an ancient ocean covered Alberta. They were covered by mud, buried deeply over time, and gently cooked into oil by geothermal heat at a temperature of . Due to pressure from the rising of the Rocky Mountains in southwestern Alberta, 80 to 55 million years ago, the oil was driven northeast hundreds of kilometres and trapped into underground sand deposits left behind by ancient river beds and ocean beaches, thus forming the oil sands.
The use of natural bitumen for waterproofing, and as an adhesive dates at least to the fifth millennium BC, with a crop storage basket discovered in Mehrgarh, of the Indus Valley Civilization, lined with it. By the 3rd millennium BC refined rock asphalt was in use in the region, and was used to waterproof the Great Bath in Mohenjo-daro.
In the ancient Middle East, the Sumerians used natural bitumen deposits for mortar between bricks and stones, to cement parts of carvings, such as eyes, into place, for ship caulking, and for waterproofing. The Greek historian Herodotus said hot bitumen was used as mortar in the walls of Babylon.
The long Euphrates Tunnel beneath the river Euphrates at Babylon in the time of Queen Semiramis (c. 800 BC) was reportedly constructed of burnt bricks covered with bitumen as a waterproofing agent.
Bitumen was used by ancient Egyptians to embalm mummies. The Persian word for asphalt is "moom", which is related to the English word mummy. The Egyptians' primary source of bitumen was the Dead Sea, which the Romans knew as "Palus Asphaltites" (Asphalt Lake).
In approximately 40 AD, Dioscorides described the Dead Sea material as "Judaicum bitumen", and noted other places in the region where it could be found. The Sidon bitumen is thought to refer to material found at Hasbeya in Lebanon. Pliny also refers to bitumen being found in Epirus. Bitumen was a valuable strategic resource. It was the object of the first known battle for a hydrocarbon deposit – between the Seleucids and the Nabateans in 312 BC.
In the ancient Far East, natural bitumen was slowly boiled to get rid of the higher fractions, leaving a thermoplastic material of higher molecular weight that when layered on objects became quite hard upon cooling. This was used to cover objects that needed waterproofing, such as scabbards and other items. Statuettes of household deities were also cast with this type of material in Japan, and probably also in China.
In North America, archaeological recovery has indicated that bitumen was sometimes used to adhere stone projectile points to wooden shafts. In Canada, aboriginal people used bitumen seeping out of the banks of the Athabasca and other rivers to waterproof birch bark canoes, and also heated it in smudge pots to ward off mosquitoes in the summer.
In 1553, Pierre Belon described in his work "Observations" that "pissasphalto", a mixture of pitch and bitumen, was used in the Republic of Ragusa (now Dubrovnik, Croatia) for tarring of ships.
An 1838 edition of "Mechanics Magazine" cites an early use of asphalt in France. A pamphlet dated 1621, by "a certain Monsieur d'Eyrinys, states that he had discovered the existence (of asphaltum) in large quantities in the vicinity of Neufchatel", and that he proposed to use it in a variety of ways – "principally in the construction of air-proof granaries, and in protecting, by means of the arches, the water-courses in the city of Paris from the intrusion of dirt and filth", which at that time made the water unusable. "He expatiates also on the excellence of this material for forming level and durable terraces" in palaces, "the notion of forming such terraces in the streets not one likely to cross the brain of a Parisian of that generation".
But the substance was generally neglected in France until the revolution of 1830. In the 1830s there was a surge of interest, and asphalt became widely used "for pavements, flat roofs, and the lining of cisterns, and in England, some use of it had been made of it for similar purposes". Its rise in Europe was "a sudden phenomenon", after natural deposits were found "in France at Osbann (Bas-Rhin), the Parc (Ain) and the Puy-de-la-Poix (Puy-de-Dôme)", although it could also be made artificially. One of the earliest uses in France was the laying of about 24,000 square yards of Seyssel asphalt at the Place de la Concorde in 1835.
Among the earlier uses of bitumen in the United Kingdom was for etching. William Salmon's "Polygraphice" (1673) provides a recipe for varnish used in etching, consisting of three ounces of virgin wax, two ounces of mastic, and one ounce of asphaltum. By the fifth edition in 1685, he had included more asphaltum recipes from other sources.
The first British patent for the use of asphalt was "Cassell's patent asphalte or bitumen" in 1834. Then on 25 November 1837, Richard Tappin Claridge patented the use of Seyssel asphalt (patent #7849), for use in asphalte pavement, having seen it employed in France and Belgium when visiting with Frederick Walter Simms, who worked with him on the introduction of asphalt to Britain. Dr T. Lamb Phipson writes that his father, Samuel Ryland Phipson, a friend of Claridge, was also "instrumental in introducing the asphalte pavement (in 1836)".
Claridge obtained a patent in Scotland on 27 March 1838, and obtained a patent in Ireland on 23 April 1838. In 1851, extensions for the 1837 patent and for both 1838 patents were sought by the trustees of a company previously formed by Claridge. "Claridge's Patent Asphalte Company"—formed in 1838 for the purpose of introducing to Britain "Asphalte in its natural state from the mine at Pyrimont Seysell in France",—"laid one of the first asphalt pavements in Whitehall". Trials were made of the pavement in 1838 on the footway in Whitehall, the stable at Knightsbridge Barracks, "and subsequently on the space at the bottom of the steps leading from Waterloo Place to St. James Park". "The formation in 1838 of Claridge's Patent Asphalte Company (with a distinguished list of aristocratic patrons, and Marc and Isambard Brunel as, respectively, a trustee and consulting engineer), gave an enormous impetus to the development of a British asphalt industry". "By the end of 1838, at least two other companies, Robinson's and the Bastenne company, were in production", with asphalt being laid as paving at Brighton, Herne Bay, Canterbury, Kensington, the Strand, and a large floor area in Bunhill-row, while meantime Claridge's Whitehall paving "continue(d) in good order".
In 1838, there was a flurry of entrepreneurial activity involving asphalt, which had uses beyond paving. For example, asphalt could also be used for flooring, damp proofing in buildings, and for waterproofing of various types of pools and baths, both of which were also proliferating in the 19th century. On the London stockmarket, there were various claims as to the exclusivity of asphalt quality from France, Germany and England. And numerous patents were granted in France, with similar numbers of patent applications being denied in England due to their similarity to each other. In England, "Claridge's was the type most used in the 1840s and 50s".
In 1914, Claridge's Company entered into a joint venture to produce tar-bound macadam, with materials manufactured through a subsidiary company called Clarmac Roads Ltd. Two products resulted, namely "Clarmac", and "Clarphalte", with the former being manufactured by Clarmac Roads and the latter by Claridge's Patent Asphalte Co., although "Clarmac" was more widely used. However, the First World War ruined the Clarmac Company, which entered into liquidation in 1915. The failure of Clarmac Roads Ltd had a flow-on effect to Claridge's Company, which was itself compulsorily wound up, ceasing operations in 1917, having invested a substantial amount of funds into the new venture, both at the outset and in a subsequent attempt to save the Clarmac Company.
Bitumen was thought in 19th century Britain to contain chemicals with medicinal properties. Extracts from bitumen were used to treat catarrh and some forms of asthma and as a remedy against worms, especially the tapeworm.
The first use of bitumen in the New World was by indigenous peoples. On the west coast, as early as the 13th century, the Tongva, Luiseño and Chumash peoples collected the naturally occurring bitumen that seeped to the surface above underlying petroleum deposits. All three groups used the substance as an adhesive, and it is found on many artifacts, including tools and ceremonial items. For example, it was used on rattles to adhere gourds or turtle shells to rattle handles, and in decoration: small round shell beads were often set in asphaltum. It was used as a sealant on baskets to make them watertight for carrying water, possibly poisoning those who drank the water. Asphalt was also used to seal the planks on ocean-going canoes.
Asphalt was first used to pave streets in the 1870s. At first naturally occurring "bituminous rock" was used, such as at Ritchie Mines in Macfarlan in Ritchie County, West Virginia from 1852 to 1873. In 1876, asphalt-based paving was used to pave Pennsylvania Avenue in Washington DC, in time for the celebration of the national centennial.
In the horse-drawn era, US streets were mostly unpaved and covered with dirt or gravel. Especially where mud or trenching often made streets difficult to pass, pavements were sometimes made of diverse materials including wooden planks, cobble stones or other stone blocks, or bricks. Unpaved roads produced uneven wear and hazards for pedestrians. In the late 19th century with the rise of the popular bicycle, bicycle clubs were important in pushing for more general pavement of streets. Advocacy for pavement increased in the early 20th century with the rise of the automobile. Asphalt gradually became an ever more common method of paving. St. Charles Avenue in New Orleans was paved its whole length with asphalt by 1889.
In 1900, Manhattan alone had 130,000 horses, pulling streetcars, wagons, and carriages, and leaving their waste behind. They were not fast, and pedestrians could dodge and scramble their way across the crowded streets. Small towns continued to rely on dirt and gravel, but larger cities wanted much better streets. They looked to wood or granite blocks by the 1850s. In 1890, a third of Chicago's 2000 miles of streets were paved, chiefly with wooden blocks, which gave better traction than mud. Brick surfacing was a good compromise, but even better was asphalt paving, which was easy to install and to cut through to get at sewers. With London and Paris serving as models, Washington laid 400,000 square yards of asphalt paving by 1882; it became the model for Buffalo, Philadelphia and elsewhere. By the end of the century, American cities boasted 30 million square yards of asphalt paving, well ahead of brick. Traffic became faster and more dangerous, so electric traffic lights were installed. Electric trolleys (at 12 miles per hour) became the main transportation service for middle-class shoppers and office workers until they bought automobiles after 1945 and commuted from more distant suburbs in privacy and comfort on asphalt highways.
Canada has the world's largest deposit of natural bitumen in the Athabasca oil sands, and Canadian First Nations along the Athabasca River had long used it to waterproof their canoes. In 1719, a Cree named Wa-Pa-Su brought a sample for trade to Henry Kelsey of the Hudson's Bay Company, who was the first recorded European to see it. However, it wasn't until 1787 that fur trader and explorer Alexander MacKenzie saw the Athabasca oil sands and said, "At about 24 miles from the fork (of the Athabasca and Clearwater Rivers) are some bituminous fountains into which a pole of 20 feet long may be inserted without the least resistance."
The value of the deposit was obvious from the start, but the means of extracting the bitumen was not. The nearest town, Fort McMurray, Alberta, was a small fur trading post, other markets were far away, and transportation costs were too high to ship the raw bituminous sand for paving. In 1915, Sidney Ells of the Federal Mines Branch experimented with separation techniques and used the product to pave 600 feet of road in Edmonton, Alberta. Other roads in Alberta were paved with material extracted from oil sands, but it was generally not economic. During the 1920s Dr. Karl A. Clark of the Alberta Research Council patented a hot water oil separation process and entrepreneur Robert C. Fitzsimmons built the Bitumount oil separation plant, which between 1925 and 1958 produced up to per day of bitumen using Dr. Clark's method. Most of the bitumen was used for waterproofing roofs, but other uses included fuels, lubrication oils, printers ink, medicines, rust- and acid-proof paints, fireproof roofing, street paving, patent leather, and fence post preservatives. Eventually Fitzsimmons ran out of money and the plant was taken over by the Alberta government. Today the Bitumount plant is a Provincial Historic Site.
Bitumen was used in early photographic technology. In 1826 or 1827, it was used by French scientist Joseph Nicéphore Niépce to make the oldest surviving photograph from nature. The bitumen was thinly coated onto a pewter plate which was then exposed in a camera. Exposure to light hardened the bitumen and made it insoluble, so that when it was subsequently rinsed with a solvent only the sufficiently light-struck areas remained. Many hours of exposure in the camera were required, making bitumen impractical for ordinary photography, but from the 1850s to the 1920s it was in common use as a photoresist in the production of printing plates for various photomechanical printing processes.
Bitumen was the nemesis of many artists during the 19th century. Although widely used for a time, it ultimately proved unstable for use in oil painting, especially when mixed with the most common diluents, such as linseed oil, varnish and turpentine. Unless thoroughly diluted, bitumen never fully solidifies and will in time corrupt the other pigments with which it comes into contact. The use of bitumen as a glaze to set in shadow or mixed with other colors to render a darker tone resulted in the eventual deterioration of many paintings, for instance those of Delacroix. Perhaps the most famous example of the destructiveness of bitumen is Théodore Géricault's Raft of the Medusa (1818–1819), where his use of bitumen caused the brilliant colors to degenerate into dark greens and blacks and the paint and canvas to buckle.
The vast majority of refined asphalt is used in construction: primarily as a constituent of products used in paving and roofing applications. According to the requirements of the end use, asphalt is produced to specification. This is achieved either by refining or blending. It is estimated that the current world use of asphalt is approximately 102 million tonnes per year. Approximately 85% of all the asphalt produced is used as the binder in asphalt concrete for roads. It is also used in other paved areas such as airport runways, car parks and footways. Typically, the production of asphalt concrete involves mixing fine and coarse aggregates such as sand, gravel and crushed rock with asphalt, which acts as the binding agent. Other materials, such as recycled polymers (e.g., rubber tyres), may be added to the asphalt to modify its properties according to the application for which the asphalt is ultimately intended.
A further 10% of global asphalt production is used in roofing applications, where its waterproofing qualities are invaluable.
The remaining 5% of asphalt is used mainly for sealing and insulating purposes in a variety of building materials, such as pipe coatings, carpet tile backing and paint. Asphalt is applied in the construction and maintenance of many structures, systems, and components.
The largest use of asphalt is for making asphalt concrete for road surfaces; this accounts for approximately 85% of the asphalt consumed in the United States. There are about 4,000 asphalt concrete mixing plants in the US, and a similar number in Europe.
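The application split described above (roughly 85% paving binder, 10% roofing, 5% sealing and other uses) can be turned into approximate annual tonnages using the ~102 million tonnes per year figure quoted earlier. A minimal sketch of that arithmetic:

```python
# Approximate breakdown of world asphalt use by application,
# based on the figures quoted in the text (~102 Mt/year total).
WORLD_USE_MT = 102  # million tonnes per year

shares = {
    "paving binder": 0.85,
    "roofing": 0.10,
    "sealing/insulating and other": 0.05,
}

for application, share in shares.items():
    print(f"{application}: ~{WORLD_USE_MT * share:.1f} Mt/year")
```

This yields roughly 86.7 Mt for paving, 10.2 Mt for roofing, and 5.1 Mt for other uses each year.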
Asphalt concrete pavement mixes are typically composed of 5% asphalt cement and 95% aggregates (stone, sand, and gravel). Due to its highly viscous nature, asphalt cement must be heated so it can be mixed with the aggregates at the asphalt mixing facility. The temperature required varies depending upon characteristics of the asphalt and the aggregates, but warm-mix asphalt technologies allow producers to reduce the temperature required.
The weight of an asphalt pavement depends upon the aggregate type, the asphalt, and the air void content. An average example in the United States is about 112 pounds per square yard, per inch of pavement thickness.
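The rule of thumb above (about 112 pounds per square yard per inch of thickness) lets one estimate the tonnage of a paving job. A minimal sketch, with illustrative lane dimensions that are not from the text:

```python
# Estimate asphalt pavement weight from the ~112 lb/yd²/inch rule of thumb.
LB_PER_SQYD_PER_INCH = 112

def pavement_weight_tons(length_ft, width_ft, thickness_in):
    """Return pavement weight in US short tons (2000 lb)."""
    area_sqyd = (length_ft * width_ft) / 9.0   # 9 ft² per yd²
    weight_lb = area_sqyd * LB_PER_SQYD_PER_INCH * thickness_in
    return weight_lb / 2000.0

# One 12-ft lane, one mile (5280 ft) long, paved 3 inches thick:
print(round(pavement_weight_tons(5280, 12, 3)))  # ≈ 1183 tons
```

The actual figure varies with aggregate type and air void content, as the text notes; this is only the average US value applied mechanically.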
When maintenance is performed on asphalt pavements, such as milling to remove a worn or damaged surface, the removed material can be returned to a facility for processing into new pavement mixtures. The asphalt in the removed material can be reactivated and put back to use in new pavement mixes. With some 95% of paved roads being constructed of or surfaced with asphalt, a substantial amount of asphalt pavement material is reclaimed each year. According to industry surveys conducted annually by the Federal Highway Administration and the National Asphalt Pavement Association, more than 99% of the asphalt removed each year from road surfaces during widening and resurfacing projects is reused as part of new pavements, roadbeds, shoulders and embankments or stockpiled for future use.
Asphalt concrete paving is widely used in airports around the world. Due to its sturdiness and ability to be repaired quickly, it is widely used for runways.
Mastic asphalt is a type of asphalt that differs from dense graded asphalt (asphalt concrete) in that it has a higher asphalt (binder) content, usually around 7–10% of the whole aggregate mix, as opposed to rolled asphalt concrete, which has only around 5% asphalt. This thermoplastic substance is widely used in the building industry for waterproofing flat roofs and tanking underground. Mastic asphalt is heated and then spread in layers to form an impervious barrier.
A number of technologies allow asphalt to be applied at mild temperatures. The viscosity can be lowered by emulsifying the asphalt through the addition of fatty amines, which typically make up 0.2–2.5% of the emulsion. The cationic amines enhance the binding of the asphalt to the surface of the crushed rock.
Asphalt emulsions are used in a wide variety of applications. Chipseal involves spraying the road surface with asphalt emulsion followed by a layer of crushed rock, gravel or crushed slag. Slurry seal is a mixture of asphalt emulsion and fine crushed aggregate that is spread on the surface of a road. Cold-mixed asphalt can also be made from asphalt emulsion to create pavements similar to hot-mixed asphalt, several inches in depth, and asphalt emulsions are also blended into recycled hot-mix asphalt to create low-cost pavements.
Synthetic crude oil, also known as syncrude, is the output from a bitumen upgrader facility used in connection with oil sand production in Canada. Bituminous sands are mined using enormous (100 ton capacity) power shovels and loaded into even larger (400 ton capacity) dump trucks for movement to an upgrading facility. The process used to extract the bitumen from the sand is a hot water process originally developed by Dr. Karl Clark of the University of Alberta during the 1920s. After extraction from the sand, the bitumen is fed into a bitumen upgrader which converts it into a light crude oil equivalent. This synthetic substance is fluid enough to be transferred through conventional oil pipelines and can be fed into conventional oil refineries without any further treatment. By 2015 Canadian bitumen upgraders were producing over per day of synthetic crude oil, of which 75% was exported to oil refineries in the United States.
In Alberta, five bitumen upgraders produce synthetic crude oil and a variety of other products: The Suncor Energy upgrader near Fort McMurray, Alberta produces synthetic crude oil plus diesel fuel; the Syncrude Canada, Canadian Natural Resources, and Nexen upgraders near Fort McMurray produce synthetic crude oil; and the Shell Scotford Upgrader near Edmonton produces synthetic crude oil plus an intermediate feedstock for the nearby Shell Oil Refinery. A sixth upgrader, under construction in 2015 near Redwater, Alberta, will upgrade half of its crude bitumen directly to diesel fuel, with the remainder of the output being sold as feedstock to nearby oil refineries and petrochemical plants.
Canadian bitumen does not differ substantially from oils such as Venezuelan extra-heavy and Mexican heavy oil in chemical composition, and the real difficulty is moving the extremely viscous bitumen through oil pipelines to the refinery. Many modern oil refineries are extremely sophisticated and can process non-upgraded bitumen directly into products such as gasoline, diesel fuel, and refined asphalt without any preprocessing. This is particularly common in areas such as the US Gulf coast, where refineries were designed to process Venezuelan and Mexican oil, and in areas such as the US Midwest where refineries were rebuilt to process heavy oil as domestic light oil production declined. Given the choice, such heavy oil refineries usually prefer to buy bitumen rather than synthetic oil because the cost is lower, and in some cases because they prefer to produce more diesel fuel and less gasoline. By 2015 Canadian production and exports of non-upgraded bitumen exceeded that of synthetic crude oil at over per day, of which about 65% was exported to the United States.
Because of the difficulty of moving crude bitumen through pipelines, non-upgraded bitumen is usually diluted with natural-gas condensate in a form called dilbit or with synthetic crude oil, called synbit. However, to meet international competition, much non-upgraded bitumen is now sold as a blend of multiple grades of bitumen, conventional crude oil, synthetic crude oil, and condensate in a standardized benchmark product such as Western Canadian Select. This sour, heavy crude oil blend is designed to have uniform refining characteristics to compete with internationally marketed heavy oils such as Mexican Mayan or Arabian Dubai Crude.
Asphalt was used beginning in the 1960s as a hydrophobic matrix for encapsulating radioactive waste, such as medium-activity salts (mainly soluble sodium nitrate and sodium sulfate) produced by the reprocessing of spent nuclear fuels, or radioactive sludges from sedimentation ponds. Bituminised radioactive waste containing highly radiotoxic alpha-emitting transuranic elements from nuclear reprocessing plants has been produced at industrial scale in France, Belgium and Japan, but this type of waste conditioning has been abandoned because of operational safety issues (risks of fire, as occurred in a bituminisation plant at Tokai Works in Japan) and long-term stability problems related to its geological disposal in deep rock formations. One of the main problems is the swelling of asphalt exposed to radiation and to water. Swelling is first induced by radiation, through hydrogen gas bubbles generated by alpha and gamma radiolysis. A second mechanism is matrix swelling when the encapsulated hygroscopic salts, exposed to water or moisture, start to rehydrate and dissolve. The high concentration of salt in the pore solution inside the bituminised matrix then gives rise to osmotic effects: water moves in the direction of the concentrated salts, the asphalt acting as a semi-permeable membrane, and the matrix swells. The swelling pressure due to this osmotic effect under constant volume can be as high as 200 bar. If not properly managed, this high pressure can cause fractures in the near field of a disposal gallery of bituminised medium-level waste. When the bituminised matrix has been altered by swelling, encapsulated radionuclides are easily leached on contact with groundwater and released into the geosphere. The high ionic strength of the concentrated saline solution also favours the migration of radionuclides in clay host rocks.
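The order of magnitude of the osmotic swelling pressure can be checked with the ideal van 't Hoff relation π = icRT. This is only a rough sketch: the concentration below is illustrative (real pore solutions in bituminised waste are more concentrated and strongly non-ideal, so the ideal-solution law is at best indicative):

```python
# Order-of-magnitude check of the osmotic swelling pressure using the
# ideal van 't Hoff relation: pi = i * c * R * T.
# The concentration is illustrative, not taken from the text.
R = 8.314     # J/(mol*K), gas constant
T = 298.0     # K, ambient temperature
i = 2         # van 't Hoff factor for NaNO3 (dissociates into Na+ and NO3-)
c = 4000.0    # mol/m^3 (4 mol/L) of dissolved sodium nitrate

pi_pa = i * c * R * T              # osmotic pressure in pascals
print(f"~{pi_pa / 1e5:.0f} bar")   # ~198 bar, the same order as the 200 bar cited
```

Even this idealized estimate lands near the 200 bar figure, which illustrates why an intact semi-permeable asphalt matrix around rehydrating salts can generate fracturing pressures.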
The presence of chemically reactive nitrate can also affect the redox conditions prevailing in the host rock by establishing oxidizing conditions, preventing the reduction of redox-sensitive radionuclides. Under their higher valences, radionuclides of elements such as selenium, technetium, uranium, neptunium and plutonium have a higher solubility and are also often present in water as non-retarded anions. This makes the disposal of medium-level bituminised waste very challenging.
Different types of asphalt have been used: blown bitumen (partly oxidized with air at high temperature after distillation, and harder) and direct distillation bitumen (softer). Blown bitumens like Mexphalte, with a high content of saturated hydrocarbons, are more easily biodegraded by microorganisms than direct distillation bitumen, with a low content of saturated hydrocarbons and a high content of aromatic hydrocarbons.
Concrete encapsulation of radwaste is presently considered a safer alternative by the nuclear industry and the waste management organisations.
Roofing shingles and roll roofing account for most of the remaining asphalt consumption. Other uses include cattle sprays, fence-post treatments, and waterproofing for fabrics. Asphalt is used to make Japan black, a lacquer known especially for its use on iron and steel, and it is also used in paint and marker inks by some exterior paint supply companies to increase the weather resistance and permanence of the paint or ink, and to make the color darker. Asphalt is also used to seal some alkaline batteries during the manufacturing process.
About 40,000,000 tons were produced in 1984. It is obtained as the "heavy" (i.e., difficult to distill) fraction. Material with a boiling point greater than around 500 °C is considered asphalt. Vacuum distillation separates it from the other components in crude oil (such as naphtha, gasoline and diesel). The resulting material is typically further treated to extract small but valuable amounts of lubricants and to adjust the properties of the material to suit applications. In a de-asphalting unit, the crude asphalt is treated with either propane or butane in a supercritical phase to extract the lighter molecules, which are then separated. Further processing is possible by "blowing" the product: namely reacting it with oxygen. This step makes the product harder and more viscous.
Asphalt is typically stored and transported at temperatures around . Sometimes diesel oil or kerosene are mixed in before shipping to retain liquidity; upon delivery, these lighter materials are separated out of the mixture. This mixture is often called "bitumen feedstock", or BFS. Some dump trucks route the hot engine exhaust through pipes in the dump body to keep the material warm. The backs of tippers carrying asphalt, as well as some handling equipment, are also commonly sprayed with a releasing agent before filling to aid release. Diesel oil is no longer used as a release agent due to environmental concerns.
Naturally occurring crude bitumen impregnated in sedimentary rock is the prime feed stock for petroleum production from "oil sands", currently under development in Alberta, Canada. Canada has most of the world's supply of natural bitumen, covering 140,000 square kilometres (an area larger than England), giving it the second-largest proven oil reserves in the world. The Athabasca oil sands are the largest bitumen deposit in Canada and the only one accessible to surface mining, although recent technological breakthroughs have resulted in deeper deposits becoming producible by "in situ" methods. Because of oil price increases after 2003, producing bitumen became highly profitable, but as a result of the decline after 2014 it became uneconomic to build new plants again. By 2014, Canadian crude bitumen production averaged about per day and was projected to rise to per day by 2020. The total amount of crude bitumen in Alberta that could be extracted is estimated to be about , which at a rate of would last about 200 years.
Although uncompetitive economically, asphalt can be made from nonpetroleum-based renewable resources such as sugar, molasses and rice, corn and potato starches. Asphalt can also be made from waste material by fractional distillation of used motor oil, which is sometimes otherwise disposed of by burning or dumping into landfills. Use of motor oil may cause premature cracking in colder climates, resulting in roads that need to be repaved more frequently.
Nonpetroleum-based asphalt binders can be made light-colored. Lighter-colored roads absorb less heat from solar radiation, reducing their contribution to the urban heat island effect. Parking lots that use asphalt alternatives are called green parking lots.
Selenizza is a naturally occurring solid hydrocarbon bitumen found in native deposits in Selenice, Albania, the site of the only European asphalt mine still in use. The bitumen is found in the form of veins, filling cracks in a more or less horizontal direction. The bitumen content varies from 83% to 92% (soluble in carbon disulphide), with a penetration value near zero and a softening point (ring and ball) around 120 °C. The insoluble matter, consisting mainly of silica ore, ranges from 8% to 17%.
Albanian bitumen extraction has a long history and was practiced in an organized way by the Romans. After centuries of silence, the first mentions of Albanian bitumen appeared only in 1868, when the Frenchman Coquand published the first geological description of the deposits of Albanian bitumen. In 1875, the exploitation rights were granted to the Ottoman government and in 1912, they were transferred to the Italian company Simsa. Since 1945, the mine was exploited by the Albanian government and from 2001 to date, the management passed to a French company, which organized the mining process for the manufacture of the natural bitumen on an industrial scale.
Today the mine is predominantly exploited in an open pit quarry but several of the many underground mines (deep and extending over several km) still remain viable. Selenizza is produced primarily in granular form, after melting the bitumen pieces selected in the mine.
Selenizza is mainly used as an additive in the road construction sector. It is mixed with traditional asphalt to improve both the viscoelastic properties and the resistance to ageing. It may be blended with the hot asphalt in tanks, but its granular form allows it to be fed in the mixer or in the recycling ring of normal asphalt plants. Other typical applications include the production of mastic asphalts for sidewalks, bridges, car-parks and urban roads as well as drilling fluid additives for the oil and gas industry. Selenizza is available in powder or in granular material of various particle sizes and is packaged in sacks or in thermal fusible polyethylene bags.
A life-cycle assessment study of the natural selenizza compared with petroleum asphalt has shown that the environmental impact of the selenizza is about half the impact of the road asphalt produced in oil refineries in terms of carbon dioxide emission.
Although asphalt typically makes up only 4 to 5 percent (by weight) of the pavement mixture, as the pavement's binder, it is also the most expensive part of the cost of the road-paving material.
During asphalt's early use in modern paving, oil refiners gave it away. Today, however, asphalt is a highly traded commodity, and its price increased substantially in the early 21st century. A U.S. government report states:
The report indicates that an "average" 1-mile (1.6-kilometer)-long, four-lane highway would include "300 tons of asphalt," which, "in 2002 would have cost around $48,000. By 2006 this would have increased to $96,000 and by 2012 to $183,000... an increase of about $135,000 for every mile of highway in just 10 years."
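The report's per-mile figures imply a unit price for asphalt, and the quoted increase can be checked directly. A quick sketch of that arithmetic:

```python
# Implied asphalt prices per ton from the report's per-mile figures
# (300 tons of asphalt per mile of four-lane highway).
TONS_PER_MILE = 300
cost_per_mile = {2002: 48_000, 2006: 96_000, 2012: 183_000}

for year, cost in cost_per_mile.items():
    print(f"{year}: ${cost / TONS_PER_MILE:.0f}/ton")

increase = cost_per_mile[2012] - cost_per_mile[2002]
print(f"Increase per mile, 2002-2012: ${increase:,}")  # $135,000, as quoted
```

The implied price roughly quadruples, from about $160/ton in 2002 to about $610/ton in 2012, which is consistent with the report's $135,000-per-mile increase.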
People can be exposed to asphalt in the workplace by breathing in fumes or skin absorption. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit of 5 mg/m3 over a 15-minute period.
Asphalt is basically an inert material that must be heated or diluted to a point where it becomes workable for the production of materials for paving, roofing, and other applications. In examining the potential health hazards associated with asphalt, the International Agency for Research on Cancer (IARC) determined that it is the application parameters, predominantly temperature, that affect occupational exposure and the potential bioavailable carcinogenic hazard/risk of the asphalt emissions. In particular, temperatures greater than 199 °C (390 °F) were shown to produce a greater exposure risk than when asphalt was heated to lower temperatures, such as those typically used in asphalt pavement mix production and placement. IARC has classified paving asphalt fumes as a Class 2B possible carcinogen, indicating inadequate evidence of carcinogenicity in humans.
In India, asphalt found in the Himalayas and known as "shilajit" is consumed by people and is considered in Ayurveda to have medicinal properties.
American National Standards Institute
The American National Standards Institute (ANSI) is a private non-profit organization that oversees the development of voluntary consensus standards for products, services, processes, systems, and personnel in the United States. The organization also coordinates U.S. standards with international standards so that American products can be used worldwide.
ANSI accredits standards that are developed by representatives of other standards organizations, government agencies, consumer groups, companies, and others. These standards ensure that the characteristics and performance of products are consistent, that people use the same definitions and terms, and that products are tested the same way. ANSI also accredits organizations that carry out product or personnel certification in accordance with requirements defined in international standards.
The organization's headquarters are in Washington, D.C. ANSI's operations office is located in New York City. The ANSI annual operating budget is funded by the sale of publications, membership dues and fees, accreditation services, fee-based programs, and international standards programs.
ANSI was originally formed in 1918, when five engineering societies and three government agencies founded the American Engineering Standards Committee (AESC). In 1928, the AESC became the American Standards Association (ASA). In 1966, the ASA was reorganized and became the United States of America Standards Institute (USASI). The present name was adopted in 1969.
Prior to 1918, the five founding engineering societies had been members of the United Engineering Society (UES). At the behest of the AIEE, they invited the U.S. government Departments of War, Navy (combined in 1947 to become the Department of Defense, or DOD) and Commerce to join in founding a national standards organization.
According to Adam Stanton, the first permanent secretary and head of staff in 1919, AESC started as an ambitious program and little else. Staff for the first year consisted of one executive, Clifford B. LePage, who was on loan from a founding member, ASME. An annual budget of $7,500 was provided by the founding bodies.
In 1931, the organization (renamed ASA in 1928) became affiliated with the U.S. National Committee of the International Electrotechnical Commission (IEC), which had been formed in 1904 to develop electrical and electronics standards.
ANSI's members are government agencies, organizations, academic and international bodies, and individuals. In total, the Institute represents the interests of more than 270,000 companies and organizations and 30 million professionals worldwide.
Although ANSI itself does not develop standards, the Institute oversees the development and use of standards by accrediting the procedures of standards developing organizations. ANSI accreditation signifies that the procedures used by standards developing organizations meet the Institute's requirements for openness, balance, consensus, and due process.
ANSI also designates specific standards as American National Standards, or ANS, when the Institute determines that the standards were developed in an environment that is equitable, accessible and responsive to the requirements of various stakeholders.
Voluntary consensus standards quicken the market acceptance of products while making clear how to improve the safety of those products for the protection of consumers. There are approximately 9,500 American National Standards that carry the ANSI designation.
The American National Standards process involves consensus by a group that is open to representatives from all interested parties, broad-based public review and comment on draft standards, consideration of and response to comments, incorporation of approved changes into the draft, and the right of any participant to appeal.
In addition to facilitating the formation of standards in the United States, ANSI promotes the use of U.S. standards internationally, advocates U.S. policy and technical positions in international and regional standards organizations, and encourages the adoption of international standards as national standards where appropriate.
The Institute is the official U.S. representative to the two major international standards organizations, the International Organization for Standardization (ISO), as a founding member, and the International Electrotechnical Commission (IEC), via the U.S. National Committee (USNC). ANSI participates in almost the entire technical program of both the ISO and the IEC, and administers many key committees and subgroups. In many instances, U.S. standards are taken forward to ISO and IEC, through ANSI or the USNC, where they are adopted in whole or in part as international standards.
Adoption of ISO and IEC standards as American standards increased from 0.2% in 1986 to 15.5% in May 2012.
The Institute administers nine standards panels, including panels on homeland security, healthcare information technology, and nanotechnology.
Each of the panels works to identify, coordinate, and harmonize voluntary standards relevant to these areas.
In 2009, ANSI and the National Institute of Standards and Technology (NIST) formed the Nuclear Energy Standards Coordination Collaborative (NESCC). NESCC is a joint initiative to identify and respond to the current need for standards in the nuclear industry.
Apollo 11
Apollo 11 was the spaceflight that first landed humans on the Moon. Commander Neil Armstrong and lunar module pilot Buzz Aldrin formed the American crew that landed the Apollo Lunar Module "Eagle" on July 20, 1969, at 20:17 UTC. Armstrong became the first person to step onto the lunar surface six hours and 39 minutes later, on July 21 at 02:56 UTC; Aldrin joined him 19 minutes later. They spent about two and a quarter hours together outside the spacecraft, and they collected 21.5 kg (47.5 lb) of lunar material to bring back to Earth. Command module pilot Michael Collins flew the Command Module "Columbia" alone in lunar orbit while they were on the Moon's surface. Armstrong and Aldrin spent 21 hours, 36 minutes on the lunar surface at a site they named Tranquility Base before lifting off to rejoin "Columbia" in lunar orbit.
Apollo 11 was launched by a Saturn V rocket from Kennedy Space Center on Merritt Island, Florida, on July 16 at 13:32 UTC, and it was the fifth crewed mission of NASA's Apollo program. The Apollo spacecraft had three parts: a command module (CM) with a cabin for the three astronauts, the only part that returned to Earth; a service module (SM), which supported the command module with propulsion, electrical power, oxygen, and water; and a lunar module (LM) that had two stages—a descent stage for landing on the Moon and an ascent stage to place the astronauts back into lunar orbit.
After being sent to the Moon by the Saturn V's third stage, the astronauts separated the spacecraft from it and traveled for three days until they entered lunar orbit. Armstrong and Aldrin then moved into "Eagle" and landed in the Sea of Tranquility on July 20. The astronauts used "Eagle"s ascent stage to lift off from the lunar surface and rejoin Collins in the command module. They jettisoned "Eagle" before they performed the maneuvers that propelled "Columbia" out of the last of its 30 lunar orbits onto a trajectory back to Earth. They returned to Earth and splashed down in the Pacific Ocean on July 24 after more than eight days in space.
Armstrong's first step onto the lunar surface was broadcast on live TV to a worldwide audience. He described the event as "one small step for [a] man, one giant leap for mankind." Apollo 11 effectively ended the Space Race and fulfilled a national goal proposed in 1961 by President John F. Kennedy: "before this decade is out, of landing a man on the Moon and returning him safely to the Earth."
In the late 1950s and early 1960s, the United States was engaged in the Cold War, a geopolitical rivalry with the Soviet Union. On October 4, 1957, the Soviet Union launched Sputnik 1, the first artificial satellite. This surprise success fired fears and imaginations around the world. It demonstrated that the Soviet Union had the capability to deliver nuclear weapons over intercontinental distances, and challenged American claims of military, economic and technological superiority. This precipitated the Sputnik crisis, and triggered the Space Race. President Dwight D. Eisenhower responded to the Sputnik challenge by creating the National Aeronautics and Space Administration (NASA), and initiating Project Mercury, which aimed to launch a man into Earth orbit. But on April 12, 1961, Soviet cosmonaut Yuri Gagarin became the first person in space, and the first to orbit the Earth. Nearly a month later, on May 5, 1961, Alan Shepard became the first American in space, completing a 15-minute suborbital journey. After being recovered from the Atlantic Ocean, he received a congratulatory telephone call from Eisenhower's successor, John F. Kennedy.
Since the Soviet Union had higher lift capacity launch vehicles, Kennedy chose, from among options presented by NASA, a challenge beyond the capacity of the existing generation of rocketry, so that the US and Soviet Union would be starting from a position of equality. A crewed mission to the Moon would serve this purpose.
On May 25, 1961, Kennedy addressed the United States Congress on "Urgent National Needs" and declared: "I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him safely to the Earth."
On September 12, 1962, Kennedy delivered another speech before a crowd of about 40,000 people in the Rice University football stadium in Houston, Texas. A widely quoted refrain from the middle portion of the speech reads as follows: "We choose to go to the Moon in this decade and do the other things, not because they are easy, but because they are hard."
In spite of that, the proposed program faced the opposition of many Americans and was dubbed a "moondoggle" by Norbert Wiener, a mathematician at the Massachusetts Institute of Technology. The effort to land a man on the Moon already had a name: Project Apollo. When Kennedy met with Nikita Khrushchev, the Premier of the Soviet Union, in June 1961, he proposed making the Moon landing a joint project, but Khrushchev did not take up the offer. Kennedy again proposed a joint expedition to the Moon in a speech to the United Nations General Assembly on September 20, 1963. The idea of a joint Moon mission was abandoned after Kennedy's death.
An early and crucial decision was choosing lunar orbit rendezvous over both direct ascent and Earth orbit rendezvous. A space rendezvous is an orbital maneuver in which two spacecraft navigate through space and meet up. In July 1962 NASA head James Webb announced that lunar orbit rendezvous would be used and that the Apollo spacecraft would have three major parts: a command module (CM) with a cabin for the three astronauts, and the only part that returned to Earth; a service module (SM), which supported the command module with propulsion, electrical power, oxygen, and water; and a lunar module (LM) that had two stages—a descent stage for landing on the Moon, and an ascent stage to place the astronauts back into lunar orbit. This design meant the spacecraft could be launched by a single Saturn V rocket that was then under development.
Technologies and techniques required for Apollo were developed by Project Gemini. The Apollo project was enabled by NASA's adoption of new advances in semiconductor electronic technology, including metal-oxide-semiconductor field-effect transistors (MOSFETs) in the Interplanetary Monitoring Platform (IMP) and silicon integrated circuit (IC) chips in the Apollo Guidance Computer (AGC).
Project Apollo was abruptly halted by the Apollo 1 fire on January 27, 1967, in which astronauts Gus Grissom, Ed White, and Roger B. Chaffee died, and the subsequent investigation. In October 1968, Apollo 7 evaluated the command module in Earth orbit, and in December Apollo 8 tested it in lunar orbit. In March 1969, Apollo 9 put the lunar module through its paces in Earth orbit, and in May Apollo 10 conducted a "dress rehearsal" in lunar orbit. By July 1969, all was in readiness for Apollo 11 to take the final step onto the Moon.
The Soviet Union competed with the US in the Space Race, but its early lead was lost through repeated failures in development of the N1 launcher, which was comparable to the Saturn V. The Soviets tried to beat the US to return lunar material to the Earth by means of uncrewed probes. On July 13, three days before Apollo 11's launch, the Soviet Union launched Luna 15, which reached lunar orbit before Apollo 11. During descent, a malfunction caused Luna 15 to crash in Mare Crisium about two hours before Armstrong and Aldrin took off from the Moon's surface to begin their voyage home. The Nuffield Radio Astronomy Laboratories radio telescope in England recorded transmissions from Luna 15 during its descent, and these were released in July 2009 for the 40th anniversary of Apollo 11.
The initial crew assignment of Commander Neil Armstrong, Command Module Pilot (CMP) Jim Lovell, and Lunar Module Pilot (LMP) Buzz Aldrin on the backup crew for Apollo 9 was officially announced on November 20, 1967. Lovell and Aldrin had previously flown together as the crew of Gemini 12. Due to design and manufacturing delays in the LM, Apollo 8 and Apollo 9 swapped prime and backup crews, and Armstrong's crew became the backup for Apollo 8. Based on the normal crew rotation scheme, Armstrong was then expected to command Apollo 11.
There would be one change. Michael Collins, the CMP on the Apollo 8 crew, began experiencing trouble with his legs. Doctors diagnosed the problem as a bony growth between his fifth and sixth vertebrae, requiring surgery. Lovell took his place on the Apollo 8 crew, and when Collins recovered he joined Armstrong's crew as CMP. In the meantime, Fred Haise filled in as backup LMP, and Aldrin as backup CMP for Apollo 8. Apollo 11 was the second American mission on which all the crew members had prior spaceflight experience, the first being Apollo 10. The next was STS-26 in 1988.
Deke Slayton gave Armstrong the option to replace Aldrin with Lovell, since some thought Aldrin was difficult to work with. Armstrong had no issues working with Aldrin but thought it over for a day before declining. He thought Lovell deserved to command his own mission (eventually Apollo 13).
The Apollo 11 prime crew had none of the close, cheerful camaraderie that characterized the crew of Apollo 12. Instead they forged an amiable working relationship. Armstrong in particular was notoriously aloof, but Collins, who considered himself a loner, confessed to rebuffing Aldrin's attempts to create a more personal relationship. Aldrin and Collins described the crew as "amiable strangers". Armstrong did not agree with the assessment, and said "... all the crews I was on worked very well together."
The backup crew consisted of Lovell as Commander, William Anders as CMP, and Haise as LMP. Anders had flown with Lovell on Apollo 8. In early 1969, he accepted a job with the National Aeronautics and Space Council effective August 1969, and announced he would retire as an astronaut at that time. Ken Mattingly was moved from the support crew into parallel training with Anders as backup CMP in case Apollo 11 was delayed past its intended July launch date, at which point Anders would be unavailable.
By the normal crew rotation in place during Apollo, Lovell, Mattingly, and Haise were scheduled to fly on Apollo 14 after backing up for Apollo 11. Later, Lovell's crew was forced to switch places with Alan Shepard's tentative Apollo 13 crew to give Shepard more training time.
During Projects Mercury and Gemini, each mission had a prime and a backup crew. For Apollo, a third crew of astronauts was added, known as the support crew. The support crew maintained the flight plan, checklists and mission ground rules, and ensured the prime and backup crews were apprised of changes. They developed procedures, especially those for emergency situations, so these were ready for when the prime and backup crews came to train in the simulators, allowing them to concentrate on practicing and mastering them. For Apollo 11, the support crew consisted of Ken Mattingly, Ronald Evans and Bill Pogue.
The capsule communicator (CAPCOM) was an astronaut at the Mission Control Center in Houston, Texas, who was the only person who communicated directly with the flight crew. For Apollo 11, the CAPCOMs were: Charles Duke, Ronald Evans, Bruce McCandless II, James Lovell, William Anders, Ken Mattingly, Fred Haise, Don L. Lind, Owen K. Garriott and Harrison Schmitt.
The flight directors for this mission were Clifford E. Charlesworth, Gene Kranz, Glynn Lunney, and Milton Windler.
Other key personnel who played important roles in the Apollo 11 mission include the following.
The Apollo 11 mission emblem was designed by Collins, who wanted a symbol for "peaceful lunar landing by the United States". At Lovell's suggestion, he chose the bald eagle, the national bird of the United States, as the symbol. Tom Wilson, a simulator instructor, suggested an olive branch in its beak to represent their peaceful mission. Collins added a lunar background with the Earth in the distance. The sunlight in the image was coming from the wrong direction; the shadow should have been in the lower part of the Earth instead of the left. Aldrin, Armstrong and Collins decided the Eagle and the Moon would be in their natural colors, and decided on a blue and gold border. Armstrong was concerned that "eleven" would not be understood by non-English speakers, so they went with "Apollo 11", and they decided not to put their names on the patch, so it would "be representative of "everyone" who had worked toward a lunar landing".
An illustrator at the MSC did the artwork, which was then sent off to NASA officials for approval. The design was rejected. Bob Gilruth, the director of the MSC, felt the talons of the eagle looked "too warlike". After some discussion, the olive branch was moved to the talons. When the Eisenhower dollar coin was released in 1971, the patch design provided the eagle for its reverse side. The design was also used for the smaller Susan B. Anthony dollar unveiled in 1979.
After the crew of Apollo 10 named their spacecraft "Charlie Brown" and "Snoopy", assistant manager for public affairs Julian Scheer wrote to George M. Low, the Manager of the Apollo Spacecraft Program Office at the Manned Spacecraft Center (MSC), to suggest the Apollo 11 crew be less flippant in naming their craft. The name "Snowcone" was used for the CM and "Haystack" was used for the LM in both internal and external communications during early mission planning.
The LM was named "Eagle" after the motif which was featured prominently on the mission insignia. At Scheer's suggestion, the CM was named "Columbia" after "Columbiad", the giant cannon that launched a spacecraft (also from Florida) in Jules Verne's 1865 novel "From the Earth to the Moon". It also referred to Columbia, a historical name of the United States. In Collins' 1976 book, he said "Columbia" was in reference to Christopher Columbus.
The astronauts had personal preference kits (PPKs), small bags containing personal items of significance they wanted to take with them on the mission. Five PPKs were carried on Apollo 11: three (one for each astronaut) were stowed on "Columbia" before launch, and two on "Eagle".
Neil Armstrong's LM PPK contained a piece of wood from the Wright brothers' 1903 "Wright Flyer"s left propeller and a piece of fabric from its wing, along with a diamond-studded astronaut pin originally given to Slayton by the widows of the Apollo 1 crew. This pin had been intended to be flown on that mission and given to Slayton afterwards, but following the disastrous launch pad fire and subsequent funerals, the widows gave the pin to Slayton. Armstrong took it with him on Apollo 11.
NASA's Apollo Site Selection Board announced five potential landing sites on February 8, 1968. These were the result of two years' worth of studies based on high-resolution photography of the lunar surface by the five uncrewed probes of the Lunar Orbiter program and information about surface conditions provided by the Surveyor program. The best Earth-bound telescopes could not resolve features with the resolution Project Apollo required. The landing site had to be close to the lunar equator to minimize the amount of propellant required, clear of obstacles to minimize maneuvering, and flat to simplify the task of the landing radar. Scientific value was not a consideration.
Areas that appeared promising on photographs taken on Earth were often found to be totally unacceptable. The original requirement that the site be free of craters had to be relaxed, as no such site was found. Five sites were considered: Sites 1 and 2 were in the Sea of Tranquility ("Mare Tranquillitatis"); Site 3 was in the Central Bay ("Sinus Medii"); and Sites 4 and 5 were in the Ocean of Storms ("Oceanus Procellarum").
The final site selection was based on seven criteria, covering factors such as terrain smoothness, approach path, propellant requirements, and the Sun angle at the time of landing.
The requirement for the Sun angle was particularly restrictive, limiting the launch date to one day per month. A landing just after dawn was chosen to limit the temperature extremes the astronauts would experience. The Apollo Site Selection Board selected Site 2, with Sites 3 and 5 as backups in the event of the launch being delayed. In May 1969, Apollo 10's lunar module flew low over Site 2, and reported it was acceptable.
During the first press conference after the Apollo 11 crew was announced, the first question was, "Which one of you gentlemen will be the first man to step onto the lunar surface?" Slayton told the reporter it had not been decided, and Armstrong added that it was "not based on individual desire".
One of the first versions of the egress checklist had the lunar module pilot exit the spacecraft before the commander, which matched what had been done on Gemini missions, where the commander had never performed the spacewalk. Reporters wrote in early 1969 that Aldrin would be the first man to walk on the Moon, and Associate Administrator George Mueller told reporters he would be first as well. Aldrin heard that Armstrong would be the first because Armstrong was a civilian, which made Aldrin livid. Aldrin attempted to persuade other lunar module pilots he should be first, but they responded cynically about what they perceived as a lobbying campaign. Attempting to stem interdepartmental conflict, Slayton told Aldrin that Armstrong would be first since he was the commander. The decision was announced in a press conference on April 14, 1969.
For decades, Aldrin believed the final decision was largely driven by the lunar module's hatch location. Because the astronauts had their spacesuits on and the spacecraft was so small, maneuvering to exit the spacecraft was difficult. The crew tried a simulation in which Aldrin left the spacecraft first, but he damaged the simulator while attempting to egress. While this was enough for mission planners to make their decision, Aldrin and Armstrong were left in the dark on the decision until late spring. Slayton told Armstrong the plan was to have him leave the spacecraft first, if he agreed. Armstrong said, "Yes, that's the way to do it."
The media accused Armstrong of exercising his commander's prerogative to exit the spacecraft first. Chris Kraft revealed in his 2001 autobiography that a meeting occurred between Gilruth, Slayton, Low, and himself to make sure Aldrin would not be the first to walk on the Moon. They argued that the first person to walk on the Moon should be like Charles Lindbergh, a calm and quiet person. They made the decision to change the flight plan so the commander was the first to egress from the spacecraft.
The ascent stage of lunar module LM-5 arrived at the Kennedy Space Center on January 8, 1969, followed by the descent stage four days later, and Command and Service Module CM-107 on January 23. There were several differences between LM-5 and Apollo 10's LM-4; LM-5 had a VHF radio antenna to facilitate communication with the astronauts during their EVA on the lunar surface; a lighter ascent engine; more thermal protection on the landing gear; and a package of scientific experiments known as the Early Apollo Scientific Experiments Package (EASEP). The only change in the configuration of the command module was the removal of some insulation from the forward hatch. The command and service modules were mated on January 29, and moved from the Operations and Checkout Building to the Vehicle Assembly Building on April 14.
The S-IVB third stage of Saturn V AS-506 had arrived on January 18, followed by the S-II second stage on February 6, S-IC first stage on February 20, and the Saturn V Instrument Unit on February 27. At 12:30 on May 20, the assembly departed the Vehicle Assembly Building atop the crawler-transporter, bound for Launch Pad 39A, part of Launch Complex 39, while Apollo 10 was still on its way to the Moon. A countdown test commenced on June 26, and concluded on July 2. The launch complex was floodlit on the night of July 15, when the crawler-transporter carried the mobile service structure back to its parking area. In the early hours of the morning, the fuel tanks of the S-II and S-IVB stages were filled with liquid hydrogen. Fueling was completed by three hours before launch. Launch operations were partly automated, with 43 programs written in the ATOLL programming language.
Slayton roused the crew shortly after 04:00, and they showered, shaved, and had the traditional pre-flight breakfast of steak and eggs with Slayton and the backup crew. They then donned their space suits and began breathing pure oxygen. At 06:30, they headed out to Launch Complex 39. Haise entered "Columbia" about three hours and ten minutes before launch time. Along with a technician, he helped Armstrong into the left hand couch at 06:54. Five minutes later, Collins joined him, taking up his position on the right hand couch. Finally, Aldrin entered, taking the center couch. Haise left around two hours and ten minutes before launch. The closeout crew sealed the hatch, and the cabin was purged and pressurized. The closeout crew then left the launch complex about an hour before launch time. The countdown became automated at three minutes and twenty seconds before launch time. Over 450 personnel were at the consoles in the firing room.
An estimated one million spectators watched the launch of Apollo 11 from the highways and beaches in the vicinity of the launch site. Dignitaries included the Chief of Staff of the United States Army, General William Westmoreland, four cabinet members, 19 state governors, 40 mayors, 60 ambassadors and 200 congressmen. Vice President Spiro Agnew viewed the launch with former president Lyndon B. Johnson and his wife, Lady Bird Johnson. Around 3,500 media representatives were present. About two-thirds were from the United States; the rest came from 55 other countries. The launch was televised live in 33 countries, with an estimated 25 million viewers in the United States alone. Millions more around the world listened to radio broadcasts. President Richard Nixon viewed the launch from his office in the White House with his NASA liaison officer, Apollo astronaut Frank Borman.
Saturn V AS-506 launched Apollo 11 on July 16, 1969, at 13:32:00 UTC (9:32:00 EDT). At 13.2 seconds into the flight, the launch vehicle began to roll into its flight azimuth of 72.058°. Full shutdown of the first-stage engines occurred about 2 minutes and 42 seconds into the mission, followed by separation of the S-IC and ignition of the S-II engines. The second stage engines then cut off and separated at about 9 minutes and 8 seconds, allowing the first ignition of the S-IVB engine a few seconds later.
Apollo 11 entered a near-circular Earth parking orbit twelve minutes into its flight. After one and a half orbits, a second ignition of the S-IVB engine pushed the spacecraft onto its trajectory toward the Moon with the trans-lunar injection (TLI) burn at 16:22:13 UTC. About 30 minutes later, with Collins in the left seat and at the controls, the transposition, docking, and extraction maneuver was performed. This involved separating "Columbia" from the spent S-IVB stage, turning around, and docking with "Eagle" still attached to the stage. After the LM was extracted, the combined spacecraft headed for the Moon, while the rocket stage flew on a trajectory past the Moon. This was done to avoid the third stage colliding with the spacecraft, the Earth, or the Moon. A slingshot effect from passing around the Moon threw it into an orbit around the Sun.
On July 19 at 17:21:50 UTC, Apollo 11 passed behind the Moon and fired its service propulsion engine to enter lunar orbit. In the thirty orbits that followed, the crew saw passing views of their landing site in the southern Sea of Tranquility, southwest of the crater Sabine D. The site was selected in part because it had been characterized as relatively flat and smooth by the automated Ranger 8 and Surveyor 5 landers and the Lunar Orbiter mapping spacecraft, and as unlikely to present major landing or EVA challenges. It lay southeast of the Surveyor 5 landing site, and southwest of Ranger 8's crash site.
At 12:52:00 UTC on July 20, Aldrin and Armstrong entered "Eagle", and began the final preparations for lunar descent. At 17:44:00 "Eagle" separated from "Columbia". Collins, alone aboard "Columbia", inspected "Eagle" as it pirouetted before him to ensure the craft was not damaged, and that the landing gear was correctly deployed. Armstrong exclaimed: "The "Eagle" has wings!"
As the descent began, Armstrong and Aldrin found themselves passing landmarks on the surface two or three seconds early, and reported that they were "long"; they would land well west of their target point. "Eagle" was traveling too fast. The problem could have been mascons—concentrations of high mass that could have altered the trajectory. Flight Director Gene Kranz speculated that it could have resulted from extra air pressure in the docking tunnel. Or it could have been the result of "Eagle"s pirouette maneuver.
Five minutes into the descent burn, while still high above the surface of the Moon, the LM guidance computer (LGC) distracted the crew with the first of several unexpected 1201 and 1202 program alarms. Inside Mission Control Center, computer engineer Jack Garman told Guidance Officer Steve Bales it was safe to continue the descent, and this was relayed to the crew. The program alarms indicated "executive overflows", meaning the guidance computer could not complete all its tasks in real time and had to postpone some of them. Margaret Hamilton, the Director of Apollo Flight Computer Programming at the MIT Charles Stark Draper Laboratory, later recalled that the software's priority scheduling had been designed to shed lower-priority tasks automatically, so that the most critical guidance functions could continue to run.
During the mission, the cause was diagnosed as the rendezvous radar switch being in the wrong position, causing the computer to process data from both the rendezvous and landing radars at the same time. Software engineer Don Eyles concluded in a 2005 Guidance and Control Conference paper that the problem was due to a hardware design bug previously seen during testing of the first uncrewed LM in Apollo 5. Having the rendezvous radar on (so it was warmed up in case of an emergency landing abort) should have been irrelevant to the computer, but an electrical phasing mismatch between two parts of the rendezvous radar system could cause the stationary antenna to appear to the computer as dithering back and forth between two positions, depending upon how the hardware randomly powered up. The extra spurious cycle stealing, as the rendezvous radar updated an involuntary counter, caused the computer alarms.
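The scheduling behavior described above can be illustrated in miniature. The following is a hypothetical Python sketch, not a model of the actual AGC executive (whose job structure, priorities, and timing were quite different): a scheduler that, when more work is requested per cycle than it can complete, finishes the highest-priority jobs, postpones the rest, and raises an alarm rather than halting. All job names, priorities, and costs are illustrative.

```python
# Hypothetical sketch of priority-based task shedding, loosely inspired by
# the AGC "executive overflow" (1201/1202) behavior described above.
# Not the real AGC algorithm; names and numbers are invented for illustration.

def run_cycle(jobs, capacity):
    """Run at most `capacity` units of work in one cycle, highest priority
    first. Each job is a (name, priority, cost) tuple; a larger priority
    number means more critical. Returns (completed, postponed, alarm)."""
    completed, postponed = [], []
    used = 0
    for name, priority, cost in sorted(jobs, key=lambda j: -j[1]):
        if used + cost <= capacity:
            completed.append(name)   # enough time left: run this job
            used += cost
        else:
            postponed.append(name)   # shed lower-priority work to next cycle
    alarm = bool(postponed)          # analogous to a 1202 overflow alarm
    return completed, postponed, alarm

# Nominal load: everything fits, no alarm.
nominal = [("guidance", 3, 4), ("display", 2, 2), ("telemetry", 1, 2)]
done, dropped, alarm = run_cycle(nominal, capacity=8)
assert not alarm and dropped == []

# Spurious extra load, as with the rendezvous radar during the descent:
overloaded = nominal + [("rendezvous_radar", 0, 3)]
done, dropped, alarm = run_cycle(overloaded, capacity=8)
# The critical guidance job still completes; low-priority work is postponed.
assert alarm and "guidance" in done and "rendezvous_radar" in dropped
```

The point of the sketch is the design choice the passage describes: an overload raises an alarm and sheds the least important work, while the landing-critical tasks keep running.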
When Armstrong again looked outside, he saw that the computer's landing target was in a boulder-strewn area just north and east of a crater (later determined to be West crater), so he took semi-automatic control. Armstrong considered landing short of the boulder field so they could collect geological samples from it, but could not since their horizontal velocity was too high. Throughout the descent, Aldrin called out navigation data to Armstrong, who was busy piloting "Eagle". As they neared the surface, Armstrong knew their propellant supply was dwindling and was determined to land at the first possible landing site.
Armstrong found a clear patch of ground and maneuvered the spacecraft towards it. As he got closer to the surface, he discovered his new landing site had a crater in it. He cleared the crater and found another patch of level ground. They were now just above the surface, with only 90 seconds of propellant remaining. Lunar dust kicked up by the LM's engine began to impair his ability to determine the spacecraft's motion. Some large rocks jutted out of the dust cloud, and Armstrong focused on them during his descent so he could determine the spacecraft's speed.
A light informed Aldrin that at least one of the probes hanging from "Eagle"s footpads had touched the surface a few moments before the landing, and he said: "Contact light!" Armstrong was supposed to immediately shut the engine down, as the engineers suspected the pressure caused by the engine's own exhaust reflecting off the lunar surface could make it explode, but he forgot. Three seconds later, "Eagle" landed and Armstrong shut the engine down. Aldrin immediately said "Okay, engine stop. ACA—out of detent." Armstrong acknowledged: "Out of detent. Auto." Aldrin continued: "Mode control—both auto. Descent engine command override off. Engine arm—off. 413 is in."
ACA was the Attitude Control Assembly—the LM's control stick. Output went to the LGC to command the reaction control system (RCS) jets to fire. "Out of Detent" meant the stick had moved away from its centered position; it was spring-centered like the turn indicator in a car. LGC address 413 contained the variable that indicated the LM had landed.
"Eagle" landed at 20:17:40 UTC on Sunday July 20 with of usable fuel remaining. Information available to the crew and mission controllers during the landing showed the LM had enough fuel for another 25 seconds of powered flight before an abort without touchdown would have become unsafe, but post-mission analysis showed that the real figure was probably closer to 50 seconds. Apollo 11 landed with less fuel than most subsequent missions, and the astronauts encountered a premature low fuel warning. This was later found to be the result of greater propellant 'slosh' than expected, uncovering a fuel sensor. On subsequent missions, extra anti-slosh baffles were added to the tanks to prevent this.
Armstrong acknowledged Aldrin's completion of the post-landing checklist with "Engine arm is off", before responding to the CAPCOM, Charles Duke, with the words, "Houston, Tranquility Base here. The "Eagle" has landed." Armstrong's unrehearsed change of call sign from "Eagle" to "Tranquility Base" emphasized to listeners that the landing was complete and successful. Duke stumbled over his reply as he expressed relief at Mission Control: "Roger, Twan—Tranquility, we copy you on the ground. You got a bunch of guys about to turn blue. We're breathing again. Thanks a lot."
Two and a half hours after landing, before preparations began for the EVA, Aldrin radioed to Earth:
He then took communion privately. At this time NASA was still fighting a lawsuit brought by atheist Madalyn Murray O'Hair (who had objected to the Apollo 8 crew reading from the Book of Genesis) demanding that its astronauts refrain from broadcasting religious activities while in space. As such, Aldrin chose to refrain from directly mentioning taking communion on the Moon. Aldrin was an elder at the Webster Presbyterian Church, and his communion kit was prepared by the pastor of the church, Dean Woodruff. Webster Presbyterian possesses the chalice used on the Moon and commemorates the event each year on the Sunday closest to July 20. The schedule for the mission called for the astronauts to follow the landing with a five-hour sleep period, but they chose to begin preparations for the EVA early, thinking they would be unable to sleep.
Preparations for Neil Armstrong and Buzz Aldrin to walk on the Moon began at 23:43. These took longer than expected; three and a half hours instead of two. During training on Earth, everything required had been neatly laid out in advance, but on the Moon the cabin contained a large number of other items as well, such as checklists, food packets, and tools. Six hours and thirty-nine minutes after landing Armstrong and Aldrin were ready to go outside, and "Eagle" was depressurized.
"Eagle"s hatch was opened at 02:39:33. Armstrong initially had some difficulties squeezing through the hatch with his portable life support system (PLSS). Some of the highest heart rates recorded from Apollo astronauts occurred during LM egress and ingress. At 02:51 Armstrong began his descent to the lunar surface. The remote control unit on his chest kept him from seeing his feet. Climbing down the nine-rung ladder, Armstrong pulled a D-ring to deploy the modular equipment stowage assembly (MESA) folded against "Eagle" side and activate the TV camera.
Apollo 11 used slow-scan television (TV) incompatible with broadcast TV, so it was displayed on a special monitor, and a conventional TV camera viewed this monitor, significantly reducing the quality of the picture. The signal was received at Goldstone in the United States, but with better fidelity by Honeysuckle Creek Tracking Station near Canberra in Australia. Minutes later the feed was switched to the more sensitive Parkes radio telescope in Australia. Despite some technical and weather difficulties, ghostly black and white images of the first lunar EVA were received and broadcast to at least 600 million people on Earth. Copies of this video in broadcast format were saved and are widely available, but recordings of the original slow scan source transmission from the lunar surface were likely destroyed during routine magnetic tape re-use at NASA.
While still on the ladder, Armstrong uncovered a plaque mounted on the LM descent stage bearing two drawings of Earth (of the Western and Eastern Hemispheres), an inscription, and signatures of the astronauts and President Nixon. The inscription read:
At the behest of the Nixon administration, which wanted a reference to God, NASA cited the vagueness of the date as a reason to include "A.D.", which stands for Anno Domini, "in the year of our Lord" (although it should properly have been placed before the year, not after).
After describing the surface dust as "very fine-grained" and "almost like a powder", at 02:56:15, six and a half hours after landing, Armstrong stepped off "Eagle"s footpad and declared: "That's one small step for [a] man, one giant leap for mankind."
Armstrong intended to say "That's one small step for a man", but the word "a" is not audible in the transmission, and thus was not initially reported by most observers of the live broadcast. When later asked about his quote, Armstrong said he believed he said "for a man", and subsequent printed versions of the quote included the "a" in square brackets. One explanation for the absence may be that his accent caused him to slur the words "for a" together; another is the intermittent nature of the audio and video links to Earth, partly because of storms near Parkes Observatory. More recent digital analysis of the tape claims to reveal the "a" may have been spoken but obscured by static. Other analysis points to the claims of static and slurring as "face-saving fabrication", and that Armstrong himself later admitted to misspeaking the line.
About seven minutes after stepping onto the Moon's surface, Armstrong collected a contingency soil sample using a sample bag on a stick. He then folded the bag and tucked it into a pocket on his right thigh. This was to guarantee there would be some lunar soil brought back in case an emergency required the astronauts to abandon the EVA and return to the LM. Twelve minutes after the sample was collected, he removed the TV camera from the MESA and made a panoramic sweep, then mounted it on a tripod. The TV camera cable remained partly coiled and presented a tripping hazard throughout the EVA. Still photography was accomplished with a Hasselblad camera which could be operated hand held or mounted on Armstrong's Apollo space suit. Aldrin joined Armstrong on the surface. He described the view with the simple phrase: "Magnificent desolation."
Armstrong said moving in the lunar gravity, one-sixth of Earth's, was "even perhaps easier than the simulations ... It's absolutely no trouble to walk around." Aldrin joined him on the surface and tested methods for moving around, including two-footed kangaroo hops. The PLSS backpack created a tendency to tip backward, but neither astronaut had serious problems maintaining balance. Loping became the preferred method of movement. The astronauts reported that they needed to plan their movements six or seven steps ahead. The fine soil was quite slippery. Aldrin remarked that moving from sunlight into "Eagle"s shadow produced no temperature change inside the suit, but the helmet was warmer in sunlight, so he felt cooler in shadow. The MESA failed to provide a stable work platform and was in shadow, slowing work somewhat. As they worked, the moonwalkers kicked up gray dust which soiled the outer part of their suits.
The astronauts planted the Lunar Flag Assembly containing a flag of the United States on the lunar surface, in clear view of the TV camera. Aldrin remembered, "Of all the jobs I had to do on the Moon the one I wanted to go the smoothest was the flag raising." The astronauts struggled with the telescoping rod and could only jam the pole a couple of inches (5 cm) into the hard lunar surface. Aldrin was afraid it might topple in front of TV viewers, but he gave "a crisp West Point salute". Before Aldrin could take a photo of Armstrong with the flag, President Richard Nixon spoke to them through a telephone-radio transmission which Nixon called "the most historic phone call ever made from the White House." Nixon originally had a long speech prepared to read during the phone call, but Frank Borman, who was at the White House as a NASA liaison during Apollo 11, convinced Nixon to keep his words brief.
They deployed the EASEP, which included a passive seismic experiment package used to measure moonquakes and a retroreflector array used for the lunar laser ranging experiment. Then Armstrong walked from the LM to snap photos at the rim of Little West Crater while Aldrin collected two core samples. He used the geologist's hammer to pound in the tubes—the only time the hammer was used on Apollo 11—but was unable to drive them in deeply. The astronauts then collected rock samples using scoops and tongs on extension handles. Many of the surface activities took longer than expected, so they had to stop documenting sample collection halfway through the allotted 34 minutes. Aldrin shoveled soil into the box of rocks in order to pack them in tightly. Two types of rocks were found in the geological samples: basalt and breccia. Three new minerals were discovered in the rock samples collected by the astronauts: armalcolite, tranquillityite, and pyroxferroite. Armalcolite was named after Armstrong, Aldrin, and Collins. All have subsequently been found on Earth.
Mission Control used a coded phrase to warn Armstrong his metabolic rates were high, and that he should slow down. He was moving rapidly from task to task as time ran out. As metabolic rates remained generally lower than expected for both astronauts throughout the walk, Mission Control granted the astronauts a 15-minute extension. In a 2010 interview, Armstrong explained that NASA limited the first moonwalk's time and distance because there was no empirical proof of how much cooling water the astronauts' PLSS backpacks would consume to handle their body heat generation while working on the Moon.
Aldrin entered "Eagle" first. With some difficulty the astronauts lifted film and two sample boxes containing lunar surface material to the LM hatch using a flat cable pulley device called the Lunar Equipment Conveyor (LEC). This proved to be an inefficient tool, and later missions preferred to carry equipment and samples up to the LM by hand. Armstrong reminded Aldrin of a bag of memorial items in his sleeve pocket, and Aldrin tossed the bag down. Armstrong then jumped onto the ladder's third rung, and climbed into the LM. After transferring to LM life support, the explorers lightened the ascent stage for the return to lunar orbit by tossing out their PLSS backpacks, lunar overshoes, an empty Hasselblad camera, and other equipment. The hatch was closed again at 05:11:13. They then pressurized the LM and settled down to sleep.
Presidential speech writer William Safire had prepared an "In Event of Moon Disaster" announcement for Nixon to read in the event the Apollo 11 astronauts were stranded on the Moon. The remarks were in a memo from Safire to Nixon's White House Chief of Staff H. R. Haldeman, in which Safire suggested a protocol the administration might follow in reaction to such a disaster. According to the plan, Mission Control would "close down communications" with the LM, and a clergyman would "commend their souls to the deepest of the deep" in a public ritual likened to burial at sea. The last line of the prepared text contained an allusion to Rupert Brooke's First World War poem, "The Soldier".
While moving inside the cabin, Aldrin accidentally damaged the circuit breaker that would arm the main engine for liftoff from the Moon. There was a concern this would prevent firing the engine, stranding them on the Moon. A felt-tip pen was sufficient to activate the switch; had this not worked, the LM circuitry could have been reconfigured to allow firing the ascent engine.
In addition to the scientific instruments, the astronauts left behind on the lunar surface: an Apollo 1 mission patch in memory of astronauts Roger Chaffee, Gus Grissom, and Edward White, who died when their command module caught fire during a test in January 1967; two memorial medals of Soviet cosmonauts Vladimir Komarov and Yuri Gagarin, who died in 1967 and 1968 respectively; a memorial bag containing a gold replica of an olive branch as a traditional symbol of peace; and a silicon message disk carrying goodwill statements by Presidents Eisenhower, Kennedy, Johnson, and Nixon along with messages from leaders of 73 countries around the world. The disk also carries a listing of the leadership of the US Congress, a listing of members of the four committees of the House and Senate responsible for the NASA legislation, and the names of NASA's past and present top management.
After about seven hours of rest, the crew was awakened by Houston to prepare for the return flight. Two and a half hours later, at 17:54:00 UTC, they lifted off in "Eagle"s ascent stage to rejoin Collins aboard "Columbia" in lunar orbit. Film taken from the LM ascent stage upon liftoff from the Moon reveals the American flag, planted a short distance from the descent stage, whipping violently in the exhaust of the ascent stage engine. Aldrin looked up in time to witness the flag topple: "The ascent stage of the LM separated ... I was concentrating on the computers, and Neil was studying the attitude indicator, but I looked up long enough to see the flag fall over." Subsequent Apollo missions planted their flags farther from the LM.
During his day flying solo around the Moon, Collins never felt lonely. Although it has been said "not since Adam has any human known such solitude", Collins felt very much a part of the mission. In his autobiography he wrote: "this venture has been structured for three men, and I consider my third to be as necessary as either of the other two". In the 48 minutes of each orbit when he was out of radio contact with the Earth while "Columbia" passed round the far side of the Moon, the feeling he reported was not fear or loneliness, but rather "awareness, anticipation, satisfaction, confidence, almost exultation".
One of Collins' first tasks was to identify the lunar module on the ground. To give Collins an idea where to look, Mission Control radioed that they believed the lunar module landed about four miles off target. Each time he passed over the suspected lunar landing site, he tried in vain to find the module. On his first orbits on the back side of the Moon, Collins performed maintenance activities such as dumping excess water produced by the fuel cells and preparing the cabin for Armstrong and Aldrin to return.
Just before he reached the dark side on the third orbit, Mission Control informed Collins there was a problem with the temperature of the coolant. If it became too cold, parts of "Columbia" might freeze. Mission Control advised him to assume manual control and implement Environmental Control System Malfunction Procedure 17. Instead, Collins flicked the switch on the system from automatic to manual and back to automatic again, and carried on with normal housekeeping chores, while keeping an eye on the temperature. When "Columbia" came back around to the near side of the Moon again, he was able to report that the problem had been resolved. For the next couple of orbits, he described his time on the back side of the Moon as "relaxing". After Aldrin and Armstrong completed their EVA, Collins slept so he could be rested for the rendezvous. While the flight plan called for "Eagle" to meet up with "Columbia", Collins was prepared for a contingency in which he would fly "Columbia" down to meet "Eagle".
"Eagle" rendezvoused with "Columbia" at 21:24 UTC on July 21, and the two docked at 21:35. "Eagle"s ascent stage was jettisoned into lunar orbit at 23:41. Just before the Apollo 12 flight, it was noted that "Eagle" was still likely to be orbiting the Moon. Later NASA reports mentioned that "Eagle" orbit had decayed, resulting in it impacting in an "uncertain location" on the lunar surface.
On July 23, the last night before splashdown, the three astronauts made a television broadcast in which Collins commented:
Aldrin added:
Armstrong concluded:
On the return to Earth, a bearing at the Guam tracking station failed, potentially preventing communication on the last segment of the Earth return. A regular repair was not possible in the available time but the station director, Charles Force, had his ten-year-old son Greg use his small hands to reach into the housing and pack it with grease. Greg was later thanked by Armstrong.
The aircraft carrier "Hornet", under the command of Captain Carl J. Seiberlich, was selected as the primary recovery ship (PRS) for Apollo 11 on June 5, replacing its sister ship, which had recovered Apollo 10 on May 26. "Hornet" was then at her home port of Long Beach, California. On reaching Pearl Harbor on July 5, "Hornet" embarked the Sikorsky SH-3 Sea King helicopters of HS-4, a unit which specialized in recovery of Apollo spacecraft, specialized divers of UDT Detachment Apollo, a 35-man NASA recovery team, and about 120 media representatives. To make room, most of "Hornet"s air wing was left behind in Long Beach. Special recovery equipment was also loaded, including a boilerplate command module used for training.
On July 12, with Apollo 11 still on the launch pad, "Hornet" departed Pearl Harbor for the recovery area in the central Pacific. A presidential party consisting of Nixon, Borman, Secretary of State William P. Rogers and National Security Advisor Henry Kissinger flew to Johnston Atoll on Air Force One, then to the command ship in Marine One. After a night on board, they would fly to "Hornet" in Marine One for a few hours of ceremonies. On arrival aboard "Hornet", the party was greeted by the Commander-in-Chief, Pacific Command (CINCPAC), Admiral John S. McCain Jr., and NASA Administrator Thomas O. Paine, who flew to "Hornet" from Pago Pago in one of "Hornet"s carrier onboard delivery aircraft.
Weather satellites were not yet common, but US Air Force Captain Hank Brandli had access to top secret spy satellite images. He realized that a storm front was headed for the Apollo recovery area. Poor visibility, which could make locating the capsule difficult, and strong upper-level winds, which "would have ripped their parachutes to shreds" according to Brandli, posed a serious threat to the safety of the mission. Brandli alerted Navy Captain Willard S. Houston Jr., the commander of the Fleet Weather Center at Pearl Harbor, who had the required security clearance. On their recommendation, Rear Admiral Donald C. Davis, commander of Manned Spaceflight Recovery Forces, Pacific, advised NASA to change the recovery area, each man risking his career. A new location was selected to the northeast.
This altered the flight plan. A different sequence of computer programs was used, one never before attempted. In a conventional entry, P64 was followed by P67. For a skip-out re-entry, P65 and P66 were employed to handle the exit and entry parts of the skip. In this case, because they were extending the re-entry but not actually skipping out, P66 was not invoked and instead P65 led directly to P67. The crew were also warned they would not be in a full-lift (heads-down) attitude when they entered P67. Each program's braking phase subjected the astronauts to deceleration loads of several times the force of Earth's gravity.
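The program sequencing described above can be sketched as a simple selection over entry modes. This is an illustrative model only, not flight software: the program numbers (P64–P67) come from the text, while the function name and boolean parameters are assumptions made for the sketch.

```python
# Hypothetical sketch of the Apollo entry guidance program sequencing
# described above. Program numbers are from the source text; the
# transition logic and names here are illustrative assumptions.

def entry_program_sequence(skip_out: bool, extended_range: bool) -> list:
    """Return the order of guidance programs for atmospheric entry."""
    sequence = ["P64"]               # initial entry phase
    if skip_out:
        sequence += ["P65", "P66"]   # exit and re-entry parts of the skip
    elif extended_range:
        sequence += ["P65"]          # Apollo 11: extended range, no skip-out
    sequence += ["P67"]              # final phase of entry
    return sequence

# Conventional entry: P64 followed directly by P67.
assert entry_program_sequence(False, False) == ["P64", "P67"]
# Apollo 11's weather-diverted entry: P65 led directly to P67; P66 not invoked.
assert entry_program_sequence(False, True) == ["P64", "P65", "P67"]
```

A full skip-out would have run all four programs in order; Apollo 11's case was the intermediate branch.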
Before dawn on July 24, "Hornet" launched four Sea King helicopters and three Grumman E-1 Tracers. Two of the E-1s were designated as "air boss" while the third acted as a communications relay aircraft. Two of the Sea Kings carried divers and recovery equipment. The third carried photographic equipment, and the fourth carried the decontamination swimmer and the flight surgeon. At 16:44 UTC (05:44 local time) "Columbia"s drogue parachutes were deployed. This was observed by the helicopters. Seven minutes later "Columbia" struck the water forcefully, east of Wake Island, south of Johnston Atoll, and within reach of "Hornet"s recovery aircraft. Winds from the east under broken clouds were reported at the recovery site. Reconnaissance aircraft flying to the original splashdown location reported the conditions Brandli and Houston had predicted.
During splashdown, "Columbia" landed upside down but was righted within ten minutes by flotation bags activated by the astronauts. A diver from the Navy helicopter hovering above attached a sea anchor to prevent it from drifting. More divers attached flotation collars to stabilize the module and positioned rafts for astronaut extraction.
The divers then passed biological isolation garments (BIGs) to the astronauts, and assisted them into the life raft. The possibility of bringing back pathogens from the lunar surface was considered remote, but NASA took precautions at the recovery site. The astronauts were rubbed down with a sodium hypochlorite solution and "Columbia" wiped with Betadine to remove any lunar dust that might be present. The astronauts were winched on board the recovery helicopter. BIGs were worn until they reached isolation facilities on board "Hornet". The raft containing decontamination materials was intentionally sunk.
After touchdown on "Hornet" at 17:53 UTC, the helicopter was lowered by the elevator into the hangar bay, where the astronauts walked the short distance to the Mobile Quarantine Facility (MQF), where they would begin the Earth-based portion of their 21 days of quarantine. This practice would continue for two more Apollo missions, Apollo 12 and Apollo 14, before the Moon was proven to be barren of life and the quarantine process was dropped. Nixon welcomed the astronauts back to Earth. He told them: "As a result of what you've done, the world has never been closer together before."
After Nixon departed, "Hornet" was brought alongside the "Columbia", which was lifted aboard by the ship's crane, placed on a dolly and moved next to the MQF. It was then attached to the MQF with a flexible tunnel, allowing the lunar samples, film, data tapes and other items to be removed. "Hornet" returned to Pearl Harbor, where the MQF was loaded onto a Lockheed C-141 Starlifter and airlifted to the Manned Spacecraft Center. The astronauts arrived at the Lunar Receiving Laboratory at 10:00 UTC on July 28. "Columbia" was taken to Ford Island for deactivation, and its pyrotechnics made safe. It was then taken to Hickam Air Force Base, from whence it was flown to Houston in a Douglas C-133 Cargomaster, reaching the Lunar Receiving Laboratory on July 30.
In accordance with the Extra-Terrestrial Exposure Law, a set of regulations promulgated by NASA on July 16 to codify its quarantine protocol, the astronauts continued in quarantine. After three weeks in confinement (first in the Apollo spacecraft, then in their trailer on "Hornet", and finally in the Lunar Receiving Laboratory), the astronauts were given a clean bill of health. On August 10, 1969, the Interagency Committee on Back Contamination met in Atlanta and lifted the quarantine on the astronauts, on those who had joined them in quarantine (NASA physician William Carpentier and MQF project engineer John Hirasaki), and on "Columbia" itself. Loose equipment from the spacecraft remained in isolation until the lunar samples were released for study.
On August 13, the three astronauts rode in ticker-tape parades in their honor in New York and Chicago, with an estimated six million attendees. On the same evening in Los Angeles there was an official state dinner to celebrate the flight, attended by members of Congress, 44 governors, the Chief Justice of the United States, and ambassadors from 83 nations at the Century Plaza Hotel. Nixon and Agnew honored each astronaut with a presentation of the Presidential Medal of Freedom.
The three astronauts spoke before a joint session of Congress on September 16, 1969. They presented two US flags, one to the House of Representatives and the other to the Senate, that they had carried with them to the surface of the Moon. The flag of American Samoa flown on Apollo 11 is on display at the Jean P. Haydon Museum in Pago Pago, the capital of American Samoa.
This celebration began a 38-day world tour that brought the astronauts to 22 foreign countries and included visits with the leaders of many countries. The crew toured from September 29 to November 5. Many nations honored the first human Moon landing with special features in magazines or by issuing Apollo 11 commemorative postage stamps or coins.
Humans walking on the Moon and returning safely to Earth accomplished Kennedy's goal, set eight years earlier. In Mission Control during the Apollo 11 landing, Kennedy's speech flashed on the screen, followed by the words "TASK ACCOMPLISHED, July 1969". The success of Apollo 11 demonstrated the United States' technological superiority, and with it, America had won the Space Race.
New phrases entered the English language. "If they can send a man to the Moon, why can't they ...?" became a common saying following Apollo 11. Armstrong's words on the lunar surface also spun off various parodies.
While most people celebrated the accomplishment, disenfranchised Americans saw it as a symbol of the divide in America, evidenced by protesters outside of Kennedy Space Center the day before Apollo 11 launched. This is not to say they were not awed by it. Ralph Abernathy, leading a protest march, was so captivated by the spectacle of the Apollo 11 launch that he forgot what he was going to say. Racial and financial inequalities frustrated citizens who wondered why money spent on the Apollo program was not spent taking care of humans on Earth. A poem by Gil Scott-Heron called "Whitey on the Moon" illustrated the racial inequality in the United States that was highlighted by the Space Race. The poem starts with:
Twenty percent of the world's population watched humans walk on the Moon for the first time. While Apollo 11 sparked the interest of the world, the follow-on Apollo missions did not hold the interest of the nation. One possible explanation was the shift in complexity. Landing someone on the Moon was an easy goal to understand; lunar geology was too abstract for the average person. Another is that Kennedy's goal of landing humans on the Moon had already been accomplished. A well-defined objective helped Project Apollo accomplish its goal, but after it was completed it was hard to justify continuing the lunar missions.
While most Americans were proud of their nation's achievements in space exploration, only once during the late 1960s did the Gallup Poll indicate that a majority of Americans favored "doing more" in space as opposed to "doing less". By 1973, 59 percent of those polled favored cutting spending on space exploration. The Space Race had ended, and Cold War tensions were easing as the US and Soviet Union entered the era of détente. This was also a time when inflation was rising, which put pressure on the government to reduce spending. What saved the space program was that it was one of the few government programs that had achieved something great. Drastic cuts, warned Caspar Weinberger, the deputy director of the Office of Management and Budget, might send a signal that "our best years are behind us".
After the Apollo 11 mission, officials from the Soviet Union said landing humans on the Moon was dangerous and unnecessary. At the time the Soviet Union was attempting to retrieve lunar samples robotically. The Soviets publicly denied there was a race to the Moon, and indicated they were not making an attempt. Mstislav Keldysh said in July 1969, "We are concentrating wholly on the creation of large satellite systems." It was revealed in 1989 that the Soviets had tried to send people to the Moon, but had been unable to because of technological difficulties. The public's reaction in the Soviet Union was mixed. The Soviet government limited the release of information about the lunar landing, which affected the reaction. A portion of the populace did not give it any attention, and another portion was angered by it.
The Apollo 11 landing is referenced in the songs "Armstrong, Aldrin and Collins" by The Byrds on the 1969 album "Ballad of Easy Rider" and "Coon on the Moon" by Howlin' Wolf on the 1973 album "The Back Door Wolf".
The Command Module "Columbia" went on a tour of the United States, visiting 49 state capitals, the District of Columbia, and Anchorage, Alaska. In 1971, it was transferred to the Smithsonian Institution, and was displayed at the National Air and Space Museum (NASM) in Washington, DC. It was in the central "Milestones of Flight" exhibition hall in front of the Jefferson Drive entrance, sharing the main hall with other pioneering flight vehicles such as the "Wright Flyer", "Spirit of St. Louis", Bell X-1, North American X-15 and "Friendship 7".
"Columbia" was moved in 2017 to the NASM Mary Baker Engen Restoration Hangar at the Steven F. Udvar-Hazy Center in Chantilly, Virginia, to be readied for a four-city tour titled "Destination Moon: The Apollo 11 Mission". This included Space Center Houston from October 14, 2017, to March 18, 2018, the Saint Louis Science Center from April 14 to September 3, 2018, the Senator John Heinz History Center in Pittsburgh from September 29, 2018, to February 18, 2019, and its last location at Seattle's Museum of Flight from March 16 to September 2, 2019. Continued renovations at the Smithsonian allowed time for an additional stop for the capsule, and it was moved to the Cincinnati Museum Center. The ribbon cutting ceremony was on September 29, 2019.
For 40 years Armstrong's and Aldrin's space suits were displayed in the museum's "Apollo to the Moon" exhibit, until it permanently closed on December 3, 2018, to be replaced by a new gallery which was scheduled to open in 2022. A special display of Armstrong's suit was unveiled for the 50th anniversary of Apollo 11 in July 2019. The quarantine trailer, the flotation collar and the flotation bags are in the Smithsonian's Steven F. Udvar-Hazy Center annex near Washington Dulles International Airport in Chantilly, Virginia, where they are on display along with a test lunar module.
The descent stage of the LM "Eagle" remains on the Moon. In 2009, the Lunar Reconnaissance Orbiter (LRO) imaged the various Apollo landing sites on the surface of the Moon, for the first time with sufficient resolution to see the descent stages of the lunar modules, scientific instruments, and foot trails made by the astronauts. The remains of the ascent stage lie at an unknown location on the lunar surface, after being abandoned and impacting the Moon. The location is uncertain because "Eagle"s ascent stage was not tracked after it was jettisoned, and the lunar gravity field is sufficiently non-uniform to make the orbit of the spacecraft unpredictable after a short time.
In March 2012 a team of specialists financed by Amazon founder Jeff Bezos located the F-1 engines from the S-IC stage that launched Apollo 11 into space. They were found on the Atlantic seabed using advanced sonar scanning. His team brought parts of two of the five engines to the surface. In July 2013, a conservator discovered a serial number under the rust on one of the engines raised from the Atlantic, which NASA confirmed was from Apollo 11. The S-IVB third stage which performed Apollo 11's trans-lunar injection remains in a solar orbit near to that of Earth.
The main repository for the Apollo Moon rocks is the Lunar Sample Laboratory Facility at the Lyndon B. Johnson Space Center in Houston, Texas. For safekeeping, there is also a smaller collection stored at White Sands Test Facility near Las Cruces, New Mexico. Most of the rocks are stored in nitrogen to keep them free of moisture. They are handled only indirectly, using special tools. Over 100 research laboratories around the world conduct studies of the samples, and approximately 500 samples are prepared and sent to investigators every year.
In November 1969, Nixon asked NASA to make up about 250 presentation Apollo 11 lunar sample displays for 135 nations, the fifty states of the United States and its possessions, and the United Nations. Each display included Moon dust from Apollo 11. The rice-sized particles were four small pieces of Moon soil weighing about 50 mg and were enveloped in a clear acrylic button about as big as a United States half dollar coin. This acrylic button magnified the grains of lunar dust. The Apollo 11 lunar sample displays were given out as goodwill gifts by Nixon in 1970.
The Passive Seismic Experiment ran until the command uplink failed on August 25, 1969. The downlink failed on December 14, 1969. The Lunar Laser Ranging experiment remains operational.
Armstrong's Hasselblad camera was thought to be lost or left on the Moon's surface. In 2015, after Armstrong's death in 2012, his widow contacted the National Air and Space Museum to inform them she had found a white cloth bag in one of Armstrong's closets. The bag contained a forgotten camera that had been used to capture images of the first Moon landing. The camera is currently on display at the National Air and Space Museum.
On July 15, 2009, Life.com released a photo gallery of previously unpublished photos of the astronauts taken by "Life" photographer Ralph Morse prior to the Apollo 11 launch. From July 16 to 24, 2009, NASA streamed the original mission audio on its website in real time 40 years to the minute after the events occurred. It is in the process of restoring the video footage and has released a preview of key moments. In July 2010, air-to-ground voice recordings and film footage shot in Mission Control during the Apollo 11 powered descent and landing were re-synchronized and released for the first time. The John F. Kennedy Presidential Library and Museum set up an Adobe Flash website that rebroadcasts the transmissions of Apollo 11 from launch to landing on the Moon.
On July 20, 2009, Armstrong, Aldrin, and Collins met with U.S. President Barack Obama at the White House. "We expect that there is, as we speak, another generation of kids out there who are looking up at the sky and are going to be the next Armstrong, Collins, and Aldrin", Obama said. "We want to make sure that NASA is going to be there for them when they want to take their journey." On August 7, 2009, an act of Congress awarded the three astronauts a Congressional Gold Medal, the highest civilian award in the United States. The bill was sponsored by Florida Senator Bill Nelson and Florida Representative Alan Grayson.
A group of British scientists interviewed as part of the anniversary events reflected on the significance of the Moon landing.
On June 10, 2015, Congressman Bill Posey introduced resolution H.R. 2726 to the 114th session of the United States House of Representatives directing the United States Mint to design and sell commemorative coins in gold, silver and clad for the 50th anniversary of the Apollo 11 mission. On January 24, 2019, the Mint released the Apollo 11 Fiftieth Anniversary commemorative coins to the public on its website.
A documentary film, "Apollo 11", with restored footage of the 1969 event, premiered in IMAX on March 1, 2019, and broadly in theaters on March 8.
The Smithsonian Institution's National Air and Space Museum and NASA sponsored the "Apollo 50 Festival" on the National Mall in Washington, DC. The three-day (July 18 to 20, 2019) outdoor festival featured hands-on exhibits and activities, live performances, and speakers such as Adam Savage and NASA scientists.
As part of the festival, a projection of the Saturn V rocket was displayed on the east face of the Washington Monument from July 16 through the 20th from 9:30 pm until 11:30 pm (EDT). The program also included a 17-minute show that combined full-motion video projected on the Washington Monument to recreate the assembly and launch of the Saturn V rocket. The projection was joined by a recreation of the Kennedy Space Center countdown clock and two large video screens showing archival footage to recreate the time leading up to the Moon landing. There were three shows per night on July 19–20, with the last show on Saturday delayed slightly so the portion where Armstrong first set foot on the Moon would happen exactly 50 years to the second after the actual event.
On July 19, 2019, the Google Doodle paid tribute to the Apollo 11 Moon Landing, complete with a link to an animated YouTube video with voiceover by astronaut Michael Collins.
Aldrin, Collins, and Armstrong's sons were hosted by President Donald Trump in the Oval Office.
In some of the following sources, times are shown in the format "hours:minutes:seconds" (e.g. 109:24:15), referring to the mission's Ground Elapsed Time (GET), based on the official launch time of July 16, 1969, 13:32:00 UTC (000:00:00 GET).
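Since GET is simply an offset from the launch epoch, converting a GET timestamp to UTC is plain datetime arithmetic. A minimal illustrative sketch in Python (the function name is ours, not NASA's):

```python
from datetime import datetime, timedelta, timezone

# Official Apollo 11 launch time: July 16, 1969, 13:32:00 UTC (000:00:00 GET)
LAUNCH_UTC = datetime(1969, 7, 16, 13, 32, 0, tzinfo=timezone.utc)

def get_to_utc(get: str) -> datetime:
    """Convert a Ground Elapsed Time string 'HHH:MM:SS' to a UTC datetime."""
    hours, minutes, seconds = (int(part) for part in get.split(":"))
    return LAUNCH_UTC + timedelta(hours=hours, minutes=minutes, seconds=seconds)

print(get_to_utc("109:24:15"))  # → 1969-07-21 02:56:15+00:00
```

The example timestamp above thus lands in the early UTC hours of July 21, 1969.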
Apollo 8
Apollo 8 was the first crewed spacecraft to leave low Earth orbit and the first to reach the Moon, orbit it, and return. Its three-astronaut crew — Frank Borman, James Lovell, and William Anders — were the first humans to fly to the Moon, to witness and photograph an Earthrise, and to escape the gravity of a celestial body.
Apollo 8 launched on December 21, 1968, and was the second crewed spaceflight mission flown in the United States Apollo space program after Apollo 7, which stayed in Earth orbit. Apollo 8 was the third flight and the first crewed launch of the Saturn V rocket, and was the first human spaceflight from the Kennedy Space Center, located adjacent to Cape Canaveral Air Force Station in Florida.
Originally planned as the second crewed Apollo Lunar Module and command module test, to be flown in an elliptical medium Earth orbit in early 1969, the mission profile was changed in August 1968 to a more ambitious command-module-only lunar orbital flight to be flown in December, as the lunar module was not yet ready to make its first flight. Astronaut Jim McDivitt's crew, who were training to fly the first lunar module flight in low Earth orbit, became the crew for the Apollo 9 mission, and Borman's crew were moved to the Apollo 8 mission. This left Borman's crew with two to three months' less training and preparation time than originally planned, and replaced the planned lunar module training with translunar navigation training.
Apollo 8 took 68 hours (almost three days) to travel the distance to the Moon. The crew orbited the Moon ten times over the course of twenty hours, during which they made a Christmas Eve television broadcast in which they read the first ten verses from the Book of Genesis. At the time, the broadcast was the most watched TV program ever. Apollo 8's successful mission paved the way for Apollo 11 to fulfill U.S. president John F. Kennedy's goal of landing a man on the Moon before the end of the 1960s. The Apollo 8 astronauts returned to Earth on December 27, 1968, when their spacecraft splashed down in the northern Pacific Ocean. The crew members were named "Time" magazine's "Men of the Year" for 1968 upon their return.
In the late 1950s and early 1960s, the United States was engaged in the Cold War, a geopolitical rivalry with the Soviet Union. On October 4, 1957, the Soviet Union launched Sputnik 1, the first artificial satellite. This unexpected success stoked fears and imaginations around the world. It not only demonstrated that the Soviet Union had the capability to deliver nuclear weapons over intercontinental distances, it challenged American claims of military, economic, and technological superiority. The launch precipitated the Sputnik crisis and triggered the Space Race.
President John F. Kennedy believed that not only was it in the national interest of the United States to be superior to other nations, but that the perception of American power was at least as important as the actuality. It was therefore intolerable to him for the Soviet Union to be more advanced in the field of space exploration. He was determined that the United States should compete, and sought a challenge that maximized its chances of winning.
The Soviet Union had better booster rockets, which meant Kennedy needed to choose a goal that was beyond the capacity of the existing generation of rocketry, one where the US and Soviet Union would be starting from a position of equality—something spectacular, even if it could not be justified on military, economic, or scientific grounds. After consulting with his experts and advisors, he chose such a project: to land a man on the Moon and return him to the Earth. This project already had a name: Project Apollo.
An early and crucial decision was the adoption of lunar orbit rendezvous, under which a specialized spacecraft would land on the lunar surface. The Apollo spacecraft therefore had three primary components: a command module (CM) with a cabin for the three astronauts, and the only part that would return to Earth; a service module (SM) to provide the command module with propulsion, electrical power, oxygen, and water; and a two-stage lunar module (LM), which comprised a descent stage for landing on the Moon and an ascent stage to return the astronauts to lunar orbit. This configuration could be launched by the Saturn V rocket that was then under development.
The initial crew assignment of Frank Borman as Commander, Michael Collins as Command Module Pilot (CMP) and William Anders as Lunar Module Pilot (LMP) for the third crewed Apollo flight was officially announced on November 20, 1967. Collins was replaced by Jim Lovell in July 1968, after Collins suffered a cervical disc herniation that required surgery to repair. This crew was unique among pre-Space Shuttle era missions in that the commander was not the most experienced member of the crew: Lovell had flown twice before, on Gemini VII and Gemini XII. This would also be the first case of a commander of a previous mission (Lovell, Gemini XII) flying as a non-commander.
As of 2020, all three Apollo 8 astronauts remain alive.
The backup crew assignment of Neil Armstrong as Commander, Lovell as CMP, and Buzz Aldrin as LMP for the third crewed Apollo flight was officially announced at the same time as the prime crew. When Lovell was reassigned to the prime crew, Aldrin was moved to CMP, and Fred Haise was brought in as backup LMP. Armstrong would later command Apollo 11, with Aldrin as LMP and Collins as CMP. Haise served on the backup crew of Apollo 11 as LMP and flew on Apollo 13 as LMP.
During Projects Mercury and Gemini, each mission had a prime and a backup crew. For Apollo, a third crew of astronauts was added, known as the support crew. The support crew maintained the flight plan, checklists, and mission ground rules, and ensured that the prime and backup crews were apprised of any changes. The support crew developed procedures in the simulators, especially those for emergency situations, so that the prime and backup crews could practice and master them in their simulator training. For Apollo 8, the support crew consisted of Ken Mattingly, Vance Brand, and Gerald Carr.
The capsule communicator (CAPCOM) was an astronaut at the Mission Control Center in Houston, Texas, who was the only person who communicated directly with the flight crew. For Apollo 8, the CAPCOMs were Michael Collins, Gerald Carr, Ken Mattingly, Neil Armstrong, Buzz Aldrin, Vance Brand, and Fred Haise.
The mission control teams rotated in three shifts, each led by a flight director. The directors for Apollo 8 were Clifford E. Charlesworth (Green team), Glynn Lunney (Black team), and Milton Windler (Maroon team).
The triangular shape of the insignia refers to the shape of the Apollo CM. It shows a red figure 8 looping around the Earth and Moon to reflect both the mission number and the circumlunar nature of the mission. On the bottom of the 8 are the names of the three astronauts. The initial design of the insignia was developed by Jim Lovell, who reportedly sketched it while riding in the back seat of a T-38 on a flight from California to Houston shortly after learning of Apollo 8's re-designation as a lunar-orbital mission.
The crew wanted to name their spacecraft, but NASA did not allow it. The crew would have likely chosen "Columbiad", the name of the giant cannon that launches a space vehicle in Jules Verne's 1865 novel "From the Earth to the Moon". The Apollo 11 CM was named "Columbia" in part for that reason.
On September 20, 1967, NASA adopted a seven-step plan for Apollo missions, with the final step being a Moon landing. Apollo 4 and Apollo 6 were "A" missions, tests of the Saturn V launch vehicle using an uncrewed Block I production model of the command and service module (CSM) in Earth orbit. Apollo 5 was a "B" mission, a test of the LM in Earth orbit. Apollo 7, scheduled for October 1968, would be a "C" mission, a crewed Earth-orbit flight of the CSM. Further missions depended on the readiness of the LM. It had been decided as early as May 1967 that there would be at least four additional missions. Apollo 8 was planned as the "D" mission, a test of the LM in a low Earth orbit in December 1968 by James McDivitt, David Scott, and Russell Schweickart, while Borman's crew would fly the "E" mission, a more rigorous LM test in an elliptical medium Earth orbit as Apollo 9, in early 1969. The "F" mission would test the CSM and LM in lunar orbit, and the "G" mission would be the finale, the Moon landing.
Production of the LM fell behind schedule, and when Apollo 8's LM-3 arrived at the Kennedy Space Center (KSC) in June 1968, more than a hundred significant defects were discovered, leading Bob Gilruth, the director of the Manned Spacecraft Center (MSC), and others to conclude that there was no prospect of LM-3 being ready to fly in 1968. Indeed, it was possible that delivery would slip to February or March 1969. Following the original seven-step plan would have meant delaying the "D" and subsequent missions, and endangering the program's goal of a lunar landing before the end of 1969. George Low, the Manager of the Apollo Spacecraft Program Office, proposed a solution in August 1968 to keep the program on track despite the LM delay. Since the next CSM (designated as "CSM-103") would be ready three months before LM-3, a CSM-only mission could be flown in December 1968. Instead of repeating the "C" mission flight of Apollo 7, this CSM could be sent all the way to the Moon, with the possibility of entering a lunar orbit and returning to Earth. The new mission would also allow NASA to test lunar landing procedures that would otherwise have had to wait until Apollo 10, the scheduled "F" mission. This also meant that the medium Earth orbit "E" mission could be dispensed with. The net result was that only the "D" mission had to be delayed, and the plan for lunar landing in mid-1969 could remain on timeline.
On August 9, 1968, Low discussed the idea with Gilruth, Flight Director Chris Kraft, and the Director of Flight Crew Operations, Donald Slayton. They then flew to the Marshall Space Flight Center (MSFC) in Huntsville, Alabama, where they met with KSC Director Kurt Debus, Apollo Program Director Samuel C. Phillips, Rocco Petrone, and Wernher von Braun. Kraft considered the proposal feasible from a flight control standpoint; Debus and Petrone agreed that the next Saturn V, AS-503, could be made ready by December 1; and von Braun was confident the pogo oscillation problems that had afflicted Apollo 6 had been fixed. Almost every senior manager at NASA agreed with this new mission, citing confidence in both the hardware and the personnel, along with the potential for a circumlunar flight providing a significant morale boost. The only person who needed some convincing was James E. Webb, the NASA administrator. Backed by the full support of his agency, Webb authorized the mission. Apollo 8 was officially changed from a "D" mission to a "C-Prime" lunar-orbit mission.
With the change in mission for Apollo 8, Slayton asked McDivitt if he still wanted to fly it. McDivitt turned it down; his crew had spent a great deal of time preparing to test the LM, and that was what he still wanted to do. Slayton then decided to swap the prime and backup crews of the D and E missions. This swap also meant a swap of spacecraft, requiring Borman's crew to use CSM-103, while McDivitt's crew would use CSM-104, since CM-104 could not be made ready by December. David Scott was not happy about giving up CM-103, the testing of which he had closely supervised, for CM-104, although the two were almost identical, and Anders was less than enthusiastic about being an LMP on a flight with no LM. Instead, in order that the spacecraft would have the correct weight and balance, Apollo 8 would carry an LM test article, a boilerplate model of LM-3.
Added pressure on the Apollo program to make its 1969 landing goal was provided by the Soviet Union's Zond 5 mission, which flew some living creatures, including Russian tortoises, in a cislunar loop around the Moon and returned them to Earth on September 21. There was speculation within NASA and the press that they might be preparing to launch cosmonauts on a similar circumlunar mission before the end of 1968.
The Apollo 8 crew, now living in the crew quarters at Kennedy Space Center, received a visit from Charles Lindbergh and his wife, Anne Morrow Lindbergh, the night before the launch. They talked about how, before his 1927 flight, Lindbergh had used a piece of string to measure the distance from New York City to Paris on a globe and from that calculated the fuel needed for the flight. The total he had carried was a tenth of the amount that the Saturn V would burn every second. The next day, the Lindberghs watched the launch of Apollo8 from a nearby dune.
The Saturn V rocket used by Apollo 8 was designated AS-503, or the "03rd" model of the Saturn V ("5") rocket to be used in the Apollo-Saturn ("AS") program. When it was erected in the Vehicle Assembly Building on December 20, 1967, it was thought that the rocket would be used for an uncrewed Earth-orbit test flight carrying a boilerplate command and service module. Apollo 6 had suffered several major problems during its April 1968 flight, including severe pogo oscillation during its first stage, two second-stage engine failures, and a third stage that failed to reignite in orbit. Without assurances that these problems had been rectified, NASA administrators could not justify risking a crewed mission until additional uncrewed test flights proved the Saturn V was ready.
Teams from the MSFC went to work on the problems. Of primary concern was the pogo oscillation, which would not only hamper engine performance, but could exert significant g-forces on a crew. A task force of contractors, NASA agency representatives, and MSFC researchers concluded that the engines vibrated at a frequency similar to the frequency at which the spacecraft itself vibrated, causing a resonance effect that induced oscillations in the rocket. A system that used helium gas to absorb some of these vibrations was installed.
Of equal importance was the failure of three engines during flight. Researchers quickly determined that a leaking hydrogen fuel line ruptured when exposed to vacuum, causing a loss of fuel pressure in engine two. When an automatic shutoff attempted to close the liquid hydrogen valve and shut down engine two, it had accidentally shut down engine three's liquid oxygen due to a miswired connection. As a result, engine three failed within one second of engine two's shutdown. Further investigation revealed the same problem for the third-stage engine—a faulty igniter line. The team modified the igniter lines and fuel conduits, hoping to avoid similar problems on future launches.
The teams tested their solutions in August 1968 at the MSFC. A Saturn stage IC was equipped with shock-absorbing devices to demonstrate the team's solution to the problem of pogo oscillation, while a Saturn Stage II was retrofitted with modified fuel lines to demonstrate their resistance to leaks and ruptures in vacuum conditions. Once NASA administrators were convinced that the problems had been solved, they gave their approval for a crewed mission using AS-503.
The Apollo 8 spacecraft was placed on top of the rocket on September 21, and the rocket made the slow journey to the launch pad on October 9. Testing continued all through December until the day before launch, including various levels of readiness testing from December 5 through 11. Final testing of modifications to address the problems of pogo oscillation, ruptured fuel lines, and bad igniter lines took place on December 18, three days before the scheduled launch.
As the first crewed spacecraft to orbit more than one celestial body, Apollo 8's profile had two different sets of orbital parameters, separated by a translunar injection maneuver. Apollo lunar missions would begin with a nominal circular Earth parking orbit. Apollo 8 was launched into an initial orbit with an apogee of and a perigee of , with an inclination of 32.51° to the Equator, and an orbital period of 88.19 minutes. Propellant venting increased the apogee by over the 2 hours, 44 minutes, and 30 seconds spent in the parking orbit.
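The quoted 88.19-minute period is consistent with Kepler's third law for a low parking orbit. A quick sanity check in Python, assuming a roughly 185 km circular altitude (a typical Apollo parking-orbit value, not a figure taken from this article):

```python
import math

MU_EARTH = 3.986004418e14  # Earth's standard gravitational parameter, m^3/s^2
R_EARTH = 6.378e6          # Earth's equatorial radius, m

def orbital_period_minutes(altitude_m: float) -> float:
    """Period of a circular orbit at the given altitude: T = 2*pi*sqrt(a^3/mu)."""
    a = R_EARTH + altitude_m  # semi-major axis of a circular orbit
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60

# An assumed ~185 km parking orbit gives a period close to the 88.19 minutes quoted.
print(round(orbital_period_minutes(185e3), 2))
```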
This was followed by a trans-lunar injection (TLI) burn of the S-IVB third stage for 318 seconds, accelerating the command and service module and LM test article from an orbital velocity of to the injection velocity of , which set a record for the highest speed, relative to Earth, that humans had ever traveled. This speed was slightly less than the Earth's escape velocity of , but put Apollo 8 into an elongated elliptical Earth orbit, close enough to the Moon to be captured by the Moon's gravity.
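The escape-velocity comparison follows directly from the energy relation v = sqrt(2μ/r). A hedged sketch using standard textbook constants (not mission data):

```python
import math

MU_EARTH = 3.986004418e14  # Earth's standard gravitational parameter, m^3/s^2
R_EARTH = 6.378e6          # Earth's equatorial radius, m

def escape_velocity_kms(altitude_m: float = 0.0) -> float:
    """Escape velocity from Earth at a given altitude: v = sqrt(2*mu/r)."""
    return math.sqrt(2 * MU_EARTH / (R_EARTH + altitude_m)) / 1000

print(round(escape_velocity_kms(), 1))  # → 11.2 (km/s at the surface)
```

A TLI burn to just under this speed yields the highly elongated ellipse described above rather than a true escape trajectory.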
The standard lunar orbit for Apollo missions was planned as a nominal circular orbit above the Moon's surface. Initial lunar orbit insertion was an ellipse with a perilune of and an apolune of , at an inclination of 12° from the lunar equator. This was then circularized at by , with an orbital period of 128.7 minutes. The effect of lunar mass concentrations ("mascons") on the orbit was found to be greater than initially predicted; over the course of the ten lunar orbits lasting twenty hours, the orbital distance was perturbed to by .
Apollo 8 achieved a maximum distance from Earth of .
Apollo 8 launched at 12:51:00 UTC (07:51:00 Eastern Standard Time) on December 21, 1968, using the Saturn V's three stages to achieve Earth orbit. The S-IC first stage landed in the Atlantic Ocean at , and the S-II second stage landed at . The S-IVB third stage injected the craft into Earth orbit and remained attached to perform the TLI burn that would put the spacecraft on a trajectory to the Moon.
Once the vehicle reached Earth orbit, both the crew and Houston flight controllers spent the next 2 hours and 38 minutes checking that the spacecraft was in proper working order and ready for TLI. The proper operation of the S-IVB third stage of the rocket was crucial, and in the last uncrewed test, it had failed to reignite for this burn. Collins was the first CAPCOM on duty, and at 2 hours, 27 minutes and 22 seconds after launch he radioed, "Apollo 8. You are Go for TLI." This communication meant that Mission Control had given official permission for Apollo 8 to go to the Moon. The S-IVB engine ignited on time and performed the TLI burn perfectly. Over the next five minutes, the spacecraft's speed increased from .
After the S-IVB had placed the mission on course for the Moon, the command and service modules (CSM), the remaining Apollo 8 spacecraft, separated from it. The crew then rotated the spacecraft to take photographs of the spent stage and then practiced flying in formation with it. As the crew rotated the spacecraft, they had their first views of the Earth as they moved away from it—this marked the first time humans had viewed the whole Earth at once. Borman became worried that the S-IVB was staying too close to the CSM and suggested to Mission Control that the crew perform a separation maneuver. Mission Control first suggested pointing the spacecraft towards Earth and using the small reaction control system (RCS) thrusters on the service module (SM) to add to their velocity away from the Earth, but Borman did not want to lose sight of the S-IVB. After discussion, the crew and Mission Control decided to burn in the Earth direction to increase speed, but at instead. The time needed to prepare and perform the additional burn put the crew an hour behind their onboard tasks.
Five hours after launch, Mission Control sent a command to the S-IVB to vent its remaining fuel, changing its trajectory. The S-IVB, with the test article attached, posed no further hazard to Apollo 8, passing the orbit of the Moon and going into a solar orbit with an inclination of 23.47° from the plane of the ecliptic, and an orbital period of 340.80 days. It became a , and will continue to orbit the Sun for many years.
The Apollo 8 crew were the first humans to pass through the Van Allen radiation belts, which extend up to from Earth. Scientists predicted that passing through the belts quickly at the spacecraft's high speed would cause a radiation dosage of no more than a chest X-ray, or 1 milligray (mGy; during a year, the average human receives a dose of 2 to 3 mGy). To record the actual radiation dosages, each crew member wore a Personal Radiation Dosimeter that transmitted data to Earth, as well as three passive film dosimeters that showed the cumulative radiation experienced by the crew. By the end of the mission, the crew members experienced an average radiation dose of 1.6 mGy.
Lovell's main job as Command Module Pilot was as navigator. Although Mission Control normally performed all the actual navigation calculations, it was necessary to have a crew member adept at navigation so that the crew could return to Earth in case communication with Mission Control was lost. Lovell navigated by star sightings using a sextant built into the spacecraft, measuring the angle between a star and the Earth's (or the Moon's) horizon. This task was made difficult by a large cloud of debris around the spacecraft, which made it hard to distinguish the stars.
By seven hours into the mission, the crew was about 1 hour and 40 minutes behind flight plan because of the problems in moving away from the S-IVB and Lovell's obscured star sightings. The crew placed the spacecraft into Passive Thermal Control (PTC), also called "barbecue roll", in which the spacecraft rotated about once per hour around its long axis to ensure even heat distribution across the surface of the spacecraft. In direct sunlight, parts of the spacecraft's outer surface could be heated to over , while the parts in shadow would be . These temperatures could cause the heat shield to crack and propellant lines to burst. Because it was impossible to get a perfect roll, the spacecraft swept out a cone as it rotated. The crew had to make minor adjustments every half hour as the cone pattern got larger and larger.
The first mid-course correction came eleven hours into the flight. The crew had been awake for more than 16 hours. Before launch, NASA had decided at least one crew member should be awake at all times to deal with problems that might arise. Borman started the first sleep shift but found sleeping difficult because of the constant radio chatter and mechanical noises. Testing on the ground had shown that the service propulsion system (SPS) engine had a small chance of exploding when burned for long periods unless its combustion chamber was "coated" first by burning the engine for a short period. This first correction burn was only 2.4 seconds and added about velocity prograde (in the direction of travel). This change was less than the planned , because of a bubble of helium in the oxidizer lines, which caused unexpectedly low propellant pressure. The crew had to use the small RCS thrusters to make up the shortfall. Two later planned mid-course corrections were canceled because the Apollo 8 trajectory was found to be perfect.
About an hour after starting his sleep shift, Borman obtained permission from ground control to take a Seconal sleeping pill. The pill had little effect. Borman eventually fell asleep, and then awoke feeling ill. He vomited twice and had a bout of diarrhea; this left the spacecraft full of small globules of vomit and feces, which the crew cleaned up as well as they could. Borman initially did not want everyone to know about his medical problems, but Lovell and Anders wanted to inform Mission Control. The crew decided to use the Data Storage Equipment (DSE), which could tape voice recordings and telemetry and dump them to Mission Control at high speed. After recording a description of Borman's illness they asked Mission Control to check the recording, stating that they "would like an evaluation of the voice comments".
The Apollo 8 crew and Mission Control medical personnel held a conference using an unoccupied second-floor control room (there were two identical control rooms in Houston, on the second and third floors, only one of which was used during a mission). The conference participants concluded that there was little to worry about and that Borman's illness was either a 24-hour flu, as Borman thought, or a reaction to the sleeping pill. Researchers now believe that he was suffering from space adaptation syndrome, which affects about a third of astronauts during their first day in space as their vestibular system adapts to weightlessness. Space adaptation syndrome had not occurred on previous spacecraft (Mercury and Gemini), because those astronauts could not move freely in the small cabins of those spacecraft. The increased cabin space in the Apollo command module afforded astronauts greater freedom of movement, contributing to symptoms of space sickness for Borman and, later, astronaut Rusty Schweickart during Apollo 9.
The cruise phase was a relatively uneventful part of the flight, except for the crew's checking that the spacecraft was in working order and that they were on course. During this time, NASA scheduled a television broadcast at 31 hours after launch. The Apollo 8 crew used a camera that broadcast in black-and-white only, using a Vidicon tube. The camera had two lenses, a very wide-angle (160°) lens, and a telephoto (9°) lens.
During this first broadcast, the crew gave a tour of the spacecraft and attempted to show how the Earth appeared from space. However, difficulties aiming the narrow-angle lens without the aid of a monitor to show what it was looking at made showing the Earth impossible. Additionally, without proper filters, the Earth image became saturated by any bright source. In the end, all the crew could show the people watching back on Earth was a bright blob. After broadcasting for 17 minutes, the rotation of the spacecraft took the high-gain antenna out of view of the receiving stations on Earth and they ended the transmission with Lovell wishing his mother a happy birthday.
By this time, the crew had completely abandoned the planned sleep shifts. Lovell went to sleep 32 and a half hours into the flight—3 and a half hours before he had planned to. A short while later, Anders also went to sleep after taking a sleeping pill. The crew was unable to see the Moon for much of the outward cruise. Two factors made the Moon almost impossible to see from inside the spacecraft: three of the five windows fogging up due to out-gassed oils from the silicone sealant, and the attitude required for passive thermal control. It was not until the crew had gone behind the Moon that they would be able to see it for the first time.
Apollo 8 made a second television broadcast at 55 hours into the flight. This time, the crew rigged up filters meant for the still cameras so they could acquire images of the Earth through the telephoto lens. Although difficult to aim, as they had to maneuver the entire spacecraft, the crew was able to broadcast back to Earth the first television pictures of the Earth. The crew spent the transmission describing the Earth, what was visible, and the colors they could see. The transmission lasted 23 minutes.
At about 55 hours and 40 minutes into the flight, and 13 hours before entering lunar orbit, the crew of Apollo 8 became the first humans to enter the gravitational sphere of influence of another celestial body. In other words, the effect of the Moon's gravitational force on Apollo 8 became stronger than that of the Earth. At the time it happened, Apollo 8 was from the Moon and had a speed of relative to the Moon. This historic moment was of little interest to the crew, since they were still calculating their trajectory with respect to the launch pad at Kennedy Space Center. They would continue to do so until they performed their last mid-course correction, switching to a reference frame based on ideal orientation for the second engine burn they would make in lunar orbit.
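The crossover described here, the point on the Earth-Moon line where the Moon's pull on a spacecraft exceeds Earth's, can be located with simple algebra. A rough sketch using assumed round-number values rather than mission figures; note that other conventions (such as the Laplace sphere-of-influence rule r = d·(m/M)^(2/5)) place the boundary at a different radius:

```python
import math

MASS_RATIO = 0.0123      # Moon/Earth mass ratio (assumed round value)
EARTH_MOON_KM = 384_400  # mean Earth-Moon distance in km (assumed)

def equal_pull_distance_km(d: float = EARTH_MOON_KM, mu: float = MASS_RATIO) -> float:
    """Distance from the Moon at which its gravitational pull on a spacecraft
    equals Earth's, on the Earth-Moon line.
    Setting mu / r**2 == 1 / (d - r)**2 gives r = d*sqrt(mu) / (1 + sqrt(mu))."""
    s = math.sqrt(mu)
    return d * s / (1 + s)

print(round(equal_pull_distance_km()))  # roughly 38,000 km from the Moon
```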
The last major event before Lunar Orbit Insertion (LOI) was a second mid-course correction. It was in retrograde (against the direction of travel) and slowed the spacecraft down by , effectively reducing the closest distance at which the spacecraft would pass the Moon. At exactly 61 hours after launch, about from the Moon, the crew burned the RCS for 11 seconds. They would now pass from the lunar surface.
At 64 hours into the flight, the crew began to prepare for Lunar Orbit Insertion 1 (LOI-1). This maneuver had to be performed perfectly, and due to orbital mechanics had to be on the far side of the Moon, out of contact with the Earth. After Mission Control was polled for a "go/no go" decision, the crew was told at 68 hours that they were Go and "riding the best bird we can find". Lovell replied, "We'll see you on the other side", and for the first time in history, humans travelled behind the Moon and out of radio contact with the Earth.
With ten minutes remaining before LOI-1, the crew began one last check of the spacecraft systems and made sure that every switch was in its correct position. At that time, they finally got their first glimpses of the Moon. They had been flying over the unlit side, and it was Lovell who saw the first shafts of sunlight obliquely illuminating the lunar surface. The LOI burn was only two minutes away, so the crew had little time to appreciate the view.
The SPS was ignited at 69 hours, 8 minutes, and 16 seconds after launch and burned for 4 minutes and 7 seconds, placing the Apollo 8 spacecraft in orbit around the Moon. The crew described the burn as being the longest four minutes of their lives. If the burn had been too short, the spacecraft could have ended up in a highly elliptical lunar orbit or even been flung off into space; if it had lasted too long, they could have struck the Moon. After making sure the spacecraft was working, they finally had a chance to look at the Moon, which they would orbit for the next 20 hours.
On Earth, Mission Control continued to wait. If the crew had not burned the engine, or if the burn had not lasted the planned length of time, the crew would have appeared early from behind the Moon. Exactly at the calculated moment, however, the signal was received from the spacecraft, indicating it was in orbit around the Moon.
After reporting on the status of the spacecraft, Lovell gave the first description of what the lunar surface looked like:
Lovell continued to describe the terrain they were passing over. One of the crew's major tasks was reconnaissance of planned future landing sites on the Moon, especially one in Mare Tranquillitatis that was planned as the Apollo 11 landing site. The launch time of Apollo 8 had been chosen to give the best lighting conditions for examining the site. A film camera had been set up in one of the spacecraft windows to record one frame per second of the Moon below. Bill Anders spent much of the next 20 hours taking as many photographs as possible of targets of interest. By the end of the mission, the crew had taken over eight hundred 70 mm still photographs and of 16 mm movie film.
Throughout the hour that the spacecraft was in contact with Earth, Borman kept asking how the data for the SPS looked. He wanted to make sure that the engine was working and could be used to return early to the Earth if necessary. He also asked that they receive a "go/no go" decision before they passed behind the Moon on each orbit.
As they reappeared for their second pass in front of the Moon, the crew set up equipment to broadcast a view of the lunar surface. Anders described the craters that they were passing over. At the end of this second orbit, they performed an 11-second LOI-2 burn of the SPS to circularize the orbit to .
Throughout the next two orbits, the crew continued to check the spacecraft and to observe and photograph the Moon. During the third pass, Borman read a small prayer for his church. He had been scheduled to participate in a service at St. Christopher's Episcopal Church near Seabrook, Texas, but due to the Apollo 8 flight, he was unable to attend. A fellow parishioner and engineer at Mission Control, Rod Rose, suggested that Borman read the prayer, which could be recorded and then replayed during the service.
When the spacecraft came out from behind the Moon for its fourth pass across the front, the crew witnessed an "Earthrise" in person for the first time in human history. NASA's Lunar Orbiter 1 had taken the first picture of an Earthrise from the vicinity of the Moon, on August 23, 1966. Anders saw the Earth emerging from behind the lunar horizon and called in excitement to the others, taking a black-and-white photograph as he did so. Anders asked Lovell for color film and then took "Earthrise", a now famous color photo, later picked by "Life" magazine as one of its hundred photos of the century.
Due to the synchronous rotation of the Moon about the Earth, Earthrise is not generally visible from the lunar surface. This is because, as seen from any one place on the Moon's surface, Earth remains in approximately the same position in the lunar sky, either above or below the horizon. Earthrise is generally visible only while orbiting the Moon, and at selected surface locations near the Moon's limb, where libration carries the Earth slightly above and below the lunar horizon.
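The geometry described in this paragraph can be sketched as a simple check. The ~8 degree libration amplitude below is an approximate textbook figure, and the function is a deliberately simplified illustration, not a precise ephemeris.

```python
# Simplified illustration: Earth stays near a fixed point in the lunar sky,
# wobbling by roughly +/-8 degrees due to libration (approximate figure).
# A site on the Moon sees Earth rise and set only if it lies within that
# wobble band of the limb, i.e. about 90 degrees from the sub-Earth point.

LIBRATION_AMPLITUDE_DEG = 8.0  # combined optical libration, rough value

def earthrise_possible(angle_from_sub_earth_deg: float) -> bool:
    """True if Earth can cross the local horizon at this site."""
    return abs(angle_from_sub_earth_deg - 90.0) <= LIBRATION_AMPLITUDE_DEG

print(earthrise_possible(45.0))  # near-side interior: Earth always up
print(earthrise_possible(88.0))  # near the limb: Earth rises and sets
```

For almost all of the near side the first call returns False, matching the article's point that Earthrise is generally seen only from lunar orbit.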
Anders continued to take photographs while Lovell assumed control of the spacecraft so that Borman could rest. Despite the difficulty resting in the cramped and noisy spacecraft, Borman was able to sleep for two orbits, awakening periodically to ask questions about their status. Borman awoke fully, however, when he started to hear his fellow crew members make mistakes. They were beginning to not understand questions and had to ask for the answers to be repeated. Borman realized that everyone was extremely tired from not having a good night's sleep in over three days. He ordered Anders and Lovell to get some sleep and scrubbed the rest of the flight plan's lunar observations. Anders initially protested, saying that he was fine, but Borman would not be swayed. Anders finally agreed under the condition that Borman would set up the camera to continue to take automatic pictures of the Moon. Borman also remembered that there was a second television broadcast planned, and with so many people expected to be watching, he wanted the crew to be alert. For the next two orbits, Anders and Lovell slept while Borman sat at the helm.
As they rounded the Moon for the ninth time, the astronauts began their second television transmission from lunar orbit. Borman introduced the crew, followed by each man giving his impression of the lunar surface and what it was like to be orbiting the Moon. Borman described it as being "a vast, lonely, forbidding expanse of nothing". Then, after talking about what they were flying over, Anders said that the crew had a message for all those on Earth. Each man on board read a section from the Biblical creation story from the Book of Genesis. Borman finished the broadcast by wishing a Merry Christmas to everyone on Earth. His message appeared to sum up the feelings that all three crewmen had from their vantage point in lunar orbit. Borman said, "And from the crew of Apollo 8, we close with good night, good luck, a Merry Christmas and God bless all of you—all of you on the good Earth."
The only task left for the crew at this point was to perform the trans-Earth injection (TEI), which was scheduled for hours after the end of the television transmission. The TEI was the most critical burn of the flight, as any failure of the SPS to ignite would strand the crew in lunar orbit, with little hope of escape. As with the previous burn, the crew had to perform the maneuver above the far side of the Moon, out of contact with Earth. The burn occurred exactly on time. The spacecraft telemetry was reacquired as it re-emerged from behind the Moon at 89 hours, 28 minutes, and 39 seconds, the exact time calculated. When voice contact was regained, Lovell announced, "Please be informed, there is a Santa Claus", to which Ken Mattingly, the current CAPCOM, replied, "That's affirmative, you are the best ones to know." The spacecraft began its journey back to Earth on December 25, Christmas Day.
Later, Lovell used some otherwise idle time to do some navigational sightings, maneuvering the module to view various stars by using the computer keyboard. However, he accidentally erased some of the computer's memory, which caused the inertial measurement unit (IMU) to contain data indicating that the module was in the same relative orientation it had been in before lift-off; the computer then fired the thrusters to "correct" the module's attitude.
Once the crew realized why the computer had changed the module's attitude, they realized that they would have to reenter data to tell the computer the module's actual orientation. It took Lovell ten minutes to figure out the right numbers, using the thrusters to get the stars Rigel and Sirius aligned, and another 15 minutes to enter the corrected data into the computer. Sixteen months later, during the Apollo 13 mission, Lovell would have to perform a similar manual realignment under more critical conditions after the module's IMU had to be turned off to conserve energy.
The cruise back to Earth was mostly a time for the crew to relax and monitor the spacecraft. As long as the trajectory specialists had calculated everything correctly, the spacecraft would reenter Earth's atmosphere two-and-a-half days after TEI and splash down in the Pacific.
On Christmas afternoon, the crew made their fifth television broadcast. This time, they gave a tour of the spacecraft, showing how an astronaut lived in space. When they finished broadcasting, they found a small present from Slayton in the food locker: a real turkey dinner with stuffing, in the same kind of pack given to the troops in Vietnam.
Another Slayton surprise was a gift of three miniature bottles of brandy, which Borman ordered the crew to leave alone until after they landed. They remained unopened, even years after the flight. There were also small presents to the crew from their wives. The next day, at about 124 hours into the mission, the sixth and final TV transmission showed the mission's best video images of the Earth, during a four-minute broadcast. After two uneventful days, the crew prepared for reentry. The computer would control the reentry, and all the crew had to do was put the spacecraft in the correct attitude, with the blunt end forward. In the event of computer failure, Borman was ready to take over.
Separation from the service module prepared the command module for reentry by exposing the heat shield and shedding unneeded mass. The service module would burn up in the atmosphere as planned. Six minutes before they hit the top of the atmosphere, the crew saw the Moon rising above the Earth's horizon, just as had been calculated by the trajectory specialists. As the module hit the thin outer atmosphere, the crew noticed that it was becoming hazy outside as glowing plasma formed around the spacecraft. The spacecraft started slowing down, and the deceleration peaked at . With the computer controlling the descent by changing the attitude of the spacecraft, Apollo 8 rose briefly like a skipping stone before descending to the ocean. At , the drogue parachute deployed, stabilizing the spacecraft, followed at by the three main parachutes. The spacecraft splashdown position was officially reported as in the North Pacific Ocean, southwest of Hawaii at 15:51:42 UTC on December 27, 1968.
When the spacecraft hit the water, the parachutes dragged it over and left it upside down, in what was termed Stable 2 position. As they were buffeted by a swell, Borman was sick, waiting for the three flotation balloons to right the spacecraft. About six minutes after splashdown, the command module was righted into a normal apex-up orientation by its inflatable bag uprighting system. The first frogman from the aircraft carrier "Yorktown" arrived 43 minutes after splashdown. Forty-five minutes later, the crew was safe on the flight deck of the "Yorktown".
Apollo 8 came at the end of 1968, a year that had seen much upheaval in the United States and most of the world. Even though the year saw political assassinations, political unrest in the streets of Europe and America, and the Prague Spring, "Time" magazine chose the crew of Apollo 8 as its Men of the Year for 1968, recognizing them as the people who most influenced events in the preceding year. They had been the first people ever to leave the gravitational influence of the Earth and orbit another celestial body. They had survived a mission that even the crew themselves had rated as having only a fifty-fifty chance of fully succeeding. The effect of Apollo 8 was summed up in a telegram from a stranger, received by Borman after the mission, that stated simply, "Thank you Apollo 8. You saved 1968."
One of the most famous aspects of the flight was the "Earthrise" picture that the crew took as they came around for their fourth orbit of the Moon. This was the first time that humans had taken such a picture while actually behind the camera, and it has been credited as one of the inspirations of the first Earth Day in 1970. It was selected as the first of "Life" magazine's "100 Photographs That Changed the World".
Apollo 11 astronaut Michael Collins said, "Eight's momentous historic significance was foremost"; while space historian Robert K. Poole saw Apollo 8 as the most historically significant of all the Apollo missions. The mission was the most widely covered by the media since the first American orbital flight, Mercury-Atlas 6 by John Glenn, in 1962. There were 1,200 journalists covering the mission, with the BBC's coverage broadcast in 54 countries in 15 different languages. The Soviet newspaper "Pravda" featured a quote from Boris Nikolaevich Petrov, Chairman of the Soviet Interkosmos program, who described the flight as an "outstanding achievement of American space sciences and technology". It is estimated that a quarter of the people alive at the time saw—either live or delayed—the Christmas Eve transmission during the ninth orbit of the Moon. The Apollo 8 broadcasts won an Emmy Award, the highest honor given by the Academy of Television Arts & Sciences.
Madalyn Murray O'Hair, an atheist, later caused controversy by bringing a lawsuit against NASA over the reading from Genesis. O'Hair wanted the courts to ban American astronauts—who were all government employees—from public prayer in space. Though the case was rejected by the Supreme Court of the United States, apparently for lack of jurisdiction in outer space, it caused NASA to be skittish about the issue of religion throughout the rest of the Apollo program. Buzz Aldrin, on Apollo 11, privately took Presbyterian Communion on the surface of the Moon after landing; he refrained from mentioning this publicly for several years and referred to it only obliquely at the time.
In 1969, the United States Post Office Department issued a postage stamp (Scott catalogue #1371) commemorating the Apollo 8 flight around the Moon. The stamp featured a detail of the famous photograph of the Earthrise over the Moon taken by Anders on Christmas Eve, and the words, "In the beginning God...", the first words of the book of Genesis. In January 1969, just 18 days after the crew's return to Earth, they appeared in the Super Bowl III pre-game show, reciting the Pledge of Allegiance, before the national anthem was performed by Anita Bryant.
In January 1970, the spacecraft was delivered to Osaka, Japan, for display in the U.S. pavilion at Expo '70. It is now displayed at the Chicago Museum of Science and Industry, along with a collection of personal items from the flight donated by Lovell and the space suit worn by Frank Borman. Jim Lovell's Apollo8 space suit is on public display in the Visitor Center at NASA's Glenn Research Center. Bill Anders's space suit is on display at the Science Museum in London, United Kingdom.
Apollo 8's historic mission has been depicted and referred to in several forms, both documentary and fiction. The various television transmissions and 16 mm footage shot by the crew of Apollo 8 were compiled and released by NASA in the 1969 documentary "Debrief: Apollo 8", hosted by Burgess Meredith. In addition, Spacecraft Films released, in 2003, a three-disc DVD set containing all of NASA's TV and 16 mm film footage related to the mission, including all TV transmissions from space, training and launch footage, and motion pictures taken in flight. Other documentaries include "Race to the Moon" (2005) as part of season 18 of "American Experience" and "In the Shadow of the Moon" (2007). "Apollo's Daring Mission" aired on PBS' "" in December 2018, marking the flight's 50th anniversary.
Parts of the mission are dramatized in the 1998 miniseries "From the Earth to the Moon" episode "1968". The S-IVB stage of Apollo 8 was also portrayed as the location of an alien device in the 1970 "UFO" episode "Conflict". Apollo 8's lunar orbit insertion was chronicled with actual recordings in the song "The Other Side", on the album "The Race for Space", by the band Public Service Broadcasting.
A documentary film, "" was released in 2018.
The choral music piece "Earthrise" by Luke Byrne commemorates the mission. The piece was premièred on January 19, 2020 by Sydney Philharmonia Choirs at the Sydney Opera House.
Astronaut
An astronaut or cosmonaut is a person trained by a human spaceflight program to command, pilot, or serve as a crew member of a spacecraft. Although generally reserved for professional space travelers, the terms are sometimes applied to anyone who travels into space, including scientists, politicians, journalists and tourists.
Until 2002, astronauts were sponsored and trained exclusively by governments, either by the military or by civilian space agencies. With the suborbital flight of the privately funded SpaceShipOne in 2004, a new category of astronaut was created: the commercial astronaut.
The criteria for what constitutes human spaceflight vary, with some focus on the point where the atmosphere becomes so thin that centrifugal force, rather than aerodynamic force, carries a significant portion of the weight of the flight object. The Fédération Aéronautique Internationale (FAI) Sporting Code for astronautics recognizes only flights that exceed the Kármán line, at an altitude of . In the United States, professional, military, and commercial astronauts who travel above an altitude of are awarded astronaut wings.
A total of 552 people from 36 countries have reached or more in altitude, of whom 549 reached low Earth orbit or beyond.
Of these, 24 people have traveled beyond low Earth orbit, either to lunar orbit, the lunar surface, or, in one case, a loop around the Moon. Three of the 24—Jim Lovell, John Young and Eugene Cernan—did so twice.
Under the U.S. definition, 558 people qualify as having reached space, above altitude. Of eight X-15 pilots who exceeded in altitude, only one exceeded 100 kilometers (about 62 miles). Space travelers have spent over 41,790 man-days (114.5 man-years) in space, including over 100 astronaut-days of spacewalks. The person with the longest cumulative time in space is Gennady Padalka, who has spent 879 days in space. Peggy A. Whitson holds the record for the most time in space by a woman, 377 days.
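The cumulative figure above is a simple unit conversion; the snippet below just checks the arithmetic, using a 365-day year, which reproduces the article's rounding.

```python
# Converting the cumulative time in space from man-days to man-years.
MAN_DAYS = 41_790
man_years = MAN_DAYS / 365  # using a 365-day year

print(f"{man_years:.1f} man-years")  # ≈ 114.5
```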
In 1959, when both the United States and Soviet Union were planning, but had yet to launch humans into space, NASA Administrator T. Keith Glennan and his Deputy Administrator, Dr. Hugh Dryden, discussed whether spacecraft crew members should be called "astronauts" or "cosmonauts". Dryden preferred "cosmonaut", on the grounds that flights would occur in the "cosmos" (near space), while the "astro" prefix suggested flight to the stars. Most NASA Space Task Group members preferred "astronaut", which survived by common usage as the preferred American term. When the Soviet Union launched the first man into space, Yuri Gagarin in 1961, they chose a term which anglicizes to "cosmonaut".
In English-speaking nations, a professional space traveler is called an "astronaut". The term derives from the Greek words "ástron" (ἄστρον), meaning "star", and "nautes" (ναύτης), meaning "sailor". The first known use of the term "astronaut" in the modern sense was by Neil R. Jones in his 1930 short story "The Death's Head Meteor". The word itself had been known earlier; for example, in Percy Greg's 1880 book "Across the Zodiac", "astronaut" referred to a spacecraft. In "Les Navigateurs de l'Infini" (1925) by J.-H. Rosny aîné, the word "astronautique" (astronautic) was used. The word may have been inspired by "aeronaut", an older term for an air traveler first applied in 1784 to balloonists. An early use of "astronaut" in a non-fiction publication is Eric Frank Russell's poem "The Astronaut", appearing in the November 1934 "Bulletin of the British Interplanetary Society".
The first known formal use of the term astronautics in the scientific community was the establishment of the annual International Astronautical Congress in 1950, and the subsequent founding of the International Astronautical Federation the following year.
NASA applies the term astronaut to any crew member aboard NASA spacecraft bound for Earth orbit or beyond. NASA also uses the term as a title for those selected to join its Astronaut Corps. The European Space Agency similarly uses the term astronaut for members of its Astronaut Corps.
By convention, an astronaut employed by the Russian Federal Space Agency (or its Soviet predecessor) is called a "cosmonaut" in English texts. The word is an anglicisation of the Russian word "kosmonavt" (, ), one who works in space outside the Earth's atmosphere, a space traveler, which derives from the Greek words "kosmos" (κόσμος), meaning "universe", and "nautes" (ναύτης), meaning "sailor". Other countries of the former Eastern Bloc use variations of the Russian word "kosmonavt", such as the Polish "kosmonauta" (although Polish also uses "astronauta", and the two words are considered synonyms).
Coinage of the term "kosmonavt" has been credited to Soviet aeronautics pioneer Mikhail Tikhonravov (1900–1974). The first cosmonaut was Soviet Air Force pilot Yuri Gagarin, also the first person in space. He was part of the first six Soviet citizens, with German Titov, Yevgeny Khrunov, Andriyan Nikolayev, Pavel Popovich, and Grigoriy Nelyubov, who were given the title of pilot-cosmonaut in January 1961. Valentina Tereshkova was the first female cosmonaut; she flew a solo mission on Vostok 6 in 1963 and remains the youngest woman to have flown in space. On March 14, 1995, Norman Thagard became the first American to ride to space on board a Russian launch vehicle, and thus became the first "American cosmonaut".
(, "Space-universe navigating personnel") is used for astronauts and cosmonauts in general, while (, "navigating outer space personnel") is used for Chinese astronauts. Here, () is strictly defined as the navigation of outer space within the local star system, i.e. solar system. The phrase (, "spaceman") is often used in Hong Kong and Taiwan.
The term "taikonaut" is used by some English-language news media organizations for professional space travelers from China. The word has featured in the Longman and Oxford English dictionaries, the latter of which describes it as a hybrid of the Chinese term (, 'space') and the Greek (, 'sailor'); the term became more common in 2003 when China sent its first astronaut Yang Liwei into space aboard the "Shenzhou 5" spacecraft. This is the term used by Xinhua News Agency in the English version of the Chinese "People's Daily" since the advent of the Chinese space program. The origin of the term is unclear; as early as May 1998, Chiew Lee Yih () from Malaysia, used it in newsgroups.
With the rise of space tourism, NASA and the Russian Federal Space Agency agreed to use the term "spaceflight participant" to distinguish those space travelers from professional astronauts on missions coordinated by those two agencies.
While no nation other than Russia (and previously the Soviet Union), the United States, and China has launched a manned spacecraft, several other nations have sent people into space in cooperation with one of these countries, e.g. through the Soviet-led Interkosmos programme. Inspired partly by these missions, other synonyms for astronaut have entered occasional English usage. For example, the term "spationaut" (French spelling: ) is sometimes used to describe French space travelers, from the Latin word for "space"; the Malay term was used to describe participants in the Angkasawan program; and the Indian Space Research Organisation hopes to launch a spacecraft in 2022 that would carry "vyomanauts", coined from the Sanskrit word ( meaning 'sky' or 'space'). In Finland, the NASA astronaut Timothy Kopra, a Finnish American, has sometimes been referred to as , from the Finnish word .
As of 2020 in the United States, astronaut status is conferred on a person depending on the authorizing agency:
The first human in space was Soviet Yuri Gagarin, who was launched on April 12, 1961, aboard Vostok 1 and orbited around the Earth for 108 minutes. The first woman in space was Soviet Valentina Tereshkova, who launched on June 16, 1963, aboard Vostok 6 and orbited Earth for almost three days.
Alan Shepard became the first American and second person in space on May 5, 1961, on a 15-minute sub-orbital flight aboard "Freedom 7". The first American to orbit the Earth was John Glenn, aboard "Friendship 7" on February 20, 1962. The first American woman in space was Sally Ride, during Space Shuttle "Challenger"'s mission STS-7, on June 18, 1983. In 1992 Mae Jemison became the first African American woman to travel in space aboard STS-47.
Cosmonaut Alexei Leonov was the first person to conduct an extravehicular activity (EVA), (commonly called a "spacewalk"), on March 18, 1965, on the Soviet Union's Voskhod 2 mission. This was followed two and a half months later by astronaut Ed White who made the first American EVA on NASA's Gemini 4 mission.
The first manned mission to orbit the Moon, Apollo 8, included American William Anders, who was born in Hong Kong, making him the first Asian-born astronaut when the mission flew in 1968.
The Soviet Union, through its Intercosmos program, allowed people from other "socialist" (i.e. Warsaw Pact and other Soviet-allied) countries to fly on its missions, with the notable exceptions of France and Austria participating in Soyuz TM-7 and Soyuz TM-13, respectively. An example is Czechoslovak Vladimír Remek, the first cosmonaut from a country other than the Soviet Union or the United States, who flew to space in 1978 on a Soyuz-U rocket. Rakesh Sharma became the first Indian citizen to travel to space. He was launched aboard Soyuz T-11, on April 2, 1984.
On July 23, 1980, Pham Tuan of Vietnam became the first Asian in space when he flew aboard Soyuz 37. Also in 1980, Cuban Arnaldo Tamayo Méndez became the first person of Hispanic and black African descent to fly in space, and in 1983, Guion Bluford became the first African American to fly into space. In April 1985, Taylor Wang became the first ethnic Chinese person in space. The first person born in Africa to fly in space was Patrick Baudry (France), in 1985. In 1985, Saudi Arabian Prince Sultan Bin Salman Bin AbdulAziz Al-Saud became the first Arab Muslim astronaut in space. In 1988, Abdul Ahad Mohmand became the first Afghan to reach space, spending nine days aboard the "Mir" space station.
With the increase of seats on the Space Shuttle, the U.S. began taking international astronauts. In 1983, Ulf Merbold of West Germany became the first non-US citizen to fly in a US spacecraft. In 1984, Marc Garneau became the first of 8 Canadian astronauts to fly in space (through 2010).
In 1985, Rodolfo Neri Vela became the first Mexican-born person in space. In 1991, Helen Sharman became the first Briton to fly in space.
In 2002, Mark Shuttleworth became the first citizen of an African country to fly in space, as a paying spaceflight participant. In 2003, Ilan Ramon became the first Israeli to fly in space, although he died during a re-entry accident.
On October 15, 2003, Yang Liwei became China's first astronaut on the Shenzhou 5 spacecraft.
The youngest person to fly in space is Gherman Titov, who was 25 years old when he flew Vostok 2. (Titov was also the first person to suffer space sickness).
The oldest person who has flown in space is John Glenn, who was 77 when he flew on STS-95.
The longest time spent in space is 438 days, by Russian Valeri Polyakov.
As of 2006, the most spaceflights by an individual astronaut is seven, a record held by both Jerry L. Ross and Franklin Chang-Diaz. The farthest distance from Earth an astronaut has traveled was , when Jim Lovell, Jack Swigert, and Fred Haise went around the Moon during the Apollo 13 emergency.
The first civilian in space was Valentina Tereshkova aboard Vostok 6 (she also became the first woman in space on that mission).
Tereshkova was inducted into the USSR's Air Force only honorarily, as it did not accept female pilots at that time. A month later, Joseph Albert Walker became the first American civilian in space when his X-15 Flight 90 crossed the line, qualifying him by the international definition of spaceflight. Walker had joined the US Army Air Force but was not a member at the time of his flight.
The first people in space who had never been members of any country's armed forces were Konstantin Feoktistov and Boris Yegorov, who both flew aboard Voskhod 1.
The first non-governmental space traveler was Byron K. Lichtenberg, a researcher from the Massachusetts Institute of Technology who flew on STS-9 in 1983. In December 1990, Toyohiro Akiyama became the first paying space traveler, as a reporter for Tokyo Broadcasting System who visited Mir as part of an estimated $12 million (USD) deal with a Japanese TV station, although at the time the term used to refer to Akiyama was "Research Cosmonaut". Akiyama suffered severe space sickness during his mission, which affected his productivity.
The first self-funded space tourist was Dennis Tito on board the Russian spacecraft Soyuz TM-32 on April 28, 2001.
The first person to fly on an entirely privately funded mission was Mike Melvill, piloting SpaceShipOne flight 15P on a suborbital journey, although he was a test pilot employed by Scaled Composites and not an actual paying space tourist. Seven others have paid the Russian Space Agency to fly into space:
The first NASA astronauts were selected for training in 1959. Early in the space program, military jet test piloting and engineering training were often cited as prerequisites for selection as an astronaut at NASA, although neither John Glenn nor Scott Carpenter (of the Mercury Seven) had a university degree in engineering or any other discipline at the time of their selection. Selection was initially limited to military pilots. The earliest astronauts for both America and the USSR tended to be jet fighter pilots, and were often test pilots.
Once selected, NASA astronauts go through twenty months of training in a variety of areas, including training for extravehicular activity in a facility such as NASA's Neutral Buoyancy Laboratory. Astronauts-in-training (astronaut candidates) may also experience short periods of weightlessness (microgravity) in an aircraft called the "Vomit Comet," the nickname given to a pair of modified KC-135s (retired in 2000 and 2004, respectively, and replaced in 2005 with a C-9) which perform parabolic flights. Astronauts are also required to accumulate a number of flight hours in high-performance jet aircraft. This is mostly done in T-38 jet aircraft out of Ellington Field, due to its proximity to the Johnson Space Center. Ellington Field is also where the Shuttle Training Aircraft is maintained and developed, although most flights of the aircraft are conducted from Edwards Air Force Base.
Astronauts in training must learn how to control and fly the Space Shuttle; it is also vital that they be familiar with the International Space Station so that they know what to do when they arrive.
Mission Specialist Educators, or "Educator Astronauts", were first selected in 2004, and as of 2007, there are three NASA Educator astronauts: Joseph M. Acaba, Richard R. Arnold, and Dorothy Metcalf-Lindenburger.
Barbara Morgan, selected as back-up teacher to Christa McAuliffe in 1985, is considered to be the first Educator astronaut by the media, but she trained as a mission specialist.
The Educator Astronaut program is a successor to the Teacher in Space program from the 1980s.
Astronauts are susceptible to a variety of health risks including decompression sickness, barotrauma, immunodeficiencies, loss of bone and muscle, loss of eyesight, orthostatic intolerance, sleep disturbances, and radiation injury. A variety of large scale medical studies are being conducted in space via the National Space and Biomedical Research Institute (NSBRI) to address these issues. Prominent among these is the Advanced Diagnostic Ultrasound in Microgravity Study in which astronauts (including former ISS commanders Leroy Chiao and Gennady Padalka) perform ultrasound scans under the guidance of remote experts to diagnose and potentially treat hundreds of medical conditions in space. This study's techniques are now being applied to professional and Olympic sports injuries, as well as to ultrasound performed by non-expert operators such as medical and high school students. It is anticipated that remote guided ultrasound will have application on Earth in emergency and rural care situations, where access to a trained physician is often rare.
A 2006 Space Shuttle experiment found that "Salmonella typhimurium", a bacterium that can cause food poisoning, became more virulent when cultivated in space. More recently, in 2017, bacteria were found to be more resistant to antibiotics and to thrive in the near-weightlessness of space. Microorganisms have been observed to survive the vacuum of outer space.
On December 31, 2012, a NASA-supported study reported that human spaceflight may harm the brain and accelerate the onset of Alzheimer's disease.
In October 2015, the NASA Office of Inspector General issued a health hazards report related to space exploration, including a human mission to Mars.
Over the last decade, flight surgeons and scientists at NASA have seen a pattern of vision problems in astronauts on long-duration space missions. The syndrome, known as visual impairment intracranial pressure (VIIP), has been reported in nearly two-thirds of space explorers after long periods spent aboard the International Space Station (ISS).
On November 2, 2017, scientists reported that significant changes in the position and structure of the brain have been found in astronauts who have taken trips in space, based on MRI studies. Longer space trips were associated with greater brain changes.
Being in space can physiologically decondition the body. It can affect the otolith organs and the adaptive capabilities of the central nervous system. Zero gravity and cosmic rays can have many harmful effects on astronauts.
In October 2018, NASA-funded researchers found that lengthy journeys into outer space, including travel to the planet Mars, may substantially damage the gastrointestinal tissues of astronauts. The studies support earlier work that found such journeys could significantly damage the brains of astronauts, and age them prematurely.
Researchers in 2018 reported, after detecting the presence on the International Space Station (ISS) of five "Enterobacter bugandensis" bacterial strains, none pathogenic to humans, that microorganisms on ISS should be carefully monitored to continue assuring a medically healthy environment for astronauts.
A study by Russian scientists published in April 2019 stated that astronauts facing space radiation could experience temporary hindrance of their memory centres. While this does not affect their intellectual capabilities, it temporarily hinders the formation of new cells in the brain's memory centres. The study, conducted by the Moscow Institute of Physics and Technology (MIPT), reached this conclusion after observing that exposing mice to neutron and gamma radiation did not impact the rodents' intellectual capabilities.
An astronaut on the International Space Station requires about mass of food inclusive of food packaging per meal each day. (The packaging mass for each meal is about ) Longer-duration missions require more food.
Shuttle astronauts worked with nutritionists to select menus that appeal to their individual tastes. Five months before flight, menus are selected and analyzed for nutritional content by the shuttle dietician. Foods are tested to see how they will react in a reduced gravity environment. Caloric requirements are determined using a basal energy expenditure (BEE) formula.
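The basal energy expenditure calculation mentioned above is commonly done with the Harris–Benedict equations; the sketch below uses the original coefficients as an illustration, since the exact formula used by the shuttle dietician is not specified here.

```python
def basal_energy_expenditure(sex: str, weight_kg: float, height_cm: float, age_yr: float) -> float:
    """Estimate BEE in kilocalories per day using the original
    Harris-Benedict equations (an assumption for illustration; the
    exact formula NASA dieticians use is not given in the text)."""
    if sex == "male":
        return 66.47 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_yr
    if sex == "female":
        return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr
    raise ValueError("sex must be 'male' or 'female'")

# Example: a hypothetical 70 kg, 175 cm, 40-year-old male astronaut
print(round(basal_energy_expenditure("male", 70, 175, 40), 1))  # ≈ 1634.3 kcal/day
```

In practice the BEE is then scaled by activity factors to set total caloric targets; in microgravity, requirements are adjusted further because of the reduced musculoskeletal loading.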
On Earth, the average American uses about of water every day. On board the ISS astronauts limit water use to only about per day.
In Russia, cosmonauts are awarded Pilot-Cosmonaut of the Russian Federation upon completion of their missions, often accompanied with the award of Hero of the Russian Federation. This follows the practice established in the USSR where cosmonauts were usually awarded the title Hero of the Soviet Union.
At NASA, those who complete astronaut candidate training receive a silver lapel pin. Once they have flown in space, they receive a gold pin. U.S. astronauts who also have active-duty military status receive a special qualification badge, known as the Astronaut Badge, after participation on a spaceflight. The United States Air Force also presents an Astronaut Badge to its pilots who exceed in altitude.
Eighteen astronauts (fourteen men and four women) have lost their lives during four space flights. By nationality, thirteen were American (including one born in India), four were Russian (Soviet Union), and one was Israeli.
Eleven people (all men) have lost their lives training for spaceflight: eight Americans and three Russians. Six of these were in crashes of training jet aircraft, one drowned during water recovery training, and four were due to fires in pure oxygen environments.
The Space Mirror Memorial, which stands on the grounds of the John F. Kennedy Space Center Visitor Complex, commemorates the lives of the men and women who have died during spaceflight and during training in the space programs of the United States. In addition to twenty NASA career astronauts, the memorial includes the names of a U.S. Air Force X-15 test pilot, a U.S. Air Force officer who died while training for a then-classified military space program, and a civilian spaceflight participant.
A Modest Proposal
A Modest Proposal For preventing the Children of Poor People From being a Burthen to Their Parents or Country, and For making them Beneficial to the Publick, commonly referred to as A Modest Proposal, is a Juvenalian satirical essay written and published anonymously by Jonathan Swift in 1729. The essay suggests that the impoverished Irish might ease their economic troubles by selling their children as food to rich gentlemen and ladies. This satirical hyperbole mocked heartless attitudes towards the poor, as well as British policy toward the Irish in general.
In English writing, the phrase "a modest proposal" is now conventionally an allusion to this style of straight-faced satire.
Swift's essay is widely held to be one of the greatest examples of sustained irony in the history of the English language. Much of its shock value derives from the fact that the first portion of the essay describes the plight of starving beggars in Ireland, so that the reader is unprepared for the surprise of Swift's solution when he states: "A young healthy child well nursed, is, at a year old, a most delicious nourishing and wholesome food, whether stewed, roasted, baked, or boiled; and I make no doubt that it will equally serve in a fricassee, or a ragout."
Swift goes to great lengths to support his argument, including a list of possible preparation styles for the children, and calculations showing the financial benefits of his suggestion. He uses methods of argument throughout his essay which lampoon the then-influential William Petty and the social engineering popular among followers of Francis Bacon. These lampoons include appealing to the authority of "a very knowing American of my acquaintance in London" and "the famous Psalmanazar, a native of the island Formosa" (who had already confessed to "not" being from Formosa in 1706).
In the tradition of Roman satire, Swift introduces the reforms he is actually suggesting by paralipsis.
George Wittkowsky argued that Swift's main target in "A Modest Proposal" was not the conditions in Ireland, but rather the can-do spirit of the times that led people to devise a number of illogical schemes that would purportedly solve social and economic ills. Swift was especially attacking projects that tried to fix population and labour issues with a simple cure-all solution. A memorable example of these sorts of schemes "involved the idea of running the poor through a joint-stock company". In response, Swift's "Modest Proposal" was "a burlesque of projects concerning the poor" that were in vogue during the early 18th century.
"A Modest Proposal" also targets the calculating way people perceived the poor in designing their projects. The pamphlet targets reformers who "regard people as commodities". In the piece, Swift adopts the "technique of a political arithmetician" to show the utter ridiculousness of trying to prove any proposal with dispassionate statistics.
Critics differ about Swift's intentions in using this faux-mathematical philosophy. Edmund Wilson argues that statistically "the logic of the 'Modest proposal' can be compared with defence of crime (arrogated to Marx) in which he argues that crime takes care of the superfluous population". Wittkowsky counters that Swift's satiric use of statistical analysis is an effort to enhance his satire that "springs from a spirit of bitter mockery, not from the delight in calculations for their own sake".
Charles K. Smith argues that Swift's rhetorical style persuades the reader to detest the speaker and pity the Irish. Swift's specific strategy is twofold, using a "trap" to create sympathy for the Irish and a dislike of the narrator who, in the span of one sentence, "details vividly and with rhetorical emphasis the grinding poverty" but feels emotion solely for members of his own class. Swift's use of gripping details of poverty and his narrator's cool approach towards them create "two opposing points of view" that "alienate the reader, perhaps unconsciously, from a narrator who can view with 'melancholy' detachment a subject that Swift has directed us, rhetorically, to see in a much less detached way."
Swift has his proposer further degrade the Irish by using language ordinarily reserved for animals. Lewis argues that the speaker uses "the vocabulary of animal husbandry" to describe the Irish. Once the children have been commodified, Swift's rhetoric can easily turn "people into animals, then meat, and from meat, logically, into tonnage worth a price per pound".
Swift uses the proposer's serious tone to highlight the absurdity of his proposal. In making his argument, the speaker uses the conventional, textbook-approved order of argument from Swift's time (which was derived from the Latin rhetorician Quintilian). The contrast between the "careful control against the almost inconceivable perversion of his scheme" and "the ridiculousness of the proposal" create a situation in which the reader has "to consider just what perverted values and assumptions would allow such a diligent, thoughtful, and conventional man to propose so perverse a plan".
Scholars have speculated about which earlier works Swift may have had in mind when he wrote "A Modest Proposal".
James William Johnson argues that "A Modest Proposal" was largely influenced and inspired by Tertullian's "Apology": a satirical attack against early Roman persecution of Christianity. Johnson believes that Swift saw major similarities between the two situations. Johnson notes Swift's obvious affinity for Tertullian and the bold stylistic and structural similarities between the works "A Modest Proposal" and "Apology". In structure, Johnson points out the same central theme, that of cannibalism and the eating of babies as well as the same final argument, that "human depravity is such that men will attempt to justify their own cruelty by accusing their victims of being lower than human". Stylistically, Swift and Tertullian share the same command of sarcasm and language. In agreement with Johnson, Donald C. Baker points out the similarity between both authors' tones and use of irony. Baker notes the uncanny way that both authors imply an ironic "justification by ownership" over the subject of sacrificing children—Tertullian while attacking pagan parents, and Swift while attacking the English mistreatment of the Irish poor.
It has also been argued that "A Modest Proposal" was, at least in part, a response to the 1728 essay "The Generous Projector or, A Friendly Proposal to Prevent Murder and Other Enormous Abuses, By Erecting an Hospital for Foundlings and Bastard Children" by Swift's rival Daniel Defoe.
Bernard Mandeville's "Modest Defence of Publick Stews" proposed the introduction of public, state-controlled bordellos. The 1726 paper acknowledges women's interests and, while not a completely satirical text, has also been discussed as an inspiration for Jonathan Swift's title. Mandeville had by 1705 already become famous for "The Fable of the Bees" and its deliberations on private vices and public benefits.
John Locke commented: "Be it then as Sir Robert says, that Anciently, it was usual for Men to sell and Castrate their Children. Let it be, that they exposed them; Add to it, if you please, for this is still greater Power, "that they begat them for their Tables to fat and eat them": If this proves a right to do so, we may, by the same Argument, justifie Adultery, Incest and Sodomy, for there are examples of these too, both Ancient and Modern; Sins, which I suppose, have the Principle Aggravation from this, that they cross the main intention of Nature, which willeth the increase of Mankind, and the continuation of the Species in the highest perfection, and the distinction of Families, with the Security of the Marriage Bed, as necessary thereunto". (First Treatise, sec. 59).
Robert Phiddian's article "Have you eaten yet? The Reader in A Modest Proposal" focuses on two aspects of "A Modest Proposal": the voice of Swift and the voice of the Proposer. Phiddian stresses that a reader of the pamphlet must learn to distinguish between the satirical voice of Jonathan Swift and the apparent economic projections of the Proposer. He reminds readers that "there is a gap between the narrator's meaning and the text's, and that a moral-political argument is being carried out by means of parody".
While Swift's proposal is obviously not a serious economic proposal, George Wittkowsky, author of "Swift's Modest Proposal: The Biography of an Early Georgian Pamphlet", argues that to understand the piece fully it is important to understand the economics of Swift's time. Wittkowsky argues that not enough critics have taken the time to focus directly on the mercantilism and theories of labour in 18th century England. "[I]f one regards the "Modest Proposal" simply as a criticism of condition, about all one can say is that conditions were bad and that Swift's irony brilliantly underscored this fact".
At the start of a new industrial age in the 18th century, it was believed that "people are the riches of the nation", and there was a general faith in an economy that paid its workers low wages because high wages meant workers would work less. Furthermore, "in the mercantilist view no child was too young to go into industry". In those times, the "somewhat more humane attitudes of an earlier day had all but disappeared and the laborer had come to be regarded as a commodity".
Louis A. Landa offered a complementary analysis, noting that it would have been healthier for the Irish economy to utilise its human assets more appropriately by giving the people an opportunity to "become a source of wealth to the nation"; otherwise they "must turn to begging and thievery". This opportunity may have included paying the farmers more coin for their work, diversifying their professions, or even considering the enslavement of their people to lower coin usage and build up financial stock in Ireland. Landa wrote that "Swift is maintaining that the maxim—people are the riches of a nation—applies to Ireland only if Ireland is permitted slavery or cannibalism".
Landa presents Swift's "A Modest Proposal" as a critique of the popular and unjustified maxim of mercantilism in the 18th century that "people are the riches of a nation". Swift presents the dire state of Ireland and shows that mere population itself, in Ireland's case, did not always mean greater wealth and economy. The uncontrolled maxim fails to take into account that a person who does not produce in an economic or political way makes a country poorer, not richer. Swift also recognises the implications of this fact in making mercantilist philosophy a paradox: the wealth of a country is based on the poverty of the majority of its citizens. Swift however, Landa argues, is not merely criticising economic maxims but also addressing the fact that England was denying Irish citizens their natural rights and dehumanising them by viewing them as a mere commodity.
Swift's essay created a backlash within the community after its publication. The work was aimed at the aristocracy, and they responded in turn. Several members of society wrote to Swift regarding the work. Lord Bathurst's letter intimated that he certainly understood the message, and interpreted it as a work of comedy:
February 12, 1729–30:"I did immediately propose it to Lady Bathurst, as your advice, particularly for her last boy, which was born the plumpest, finest thing, that could be seen; but she fell in a passion, and bid me send you word, that she would not follow your direction, but that she would breed him up to be a parson, and he should live upon the fat of the land; or a lawyer, and then, instead of being eat himself, he should devour others. You know women in passion never mind what they say; but, as she is a very reasonable woman, I have almost brought her over now to your opinion; and having convinced her, that as matters stood, we could not possibly maintain all the nine, she does begin to think it reasonable the youngest should raise fortunes for the eldest: and upon that foot a man may perform family duty with more courage and zeal; for, if he should happen to get twins, the selling of one might provide for the other. Or if, by any accident, while his wife lies in with one child, he should get a second upon the body of another woman, he might dispose of the fattest of the two, and that would help to breed up the other.The more I think upon this scheme, the more reasonable it appears to me; and it ought by no means to be confined to Ireland; for, in all probability, we shall, in a very little time, be altogether as poor here as you are there. I believe, indeed, we shall carry it farther, and not confine our luxury only to the eating of children; for I happened to peep the other day into a large assembly [Parliament] not far from Westminster-hall, and I found them roasting a great fat fellow, [Walpole again] For my own part, I had not the least inclination to a slice of him; but, if I guessed right, four or five of the company had a devilish mind to be at him. Well, adieu, you begin now to wish I had ended, when I might have done it so conveniently".
"A Modest Proposal" is included in many literature courses as an example of early modern western satire. It also serves as an exceptional introduction to the concept and use of argumentative language, lending itself well to secondary and post-secondary essay courses. Outside of the realm of English studies, "A Modest Proposal" is included in many comparative and global literature and history courses, as well as those of numerous other disciplines in the arts, humanities, and even the social sciences.
The essay's approach has been copied many times. In his book "A Modest Proposal" (1984), the evangelical author Frank Schaeffer emulated Swift's work in a social conservative polemic against abortion and euthanasia, imagining a future dystopia that advocates recycling of aborted embryos, fetuses, and some disabled infants with compound intellectual, physical and physiological difficulties. (Such Baby Doe Rules cases were then a major concern of the US anti-abortion movement of the early 1980s, which viewed selective treatment of those infants as disability discrimination.) In his book "A Modest Proposal for America" (2013), statistician Howard Friedman opens with a satirical reflection of the extreme drive to fiscal stability by ultra-conservatives.
In the 1998 edition of "The Handmaid's Tale" by Margaret Atwood there is a quote from "A Modest Proposal" before the introduction.
"A Modest Video Game Proposal" is the title of an open letter sent by activist/former attorney Jack Thompson on 10 October 2005. He proposed that someone should "create, manufacture, distribute, and sell a video game" that would allow players to act out a scenario in which the game character kills video game developers.[1]
Hunter S. Thompson's "Fear and Loathing in America: The Brutal Odyssey of an Outlaw Journalist" includes a letter in which he uses Swift's approach in connection with the Vietnam War. Thompson writes a letter to a local Aspen newspaper informing them that, on Christmas Eve, he is going to use napalm to burn a number of dogs and hopefully any humans they find. The letter protests against the burning of Vietnamese people occurring overseas.
The 2012 film "Butcher Boys," written by Kim Henkel, is said to be loosely based on Jonathan Swift's "A Modest Proposal." The film's opening scene takes place in a restaurant named "J. Swift's".
On November 30, 2017, Jonathan Swift's 350th birthday, "The Washington Post" published a column entitled "Why Alabamians should consider eating Democrats' babies", by Alexandra Petri.
In July 2019, E. Jean Carroll published a book titled "", discussing problematic behaviour of male humans.
On October 3, 2019, a satirist spoke up at an event for Alexandria Ocasio-Cortez, claiming that a solution to the climate crisis was "we need to eat the babies". The individual also wore a T-shirt saying "Save The Planet, Eat The Children". This stunt was understood by many as a modern application of "A Modest Proposal".
Alkali metal
The alkali metals consist of the chemical elements lithium (Li), sodium (Na), potassium (K), rubidium (Rb), caesium (Cs), and francium (Fr). Together with hydrogen they constitute group 1, which lies in the s-block of the periodic table. All alkali metals have their outermost electron in an s-orbital: this shared electron configuration results in their having very similar characteristic properties. Indeed, the alkali metals provide the best example of group trends in properties in the periodic table, with elements exhibiting well-characterised homologous behaviour. This family of elements is also known as the lithium family after its leading element.
The alkali metals are all shiny, soft, highly reactive metals at standard temperature and pressure and readily lose their outermost electron to form cations with charge +1. They can all be cut easily with a knife due to their softness, exposing a shiny surface that tarnishes rapidly in air due to oxidation by atmospheric moisture and oxygen (and in the case of lithium, nitrogen). Because of their high reactivity, they must be stored under oil to prevent reaction with air, and are found naturally only in salts and never as the free elements. Caesium, the fifth alkali metal, is the most reactive of all the metals. All the alkali metals react with water, with the heavier alkali metals reacting more vigorously than the lighter ones.
All of the discovered alkali metals occur in nature as their compounds: in order of abundance, sodium is the most abundant, followed by potassium, lithium, rubidium, caesium, and finally francium, which is very rare due to its extremely high radioactivity; francium occurs only in minute traces in nature as an intermediate step in some obscure side branches of the natural decay chains. Experiments have been conducted to attempt the synthesis of ununennium (Uue), which is likely to be the next member of the group; none were successful. However, ununennium may not be an alkali metal due to relativistic effects, which are predicted to have a large influence on the chemical properties of superheavy elements; even if it does turn out to be an alkali metal, it is predicted to have some differences in physical and chemical properties from its lighter homologues.
Most alkali metals have many different applications. One of the best-known applications of the pure elements is the use of rubidium and caesium in atomic clocks, with caesium atomic clocks forming the basis of the definition of the second. A common application of the compounds of sodium is the sodium-vapour lamp, which emits light very efficiently. Table salt, or sodium chloride, has been used since antiquity. Lithium finds use as a psychiatric medication and as an anode in lithium batteries. Sodium and potassium are also essential elements, having major biological roles as electrolytes, and although the other alkali metals are not essential, they also have various effects on the body, both beneficial and harmful.
Sodium compounds have been known since ancient times; salt (sodium chloride) has been an important commodity in human activities, as testified by the English word "salary", referring to "salarium", money paid to Roman soldiers for the purchase of salt. While potash has been used since ancient times, it was not understood for most of its history to be a fundamentally different substance from sodium mineral salts. Georg Ernst Stahl obtained experimental evidence which led him to suggest the fundamental difference of sodium and potassium salts in 1702, and Henri-Louis Duhamel du Monceau was able to prove this difference in 1736. The exact chemical composition of potassium and sodium compounds, and the status as chemical element of potassium and sodium, was not known then, and thus Antoine Lavoisier did not include either alkali in his list of chemical elements in 1789.
Pure potassium was first isolated in 1807 in England by Humphry Davy, who derived it from caustic potash (KOH, potassium hydroxide) by the use of electrolysis of the molten salt with the newly invented voltaic pile. Previous attempts at electrolysis of the aqueous salt were unsuccessful due to potassium's extreme reactivity. Potassium was the first metal that was isolated by electrolysis. Later that same year, Davy reported extraction of sodium from the similar substance caustic soda (NaOH, lye) by a similar technique, demonstrating the elements, and thus the salts, to be different.
Petalite (LiAlSi4O10) was discovered in 1800 by the Brazilian chemist José Bonifácio de Andrada in a mine on the island of Utö, Sweden. However, it was not until 1817 that Johan August Arfwedson, then working in the laboratory of the chemist Jöns Jacob Berzelius, detected the presence of a new element while analysing petalite ore. This new element was noted by him to form compounds similar to those of sodium and potassium, though its carbonate and hydroxide were less soluble in water and more alkaline than those of the other alkali metals. Berzelius gave the unknown material the name "lithion"/"lithina", from the Greek word "λιθoς" (transliterated as "lithos", meaning "stone"), to reflect its discovery in a solid mineral, as opposed to potassium, which had been discovered in plant ashes, and sodium, which was known partly for its high abundance in animal blood. He named the metal inside the material "lithium". Lithium, sodium, and potassium were part of the discovery of periodicity, as they are among a series of triads of elements in the same group that were noted by Johann Wolfgang Döbereiner in 1829 as having similar properties.
Rubidium and caesium were the first elements to be discovered using the spectroscope, invented in 1859 by Robert Bunsen and Gustav Kirchhoff. The next year, they discovered caesium in the mineral water from Bad Dürkheim, Germany. Their discovery of rubidium came the following year in Heidelberg, Germany, where they found it in the mineral lepidolite. The names of rubidium and caesium come from the most prominent lines in their emission spectra: a bright red line for rubidium (from the Latin word "rubidus", meaning dark red or bright red), and a sky-blue line for caesium (derived from the Latin word "caesius", meaning sky-blue).
Around 1865 John Newlands produced a series of papers where he listed the elements in order of increasing atomic weight and similar physical and chemical properties that recurred at intervals of eight; he likened such periodicity to the octaves of music, where notes an octave apart have similar musical functions. His version put all the alkali metals then known (lithium to caesium), as well as copper, silver, and thallium (which show the +1 oxidation state characteristic of the alkali metals), together into a group. His table placed hydrogen with the halogens.
In 1869, Dmitri Mendeleev proposed his periodic table placing lithium at the top of a group with sodium, potassium, rubidium, caesium, and thallium. Two years later, Mendeleev revised his table, placing hydrogen in group 1 above lithium, and also moving thallium to the boron group. In this 1871 version, copper, silver, and gold were placed twice, once as part of group IB, and once as part of a "group VIII" encompassing today's groups 8 to 11. After the introduction of the 18-column table, the group IB elements were moved to their current position in the d-block, while alkali metals were left in "group IA". The group's name was changed to "group 1" in 1988. The trivial name "alkali metals" comes from the fact that the hydroxides of the group 1 elements are all strong alkalis when dissolved in water.
There were at least four erroneous and incomplete discoveries before Marguerite Perey of the Curie Institute in Paris, France discovered francium in 1939 by purifying a sample of actinium-227, which had been reported to have a decay energy of 220 keV. However, Perey noticed decay particles with an energy level below 80 keV. Perey thought this decay activity might have been caused by a previously unidentified decay product, one that was separated during purification, but emerged again out of the pure actinium-227. Various tests eliminated the possibility of the unknown element being thorium, radium, lead, bismuth, or thallium. The new product exhibited chemical properties of an alkali metal (such as coprecipitating with caesium salts), which led Perey to believe that it was element 87, caused by the alpha decay of actinium-227. Perey then attempted to determine the proportion of beta decay to alpha decay in actinium-227. Her first test put the alpha branching at 0.6%, a figure that she later revised to 1%.
The next element below francium (eka-francium) in the periodic table would be ununennium (Uue), element 119. The synthesis of ununennium was first attempted in 1985 by bombarding a target of einsteinium-254 with calcium-48 ions at the superHILAC accelerator at Berkeley, California. No atoms were identified, leading to a limiting cross-section of 300 nb.
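The 1985 attempt described above corresponds to the fusion reaction below; the compound nucleus follows from simple nucleon bookkeeping (einsteinium has atomic number 99, calcium 20):

```latex
{}^{254}_{99}\mathrm{Es} + {}^{48}_{20}\mathrm{Ca} \longrightarrow {}^{302}_{119}\mathrm{Uue}^{*} \longrightarrow \text{no atoms observed}
```

The asterisk marks an excited compound nucleus, which would need to survive de-excitation (by neutron emission rather than fission) for any atoms to be detected.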
It is highly unlikely that this reaction will be able to create any atoms of ununennium in the near future, because of the extreme difficulty of making enough einsteinium-254 for a target large enough to raise the sensitivity of the experiment to the required level. Einsteinium-254 is favoured for the production of ultraheavy elements because of its large mass, relatively long half-life of 270 days, and availability in significant amounts of several micrograms; however, it has not been found in nature and has only been produced in laboratories, in quantities smaller than those needed for effective synthesis of superheavy elements. Nevertheless, given that ununennium is only the first period 8 element on the extended periodic table, it may well be discovered in the near future through other reactions, and indeed an attempt to synthesise it is currently ongoing in Japan. Currently, none of the period 8 elements have been discovered yet, and it is also possible, due to drip instabilities, that only the lower period 8 elements, up to around element 128, are physically possible. No attempts at synthesis have been made for any heavier alkali metals: due to their extremely high atomic numbers, they would require new, more powerful methods and technology to make.
The Oddo–Harkins rule holds that elements with even atomic numbers are more common than those with odd atomic numbers, with the exception of hydrogen. This rule argues that elements with odd atomic numbers have one unpaired proton and are more likely to capture another, thus increasing their atomic number. In elements with even atomic numbers, protons are paired, with each member of the pair offsetting the spin of the other, enhancing stability. All the alkali metals have odd atomic numbers and they are not as common as the elements with even atomic numbers adjacent to them (the noble gases and the alkaline earth metals) in the Solar System. The heavier alkali metals are also less abundant than the lighter ones as the alkali metals from rubidium onward can only be synthesised in supernovae and not in stellar nucleosynthesis. Lithium is also much less abundant than sodium and potassium as it is poorly synthesised in both Big Bang nucleosynthesis and in stars: the Big Bang could only produce trace quantities of lithium, beryllium and boron due to the absence of a stable nucleus with 5 or 8 nucleons, and stellar nucleosynthesis could only pass this bottleneck by the triple-alpha process, fusing three helium nuclei to form carbon, and skipping over those three elements.
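The parity pattern described above can be sketched numerically. The following is an illustrative check only, using rounded solar-system abundances on the conventional logarithmic scale A(X) = log10(N_X/N_H) + 12; the figures are approximate literature values assumed for the purpose of the demonstration, not authoritative data:

```python
# Illustration of the Oddo-Harkins rule for the alkali metals.
# Each odd-Z alkali metal should be rarer than both of its even-Z
# neighbours: the adjacent noble gas and alkaline earth metal.
# Abundances are rounded logarithmic values, A(X) = log10(N_X/N_H) + 12.
abundance = {
    # alkali metal: (its A, noble-gas neighbour's A, alkaline-earth neighbour's A)
    "Na": (6.2, 7.9, 7.6),   # Na vs Ne, Mg
    "K":  (5.0, 6.4, 6.3),   # K  vs Ar, Ca
    "Rb": (2.4, 3.2, 2.9),   # Rb vs Kr, Sr
    "Cs": (1.1, 2.2, 2.2),   # Cs vs Xe, Ba
}

# The rule holds if every alkali metal is less abundant than both neighbours.
rule_holds = all(metal < gas and metal < earth
                 for metal, gas, earth in abundance.values())
```

On these figures every alkali metal is indeed rarer than both of its even-numbered neighbours, as the rule predicts.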
The Earth formed from the same cloud of matter that formed the Sun, but the planets acquired different compositions during the formation and evolution of the solar system. In turn, the natural history of the Earth caused parts of this planet to have differing concentrations of the elements. The mass of the Earth is approximately 5.98 × 10^24 kg. It is composed mostly of iron (32.1%), oxygen (30.1%), silicon (15.1%), magnesium (13.9%), sulfur (2.9%), nickel (1.8%), calcium (1.5%), and aluminium (1.4%); with the remaining 1.2% consisting of trace amounts of other elements. Due to planetary differentiation, the core region is believed to be primarily composed of iron (88.8%), with smaller amounts of nickel (5.8%), sulfur (4.5%), and less than 1% trace elements.
The alkali metals, due to their high reactivity, do not occur naturally in pure form in nature. They are lithophiles and therefore remain close to the Earth's surface because they combine readily with oxygen and so associate strongly with silica, forming relatively low-density minerals that do not sink down into the Earth's core. Potassium, rubidium and caesium are also incompatible elements due to their large ionic radii.
Sodium and potassium are very abundant on Earth, both being among the ten most common elements in Earth's crust; sodium makes up approximately 2.6% of the Earth's crust measured by weight, making it the sixth most abundant element overall and the most abundant alkali metal. Potassium makes up approximately 1.5% of the Earth's crust and is the seventh most abundant element. Sodium is found in many different minerals, of which the most common is ordinary salt (sodium chloride), which occurs in vast quantities dissolved in seawater. Other solid deposits include halite, amphibole, cryolite, nitratine, and zeolite. Many of these solid deposits occur as a result of ancient seas evaporating, which still occurs now in places such as Utah's Great Salt Lake and the Dead Sea. Despite their near-equal abundance in Earth's crust, sodium is far more common than potassium in the ocean, both because potassium's larger size makes its salts less soluble, and because potassium is bound by silicates in soil and what potassium leaches is absorbed far more readily by plant life than sodium.
Despite its chemical similarity, lithium typically does not occur together with sodium or potassium due to its smaller size. Due to its relatively low reactivity, it can be found in seawater in large amounts; seawater is estimated to contain approximately 0.14 to 0.25 parts per million (ppm) of lithium, or 25 micromolar. Its diagonal relationship with magnesium often allows it to replace magnesium in ferromagnesian minerals, where its crustal concentration is about 18 ppm, comparable to that of gallium and niobium. Commercially, the most important lithium mineral is spodumene, which occurs in large deposits worldwide.
Rubidium is approximately as abundant as zinc and more abundant than copper. It occurs naturally in the minerals leucite, pollucite, carnallite, zinnwaldite, and lepidolite, although none of these minerals contains rubidium alone, without other alkali metals. Caesium is more abundant than some commonly known elements, such as antimony, cadmium, tin, and tungsten, but is much less abundant than rubidium.
Francium-223, the only naturally occurring isotope of francium, is the product of the alpha decay of actinium-227 and can be found in trace amounts in uranium minerals. In a given sample of uranium, there is estimated to be only one francium atom for every 10^18 uranium atoms. It has been calculated that there is at most 30 g of francium in the Earth's crust at any time, due to its extremely short half-life of 22 minutes.
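As a quick order-of-magnitude check of the figure above, the roughly 30 g of francium present at any instant can be converted to a number of atoms with simple mole arithmetic (taking the molar mass of francium-223 as about 223 g/mol):

```python
# How many atoms are in the ~30 g of francium present in the crust?
N_A = 6.022e23            # Avogadro's number, atoms per mole
molar_mass_fr223 = 223.0  # g/mol for francium-223

moles = 30.0 / molar_mass_fr223   # about 0.135 mol
atoms = moles * N_A               # about 8.1e22 atoms
```

Even the "at most 30 g" in the entire crust thus amounts to only about 8 × 10^22 atoms, a vanishingly small quantity on a planetary scale.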
The physical and chemical properties of the alkali metals can be readily explained by their having an ns1 valence electron configuration, which results in weak metallic bonding. Hence, all the alkali metals are soft and have low densities, melting and boiling points, as well as heats of sublimation, vaporisation, and dissociation. They all crystallise in the body-centered cubic crystal structure, and have distinctive flame colours because their outer s electron is very easily excited. The ns1 configuration also results in the alkali metals having very large atomic and ionic radii, as well as very high thermal and electrical conductivity. Their chemistry is dominated by the loss of their lone valence electron in the outermost s-orbital to form the +1 oxidation state, due to the ease of ionising this electron and the very high second ionisation energy. Most of the chemistry has been observed only for the first five members of the group. The chemistry of francium is not well established due to its extreme radioactivity; thus, the presentation of its properties here is limited. What little is known about francium shows that it is very close in behaviour to caesium, as expected. The physical properties of francium are even sketchier because the bulk element has never been observed; hence any data that may be found in the literature are certainly speculative extrapolations.
The alkali metals are more similar to each other than the elements in any other group are to each other. Indeed, the similarity is so great that it is quite difficult to separate potassium, rubidium, and caesium, due to their similar ionic radii; lithium and sodium are more distinct. For instance, when moving down the table, all known alkali metals show increasing atomic radius, decreasing electronegativity, increasing reactivity, and decreasing melting and boiling points as well as heats of fusion and vaporisation. In general, their densities increase when moving down the table, with the exception that potassium is less dense than sodium. One of the very few properties of the alkali metals that does not display a very smooth trend is their reduction potentials: lithium's value is anomalous, being more negative than the others. This is because the Li+ ion has a very high hydration energy in the gas phase: though the lithium ion disrupts the structure of water significantly, causing a higher change in entropy, this high hydration energy is enough to make the reduction potentials indicate it as being the most electropositive alkali metal, despite the difficulty of ionising it in the gas phase.
The stable alkali metals are all silver-coloured metals except for caesium, which has a pale golden tint: it is one of only three metals that are clearly coloured (the other two being copper and gold). Additionally, the heavy alkaline earth metals calcium, strontium, and barium, as well as the divalent lanthanides europium and ytterbium, are pale yellow, though the colour is much less prominent than it is for caesium. Their lustre tarnishes rapidly in air due to oxidation. As noted above, all the alkali metals crystallise in the body-centred cubic structure and give distinctive flame colours because their outer s electron is very easily excited; these flame test colours are the most common way of identifying them, since all their salts with common ions are soluble.
All the alkali metals are highly reactive and are never found in elemental forms in nature. Because of this, they are usually stored in mineral oil or kerosene (paraffin oil). They react aggressively with the halogens to form the alkali metal halides, which are white ionic crystalline compounds that are all soluble in water except lithium fluoride (LiF). The alkali metals also react with water to form strongly alkaline hydroxides and thus should be handled with great care. The heavier alkali metals react more vigorously than the lighter ones; for example, when dropped into water, caesium produces a larger explosion than potassium if the same number of moles of each metal is used. The alkali metals have the lowest first ionisation energies in their respective periods of the periodic table because of their low effective nuclear charge and the ability to attain a noble gas configuration by losing just one electron. The alkali metals react not only with water but also with proton donors such as alcohols and phenols, gaseous ammonia, and alkynes, the last demonstrating the phenomenal degree of their reactivity. Their great power as reducing agents makes them very useful in liberating other metals from their oxides or halides.
The second ionisation energy of all of the alkali metals is very high, as the second-most loosely held electron is part of a completely filled shell that is also closer to the nucleus; thus, they almost always lose a single electron, forming cations. The alkalides are an exception: they are unstable compounds which contain alkali metals in a −1 oxidation state, which is very unusual as before the discovery of the alkalides, the alkali metals were not expected to be able to form anions and were thought to be able to appear in salts only as cations. The alkalide anions have filled s-subshells, which gives them enough stability to exist. All the stable alkali metals except lithium are known to be able to form alkalides, and the alkalides have much theoretical interest due to their unusual stoichiometry and low ionisation potentials. Alkalides are chemically similar to the electrides, which are salts with trapped electrons acting as anions. A particularly striking example of an alkalide is "inverse sodium hydride", H+Na− (both ions being complexed), as opposed to the usual sodium hydride, Na+H−: it is unstable in isolation, due to its high energy resulting from the displacement of two electrons from hydrogen to sodium, although several derivatives are predicted to be metastable or stable.
In aqueous solution, the alkali metal ions form aqua ions of the formula [M(H2O)"n"]+, where "n" is the solvation number. Their coordination numbers and shapes agree well with those expected from their ionic radii. In aqueous solution the water molecules directly attached to the metal ion are said to belong to the first coordination sphere, also known as the first, or primary, solvation shell. The bond between a water molecule and the metal ion is a dative covalent bond, with the oxygen atom donating both electrons to the bond. Each coordinated water molecule may be attached by hydrogen bonds to other water molecules. The latter are said to reside in the second coordination sphere. However, for the alkali metal cations, the second coordination sphere is not well-defined as the +1 charge on the cation is not high enough to polarise the water molecules in the primary solvation shell enough for them to form strong hydrogen bonds with those in the second coordination sphere, producing a more stable entity. The solvation number for Li+ has been experimentally determined to be 4, forming the tetrahedral [Li(H2O)4]+: while solvation numbers of 3 to 6 have been found for lithium aqua ions, solvation numbers less than 4 may be the result of the formation of contact ion pairs, and the higher solvation numbers may be interpreted in terms of water molecules that approach [Li(H2O)4]+ through a face of the tetrahedron, though molecular dynamic simulations may indicate the existence of an octahedral hexaaqua ion. There are also probably six water molecules in the primary solvation sphere of the sodium ion, forming the octahedral [Na(H2O)6]+ ion. While it was previously thought that the heavier alkali metals also formed octahedral hexaaqua ions, it has since been found that potassium and rubidium probably form the [K(H2O)8]+ and [Rb(H2O)8]+ ions, which have the square antiprismatic structure, and that caesium forms the 12-coordinate [Cs(H2O)12]+ ion.
The chemistry of lithium shows several differences from that of the rest of the group as the small Li+ cation polarises anions and gives its compounds a more covalent character. Lithium and magnesium have a diagonal relationship due to their similar atomic radii, so that they show some similarities. For example, lithium forms a stable nitride, a property common among all the alkaline earth metals (magnesium's group) but unique among the alkali metals. In addition, among their respective groups, only lithium and magnesium form organometallic compounds with significant covalent character (e.g. LiMe and MgMe2).
Lithium fluoride is the only alkali metal halide that is poorly soluble in water, and lithium hydroxide is the only alkali metal hydroxide that is not deliquescent. Conversely, lithium perchlorate and other lithium salts with large anions that cannot be polarised are much more stable than the analogous compounds of the other alkali metals, probably because Li+ has a high solvation energy. This effect also means that most simple lithium salts are commonly encountered in hydrated form, because the anhydrous forms are extremely hygroscopic: this allows salts like lithium chloride and lithium bromide to be used in dehumidifiers and air-conditioners.
Francium is also predicted to show some differences due to its high atomic weight, causing its electrons to travel at considerable fractions of the speed of light and thus making relativistic effects more prominent. In contrast to the trend of decreasing electronegativities and ionisation energies of the alkali metals, francium's electronegativity and ionisation energy are predicted to be higher than caesium's due to the relativistic stabilisation of the 7s electrons; also, its atomic radius is expected to be abnormally low. Thus, contrary to expectation, caesium is the most reactive of the alkali metals, not francium. All known physical properties of francium also deviate from the clear trends going from lithium to caesium, such as the first ionisation energy, electron affinity, and anion polarisability, though due to the paucity of known data about francium many sources give extrapolated values, ignoring that relativistic effects make the trend from lithium to caesium become inapplicable at francium. Some of the few properties of francium that have been predicted taking relativity into account are the electron affinity (47.2 kJ/mol) and the enthalpy of dissociation of the Fr2 molecule (42.1 kJ/mol). The CsFr molecule is polarised as Cs+Fr−, showing that the 7s subshell of francium is much more strongly affected by relativistic effects than the 6s subshell of caesium. Additionally, francium superoxide (FrO2) is expected to have significant covalent character, unlike the other alkali metal superoxides, because of bonding contributions from the 6p electrons of francium.
All the alkali metals have odd atomic numbers; hence, their isotopes must be either odd–odd (both proton and neutron number are odd) or odd–even (proton number is odd, but neutron number is even). Odd–odd nuclei have even mass numbers, whereas odd–even nuclei have odd mass numbers. Odd–odd primordial nuclides are rare because most odd–odd nuclei are highly unstable with respect to beta decay, because the decay products are even–even, and are therefore more strongly bound, due to nuclear pairing effects.
Due to the great rarity of odd–odd nuclei, almost all the primordial isotopes of the alkali metals are odd–even (the exceptions being the light stable isotope lithium-6 and the long-lived radioisotope potassium-40). For a given odd mass number, there can be only a single beta-stable nuclide, since there is not a difference in binding energy between even–odd and odd–even comparable to that between even–even and odd–odd, leaving other nuclides of the same mass number (isobars) free to beta decay toward the lowest-mass nuclide. An effect of the instability of an odd number of either type of nucleons is that odd-numbered elements, such as the alkali metals, tend to have fewer stable isotopes than even-numbered elements. Of the 26 monoisotopic elements that have only a single stable isotope, all but one have an odd atomic number and all but one also have an even number of neutrons. Beryllium is the single exception to both rules, due to its low atomic number.
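The parity bookkeeping above is easy to verify mechanically. The sketch below classifies the primordial alkali-metal isotopes named in the text by the parity of their proton and neutron numbers; since every alkali metal has odd Z, each isotope must be either odd–odd (even mass number) or odd–even (odd mass number):

```python
# Classify a nuclide by the parity of its proton (Z) and neutron (N) counts.
def parity(z, a):
    n = a - z  # neutron number
    if z % 2 == 1:
        return "odd-odd" if n % 2 == 1 else "odd-even"
    return "even-odd" if n % 2 == 1 else "even-even"

# Primordial isotopes of the alkali metals: name -> (Z, mass number A)
primordial = {
    "Li-6": (3, 6),   "Li-7": (3, 7),   "Na-23": (11, 23),
    "K-39": (19, 39), "K-40": (19, 40), "K-41": (19, 41),
    "Rb-85": (37, 85), "Rb-87": (37, 87), "Cs-133": (55, 133),
}

# The only odd-odd exceptions should be lithium-6 and potassium-40.
odd_odd = sorted(name for name, (z, a) in primordial.items()
                 if parity(z, a) == "odd-odd")
```

As expected, only lithium-6 and potassium-40 come out odd–odd, and both have even mass numbers; every other primordial alkali-metal isotope is odd–even with an odd mass number.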
All of the alkali metals except lithium and caesium have at least one naturally occurring radioisotope: sodium-22 and sodium-24 are trace radioisotopes produced cosmogenically, potassium-40 and rubidium-87 have very long half-lives and thus occur naturally, and all isotopes of francium are radioactive. Caesium was also thought to be radioactive in the early 20th century, although it has no naturally occurring radioisotopes. (Francium had not been discovered yet at that time.) The natural long-lived radioisotope of potassium, potassium-40, makes up about 0.012% of natural potassium, and thus natural potassium is weakly radioactive. This natural radioactivity became a basis for a mistaken claim of the discovery for element 87 (the next alkali metal after caesium) in 1925. Natural rubidium is similarly slightly radioactive, with 27.83% being the long-lived radioisotope rubidium-87.
Caesium-137, with a half-life of 30.17 years, is one of the two principal medium-lived fission products, along with strontium-90, which are responsible for most of the radioactivity of spent nuclear fuel after several years of cooling, up to several hundred years after use. It constitutes most of the radioactivity still left from the Chernobyl accident. Caesium-137 undergoes high-energy beta decay and eventually becomes stable barium-137. It is a strong emitter of gamma radiation. Caesium-137 has a very low rate of neutron capture and cannot be feasibly disposed of in this way, but must be allowed to decay. Caesium-137 has been used as a tracer in hydrologic studies, analogous to the use of tritium. Small amounts of caesium-134 and caesium-137 were released into the environment during nearly all nuclear weapon tests and some nuclear accidents, most notably the Goiânia accident and the Chernobyl disaster. As of 2005, caesium-137 is the principal source of radiation in the zone of alienation around the Chernobyl nuclear power plant. Its chemical properties as one of the alkali metals make it one of the most problematic of the short-to-medium-lifetime fission products because it easily moves and spreads in nature due to the high water solubility of its salts, and is taken up by the body, which mistakes it for its essential congeners sodium and potassium.
The atomic radii of the alkali metals increase going down the group. Because of the shielding effect, when an atom has more than one electron shell, each electron feels electric repulsion from the other electrons as well as electric attraction from the nucleus. In the alkali metals, the outermost electron only feels a net charge of +1, as some of the nuclear charge (which is equal to the atomic number) is cancelled by the inner electrons; the number of inner electrons of an alkali metal is always one less than the nuclear charge. Therefore, the only factor which affects the atomic radius of the alkali metals is the number of electron shells. Since this number increases down the group, the atomic radius must also increase down the group.
The ionic radii of the alkali metals are much smaller than their atomic radii. This is because the outermost electron of the alkali metals is in a different electron shell than the inner electrons, and thus when it is removed the resulting atom has one fewer electron shell and is smaller. Additionally, the effective nuclear charge has increased, and thus the electrons are attracted more strongly towards the nucleus and the ionic radius decreases.
The first ionisation energy of an element or molecule is the energy required to move the most loosely held electron from one mole of gaseous atoms of the element or molecules to form one mole of gaseous ions with electric charge +1. The factors affecting the first ionisation energy are the nuclear charge, the amount of shielding by the inner electrons and the distance of the most loosely held electron from the nucleus, which is always an outer electron in main group elements. The first two factors change the effective nuclear charge the most loosely held electron feels. Since the outermost electron of alkali metals always feels the same effective nuclear charge (+1), the only factor which affects the first ionisation energy is the distance from the outermost electron to the nucleus. Since this distance increases down the group, the outermost electron feels less attraction from the nucleus and thus the first ionisation energy decreases. (This trend is broken in francium due to the relativistic stabilisation and contraction of the 7s orbital, bringing francium's valence electron closer to the nucleus than would be expected from non-relativistic calculations. This makes francium's outermost electron feel more attraction from the nucleus, increasing its first ionisation energy slightly beyond that of caesium.)
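The trend, including the francium anomaly, can be made concrete with numbers. The values below for lithium to caesium are standard experimental first ionisation energies in kJ/mol; the francium figure of roughly 393 kJ/mol is an approximate value reflecting the relativistic stabilisation discussed above:

```python
# First ionisation energies of the alkali metals, kJ/mol (approximate
# literature values; the Fr value reflects relativistic stabilisation).
first_ie = {
    "Li": 520.2, "Na": 495.8, "K": 418.8,
    "Rb": 403.0, "Cs": 375.7, "Fr": 392.8,
}

# The trend decreases smoothly from lithium to caesium...
trend = [first_ie[el] for el in ("Li", "Na", "K", "Rb", "Cs")]
decreasing = all(a > b for a, b in zip(trend, trend[1:]))

# ...but francium breaks it, sitting above caesium.
fr_anomaly = first_ie["Fr"] > first_ie["Cs"]
```

Both checks come out true: the ionisation energy falls monotonically down the group until francium, which is higher than caesium, exactly as the relativistic argument predicts.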
The second ionisation energy of the alkali metals is much higher than the first as the second-most loosely held electron is part of a fully filled electron shell and is thus difficult to remove.
The reactivities of the alkali metals increase going down the group. This is the result of a combination of two factors: the first ionisation energies and atomisation energies of the alkali metals. Because the first ionisation energy of the alkali metals decreases down the group, it is easier for the outermost electron to be removed from the atom and participate in chemical reactions, thus increasing reactivity down the group. The atomisation energy measures the strength of the metallic bond of an element, which falls down the group as the atoms increase in radius and thus the metallic bond must increase in length, making the delocalised electrons further away from the attraction of the nuclei of the heavier alkali metals. Adding the atomisation and first ionisation energies gives a quantity closely related to (but not equal to) the activation energy of the reaction of an alkali metal with another substance. This quantity decreases going down the group, and so does the activation energy; thus, chemical reactions can occur faster and the reactivity increases down the group.
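The combined quantity described above can be tabulated. The sketch below adds approximate atomisation enthalpies to the standard first ionisation energies (both in kJ/mol; the atomisation figures are rounded literature values assumed for illustration) and confirms that the sum, a rough proxy for the activation energy, falls down the group:

```python
# Rough reactivity proxy: atomisation enthalpy + first ionisation energy,
# approximating the cost of freeing a gaseous atom from the solid metal
# and then removing its valence electron. All values in kJ/mol, rounded.
atomisation = {"Li": 159, "Na": 107, "K": 89, "Rb": 81, "Cs": 76}
first_ie = {"Li": 520.2, "Na": 495.8, "K": 418.8, "Rb": 403.0, "Cs": 375.7}

cost = {el: atomisation[el] + first_ie[el] for el in atomisation}

# The proxy should decrease monotonically down the group,
# mirroring the increase in reactivity.
order = [cost[el] for el in ("Li", "Na", "K", "Rb", "Cs")]
```

The sum indeed drops steadily from lithium (about 679 kJ/mol) to caesium (about 452 kJ/mol), consistent with reactivity increasing down the group.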
Electronegativity is a chemical property that describes the tendency of an atom or a functional group to attract electrons (or electron density) towards itself. If the bond between sodium and chlorine in sodium chloride were covalent, the pair of shared electrons would be attracted to the chlorine because the effective nuclear charge on the outer electrons is +7 in chlorine but is only +1 in sodium. The electron pair is attracted so close to the chlorine atom that it is practically transferred to the chlorine atom (an ionic bond). However, if the sodium atom were replaced by a lithium atom, the electrons would not be attracted as close to the chlorine atom as before because the lithium atom is smaller, making the electron pair more strongly attracted to the closer effective nuclear charge from lithium. Hence, the larger alkali metal atoms (further down the group) will be less electronegative as the bonding pair is less strongly attracted towards them. As mentioned previously, francium is expected to be an exception.
Because of the higher electronegativity of lithium, some of its compounds have a more covalent character. For example, lithium iodide (LiI) will dissolve in organic solvents, a property of most covalent compounds. Lithium fluoride (LiF) is the only alkali metal halide that is only sparingly soluble in water, and lithium hydroxide (LiOH) is the only alkali metal hydroxide that is not deliquescent.
The melting point of a substance is the point where it changes state from solid to liquid while the boiling point of a substance (in liquid state) is the point where the vapour pressure of the liquid equals the environmental pressure surrounding the liquid and all the liquid changes state to gas. As a metal is heated to its melting point, the metallic bonds keeping the atoms in place weaken so that the atoms can move around, and the metallic bonds eventually break completely at the metal's boiling point. Therefore, the falling melting and boiling points of the alkali metals indicate that the strength of the metallic bonds of the alkali metals decreases down the group. This is because metal atoms are held together by the electromagnetic attraction from the positive ions to the delocalised electrons. As the atoms increase in size going down the group (because their atomic radius increases), the nuclei of the ions move further away from the delocalised electrons and hence the metallic bond becomes weaker so that the metal can more easily melt and boil, thus lowering the melting and boiling points. (The increased nuclear charge is not a relevant factor due to the shielding effect.)
The alkali metals all have the same crystal structure (body-centred cubic) and thus the only relevant factors are the number of atoms that can fit into a certain volume and the mass of one of the atoms, since density is defined as mass per unit volume. The first factor depends on the volume of the atom and thus the atomic radius, which increases going down the group; thus, the volume of an alkali metal atom increases going down the group. The mass of an alkali metal atom also increases going down the group. Thus, the trend for the densities of the alkali metals depends on their atomic weights and atomic radii; if figures for these two factors are known, the ratios between the densities of the alkali metals can then be calculated. The resultant trend is that the densities of the alkali metals increase down the table, with an exception at potassium. Due to having the lowest atomic weight and the largest atomic radius of all the elements in their periods, the alkali metals are the least dense metals in the periodic table. Lithium, sodium, and potassium are the only three metals in the periodic table that are less dense than water: in fact, lithium is the least dense known solid at room temperature.
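The calculation described above can be carried out explicitly. The sketch below estimates each metal's density from its atomic weight and metallic radius alone, using the body-centred cubic geometry (two atoms per cell, with the cell edge a = 4r/√3 because atoms touch along the body diagonal); the radii and atomic weights are standard approximate values assumed for illustration:

```python
# Estimate alkali metal densities (g/cm^3) from the bcc structure.
from math import sqrt

N_A = 6.022e23  # Avogadro's number, mol^-1

# element: (atomic weight in g/mol, metallic radius in pm), approximate values
data = {
    "Li": (6.94, 152),
    "Na": (22.99, 186),
    "K":  (39.10, 227),
    "Rb": (85.47, 248),
    "Cs": (132.91, 265),
}

def bcc_density(mass, radius_pm):
    a = 4 * (radius_pm * 1e-10) / sqrt(3)  # cell edge in cm (1 pm = 1e-10 cm)
    return 2 * mass / (N_A * a ** 3)       # 2 atoms per bcc unit cell

densities = {el: bcc_density(m, r) for el, (m, r) in data.items()}
```

The estimates reproduce both the overall increase down the group and the one anomaly: potassium comes out less dense than sodium, because its radius grows proportionally faster than its mass between sodium and potassium. The lithium estimate, about 0.53 g/cm³, also matches its status as the least dense solid element.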
The alkali metals form complete series of compounds with all usually encountered anions, which well illustrate group trends. These compounds can be described as involving the alkali metals losing electrons to acceptor species and forming monopositive ions. This description is most accurate for alkali halides and becomes less and less accurate as cationic and anionic charge increase, and as the anion becomes larger and more polarisable. For instance, ionic bonding gives way to metallic bonding along the series NaCl, Na2O, Na2S, Na3P, Na3As, Na3Sb, Na3Bi, Na.
All the alkali metals react vigorously or explosively with cold water, producing an aqueous solution of a strongly basic alkali metal hydroxide and releasing hydrogen gas. This reaction becomes more vigorous going down the group: lithium reacts steadily with effervescence, but sodium and potassium can ignite, and rubidium and caesium sink in water and generate hydrogen gas so rapidly that shock waves form in the water that may shatter glass containers. When an alkali metal is dropped into water, the resulting explosion has two separate stages. First, the metal reacts with the water, breaking the hydrogen bonds in the water and producing hydrogen gas; this takes place faster for the more reactive heavier alkali metals. Second, the heat generated by the first part of the reaction often ignites the hydrogen gas, causing it to burn explosively in the surrounding air. It is this secondary hydrogen explosion, not the initial reaction of the metal with water (which happens mostly underwater), that produces the visible flame above the body of water. The alkali metal hydroxides are the most basic known hydroxides.
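The stoichiometry of this reaction, 2 M + 2 H2O → 2 MOH + H2, means one mole of hydrogen gas is released for every two moles of metal. A minimal sketch of that arithmetic, using standard approximate atomic weights:

```python
# Moles of H2 released when a given mass of alkali metal reacts with water,
# per the balanced equation 2 M + 2 H2O -> 2 MOH + H2.
molar_mass = {"Li": 6.94, "Na": 22.99, "K": 39.10, "Rb": 85.47, "Cs": 132.91}

def h2_moles(metal, grams):
    """Moles of hydrogen gas from complete reaction of `grams` of the metal."""
    return grams / molar_mass[metal] / 2  # 2 mol metal per 1 mol H2

# One mole of sodium (about 23 g) liberates half a mole of hydrogen gas.
na_example = h2_moles("Na", 22.99)
```

This also makes the text's mole-for-mole comparison concrete: equal molar amounts of any two alkali metals release the same amount of hydrogen, so the larger explosion from caesium reflects the greater speed and heat of its reaction, not extra gas.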
Recent research has suggested that the explosive behaviour of alkali metals in water is driven by a Coulomb explosion rather than solely by the rapid generation of hydrogen itself. All alkali metals melt as part of the reaction with water. Water molecules ionise the bare metallic surface of the liquid metal, leaving a positively charged metal surface and negatively charged water ions. The attraction between the charged metal and water ions rapidly increases the surface area, causing an exponential increase in ionisation. When the repulsive forces within the liquid metal surface exceed the forces of the surface tension, it vigorously explodes.
The hydroxides themselves are the most basic hydroxides known, reacting with acids to give salts and with alcohols to give oligomeric alkoxides. They easily react with carbon dioxide to form carbonates or bicarbonates, or with hydrogen sulfide to form sulfides or bisulfides, and may be used to separate thiols from petroleum. They react with amphoteric oxides: for example, the oxides of aluminium, zinc, tin, and lead react with the alkali metal hydroxides to give aluminates, zincates, stannates, and plumbates. Silicon dioxide is acidic, and thus the alkali metal hydroxides can also attack silicate glass.
The alkali metals form many intermetallic compounds with each other and the elements from groups 2 to 13 in the periodic table of varying stoichiometries, such as the sodium amalgams with mercury, including Na5Hg8 and Na3Hg. Some of these have ionic characteristics: taking the alloys with gold, the most electronegative of metals, as an example, NaAu and KAu are metallic, but RbAu and CsAu are semiconductors. NaK is an alloy of sodium and potassium that is very useful because it is liquid at room temperature, although precautions must be taken due to its extreme reactivity towards water and air. The eutectic mixture melts at −12.6 °C. An alloy of 41% caesium, 47% sodium, and 12% potassium has the lowest known melting point of any metal or alloy, −78 °C.
The intermetallic compounds of the alkali metals with the heavier group 13 elements (aluminium, gallium, indium, and thallium), such as NaTl, are poor conductors or semiconductors, unlike the normal alloys with the preceding elements, implying that the alkali metal involved has lost an electron to the Zintl anions involved. Nevertheless, while the elements in group 14 and beyond tend to form discrete anionic clusters, group 13 elements tend to form polymeric ions with the alkali metal cations located in the interstices of the giant ionic lattice. For example, NaTl consists of a polymeric anion (—Tl−—)n with a covalent diamond cubic structure, with Na+ ions located within the anionic lattice. The larger alkali metals cannot fit similarly into an anionic lattice and tend to force the heavier group 13 elements to form anionic clusters.
Boron is a special case, being the only nonmetal in group 13. The alkali metal borides tend to be boron-rich, involving appreciable boron–boron bonding involving deltahedral structures, and are thermally unstable due to the alkali metals having a very high vapour pressure at elevated temperatures. This makes direct synthesis problematic because the alkali metals do not react with boron below 700 °C, and thus this must be accomplished in sealed containers with the alkali metal in excess. Furthermore, exceptionally in this group, reactivity with boron decreases down the group: lithium reacts completely at 700 °C, but sodium at 900 °C and potassium not until 1200 °C, and the reaction is instantaneous for lithium but takes hours for potassium. Rubidium and caesium borides have not even been characterised. Various phases are known, such as LiB10, NaB6, NaB15, and KB6. Under high pressure the boron–boron bonding in the lithium borides changes from following Wade's rules to forming Zintl anions like the rest of group 13.
Lithium and sodium react with carbon to form acetylides, Li2C2 and Na2C2, which can also be obtained by reaction of the metal with acetylene. Potassium, rubidium, and caesium react with graphite; their atoms are intercalated between the hexagonal graphite layers, forming graphite intercalation compounds of formulae MC60 (dark grey, almost black), MC48 (dark grey, almost black), MC36 (blue), MC24 (steel blue), and MC8 (bronze) (M = K, Rb, or Cs). These compounds are over 200 times more electrically conductive than pure graphite, suggesting that the valence electron of the alkali metal is transferred to the graphite layers. Upon heating of KC8, the elimination of potassium atoms results in the conversion in sequence to KC24, KC36, KC48 and finally KC60. KC8 is a very strong reducing agent, is pyrophoric, and explodes on contact with water. While the larger alkali metals (K, Rb, and Cs) initially form MC8, the smaller ones initially form MC6, and they require reaction of the metals with graphite at high temperatures around 500 °C to form. Apart from this, the alkali metals are such strong reducing agents that they can even reduce buckminsterfullerene to produce solid fullerides MnC60; sodium, potassium, rubidium, and caesium can form fullerides where n = 2, 3, 4, or 6, and rubidium and caesium additionally can achieve n = 1.
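The staging sequence described above follows a simple composition rule. As a purely illustrative sketch (not from the source), assuming the usual convention that KC8 is stage 1 and every higher stage n ≥ 2 has composition KC12n:

```python
# Sketch of the staging pattern in potassium graphite intercalation
# compounds: stage 1 is KC8; heating progressively eliminates potassium,
# and each stage n >= 2 is assumed here to follow the composition KC12n,
# which reproduces the KC24, KC36, KC48, KC60 sequence in the text.

def potassium_gic_formula(stage):
    """Return the assumed formula of the stage-n potassium GIC."""
    if stage < 1:
        raise ValueError("stage must be >= 1")
    carbons = 8 if stage == 1 else 12 * stage
    return f"KC{carbons}"

sequence = [potassium_gic_formula(n) for n in range(1, 6)]
print(" -> ".join(sequence))  # KC8 -> KC24 -> KC36 -> KC48 -> KC60
```

The rule is a tidy mnemonic for the sequence given in the text, not a general law for all intercalation chemistry.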
When the alkali metals react with the heavier elements in the carbon group (silicon, germanium, tin, and lead), ionic substances with cage-like structures are formed, such as the silicides M4Si4 (M = K, Rb, or Cs), which contain M+ and tetrahedral Si44− ions. The chemistry of alkali metal germanides, involving the germanide ion Ge4− and other cluster (Zintl) ions such as [(Ge9)2]6−, is largely analogous to that of the corresponding silicides. Alkali metal stannides are mostly ionic, sometimes with the stannide ion (Sn4−), and sometimes with more complex Zintl ions such as Sn94−, which appears in tetrapotassium nonastannide (K4Sn9). The monatomic plumbide ion (Pb4−) is unknown, and indeed its formation is predicted to be energetically unfavourable; alkali metal plumbides have complex Zintl ions, such as Pb94−. These alkali metal germanides, stannides, and plumbides may be produced by reducing germanium, tin, and lead with sodium metal in liquid ammonia.
Lithium, the lightest of the alkali metals, is the only alkali metal which reacts with nitrogen at standard conditions, and its nitride is the only stable alkali metal nitride. Nitrogen is an unreactive gas because breaking the strong triple bond in the dinitrogen molecule (N2) requires a lot of energy. The formation of an alkali metal nitride would consume the ionisation energy of the alkali metal (forming M+ ions), the energy required to break the triple bond in N2, and the energy required to form N3− ions; the only energy released in compensation is the lattice energy of the alkali metal nitride. The lattice energy is maximised with small, highly charged ions; the alkali metals do not form highly charged ions, only forming ions with a charge of +1, so only lithium, the smallest alkali metal, can release enough lattice energy to make the reaction with nitrogen exothermic, forming lithium nitride. The reactions of the other alkali metals with nitrogen would not release enough lattice energy and would thus be endothermic, so they do not form nitrides at standard conditions. Sodium nitride (Na3N) and potassium nitride (K3N), while existing, are extremely unstable, being prone to decomposing back into their constituent elements, and cannot be produced by reacting the elements with each other at standard conditions. Steric hindrance forbids the existence of rubidium or caesium nitride. However, sodium and potassium form colourless azide salts involving the linear azide anion (N3−); due to the large size of the alkali metal cations, they are thermally stable enough to be able to melt before decomposing.
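The energy bookkeeping above can be sketched as a rough Born–Haber-style balance. The figures below are approximate textbook-style atomisation and ionisation values together with illustrative lattice energies (not data from this article), chosen only to show the sign logic:

```python
# Rough Born-Haber-style energy balance for nitride formation,
# 3 M(s) + 1/2 N2(g) -> M3N(s). All values in kJ per mole of M3N.
# The numbers are approximate and the lattice energies ILLUSTRATIVE.

N2_BOND = 945   # N#N triple-bond dissociation energy, kJ/mol
EG_N3 = 2300    # rough cost of N(g) + 3e- -> N3-(g), strongly endothermic

def nitride_formation_enthalpy(sublimation, ionisation, lattice):
    """Energy costs (atomising and ionising three M atoms, breaking half
    an N2 bond, forming N3-) minus the lattice energy released.
    A negative result means formation is exothermic."""
    costs = 3 * (sublimation + ionisation) + N2_BOND / 2 + EG_N3
    return costs - lattice

# Small Li+ packs tightly with N3-, giving a very large lattice energy...
li = nitride_formation_enthalpy(sublimation=159, ionisation=520, lattice=5000)
# ...while the larger Na+ releases noticeably less.
na = nitride_formation_enthalpy(sublimation=107, ionisation=496, lattice=4400)

print(f"Li3N: {li:+.1f} kJ/mol (exothermic: {li < 0})")
print(f"Na3N: {na:+.1f} kJ/mol (exothermic: {na < 0})")
```

With these illustrative inputs, only the lithium balance comes out negative, mirroring the argument in the text that only Li3N forms exothermically.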
All the alkali metals react readily with phosphorus and arsenic to form phosphides and arsenides with the formula M3Pn (where M represents an alkali metal and Pn represents a pnictogen – phosphorus, arsenic, antimony, or bismuth). This is due to the greater size of the P3− and As3− ions, so that less lattice energy needs to be released for the salts to form. These are not the only phosphides and arsenides of the alkali metals: for example, potassium has nine different known phosphides, with formulae K3P, K4P3, K5P4, KP, K4P6, K3P7, K3P11, KP10.3, and KP15. While most metals form arsenides, only the alkali and alkaline earth metals form mostly ionic arsenides. The structure of Na3As is complex with unusually short Na–Na distances of 328–330 pm which are shorter than in sodium metal, and this indicates that even with these electropositive metals the bonding cannot be straightforwardly ionic. Other alkali metal arsenides not conforming to the formula M3As are known, such as LiAs, which has a metallic lustre and electrical conductivity indicating the presence of some metallic bonding. The antimonides are unstable and reactive as the Sb3− ion is a strong reducing agent; their reaction with acids forms the toxic and unstable gas stibine (SbH3). Indeed, they have some metallic properties, and the alkali metal antimonides of stoichiometry MSb involve antimony atoms bonded in a spiral Zintl structure. Bismuthides are not even wholly ionic; they are intermetallic compounds containing partially metallic and partially ionic bonds.
All the alkali metals react vigorously with oxygen at standard conditions. They form various types of oxides, such as simple oxides (containing the O2− ion), peroxides (containing the [O2]2− ion, where there is a single bond between the two oxygen atoms), superoxides (containing the [O2]− ion), and many others. Lithium burns in air to form lithium oxide, but sodium reacts with oxygen to form a mixture of sodium oxide and sodium peroxide. Potassium forms a mixture of potassium peroxide and potassium superoxide, while rubidium and caesium form the superoxide exclusively. Their reactivity increases going down the group: while lithium, sodium and potassium merely burn in air, rubidium and caesium are pyrophoric (spontaneously catch fire in air).
The smaller alkali metals tend to polarise the larger anions (the peroxide and superoxide) due to their small size. This attracts the electrons in the more complex anions towards one of its constituent oxygen atoms, forming an oxide ion and an oxygen atom. This causes lithium to form the oxide exclusively on reaction with oxygen at room temperature. This effect becomes drastically weaker for the larger sodium and potassium, allowing them to form the less stable peroxides. Rubidium and caesium, at the bottom of the group, are so large that even the least stable superoxides can form. Because the superoxide releases the most energy when formed, the superoxide is preferentially formed for the larger alkali metals where the more complex anions are not polarised. (The oxides and peroxides for these alkali metals do exist, but do not form upon direct reaction of the metal with oxygen at standard conditions.) In addition, the small size of the Li+ and O2− ions contributes to their forming a stable ionic lattice structure. Under controlled conditions, however, all the alkali metals, with the exception of francium, are known to form their oxides, peroxides, and superoxides. The alkali metal peroxides and superoxides are powerful oxidising agents. Sodium peroxide and potassium superoxide react with carbon dioxide to form the alkali metal carbonate and oxygen gas, which allows them to be used in submarine air purifiers; the presence of water vapour, naturally present in breath, makes the removal of carbon dioxide by potassium superoxide even more efficient. All the stable alkali metals except lithium can form red ozonides (MO3) through low-temperature reaction of the powdered anhydrous hydroxide with ozone: the ozonides may be then extracted using liquid ammonia. They slowly decompose at standard conditions to the superoxides and oxygen, and hydrolyse immediately to the hydroxides when in contact with water. 
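The polarising-power argument above can be made semi-quantitative by comparing charge/radius² for the cations. A heuristic sketch (purely illustrative; the radii are standard Shannon ionic radii, and the "main product" mapping simply restates the most complex oxygen anion each metal stabilises on burning, per the paragraphs above):

```python
# Heuristic: polarising power ~ charge / radius^2 for the +1 cations
# (Shannon ionic radii in pm). A strongly polarising (small) cation
# breaks up the larger peroxide/superoxide anions, so lithium is left
# with the simple oxide while caesium can stabilise the superoxide.

RADII_PM = {"Li": 76, "Na": 102, "K": 138, "Rb": 152, "Cs": 167}
MAIN_PRODUCT = {"Li": "oxide", "Na": "peroxide", "K": "superoxide",
                "Rb": "superoxide", "Cs": "superoxide"}

def polarising_power(metal, charge=1):
    """Crude measure of how strongly the cation distorts a large anion."""
    return charge / RADII_PM[metal] ** 2

ranked = sorted(RADII_PM, key=polarising_power, reverse=True)
for m in ranked:
    print(f"{m}+ ({RADII_PM[m]} pm): {polarising_power(m):.2e} -> {MAIN_PRODUCT[m]}")
```

The ranking comes out Li > Na > K > Rb > Cs, matching the trend from oxide to superoxide described above.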
Potassium, rubidium, and caesium also form sesquioxides M2O3, which may be better considered peroxide disuperoxides, [(M+)4([O2]2−)([O2]−)2].
Rubidium and caesium can form a great variety of suboxides with the metals in formal oxidation states below +1. Rubidium can form Rb6O and Rb9O2 (copper-coloured) upon oxidation in air, while caesium forms an immense variety of oxides, such as the ozonide CsO3 and several brightly coloured suboxides, such as Cs7O (bronze), Cs4O (red-violet), Cs11O3 (violet), Cs3O (dark green), CsO, Cs3O2, as well as Cs7O2. The last of these may be heated under vacuum to generate Cs2O.
The alkali metals can also react analogously with the heavier chalcogens (sulfur, selenium, tellurium, and polonium), and all the alkali metal chalcogenides are known (with the exception of francium's). Reaction with an excess of the chalcogen can similarly result in lower chalcogenides, with chalcogenide ions containing chains of the chalcogen atoms in question. For example, sodium can react with sulfur to form the sulfide (Na2S) and various polysulfides with the formula Na2Sx (x from 2 to 6), containing the [Sx]2− ions. Due to the basicity of the Se2− and Te2− ions, the alkali metal selenides and tellurides are alkaline in solution; when reacted directly with selenium and tellurium, alkali metal polyselenides and polytellurides are formed along with the selenides and tellurides, with [Sex]2− and [Tex]2− ions. They may be obtained directly from the elements in liquid ammonia or when air is not present, and are colourless, water-soluble compounds that air oxidises quickly back to selenium or tellurium. The alkali metal polonides are all ionic compounds containing the Po2− ion; they are very chemically stable and can be produced by direct reaction of the elements at around 300–400 °C.
The alkali metals are among the most electropositive elements on the periodic table and thus tend to bond ionically to the most electronegative elements on the periodic table, the halogens (fluorine, chlorine, bromine, iodine, and astatine), forming salts known as the alkali metal halides. The reaction is very vigorous and can sometimes result in explosions. All twenty stable alkali metal halides are known; the unstable ones have not been prepared, with the exception of sodium astatide, owing to the great instability and rarity of astatine and francium. The most well-known of the twenty is certainly sodium chloride, otherwise known as common salt. All of the stable alkali metal halides have the formula MX where M is an alkali metal and X is a halogen. They are all white ionic crystalline solids that have high melting points. All the alkali metal halides are soluble in water except for lithium fluoride (LiF), which is insoluble in water due to its very high lattice enthalpy. The high lattice enthalpy of lithium fluoride is due to the small sizes of the Li+ and F− ions, causing the electrostatic interactions between them to be strong: a similar effect occurs for magnesium fluoride, consistent with the diagonal relationship between lithium and magnesium.
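The lattice-enthalpy argument can be made quantitative with the Born–Landé equation. The sketch below is a rough estimate, assuming a rock-salt structure for both salts and a fixed Born exponent of 7 (in practice the exponent varies with the ions involved):

```python
import math

# Born-Lande estimate of the lattice energy of 1:1 alkali metal fluorides.
# Rock-salt Madelung constant and a fixed Born exponent are assumed, so
# the outputs are only rough estimates of the released lattice energy.

E = 1.602176634e-19      # elementary charge, C
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
NA = 6.02214076e23       # Avogadro constant, 1/mol
MADELUNG_NACL = 1.7476   # Madelung constant for the rock-salt lattice
BORN_N = 7               # Born exponent, taken as 7 for simplicity

def born_lande_kj_per_mol(r0_pm):
    """Lattice energy released (kJ/mol) for a 1:1 salt with
    inter-ionic distance r0 in picometres."""
    r0 = r0_pm * 1e-12
    u = (NA * MADELUNG_NACL * E**2) / (4 * math.pi * EPS0 * r0)
    return u * (1 - 1 / BORN_N) / 1000

# Shannon radii (pm): F- 133, Li+ 76, Cs+ 167
lif = born_lande_kj_per_mol(76 + 133)
csf = born_lande_kj_per_mol(167 + 133)
print(f"LiF ~{lif:.0f} kJ/mol, CsF ~{csf:.0f} kJ/mol")
```

The short Li+–F− distance gives LiF a markedly larger lattice energy than CsF, the quantitative face of the "small ions, strong electrostatic interactions" statement above.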
The alkali metals also react similarly with hydrogen to form ionic alkali metal hydrides, where the hydride anion acts as a pseudohalide: these are often used as reducing agents, producing hydrides, complex metal hydrides, or hydrogen gas. Other pseudohalides are also known, notably the cyanides. These are isostructural to the respective halides except for lithium cyanide, indicating that the cyanide ions may rotate freely. Ternary alkali metal halide oxides, such as Na3ClO, K3BrO (yellow), Na4Br2O, Na4I2O, and K4Br2O, are also known. The polyhalides are rather unstable, although those of rubidium and caesium are greatly stabilised by the feeble polarising power of these extremely large cations.
Alkali metal cations do not usually form coordination complexes with simple Lewis bases due to their low charge of just +1 and their relatively large size; thus the Li+ ion forms the most complexes and the heavier alkali metal ions form fewer and fewer (though exceptions occur for weak complexes). Lithium in particular has a very rich coordination chemistry in which it exhibits coordination numbers from 1 to 12, although octahedral hexacoordination is its preferred mode. In aqueous solution, the alkali metal ions exist as octahedral hexahydrate complexes ([M(H2O)6]+), with the exception of the lithium ion, which due to its small size forms tetrahedral tetrahydrate complexes ([Li(H2O)4]+); the alkali metals form these complexes because their ions are attracted by electrostatic forces of attraction to the polar water molecules. Because of this, anhydrous salts containing alkali metal cations are often used as desiccants. Alkali metals also readily form complexes with crown ethers (e.g. 12-crown-4 for Li+, 15-crown-5 for Na+, 18-crown-6 for K+, and 21-crown-7 for Rb+) and cryptands due to electrostatic attraction.
The alkali metals dissolve slowly in liquid ammonia, forming ammoniacal solutions of solvated metal cation M+ and solvated electron e−, which react to form hydrogen gas and the alkali metal amide (MNH2, where M represents an alkali metal): this was first noted by Humphry Davy in 1809 and rediscovered by W. Weyl in 1864. The process may be speeded up by a catalyst. Similar solutions are formed by the heavy divalent alkaline earth metals calcium, strontium, and barium, as well as the divalent lanthanides europium and ytterbium. The amide salt is quite insoluble and readily precipitates out of solution, leaving intensely coloured ammonia solutions of the alkali metals. In 1907, Charles A. Kraus identified the colour as being due to the presence of solvated electrons, which contribute to the high electrical conductivity of these solutions. At low concentrations (below 3 M), the solution is dark blue and has ten times the conductivity of aqueous sodium chloride; at higher concentrations (above 3 M), the solution is copper-coloured and has approximately the conductivity of liquid metals like mercury. In addition to the alkali metal amide salt and solvated electrons, such ammonia solutions also contain the alkali metal cation (M+), the neutral alkali metal atom (M), diatomic alkali metal molecules (M2) and alkali metal anions (M−). These are unstable and eventually become the more thermodynamically stable alkali metal amide and hydrogen gas. Solvated electrons are powerful reducing agents and are often used in chemical synthesis.
Being the smallest alkali metal, lithium forms the widest variety of and most stable organometallic compounds, which are bonded covalently. Organolithium compounds are electrically non-conducting volatile solids or liquids that melt at low temperatures, and tend to form oligomers with the structure (RLi)x where R is the organic group. As the electropositive nature of lithium puts most of the charge density of the bond on the carbon atom, effectively creating a carbanion, organolithium compounds are extremely powerful bases and nucleophiles. For use as bases, butyllithiums are often used and are commercially available. An example of an organolithium compound is methyllithium ((CH3Li)x), which exists in tetrameric (x = 4, tetrahedral) and hexameric (x = 6, octahedral) forms. Organolithium compounds, especially n-butyllithium, are useful reagents in organic synthesis, as might be expected given lithium's diagonal relationship with magnesium, which plays an important role in the Grignard reaction. For example, alkyllithiums and aryllithiums may be used to synthesise aldehydes and ketones by reaction with metal carbonyls. The reaction with nickel tetracarbonyl, for example, proceeds through an unstable acyl nickel carbonyl complex which then undergoes electrophilic substitution to give the desired aldehyde (using H+ as the electrophile) or ketone (using an alkyl halide) product.
Alkyllithiums and aryllithiums may also react with N,N-disubstituted amides to give aldehydes and ketones, and with carbon monoxide to give symmetrical ketones. They thermally decompose to eliminate a β-hydrogen, producing alkenes and lithium hydride: another route is the reaction of ethers with alkyl- and aryllithiums that act as strong bases. In non-polar solvents, aryllithiums react as the carbanions they effectively are, turning carbon dioxide to aromatic carboxylic acids (ArCO2H) and aryl ketones to tertiary carbinols (Ar'2C(Ar)OH). Finally, they may be used to synthesise other organometallic compounds through metal–halogen exchange.
Unlike the organolithium compounds, the organometallic compounds of the heavier alkali metals are predominantly ionic. The application of organosodium compounds in chemistry is limited in part due to competition from organolithium compounds, which are commercially available and exhibit more convenient reactivity. The principal organosodium compound of commercial importance is sodium cyclopentadienide. Sodium tetraphenylborate can also be classified as an organosodium compound since in the solid state sodium is bound to the aryl groups. Organometallic compounds of the heavier alkali metals are even more reactive than organosodium compounds and of limited utility. A notable reagent is Schlosser's base, a mixture of n-butyllithium and potassium tert-butoxide. This reagent reacts with propene to form the compound allylpotassium (KCH2CHCH2). cis-2-Butene and trans-2-butene equilibrate when in contact with alkali metals. Whereas isomerisation is fast with lithium and sodium, it is slow with the heavier alkali metals. The heavier alkali metals also favour the sterically congested conformation. Several crystal structures of organopotassium compounds have been reported, establishing that they, like the sodium compounds, are polymeric. Organosodium, organopotassium, organorubidium and organocaesium compounds are all mostly ionic and are insoluble (or nearly so) in nonpolar solvents.
Alkyl and aryl derivatives of sodium and potassium tend to react with air. They cause the cleavage of ethers, generating alkoxides. Unlike alkyllithium compounds, alkylsodiums and alkylpotassiums cannot be made by reacting the metals with alkyl halides because Wurtz coupling occurs:

2 RX + 2 M → R–R + 2 MX (M = Na or K)
As such, they have to be made by reacting alkylmercury compounds with sodium or potassium metal in inert hydrocarbon solvents. While methylsodium forms tetramers like methyllithium, methylpotassium is more ionic and has the nickel arsenide structure with discrete methyl anions and potassium cations.
The alkali metals and their hydrides react with acidic hydrocarbons, for example cyclopentadienes and terminal alkynes, to give salts. Liquid ammonia, ether, or hydrocarbon solvents are used, the most common being tetrahydrofuran. The most important of these compounds is sodium cyclopentadienide, NaC5H5, an important precursor to many transition metal cyclopentadienyl derivatives. Similarly, the alkali metals react with cyclooctatetraene in tetrahydrofuran to give alkali metal cyclooctatetraenides; for example, dipotassium cyclooctatetraenide (K2C8H8) is an important precursor to many metal cyclooctatetraenyl derivatives, such as uranocene. The large and very weakly polarising alkali metal cations can stabilise large, aromatic, polarisable radical anions, such as the dark-green sodium naphthalenide, Na+[C10H8•]−, a strong reducing agent.
Reaction with oxygen
Upon reacting with oxygen, alkali metals form oxides, peroxides, superoxides and suboxides. However, the first three are more common. The table below shows the types of compounds formed in reaction with oxygen. The compound in brackets represents the minor product of combustion.
The alkali metal peroxides are ionic compounds that are unstable in water. The peroxide anion is only weakly bound to the cation and is hydrolysed, forming hydroxide and hydrogen peroxide, in which stronger covalent bonds are present.
The other oxygen compounds are also unstable in water.
Reaction with sulfur
With sulfur, the alkali metals form sulfides and polysulfides.
Because alkali metal sulfides are essentially salts of a weak acid and a strong base, they form basic solutions.
Reaction with nitrogen
Lithium is the only alkali metal that combines directly with nitrogen at room temperature.
Li3N can react with water to liberate ammonia.
Reaction with hydrogen
With hydrogen, alkali metals form saline hydrides that hydrolyse in water.
Reaction with carbon
Lithium is the only alkali metal that reacts directly with carbon, giving dilithium acetylide. Na and K can react with acetylene to give acetylides.
Reaction with water
On reaction with water, the alkali metals generate hydroxide ions and hydrogen gas. This reaction is vigorous and highly exothermic, and the hydrogen produced may ignite in air or even explode in the case of Rb and Cs.
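The overall reaction is 2 M + 2 H2O → 2 MOH + H2. As a minimal sketch (with M standing in for any alkali metal), the atom balance can be verified by counting atoms on each side:

```python
from collections import Counter

# Check atom balance for 2 M + 2 H2O -> 2 MOH + H2,
# where "M" stands for any alkali metal.

def count_atoms(species):
    """species: list of (coefficient, {element: count}) pairs."""
    total = Counter()
    for coeff, formula in species:
        for elem, n in formula.items():
            total[elem] += coeff * n
    return total

left = [(2, {"M": 1}), (2, {"H": 2, "O": 1})]          # 2 M + 2 H2O
right = [(2, {"M": 1, "O": 1, "H": 1}), (1, {"H": 2})]  # 2 MOH + H2
print(count_atoms(left) == count_atoms(right))  # True: the equation balances
```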
Reaction with other salts
The alkali metals are very good reducing agents. They can reduce metal cations that are less electropositive. Titanium is produced industrially by the reduction of titanium tetrachloride with Na at 400 °C (the Hunter process).
Reaction with organohalide compounds
Alkali metals react with halogen derivatives to generate hydrocarbons via the Wurtz reaction.
Alkali metals in liquid ammonia
Alkali metals dissolve in liquid ammonia or other donor solvents like aliphatic amines or hexamethylphosphoramide to give blue solutions. These solutions are believed to contain free, solvated electrons.
Due to the presence of solvated electrons, these solutions are very powerful reducing agents used in organic synthesis.
The reduction of aromatic rings by these solutions is known as the Birch reduction. These solutions can carry out many other reductions as well.
Although francium is the heaviest alkali metal that has been discovered, there has been some theoretical work predicting the physical and chemical characteristics of hypothetical heavier alkali metals. Being the first period 8 element, the undiscovered element ununennium (element 119) is predicted to be the next alkali metal after francium and to behave much like its lighter congeners; however, it is also predicted to differ from the lighter alkali metals in some properties. Its chemistry is predicted to be closer to that of potassium or rubidium than to that of caesium or francium. This is unusual, as periodic trends, ignoring relativistic effects, would predict ununennium to be even more reactive than caesium and francium. This lowered reactivity is due to the relativistic stabilisation of ununennium's valence electron, increasing ununennium's first ionisation energy and decreasing the metallic and ionic radii; this effect is already seen for francium. This assumes that ununennium will behave chemically as an alkali metal, which, although likely, may not be true due to relativistic effects. The relativistic stabilisation of the 8s orbital also increases ununennium's electron affinity far beyond that of caesium and francium; indeed, ununennium is expected to have an electron affinity higher than all the alkali metals lighter than it. Relativistic effects also cause a very large drop in the polarisability of ununennium. On the other hand, ununennium is predicted to continue the trend of melting points decreasing going down the group, being expected to have a melting point between 0 °C and 30 °C.
The stabilisation of ununennium's valence electron and thus the contraction of the 8s orbital cause its atomic radius to be lowered to 240 pm, very close to that of rubidium (247 pm), so that the chemistry of ununennium in the +1 oxidation state should be more similar to the chemistry of rubidium than to that of francium. On the other hand, the ionic radius of the Uue+ ion is predicted to be larger than that of Rb+, because the 7p orbitals are destabilised and are thus larger than the p-orbitals of the lower shells. In addition to the +1 oxidation state characteristic of the group, ununennium may also show the +3 oxidation state, which is not seen in any other alkali metal: this is because of the destabilisation and expansion of the 7p3/2 spinor, causing its outermost electrons to have a lower ionisation energy than what would otherwise be expected. Indeed, many ununennium compounds are expected to have a large covalent character, due to the involvement of the 7p3/2 electrons in the bonding.
Not as much work has been done predicting the properties of the alkali metals beyond ununennium. Although a simple extrapolation of the periodic table (by the aufbau principle) would put element 169, unhexennium, under ununennium, Dirac-Fock calculations predict that the next element after ununennium with alkali-metal-like properties may be element 165, unhexpentium, which is predicted to have the electron configuration [Og] 5g18 6f14 7d10 8s2 8p1/22 9s1. This element would be intermediate in properties between an alkali metal and a group 11 element, and while its physical and atomic properties would be closer to the former, its chemistry may be closer to that of the latter. Further calculations show that unhexpentium would follow the trend of increasing ionisation energy beyond caesium, having an ionisation energy comparable to that of sodium, and that it should also continue the trend of decreasing atomic radii beyond caesium, having an atomic radius comparable to that of potassium. However, the 7d electrons of unhexpentium may also be able to participate in chemical reactions along with the 9s electron, possibly allowing oxidation states beyond +1, whence the likely transition metal behaviour of unhexpentium. Due to the alkali and alkaline earth metals both being s-block elements, these predictions for the trends and properties of ununennium and unhexpentium also mostly hold quite similarly for the corresponding alkaline earth metals unbinilium (Ubn) and unhexhexium (Uhh). Unsepttrium, element 173, may be an even better heavier homologue of ununennium; with a predicted electron configuration of [Usb] 6g1, it returns to the alkali-metal-like situation of having one easily removed electron far above a closed p-shell in energy, and is expected to be even more reactive than caesium.
The probable properties of further alkali metals beyond unsepttrium have not been explored yet as of 2019, and they may or may not be able to exist. In periods 8 and above of the periodic table, relativistic and shell-structure effects become so strong that extrapolations from lighter congeners become completely inaccurate. In addition, the relativistic and shell-structure effects (which stabilise the s-orbitals and destabilise and expand the d-, f-, and g-orbitals of higher shells) have opposite effects, causing an even larger difference between relativistic and non-relativistic calculations of the properties of elements with such high atomic numbers. Interest in the chemical properties of ununennium, unhexpentium, and unsepttrium stems from the fact that they are located close to the expected locations of islands of stability, centred at elements 122 (306Ubb) and 164 (482Uhq).
Many other substances are similar to the alkali metals in their tendency to form monopositive cations. Analogously to the pseudohalogens, they have sometimes been called "pseudo-alkali metals". These substances include some elements and many more polyatomic ions; the polyatomic ions are especially similar to the alkali metals in their large size and weak polarising power.
The element hydrogen, with one electron per neutral atom, is usually placed at the top of Group 1 of the periodic table for convenience, but hydrogen is not normally considered to be an alkali metal; when it is considered to be an alkali metal, it is because of its atomic properties and not its chemical properties. Under typical conditions, pure hydrogen exists as a diatomic gas consisting of two atoms per molecule (H2); however, the alkali metals only form diatomic molecules (such as dilithium, Li2) at high temperatures, when they are in the gaseous state.
Hydrogen, like the alkali metals, has one valence electron and reacts easily with the halogens, but the similarities end there because of the small size of a bare proton H+ compared to the alkali metal cations. Its placement above lithium is primarily due to its electron configuration. It is sometimes placed above carbon due to their similar electronegativities or fluorine due to their similar chemical properties.
The first ionisation energy of hydrogen (1312.0 kJ/mol) is much higher than that of the alkali metals. As only one additional electron is required to fill in the outermost shell of the hydrogen atom, hydrogen often behaves like a halogen, forming the negative hydride ion, and is very occasionally considered to be a halogen on that basis. (The alkali metals can also form negative ions, known as alkalides, but these are little more than laboratory curiosities, being unstable.) An argument against this placement is that formation of hydride from hydrogen is endothermic, unlike the exothermic formation of halides from halogens. The radius of the H− anion also does not fit the trend of increasing size going down the halogens: indeed, H− is very diffuse because its single proton cannot easily control both electrons. It was expected for some time that liquid hydrogen would show metallic properties; while this has been shown to not be the case, under extremely high pressures, such as those found at the cores of Jupiter and Saturn, hydrogen does become metallic and behaves like an alkali metal; in this phase, it is known as metallic hydrogen. The electrical resistivity of liquid metallic hydrogen at 3000 K is approximately equal to that of liquid rubidium and caesium at 2000 K at the respective pressures when they undergo a nonmetal-to-metal transition.
The 1s1 electron configuration of hydrogen, while superficially similar to that of the alkali metals (ns1), is unique because there is no 1p subshell. Hence it can lose an electron to form the hydron H+, or gain one to form the hydride ion H−. In the former case it resembles superficially the alkali metals; in the latter case, the halogens, but the differences due to the lack of a 1p subshell are important enough that neither group fits the properties of hydrogen well. Group 14 is also a good fit in terms of thermodynamic properties such as ionisation energy and electron affinity, but makes chemical nonsense because hydrogen cannot be tetravalent. Thus none of the three placements are entirely satisfactory, although group 1 is the most common placement (if one is chosen) because the hydron is by far the most important of all monatomic hydrogen species, being the foundation of acid-base chemistry. As an example of hydrogen's unorthodox properties stemming from its unusual electron configuration and small size, the hydrogen ion is very small (radius around 150 fm compared to the 50–220 pm size of most other atoms and ions) and so is nonexistent in condensed systems other than in association with other atoms or molecules. Indeed, transferring of protons between chemicals is the basis of acid-base chemistry. Also unique is hydrogen's ability to form hydrogen bonds, which are an effect of charge-transfer, electrostatic, and electron correlative contributing phenomena. While analogous lithium bonds are also known, they are mostly electrostatic. Nevertheless, hydrogen can take on the same structural role as the alkali metals in some molecular crystals, and has a close relationship with the lightest alkali metals (especially lithium).
The ammonium ion (NH4+) has very similar properties to the heavier alkali metals, acting as an alkali metal intermediate between potassium and rubidium, and is often considered a close relative. For example, most alkali metal salts are soluble in water, a property which ammonium salts share. Ammonium is expected to behave stably as a metal (NH4+ ions in a sea of delocalised electrons) at very high pressures (though less than the typical pressure of around 100 GPa at which transitions from insulating to metallic behaviour occur), and could possibly occur inside the ice giants Uranus and Neptune, which may have significant impacts on their interior magnetic fields. It has been estimated that the transition from a mixture of ammonia and dihydrogen molecules to metallic ammonium may occur at pressures just below 25 GPa. Under standard conditions, ammonium can form a metallic amalgam with mercury.
Other "pseudo-alkali metals" include the alkylammonium cations, in which some of the hydrogen atoms in the ammonium cation are replaced by alkyl or aryl groups. In particular, the quaternary ammonium cations (NR4+) are very useful since they are permanently charged, and they are often used as an alternative to the expensive Cs+ to stabilise very large and very easily polarisable anions. Tetraalkylammonium hydroxides, like alkali metal hydroxides, are very strong bases that react with atmospheric carbon dioxide to form carbonates. Furthermore, the nitrogen atom may be replaced by a phosphorus, arsenic, or antimony atom (the heavier nonmetallic pnictogens), creating a phosphonium (PH4+) or arsonium (AsH4+) cation that can itself be substituted similarly; while stibonium (SbH4+) itself is not known, some of its organic derivatives are characterised.
Cobaltocene, Co(C5H5)2, is a metallocene, the cobalt analogue of ferrocene. It is a dark purple solid. Cobaltocene has 19 valence electrons, one more than usually found in organotransition metal complexes, such as its very stable relative, ferrocene, in accordance with the 18-electron rule. This additional electron occupies an orbital that is antibonding with respect to the Co–C bonds. Consequently, many chemical reactions of Co(C5H5)2 are characterized by its tendency to lose this "extra" electron, yielding a very stable 18-electron cation known as cobaltocenium. Many cobaltocenium salts coprecipitate with caesium salts, and cobaltocenium hydroxide is a strong base that absorbs atmospheric carbon dioxide to form cobaltocenium carbonate. Like the alkali metals, cobaltocene is a strong reducing agent, and decamethylcobaltocene is stronger still due to the combined inductive effect of the ten methyl groups. Cobalt may be substituted by its heavier congener rhodium to give rhodocene, an even stronger reducing agent. Iridocene (involving iridium) would presumably be still more potent, but is not very well-studied due to its instability.
Thallium is the heaviest stable element in group 13 of the periodic table. At the bottom of the periodic table, the inert pair effect is quite strong, because of the relativistic stabilisation of the 6s orbital and the decreasing bond energy as the atoms increase in size so that the amount of energy released in forming two more bonds is not worth the high ionisation energies of the 6s electrons. It displays the +1 oxidation state that all the known alkali metals display, and thallium compounds with thallium in its +1 oxidation state closely resemble the corresponding potassium or silver compounds stoichiometrically due to the similar ionic radii of the Tl+ (164 pm), K+ (152 pm) and Ag+ (129 pm) ions. It was sometimes considered an alkali metal in continental Europe (but not in England) in the years immediately following its discovery, and was placed just after caesium as the sixth alkali metal in Dmitri Mendeleev's 1869 periodic table and Julius Lothar Meyer's 1868 periodic table. (Mendeleev's 1871 periodic table and Meyer's 1870 periodic table put thallium in its current position in the boron group and left the space below caesium blank.) However, thallium also displays the oxidation state +3, which no known alkali metal displays (although ununennium, the undiscovered seventh alkali metal, is predicted to possibly display the +3 oxidation state). The sixth alkali metal is now considered to be francium. While Tl+ is stabilised by the inert pair effect, this inert pair of 6s electrons is still able to participate chemically, so that these electrons are stereochemically active in aqueous solution. Additionally, the thallium halides (except TlF) are quite insoluble in water, and TlI has an unusual structure because of the presence of the stereochemically active inert pair in thallium.
The group 11 metals (or coinage metals), copper, silver, and gold, are typically categorised as transition metals given they can form ions with incomplete d-shells. Physically, they have the relatively low melting points and high electronegativity values associated with post-transition metals. The filled d subshell and free s electron of Cu, Ag, and Au contribute to their high electrical and thermal conductivity; transition metals to the left of group 11 experience interactions between s electrons and the partially filled d subshell that lower electron mobility. Chemically, the group 11 metals behave like main-group metals in their +1 valence states, and are hence somewhat related to the alkali metals: this is one reason for their previously being labelled as "group IB", paralleling the alkali metals' "group IA". They are occasionally classified as post-transition metals. Their spectra are analogous to those of the alkali metals. Their monopositive ions are diamagnetic and contribute no colour to their salts, like those of the alkali metals.
In Mendeleev's 1871 periodic table, copper, silver, and gold are listed twice, once under group VIII (with the iron triad and platinum group metals), and once under group IB. Group IB was nonetheless parenthesised to note that it was tentative. Mendeleev's main criterion for group assignment was the maximum oxidation state of an element: on that basis, the group 11 elements could not be classified in group IB, due to the existence of copper(II) and gold(III) compounds being known at that time. However, eliminating group IB would make group I the only main group (group VIII was labelled a transition group) to lack an A–B bifurcation. Soon afterward, a majority of chemists chose to classify these elements in group IB and remove them from group VIII for the resulting symmetry: this was the predominant classification until the rise of the modern medium-long 18-column periodic table, which separated the alkali metals and group 11 metals.
The coinage metals were traditionally regarded as a subdivision of the alkali metal group, due to them sharing the characteristic s1 electron configuration of the alkali metals (group 1: p6s1; group 11: d10s1). However, the similarities are largely confined to the stoichiometries of the +1 compounds of both groups, and not their chemical properties. This stems from the filled d subshell providing a much weaker shielding effect on the outermost s electron than the filled p subshell, so that the coinage metals have much higher first ionisation energies and smaller ionic radii than do the corresponding alkali metals. Furthermore, they have higher melting points, hardnesses, and densities, and lower reactivities and solubilities in liquid ammonia, as well as having more covalent character in their compounds. Finally, the alkali metals are at the top of the electrochemical series, whereas the coinage metals are almost at the very bottom. The coinage metals' filled d shell is much more easily disrupted than the alkali metals' filled p shell, so that the second and third ionisation energies are lower, enabling higher oxidation states than +1 and a richer coordination chemistry, thus giving the group 11 metals clear transition metal character. Particularly noteworthy is gold forming ionic compounds with rubidium and caesium, in which it forms the auride ion (Au−) which also occurs in solvated form in liquid ammonia solution: here gold behaves as a pseudohalogen because its 5d106s1 configuration has one electron less than the quasi-closed shell 5d106s2 configuration of mercury.
The production of pure alkali metals is somewhat complicated due to their extreme reactivity with commonly used substances, such as water. From their silicate ores, all the stable alkali metals may be obtained the same way: sulfuric acid is first used to dissolve the desired alkali metal ion and aluminium(III) ions from the ore (leaching), whereupon basic precipitation removes the aluminium ions from the mixture by precipitating them as the hydroxide. The remaining insoluble alkali metal carbonate is then precipitated selectively; the salt is then dissolved in hydrochloric acid to produce the chloride. The result is then left to evaporate and the alkali metal can then be isolated. Lithium and sodium are typically isolated through electrolysis from their liquid chlorides, with calcium chloride typically added to lower the melting point of the mixture. The heavier alkali metals, however, are more typically isolated in a different way, where a reducing agent (typically sodium for potassium, and magnesium or calcium for the heaviest alkali metals) is used to reduce the alkali metal chloride. The liquid or gaseous product (the alkali metal) then undergoes fractional distillation for purification. Most routes to the pure alkali metals require the use of electrolysis due to their high reactivity; one of the few which does not is the pyrolysis of the corresponding alkali metal azide, which yields the metal for sodium, potassium, rubidium, and caesium and the nitride for lithium.
Lithium salts have to be extracted from the water of mineral springs, brine pools, and brine deposits. The metal is produced electrolytically from a mixture of fused lithium chloride and potassium chloride.
Sodium occurs mostly in seawater and dried seabeds, but is now produced through the electrolysis of sodium chloride in a Downs cell, in which the melting point of the mixture is lowered to below 700 °C. Extremely pure sodium can be produced through the thermal decomposition of sodium azide. Potassium occurs in many minerals, such as sylvite (potassium chloride). Previously, potassium was generally made from the electrolysis of potassium chloride or potassium hydroxide, found extensively in places such as Canada, Russia, Belarus, Germany, Israel, the United States, and Jordan, in a method similar to how sodium was produced in the late 1800s and early 1900s. It can also be produced from seawater. However, these methods are problematic because the potassium metal tends to dissolve in its molten chloride and vaporises significantly at the operating temperatures, potentially forming the explosive superoxide. As a result, pure potassium metal is now produced by reducing molten potassium chloride with sodium metal at 850 °C.
Although sodium is less reactive than potassium, this process works because at such high temperatures potassium is more volatile than sodium and can easily be distilled off, so that the equilibrium shifts towards the right to produce more potassium gas and proceeds almost to completion.
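The distillation-driven equilibrium described above can be written as a simple exchange reaction (a standard textbook formulation added here for clarity, not taken verbatim from the source):

```latex
\mathrm{Na_{(g)} + KCl_{(l)} \rightleftharpoons NaCl_{(l)} + K_{(g)}}
```

Continuously distilling off the more volatile potassium removes it from the melt, pulling the equilibrium to the right even though sodium is the weaker reducing agent under ordinary conditions.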
For several years in the 1950s and 1960s, a by-product of potassium production called Alkarb was a main source for rubidium. Alkarb contained 21% rubidium, while the rest was potassium and a small fraction of caesium. Today the largest producers of caesium, for example the Tanco Mine in Manitoba, Canada, produce rubidium as a by-product from pollucite. A common method for separating rubidium from potassium and caesium is the fractional crystallisation of a rubidium and caesium alum (Cs, Rb)Al(SO4)2·12H2O, which yields pure rubidium alum after approximately 30 recrystallisations. The limited applications and the lack of a mineral rich in rubidium limit the production of rubidium compounds to 2 to 4 tonnes per year. Caesium, however, is not produced from the above reaction. Instead, the mining of pollucite ore is the main method of obtaining pure caesium, extracted from the ore mainly by three methods: acid digestion, alkaline decomposition, and direct reduction. Both metals are produced as by-products of lithium production: after 1958, when interest in lithium's thermonuclear properties increased sharply, the production of rubidium and caesium also increased correspondingly. Pure rubidium and caesium metals are produced by reducing their chlorides with calcium metal at 750 °C and low pressure.
As a result of its extreme rarity in nature, most francium is synthesised in the nuclear reaction 197Au + 18O → 210Fr + 5 n, yielding francium-209, francium-210, and francium-211. The greatest quantity of francium ever assembled to date is about 300,000 neutral atoms, which were synthesised using the nuclear reaction given above. When the only natural isotope francium-223 is specifically required, it is produced as the alpha daughter of actinium-227, itself produced synthetically from the neutron irradiation of natural radium-226, one of the daughters of natural uranium-238.
Lithium, sodium, and potassium have many applications, while rubidium and caesium are very useful in academic contexts but do not have many applications yet. Lithium is often used in lithium-ion batteries, and lithium oxide can help process silica. Lithium stearate is a thickener and can be used to make lubricating greases; it is produced from lithium hydroxide, which is also used to absorb carbon dioxide in space capsules and submarines. Lithium chloride is used as a brazing alloy for aluminium parts. Metallic lithium is used in alloys with magnesium and aluminium to give very tough and light alloys.
Sodium compounds have many applications, the most well-known being sodium chloride as table salt. Sodium salts of fatty acids are used as soap. Pure sodium metal also has many applications, including use in sodium-vapour lamps, which produce very efficient light compared to other types of lighting, and can help smooth the surface of other metals. Being a strong reducing agent, it is often used to reduce many other metals, such as titanium and zirconium, from their chlorides. Furthermore, it is very useful as a heat-exchange liquid in fast breeder nuclear reactors due to its low melting point, viscosity, and cross-section towards neutron absorption.
Potassium compounds are often used as fertilisers as potassium is an important element for plant nutrition. Potassium hydroxide is a very strong base, and is used to control the pH of various substances. Potassium nitrate and potassium permanganate are often used as powerful oxidising agents. Potassium superoxide is used in breathing masks, as it reacts with carbon dioxide to give potassium carbonate and oxygen gas. Pure potassium metal is not often used, but its alloys with sodium may substitute for pure sodium in fast breeder nuclear reactors.
Rubidium and caesium are often used in atomic clocks. Caesium atomic clocks are extraordinarily accurate; if a clock had been made at the time of the dinosaurs, it would be off by less than four seconds (after 80 million years). For that reason, caesium atoms are used as the definition of the second. Rubidium ions are often used in purple fireworks, and caesium is often used in drilling fluids in the petroleum industry.
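The quoted clock accuracy can be sanity-checked with a short calculation: less than four seconds of drift over 80 million years corresponds to a fractional error on the order of 10⁻¹⁵ (the figures below are simply those quoted above, not independent data):

```python
# Fractional timekeeping error implied by "less than four seconds
# off after 80 million years", using a Julian year of 365.25 days.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

drift_s = 4.0                         # quoted maximum drift
interval_s = 80e6 * SECONDS_PER_YEAR  # 80 million years in seconds

fractional_error = drift_s / interval_s
print(f"fractional error ≈ {fractional_error:.1e}")  # about 1.6e-15
```

This is the same order of accuracy that makes the caesium hyperfine transition suitable as the basis for the SI definition of the second.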
Francium has no commercial applications, but because of francium's relatively simple atomic structure, among other things, it has been used in spectroscopy experiments, leading to more information regarding energy levels and the coupling constants between subatomic particles. Studies on the light emitted by laser-trapped francium-210 ions have provided accurate data on transitions between atomic energy levels, similar to those predicted by quantum theory.
Pure alkali metals are dangerously reactive with air and water and must be kept away from heat, fire, oxidising agents, acids, most organic compounds, halocarbons, plastics, and moisture. They also react with carbon dioxide and carbon tetrachloride, so that normal fire extinguishers are counterproductive when used on alkali metal fires. Some Class D dry powder extinguishers designed for metal fires are effective, depriving the fire of oxygen and cooling the alkali metal.
Experiments are usually conducted using only small quantities of a few grams in a fume hood. Small quantities of lithium may be disposed of by reaction with cool water, but the heavier alkali metals should be dissolved in the less reactive isopropanol. The alkali metals must be stored under mineral oil or an inert atmosphere. The inert atmosphere used may be argon or nitrogen gas, except for lithium, which reacts with nitrogen. Rubidium and caesium must be kept away from air, even under oil, because even a small amount of air diffused into the oil may trigger formation of the dangerously explosive peroxide; for the same reason, potassium should not be stored under oil in an oxygen-containing atmosphere for longer than 6 months.
The bioinorganic chemistry of the alkali metal ions has been extensively reviewed.
Solid state crystal structures have been determined for many complexes of alkali metal ions in small peptides, nucleic acid constituents, carbohydrates and ionophore complexes.
Lithium naturally only occurs in traces in biological systems and has no known biological role, but does have effects on the body when ingested. Lithium carbonate is used as a mood stabiliser in psychiatry to treat bipolar disorder (manic-depression) in daily doses of about 0.5 to 2 grams, although there are side-effects. Excessive ingestion of lithium causes drowsiness, slurred speech and vomiting, among other symptoms, and poisons the central nervous system, which is dangerous as the required dosage of lithium to treat bipolar disorder is only slightly lower than the toxic dosage. Its biochemistry, the way it is handled by the human body and studies using rats and goats suggest that it is an essential trace element, although the natural biological function of lithium in humans has yet to be identified.
Sodium and potassium occur in all known biological systems, generally functioning as electrolytes inside and outside cells. Sodium is an essential nutrient that regulates blood volume, blood pressure, osmotic equilibrium and pH; the minimum physiological requirement for sodium is 500 milligrams per day. Sodium chloride (also known as common salt) is the principal source of sodium in the diet, and is used as seasoning and preservative, such as for pickling and jerky; most of it comes from processed foods. The Dietary Reference Intake for sodium is 1.5 grams per day, but most people in the United States consume more than 2.3 grams per day, the minimum amount that promotes hypertension; this in turn causes 7.6 million premature deaths worldwide.
Potassium is the major cation (positive ion) inside animal cells, while sodium is the major cation outside animal cells. The concentration differences of these charged particles causes a difference in electric potential between the inside and outside of cells, known as the membrane potential. The balance between potassium and sodium is maintained by ion transporter proteins in the cell membrane. The cell membrane potential created by potassium and sodium ions allows the cell to generate an action potential—a "spike" of electrical discharge. The ability of cells to produce electrical discharge is critical for body functions such as neurotransmission, muscle contraction, and heart function. Disruption of this balance may thus be fatal: for example, ingestion of large amounts of potassium compounds can lead to hyperkalemia strongly influencing the cardiovascular system. Potassium chloride is used in the United States for lethal injection executions.
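The link between the concentration difference and the resulting potential is quantified by the Nernst equation (standard electrophysiology, added here for context rather than drawn from the source); for potassium:

```latex
E_{\mathrm{K}} = \frac{RT}{zF}\,\ln\frac{[\mathrm{K}^{+}]_{\text{out}}}{[\mathrm{K}^{+}]_{\text{in}}}
```

where R is the gas constant, T the absolute temperature, z the ion charge (+1 for K+), and F the Faraday constant. Because intracellular potassium is much more concentrated than extracellular potassium, this equilibrium potential is strongly negative, roughly −90 mV in typical mammalian cells, and it dominates the resting membrane potential.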
Due to their similar atomic radii, rubidium and caesium in the body mimic potassium and are taken up similarly. Rubidium has no known biological role, but may help stimulate metabolism and, like caesium, can replace potassium in the body, causing potassium deficiency. Partial substitution is quite possible and rather non-toxic: a 70 kg person contains on average 0.36 g of rubidium, and an increase in this value by 50 to 100 times did not show negative effects in test persons. Rats can survive up to 50% substitution of potassium by rubidium. Rubidium (and to a much lesser extent caesium) can function as a temporary cure for hypokalemia; while rubidium can adequately substitute for potassium physiologically in some systems, caesium is never able to do so. There is only very limited evidence in the form of deficiency symptoms for rubidium being possibly essential in goats; even if this is true, the trace amounts usually present in food are more than enough.
Caesium compounds are rarely encountered by most people, but most caesium compounds are mildly toxic. Like rubidium, caesium tends to substitute potassium in the body, but is significantly larger and is therefore a poorer substitute. Excess caesium can lead to hypokalemia, arrhythmia, and acute cardiac arrest, but such amounts would not ordinarily be encountered in natural sources. As such, caesium is not a major chemical environmental pollutant. The median lethal dose (LD50) value for caesium chloride in mice is 2.3 g per kilogram, which is comparable to the LD50 values of potassium chloride and sodium chloride. Caesium chloride has been promoted as an alternative cancer therapy, but has been linked to the deaths of over 50 patients, on whom it was used as part of a scientifically unvalidated cancer treatment.
Radioisotopes of caesium require special precautions: the improper handling of caesium-137 gamma ray sources can lead to release of this radioisotope and radiation injuries. Perhaps the best-known case is the Goiânia accident of 1987, in which an improperly-disposed-of radiation therapy system from an abandoned clinic in the city of Goiânia, Brazil, was scavenged from a junkyard, and the glowing caesium salt sold to curious, uneducated buyers. This led to four deaths and serious injuries from radiation exposure. Together with caesium-134, iodine-131, and strontium-90, caesium-137 was among the isotopes distributed by the Chernobyl disaster which constitute the greatest risk to health. Radioisotopes of francium would presumably be dangerous as well due to their high decay energy and short half-life, but none have been produced in large enough amounts to pose any serious risk.
Alphabet
An alphabet is a standardized set of basic written symbols or graphemes (called letters) that represent the phonemes of certain spoken languages. Not all writing systems represent language in this way; in a syllabary, each character represents a syllable, for instance, and logographic systems use characters to represent words, morphemes, or other semantic units.
The first fully phonemic script, the Proto-Canaanite script, later known as the Phoenician alphabet, is considered to be the first alphabet, and is the ancestor of most modern alphabets, including Arabic, Greek, Latin, Cyrillic, Hebrew, and possibly Brahmic. Peter T. Daniels, however, distinguishes an abugida or alphasyllabary, a set of graphemes that represent consonantal base letters which diacritics modify to represent vowels (as in Devanagari and other South Asian scripts), an abjad, in which letters predominantly or exclusively represent consonants (as in the original Phoenician, Hebrew or Arabic), and an "alphabet", a set of graphemes that represent both vowels and consonants. In this narrow sense of the word the first "true" alphabet was the Greek alphabet, which was developed on the basis of the earlier Phoenician alphabet.
Of the dozens of alphabets in use today, the most popular is the Latin alphabet, which was derived from the Greek, and which many languages modify by adding letters formed using diacritical marks. While most alphabets have letters composed of lines (linear writing), there are also exceptions such as the alphabets used in Braille. The Khmer alphabet (for Cambodian) is the longest, with 74 letters.
Alphabets are usually associated with a standard ordering of letters. This makes them useful for purposes of collation, specifically by allowing words to be sorted in alphabetical order. It also means that their letters can be used as an alternative method of "numbering" ordered items, in such contexts as numbered lists and number placements.
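Both uses mentioned above, collation and letters-as-numbering, are easy to illustrate in a few lines of code (the word list is an arbitrary example):

```python
import string

# Collation: the standard ordering of letters induces an ordering on words,
# which is what allows sorting into alphabetical order.
words = ["beta", "alpha", "gamma", "alphabet"]
print(sorted(words))  # ['alpha', 'alphabet', 'beta', 'gamma']

# Letters as an alternative "numbering" for ordered items, as in a), b), c) lists.
for letter, item in zip(string.ascii_lowercase, ["first", "second", "third"]):
    print(f"{letter}) {item}")
```

Note that a prefix sorts before any longer word that extends it ("alpha" before "alphabet"), a convention inherited from dictionary ordering.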
The English word "alphabet" came into Middle English from the Late Latin word "alphabetum", which in turn originated in the Greek ἀλφάβητος ("alphabētos"). The Greek word was made from the first two letters, "alpha" (α) and "beta" (β). The names for the Greek letters came from the first two letters of the Phoenician alphabet: "aleph", which also meant "ox", and "bet", which also meant "house".
Sometimes, like in the alphabet song in English, the term "ABCs" is used instead of the word "alphabet" ("Now I know my ABCs"...). "Knowing one's ABCs", in general, can be used as a metaphor for knowing the basics about anything.
The history of the alphabet started in ancient Egypt. Egyptian writing had a set of some 24 hieroglyphs that are called uniliterals, to represent syllables that begin with a single consonant of their language, plus a vowel (or no vowel) to be supplied by the native speaker. These glyphs were used as pronunciation guides for logograms, to write grammatical inflections, and, later, to transcribe loan words and foreign names.
In the Middle Bronze Age, an apparently "alphabetic" system known as the Proto-Sinaitic script appears in Egyptian turquoise mines in the Sinai peninsula dated to circa the 15th century BC, apparently left by Canaanite workers. In 1999, John and Deborah Darnell discovered an even earlier version of this first alphabet at Wadi el-Hol dated to circa 1800 BC and showing evidence of having been adapted from specific forms of Egyptian hieroglyphs that could be dated to circa 2000 BC, strongly suggesting that the first alphabet had been developed about that time. Based on letter appearances and names, it is believed to be based on Egyptian hieroglyphs. This script had no characters representing vowels, although originally it probably was a syllabary, but unneeded symbols were discarded. An alphabetic cuneiform script with 30 signs including three that indicate the following vowel was invented in Ugarit before the 15th century BC. This script was not used after the destruction of Ugarit.
The Proto-Sinaitic script eventually developed into the Phoenician alphabet, which is conventionally called "Proto-Canaanite" before ca. 1050 BC. The oldest text in Phoenician script is an inscription on the sarcophagus of King Ahiram. This script is the parent script of all western alphabets. By the tenth century, two other forms can be distinguished, namely Canaanite and Aramaic. The Aramaic gave rise to the Hebrew script. The South Arabian alphabet, a sister script to the Phoenician alphabet, is the script from which the Ge'ez alphabet (an abugida) is descended. Vowelless alphabets are called abjads, currently exemplified in scripts including Arabic, Hebrew, and Syriac. The omission of vowels was not always a satisfactory solution and some "weak" consonants are sometimes used to indicate the vowel quality of a syllable (matres lectionis). These letters have a dual function since they are also used as pure consonants.
The Proto-Sinaitic or Proto-Canaanite script and the Ugaritic script were the first scripts with a limited number of signs, in contrast to the other widely used writing systems at the time, Cuneiform, Egyptian hieroglyphs, and Linear B. The Phoenician script was probably the first phonemic script and it contained only about two dozen distinct letters, making it a script simple enough for common traders to learn. Another advantage of Phoenician was that it could be used to write down many different languages, since it recorded words phonemically.
The script was spread by the Phoenicians across the Mediterranean. In Greece, the script was modified to add vowels, giving rise to the ancestor of all alphabets in the West. It was the first alphabet in which vowels have independent letter forms separate from those of consonants. The Greeks chose letters representing sounds that did not exist in Greek to represent vowels. Vowels are significant in the Greek language, and the syllabic Linear B script that was used by the Mycenaean Greeks from the 16th century BC had 87 symbols, including 5 vowels. In its early years, there were many variants of the Greek alphabet, a situation that caused many different alphabets to evolve from it.
The Greek alphabet, in its Euboean form, was carried over by Greek colonists to the Italian peninsula, where it gave rise to a variety of alphabets used to write the Italic languages. One of these became the Latin alphabet, which was spread across Europe as the Romans expanded their empire. Even after the fall of the Roman state, the alphabet survived in intellectual and religious works. It eventually became used for the descendant languages of Latin (the Romance languages) and then for most of the other languages of Europe.
Some adaptations of the Latin alphabet are augmented with ligatures, such as æ in Danish and Icelandic and Ȣ in Algonquian; by borrowings from other alphabets, such as the thorn þ in Old English and Icelandic, which came from the Futhark runes; and by modifying existing letters, such as the eth ð of Old English and Icelandic, which is a modified "d". Other alphabets only use a subset of the Latin alphabet, such as Hawaiian, and Italian, which uses the letters "j, k, x, y" and "w" only in foreign words.
Another notable script is Elder Futhark, which is believed to have evolved out of one of the Old Italic alphabets. Elder Futhark gave rise to a variety of alphabets known collectively as the Runic alphabets. The Runic alphabets were used for Germanic languages from AD 100 to the late Middle Ages. Its usage is mostly restricted to engravings on stone and jewelry, although inscriptions have also been found on bone and wood. These alphabets have since been replaced with the Latin alphabet, except for decorative usage for which the runes remained in use until the 20th century.
The Old Hungarian script is a contemporary writing system of the Hungarians. It was in use during the entire history of Hungary, albeit not as an official writing system. From the 19th century it once again became increasingly popular.
The Glagolitic alphabet was the initial script of the liturgical language Old Church Slavonic and became, together with the Greek uncial script, the basis of the Cyrillic script. Cyrillic is one of the most widely used modern alphabetic scripts, and is notable for its use in Slavic languages and also for other languages within the former Soviet Union. Cyrillic alphabets include the Serbian, Macedonian, Bulgarian, Russian, Belarusian and Ukrainian. The Glagolitic alphabet is believed to have been created by Saints Cyril and Methodius, while the Cyrillic alphabet was invented by Clement of Ohrid, who was their disciple. They feature many letters that appear to have been borrowed from or influenced by the Greek alphabet and the Hebrew alphabet.
The longest European alphabet is the Latin-derived Slovak alphabet which has 46 letters.
Beyond the logographic Chinese writing, many phonetic scripts are in existence in Asia. The Arabic alphabet, Hebrew alphabet, Syriac alphabet, and other abjads of the Middle East are developments of the Aramaic alphabet.
Most alphabetic scripts of India and Eastern Asia are descended from the Brahmi script, which is often believed to be a descendant of Aramaic.
In Korea, the Hangul alphabet was created by Sejong the Great. Hangul is a unique alphabet: it is a featural alphabet, where many of the letters are designed from a sound's place of articulation (P to look like the widened mouth, L to look like the tongue pulled in, etc.); its design was planned by the government of the day; and it places individual letters in syllable clusters with equal dimensions, in the same way as Chinese characters, to allow for mixed-script writing (one syllable always takes up one type-space no matter how many letters get stacked into building that one sound-block).
Zhuyin (sometimes called "Bopomofo") is a semi-syllabary used to phonetically transcribe Mandarin Chinese in the Republic of China. After the later establishment of the People's Republic of China and its adoption of Hanyu Pinyin, the use of Zhuyin today is limited, but it is still widely used in Taiwan where the Republic of China still governs. Zhuyin developed out of a form of Chinese shorthand based on Chinese characters in the early 1900s and has elements of both an alphabet and a syllabary. Like an alphabet the phonemes of syllable initials are represented by individual symbols, but like a syllabary the phonemes of the syllable finals are not; rather, each possible final (excluding the medial glide) is represented by its own symbol. For example, "luan" is represented as ㄌㄨㄢ ("l-u-an"), where the last symbol ㄢ represents the entire final "-an". While Zhuyin is not used as a mainstream writing system, it is still often used in ways similar to a romanization system—that is, for aiding in pronunciation and as an input method for Chinese characters on computers and cellphones.
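The hybrid structure described above, individual symbols for initials and medials but a single symbol for each whole final, can be sketched with a small lookup table (the three mappings shown cover only the "luan" example from the text, not the full Zhuyin inventory):

```python
# Zhuyin is part alphabet, part syllabary: syllable initials and medial
# glides get individual symbols, but each final is a single symbol.
INITIALS = {"l": "ㄌ"}   # initial consonants (one shown for illustration)
MEDIALS = {"u": "ㄨ"}    # medial glides
FINALS = {"an": "ㄢ"}    # whole finals, each with its own symbol

def to_zhuyin(initial: str, medial: str, final: str) -> str:
    """Assemble a syllable's Zhuyin spelling from its structural parts."""
    return INITIALS[initial] + MEDIALS[medial] + FINALS[final]

# "luan" is written l-u-an; the last symbol stands for the entire final "-an".
print(to_zhuyin("l", "u", "an"))  # ㄌㄨㄢ
```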
European alphabets, especially Latin and Cyrillic, have been adapted for many languages of Asia. Arabic is also widely used, sometimes as an abjad (as with Urdu and Persian) and sometimes as a complete alphabet (as with Kurdish and Uyghur).
The term "alphabet" is used by linguists and paleographers in both a wide and a narrow sense. In the wider sense, an alphabet is a script that is "segmental" at the phoneme level—that is, it has separate glyphs for individual sounds and not for larger units such as syllables or words. In the narrower sense, some scholars distinguish "true" alphabets from two other types of segmental script, abjads and abugidas. These three differ from each other in the way they treat vowels: abjads have letters for consonants and leave most vowels unexpressed; abugidas are also consonant-based, but indicate vowels with diacritics or a systematic graphic modification of the consonants. In alphabets in the narrow sense, on the other hand, consonants and vowels are written as independent letters. The earliest known alphabet in the wider sense is the Wadi el-Hol script, believed to be an abjad, which through its successor Phoenician is the ancestor of modern alphabets, including Arabic, Greek, Latin (via the Old Italic alphabet), Cyrillic (via the Greek alphabet) and Hebrew (via Aramaic).
Examples of present-day abjads are the Arabic and Hebrew scripts; true alphabets include Latin, Cyrillic, and Korean hangul; and abugidas are used to write Tigrinya, Amharic, Hindi, and Thai. The Canadian Aboriginal syllabics are also an abugida rather than a syllabary as their name would imply, since each glyph stands for a consonant that is modified by rotation to represent the following vowel. (In a true syllabary, each consonant-vowel combination would be represented by a separate glyph.)
All three types may be augmented with syllabic glyphs. Ugaritic, for example, is basically an abjad, but has syllabic letters for . (These are the only times vowels are indicated.) Cyrillic is basically a true alphabet, but has syllabic letters for (я, е, ю); Coptic has a letter for . Devanagari is typically an abugida augmented with dedicated letters for initial vowels, though some traditions use अ as a zero consonant as the graphic base for such vowels.
The boundaries between the three types of segmental scripts are not always clear-cut. For example, Sorani Kurdish is written in the Arabic script, which is normally an abjad. However, in Kurdish, writing the vowels is mandatory, and full letters are used, so the script is a true alphabet. Other languages may use a Semitic abjad with mandatory vowel diacritics, effectively making them abugidas. On the other hand, the Phagspa script of the Mongol Empire was based closely on the Tibetan abugida, but all vowel marks were written after the preceding consonant rather than as diacritic marks. Although short "a" was not written, as in the Indic abugidas, one could argue that the linear arrangement made this a true alphabet. Conversely, the vowel marks of the Tigrinya abugida and the Amharic abugida (ironically, the original source of the term "abugida") have been so completely assimilated into their consonants that the modifications are no longer systematic and have to be learned as a syllabary rather than as a segmental script. Even more extreme, the Pahlavi abjad eventually became logographic. (See below.)
Thus the primary classification of alphabets reflects how they treat vowels. For tonal languages, further classification can be based on their treatment of tone, though names do not yet exist to distinguish the various types. Some alphabets disregard tone entirely, especially when it does not carry a heavy functional load, as in Somali and many other languages of Africa and the Americas. Such scripts are to tone what abjads are to vowels. Most commonly, tones are indicated with diacritics, the way vowels are treated in abugidas. This is the case for Vietnamese (a true alphabet) and Thai (an abugida). In Thai, tone is determined primarily by the choice of consonant, with diacritics for disambiguation. In the Pollard script, an abugida, vowels are indicated by diacritics, but the placement of the diacritic relative to the consonant is modified to indicate the tone. More rarely, a script may have separate letters for tones, as is the case for Hmong and Zhuang. For most of these scripts, regardless of whether letters or diacritics are used, the most common tone is not marked, just as the most common vowel is not marked in Indic abugidas; in Zhuyin not only is one of the tones unmarked, but there is a diacritic to indicate lack of tone, like the virama of Indic.
The number of letters in an alphabet can be quite small. The Book Pahlavi script, an abjad, had only twelve letters at one point, and may have had even fewer later on. Today the Rotokas alphabet has only twelve letters. (The Hawaiian alphabet is sometimes claimed to be as small, but it actually consists of 18 letters, including the ʻokina and five long vowels. However, Hawaiian Braille has only 13 letters.) While Rotokas has a small alphabet because it has few phonemes to represent (just eleven), Book Pahlavi was small because many letters had been "conflated"—that is, the graphic distinctions had been lost over time, and diacritics were not developed to compensate for this as they were in Arabic, another script that lost many of its distinct letter shapes. For example, a comma-shaped letter represented "g", "d", "y", "k", or "j". However, such apparent simplifications can perversely make a script more complicated. In later Pahlavi papyri, up to half of the remaining graphic distinctions of these twelve letters were lost, and the script could no longer be read as a sequence of letters at all, but instead each word had to be learned as a whole—that is, they had become logograms as in Egyptian Demotic.
The largest segmental script is probably an abugida, Devanagari. When written in Devanagari, Vedic Sanskrit has an alphabet of 53 letters, including the "visarga" mark for final aspiration and special letters for "kš" and "jñ," though one of the letters is theoretical and not actually used. The Hindi alphabet must represent both Sanskrit and modern vocabulary, and so has been expanded to 58 with the "khutma" letters (letters with a dot added) to represent sounds from Persian and English. Thai has a total of 59 symbols, consisting of 44 consonants, 13 vowels and 2 syllabics, not including 4 diacritics for tone marks and one for vowel length.
The largest known abjad is Sindhi, with 51 letters. The largest alphabets in the narrow sense include Kabardian and Abkhaz (for Cyrillic), with 58 and 56 letters, respectively, and Slovak (for the Latin script), with 46. However, these scripts either count di- and tri-graphs as separate letters, as Spanish did with "ch" and "ll" until recently, or use diacritics like Slovak "č".
The Georgian alphabet is an alphabetic writing system. With 33 letters, it is the largest true alphabet in which each letter is graphically independent. The original Georgian alphabet had 38 letters, but five were removed in the 19th century by Ilia Chavchavadze. The Georgian alphabet is much closer to Greek than the other Caucasian alphabets: the letter order parallels the Greek, with the consonants lacking a Greek equivalent organized at the end of the alphabet. The origins of the alphabet are still unknown. Some Armenian and Western scholars believe it was created by Mesrop Mashtots (Armenian: Մեսրոպ Մաշտոց, Mesrop Maštoc'), also known as Mesrob the Vartabed, an early medieval Armenian linguist, theologian, statesman and hymnologist best known for inventing the Armenian alphabet c. 405 AD; other Georgian and Western scholars reject this theory.
Syllabaries typically contain 50 to 400 glyphs, and the glyphs of logographic systems typically number from the many hundreds into the thousands. Thus a simple count of the number of distinct symbols is an important clue to the nature of an unknown script.
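The rule of thumb above, that a simple symbol count hints at the nature of an unknown script, can be phrased as a rough classifier. The thresholds below follow the approximate ranges stated in the text and are not sharp boundaries (Devanagari, for instance, is an abugida with more than 50 letters):

```python
def guess_script_type(distinct_symbols: int) -> str:
    """Rough heuristic: classify an unknown script by its symbol inventory.

    Alphabets (including abjads and abugidas) typically have well under
    50 letters; syllabaries roughly 50-400 glyphs; logographic systems
    many hundreds to thousands.
    """
    if distinct_symbols < 50:
        return "alphabet"
    if distinct_symbols <= 400:
        return "syllabary"
    return "logographic"

print(guess_script_type(12))    # e.g. Rotokas
print(guess_script_type(85))    # a typical syllabary
print(guess_script_type(3000))  # a logographic system
```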
The Armenian alphabet is a graphically unique alphabetic writing system that has been used to write the Armenian language. It was created in 405 AD and originally contained 36 letters. Two more letters, օ (o) and ֆ (f), were added in the Middle Ages. During the 1920s orthography reform, a new letter և (capital ԵՎ) was added, which had previously been a ligature of ե+ւ, while the letter Ւ ւ was discarded and reintroduced as part of a new letter ՈՒ ու (which had previously been a digraph).
The Armenian script's directionality is horizontal left-to-right, like the Latin and Greek alphabets, and like them it is a bicameral script. The Armenian word for "alphabet" is named after the first two letters of the Armenian alphabet, Ա այբ ayb and Բ բեն ben.
Alphabets often come to be associated with a standard ordering of their letters, which can then be used for purposes of collation—namely for the listing of words and other items in what is called "alphabetical order".
The basic ordering of the Latin alphabet (A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y, Z), which is derived from the Northwest Semitic "Abgad" order, is well established, although languages using this alphabet have different conventions for their treatment of modified letters (such as the French "é", "à", and "ô") and of certain combinations of letters (multigraphs). In French, these are not considered to be additional letters for the purposes of collation. However, in Icelandic, the accented letters such as "á", "í", and "ö" are considered distinct letters representing different vowel sounds from the sounds represented by their unaccented counterparts. In Spanish, "ñ" is considered a separate letter, but accented vowels such as "á" and "é" are not. The "ll" and "ch" were also considered single letters, but in 1994 the Real Academia Española changed the collating order so that "ll" is between "lk" and "lm" in the dictionary and "ch" is between "cg" and "ci", and in 2010 the tenth congress of the Association of Spanish Language Academies changed it so they were no longer letters at all.
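The older Spanish convention, in which "ch" and "ll" sorted as single letters placed after "c" and "l", can be modeled with a sort key that tokenizes digraphs before ranking letters. This is a sketch over a handful of words, not a full Spanish collation:

```python
# Pre-1994 Spanish collation treated "ch" and "ll" as single letters,
# ordered immediately after "c" and "l". Tokenize digraphs greedily,
# then rank each token: every "c..." word precedes every "ch..." word.
ORDER = ["a", "b", "c", "ch", "d", "e", "f", "g", "h", "i", "j", "k",
         "l", "ll", "m", "n", "ñ", "o", "p", "q", "r", "s", "t", "u",
         "v", "w", "x", "y", "z"]
RANK = {tok: i for i, tok in enumerate(ORDER)}

def tokenize(word):
    tokens, i = [], 0
    while i < len(word):
        if word[i:i + 2] in ("ch", "ll"):  # digraphs count as one letter
            tokens.append(word[i:i + 2])
            i += 2
        else:
            tokens.append(word[i])
            i += 1
    return tokens

def old_spanish_key(word):
    return [RANK[t] for t in tokenize(word)]

words = ["cuna", "chico", "dama", "luz", "llama", "mano"]
# Old rules: "chico" sorts after "cuna", and "llama" after "luz".
print(sorted(words, key=old_spanish_key))
```

Under the post-2010 rules, a plain character-by-character sort would instead place "chico" before "cuna" and "llama" before "luz".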
In German, words starting with "sch-" (which spells the German phoneme ) are inserted between words with initial "sca-" and "sci-" (all incidentally loanwords) instead of appearing after initial "sz", as though "sch" were a single letter. This contrasts with several languages such as Albanian, in which "dh-", "ë-", "gj-", "ll-", "nj-", "rr-", "th-", "xh-" and "zh-" (all representing phonemes and considered separate single letters) follow the letters "d", "e", "g", "l", "n", "r", "t", "x" and "z" respectively, as well as with Hungarian and Welsh. Further, German words with an umlaut are collated ignoring the umlaut, unlike Turkish, which adopted the graphemes "ö" and "ü", so that a word like "tüfek" comes after "tuz" in the dictionary. An exception is the German telephone directory, where umlauts are sorted like "ä" = "ae", since names such as "Jäger" also appear with the spelling "Jaeger" and are not distinguished in the spoken language.
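The telephone-directory convention, where "ä" collates as "ae" (and by the same logic "ö" as "oe" and "ü" as "ue"), amounts to folding umlauts before comparing. A sketch; real directories also handle case, "ß", and many other details:

```python
# German telephone-directory sorting: umlauts are expanded to their
# two-letter equivalents before comparison, so "Jäger" and "Jaeger"
# sort to the same position.
FOLD = str.maketrans({"ä": "ae", "ö": "oe", "ü": "ue", "ß": "ss"})

def directory_key(name: str) -> str:
    return name.lower().translate(FOLD)

names = ["Jaeger", "Jäger", "Jahn", "Jung"]
print(sorted(names, key=directory_key))
# "Jäger" sorts exactly as "Jaeger"; both precede "Jahn" ("ae" < "ah").
```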
The Danish and Norwegian alphabets end with "æ"—"ø"—"å", whereas the Swedish and Finnish ones conventionally put "å"—"ä"—"ö" at the end.
It is unknown whether the earliest alphabets had a defined sequence. Some alphabets today, such as the Hanuno'o script, are learned one letter at a time, in no particular order, and are not used for collation where a definite order is required. However, a dozen Ugaritic tablets from the fourteenth century BC preserve the alphabet in two sequences. One, the "ABCDE" order later used in Phoenician, has continued with minor changes in Hebrew, Greek, Armenian, Gothic, Cyrillic, and Latin; the other, "HMĦLQ," was used in southern Arabia and is preserved today in Ethiopic. Both orders have therefore been stable for at least 3000 years.
Runic used an unrelated Futhark sequence, which was later simplified. Arabic uses its own sequence, although Arabic retains the traditional abjadi order for numbering.
The Brahmic family of alphabets used in India uses a unique order based on phonology: the letters are arranged according to how and where they are produced in the mouth. This organization is also used in Southeast Asia, Tibet, Korean hangul, and even Japanese kana, which is not an alphabet.
The Phoenician letter names, in which each letter was associated with a word that begins with that sound (acrophony), continue to be used to varying degrees in Samaritan, Aramaic, Syriac, Hebrew, Greek and Arabic.
The names were abandoned in Latin, which instead referred to the letters by adding a vowel (usually e) before or after the consonant; the two exceptions were Y and Z, which were borrowed from the Greek alphabet rather than Etruscan, and were known as "Y Graeca" "Greek Y" (pronounced "I Graeca" "Greek I") and "zeta" (from Greek)—this discrepancy was inherited by many European languages, as in the term "zed" for Z in all forms of English other than American English. Over time names sometimes shifted or were added, as in "double U" for W ("double V" in French), the English name for Y, and American "zee" for Z. Comparing names in English and French gives a clear reflection of the Great Vowel Shift: A, B, C and D are pronounced in today's English, but in contemporary French they are . The French names (from which the English names are derived) preserve the qualities of the English vowels from before the Great Vowel Shift. By contrast, the names of F, L, M, N and S () remain the same in both languages, because "short" vowels were largely unaffected by the Shift.
In Cyrillic, the letters were originally given names based on Slavic words; this was later abandoned in favor of a system similar to that used in Latin.
The letters of the Armenian alphabet also have distinct names.
When an alphabet is adopted or developed to represent a given language, an orthography generally comes into being, providing rules for the spelling of words in that language. In accordance with the principle on which alphabets are based, these rules will generally map letters of the alphabet to the phonemes (significant sounds) of the spoken language. In a perfectly phonemic orthography there would be a consistent one-to-one correspondence between the letters and the phonemes, so that a writer could predict the spelling of a word given its pronunciation, and a speaker would always know the pronunciation of a word given its spelling. However, this ideal is not usually achieved in practice; some languages (such as Spanish and Finnish) come close to it, while others (such as English) deviate from it to a much larger degree.
The pronunciation of a language often evolves independently of its writing system, and writing systems have been borrowed for languages they were not designed for, so the degree to which letters of an alphabet correspond to phonemes of a language varies greatly from one language to another and even within a single language.
Languages may fail to achieve a one-to-one correspondence between letters and sounds in any of several ways:
National languages sometimes elect to address the problem of dialects by simply associating the alphabet with the national standard. Some national languages like Finnish, Armenian, Turkish, Russian, Serbo-Croatian (Serbian, Croatian and Bosnian) and Bulgarian have a very regular spelling system with a nearly one-to-one correspondence between letters and phonemes. Strictly speaking, these national languages lack a word corresponding to the verb "to spell" (meaning to split a word into its letters), the closest match being a verb meaning to split a word into its syllables. Similarly, the Italian verb corresponding to 'spell (out)', "compitare", is unknown to many Italians because spelling is usually trivial, as Italian spelling is highly phonemic. In standard Spanish, one can tell the pronunciation of a word from its spelling, but not vice versa, as certain phonemes can be represented in more than one way, but a given letter is consistently pronounced. French, with its silent letters and its heavy use of nasal vowels and elision, may seem to lack much correspondence between spelling and pronunciation, but its rules on pronunciation, though complex, are actually consistent and predictable with a fair degree of accuracy.
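The Spanish asymmetry described above, where spelling determines pronunciation but not the reverse, comes from letters such as "b" and "v", which are pronounced identically. A toy mapping makes the point; the phoneme table below covers only the letters needed for this one example:

```python
from itertools import product

# Each letter has one predictable sound, but one sound may have several
# spellings: in Spanish, "b" and "v" are both pronounced /b/.
LETTER_TO_PHONEME = {"b": "b", "v": "b", "a": "a", "s": "s", "t": "t", "o": "o"}

def pronounce(word: str) -> str:
    """Spelling -> pronunciation is deterministic."""
    return "".join(LETTER_TO_PHONEME[c] for c in word)

# "basto" (coarse) and "vasto" (vast) are distinct words with one sound:
print(pronounce("basto"), pronounce("vasto"))  # both are /basto/

def spellings(phonemes: str) -> set:
    """Pronunciation -> spelling is ambiguous: enumerate all candidates."""
    options = [[l for l, p in LETTER_TO_PHONEME.items() if p == ph]
               for ph in phonemes]
    return {"".join(combo) for combo in product(*options)}

print(spellings("basto"))  # {'basto', 'vasto'}: two valid spellings
```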
At the other extreme are languages such as English, where the pronunciations of many words simply have to be memorized as they do not correspond to the spelling in a consistent way. For English, this is partly because the Great Vowel Shift occurred after the orthography was established, and because English has acquired a large number of loanwords at different times, retaining their original spelling at varying levels. Even English has general, albeit complex, rules that predict pronunciation from spelling, and these rules are successful most of the time; rules to predict spelling from the pronunciation have a higher failure rate.
Sometimes, countries have the written language undergo a spelling reform to realign the writing with the contemporary spoken language. These can range from simple spelling changes and word forms to switching the entire writing system itself, as when Turkey switched from the Arabic alphabet to a Latin-based Turkish alphabet.
The standard system of symbols used by linguists to represent sounds in any language, independently of orthography, is called the International Phonetic Alphabet.
Atomic number
The atomic number or proton number (symbol "Z") of a chemical element is the number of protons found in the nucleus of every atom of that element. The atomic number uniquely identifies a chemical element. It is identical to the charge number of the nucleus. In an uncharged atom, the atomic number is also equal to the number of electrons.
The sum of the atomic number "Z" and the number of neutrons "N" gives the mass number "A" of an atom. Since protons and neutrons have approximately the same mass (and the mass of the electrons is negligible for many purposes) and the mass defect of nucleon binding is always small compared to the nucleon mass, the atomic mass of any atom, when expressed in unified atomic mass units (making a quantity called the "relative isotopic mass"), is within 1% of the whole number "A".
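The within-1% claim is easy to check numerically for a few isotopes. The atomic masses below are standard published values, rounded to five decimal places:

```python
# Check that the atomic mass in unified atomic mass units (u) stays
# within 1% of the mass number A = Z + N for a sample of isotopes.
isotopes = {
    # name: (Z, N, atomic mass in u)
    "H-1":   (1, 0, 1.00783),
    "C-12":  (6, 6, 12.00000),   # exact by definition of the unit
    "Fe-56": (26, 30, 55.93494),
    "U-238": (92, 146, 238.05079),
}

for name, (Z, N, mass) in isotopes.items():
    A = Z + N                      # mass number
    deviation = abs(mass - A) / A  # relative difference from A
    print(f"{name}: A={A}, mass={mass} u, deviation={deviation:.3%}")
    assert deviation < 0.01        # within 1%, as stated
```

The largest deviation in this sample is hydrogen-1, at about 0.8%; heavier nuclides sit much closer to the whole number A.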
Atoms with the same atomic number but different neutron numbers, and hence different mass numbers, are known as isotopes. A little more than three-quarters of naturally occurring elements exist as a mixture of isotopes (see monoisotopic elements), and the average isotopic mass of an isotopic mixture for an element (called the relative atomic mass) in a defined environment on Earth, determines the element's standard atomic weight. Historically, it was these atomic weights of elements (in comparison to hydrogen) that were the quantities measurable by chemists in the 19th century.
The conventional symbol "Z" comes from the German word "Zahl", meaning "number", which, before the modern synthesis of ideas from chemistry and physics, merely denoted an element's numerical place in the periodic table, whose order is approximately, but not completely, consistent with the order of the elements by atomic weights. Only after 1915, with the suggestion and evidence that this "Z" number was also the nuclear charge and a physical characteristic of atoms, did the word "Atomzahl" (and its English equivalent "atomic number") come into common use in this context.
Loosely speaking, the existence or construction of a periodic table of elements creates an ordering of the elements, and so they can be numbered in order.
Dmitri Mendeleev claimed that he arranged his first periodic tables (first published on March 6, 1869) in order of atomic weight ("Atomgewicht"). However, in consideration of the elements' observed chemical properties, he changed the order slightly and placed tellurium (atomic weight 127.6) ahead of iodine (atomic weight 126.9). This placement is consistent with the modern practice of ordering the elements by proton number, "Z", but that number was not known or suspected at the time.
A simple numbering based on periodic table position was never entirely satisfactory, however. Besides the case of iodine and tellurium, later several other pairs of elements (such as argon and potassium, cobalt and nickel) were known to have nearly identical or reversed atomic weights, thus requiring their placement in the periodic table to be determined by their chemical properties. However the gradual identification of more and more chemically similar lanthanide elements, whose atomic number was not obvious, led to inconsistency and uncertainty in the periodic numbering of elements at least from lutetium (element 71) onward (hafnium was not known at this time).
In 1911, Ernest Rutherford gave a model of the atom in which a central nucleus held most of the atom's mass and a positive charge which, in units of the electron's charge, was to be approximately equal to half of the atom's atomic weight, expressed in numbers of hydrogen atoms. This central charge would thus be approximately half the atomic weight (though it was almost 25% different from the atomic number of gold ("Z" = 79), the single element from which Rutherford made his guess). Nevertheless, in spite of Rutherford's estimation that gold had a central charge of about 100 (but was element 79 on the periodic table), a month after Rutherford's paper appeared, Antonius van den Broek first formally suggested that the central charge and number of electrons in an atom were "exactly" equal to its place in the periodic table (also known as element number, atomic number, and symbolized "Z"). This proved eventually to be the case.
The experimental position improved dramatically after research by Henry Moseley in 1913. Moseley, after discussions with Bohr who was at the same lab (and who had used Van den Broek's hypothesis in his Bohr model of the atom), decided to test Van den Broek's and Bohr's hypothesis directly, by seeing if spectral lines emitted from excited atoms fitted the Bohr theory's postulation that the frequency of the spectral lines be proportional to the square of "Z".
To do this, Moseley measured the wavelengths of the innermost photon transitions (K and L lines) produced by the elements from aluminum ("Z" = 13) to gold ("Z" = 79) used as a series of movable anodic targets inside an x-ray tube. The square root of the frequency of these photons increased from one target to the next in an arithmetic progression. This led to the conclusion (Moseley's law) that the atomic number does closely correspond (with an offset of one unit for K-lines, in Moseley's work) to the calculated electric charge of the nucleus, i.e. the element number "Z". Among other things, Moseley demonstrated that the lanthanide series (from lanthanum to lutetium inclusive) must have 15 members—no fewer and no more—which was far from obvious from known chemistry at that time.
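Moseley's observation, that the square root of the K-line frequency increases by a constant step from one element to the next, follows from his law for K-alpha lines, ν ≈ ν₀(Z − 1)². The value ν₀ ≈ 2.47 × 10¹⁵ Hz (three-quarters of the Rydberg frequency) is the standard textbook constant, supplied here as an assumption since the text does not state it:

```python
import math

NU0 = 2.47e15  # Hz; (3/4) of the Rydberg frequency, for K-alpha lines

def k_alpha_frequency(Z: int) -> float:
    """Moseley's law for K-alpha X-ray lines: nu = nu0 * (Z - 1)**2."""
    return NU0 * (Z - 1) ** 2

# From aluminium (Z = 13) upward, sqrt(frequency) forms an arithmetic
# progression: consecutive differences are constant.
roots = [math.sqrt(k_alpha_frequency(Z)) for Z in range(13, 20)]
steps = [b - a for a, b in zip(roots, roots[1:])]

print([f"{s:.4e}" for s in steps])
# Every step equals sqrt(nu0), about 4.97e7: the regularity Moseley saw.
```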
After Moseley's death in 1915, the atomic numbers of all known elements from hydrogen to uranium ("Z" = 92) were examined by his method. Seven elements with "Z" < 92 were found to be missing (Eric Scerri, "A Tale of Seven Elements", Oxford University Press, 2013, p. 47). From 1918 to 1947, all seven of these missing elements were discovered. By this time, the first four transuranium elements had also been discovered, so that the periodic table was complete with no gaps as far as curium ("Z" = 96).
In 1915, the reason for nuclear charge being quantized in units of "Z", which were now recognized to be the same as the element number, was not understood. An old idea called Prout's hypothesis had postulated that the elements were all made of residues (or "protyles") of the lightest element hydrogen, which in the Bohr-Rutherford model had a single electron and a nuclear charge of one. However, as early as 1907, Rutherford and Thomas Royds had shown that alpha particles, which had a charge of +2, were the nuclei of helium atoms, which had a mass four times that of hydrogen, not two times. If Prout's hypothesis were true, something had to be neutralizing some of the charge of the hydrogen nuclei present in the nuclei of heavier atoms.
In 1917, Rutherford succeeded in generating hydrogen nuclei from a nuclear reaction between alpha particles and nitrogen gas, and believed he had proven Prout's law. He called the new heavy nuclear particles protons in 1920 (alternate names being proutons and protyles). It had been immediately apparent from the work of Moseley that the nuclei of heavy atoms have more than twice as much mass as would be expected from their being made of hydrogen nuclei, and thus there was required a hypothesis for the neutralization of the extra protons presumed present in all heavy nuclei. A helium nucleus was presumed to be composed of four protons plus two "nuclear electrons" (electrons bound inside the nucleus) to cancel two of the charges. At the other end of the periodic table, a nucleus of gold with a mass 197 times that of hydrogen was thought to contain 118 nuclear electrons in the nucleus to give it a residual charge of +79, consistent with its atomic number.
All consideration of nuclear electrons ended with James Chadwick's discovery of the neutron in 1932. An atom of gold now was seen as containing 118 neutrons rather than 118 nuclear electrons, and its positive charge now was realized to come entirely from a content of 79 protons. After 1932, therefore, an element's atomic number "Z" was also realized to be identical to the proton number of its nuclei.
The conventional symbol "Z" possibly comes from the German word "Atomzahl" (atomic number). However, prior to 1915, the word "Zahl" (simply "number") was used for an element's assigned number in the periodic table.
Each element has a specific set of chemical properties as a consequence of the number of electrons present in the neutral atom, which is "Z" (the atomic number). The configuration of these electrons follows from the principles of quantum mechanics. The number of electrons in each element's electron shells, particularly the outermost valence shell, is the primary factor in determining its chemical bonding behavior. Hence, it is the atomic number alone that determines the chemical properties of an element; and it is for this reason that an element can be defined as consisting of "any" mixture of atoms with a given atomic number.
The quest for new elements is usually described using atomic numbers. As of 2019, all elements with atomic numbers 1 to 118 have been observed. Synthesis of new elements is accomplished by bombarding target atoms of heavy elements with ions, such that the sum of the atomic numbers of the target and ion elements equals the atomic number of the element being created. In general, the half-life becomes shorter as atomic number increases, though an "island of stability" may exist for undiscovered isotopes with certain numbers of protons and neutrons.
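The bookkeeping rule for synthesis, target Z plus projectile Z equals product Z, is simple proton conservation. For example, oganesson (Z = 118) was produced by firing calcium-48 ions (Z = 20) at a californium target (Z = 98):

```python
def product_atomic_number(target_Z: int, projectile_Z: int) -> int:
    """Atomic number of the fused nucleus: protons are conserved."""
    return target_Z + projectile_Z

# Oganesson synthesis: californium target, calcium-48 beam.
Z_californium = 98
Z_calcium = 20
print(product_atomic_number(Z_californium, Z_calcium))  # 118
```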
Anatomy
Anatomy (Greek "anatomē", 'dissection') is the branch of biology concerned with the study of the structure of organisms and their parts. Anatomy is a branch of natural science which deals with the structural organization of living things. It is an old science, having its beginnings in prehistoric times. Anatomy is inherently tied to developmental biology, embryology, comparative anatomy, evolutionary biology, and phylogeny, as these are the processes by which anatomy is generated, both over immediate and long-term timescales. Anatomy and physiology, which study the structure and function of organisms and their parts respectively, make a natural pair of related disciplines, and are often studied together. Human anatomy is one of the essential basic sciences that are applied in medicine.
The discipline of anatomy is divided into macroscopic and microscopic. Macroscopic anatomy, or gross anatomy, is the examination of an animal's body parts using unaided eyesight. Gross anatomy also includes the branch of superficial anatomy. Microscopic anatomy involves the use of optical instruments in the study of the tissues of various structures, known as histology, and also in the study of cells.
The history of anatomy is characterized by a progressive understanding of the functions of the organs and structures of the human body. Methods have also improved dramatically, advancing from the examination of animals by dissection of carcasses and cadavers (corpses) to 20th century medical imaging techniques including X-ray, ultrasound, and magnetic resonance imaging.
Derived from the Greek "anatomē" "dissection" (from "anatémnō" "I cut up, cut open" from ἀνά "aná" "up", and τέμνω "témnō" "I cut"), anatomy is the scientific study of the structure of organisms including their systems, organs and tissues. It includes the appearance and position of the various parts, the materials from which they are composed, their locations and their relationships with other parts. Anatomy is quite distinct from physiology and biochemistry, which deal respectively with the functions of those parts and the chemical processes involved. For example, an anatomist is concerned with the shape, size, position, structure, blood supply and innervation of an organ such as the liver; while a physiologist is interested in the production of bile, the role of the liver in nutrition and the regulation of bodily functions.
The discipline of anatomy can be subdivided into a number of branches including gross or macroscopic anatomy and microscopic anatomy. Gross anatomy is the study of structures large enough to be seen with the naked eye, and also includes superficial anatomy or surface anatomy, the study by sight of the external body features. Microscopic anatomy is the study of structures on a microscopic scale, along with histology (the study of tissues), and embryology (the study of an organism in its immature condition).
Anatomy can be studied using both invasive and non-invasive methods with the goal of obtaining information about the structure and organization of organs and systems. Methods used include dissection, in which a body is opened and its organs studied, and endoscopy, in which a video camera-equipped instrument is inserted through a small incision in the body wall and used to explore the internal organs and other structures. Angiography using X-rays or magnetic resonance angiography are methods to visualize blood vessels.
The term "anatomy" is commonly taken to refer to human anatomy. However, substantially the same structures and tissues are found throughout the rest of the animal kingdom and the term also includes the anatomy of other animals. The term "zootomy" is also sometimes used to specifically refer to non-human animals. The structure and tissues of plants are of a dissimilar nature and they are studied in plant anatomy.
The kingdom Animalia contains multicellular organisms that are heterotrophic and motile (although some have secondarily adopted a sessile lifestyle). Most animals have bodies differentiated into separate tissues and these animals are also known as eumetazoans. They have an internal digestive chamber, with one or two openings; the gametes are produced in multicellular sex organs, and the zygotes include a blastula stage in their embryonic development. Metazoans do not include the sponges, which have undifferentiated cells.
Unlike plant cells, animal cells have neither a cell wall nor chloroplasts. Vacuoles, when present, are more numerous and much smaller than those in the plant cell. The body tissues are composed of numerous types of cell, including those found in muscles, nerves and skin. Each typically has a cell membrane formed of phospholipids, cytoplasm and a nucleus. All of the different cells of an animal are derived from the embryonic germ layers. Those simpler invertebrates which are formed from two germ layers of ectoderm and endoderm are called diploblastic and the more developed animals whose structures and organs are formed from three germ layers are called triploblastic. All of a triploblastic animal's tissues and organs are derived from the three germ layers of the embryo, the ectoderm, mesoderm and endoderm.
Animal tissues can be grouped into four basic types: connective, epithelial, muscle and nervous tissue.
Connective tissues are fibrous and made up of cells scattered among non-living material called the extracellular matrix. Connective tissue gives shape to organs and holds them in place. The main types are loose connective tissue, adipose tissue, fibrous connective tissue, cartilage and bone. The extracellular matrix contains proteins, the chief and most abundant of which is collagen. Collagen plays a major part in organizing and maintaining tissues. The matrix can be modified to form a skeleton to support or protect the body. An exoskeleton is a thickened, rigid cuticle which is stiffened by mineralization, as in crustaceans, or by the cross-linking of its proteins, as in insects. An endoskeleton is internal and present in all developed animals, as well as in many of those less developed.
Epithelial tissue is composed of closely packed cells, bound to each other by cell adhesion molecules, with little intercellular space. Epithelial cells can be squamous (flat), cuboidal or columnar and rest on a basal lamina, the upper layer of the basement membrane; the lower layer is the reticular lamina, which lies next to the connective tissue in the extracellular matrix secreted by the epithelial cells. There are many different types of epithelium, modified to suit a particular function. In the respiratory tract there is a type of ciliated epithelial lining; in the small intestine there are microvilli on the epithelial lining and in the large intestine there are intestinal villi. Skin consists of an outer layer of keratinized stratified squamous epithelium that covers the exterior of the vertebrate body. Keratinocytes make up to 95% of the cells in the skin. The epithelial cells on the external surface of the body typically secrete an extracellular matrix in the form of a cuticle. In simple animals this may just be a coat of glycoproteins. In more advanced animals, many glands are formed of epithelial cells.
Muscle cells (myocytes) form the active contractile tissue of the body. Muscle tissue functions to produce force and cause motion, either locomotion or movement within internal organs. Muscle is formed of contractile filaments and is separated into three main types: smooth muscle, skeletal muscle and cardiac muscle. Smooth muscle has no striations when examined microscopically. It contracts slowly but maintains contractility over a wide range of stretch lengths. It is found in such organs as sea anemone tentacles and the body wall of sea cucumbers. Skeletal muscle contracts rapidly but has a limited range of extension. It is found in the movement of appendages and jaws. Obliquely striated muscle is intermediate between the other two. The filaments are staggered and this is the type of muscle found in earthworms that can extend slowly or make rapid contractions. In higher animals striated muscles occur in bundles attached to bone to provide movement and are often arranged in antagonistic sets. Smooth muscle is found in the walls of the uterus, bladder, intestines, stomach, oesophagus, respiratory airways, and blood vessels. Cardiac muscle is found only in the heart, allowing it to contract and pump blood round the body.
Nervous tissue is composed of many nerve cells known as neurons which transmit information. In some slow-moving radially symmetrical marine animals such as ctenophores and cnidarians (including sea anemones and jellyfish), the nerves form a nerve net, but in most animals they are organized longitudinally into bundles. In simple animals, receptor neurons in the body wall cause a local reaction to a stimulus. In more complex animals, specialized receptor cells such as chemoreceptors and photoreceptors are found in groups and send messages along neural networks to other parts of the organism. Neurons can be connected together in ganglia. In higher animals, specialized receptors are the basis of sense organs and there is a central nervous system (brain and spinal cord) and a peripheral nervous system. The latter consists of sensory nerves that transmit information from sense organs and motor nerves that influence target organs. The peripheral nervous system is divided into the somatic nervous system which conveys sensation and controls voluntary muscle, and the autonomic nervous system which involuntarily controls smooth muscle, certain glands and internal organs, including the stomach.
All vertebrates have a similar basic body plan and at some point in their lives, mostly in the embryonic stage, share the major chordate characteristics; a stiffening rod, the notochord; a dorsal hollow tube of nervous material, the neural tube; pharyngeal arches; and a tail posterior to the anus. The spinal cord is protected by the vertebral column and is above the notochord and the gastrointestinal tract is below it. Nervous tissue is derived from the ectoderm, connective tissues are derived from mesoderm, and gut is derived from the endoderm. At the posterior end is a tail which continues the spinal cord and vertebrae but not the gut. The mouth is found at the anterior end of the animal, and the anus at the base of the tail. The defining characteristic of a vertebrate is the vertebral column, formed in the development of the segmented series of vertebrae. In most vertebrates the notochord becomes the nucleus pulposus of the intervertebral discs. However, a few vertebrates, such as the sturgeon and the coelacanth retain the notochord into adulthood. Jawed vertebrates are typified by paired appendages, fins or legs, which may be secondarily lost. The limbs of vertebrates are considered to be homologous because the same underlying skeletal structure was inherited from their last common ancestor. This is one of the arguments put forward by Charles Darwin to support his theory of evolution.
The body of a fish is divided into a head, trunk and tail, although the divisions between the three are not always externally visible. The skeleton, which forms the support structure inside the fish, is either made of cartilage, in cartilaginous fish, or bone in bony fish. The main skeletal element is the vertebral column, composed of articulating vertebrae which are lightweight yet strong. The ribs attach to the spine and there are no limbs or limb girdles. The main external features of the fish, the fins, are composed of either bony or soft spines called rays, which with the exception of the caudal fins, have no direct connection with the spine. They are supported by the muscles which compose the main part of the trunk. The heart has two chambers and pumps the blood through the respiratory surfaces of the gills and on round the body in a single circulatory loop. The eyes are adapted for seeing underwater and have only local vision. There is an inner ear but no external or middle ear. Low frequency vibrations are detected by the lateral line system of sense organs that run along the length of the sides of fish, and these respond to nearby movements and to changes in water pressure.
Sharks and rays are basal fish with numerous primitive anatomical features similar to those of ancient fish, including skeletons composed of cartilage. Their bodies tend to be dorso-ventrally flattened, they usually have five pairs of gill slits and a large mouth set on the underside of the head. The dermis is covered with separate dermal placoid scales. They have a cloaca into which the urinary and genital passages open, but not a swim bladder. Cartilaginous fish produce a small number of large, yolky eggs. Some species are ovoviviparous and the young develop internally but others are oviparous and the larvae develop externally in egg cases.
The bony fish lineage shows more derived anatomical traits, often with major evolutionary changes from the features of ancient fish. They have a bony skeleton, are generally laterally flattened, have five pairs of gills protected by an operculum, and a mouth at or near the tip of the snout. The dermis is covered with overlapping scales. Bony fish have a swim bladder which helps them maintain a constant depth in the water column, but not a cloaca. They mostly spawn a large number of small eggs with little yolk which they broadcast into the water column.
Amphibians are a class of animals comprising frogs, salamanders and caecilians. They are tetrapods, but the caecilians and a few species of salamander have either no limbs or their limbs are much reduced in size. Their main bones are hollow, lightweight and fully ossified, and the vertebrae interlock with each other and have articular processes. Their ribs are usually short and may be fused to the vertebrae. Their skulls are mostly broad and short, and are often incompletely ossified. Their skin contains little keratin and lacks scales, but contains many mucous glands and in some species, poison glands. The hearts of amphibians have three chambers, two atria and one ventricle. They have a urinary bladder and nitrogenous waste products are excreted primarily as urea. Amphibians breathe by means of buccal pumping, a pump action in which air is first drawn into the buccopharyngeal region through the nostrils. The nostrils are then closed and the air is forced into the lungs by contraction of the throat. They supplement this with gas exchange through the skin, which needs to be kept moist.
In frogs the pelvic girdle is robust and the hind legs are much longer and stronger than the forelimbs. The feet have four or five digits and the toes are often webbed for swimming or have suction pads for climbing. Frogs have large eyes and no tail. Salamanders resemble lizards in appearance; their short legs project sideways, the belly is close to or in contact with the ground and they have a long tail. Caecilians superficially resemble earthworms and are limbless. They burrow by means of zones of muscle contractions which move along the body and they swim by undulating their body from side to side.
Reptiles are a class of animals comprising turtles, tuataras, lizards, snakes and crocodiles. They are tetrapods, but the snakes and a few species of lizard either have no limbs or their limbs are much reduced in size. Their bones are better ossified and their skeletons stronger than those of amphibians. The teeth are conical and mostly uniform in size. The surface cells of the epidermis are modified into horny scales which create a waterproof layer. Reptiles are unable to use their skin for respiration as amphibians do and have a more efficient respiratory system, drawing air into their lungs by expanding their chest walls. The heart resembles that of the amphibian but there is a septum which more completely separates the oxygenated and deoxygenated bloodstreams. The reproductive system has evolved for internal fertilization, with a copulatory organ present in most species. The eggs are surrounded by amniotic membranes which prevent them from drying out and are laid on land, or develop internally in some species. The bladder is small as nitrogenous waste is excreted as uric acid.
Turtles are notable for their protective shells. They have an inflexible trunk encased in a horny carapace above and a plastron below. These are formed from bony plates embedded in the dermis which are overlain by horny ones and are partially fused with the ribs and spine. The neck is long and flexible and the head and the legs can be drawn back inside the shell. Many turtles are herbivorous, and the typical reptile teeth have been replaced by sharp, horny plates. In aquatic species, the front legs are modified into flippers.
Tuataras superficially resemble lizards but the lineages diverged in the Triassic period. There is one living species, "Sphenodon punctatus". The skull has two openings (fenestrae) on either side and the jaw is rigidly attached to the skull. There is one row of teeth in the lower jaw and this fits between the two rows in the upper jaw when the animal chews. The teeth are merely projections of bony material from the jaw and eventually wear down. The brain and heart are more primitive than those of other reptiles, and the lungs have a single chamber and lack bronchi. The tuatara has a well-developed parietal eye on its forehead.
Lizards have skulls with only one fenestra on each side, the lower bar of bone below the second fenestra having been lost. This results in the jaws being less rigidly attached which allows the mouth to open wider. Lizards are mostly quadrupeds, with the trunk held off the ground by short, sideways-facing legs, but a few species have no limbs and resemble snakes. Lizards have moveable eyelids, eardrums are present and some species have a central parietal eye.
Snakes are closely related to lizards, having branched off from a common ancestral lineage during the Cretaceous period, and they share many of the same features. The skeleton consists of a skull, a hyoid bone, spine and ribs though a few species retain a vestige of the pelvis and rear limbs in the form of pelvic spurs. The bar under the second fenestra has also been lost and the jaws have extreme flexibility allowing the snake to swallow its prey whole. Snakes lack moveable eyelids, the eyes being covered by transparent "spectacle" scales. They do not have eardrums but can detect ground vibrations through the bones of their skull. Their forked tongues are used as organs of taste and smell and some species have sensory pits on their heads enabling them to locate warm-blooded prey.
Crocodilians are large, low-slung aquatic reptiles with long snouts and large numbers of teeth. The head and trunk are dorso-ventrally flattened and the tail is laterally compressed. It undulates from side to side to force the animal through the water when swimming. The tough keratinized scales provide body armour and some are fused to the skull. The nostrils, eyes and ears are elevated above the top of the flat head enabling them to remain above the surface of the water when the animal is floating. Valves seal the nostrils and ears when it is submerged. Unlike other reptiles, crocodilians have hearts with four chambers allowing complete separation of oxygenated and deoxygenated blood.
Birds are tetrapods; though their hind limbs are used for walking or hopping, their front limbs are wings covered with feathers and adapted for flight. Birds are endothermic, have a high metabolic rate, a light skeletal system and powerful muscles. The long bones are thin, hollow and very light. Air sac extensions from the lungs occupy the centre of some bones. The sternum is wide and usually has a keel and the caudal vertebrae are fused. There are no teeth and the narrow jaws are adapted into a horn-covered beak. The eyes are relatively large, particularly in nocturnal species such as owls. They face forwards in predators and sideways in ducks.
The feathers are outgrowths of the epidermis and are found in localized bands from where they fan out over the skin. Large flight feathers are found on the wings and tail, contour feathers cover the bird's surface and fine down occurs on young birds and under the contour feathers of water birds. The only cutaneous gland is the single uropygial gland near the base of the tail. This produces an oily secretion that waterproofs the feathers when the bird preens. There are scales on the legs and feet, and claws on the tips of the toes.
Mammals are a diverse class of animals, mostly terrestrial but some are aquatic and others have evolved flapping or gliding flight. They mostly have four limbs but some aquatic mammals have no limbs or limbs modified into fins and the forelimbs of bats are modified into wings. The legs of most mammals are situated below the trunk, which is held well clear of the ground. The bones of mammals are well ossified and their teeth, which are usually differentiated, are coated in a layer of prismatic enamel. The teeth are shed once (milk teeth) during the animal's lifetime or not at all, as is the case in cetaceans. Mammals have three bones in the middle ear and a cochlea in the inner ear. They are clothed in hair and their skin contains glands which secrete sweat. Some of these glands are specialized as mammary glands, producing milk to feed the young. Mammals breathe with lungs and have a muscular diaphragm separating the thorax from the abdomen which helps them draw air into the lungs. The mammalian heart has four chambers and oxygenated and deoxygenated blood are kept entirely separate. Nitrogenous waste is excreted primarily as urea.
Mammals are amniotes, and most are viviparous, giving birth to live young. The exception to this are the egg-laying monotremes, the platypus and the echidnas of Australia. Most other mammals have a placenta through which the developing foetus obtains nourishment, but in marsupials, the foetal stage is very short and the immature young is born and finds its way to its mother's pouch where it latches on to a nipple and completes its development.
Humans have the overall body plan of a mammal. Humans have a head, neck, trunk (which includes the thorax and abdomen), two arms and hands, and two legs and feet.
Generally, students of certain biological sciences, paramedics, prosthetists and orthotists, physiotherapists, occupational therapists, nurses, podiatrists, and medical students learn gross anatomy and microscopic anatomy from anatomical models, skeletons, textbooks, diagrams, photographs, lectures and tutorials and in addition, medical students generally also learn gross anatomy through practical experience of dissection and inspection of cadavers. The study of microscopic anatomy (or histology) can be aided by practical experience examining histological preparations (or slides) under a microscope.
Human anatomy, physiology and biochemistry are complementary basic medical sciences, which are generally taught to medical students in their first year at medical school. Human anatomy can be taught regionally or systemically; that is, respectively, studying anatomy by bodily regions such as the head and chest, or studying by specific systems, such as the nervous or respiratory systems. The major anatomy textbook, Gray's Anatomy, has been reorganized from a systems format to a regional format, in line with modern teaching methods. A thorough working knowledge of anatomy is required by physicians, especially surgeons and doctors working in some diagnostic specialties, such as histopathology and radiology.
Academic anatomists are usually employed by universities, medical schools or teaching hospitals. They are often involved in teaching anatomy, and research into certain systems, organs, tissues or cells.
Invertebrates constitute a vast array of living organisms ranging from the simplest unicellular eukaryotes such as "Paramecium" to such complex multicellular animals as the octopus, lobster and dragonfly. They constitute about 95% of the animal species. By definition, none of these creatures has a backbone. The cells of single-cell protozoans have the same basic structure as those of multicellular animals but some parts are specialized into the equivalent of tissues and organs. Locomotion is often provided by cilia or flagella or may proceed via the advance of pseudopodia, food may be gathered by phagocytosis, energy needs may be supplied by photosynthesis and the cell may be supported by an endoskeleton or an exoskeleton. Some protozoans can form multicellular colonies.
Metazoans are multicellular organisms in which different groups of cells have separate functions. The most basic types of metazoan tissues are epithelium and connective tissue, both of which are present in nearly all invertebrates. The outer surface of the epidermis is normally formed of epithelial cells and secretes an extracellular matrix which provides support to the organism. An endoskeleton derived from the mesoderm is present in echinoderms, sponges and some cephalopods. Exoskeletons are derived from the epidermis and are composed of chitin in arthropods (insects, spiders, ticks, shrimps, crabs, lobsters). Calcium carbonate constitutes the shells of molluscs, brachiopods and some tube-building polychaete worms, and silica forms the exoskeleton of the microscopic diatoms and radiolaria. Other invertebrates may have no rigid structures but the epidermis may secrete a variety of surface coatings such as the pinacoderm of sponges, the gelatinous cuticle of cnidarians (polyps, sea anemones, jellyfish) and the collagenous cuticle of annelids. The outer epithelial layer may include cells of several types including sensory cells, gland cells and stinging cells. There may also be protrusions such as microvilli, cilia, bristles, spines and tubercles.
Marcello Malpighi, the father of microscopical anatomy, discovered that plants had tubules similar to those he saw in insects like the silk worm. He observed that when a ring-like portion of bark was removed on a trunk a swelling occurred in the tissues above the ring, and he unmistakably interpreted this as growth stimulated by food coming down from the leaves, and being captured above the ring.
Arthropods comprise the largest phylum in the animal kingdom with over a million known invertebrate species.
Insects possess segmented bodies supported by a hard-jointed outer covering, the exoskeleton, made mostly of chitin. The segments of the body are organized into three distinct parts, a head, a thorax and an abdomen. The head typically bears a pair of sensory antennae, a pair of compound eyes, one to three simple eyes (ocelli) and three sets of modified appendages that form the mouthparts. The thorax has three pairs of segmented legs, one pair each for the three segments that compose the thorax and one or two pairs of wings. The abdomen is composed of eleven segments, some of which may be fused and houses the digestive, respiratory, excretory and reproductive systems. There is considerable variation between species and many adaptations to the body parts, especially wings, legs, antennae and mouthparts.
Spiders, a class of arachnids, have four pairs of legs and a body of two segments—a cephalothorax and an abdomen. Spiders have no wings and no antennae. They have mouthparts called chelicerae which are often connected to venom glands as most spiders are venomous. They have a second pair of appendages called pedipalps attached to the cephalothorax. These have similar segmentation to the legs and function as taste and smell organs. At the end of each male pedipalp is a spoon-shaped cymbium that acts to support the copulatory organ.
In 1600 BCE, the Edwin Smith Papyrus, an Ancient Egyptian medical text, described the heart, its vessels, liver, spleen, kidneys, hypothalamus, uterus and bladder, and showed the blood vessels diverging from the heart. The Ebers Papyrus (c. 1550 BCE) features a "treatise on the heart", with vessels carrying all the body's fluids to or from every member of the body.
Ancient Greek anatomy and physiology underwent great changes and advances throughout the early medieval world. Over time, this medical practice expanded by a continually developing understanding of the functions of organs and structures in the body. Phenomenal anatomical observations of the human body were made, which have contributed towards the understanding of the brain, eye, liver, reproductive organs and the nervous system.
The Hellenistic Egyptian city of Alexandria was the stepping-stone for Greek anatomy and physiology. Alexandria not only housed the biggest library for medical records and books of the liberal arts in the world during the time of the Greeks, but was also home to many medical practitioners and philosophers. Generous patronage of the arts and sciences by the Ptolemaic rulers helped raise Alexandria's standing, rivalling the cultural and scientific achievements of other Greek states.
Some of the most striking advances in early anatomy and physiology took place in Hellenistic Alexandria. Two of the most famous anatomists and physiologists of the third century were Herophilus and Erasistratus. These two physicians helped pioneer human dissection for medical research. They also conducted vivisections on condemned criminals, a practice considered taboo until the Renaissance—Herophilus was recognized as the first person to perform systematic dissections. Herophilus became known for his anatomical works, making impressive contributions to many branches of anatomy and many other aspects of medicine. These works included classifying the system of the pulse, the discovery that human arteries had thicker walls than veins, and that the atria were parts of the heart. Herophilus's knowledge of the human body has provided vital input towards understanding the brain, eye, liver, reproductive organs and nervous system, and characterizing the course of disease. Erasistratus accurately described the structure of the brain, including the cavities and membranes, and made a distinction between its cerebrum and cerebellum. During his study in Alexandria, Erasistratus was particularly concerned with studies of the circulatory and nervous systems. He was able to distinguish the sensory and the motor nerves in the human body and believed that air entered the lungs and heart and was then carried throughout the body. His distinction between the arteries and veins—the arteries, he believed, carrying air through the body, while the veins carried blood from the heart—was a major anatomical claim of its time. Erasistratus was also responsible for naming and describing the function of the epiglottis and the valves of the heart, including the tricuspid. During the third century, Greek physicians were able to differentiate nerves from blood vessels and tendons and to realize that the nerves convey neural impulses.
It was Herophilus who made the point that damage to motor nerves induced paralysis. Herophilus named the meninges and ventricles in the brain, appreciated the division between cerebellum and cerebrum and recognized that the brain was the "seat of intellect" and not a "cooling chamber" as propounded by Aristotle. Herophilus is also credited with describing the optic, oculomotor, motor division of the trigeminal, facial, vestibulocochlear and hypoglossal nerves.
Great feats were made during the third century in both the digestive and reproductive systems. Herophilus was able to discover and describe not only the salivary glands, but the small intestine and liver. He showed that the uterus is a hollow organ and described the ovaries and uterine tubes. He recognized that spermatozoa were produced by the testes and was the first to identify the prostate gland.
The anatomy of the muscles and skeleton is described in the "Hippocratic Corpus", an Ancient Greek medical work written by unknown authors. In the 4th century BCE, Aristotle described vertebrate anatomy based on animal dissection, and Praxagoras identified the difference between arteries and veins. In the 3rd century BCE, Herophilos and Erasistratus produced more accurate anatomical descriptions based on vivisection of criminals in Alexandria during the Ptolemaic dynasty.
In the 2nd century, Galen of Pergamum, an anatomist, clinician, writer and philosopher, wrote the final and highly influential anatomy treatise of ancient times. He compiled existing knowledge and studied anatomy through dissection of animals. He was one of the first experimental physiologists through his vivisection experiments on animals. Galen's drawings, based mostly on dog anatomy, became effectively the only anatomical textbook for the next thousand years. His work was known to Renaissance doctors only through Islamic Golden Age medicine until it was translated from the Greek some time in the 15th century.
Anatomy developed little from classical times until the sixteenth century; as the historian Marie Boas writes, "Progress in anatomy before the sixteenth century is as mysteriously slow as its development after 1500 is startlingly rapid". Between 1275 and 1326, the anatomists Mondino de Luzzi, Alessandro Achillini and Antonio Benivieni at Bologna carried out the first systematic human dissections since ancient times. Mondino's "Anatomy" of 1316 was the first textbook in the medieval rediscovery of human anatomy. It describes the body in the order followed in Mondino's dissections, starting with the abdomen, then the thorax, then the head and limbs. It was the standard anatomy textbook for the next century.
Leonardo da Vinci (1452–1519) was trained in anatomy by Andrea del Verrocchio. He made use of his anatomical knowledge in his artwork, making many sketches of skeletal structures, muscles and organs of humans and other vertebrates that he dissected.
Andreas Vesalius (1514–1564) (Latinized from Andries van Wezel), professor of anatomy at the University of Padua, is considered the founder of modern human anatomy. Originally from Brabant, Vesalius published the influential book "De humani corporis fabrica" ("the structure of the human body"), a large format book in seven volumes, in 1543. The accurate and intricately detailed illustrations, often in allegorical poses against Italianate landscapes, are thought to have been made by the artist Jan van Calcar, a pupil of Titian.
In England, anatomy was the subject of the first public lectures given in any science; these were given by the Company of Barbers and Surgeons in the 16th century, joined in 1583 by the Lumleian lectures in surgery at the Royal College of Physicians.
In the United States, medical schools began to be set up towards the end of the 18th century. Classes in anatomy needed a continual stream of cadavers for dissection and these were difficult to obtain. Philadelphia, Baltimore and New York were all renowned for body snatching activity as criminals raided graveyards at night, removing newly buried corpses from their coffins. A similar problem existed in Britain where demand for bodies became so great that grave-raiding and even anatomy murder were practised to obtain cadavers. Some graveyards were in consequence protected with watchtowers. The practice was halted in Britain by the Anatomy Act of 1832, while in the United States, similar legislation was enacted after the physician William S. Forbes of Jefferson Medical College was found guilty in 1882 of "complicity with resurrectionists in the despoliation of graves in Lebanon Cemetery".
The teaching of anatomy in Britain was transformed by Sir John Struthers, Regius Professor of Anatomy at the University of Aberdeen from 1863 to 1889. He was responsible for setting up the system of three years of "pre-clinical" academic teaching in the sciences underlying medicine, including especially anatomy. This system lasted until the reform of medical training in 1993 and 2003. As well as teaching, he collected many vertebrate skeletons for his museum of comparative anatomy, published over 70 research papers, and became famous for his public dissection of the Tay Whale. From 1822 the Royal College of Surgeons regulated the teaching of anatomy in medical schools. Medical museums provided examples in comparative anatomy, and were often used in teaching. Ignaz Semmelweis investigated puerperal fever and he discovered how it was caused. He noticed that the frequently fatal fever occurred more often in mothers examined by medical students than by midwives. The students went from the dissecting room to the hospital ward and examined women in childbirth. Semmelweis showed that when the trainees washed their hands in chlorinated lime before each clinical examination, the incidence of puerperal fever among the mothers could be reduced dramatically.
Before the modern medical era, the main means for studying the internal structures of the body were dissection of the dead and inspection, palpation and auscultation of the living. It was the advent of microscopy that opened up an understanding of the building blocks that constituted living tissues. Technical advances in the development of achromatic lenses increased the resolving power of the microscope and around 1839, Matthias Jakob Schleiden and Theodor Schwann identified that cells were the fundamental unit of organization of all living things. Study of small structures involved passing light through them and the microtome was invented to provide sufficiently thin slices of tissue to examine. Staining techniques using artificial dyes were established to help distinguish between different types of tissue. Advances in the fields of histology and cytology began in the late 19th century along with advances in surgical techniques allowing for the painless and safe removal of biopsy specimens. The invention of the electron microscope brought a great advance in resolving power and allowed research into the ultrastructure of cells and the organelles and other structures within them. About the same time, in the 1950s, the use of X-ray diffraction for studying the crystal structures of proteins, nucleic acids and other biological molecules gave rise to a new field of molecular anatomy.
Equally important advances have occurred in "non-invasive" techniques for examining the interior structures of the body. X-rays can be passed through the body and used in medical radiography and fluoroscopy to differentiate interior structures that have varying degrees of opacity. Magnetic resonance imaging, computed tomography, and ultrasound imaging have all enabled examination of internal structures in unprecedented detail, to a degree far beyond the imagination of earlier generations.
Affirming the consequent
Affirming the consequent, sometimes called converse error, fallacy of the converse, or confusion of necessity and sufficiency, is a formal fallacy of taking a true conditional statement (e.g., "If the lamp were broken, then the room would be dark") and invalidly inferring its converse ("The room is dark, so the lamp is broken"), even though the converse may not be true. The fallacy arises because a consequent ("the room would be dark") can have more than one possible antecedent (for example, "the lamp is not plugged in" or "the lamp is in working order, but is switched off").
Converse errors are common in everyday thinking and communication and can result from, among other causes, communication issues, misconceptions about logic, and failure to consider other causes.
The opposite statement, denying the consequent, "is" a valid form of argument.
Affirming the consequent is the action of taking a true statement P → Q and invalidly concluding its converse Q → P. The name "affirming the consequent" derives from using the consequent, "Q", of P → Q, to conclude the antecedent "P". This illogic can be summarized formally as ((P → Q) ∧ Q) → P or, alternatively, P → Q, Q ⊢ P.
The root cause of such a logic error is sometimes failure to realize that just because "P" is a "sufficient" condition for "Q", "P" may not be the "only" condition for "Q", i.e. "Q" may follow from another condition as well.
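The invalidity of the form can be checked mechanically by enumerating all truth assignments. The sketch below (an illustration added here, not part of the original text; the variable names are arbitrary) searches for a countermodel: an assignment that makes both premises P → Q and Q true while the conclusion P is false.

```python
from itertools import product

def implies(a, b):
    # Material conditional: a -> b is false only when a is true and b is false.
    return (not a) or b

# Search for a countermodel to "affirming the consequent":
# premises P -> Q and Q, conclusion P.
countermodels = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and q and not p
]
print(countermodels)  # [(False, True)]: Q can hold without P, so the form is invalid.

# By contrast, modus ponens (premises P -> Q and P, conclusion Q)
# has no countermodel, which is why that form is valid.
assert not [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and p and not q
]
```

The single countermodel (P false, Q true) is exactly the dark-room case above: the room is dark while the lamp is fine but switched off.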
Affirming the consequent can also result from overgeneralizing the experience of many statements "having" true converses. If "P" and "Q" are "equivalent" statements, i.e. P ↔ Q, it "is" possible to infer "P" under the condition "Q". For example, the statements "It is August 13, so it is my birthday" (P → Q) and "It is my birthday, so it is August 13" (Q → P) are equivalent and both true consequences of the statement "August 13 is my birthday" (an abbreviated form of P ↔ Q). Using one statement to conclude the other is "not" an example of affirming the consequent, but some people may misapply the approach.
Example 1
One way to demonstrate the invalidity of this argument form is with a counterexample with true premises but an obviously false conclusion. For example: if Bill Gates owns Fort Knox, then Bill Gates is rich; Bill Gates is rich; therefore, Bill Gates owns Fort Knox.
Owning Fort Knox is not the "only" way to be rich. Any number of other ways to be rich exist.
However, one can affirm with certainty that "if someone is not rich" ("non-Q"), then "this person does not own Fort Knox" ("non-P"). This is the contrapositive of the first statement, and it must be true if and only if the original statement is true.
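The contrapositive claim can be verified the same way. A small sketch (an illustration added here, not part of the original text) checks that P → Q and non-Q → non-P take the same truth value under every assignment, which is what makes denying the consequent (modus tollens) a valid form.

```python
from itertools import product

def implies(a, b):
    # Material conditional.
    return (not a) or b

# P -> Q and its contrapositive (not Q) -> (not P) are equivalent:
# they agree under every truth assignment.
assert all(
    implies(p, q) == implies(not q, not p)
    for p, q in product([True, False], repeat=2)
)

# Hence "denying the consequent" (modus tollens) is valid:
# whenever P -> Q and not Q both hold, not P holds as well.
assert all(
    not p
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and not q
)
```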
Example 2
Here is another useful, obviously fallacious example, one that does not require familiarity with who Bill Gates is and what Fort Knox is: if an animal is a dog, then it has four legs; my cat has four legs; therefore, my cat is a dog.
Here, it is immediately intuitive that any number of other antecedents ("If an animal is a deer...", "If an animal is an elephant...", "If an animal is a moose...", "etc.") can give rise to the consequent ("then it has four legs"), and that it is preposterous to suppose that having four legs "must" imply that the animal is a dog and nothing else. This is useful as a teaching example since most people can immediately recognize that the conclusion reached must be wrong (intuitively, a cat cannot be a dog), and that the method by which it was reached must therefore be fallacious.
Example 3
Arguments of the same form can sometimes seem superficially convincing, as in the following example: if someone was thrown off the top of the Eiffel Tower, then they are dead; this person is dead; therefore, this person was thrown off the top of the Eiffel Tower.
Being thrown off the top of the Eiffel Tower is not the "only" cause of death, since there exist numerous different causes of death.
Affirming the consequent is commonly used in rationalization, and thus appears as a coping mechanism in some people.
Example 4
In "Catch-22", the chaplain is interrogated for supposedly being "Washington Irving"/"Irving Washington", who has been blocking out large portions of soldiers' letters home. The colonel has found such a letter, but with the Chaplain's name signed.
"P" in this case is 'The chaplain signs his own name', and "Q" 'The chaplain's name is written'. The chaplain's name may be written, but he did not necessarily write it, as the colonel falsely concludes." | https://en.wikipedia.org/wiki?curid=675 |
Andrei Tarkovsky
Andrei Arsenyevich Tarkovsky (; 4 April 1932 – 29 December 1986) was a Russian filmmaker, writer, and film theorist. He is widely considered one of the greatest and most influential directors in the history of Russian and world cinema. His films explored spiritual and metaphysical themes, and are noted for their slow pacing and long takes, dreamlike visual imagery, and preoccupation with nature and memory.
Tarkovsky studied film at Moscow's State Institute of Cinematography under filmmaker Mikhail Romm, and subsequently directed his first five feature films in the Soviet Union: "Ivan's Childhood" (1962), "Andrei Rublev" (1966), "Solaris" (1972), "Mirror" (1975), and "Stalker" (1979). After years of creative conflict with state film authorities, Tarkovsky left the country in 1979 and made his final two films abroad; "Nostalghia" (1983) and "The Sacrifice" (1986) were produced in Italy and Sweden respectively. In 1986, he also published a book about cinema and art entitled "Sculpting in Time". He died of cancer later that year.
Tarkovsky was the recipient of several awards at the Cannes Film Festival throughout his career (including the FIPRESCI prize, the Prize of the Ecumenical Jury, and the Grand Prix Spécial du Jury) and winner of the Golden Lion award at the Venice Film Festival for his debut film "Ivan's Childhood". In 1990, he was posthumously awarded the Soviet Union's prestigious Lenin Prize. Three of his films—"Andrei Rublev", "Mirror", and "Stalker"—featured in "Sight & Sound"'s 2012 poll of the 50 greatest films of all time.
Andrei Tarkovsky was born in the village of Zavrazhye in the Yuryevetsky District of the Ivanovo Industrial Oblast (modern-day Kadyysky District of the Kostroma Oblast, Russia) to the poet and translator Arseny Alexandrovich Tarkovsky, a native of Yelisavetgrad, Kherson Governorate, and Maria Ivanovna Vishnyakova, a graduate of the Maxim Gorky Literature Institute who later worked as a proofreader; she was born in Moscow in the Dubasov family estate. Andrei's paternal grandfather Aleksandr Karlovich Tarkovsky (in ) was a Polish nobleman who worked as a bank clerk. His wife Maria Danilovna Rachkovskaya was a Romanian teacher who arrived from Iași. Andrei's maternal grandmother Vera Nikolaevna Vishnyakova (née Dubasova) belonged to an old Dubasov family of Russian nobility that traces its history back to the 17th century; among her relatives was Admiral Fyodor Dubasov, a fact she had to conceal during the Soviet days. She was married to Ivan Ivanovich Vishnyakov, a native of the Kaluga Governorate who studied law at Moscow University and served as a judge in Kozelsk. According to family legend, Tarkovsky's ancestors on his father's side were princes from the Shamkhalate of Tarki, Dagestan, although his sister Marina Tarkovskaya, who did detailed research on their genealogy, called it "a myth, even a prank of sorts", stressing that no documents confirm this version.
Tarkovsky spent his childhood in Yuryevets. He was described by childhood friends as active and popular, having many friends and being typically in the center of action. His father left the family in 1937 and subsequently volunteered for the army in 1941. He returned home in 1943, having been awarded a Red Star after being shot in the leg (which eventually had to be amputated due to gangrene). Tarkovsky stayed with his mother, moving with her and his sister Marina to Moscow, where she worked as a proofreader at a printing press. In 1939 Tarkovsky enrolled at the Moscow School No. 554. During the war, the three evacuated to Yuryevets, living with his maternal grandmother. In 1943 the family returned to Moscow. Tarkovsky continued his studies at his old school, where the poet Andrey Voznesensky was one of his classmates. He studied piano at a music school and attended classes at an art school. The family lived on Shchipok Street in the Zamoskvorechye District in Moscow. From November 1947 to spring 1948 he was in the hospital with tuberculosis. Many themes of his childhood—the evacuation, his mother and her two children, the withdrawn father, the time in the hospital—feature prominently in his film "Mirror".
In his school years, Tarkovsky was a troublemaker and a poor student. He still managed to graduate, and from 1951 to 1952 studied Arabic at the Oriental Institute in Moscow, a branch of the Academy of Sciences of the USSR. Although he already spoke some Arabic and was a successful student in his first semesters, he did not finish his studies and dropped out to work as a prospector for the Academy of Science Institute for Non-Ferrous Metals and Gold. He participated in a year-long research expedition to the river Kureyka near Turukhansk in the Krasnoyarsk Province. During this time in the taiga, Tarkovsky decided to study film.
Upon returning from the research expedition in 1954, Tarkovsky applied to the State Institute of Cinematography (VGIK) and was admitted to the film-directing program. He was in the same class as Irma Raush, whom he married in April 1957.
The early Khrushchev era offered good opportunities for young film directors. Before 1953, annual film production was low and most films were directed by veteran directors. After 1953, more films were produced, many of them by young directors. The Khrushchev Thaw relaxed Soviet social restrictions a bit and permitted a limited influx of European and North American literature, films and music. This allowed Tarkovsky to see films of the Italian neorealists, French New Wave, and of directors such as Kurosawa, Buñuel, Bergman, Bresson, Andrzej Wajda (whose film "Ashes and Diamonds" influenced Tarkovsky) and Mizoguchi.
Tarkovsky's teacher and mentor was Mikhail Romm, who taught many film students who would later become influential film directors. In 1956 Tarkovsky directed his first student short film, "The Killers", based on a short story by Ernest Hemingway. The short film "There Will Be No Leave Today" and the screenplay "Concentrate" followed in 1958 and 1959.
An important influence on Tarkovsky was the film director Grigory Chukhray, who was teaching at the VGIK. Impressed by the talent of his student, Chukhray offered Tarkovsky a position as assistant director for his film "Clear Skies". Tarkovsky initially showed interest but then decided to concentrate on his studies and his own projects.
During his third year at the VGIK, Tarkovsky met Andrei Konchalovsky. They found much in common as they liked the same film directors and shared ideas on cinema and films. In 1959 they wrote the script "Antarctica – Distant Country", which was later published in the "Moskovskij Komsomolets". Tarkovsky submitted the script to Lenfilm, but it was rejected. They were more successful with the script "The Steamroller and the Violin", which they sold to Mosfilm. This became Tarkovsky's graduation project, earning him his diploma in 1960 and winning First Prize at the New York Student Film Festival in 1961.
Tarkovsky's first feature film was "Ivan's Childhood" in 1962. He had inherited the film from director Eduard Abalov, who had to abort the project. The film earned Tarkovsky international acclaim and won the Golden Lion award at the Venice Film Festival in the year 1962. In the same year, on 30 September, his first son Arseny (called Senka in Tarkovsky's diaries) Tarkovsky was born.
In 1965, he directed the film "Andrei Rublev" about the life of Andrei Rublev, the fifteenth-century Russian icon painter. Due to problems with Soviet authorities, "Andrei Rublev" was not released immediately after completion, except for a single screening in Moscow in 1966. Tarkovsky had to cut the film several times, resulting in several different versions of varying lengths. The film was widely released in the Soviet Union in a cut version in 1971. Nevertheless, the film had a budget of more than 1 million rubles – a significant sum for that period.
He divorced his wife, Irma Raush, in June 1970. In the same year, he married Larissa Kizilova (née Egorkina), who had been a production assistant for the film "Andrei Rublev" (they had been living together since 1965). Their son, Andrei Andreyevich Tarkovsky, was born in the same year on 7 August. A version of the film was presented at the Cannes Film Festival in 1969 and won the FIPRESCI prize.
In 1972, he completed "Solaris", an adaptation of the novel "Solaris" by Stanisław Lem. He had worked on this together with screenwriter Fridrikh Gorenshtein as early as 1968. The film was presented at the Cannes Film Festival, won the Grand Prix Spécial du Jury and the FIPRESCI prize, and was nominated for the Palme d'Or. From 1973 to 1974, he shot the film "Mirror", a highly autobiographical and unconventionally structured film drawing on his childhood and incorporating some of his father's poems. In this film Tarkovsky portrayed the plight of childhood affected by war. Tarkovsky had worked on the screenplay for this film since 1967, under the consecutive titles "Confession", "White day" and "A white, white day". From the beginning the film was not well received by Soviet authorities due to its content and its perceived elitist nature. Soviet authorities placed the film in the "third category," a severely limited distribution, and only allowed it to be shown in third-class cinemas and workers' clubs. Few prints were made and the film-makers received no returns. Third category films also placed the film-makers in danger of being accused of wasting public funds, which could have serious effects on their future productivity. These difficulties are presumed to have made Tarkovsky play with the idea of going abroad and producing a film outside the Soviet film industry.
During 1975, Tarkovsky also worked on the screenplay "Hoffmanniana", about the German writer and poet E. T. A. Hoffmann. In December 1976, he directed "Hamlet", his only stage play, at the Lenkom Theatre in Moscow. The main role was played by Anatoly Solonitsyn, who also acted in several of Tarkovsky's films. At the end of 1978, he also wrote the screenplay "Sardor" together with the writer Aleksandr Misharin.
The last film Tarkovsky completed in the Soviet Union was "Stalker", inspired by the novel "Roadside Picnic" by the brothers Arkady and Boris Strugatsky. Tarkovsky had met the brothers first in 1971 and was in contact with them until his death in 1986. Initially he wanted to shoot a film based on their novel "Dead Mountaineer's Hotel" and he developed a raw script. Influenced by a discussion with Arkady Strugatsky he changed his plan and began to work on the script based on "Roadside Picnic". Work on this film began in 1976. The production was mired in troubles; improper development of the negatives had ruined all the exterior shots. Tarkovsky's relationship with cinematographer Georgy Rerberg deteriorated to the point where he hired Alexander Knyazhinsky as a new first cinematographer. Furthermore, Tarkovsky suffered a heart attack in April 1978, resulting in further delay. The film was completed in 1979 and won the Prize of the Ecumenical Jury at the Cannes Film Festival.
In the same year Tarkovsky also began the production of the film "The First Day" (Russian: Первый День, "Pervyj Dyen"), based on a script by his friend and long-term collaborator Andrei Konchalovsky. The film was set in 18th-century Russia during the reign of Peter the Great and starred Natalya Bondarchuk and Anatoli Papanov. To get the project approved by Goskino, Tarkovsky submitted a script that was different from the original script, omitting several scenes that were critical of the official atheism in the Soviet Union. After shooting roughly half of the film the project was stopped by Goskino after it became apparent that the film differed from the script submitted to the censors. Tarkovsky was reportedly infuriated by this interruption and destroyed most of the film.
During the summer of 1979, Tarkovsky traveled to Italy, where he shot the documentary "Voyage in Time" together with his long-time friend Tonino Guerra. Tarkovsky returned to Italy in 1980 for an extended trip, during which he and Guerra completed the script for the film "Nostalghia". During this period, he took Polaroid photographs depicting his personal life.
Tarkovsky returned to Italy in 1982 to start shooting "Nostalghia". He did not return to his home country. As Mosfilm withdrew from the project, he had to complete the film with financial support provided by the Italian RAI. Tarkovsky completed the film in 1983. "Nostalghia" was presented at the Cannes Film Festival and won the FIPRESCI prize and the Prize of the Ecumenical Jury. Tarkovsky also shared a special prize called "Grand Prix du cinéma de creation" with Robert Bresson. Soviet authorities prevented the film from winning the Palme d'Or, a fact that hardened Tarkovsky's resolve to never work in the Soviet Union again. He also said: "I am not a Soviet dissident, I have no conflict with the Soviet Government." But if he returned home, he added, "[he] would be unemployed." In the same year, he also staged the opera "Boris Godunov" at the Royal Opera House in London under the musical direction of Claudio Abbado.
He spent most of 1984 preparing the film "The Sacrifice". At a press conference in Milan on 10 July 1984, he announced that he would never return to the Soviet Union and would remain in Europe. At that time, his son Andrei Jr. was still in the Soviet Union and not allowed to leave the country. On 28 August 1985, Tarkovsky arrived at Latina Refugee Camp in Latina, where he was registered with the serial number 13225/379.
"The Sacrifice" was Tarkovsky's last film, dedicated to his son, Andrei Jr. "Directed by Andrei Tarkovsky", which documents the making of "The Sacrifice", was released after the filmmaker's death in 1986. In a particularly poignant scene, writer/director Michal Leszczylowski follows Tarkovsky on a walk as he expresses his sentiments on death he claims himself to be immortal and has no fear of dying.
During 1985, he shot the film "The Sacrifice" in Sweden. At the end of the year he was diagnosed with terminal lung cancer. In January 1986, he began treatment in Paris and was joined there by his son, who was finally allowed to leave the Soviet Union. "The Sacrifice" was presented at the Cannes Film Festival and received the Grand Prix Spécial du Jury, the FIPRESCI prize and the Prize of the Ecumenical Jury. As Tarkovsky was unable to attend due to his illness, the prizes were collected by his son, Andrei Jr.
In Tarkovsky's last diary entry (15 December 1986), he wrote: "But now I have no strength left – that is the problem". The diaries are sometimes also known as "Martyrolog" and were published posthumously in 1989 and in English in 1991.
Tarkovsky died in Paris on 29 December 1986. His funeral ceremony was held at the Alexander Nevsky Cathedral. He was buried on 3 January 1987 in the Russian Cemetery in Sainte-Geneviève-des-Bois in France. The inscription on his gravestone, which was conceived by Tarkovsky's wife, Larisa Tarkovskaya, reads: "To the man who saw the Angel".
A conspiracy theory emerged in Russia in the early 1990s when it was alleged that Tarkovsky did not die of natural causes but was assassinated by the KGB. Evidence for this hypothesis includes testimonies by former KGB agents who claim that Viktor Chebrikov gave the order to eradicate Tarkovsky to curtail what the Soviet government and the KGB saw as anti-Soviet propaganda by Tarkovsky. Other evidence includes several memoranda that surfaced after the 1991 coup and the claim by one of Tarkovsky's doctors that his cancer could not have developed from a natural cause.
Tarkovsky, his wife Larisa Tarkovskaya, and the actor Anatoly Solonitsyn all died of the same type of lung cancer. Vladimir Sharun, sound designer on "Stalker", is convinced that they were all poisoned by the chemical plant where they were shooting the film.
Numerous awards were bestowed on Tarkovsky throughout his lifetime. At the Venice Film Festival he was awarded the Golden Lion for "Ivan's Childhood". At the Cannes Film Festival, he won the FIPRESCI prize four times, the Prize of the Ecumenical Jury three times (more than any other director), and the Grand Prix Spécial du Jury twice. He was also nominated for the Palme d'Or two times. In 1987, the British Academy of Film and Television Arts awarded the BAFTA Award for Best Foreign Language Film to "The Sacrifice".
Under the influence of Glasnost and Perestroika, Tarkovsky was finally recognized in the Soviet Union in the Autumn of 1986, shortly before his death, by a retrospective of his films in Moscow. After his death, an entire issue of the film magazine "Iskusstvo Kino" was devoted to Tarkovsky. In their obituaries, the film committee of the Council of Ministers of the USSR and the Union of Soviet Film Makers expressed their sorrow that Tarkovsky had to spend the last years of his life in exile.
Posthumously, he was awarded the Lenin Prize in 1990, one of the highest state honors in the Soviet Union. In 1989 the "Andrei Tarkovsky Memorial Prize" was established, with its first recipient being the Russian animator Yuriy Norshteyn. The Moscow International Film Festival presented an annual "Andrei Tarkovsky Award" at three consecutive festivals, in 1993, 1995 and 1997.
In 1996 the Andrei Tarkovsky Museum opened in Yuryevets, his childhood town. A minor planet, 3345 Tarkovskij, discovered by Soviet astronomer Lyudmila Georgievna Karachkina in 1982, has also been named after him.
Tarkovsky has been the subject of several documentaries. Most notable is the 1988 documentary "Moscow Elegy", by Russian film director Alexander Sokurov, whose own work has been heavily influenced by Tarkovsky. The film consists mostly of narration over stock footage from Tarkovsky's films. "Directed by Andrei Tarkovsky" is a 1988 documentary film by Michal Leszczylowski, an editor of the film "The Sacrifice". Film director Chris Marker produced the television documentary "One Day in the Life of Andrei Arsenevich" as an homage to Andrei Tarkovsky in 2000.
Ingmar Bergman was quoted as saying: "Tarkovsky for me is the greatest [of us all], the one who invented a new language, true to the nature of film, as it captures life as a reflection, life as a dream". Film historian Steven Dillon says that much of subsequent film was deeply influenced by the films of Tarkovsky.
At the entrance to the Gerasimov Institute of Cinematography in Moscow, there is a monument that includes statues of Tarkovsky, Gennady Shpalikov and Vasily Shukshin.
Concentrate (, "Kontsentrat") is a never-filmed 1958 screenplay by Russian film director Andrei Tarkovsky. The screenplay is based on Tarkovsky's year in the taiga as a member of a research expedition, prior to his enrollment in film school. It's about the leader of a geological expedition, who waits for the boat that brings back the concentrates collected by the expedition. The expedition is surrounded by mystery, and its purpose is a state secret.
Although some authors claim that the screenplay was filmed, according to Marina Tarkovskaya, Tarkovsky's sister (and wife of Aleksandr Gordon, a fellow student of Tarkovsky during his film school years), the screenplay was never filmed. Tarkovsky wrote the screenplay during his entrance examination at the State Institute of Cinematography (VGIK) in a single sitting. He earned the highest possible grade, "excellent", for this work. In 1994 fragments of "Concentrate" were filmed and used in the documentary "Andrei Tarkovsky's Taiga Summer" by Marina Tarkovskaya and Aleksandr Gordon.
Hoffmanniana is a never-filmed 1974 screenplay by Russian film director Andrei Tarkovsky, based on the life and work of German author E. T. A. Hoffmann. In 1974 an acquaintance from Tallinnfilm approached Tarkovsky to write a screenplay on a German theme. Tarkovsky considered Thomas Mann and E. T. A. Hoffmann, and also thought about Ibsen's "Peer Gynt". In the end Tarkovsky signed a contract for a script based on the life and work of Hoffmann. He planned to write the script during the summer of 1974 at his dacha. Writing was not without difficulty: less than a month before the deadline he had not written a single page. He finally finished the project in late 1974 and submitted the final script to Tallinnfilm in October.
Although the script was well received by the officials at Tallinnfilm, it was the consensus that no one but Tarkovsky would be able to direct it. The script was sent to Goskino in February 1976, and although approval was granted for proceeding with making the film, the screenplay was never realized. In 1984, during his exile in the West, Tarkovsky revisited the screenplay and made a few changes. He also considered directing a film based on the screenplay himself, but ultimately dropped the idea.
Tarkovsky became a film director during the mid and late 1950s, a period referred to as the Khrushchev Thaw, during which Soviet society opened to foreign films, literature and music, among other things. This allowed Tarkovsky to see films of European, American and Japanese directors, an experience that influenced his own film making. His teacher and mentor at the film school, Mikhail Romm, allowed his students considerable freedom and emphasized the independence of the film director.
Tarkovsky was, according to fellow student Shavkat Abdusalamov, fascinated by Japanese films. He was amazed by how every character on the screen is exceptional and how everyday events, such as a samurai cutting bread with his sword, are elevated to something special and put into the limelight. Tarkovsky also expressed interest in the art of haiku and its ability to create "images in such a way that they mean nothing beyond themselves".
Tarkovsky was also a deeply religious Orthodox Christian who believed great art should have a higher spiritual purpose. He was a perfectionist, not given to humor or humility. His signature style was ponderous and literary, with many characters who pondered religious themes and questions of faith.
Tarkovsky perceived that the art of cinema has only been truly mastered by very few filmmakers, stating in a 1970 interview with Naum Abramov that "they can be counted on the fingers of one hand". In 1972, Tarkovsky told film historian Leonid Kozlov his ten favorite films. The list includes: "Diary of a Country Priest" and "Mouchette" by Robert Bresson; "Winter Light", "Wild Strawberries", and "Persona" by Ingmar Bergman; "Nazarín" by Luis Buñuel; "City Lights" by Charlie Chaplin; "Ugetsu" by Kenji Mizoguchi; "Seven Samurai" by Akira Kurosawa, and "Woman in the Dunes" by Hiroshi Teshigahara. Among his favorite directors were Buñuel, Mizoguchi, Bergman, Bresson, Kurosawa, Michelangelo Antonioni, Jean Vigo, and Carl Theodor Dreyer.
With the exception of "City Lights", the list does not contain any films of the early silent era. The reason is that Tarkovsky saw film as an art as only a relatively recent phenomenon, with the early film-making forming only a prelude. The list has also no films or directors from Tarkovsky's native Russia, although he rated Soviet directors such as Boris Barnet, Sergei Parajanov and Alexander Dovzhenko highly. He said of Dovzhenko's "Earth": "I have lived a lot among very simple farmers and met extraordinary people. They spread calmness, had such tact, they conveyed a feeling of dignity and displayed wisdom that I have seldom come across on such a scale. Dovzhenko had obviously understood wherein the sense of life resides. [...] This trespassing of the border between nature and mankind is an ideal place for the existence of man. Dovzhenko understood this."
Andrei Tarkovsky was not a fan of science fiction, largely dismissing it for its "comic book" trappings and vulgar commercialism. However, in a famous exception, Tarkovsky praised the blockbuster film "The Terminator", saying that its "vision of the future and the relation between man and its destiny is pushing the frontier of cinema as an art". He was critical of the "brutality and low acting skills", but was nevertheless impressed by the film.
In a 1962 interview, Tarkovsky argued: "All art, of course, is intellectual, but for me, all the arts, and cinema even more so, must above all be emotional and act upon the heart." His films are characterized by metaphysical themes, extremely long takes, and images often considered by critics to be of exceptional beauty. Recurring motifs are dreams, memory, childhood, running water accompanied by fire, rain indoors, reflections, levitation, and characters re-appearing in the foreground of long panning movements of the camera. He once said: "Juxtaposing a person with an environment that is boundless, collating him with a countless number of people passing by close to him and far away, relating a person to the whole world, that is the meaning of cinema."
Tarkovsky incorporated levitation scenes into several of his films, most notably "Solaris". To him these scenes possess great power and are used for their photogenic value and magical inexplicability. Water, clouds, and reflections were used by him for their surreal beauty and photogenic value, as well as their symbolism, such as waves or the forms of brooks or running water. Bells and candles are also frequent symbols. These are symbols of film, sight and sound, and Tarkovsky's film frequently has themes of self-reflection.
Tarkovsky developed a theory of cinema that he called "sculpting in time". By this he meant that the unique characteristic of cinema as a medium was to take our experience of time and alter it. Unedited movie footage transcribes time in real time. By using long takes and few cuts in his films, he aimed to give the viewers a sense of time passing, time lost, and the relationship of one moment in time to another.
Up to, and including, his film "Mirror", Tarkovsky focused his cinematic works on exploring this theory. After "Mirror", he announced that he would focus his work on exploring the dramatic unities proposed by Aristotle: a concentrated action, happening in one place, within the span of a single day.
Several of Tarkovsky's films have color or black-and-white sequences. This first occurs in the otherwise monochrome "Andrei Rublev", which features a color epilogue of Rublev's authentic religious icon paintings. All of his subsequent films contain monochrome sequences (sepia, in the case of "Stalker") while otherwise being in color. In 1966, in an interview conducted shortly after finishing "Andrei Rublev", Tarkovsky dismissed color film as a "commercial gimmick" and cast doubt on the idea that contemporary films meaningfully use color. He claimed that in everyday life one does not consciously notice colors most of the time, and that color should therefore be used in film mainly to emphasize certain moments, but not all the time, as this distracts the viewer. To him, films in color were like moving paintings or photographs, which are too beautiful to be a realistic depiction of life.
Ingmar Bergman, a renowned director, commented on Tarkovsky:
Bergman conceded, however, the truth in the claim made by a critic who wrote that "with "Autumn Sonata" Bergman does Bergman", adding: "Tarkovsky began to make Tarkovsky films, and that Fellini began to make Fellini films [...] Buñuel nearly always made Buñuel films." This pastiche of one's own work has been derogatorily termed "self-karaoke".
Tarkovsky worked in close collaboration with cinematographer Vadim Yusov from 1958 to 1972, and much of the visual style of Tarkovsky's films can be attributed to this collaboration. Tarkovsky would spend two days preparing for Yusov to film a single long take, and due to the preparation, usually only a single take was needed.
In his last film, "The Sacrifice", Tarkovsky worked with cinematographer Sven Nykvist, who had worked on many films with director Ingmar Bergman. (Nykvist was not alone: several people involved in the production had previously collaborated with Bergman, notably lead actor Erland Josephson, who had also acted for Tarkovsky in "Nostalghia".) Nykvist complained that Tarkovsky would frequently look through the camera and even direct actors through it, but ultimately stated that choosing to work with Tarkovsky was one of the best choices he had ever made.
Tarkovsky is mainly known as a film director. During his career he directed seven feature films, as well as three shorts from his time at VGIK.
He also wrote several screenplays. Furthermore, he directed the play "Hamlet" for the stage in Moscow, directed the opera "Boris Godunov" in London, and he directed a radio production of the short story "Turnabout" by William Faulkner. He also wrote "Sculpting in Time", a book on film theory.
Tarkovsky's first feature film was "Ivan's Childhood" in 1962. He then directed "Andrei Rublev" in 1966, "Solaris" in 1972, "Mirror" in 1975 and "Stalker" in 1979. The documentary "Voyage in Time" was produced in Italy in 1982, as was "Nostalghia" in 1983. His last film "The Sacrifice" was produced in Sweden in 1986. Tarkovsky was personally involved in writing the screenplays for all his films, sometimes with a cowriter. Tarkovsky once said that a director who realizes somebody else's screenplay without being involved in it becomes a mere illustrator, resulting in dead and monotonous films.
Books written by Tarkovsky
A book of 60 photos, "Instant Light, Tarkovsky Polaroids", taken by Tarkovsky in Russia and Italy between 1979 and 1984 was published in 2006. The collection was selected by Italian photographer Giovanni Chiaramonte and Tarkovsky's son Andrey A. Tarkovsky.
Notes
Bibliography | https://en.wikipedia.org/wiki?curid=676 |
Ambiguity
Ambiguity is a type of meaning in which a phrase, statement or resolution is not explicitly defined, making several interpretations plausible. A common aspect of ambiguity is uncertainty. It is thus an attribute of any idea or statement whose intended meaning cannot be definitively resolved according to a rule or process with a finite number of steps. (The "ambi-" part of the term reflects an idea of "two", as in "two meanings".)
The concept of ambiguity is generally contrasted with vagueness. In ambiguity, specific and distinct interpretations are permitted (although some may not be immediately obvious), whereas with information that is vague, it is difficult to form any interpretation at the desired level of specificity.
Context may play a role in resolving ambiguity. For example, the same piece of information may be ambiguous in one context and unambiguous in another.
Lexical ambiguity is contrasted with semantic ambiguity. The former represents a choice between a finite number of known and meaningful context-dependent interpretations. The latter represents a choice between any number of possible interpretations, none of which may have a standard agreed-upon meaning. This form of ambiguity is closely related to vagueness.
Linguistic ambiguity can be a problem in law, because the interpretation of written documents and oral agreements is often of paramount importance.
The lexical ambiguity of a word or phrase pertains to its having more than one meaning in the language to which the word belongs. "Meaning" here refers to whatever should be captured by a good dictionary. For instance, the word "bank" has several distinct lexical definitions, including "financial institution" and "edge of a river". Or consider "apothecary". One could say "I bought herbs from the apothecary". This could mean one actually spoke to the apothecary (pharmacist) or went to the apothecary (pharmacy).
The context in which an ambiguous word is used often makes it evident which of the meanings is intended. If, for instance, someone says "I buried $100 in the bank", most people would not think someone used a shovel to dig in the mud. However, some linguistic contexts do not provide sufficient information to disambiguate a used word.
Lexical ambiguity can be addressed by algorithmic methods that automatically associate the appropriate meaning with a word in context, a task referred to as word sense disambiguation.
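The overlap heuristic behind classic word sense disambiguation (the Lesk approach) can be sketched in a few lines. The toy glosses and the `disambiguate` helper below are illustrative assumptions, not a real lexical resource or library API:

```python
# A minimal sketch of Lesk-style word sense disambiguation: choose the
# sense whose dictionary gloss shares the most words with the context.
# SENSES is a toy stand-in for a real dictionary such as WordNet.

SENSES = {
    "bank": {
        "financial institution": "an institution that accepts deposits and makes loans",
        "edge of a river": "the sloping land beside a body of running water",
    }
}

def disambiguate(word, context):
    """Return the sense of `word` whose gloss overlaps `context` most."""
    context_words = set(context.lower().split())
    best_sense, best_overlap = None, -1
    for sense, gloss in SENSES[word].items():
        overlap = len(context_words & set(gloss.lower().split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

print(disambiguate("bank", "he sat on the bank of the river and watched the water"))
```

Real systems refine this idea with stop-word removal, stemming, and much larger sense inventories, but the core mechanism is the same overlap count.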
The use of multi-defined words requires the author or speaker to clarify their context, and sometimes elaborate on their specific intended meaning (in which case, a less ambiguous term should have been used). The goal of clear concise communication is that the receiver(s) have no misunderstanding about what was meant to be conveyed. An exception to this could include a politician whose "weasel words" and obfuscation are necessary to gain support from multiple constituents with mutually exclusive conflicting desires from their candidate of choice. Ambiguity is a powerful tool of political science.
More problematic are words whose senses express closely related concepts. "Good", for example, can mean "useful" or "functional" ("That's a good hammer"), "exemplary" ("She's a good student"), "pleasing" ("This is good soup"), "moral" ("a good person" versus "the lesson to be learned from a story"), "righteous", etc. " I have a good daughter" is not clear about which sense is intended. The various ways to apply prefixes and suffixes can also create ambiguity ("unlockable" can mean "capable of being unlocked" or "impossible to lock").
Ambiguity is an effective narrative device to engage a hearer in conversation and to move it to a level of deeper import. In the Gospel of John, Jesus uses an ambiguous metaphor or double entendre to create bewilderment or misunderstanding in the hearer, which is then resolved either by Jesus or the narrator. The New Testament exegete, Rudolf Bultmann, explains the way ambiguity functions in the Fourth Gospel. “The misunderstanding comes when someone sees the right meaning of the word but mistakenly imagines that its meaning is exhausted by earthly matters.” A case in point is the ambiguous metaphor of “living water” (Greek: ὕδωρ ζῶν, "hydōr zōn") in John 4:7-15. A woman from Samaria assumes that Jesus knows of a flowing stream (“living water”) that will make her job of fetching water at Jacob's well in Sychar easier. Jesus, however, has in mind a different type of “living water”—one that wells up within a person in the figurative or spiritual sense of the word.
Semantic ambiguity occurs when a word, phrase or sentence, taken out of context, has more than one interpretation. In "We saw her duck" (example due to Richard Nordquist), the words "her duck" can refer either to a duck that belongs to her (a noun phrase) or to her act of ducking (a verb phrase).
Syntactic ambiguity arises when a sentence can have two (or more) different meanings because of the structure of the sentence—its syntax. This is often due to a modifying expression, such as a prepositional phrase, the application of which is unclear. "He ate the cookies on the couch", for example, could mean that he ate those cookies that were on the couch (as opposed to those that were on the table), or it could mean that he was sitting on the couch when he ate the cookies. "To get in, you will need an entrance fee of $10 or your voucher and your drivers' license." This could mean that you need EITHER ten dollars OR BOTH your voucher and your license. Or it could mean that you need your license AND you need EITHER ten dollars OR a voucher. Only rewriting the sentence, or placing appropriate punctuation can resolve a syntactic ambiguity.
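The entrance-fee sentence can be made precise by writing its two readings as boolean expressions. In Python (as in most programming languages) `and` binds tighter than `or`, so the unparenthesized form silently selects one reading; the function names below are illustrative:

```python
# The two readings of "an entrance fee of $10 or your voucher and your
# driver's license", written as explicit boolean expressions.

def admitted_reading_1(dollars, voucher, license):
    # Reading 1: EITHER $10 OR (voucher AND license).
    # This is what the unparenthesized "dollars or voucher and license"
    # means in Python, since `and` binds tighter than `or`.
    return dollars or (voucher and license)

def admitted_reading_2(dollars, voucher, license):
    # Reading 2: a license is always required, plus ($10 OR voucher).
    return license and (dollars or voucher)

# A visitor with $10 but neither voucher nor license is admitted
# under reading 1 but turned away under reading 2.
print(admitted_reading_1(True, False, False))  # True
print(admitted_reading_2(True, False, False))  # False
```

The parentheses play exactly the role that rewriting or punctuation plays in the English sentence: they force one parse and exclude the other.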
For the notion of, and theoretic results about, syntactic ambiguity in artificial, formal languages (such as computer programming languages), see Ambiguous grammar.
Usually, semantic and syntactic ambiguity go hand in hand. The sentence "We saw her duck" is also syntactically ambiguous. Conversely, a sentence like "He ate the cookies on the couch" is also semantically ambiguous. Rarely, but occasionally, the different parsings of a syntactically ambiguous phrase result in the same meaning. For example, the command "Cook, cook!" can be parsed as "Cook (noun used as vocative), cook (imperative verb form)!", but also as "Cook (imperative verb form), cook (noun used as vocative)!". It is more common that a syntactically unambiguous phrase has a semantic ambiguity; for example, the lexical ambiguity in "Your boss is a funny man" is purely semantic, leading to the response "Funny ha-ha or funny peculiar?"
Spoken language can contain many more types of ambiguities which are called phonological ambiguities, where there is more than one way to compose a set of sounds into words. For example, "ice cream" and "I scream". Such ambiguity is generally resolved according to the context. A mishearing of such, based on incorrectly resolved ambiguity, is called a mondegreen.
Metonymy involves referring to one entity by the name of a different but closely related entity (for example, using "wheels" to refer to a car, or "Wall Street" to refer to the stock exchanges located on that street or even the entire US financial sector). In the modern vocabulary of critical semiotics, metonymy encompasses any potentially ambiguous word substitution that is based on contextual contiguity (located close together), or a function or process that an object performs, such as "sweet ride" to refer to a nice car. Metonym miscommunication is considered a primary mechanism of linguistic humor.
Philosophers (and other users of logic) spend a lot of time and effort searching for and removing (or intentionally adding) ambiguity in arguments because it can lead to incorrect conclusions and can be used to deliberately conceal bad arguments. For example, a politician might say, "I oppose taxes which hinder economic growth", an example of a glittering generality. Some will think he opposes taxes in general because they hinder economic growth. Others may think he opposes only those taxes that he believes will hinder economic growth. In writing, the sentence can be rewritten to reduce possible misinterpretation, either by adding a comma after "taxes" (to convey the first sense) or by changing "which" to "that" (to convey the second sense) or by rewriting it in other ways. The devious politician hopes that each constituent will interpret the statement in the most desirable way, and think the politician supports everyone's opinion. However, the opposite can also be true – an opponent can turn a positive statement into a bad one if the speaker uses ambiguity (intentionally or not). The logical fallacies of amphiboly and equivocation rely heavily on the use of ambiguous words and phrases.
In continental philosophy (particularly phenomenology and existentialism), there is much greater tolerance of ambiguity, as it is generally seen as an integral part of the human condition. Martin Heidegger argued that the relation between the subject and object is ambiguous, as is the relation of mind and body, and part and whole.[3] In Heidegger's phenomenology, Dasein is always in a meaningful world, but there is always an underlying background for every instance of signification. Thus, although some things may be certain, they have little to do with Dasein's sense of care and existential anxiety, e.g., in the face of death. In calling his work Being and Nothingness an "essay in phenomenological ontology" Jean-Paul Sartre follows Heidegger in defining the human essence as ambiguous, or relating fundamentally to such ambiguity. Simone de Beauvoir tries to base an ethics on Heidegger's and Sartre's writings (The Ethics of Ambiguity), where she highlights the need to grapple with ambiguity: "as long as philosophers and they [men] have thought, most of them have tried to mask it...And the ethics which they have proposed to their disciples have always pursued the same goal. It has been a matter of eliminating the ambiguity by making oneself pure inwardness or pure externality, by escaping from the sensible world or being engulfed by it, by yielding to eternity or enclosing oneself in the pure moment." Ethics cannot be based on the authoritative certainty given by mathematics and logic, or prescribed directly from the empirical findings of science. She states: "Since we do not succeed in fleeing it, let us, therefore, try to look the truth in the face. Let us try to assume our fundamental ambiguity. It is in the knowledge of the genuine conditions of our life that we must draw our strength to live and our reason for acting". Other continental philosophers suggest that concepts such as life, nature, and sex are ambiguous. 
Corey Anton has argued that we cannot be certain what is separate from or unified with something else: language, he asserts, divides what is not, in fact, separate. Following Ernest Becker, he argues that the desire to 'authoritatively disambiguate' the world and existence has led to numerous ideologies and historical events such as genocide. On this basis, he argues that ethics must focus on 'dialectically integrating opposites' and balancing tension, rather than seeking a priori validation or certainty. Like the existentialists and phenomenologists, he sees the ambiguity of life as the basis of creativity.
In literature and rhetoric, ambiguity can be a useful tool. Groucho Marx's classic joke depends on a grammatical ambiguity for its humor, for example: "Last night I shot an elephant in my pajamas. How he got in my pajamas, I'll never know". Songs and poetry often rely on ambiguous words for artistic effect, as in the song title "Don't It Make My Brown Eyes Blue" (where "blue" can refer to the color, or to sadness).
In the narrative, ambiguity can be introduced in several ways: motive, plot, character. F. Scott Fitzgerald uses the latter type of ambiguity with notable effect in his novel "The Great Gatsby".
Mathematical notation, widely used in physics and other sciences, avoids many ambiguities compared to expression in natural language. However, for various reasons, several lexical, syntactic and semantic ambiguities remain.
The ambiguity in the style of writing a function should not be confused with a multivalued function, which can (and should) be defined in a deterministic and unambiguous way. Several special functions still do not have established notations. Usually, converting to another notation requires rescaling the argument or the resulting value; sometimes the same name of the function is used, causing confusion. Examples of such underestablished functions:
Ambiguous expressions often appear in physical and mathematical texts.
It is common practice to omit multiplication signs in mathematical expressions. Also, it is common to give the same name to a variable and a function, for example, formula_1. Then, if one sees formula_2, there is no way to distinguish whether it means formula_1 multiplied by formula_4, or function formula_5 evaluated at argument equal to formula_4. In each case of use of such notations, the reader is supposed to be able to perform the deduction and reveal the true meaning.
Creators of algorithmic languages try to avoid ambiguities. Many algorithmic languages (such as C++ and Fortran) require the character * as the symbol of multiplication. The Wolfram Language used in Mathematica allows the user to omit the multiplication symbol, but requires square brackets to indicate the argument of a function; square brackets are not allowed for grouping of expressions. Fortran, in addition, does not allow use of the same name (identifier) for different objects, for example, a function and a variable; in particular, the expression f=f(x) is qualified as an error.
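The point can be checked directly: Python, like C++ and Fortran, requires an explicit multiplication sign, so the juxtaposition ambiguity of informal mathematics cannot arise. The small `parses` helper below is an illustrative assumption built on the standard `ast` module:

```python
# Programming languages resolve the "2x vs. 2*x" and "f(x): call or
# product?" ambiguities at the grammar level. Juxtaposition is simply
# not a valid way to write multiplication.
import ast

def parses(source):
    """Return True if `source` is a syntactically valid Python expression."""
    try:
        ast.parse(source, mode="eval")
        return True
    except SyntaxError:
        return False

print(parses("2*x"))   # True: multiplication must be written explicitly
print(parses("2x"))    # False: juxtaposition is rejected by the grammar
print(parses("f(x)"))  # True: but this can only ever mean a function call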
The order of operations may depend on the context. In most programming languages, the operations of division and multiplication have equal priority and are executed from left to right. Until the last century, many editorial conventions assumed that multiplication is performed first, for example, formula_7 is interpreted as formula_8; in this case, the insertion of parentheses is required when translating the formulas to an algorithmic language. In addition, it is common to write an argument of a function without parentheses, which also may lead to ambiguity.
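The two conventions give different numbers, which is easy to demonstrate; the values below are arbitrary illustrative choices:

```python
# In Python, as in most programming languages, / and * have equal
# precedence and associate left to right, so a/b*c means (a/b)*c.
a, b, c = 8, 2, 4

left_to_right = a / b * c           # (8 / 2) * 4, the programming convention
multiplication_first = a / (b * c)  # 8 / (2 * 4), the old editorial convention

print(left_to_right)         # 16.0
print(multiplication_first)  # 1.0
```

This is exactly why parentheses must be inserted when transcribing formulas written under the multiplication-first convention into program code.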
In the scientific journal style, one uses roman letters to denote elementary functions, whereas variables are written using italics.
For example, in mathematical journals the expression formula_9 does not denote the sine function, but the product of the three variables formula_10, formula_11, formula_12, although in the informal notation of a slide presentation it may stand for formula_13.
Commas in multi-component subscripts and superscripts are sometimes omitted; this is also potentially ambiguous notation.
For example, in the notation formula_14, the reader can only infer from the context whether it means a single-index object, taken with the subscript equal to the product of the variables formula_15, formula_12 and formula_17, or an indication of a trivalent tensor.
An expression such as formula_18 can be understood to mean either formula_19 or formula_20. Often the author's intention can be understood from the context, in cases where only one of the two makes sense, but an ambiguity like this should be avoided, for example by writing formula_21 or formula_22.
The expression formula_23 means formula_24 in several texts, though it might be thought to mean formula_25, since formula_26 commonly means formula_27. Conversely, formula_28 might seem to mean formula_29, as this exponentiation notation usually denotes function iteration: in general, formula_30 means formula_31. However, for trigonometric and hyperbolic functions, this notation conventionally means exponentiation of the result of function application.
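The competing readings of superscripts on function names can be made explicit in code; the variable names and the sample argument below are illustrative:

```python
# Three possible readings of "sin squared of x". By convention, for
# trigonometric and hyperbolic functions the superscript means
# exponentiation of the result, not function iteration.
import math

x = 0.5
squared   = math.sin(x) ** 2       # (sin x)^2  -- the conventional meaning
iterated  = math.sin(math.sin(x))  # sin(sin(x)) -- what f^2 means for a general f
of_square = math.sin(x ** 2)       # sin(x^2)   -- yet another possible reading

print(round(squared, 6), round(iterated, 6), round(of_square, 6))
```

All three values differ, so a reader who guesses the wrong convention gets a genuinely wrong number, not merely an unfamiliar notation.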
The expression formula_32 can be interpreted as meaning formula_33, in particular if one thinks that the common acronym "PEMDAS" for the order of operations implies that M(ultiplication) takes precedence over D(ivision); however, it is more commonly understood to mean formula_34.
It is common to define the coherent states in quantum optics with formula_35 and states with fixed number of photons with formula_36. Then, there is an "unwritten rule": the state is coherent if there are more Greek characters than Latin characters in the argument, and a formula_37-photon state if the Latin characters dominate. The ambiguity becomes even worse if formula_38 is used for the states with a certain value of the coordinate, and formula_39 means the state with a certain value of the momentum, as may be the case in books on quantum mechanics. Such ambiguities easily lead to confusion, especially if normalized dimensionless variables are used. The expression formula_40 may mean a state with a single photon, the coherent state with mean amplitude equal to 1, or the state with momentum equal to unity, and so on. The reader is supposed to guess from the context.
Some physical quantities do not yet have established notations; their value (and sometimes even their dimension, as in the case of the Einstein coefficients) depends on the system of notations. Many terms are ambiguous. Each use of an ambiguous term should be preceded by a definition suitable for the specific case. As Ludwig Wittgenstein states in Tractatus Logico-Philosophicus: "... Only in the context of a proposition has a name meaning."
A highly confusing term is "gain". For example, the sentence "the gain of a system should be doubled", without context, means close to nothing.
The term "intensity" is ambiguous when applied to light. The term can refer to any of irradiance, luminous intensity, radiant intensity, or radiance, depending on the background of the person using the term.
Confusion may also arise from the use of atomic percent as a measure of the concentration of a dopant, or from the resolution of an imaging system as a measure of the size of the smallest detail that can still be resolved against the background of statistical noise. See also Accuracy and precision.
The Berry paradox arises as a result of systematic ambiguity in the meaning of terms such as "definable" or "nameable". Terms of this kind give rise to vicious circle fallacies. Other terms with this type of ambiguity are: satisfiable, true, false, function, property, class, relation, cardinal, and ordinal.
In mathematics and logic, ambiguity can be considered to be an instance of the logical concept of underdetermination—for example, formula_41 leaves open what the value of "X" is—while its opposite is a self-contradiction, also called inconsistency, paradoxicalness, or oxymoron, or in mathematics an inconsistent system—such as formula_42, which has no solution.
Logical ambiguity and self-contradiction is analogous to visual ambiguity and impossible objects, such as the Necker cube and impossible cube, or many of the drawings of M. C. Escher.
Some languages have been created with the intention of avoiding ambiguity, especially lexical ambiguity. Lojban and Loglan are two related languages that have been created for this purpose, focusing chiefly on syntactic ambiguity as well. The languages can be both spoken and written. They are intended to provide greater technical precision than large natural languages, although historically such attempts at language improvement have been criticized. Languages composed from many diverse sources contain much ambiguity and inconsistency, and their many exceptions to syntactic and semantic rules are time-consuming and difficult to learn.
Christianity and Judaism employ the concept of paradox synonymously with 'ambiguity'. Many Christians and Jews endorse Rudolf Otto's description of the sacred as 'mysterium tremendum et fascinans', the awe-inspiring mystery which fascinates humans. The orthodox Catholic writer G. K. Chesterton regularly employed paradox to tease out the meanings in common concepts which he found ambiguous or to reveal meaning often overlooked or forgotten in common phrases. (The title of one of his most famous books, Orthodoxy, itself employs such a paradox.)
In music, pieces or sections which confound expectations and may be or are interpreted simultaneously in different ways are ambiguous, such as some polytonality, polymeter, other ambiguous meters or rhythms, and ambiguous phrasing, or (Stein 2005, p. 79) any aspect of music. The music of Africa is often purposely ambiguous. To quote Sir Donald Francis Tovey (1935, p. 195), "Theorists are apt to vex themselves with vain efforts to remove uncertainty just where it has a high aesthetic value."
In visual art, certain images are visually ambiguous, such as the Necker cube, which can be interpreted in two ways. Perceptions of such objects remain stable for a time, then may flip, a phenomenon called multistable perception.
The opposite of such ambiguous images are impossible objects.
Pictures or photographs may also be ambiguous at the semantic level: the visual image is unambiguous, but the meaning and narrative may be ambiguous: is a certain facial expression one of excitement or fear, for instance?
In social psychology, ambiguity is a factor used in determining people's responses to various situations. High levels of ambiguity in an emergency (e.g. an unconscious man lying on a park bench) make witnesses less likely to offer any sort of assistance, due to the fear that they may have misinterpreted the situation and acted unnecessarily. Conversely, non-ambiguous emergencies (e.g. an injured person verbally asking for help) elicit more consistent intervention and assistance. With regard to the bystander effect, studies have shown that emergencies deemed ambiguous trigger the classic bystander effect (wherein more witnesses decrease the likelihood of any of them helping) far more than non-ambiguous emergencies.
In computer science, the SI prefixes kilo-, mega- and giga- were historically used in certain contexts to mean the first three powers of 1024 (1024, 1024² and 1024³), contrary to the metric system in which these prefixes unambiguously mean one thousand, one million, and one billion. This usage is particularly prevalent with electronic memory devices (e.g. DRAM) addressed directly by a binary machine register, where a decimal interpretation makes no practical sense.
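The size of the discrepancy between the two conventions is easy to compute; the `SI` and `BINARY` tables below are illustrative names, not a standard library API:

```python
# Decimal (SI) versus binary prefixes, and the size of the discrepancy
# between them at each magnitude.
SI     = {"k": 10**3, "M": 10**6, "G": 10**9, "T": 10**12}
BINARY = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}

# 1 GiB exceeds 1 GB by about 7.4%, so by the gigabyte range the
# ambiguity already affects the second significant digit of a capacity.
discrepancy = BINARY["Gi"] / SI["G"] - 1
print(f"{discrepancy:.3%}")  # 7.374%
```

The gap grows with each prefix (about 2.4% at kilo, 4.9% at mega, 7.4% at giga, 10.0% at tera), which is why the ambiguity matters more for large storage devices than it did for early memory chips.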
Subsequently, the Ki, Mi, and Gi prefixes were introduced so that binary prefixes could be written explicitly, also rendering k, M, and G unambiguous in texts conforming to the new standard. However, this led to a new ambiguity in engineering documents lacking any outward trace of the binary prefixes (which would necessarily indicate the new style): whether the usage of k, M, and G in them remains ambiguous (old style) or not (new style). Note that 1 M (where M is ambiguously 1,000,000 or 1,048,576) is less uncertain than the engineering value 1.0e6 (defined to designate the interval 950,000 to 1,050,000), and that as non-volatile storage devices began to commonly exceed 1 GB in capacity (where the ambiguity begins to routinely impact the second significant digit), GB and TB almost always mean 10⁹ and 10¹² bytes. | https://en.wikipedia.org/wiki?curid=677 |
Aardvark
The aardvark ( ; "Orycteropus afer") is a medium-sized, burrowing, nocturnal mammal native to Africa. It is the only living species of the order Tubulidentata, although other prehistoric species and genera of Tubulidentata are known. Unlike other insectivores, it has a long pig-like snout, which is used to sniff out food. It roams over most of the southern two-thirds of the African continent, avoiding areas that are mainly rocky. A nocturnal feeder, it subsists on ants and termites, which it will dig out of their hills using its sharp claws and powerful legs. It also digs to create burrows in which to live and rear its young. It receives a "least concern" rating from the IUCN, although its numbers seem to be decreasing.
Aardvarks are afrotheres, a clade which also includes elephants, manatees, and hyraxes.
The aardvark is sometimes colloquially called the "African ant bear", "anteater" (not to be confused with the South American anteater), or the "Cape anteater" after the Cape of Good Hope. The name "aardvark" is Afrikaans, comes from earlier Afrikaans "erdvark", and means "earth pig" or "ground pig" ("aarde": earth/ground, "vark": pig), because of its burrowing habits. The name "Orycteropus" means burrowing foot, and the name "afer" refers to Africa. The name of the aardvark's order, "Tubulidentata", comes from its tubule-style teeth.
The aardvark is not closely related to the pig; rather, it is the sole extant representative of the obscure mammalian order Tubulidentata, in which it is usually considered to form one variable species of the genus "Orycteropus", the sole surviving genus in the family Orycteropodidae. The aardvark is not closely related to the South American anteater, despite sharing some characteristics and a superficial resemblance. The similarities are based on convergent evolution. The closest living relatives of the aardvark are the elephant shrews, tenrecs and golden moles. Along with the sirenians, hyraxes, elephants, and their extinct relatives, these animals form the superorder Afrotheria. Studies of the brain have shown similarities with Condylarthra, and, given that clade's status as a wastebasket taxon, this may mean that some species traditionally classified as "condylarths" are actually stem-aardvarks.
Based on fossils, Bryan Patterson has concluded that early relatives of the aardvark appeared in Africa around the end of the Paleocene. The ptolemaiidans, a mysterious clade of mammals with uncertain affinities, may actually be stem-aardvarks, either as a sister clade to Tubulidentata or as a grade leading to true tubulidentates.
The first unambiguous tubulidentate was probably "Myorycteropus africanus" from Kenyan Miocene deposits. The earliest example from the genus "Orycteropus" was "Orycteropus mauritanicus", found in Algeria in deposits from the middle Miocene, with an equally old version found in Kenya. Fossils from the aardvark have been dated to 5 million years, and have been located throughout Europe and the Near East.
The mysterious Pleistocene "Plesiorycteropus" from Madagascar was originally thought to be a tubulidentate that was descended from ancestors that entered the island during the Eocene. However, a number of subtle anatomical differences coupled with recent molecular evidence now lead researchers to believe that "Plesiorycteropus" is a relative of golden moles and tenrecs that achieved an aardvark-like appearance and ecological niche through convergent evolution.
Seventeen poorly defined subspecies of the aardvark have been listed.
The 1911 Encyclopædia Britannica also mentions "O. a. capensis" or Cape ant-bear from South Africa.
The aardvark is vaguely pig-like in appearance. Its body is stout with a prominently arched back and is sparsely covered with coarse hairs. The limbs are of moderate length, with the rear legs being longer than the forelegs. The front feet have lost the pollex (or 'thumb'), resulting in four toes, while the rear feet have all five toes. Each toe bears a large, robust nail which is somewhat flattened and shovel-like, and appears to be intermediate between a claw and a hoof. Whereas the aardvark is considered digitigrade, it appears at times to be plantigrade. This confusion arises because when it squats, it stands on its soles. A contributing characteristic to the burrow-digging capabilities of aardvarks is an endosteal tissue called compacted coarse cancellous bone (CCCB). The stress and strain resistance provided by CCCB allows aardvarks to create their burrows, ultimately leading to a favorable environment for plants and a variety of animals.
An aardvark's weight is typically between . An aardvark's length is usually between , and can reach lengths of when its tail (which can be up to ) is taken into account. It is tall at the shoulder, and has a girth of about . It is the largest member of the proposed clade Afroinsectiphilia. The aardvark is pale yellowish-gray in color and often stained reddish-brown by soil. The aardvark's coat is thin, and the animal's primary protection is its tough skin. Its hair is short on its head and tail; however its legs tend to have longer hair. The hair on the majority of its body is grouped in clusters of 3-4 hairs. The hair surrounding its nostrils is dense to help filter particulate matter out as it digs. Its tail is very thick at the base and gradually tapers.
The greatly elongated head is set on a short, thick neck, and the end of the snout bears a disc, which houses the nostrils. It contains a thin but complete zygomatic arch. The head of the aardvark contains many unique and different features. One of the most distinctive characteristics of the Tubulidentata is their teeth. Instead of having a pulp cavity, each tooth has a cluster of thin, hexagonal, upright, parallel tubes of vasodentin (a modified form of dentine), with individual pulp canals, held together by cementum. The number of columns is dependent on the size of the tooth, with the largest having about 1,500. The teeth have no enamel coating and are worn away and regrow continuously. The aardvark is born with conventional incisors and canines at the front of the jaw, which fall out and are not replaced. Adult aardvarks have only cheek teeth at the back of the jaw, and have a dental formula of: These remaining teeth are peg-like and rootless and are of unique composition. The teeth consist of 14 upper and 12 lower jaw molars. The nasal area of the aardvark is another unique area, as it contains ten nasal conchae, more than any other placental mammal.
The sides of the nostrils are thick with hair. The tip of the snout is highly mobile and is moved by modified mimetic muscles. The fleshy dividing tissue between its nostrils probably has sensory functions, but it is uncertain whether they are olfactory or vibratory in nature. Its nose is made up of more turbinate bones than that of any other mammal, with between 9 and 11, compared to dogs with 4 to 5. With a large quantity of turbinate bones, the aardvark has more space for the moist epithelium, which is the location of the olfactory bulb. The nose contains nine olfactory bulbs, more than in any other mammal. Its keen sense of smell comes not just from the quantity of bulbs in the nose but also from the development of the brain, as its olfactory lobe is very developed. The snout resembles an elongated pig snout. The mouth is small and tubular, typical of species that feed on ants and termites. The aardvark has a long, thin, snakelike, protruding tongue (as much as long) and elaborate structures supporting a keen sense of smell. The ears, which are very effective, are disproportionately long, about long. The eyes are small for its head, and consist only of rods.
The aardvark's stomach has a muscular pyloric area that acts as a gizzard to grind swallowed food up, thereby rendering chewing unnecessary. Its cecum is large. Both sexes emit a strong smelling secretion from an anal gland. Its salivary glands are highly developed and almost completely ring the neck; their output is what causes the tongue to maintain its tackiness. The female has two pairs of teats in the inguinal region.
Genetically speaking, the aardvark is a living fossil, as its chromosomes are highly conserved, reflecting much of the early eutherian arrangement before the divergence of the major modern taxa.
Aardvarks are found in sub-Saharan Africa, where suitable habitat (savannas, grasslands, woodlands and bushland) and food (i.e., ants and termites) are available. They spend the daylight hours in dark burrows to avoid the heat of the day. The only major habitat that they are not present in is swamp forest, as the high water table precludes digging to a sufficient depth. They also avoid terrain rocky enough to cause problems with digging. They have been documented as high as in Ethiopia. They are present throughout sub-Saharan Africa all the way to South Africa with few exceptions. These exceptions include the coastal areas of Namibia, Ivory Coast, and Ghana. They are not found in Madagascar.
The aardvark can live for up to 23 years in captivity. Its keen hearing warns it of predators: lions, leopards, cheetahs, African wild dogs, hyenas, and pythons. Some humans also hunt aardvarks for meat. Aardvarks can dig fast or run in zigzag fashion to elude enemies, but if all else fails, they will strike with their claws, tail and shoulders, sometimes flipping onto their backs and lying motionless except to lash out with all four feet. They are capable of causing substantial damage to unprotected areas of an attacker. They will also dig to escape, as they can, when pressed, dig extremely quickly.
The aardvark is nocturnal and is a solitary creature that feeds almost exclusively on ants and termites (myrmecophagy); the only fruit eaten by aardvarks is the aardvark cucumber. In fact, the cucumber and the aardvark have a symbiotic relationship: the aardvark eats the subterranean fruit and then defecates the seeds near its burrows, where they grow rapidly due to the loose soil and fertile nature of the area. The time spent in the intestine of the aardvark helps the fertility of the seed, and the fruit provides needed moisture for the aardvark. They avoid eating the African driver ant and red ants. Due to their stringent diet requirements, they require a large range to survive. An aardvark emerges from its burrow in the late afternoon or shortly after sunset, and forages over a considerable home range encompassing . While foraging for food, the aardvark will keep its nose to the ground and its ears pointed forward, which indicates that both smell and hearing are involved in the search for food. They zig-zag as they forage and will usually not repeat a route for 5–8 days, as they appear to allow time for the termite nests to recover before feeding on them again.
During a foraging period, they will stop and dig a "V" shaped trench with their forefeet and then sniff it profusely as a means to explore their location. When a concentration of ants or termites is detected, the aardvark digs into it with its powerful front legs, keeping its long ears upright to listen for predators, and takes up an astonishing number of insects with its long, sticky tongue—as many as 50,000 in one night have been recorded. Its claws enable it to dig through the extremely hard crust of a termite or ant mound quickly. It avoids inhaling the dust by sealing the nostrils. When successful, the aardvark's long (up to ) tongue licks up the insects; the termites' biting, or the ants' stinging attacks are rendered futile by the tough skin. After an aardvark's visit to a termite mound, other animals will visit to pick up all the leftovers. Termite mounds alone do not provide enough food for the aardvark, so it also looks for termites that are on the move. When these insects move, they can form columns long and these tend to provide easy pickings with little effort exerted by the aardvark. These columns are more common in areas of livestock or other hoofed animals. The trampled grass and dung attract termites from the "Odontotermes", "Microtermes", and "Pseudacanthotermes" genera.
On a nightly basis they tend to be more active during the first portion of the night (20:00–00:00); however, they do not appear to prefer bright nights over dark ones or vice versa. During adverse weather or if disturbed they will retreat to their burrow systems. They cover between per night; however, some studies have shown that they may traverse as far as in a night.
The aardvark is a rather quiet animal. However, it does make soft grunting sounds as it forages and loud grunts as it makes for its tunnel entrance. It makes a bleating sound if frightened. When it is threatened it will make for one of its burrows. If one is not close it will dig a new one rapidly. This new one will be short and require the aardvark to back out when the coast is clear.
The aardvark is known to be a good swimmer and has been witnessed successfully swimming in strong currents. It can dig a yard of tunnel in about five minutes, but otherwise moves fairly slowly.
When leaving the burrow at night, they pause at the entrance for about ten minutes, sniffing and listening. After this period of watchfulness, it will bound out and within seconds it will be away. It will then pause, prick its ears, twisting its head to listen, then jump and move off to start foraging.
Aside from digging out ants and termites, the aardvark also excavates burrows in which to live; these generally fall into three categories: burrows made while foraging, refuge and resting locations, and permanent homes. Temporary sites are scattered around the home range and are used as refuges, while the main burrow is also used for breeding. Main burrows can be deep and extensive, have several entrances and can be as long as . These burrows can be large enough for a man to enter. The aardvark changes the layout of its home burrow regularly, and periodically moves on and makes a new one. The old burrows are an important part of the African wildlife scene. As they are vacated, they are inhabited by smaller animals like the African wild dog, ant-eating chat, "Nycteris thebaica" and warthogs. Other animals that use them are hares, mongooses, hyenas, owls, pythons, and lizards. Without these refuges many animals would die during wildfire season. Only mothers and young share burrows; otherwise, the aardvark lives in small family groups or as a solitary creature. If attacked in the tunnel, it will escape by digging out of the tunnel, thereby placing the fresh fill between it and its predator, or if it decides to fight it will roll onto its back and attack with its claws. The aardvark has been known to sleep in a recently excavated ant nest, which also serves as protection from its predators.
Aardvarks pair only during the breeding season; after a gestation period of seven months, one cub weighing around is born during May–July. When born, the young has flaccid ears and many wrinkles. When nursing, it will nurse off each teat in succession. After two weeks, the folds of skin disappear and after three, the ears can be held upright. After 5–6 weeks, body hair starts growing. It is able to leave the burrow to accompany its mother after only two weeks and eats termites at 9 weeks, and is weaned between three months and 16 weeks. At six months of age, it is able to dig its own burrows, but it will often remain with the mother until the next mating season, and is sexually mature from approximately two years of age.
Aardvarks were thought to have declining numbers; however, this is possibly because they are not readily seen. There are no definitive counts because of their nocturnal and secretive habits; however, their numbers seem to be stable overall. They are not considered common anywhere in Africa, but due to their large range, they maintain sufficient numbers. There may be a slight decrease in numbers in eastern, northern, and western Africa. Southern African numbers are not decreasing. The aardvark receives an official designation from the IUCN of least concern. However, it is a species in a precarious situation, as it is so dependent on such specific food; therefore, if a problem arises with the abundance of termites, the species as a whole would be affected drastically.
Aardvarks handle captivity well. The first zoo to have one was London Zoo in 1869, which had an animal from South Africa.
In African folklore, the aardvark is much admired because of its diligent quest for food and its fearless response to soldier ants. Hausa magicians make a charm from the heart, skin, forehead, and nails of the aardvark, which they then proceed to pound together with the root of a certain tree. Wrapped in a piece of skin and worn on the chest, the charm is said to give the owner the ability to pass through walls or roofs at night. The charm is said to be used by burglars and those seeking to visit young girls without their parents' permission. Also, some tribes, such as the Margbetu, Ayanda, and Logo, will use aardvark teeth to make bracelets, which are regarded as good luck charms. The meat, which has a resemblance to pork, is eaten in certain cultures.
The Egyptian god Set is usually depicted with the head of an unidentified animal, whose similarity to an aardvark has been noted in scholarship.
The titular character of "Arthur", an animated television series for children based on a book series and produced by WGBH, shown in more than 180 countries, is an aardvark.
Otis the Aardvark was a puppet character used on Children's BBC programming.
An aardvark features as the antagonist in the cartoon "The Ant and the Aardvark" as well as in the Canadian animated series "The Raccoons".
In the military, the U.S. Air Force supersonic fighter-bomber F-111/FB-111 was nicknamed the Aardvark because its long nose resembled the animal's. The name also suited its nocturnal missions, flown at very low level, employing ordnance that could penetrate deep into the ground. In the US Navy, the squadron VF-114 was nicknamed the Aardvarks, flying F-4s and then F-14s. The squadron mascot was adapted from the animal in the comic strip "B.C.", which the F-4 was said to resemble.
"Cerebus the Aardvark" is a 300-issue comic book series by Dave Sim. | https://en.wikipedia.org/wiki?curid=680 |
Adobe
Adobe is a building material made from earth and organic materials. "Adobe" is Spanish for mudbrick, but in some English-speaking regions of Spanish heritage the term is used to refer to any kind of earthen construction. Most adobe buildings are similar in appearance to cob and rammed earth buildings. Adobe is among the earliest building materials and is used throughout the world.
Adobe bricks are rectangular prisms small enough that they can quickly air dry individually without cracking. They can be subsequently assembled, with the application of adobe mud to bond the individual bricks into a structure. There is no standard size, with substantial variations over the years and in different regions. In some areas a popular size measured weighing about ; in other contexts the size is weighing about . The maximum sizes can reach up to ; above this weight it becomes difficult to move the pieces, and it is preferred to ram the mud "in situ", resulting in a different typology known as rammed earth.
In dry climates, adobe structures are extremely durable, and account for some of the oldest existing buildings in the world. Adobe buildings offer significant advantages due to their greater thermal mass, but they are known to be particularly susceptible to earthquake damage if they are not reinforced. Cases where adobe structures were widely damaged during earthquakes include the 1976 Guatemala earthquake, the 2003 Bam earthquake, and the 2010 Chile earthquake.
Buildings made of sun-dried earth are common throughout the world (Middle East, Western Asia, North Africa, West Africa, South America, southwestern North America, Spain, and Eastern Europe.) Adobe had been in use by indigenous peoples of the Americas in the Southwestern United States, Mesoamerica, and the Andes for several thousand years. Puebloan peoples built their adobe structures with handfuls or basketfuls of adobe, until the Spanish introduced them to making bricks. Adobe bricks were used in Spain from the Late Bronze and Iron Ages (eighth century BCE onwards). Its wide use can be attributed to its simplicity of design and manufacture, and economics.
A distinction is sometimes made between the smaller "adobes", which are about the size of ordinary baked bricks, and the larger "adobines", some of which may be one to two yards (1–2 m) long.
The word "adobe" has existed for around 4000 years with relatively little change in either pronunciation or meaning. The word can be traced from the Middle Egyptian (c. 2000 BC) word "ɟbt" "mud brick". Middle Egyptian evolved into Late Egyptian, Demotic or "pre-Coptic", and finally to Coptic (c. 600 BC), where it appeared as τωωβε. This was adopted into Arabic as "aṭ-ṭawbu" or "aṭ-ṭūbu", with the definite article "al-" attached. This was in turn assimilated into the Old Spanish language as "adobe", probably via Mozarabic. English borrowed the word from Spanish in the early 18th century, still referring to mudbrick construction.
In more modern English usage, the term "adobe" has come to include a style of architecture popular in the desert climates of North America, especially in New Mexico, regardless of the construction method.
An adobe brick is a composite material made of earth mixed with water and an organic material such as straw or dung. The soil composition typically contains sand, silt and clay. Straw is useful in binding the brick together and allowing the brick to dry evenly, thereby preventing cracking due to uneven shrinkage rates through the brick. Dung offers the same advantage. The most desirable soil texture for producing the mud of adobe is 15% clay, 10–30% silt, and 55–75% fine sand. Another source quotes 15–25% clay and the remainder sand and coarser particles up to cobbles, with no deleterious effect. Modern adobe is stabilized with either emulsified asphalt or Portland cement up to 10% by weight.
No more than half the clay content should be expansive clays, with the remainder non-expansive illite or kaolinite. Too much expansive clay results in uneven drying through the brick, resulting in cracking, while too much kaolinite will make a weak brick. Typically the soils of the Southwest United States, where such construction has been widely used, are an adequate composition.
Adobe walls are load-bearing, i.e. they carry their own weight into the foundation rather than relying on another structure, hence the adobe must have sufficient compressive strength. In the United States, most building codes call for a minimum compressive strength of 300 lbf/in2 (2.07 newton/mm2) for the adobe block. Adobe construction should be designed so as to avoid lateral structural loads that would cause bending loads. The building codes require that the building sustain a 1 g lateral acceleration earthquake load. Such an acceleration will cause lateral loads on the walls, resulting in shear and bending and inducing tensile stresses. To withstand such loads, the codes typically call for a tensile modulus of rupture strength of at least 50 lbf/in2 (0.345 newton/mm2) for the finished block.
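The metric equivalents quoted above follow directly from the psi-to-pascal conversion factor. A minimal sketch (the helper name is illustrative, not from any building code):

```python
# Convert the code-minimum adobe strengths quoted above from lbf/in^2 (psi)
# to N/mm^2 (equivalently MPa). 1 psi = 6894.757 Pa = 6.894757e-3 N/mm^2.
PSI_TO_N_PER_MM2 = 6894.757e-6

def psi_to_n_per_mm2(psi: float) -> float:
    """Convert pounds-force per square inch to newtons per square millimetre."""
    return psi * PSI_TO_N_PER_MM2

compressive_min = psi_to_n_per_mm2(300)  # minimum compressive strength
rupture_min = psi_to_n_per_mm2(50)       # minimum modulus of rupture

print(f"{compressive_min:.2f} N/mm^2")   # ~2.07, as quoted
print(f"{rupture_min:.3f} N/mm^2")       # ~0.345, as quoted
```

Running this reproduces the parenthetical values in the text (2.07 and 0.345 N/mm2).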
In addition to being an inexpensive material with a small resource cost, adobe can serve as a significant heat reservoir due to the thermal properties inherent in the massive walls typical in adobe construction. In climates typified by hot days and cool nights, the high thermal mass of adobe mediates the high and low temperatures of the day, moderating the temperature of the living space. The massive walls require a large and relatively long input of heat from the sun (radiation) and from the surrounding air (convection) before they warm through to the interior. After the sun sets and the temperature drops, the warm wall will continue to transfer heat to the interior for several hours due to the time-lag effect. Thus, a well-planned adobe wall of the appropriate thickness is very effective at controlling inside temperature through the wide daily fluctuations typical of desert climates, a factor which has contributed to its longevity as a building material.
Thermodynamic material properties have significant variation in the literature. Some experiments suggest that the standard consideration of conductivity is not adequate for this material, as its main thermodynamic property is inertia, and conclude that experimental tests should be performed over a longer period of time than usual, preferably with changing thermal jumps. There is an effective R-value for a north-facing 10-in wall of R0=10 hr ft2 °F/Btu, which corresponds to thermal conductivity k=10 in x 1 ft/12 in /R0=0.33 Btu/(hr ft °F) or 0.57 W/(m K), in agreement with the thermal conductivity reported from another source. To determine the total R-value of a wall, scale R0 by the thickness of the wall in inches. The thermal resistance of adobe is also stated as an R-value for a 10-inch wall of R0=4.1 hr ft2 °F/Btu. Another source provides the following properties: conductivity=0.30 Btu/(hr ft °F) or 0.52 W/(m K); specific heat capacity=0.24 Btu/(lb °F) or 1 kJ/(kg K); and density=106 lb/ft3 or 1700 kg/m3, giving heat capacity=25.4 Btu/(ft3 °F) or 1700 kJ/(m3 K). Using the average value of the thermal conductivity, k = 0.32 Btu/(hr ft °F) or 0.55 W/(m K), the thermal diffusivity is calculated to be 0.013 ft2/h or 3.3x10−7 m2/s.
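The diffusivity figure above is just k divided by the volumetric heat capacity, and the R-value scaling rule is linear in wall thickness. A minimal sketch using the numbers quoted in the text (function names are illustrative):

```python
# Recompute the adobe thermal figures quoted above from first principles:
# thermal diffusivity alpha = k / (rho * c).

def diffusivity(k, density, specific_heat):
    """Thermal diffusivity; units follow whatever consistent system the inputs use."""
    return k / (density * specific_heat)

# US customary: k = 0.32 Btu/(hr ft F), rho = 106 lb/ft^3, c = 0.24 Btu/(lb F)
alpha_us = diffusivity(0.32, 106, 0.24)   # -> ~0.013 ft^2/hr, as quoted
# SI: k = 0.55 W/(m K), rho = 1700 kg/m^3, c = 1000 J/(kg K)
alpha_si = diffusivity(0.55, 1700, 1000)  # -> ~3.3e-7 m^2/s, as quoted

def r_value(thickness_in, r0_10in=10.0):
    """Scale the quoted 10-inch R-value linearly to another wall thickness."""
    return r0_10in * thickness_in / 10.0

print(alpha_us, alpha_si, r_value(14))
```

The same two-line calculation reproduces both the US customary and SI diffusivity values, confirming that the quoted property sets are mutually consistent.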
Poured and puddled adobe (puddled clay, piled earth), today called "cob", is made by placing soft adobe in layers, rather than by making individual dried bricks or using a form. "Puddle" is a general term for a clay or clay and sand-based material worked into a dense, plastic state. These are the oldest methods of building with adobe in the Americas until holes in the ground were used as forms, and later wooden forms used to make individual bricks were introduced by the Spanish.
Bricks made from adobe are usually made by pressing the mud mixture into an open timber frame. In North America, the brick is typically about in size. The mixture is molded into the frame, which is removed after initial setting. After drying for a few hours, the bricks are turned on edge to finish drying. Slow drying in shade reduces cracking.
The same mixture, without straw, is used to make mortar and often plaster on interior and exterior walls. Some cultures used lime-based cement for the plaster to protect against rain damage.
Depending on the form into which the mixture is pressed, adobe can encompass nearly any shape or size, provided drying is even and the mixture includes reinforcement for larger bricks. Reinforcement can include manure, straw, cement, rebar or wooden posts. Straw, cement, or manure added to a standard adobe mixture can produce a stronger, more crack-resistant brick. A test is done on the soil content first. To do so, a sample of the soil is mixed into a clear container with some water, creating an almost completely saturated liquid. The container is shaken vigorously for one minute. It is then allowed to settle for a day until the soil has settled into layers. The heaviest particles settle out first, with sand above them, silt above the sand, and very fine clay and organic matter staying in suspension for days. After the water has cleared, percentages of the various particles can be determined. Fifty to 60 percent sand and 35 to 40 percent clay will yield strong bricks. The Cooperative State Research, Education, and Extension Service at New Mexico State University recommends a mix of not more than 1/3 clay, not less than 1/2 sand, and never more than 1/3 silt.
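Once the jar test yields the sand, clay, and silt fractions, checking them against the NMSU guideline quoted above is simple arithmetic. A minimal sketch (the function name is illustrative, not from any standard):

```python
# Check measured jar-test fractions against the guideline quoted above
# (NMSU: not more than 1/3 clay, not less than 1/2 sand, not more than 1/3 silt).

def mix_is_acceptable(sand: float, clay: float, silt: float) -> bool:
    """Return True if the soil fractions fall within the quoted NMSU limits."""
    assert abs(sand + clay + silt - 1.0) < 0.05, "fractions should sum to ~1"
    return sand >= 0.5 and clay <= 1/3 and silt <= 1/3

print(mix_is_acceptable(0.55, 0.30, 0.15))  # True: within the quoted limits
print(mix_is_acceptable(0.40, 0.45, 0.15))  # False: too much clay, too little sand
```

Note that the "35 to 40 percent clay" figure in the preceding sentence slightly exceeds the NMSU one-third clay limit; the two sources in the text disagree, and the sketch encodes only the NMSU guideline.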
The ground supporting an adobe structure should be compressed, as the weight of adobe wall is significant and foundation settling may cause cracking of the wall. Footing depth is to below the ground frost level. The footing and stem wall are commonly 24 and 14 inches thick, respectively. Modern construction codes call for the use of reinforcing steel in the footing and stem wall. Adobe bricks are laid by course. Adobe walls rarely rise above two stories, as they are load-bearing and adobe has low structural strength. When creating window and door openings, a lintel is placed on top of the opening to support the bricks above. Atop the last courses of brick, bond beams made of heavy wood beams or modern reinforced concrete are laid to provide a horizontal bearing plate for the roof beams and to redistribute lateral earthquake loads to shear walls more able to carry the forces. To protect the interior and exterior adobe walls, finishes such as mud plaster, whitewash or stucco can be applied. These protect the adobe wall from water damage, but need to be reapplied periodically. Alternatively, the walls can be finished with other nontraditional plasters that provide longer protection. Bricks made with stabilized adobe generally do not need protection of plasters.
The traditional adobe roof has been constructed using a mixture of soil/clay, water, sand and organic materials. The mixture was then formed and pressed into wood forms, producing rows of dried earth bricks that would then be laid across a support structure of wood and plastered into place with more adobe.
Depending on the materials available, a roof may be assembled using wood or metal beams to create a framework on which to begin layering adobe bricks. Depending on the thickness of the adobe bricks, the framework may be preformed using steel framing with metal fencing or wiring layered over it, so that the load is spread evenly as masses of adobe are worked across the fencing like cob and allowed to air dry. This method was demonstrated with an adobe blend heavily impregnated with cement to allow even drying and prevent cracking.
The more traditional flat adobe roofs are functional only in dry climates that are not exposed to snow loads. The heaviest wooden beams, called vigas, lie atop the wall. Across the vigas lie smaller members called latillas and upon those brush is then laid. Finally, the adobe layer is applied.
To construct a flat adobe roof, beams of wood were laid to span the building, the ends of which were attached to the tops of the walls. Once the vigas, latillas and brush are laid, adobe bricks are placed. An adobe roof is often laid with bricks slightly larger in width to ensure a greater expanse is covered when placing the bricks onto the roof. Following each individual brick should be a layer of adobe mortar, recommended to be at least thick to make certain there is ample strength between the brick's edges and also to provide a relative moisture barrier during rain.
Roof design evolved around 1850 in the American Southwest. Three inches of adobe mud was applied on top of the latillas, then 18 inches of dry adobe dirt applied to the roof. The dirt was contoured into a low slope to a downspout, also known as a 'canal'. When moisture was applied to the roof, the clay particles expanded to create a waterproof membrane. Once a year it was necessary to pull the weeds from the roof and re-slope the dirt as needed.
Depending on the materials, adobe roofs can be inherently fire-proof. The construction of a chimney can greatly influence the construction of the roof supports, creating an extra need for care in choosing the materials. The builders can make an adobe chimney by stacking simple adobe bricks in a similar fashion as the surrounding walls.
In 1927, the Uniform Building Code (UBC) was adopted in the United States. Local ordinances referencing the UBC added requirements for building with adobe. These included: restriction of the height of adobe structures to one story, requirements for the adobe mix (compressive and shear strength), and new requirements stating that every building must be designed to withstand seismic activity, specifically lateral forces. By the 1980s, however, seismic-related changes in the California Building Code effectively ended solid-wall adobe construction in California, although post-and-beam adobe and veneers are still being used.
The largest structure ever made from adobe is the Arg-é Bam, built by the Achaemenid Empire. Other large adobe structures are the Huaca del Sol in Peru, with 100 million signed bricks, and the "ciudadelas" of Chan Chan and Tambo Colorado, both in Peru. | https://en.wikipedia.org/wiki?curid=682 |
Adventure
An adventure is an exciting experience that is typically a bold, sometimes risky, undertaking. Adventures may be activities with some potential for physical danger such as traveling, exploring, skydiving, mountain climbing, scuba diving, river rafting or participating in extreme sports. Adventures are often undertaken to create psychological arousal or in order to achieve a greater goal such as the pursuit of knowledge that can only be obtained in a risky manner.
Adventurous experiences create psychological arousal, which can be interpreted as negative (e.g. fear) or positive (e.g. flow). For some people, adventure becomes a major pursuit in and of itself. According to adventurer André Malraux, in his "La Condition Humaine" (1933), "If a man is not ready to risk his life, where is his dignity?". Similarly, Helen Keller stated that "Life is either a daring adventure or nothing."
Outdoor adventurous activities are typically undertaken for the purposes of recreation or excitement: examples are adventure racing and adventure tourism. Adventurous activities can also lead to gains in knowledge, such as those undertaken by explorers and pioneers – the British adventurer Jason Lewis, for example, uses adventures to draw global sustainability lessons from living within finite environmental constraints on expeditions to share with schoolchildren. Adventure education intentionally uses challenging experiences for learning.
Author Jon Levy suggests that an experience should meet several criteria to be considered an adventure.
Some of the oldest and most widespread stories in the world are stories of adventure such as Homer's "The Odyssey".
The knight errant was the form the "adventure seeker" character took in the late Middle Ages.
The adventure novel exhibits these "protagonist on adventurous journey" characteristics as do many popular feature films, such as "Star Wars" and "Raiders of the Lost Ark".
Adventure books may have the theme of the hero or main character going to face the wilderness or Mother Nature. Examples include books such as "Hatchet" or "My Side of the Mountain". These books are less about "questing", such as in mythology or other adventure novels, but more about surviving on their own, living off the land, gaining new experiences, and becoming closer to the natural world.
Many adventures are based on the idea of a quest: the hero goes off in pursuit of a reward, whether it be a skill, prize, or perhaps the safety of a person. On the way, the hero must overcome various obstacles.
In video-game culture, an adventure game is a video game in which the player assumes the role of a protagonist in an interactive story driven by exploration and puzzle-solving. The genre's focus on story allows it to draw heavily from other narrative-based media, literature and film, encompassing a wide variety of literary genres. Many adventure games (text and graphic) are designed for a single player, since this emphasis on story and character makes multi-player design difficult.
From ancient times, travelers and explorers have written about their adventures. Journals which became best-sellers in their day were written, such as Marco Polo's journal "The Travels of Marco Polo" or Mark Twain's "Roughing It". Others were personal journals, only later published, such as the journals of Lewis and Clark or Captain James Cook's journals. There are also books written by those not directly a part of the adventure in question, such as "The Right Stuff" by Tom Wolfe, or books written by those participating in the adventure but in a format other than that of a journal, such as "Conquistadors of the Useless" by Lionel Terray. Documentaries often use the theme of adventure as well.
There are many sports classified as adventure games or sports, due to their inherent danger and excitement. Some of these include mountain climbing, skydiving, or other extreme sports. | https://en.wikipedia.org/wiki?curid=683 |
Asia
Asia () is Earth's largest and most populous continent, located primarily in the Eastern and Northern Hemispheres. It shares the continental landmass of Eurasia with the continent of Europe and the continental landmass of Afro-Eurasia with both Europe and Africa. Asia covers an area of , about 30% of Earth's total land area and 8.7% of the Earth's total surface area. The continent, which has long been home to the majority of the human population, was the site of many of the first civilizations. Asia is notable not only for its overall large size and population, but also for its dense and large settlements, as well as its vast, barely populated regions. Its 4.5 billion people () constitute roughly 60% of the world's population.
In general terms, Asia is bounded on the east by the Pacific Ocean, on the south by the Indian Ocean, and on the north by the Arctic Ocean. The border of Asia with Europe is a historical and cultural construct, as there is no clear physical and geographical separation between them. It is somewhat arbitrary and has moved since its first conception in classical antiquity. The division of Eurasia into two continents reflects East–West cultural, linguistic, and ethnic differences, some of which vary on a spectrum rather than with a sharp dividing line. The most commonly accepted boundaries place Asia to the east of the Suez Canal separating it from Africa; and to the east of the Turkish Straits, the Ural Mountains and Ural River, and to the south of the Caucasus Mountains and the Caspian and Black Seas, separating it from Europe.
China and India alternated in being the largest economies in the world from 1 to 1800 CE. China was a major economic power and attracted many to the east, and for many the legendary wealth and prosperity of the ancient culture of India personified Asia, attracting European commerce, exploration and colonialism. The accidental discovery of a trans-Atlantic route from Europe to America by Columbus while searching for a route to India demonstrates this deep fascination. The Silk Road became the main east–west trading route in the Asian hinterlands while the Straits of Malacca stood as a major sea route. Asia has exhibited economic dynamism (particularly East Asia) as well as robust population growth during the 20th century, but overall population growth has since fallen. Asia was the birthplace of most of the world's mainstream religions including Hinduism, Zoroastrianism, Judaism, Jainism, Buddhism, Confucianism, Taoism, Christianity, Islam, Sikhism, as well as many other religions.
Given its size and diversity, the concept of Asia—a name dating back to classical antiquity—may actually have more to do with human geography than physical geography. Asia varies greatly across and within its regions with regard to ethnic groups, cultures, environments, economics, historical ties and government systems. It also has a mix of many different climates ranging from the equatorial south via the hot desert in the Middle East, temperate areas in the east and the continental centre to vast subarctic and polar areas in Siberia.
The boundary between Asia and Africa is the Red Sea, the Gulf of Suez, and the Suez Canal. This makes Egypt a transcontinental country, with the Sinai peninsula in Asia and the remainder of the country in Africa.
The threefold division of the Old World into Europe, Asia and Africa has been in use since the 6th century BC, due to Greek geographers such as Anaximander and Hecataeus. Anaximander placed the boundary between Asia and Europe along the Phasis River (the modern Rioni river) in the Caucasus region of Georgia (from its mouth by Poti on the Black Sea coast, through the Surami Pass and along the Kura River to the Caspian Sea), a convention still followed by Herodotus in the 5th century BC. During the Hellenistic period, this convention was revised, and the boundary between Europe and Asia was now considered to be the Tanais (the modern Don River). This is the convention used by Roman era authors such as Posidonius, Strabo and Ptolemy.
The border between Asia and Europe was historically defined by European academics. The Don River became unsatisfactory to northern Europeans when Peter the Great, tsar of Russia, defeating rival claims of Sweden and the Ottoman Empire to the eastern lands, and armed resistance by the tribes of Siberia, synthesized a new Russian Empire extending to the Ural Mountains and beyond, founded in 1721. The major geographical theorist of the empire was a former Swedish prisoner-of-war, taken at the Battle of Poltava in 1709 and assigned to Tobolsk, where he associated with Peter's Siberian official, Vasily Tatishchev, and was allowed freedom to conduct geographical and anthropological studies in preparation for a future book.
In Sweden, five years after Peter's death, Philip Johan von Strahlenberg published a new atlas in 1730 proposing the Ural Mountains as the border of Asia. Tatishchev announced that he had proposed the idea to von Strahlenberg. The latter had suggested the Emba River as the lower boundary. Over the next century various proposals were made until the Ural River prevailed in the mid-19th century. The border had been moved perforce from the Black Sea to the Caspian Sea into which the Ural River projects. The border between the Black Sea and the Caspian is usually placed along the crest of the Caucasus Mountains, although it is sometimes placed further north.
The border between Asia and the region of Oceania is usually placed somewhere in the Malay Archipelago. The Maluku Islands in Indonesia are often considered to lie on the border of southeast Asia, with New Guinea, to the east of the islands, being wholly part of Oceania. The terms Southeast Asia and Oceania, devised in the 19th century, have had several vastly different geographic meanings since their inception. The chief factor in determining which islands of the Malay Archipelago are Asian has been the location of the colonial possessions of the various empires there (not all European). Lewis and Wigen assert, "The narrowing of 'Southeast Asia' to its present boundaries was thus a gradual process."
Geographical Asia is a cultural artifact of European conceptions of the world, beginning with the Ancient Greeks, being imposed onto other cultures, an imprecise concept causing endemic contention about what it means. Asia does not exactly correspond to the cultural borders of its various types of constituents.
From the time of Herodotus a minority of geographers have rejected the three-continent system (Europe, Africa, Asia) on the grounds that there is no substantial physical separation between them. For example, Sir Barry Cunliffe, the emeritus professor of European archeology at Oxford, argues that Europe has been geographically and culturally merely "the western excrescence of the continent of Asia".
Geographically, Asia is the major eastern constituent of the continent of Eurasia with Europe being a northwestern peninsula of the landmass. Asia, Europe and Africa make up a single continuous landmass—Afro-Eurasia (except for the Suez Canal)—and share a common continental shelf. Almost all of Europe and the better part of Asia sit atop the Eurasian Plate, adjoined on the south by the Arabian and Indian Plate and with the easternmost part of Siberia (east of the Chersky Range) on the North American Plate.
The idea of a place called "Asia" was originally a concept of Greek civilization, though this might not correspond to the entire continent currently known by that name. The English word comes from Latin literature, where it has the same form, "Asia". Whether "Asia" in other languages comes from Latin of the Roman Empire is much less certain, and the ultimate source of the Latin word is uncertain, though several theories have been published. One of the first classical writers to use Asia as a name of the whole continent was Pliny. This metonymical change in meaning is common and can be observed in some other geographical names, such as Scandinavia (from Scania).
Before Greek poetry, the Aegean Sea area was in a Greek Dark Age, at the beginning of which syllabic writing was lost and alphabetic writing had not begun. Prior to then in the Bronze Age the records of the Assyrian Empire, the Hittite Empire and the various Mycenaean states of Greece mention a region undoubtedly Asia, certainly in Anatolia, including if not identical to Lydia. These records are administrative and do not include poetry.
The Mycenaean states were destroyed about 1200 BCE by unknown agents although one school of thought assigns the Dorian invasion to this time. The burning of the palaces baked clay diurnal administrative records written in a Greek syllabic script called Linear B, deciphered by a number of interested parties, most notably by a young World War II cryptographer, Michael Ventris, subsequently assisted by the scholar, John Chadwick. A major cache discovered by Carl Blegen at the site of ancient Pylos included hundreds of male and female names formed by different methods.
Some of these are of women held in servitude (as study of the society implied by the content reveals). They were used in trades, such as cloth-making, and usually came with children. The epithet "lawiaiai", "captives", associated with some of them identifies their origin. Some are ethnic names. One in particular, "aswiai", identifies "women of Asia". Perhaps they were captured in Asia, but some others, "Milatiai", appear to have been of Miletus, a Greek colony, which would not have been raided for slaves by Greeks. Chadwick suggests that the names record the locations where these foreign women were purchased. The name is also in the singular, "Aswia", which refers both to the name of a country and to a female from there. There is a masculine form, . This "Aswia" appears to have been a remnant of a region known to the Hittites as Assuwa, centered on Lydia, or "Roman Asia". This name, "Assuwa", has been suggested as the origin for the name of the continent "Asia". The Assuwa league was a confederation of states in western Anatolia, defeated by the Hittites under Tudhaliya I around 1400 BCE.
Alternatively, the etymology of the term may be from the Akkadian word , which means 'to go outside' or 'to ascend', referring to the direction of the sun at sunrise in the Middle East and also likely connected with the Phoenician word "asa" meaning 'east'. This may be contrasted to a similar etymology proposed for "Europe", as being from Akkadian 'to enter' or 'set' (of the sun).
T.R. Reid supports this alternative etymology, noting that the ancient Greek name must have derived from "asu", meaning 'east' in Assyrian ("ereb" for "Europe" meaning 'west'). The ideas of "Occidental" (from Latin "occidens" 'setting') and "Oriental" (from Latin "oriens" for 'rising') are also European inventions, synonymous with "Western" and "Eastern". Reid further emphasizes that it explains the Western point of view of placing all the peoples and cultures of Asia into a single classification, almost as if there were a need for setting the distinction between Western and Eastern civilizations on the Eurasian continent. Kazuo Ogura and Tenshin Okakura are two outspoken Japanese figures on the subject.
Latin Asia and Greek Ἀσία appear to be the same word. Roman authors translated Ἀσία as Asia. The Romans named a province Asia, located in western Anatolia (in modern-day Turkey). There was an Asia Minor and an Asia Major located in modern-day Iraq. As the earliest evidence of the name is Greek, it is likely circumstantially that Asia came from Ἀσία, but ancient transitions, due to the lack of literary contexts, are difficult to catch in the act. The most likely vehicles were the ancient geographers and historians, such as Herodotus, who were all Greek. Ancient Greek certainly evidences early and rich uses of the name.
The first continental use of Asia is attributed to Herodotus (about 440 BCE), not because he innovated it, but because his "Histories" are the earliest surviving prose to describe it in any detail. He defines it carefully, mentioning the previous geographers whom he had read, but whose works are now missing. By it he means Anatolia and the Persian Empire, in contrast to Greece and Egypt.
Herodotus comments that he is puzzled as to why three women's names were "given to a tract which is in reality one" (Europa, Asia, and Libya, referring to Africa), stating that most Greeks assumed that Asia was named after the wife of Prometheus (i.e. Hesione), but that the Lydians say it was named after Asies, son of Cotys, who passed the name on to a tribe at Sardis. In Greek mythology, "Asia" ("Ἀσία") or "Asie" ("Ἀσίη") was the name of a "Nymph or Titan goddess of Lydia".
In ancient Greek religion, places were under the care of female divinities, parallel to guardian angels. The poets detailed their doings and generations in allegoric language salted with entertaining stories, which playwrights subsequently transformed into classical Greek drama and which became "Greek mythology". For example, Hesiod mentions the daughters of Tethys and Ocean, among whom are a "holy company", "who with the Lord Apollo and the Rivers have youths in their keeping". Many of these are geographic: Doris, Rhodea, Europa, Asia. Hesiod explains:
The Iliad (attributed by the ancient Greeks to Homer) mentions two Phrygians (the tribe that replaced the Luvians in Lydia) in the Trojan War named Asios (an adjective meaning "Asian"); and also a marsh or lowland containing a marsh in Lydia as . According to many Muslims, the term came from Ancient Egypt's Queen Asiya, the adoptive mother of Moses.
The history of Asia can be seen as the distinct histories of several peripheral coastal regions: East Asia, South Asia, Southeast Asia and the Middle East, linked by the interior mass of the Central Asian steppes.
The coastal periphery was home to some of the world's earliest known civilizations, each of them developing around fertile river valleys. The civilizations in Mesopotamia, the Indus Valley and the Yellow River shared many similarities. These civilizations may well have exchanged technologies and ideas such as mathematics and the wheel. Other innovations, such as writing, seem to have been developed individually in each area. Cities, states and empires developed in these lowlands.
The central steppe region had long been inhabited by horse-mounted nomads who could reach all areas of Asia from the steppes. The earliest postulated expansion out of the steppe is that of the Indo-Europeans, who spread their languages into the Middle East, South Asia, and the borders of China, where the Tocharians resided. The northernmost part of Asia, including much of Siberia, was largely inaccessible to the steppe nomads, owing to the dense forests, climate and tundra. These areas remained very sparsely populated.
The center and the peripheries were mostly kept separated by mountains and deserts. The Caucasus and Himalaya mountains and the Karakum and Gobi deserts formed barriers that the steppe horsemen could cross only with difficulty. While the urban city dwellers were more advanced technologically and socially, in many cases they could do little in a military aspect to defend against the mounted hordes of the steppe. However, the lowlands did not have enough open grasslands to support a large horsebound force; for this and other reasons, the nomads who conquered states in China, India, and the Middle East often found themselves adapting to the local, more affluent societies.
The Islamic Caliphate's defeats of the Byzantine and Persian empires brought West Asia, southern parts of Central Asia and western parts of South Asia under its control during its conquests of the 7th century. The Mongol Empire conquered a large part of Asia in the 13th century, an area extending from China to Europe. Before the Mongol invasion, the Song dynasty reportedly had approximately 120 million citizens; the 1300 census which followed the invasion reported roughly 60 million people.
The Black Death, one of the most devastating pandemics in human history, is thought to have originated in the arid plains of central Asia, where it then travelled along the Silk Road.
The Russian Empire began to expand into Asia from the 17th century, and would eventually take control of all of Siberia and most of Central Asia by the end of the 19th century. The Ottoman Empire controlled Anatolia, most of the Middle East, North Africa and the Balkans from the mid 16th century onwards. In the 17th century, the Manchu conquered China and established the Qing dynasty. The Islamic Mughal Empire and the Hindu Maratha Empire controlled much of India in the 16th and 18th centuries respectively. The Empire of Japan controlled most of East Asia and much of Southeast Asia, New Guinea and the Pacific islands until the end of World War II.
Asia is the largest continent on Earth. It covers 9% of the Earth's total surface area (or 30% of its land area), and has the longest coastline, at . Asia is generally defined as comprising the eastern four-fifths of Eurasia. It is located to the east of the Suez Canal and the Ural Mountains, and south of the Caucasus Mountains (or the Kuma–Manych Depression) and the Caspian and Black Seas. It is bounded on the east by the Pacific Ocean, on the south by the Indian Ocean and on the north by the Arctic Ocean. Asia is subdivided into 49 countries, five of which (Georgia, Azerbaijan, Russia, Kazakhstan and Turkey) are transcontinental, having part of their land in Europe.
Asia has extremely diverse climates and geographic features. Climates range from arctic and subarctic in Siberia to tropical in southern India and Southeast Asia. It is moist across southeast sections, and dry across much of the interior. Some of the largest daily temperature ranges on Earth occur in western sections of Asia. The monsoon circulation dominates across southern and eastern sections, due to the presence of the Himalayas forcing the formation of a thermal low which draws in moisture during the summer. Southwestern sections of the continent are hot. Siberia is one of the coldest places in the Northern Hemisphere, and can act as a source of arctic air masses for North America. The most active place on Earth for tropical cyclone activity lies northeast of the Philippines and south of Japan. The Gobi Desert is in Mongolia and the Arabian Desert stretches across much of the Middle East. The Yangtze River in China is the longest river in the continent. The Himalayas between Nepal and China are the tallest mountain range in the world. Tropical rainforests stretch across much of southern Asia and coniferous and deciduous forests lie farther north.
There are various approaches to the regional division of Asia. The following subdivision into regions is used, among others, by the UN statistics agency UNSD. This division of Asia into regions by the United Nations is done solely for statistical reasons and does not imply any assumption about political or other affiliations of countries and territories.
A survey carried out in 2010 by global risk analysis firm Maplecroft identified 16 countries that are extremely vulnerable to climate change. Each nation's vulnerability was calculated using 42 social, economic and environmental indicators, which identified the likely climate change impacts during the next 30 years. The Asian countries of Bangladesh, India, Vietnam, Thailand, Pakistan and Sri Lanka were among the 16 countries facing extreme risk from climate change. Some shifts are already occurring. For example, in tropical parts of India with a semi-arid climate, the temperature increased by 0.4 °C between 1901 and 2003.
A 2013 study by the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) aimed to find science-based, pro-poor approaches and techniques that would enable Asia's agricultural systems to cope with climate change, while benefitting poor and vulnerable farmers. The study's recommendations ranged from improving the use of climate information in local planning and strengthening weather-based agro-advisory services, to stimulating diversification of rural household incomes and providing incentives to farmers to adopt natural resource conservation measures to enhance forest cover, replenish groundwater and use renewable energy.
Asia has the largest continental economy in the world by both nominal GDP and PPP, and is the fastest growing economic region. , the largest economies in Asia are China, Japan, India, South Korea, Indonesia and Turkey based on GDP in both nominal and PPP terms. Based on Global Office Locations 2011, Asia dominated the office locations with 4 of the top 5 being in Asia: Hong Kong, Singapore, Tokyo and Seoul. Around 68 percent of international firms have offices in Hong Kong.
In the late 1990s and early 2000s, the economies of China and India have been growing rapidly, both with an average annual growth rate of more than 8%. Other recent very-high-growth nations in Asia include Israel, Malaysia, Indonesia, Bangladesh, Thailand, Vietnam, and the Philippines, and mineral-rich nations such as Kazakhstan, Turkmenistan, Iran, Brunei, the United Arab Emirates, Qatar, Kuwait, Saudi Arabia, Bahrain and Oman.
According to economic historian Angus Maddison in his book "The World Economy: A Millennial Perspective", India had the world's largest economy between 1 and 1000 CE. China was the largest and most advanced economy on earth for much of recorded history. For several decades in the late twentieth century Japan was the largest economy in Asia and second-largest of any single nation in the world, after surpassing the Soviet Union (measured in net material product) in 1986 and Germany in 1968. (NB: A number of supernational economies are larger, such as the European Union (EU), the North American Free Trade Agreement (NAFTA) or APEC). This ended in 2010 when China overtook Japan to become the world's second largest economy.
In the late 1980s and early 1990s, Japan's GDP was almost as large (current exchange rate method) as that of the rest of Asia combined. In 1995, Japan's economy nearly equaled that of the US as the largest economy in the world for a day, after the Japanese currency reached a record high of 79 yen/US$. Economic growth in Asia from World War II to the 1990s had been concentrated in Japan as well as the four regions of South Korea, Taiwan, Hong Kong and Singapore located in the Pacific Rim, known as the Asian tigers, which have now all received developed country status, having the highest GDP per capita in Asia.
It is forecasted that India will overtake Japan in terms of nominal GDP by 2025. By 2027, according to Goldman Sachs, China will have the largest economy in the world. Several trade blocs exist, with the most developed being the Association of Southeast Asian Nations.
Asia is the largest continent in the world by a considerable margin, and it is rich in natural resources, such as petroleum, forests, fish, water, rice, copper and silver. Manufacturing in Asia has traditionally been strongest in East and Southeast Asia, particularly in China, Taiwan, South Korea, Japan, India, the Philippines, and Singapore. Japan and South Korea continue to dominate in the area of multinational corporations, but increasingly the PRC and India are making significant inroads. Many companies from Europe, North America, South Korea and Japan have operations in Asia's developing countries to take advantage of its abundant supply of cheap labour and relatively developed infrastructure.
According to Citigroup, 9 of 11 Global Growth Generators countries came from Asia, driven by population and income growth. They are Bangladesh, China, India, Indonesia, Iraq, Mongolia, the Philippines, Sri Lanka and Vietnam. Asia has three main financial centers: Hong Kong, Tokyo and Singapore. Call centers and business process outsourcing (BPOs) are becoming major employers in India and the Philippines due to the availability of a large pool of highly skilled, English-speaking workers. The increased use of outsourcing has assisted the rise of India and China as financial centers. Due to its large and extremely competitive information technology industry, India has become a major hub for outsourcing.
In 2010, Asia had 3.3 million millionaires (people with net worth over US$1 million excluding their homes), slightly below North America with 3.4 million millionaires. The previous year, Asia had overtaken Europe.
Citigroup in The Wealth Report 2012 stated that the combined wealth of Asian centa-millionaires overtook North America's for the first time as the world's "economic center of gravity" continued moving east. At the end of 2011, there were 18,000 people in Asia, mainly in Southeast Asia, China and Japan, with at least $100 million in disposable assets, compared with 17,000 in North America and 14,000 in Western Europe.
With regional tourism growing and dominated by Chinese visitors, MasterCard's Global Destination Cities Index 2013 placed cities of the Asia-Pacific region in 10 of the top 20 positions, and for the first time a city from an Asian country (Bangkok) was top-ranked, with 15.98 million international visitors.
East Asia had by far the strongest overall Human Development Index (HDI) improvement of any region in the world, nearly doubling average HDI attainment over the past 40 years, according to the report's analysis of health, education and income data. China, the second highest achiever in the world in terms of HDI improvement since 1970, is the only country on the "Top 10 Movers" list due to income rather than health or education achievements. Its per capita income increased a stunning 21-fold over the last four decades, also lifting hundreds of millions out of income poverty. Yet it was not among the region's top performers in improving school enrollment and life expectancy.
Nepal, a South Asian country, emerges as one of the world's fastest movers since 1970 mainly due to health and education achievements. Its present life expectancy is 25 years longer than in the 1970s. More than four of every five children of school age in Nepal now attend primary school, compared to just one in five 40 years ago.
Hong Kong ranked highest among the countries grouped on the HDI (number 7 in the world, which is in the "very high human development" category), followed by Singapore (9), Japan (19) and South Korea (22). Afghanistan (155) ranked lowest amongst Asian countries out of the 169 countries assessed.
Asia is home to several language families and many language isolates. Most Asian countries have more than one language that is natively spoken. For instance, according to Ethnologue, more than 600 languages are spoken in Indonesia, more than 800 are spoken in India, and more than 100 are spoken in the Philippines. China has many languages and dialects in different provinces.
Many of the world's major religions have their origins in Asia, including the five most practiced in the world (excluding irreligion), which are Christianity, Islam, Hinduism, Chinese folk religion (classified as Confucianism and Taoism), and Buddhism respectively. Asian mythology is complex and diverse. The story of the Great Flood for example, as presented to Jews in the Hebrew Bible in the narrative of Noah—and later to Christians in the Old Testament, and to Muslims in the Quran—is earliest found in Mesopotamian mythology, in the Enûma Eliš and "Epic of Gilgamesh". Hindu mythology similarly tells about an avatar of Vishnu in the form of a fish who warned Manu of a terrible flood. Ancient Chinese mythology also tells of a Great Flood spanning generations, one that required the combined efforts of emperors and divinities to control.
The Abrahamic religions including Judaism, Christianity, Islam and Bahá'í Faith originated in West Asia.
Judaism, the oldest of the Abrahamic faiths, is practiced primarily in Israel, the indigenous homeland and historical birthplace of the Hebrew nation: which today consists both of those Jews who remained in the Middle East and those who returned from diaspora in Europe, North America, and other regions; though various diaspora communities persist worldwide. Jews are the predominant ethnic group in Israel (75.6%) numbering at about 6.1 million, although the levels of adherence to Jewish religion vary. Outside of Israel there are small ancient Jewish communities in Turkey (17,400), Azerbaijan (9,100), Iran (8,756), India (5,000) and Uzbekistan (4,000), among many other places. In total, there are 14.4–17.5 million (2016, est.) Jews alive in the world today, making them one of the smallest Asian minorities, at roughly 0.3 to 0.4 percent of the total population of the continent.
Christianity is a widespread religion in Asia with more than 286 million adherents according to Pew Research Center in 2010, and nearly 364 million according to Britannica Book of the Year 2014, constituting around 12.6% of the total population of Asia. In the Philippines and East Timor, Roman Catholicism is the predominant religion; it was introduced by the Spaniards and the Portuguese, respectively. In Armenia, Georgia and Asian Russia, Eastern Orthodoxy is the predominant religion. In the Middle East, such as in the Levant, Syriac Christianity (Church of the East) and Oriental Orthodoxy are prevalent minority denominations, which are both Eastern Christian sects mainly adhered to by Assyrian people or Syriac Christians. Saint Thomas Christians in India trace their origins to the evangelistic activity of Thomas the Apostle in the 1st century.
Islam, which originated in the Hejaz in modern-day Saudi Arabia, is the second largest and most widespread religion in Asia, with at least 1 billion Muslims constituting around 23.8% of the total population of Asia. Indonesia, with 12.7% of the world's Muslim population, is currently the country with the largest Muslim population in the world, followed by Pakistan (11.5%), India (10%), Bangladesh, Iran and Turkey. Mecca, Medina and Jerusalem are the three holiest cities of Islam. The Hajj and Umrah attract large numbers of Muslim devotees from all over the world to Mecca and Medina. Iran is the largest Shi'a country.
The Bahá'í Faith originated in Asia, in Iran (Persia), and spread from there to the Ottoman Empire, Central Asia, India, and Burma during the lifetime of Bahá'u'lláh. Since the middle of the 20th century, growth has particularly occurred in other Asian countries, because Bahá'í activities in many Muslim countries have been severely suppressed by authorities. The Lotus Temple in Delhi is a large Bahá'í house of worship in India.
Almost all Asian religions have philosophical character and Asian philosophical traditions cover a large spectrum of philosophical thoughts and writings. Indian philosophy includes Hindu philosophy and Buddhist philosophy. They include elements of nonmaterial pursuits, whereas another school of thought from India, Cārvāka, preached the enjoyment of the material world. The religions of Hinduism, Buddhism, Jainism and Sikhism originated in India, South Asia. In East Asia, particularly in China and Japan, Confucianism, Taoism and Zen Buddhism took shape.
, Hinduism has around 1.1 billion adherents. The faith represents around 25% of Asia's population and is the largest religion in Asia. However, it is mostly concentrated in South Asia. Over 80% of the populations of both India and Nepal adhere to Hinduism, alongside significant communities in Bangladesh, Pakistan, Bhutan, Sri Lanka and Bali, Indonesia. Many overseas Indians in countries such as Burma, Singapore and Malaysia also adhere to Hinduism.
Buddhism has a great following in mainland Southeast Asia and East Asia. Buddhism is the religion of the majority of the populations of Cambodia (96%), Thailand (95%), Burma (80–89%), Japan (36–96%), Bhutan (75–84%), Sri Lanka (70%), Laos (60–67%) and Mongolia (53–93%). Large Buddhist populations also exist in Singapore (33–51%), Taiwan (35–93%), South Korea (23–50%), Malaysia (19–21%), Nepal (9–11%), Vietnam (10–75%), China (20–50%), North Korea (2–14%), and small communities in India and Bangladesh. In many Chinese communities, Mahayana Buddhism is easily syncretized with Taoism, thus exact religious statistics are difficult to obtain and may be understated or overstated. The Communist-governed countries of China, Vietnam and North Korea are officially atheist, thus the number of Buddhists and other religious adherents may be under-reported.
Jainism is found mainly in India and in overseas Indian communities such as the United States and Malaysia. Sikhism is found in Northern India and amongst overseas Indian communities in other parts of Asia, especially Southeast Asia. Confucianism is found predominantly in Mainland China, South Korea, Taiwan and in overseas Chinese populations. Taoism is found mainly in Mainland China, Taiwan, Malaysia and Singapore. Taoism is easily syncretized with Mahayana Buddhism for many Chinese, thus exact religious statistics are difficult to obtain and may be understated or overstated.
Some of the pivotal events in Asia's relations with the outside world after the Second World War were:
The polymath Rabindranath Tagore, a Bengali poet, dramatist, and writer from Santiniketan, now in West Bengal, India, became in 1913 the first Asian Nobel laureate. He won his Nobel Prize in Literature for the notable impact his prose works and poetic thought had on English, French, and other national literatures of Europe and the Americas. He is also the writer of the national anthems of Bangladesh and India.
Other Asian writers who have won the Nobel Prize in Literature include Yasunari Kawabata (Japan, 1968), Kenzaburō Ōe (Japan, 1994), Gao Xingjian (China, 2000), Orhan Pamuk (Turkey, 2006), and Mo Yan (China, 2012). Some may consider the American writer Pearl S. Buck an honorary Asian Nobel laureate: she spent considerable time in China as the daughter of missionaries and based many of her novels there, namely "The Good Earth" (1931) and "The Mother" (1933), as well as the biographies of her parents' time in China, "The Exile" and "Fighting Angel", all of which earned her the Literature prize in 1938.
Also, Mother Teresa of India and Shirin Ebadi of Iran were awarded the Nobel Peace Prize for their significant and pioneering efforts for democracy and human rights, especially for the rights of women and children. Ebadi is the first Iranian and the first Muslim woman to receive the prize. Another Nobel Peace Prize winner is Aung San Suu Kyi of Burma, honored in 1991 for her peaceful and non-violent struggle under the country's military dictatorship. She is a nonviolent pro-democracy activist, the leader of the National League for Democracy in Burma (Myanmar), a noted prisoner of conscience, and a Buddhist. Chinese dissident Liu Xiaobo was awarded the Nobel Peace Prize for "his long and non-violent struggle for fundamental human rights in China" on 8 October 2010. He is the first Chinese citizen to be awarded a Nobel Prize of any kind while residing in China. In 2014, Kailash Satyarthi from India and Malala Yousafzai from Pakistan were awarded the Nobel Peace Prize "for their struggle against the suppression of children and young people and for the right of all children to education".
Sir C. V. Raman was the first Asian to receive a Nobel Prize in the sciences. He won the Nobel Prize in Physics "for his work on the scattering of light and for the discovery of the effect named after him".
Japan has won the most Nobel Prizes of any Asian nation, with 24, followed by India with 13.
Amartya Sen, (born 3 November 1933) is an Indian economist who was awarded the 1998 Nobel Memorial Prize in Economic Sciences for his contributions to welfare economics and social choice theory, and for his interest in the problems of society's poorest members.
Other Asian Nobel Prize winners include Subrahmanyan Chandrasekhar, Abdus Salam, Malala Yousafzai, Robert Aumann, Menachem Begin, Aaron Ciechanover, Avram Hershko, Daniel Kahneman, Shimon Peres, Yitzhak Rabin, Ada Yonath, Yasser Arafat, José Ramos-Horta and Bishop Carlos Filipe Ximenes Belo of Timor Leste, Kim Dae-jung, and 13 Japanese scientists. Most of the said awardees are from Japan and Israel, except for Chandrasekhar and Raman (India), Abdus Salam and Malala Yousafzai (Pakistan), Arafat (Palestinian Territories), Kim (South Korea), and Ramos-Horta and Belo (Timor Leste).
In 2006, Dr. Muhammad Yunus of Bangladesh was awarded the Nobel Peace Prize for the establishment of Grameen Bank, a community development bank that lends money to poor people, especially women, in Bangladesh. Dr. Yunus received his PhD in economics from Vanderbilt University, United States. He is internationally known for the concept of microcredit, which allows poor and destitute people with little or no collateral to borrow money. The borrowers typically repay the money within the specified period, and the incidence of default is very low.
The Dalai Lama has received approximately eighty-four awards over his spiritual and political career. On 22 June 2006, he became one of only four people ever to be recognized with Honorary Citizenship by the Governor General of Canada. On 28 May 2005, he received the Christmas Humphreys Award from the Buddhist Society in the United Kingdom. Most notable was the Nobel Peace Prize, presented in Oslo, Norway on 10 December 1989.
Within the above-mentioned states are several partially recognized countries with limited or no international recognition. None of them are members of the UN.
| https://en.wikipedia.org/wiki?curid=689 |
Datura
Datura is a genus of nine species of poisonous vespertine flowering plants belonging to the family Solanaceae. They are commonly known as thornapples or jimsonweeds but are also known as devil's trumpets (not to be confused with angel's trumpets, which are placed in the closely related genus "Brugmansia"). Other English common names include moonflower, devil's weed and hell's bells. The Mexican common names Toloache and Tolguacha derive from the Nahuatl name Tolohuaxihuitl meaning "the plant with the nodding head" (in reference to the nodding seed capsules of "Datura" species belonging to section "Dutra" of the genus). "Datura" species are native to dry, temperate, and subtropical regions of the Americas and are distributed mostly in Mexico, which is considered the centre of origin of the genus. "Datura ferox" was long thought native to China, "Datura metel" to India and southeast Asia, and "Datura leichhardtii" to Australia; however, recent research has shown these species to be early introductions from Central America.
The case of "Datura metel" is remarkable. Not only is the plant not a true species at all but an assemblage of ancient pre-Columbian cultivars created from "Datura innoxia" in the Greater Antilles, but evidence is mounting that it was introduced to the Indian subcontinent no later than the 2nd century C.E. (whether by natural or human agency is, as yet, unknown), making it one of the most ancient plant introductions, possibly the most ancient, from the New World to the Old.
All species of "Datura" are poisonous, especially their seeds and flowers, which can cause respiratory depression, arrhythmias, hallucinations, psychosis, and even death if taken internally.
A group of South American species formerly placed in the genus "Datura" are now placed in the distinct genus "Brugmansia". "Brugmansia" differs from "Datura" in that it is woody (the species being shrubs or small trees), has generally larger, pendulous flowers rather than erect ones, has indehiscent fruits, and has seeds bearing a thick corky layer. The Solanaceous tribe Datureae, to which "Datura" and "Brugmansia" belong, has recently acquired a new, monotypic genus "Trompettia" J. Dupin, featuring the species "Trompettia cardenasiana", which had hitherto been grossly misclassified as belonging to the genus "Iochroma".
Solanaceous tribes with a similar chemistry (i.e. a similar tropane alkaloid content), include the Hyoscyameae, containing such well-known toxic species as "Hyoscyamus niger" and "Atropa belladonna", the Solandreae containing the genus "Solandra" ("chalice vines") and the Mandragoreae, named for the famous Mandrake "Mandragora officinarum".
The name "Datura" is taken from Sanskrit धतूरा ' 'thorn-apple', ultimately from Sanskrit धत्तूर ' 'white thorn-apple' (referring to "Datura metel" of Asia). In the Ayurvedic text "Sushruta Samhita" different species of "Datura" are also referred to as ' and '. Dhatura is offered to Shiva in Hinduism. Record of this name in English dates back to 1662. Nathaniel Hawthorne refers to one type in "The Scarlet Letter" as "apple-Peru". In Mexico, its common name is "toloache".
"Datura" species are herbaceous, leafy annuals and short-lived perennials which can reach up to 2 m in height. The leaves are alternate, 10–20 cm long and 5–18 cm broad, with a lobed or toothed margin. The flowers are erect or spreading (not pendulous like those of "Brugmansia"), trumpet-shaped, 5–20 cm long and 4–12 cm broad at the mouth; colors vary from white to yellow, pink, and pale purple. The fruit is a spiny capsule 4–10 cm long and 2–6 cm broad, splitting open when ripe to release the numerous seeds. The seeds disperse freely over pastures, fields and even wasteland locations.
"Datura" belongs to the classic "witches' weeds", along with deadly nightshade, henbane, and mandrake. All parts of the plants are toxic, and datura has a long history of use for causing delirious states and death. It was well known as an essential ingredient of ointments, potions and witches' brews, most notably "Datura stramonium".
In India the species "Datura metel" has long been regarded as a poison and aphrodisiac, having been used in Ayurveda as a medicine since ancient times. It features in rituals and prayers to Shiva and also in Ganesh Chaturthi, a festival devoted to the deity Ganesha.
The larvae of some Lepidoptera (butterfly and moth) species, including "Hypercompe indecisa", eat some "Datura" species.
It is difficult to classify "Datura" as to its species, and it often happens that the descriptions of new species are accepted prematurely. Later, these "new species" are found to be simply varieties that have evolved due to conditions at a specific location. They usually disappear in a few years. Contributing to the confusion is the fact that various species, such as "D. wrightii" and "D. inoxia", are very similar in appearance, and the variation within a species can be extreme. For example, "Datura" species can vary in the size of the plant, leaves, and flowers, all depending on location. The same species, when growing in a half-shady, damp location can develop into a flowering bush half as tall as an adult human of average height, but when growing in a very dry location, will only grow into a thin plant not much more than ankle-high, with tiny flowers and a few miniature leaves.
Datura specialists the Preissels accept only nine species of "Datura", but Kew's Plants of the World Online currently lists the following fourteen (of which the current edition of The Plant List does not accept "D. arenicola", "D. lanosa" and "D. pruinosa"):
Of the above, "D. leichhardtii" is close enough to "D. pruinosa" to merit demotion to a subspecies, and likewise "D. ferox" and "D. quercifolia" are close enough in morphology to merit being subsumed in a single species. Furthermore, the Australian provenance of "D. leichhardtii", the Chinese provenance of "D. ferox" and the Afro-Asiatic provenance of "D. metel" have been cast into serious doubt, with the three species being almost certainly post-Columbian introductions to the Old World regions to which they were originally thought native.
"Datura arenicola" is a remarkable new species, described only in 2013, of very restricted range and so distinctive as to have merited the creation for it of the new section "Discola" [not to be confused with the species name "discolor"] within the genus. The specific name "arenicola" means "loving (i.e. "thriving in") sand".
American Brugmansia and Datura Society, Inc. (ABADS) is designated in the 2004 edition of the International Code of Nomenclature for Cultivated Plants as the official International Cultivar Registration Authority for "Datura". This role was delegated to ABADS by the International Society for Horticultural Science in 2002.
"Datura" species are usually sown annually from the seed produced in the spiny capsules, but, with care, the tuberous-rooted perennial species may be overwintered. Most species are suited to being planted outside or in containers. As a rule, they need warm, sunny places and soil that will keep their roots dry. When grown outdoors in good locations, the plants tend to reseed themselves and may become invasive. In containers, they should have porous, aerated potting soil with adequate drainage. The plants are susceptible to fungi in the root area, so anaerobic organic enrichment such as anaerobically composted organic matter or manure, should be avoided.
All "Datura" plants contain tropane alkaloids such as scopolamine and atropine, primarily in their seeds and flowers as well as the roots of certain species such as "D. wrightii". Because of the presence of these substances, "Datura" has been used for centuries in some cultures as a poison. A given plant's toxicity depends on its age, where it is growing, and the local weather conditions. These variations make "Datura" exceptionally hazardous as a drug.
In traditional cultures, a great deal of experience with and detailed knowledge of "Datura" was critical to minimize harm. Many tragic incidents result from modern users ingesting "Datura". For example, in the 1990s and 2000s, the United States media reported stories of adolescents and young adults dying or becoming seriously ill from intentionally ingesting "Datura". There are also several reports in the medical literature of deaths from "D. stramonium" and "D. ferox" intoxication. Children are especially vulnerable to atropine poisoning.
In some parts of Europe and India, "Datura" has been a popular poison for suicide and murder. From 1950 to 1965, the State Chemical Laboratories in Agra, India, investigated 2,778 deaths caused by ingesting "Datura." The Thugs "(practicers of thuggee)" were devotees of an Indian religious cult made up of robbers and assassins who strangled and/or poisoned their victims in rituals devoted to the Hindu goddess Kali. They were known to employ "Datura" in many such poisonings, using it also to induce drowsiness or stupefaction, making strangulation easier.
"Datura" toxins may be ingested accidentally by consumption of honey produced by several wasp species, including "Brachygastra lecheguana", during the "Datura" blooming season. It appears that these semi-domesticated honey wasps collect "Datura" nectar for honey production which can lead to poisoning.
The US Centers for Disease Control and Prevention reported accidental poisoning resulting in hospitalization for a family of six who inadvertently ingested "Datura" used as an ingredient in stew.
In some places, it is prohibited to buy, sell, or cultivate "Datura" plants.
Due to the potent combination of anticholinergic substances it contains, "Datura" intoxication typically produces effects similar to that of an anticholinergic delirium (usually involving a complete inability to differentiate reality from fantasy); hyperthermia; tachycardia; bizarre, and possibly violent behavior; and severe mydriasis (dilated pupils) with resultant painful photophobia that can last several days. Muscle stiffness, urinary retention, temporary paralysis, and confusion are often reported, and pronounced amnesia is another commonly reported effect.
Datura is considered a deliriant. Christian Rätsch has said "A mild dosage produces medicinal and healing effects, a moderate dosage produces aphrodisiac effects, and high dosages are used for shamanic purposes". Wade Davis, an ethnobotanist, lists it as a possible ingredient of zombie potion.
In "Pharmacology and Abuse of Cocaine, Amphetamines, Ecstasy and Related Designer Drugs", Freye asserts:
Few substances have received as many severely negative recreational experience reports as has "Datura". The overwhelming majority of those who describe their use of "Datura" find their experiences extremely unpleasant mentally and often physically dangerous. However, anthropologists have found that indigenous groups, with a great deal of experience with and detailed knowledge of "Datura", have been known to use "Datura" spiritually (including the Navajo and especially the Havasupai). Again, knowledge of "Datura" properties is necessary to facilitate a healthy experience.
The Southern Paiute believe "Datura" can help locate missing objects. In ancient Mexico, "Datura" also played an important role in the religion of the Aztecs and the practices of their medicine men and necromancers.
Bernardino de Sahagún, in around 1569, called attention to "Datura" in the following words:
“It is administered in potions in order to cause harm to those who are objects of hatred. Those who eat it have visions of fearful things. Magicians or those who wish to harm someone administer it in food or drink. This herb is medicinal and its seed is used as a remedy for gout, ground up and applied to the part affected.”
Due to their agitated behavior and confused mental state, victims of "Datura" poisoning are typically hospitalized. Gastric lavage and the administration of activated charcoal can be used to reduce the stomach's absorption of the ingested material and the drug physostigmine is used to reverse the effect of the poisons. Benzodiazepines can be given to curb the patient's agitation, and supportive care with oxygen, hydration, and symptomatic treatment is often provided. Observation of the patient is indicated until the symptoms resolve, usually from 24–36 hours after ingestion of the "Datura". | https://en.wikipedia.org/wiki?curid=8846 |
Commutator subgroup
In mathematics, more specifically in abstract algebra, the commutator subgroup or derived subgroup of a group is the subgroup generated by all the commutators of the group.
The commutator subgroup is important because it is the smallest normal subgroup such that the quotient group of the original group by this subgroup is abelian. In other words, "G"/"N" is abelian if and only if "N" contains the commutator subgroup of "G". So in some sense it provides a measure of how far the group is from being abelian; the larger the commutator subgroup is, the "less abelian" the group is.
For elements "g" and "h" of a group "G", the commutator of "g" and "h" is [g, h] = g⁻¹h⁻¹gh. The commutator [g, h] is equal to the identity element "e" if and only if gh = hg, that is, if and only if "g" and "h" commute. In general, gh = hg[g, h].
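The defining identity can be checked mechanically on a small group. Below is a minimal sketch in plain Python, using tuples as permutations of {0, 1, 2} to stand in for the symmetric group S3; the helper names (compose, inverse, commutator) are made up for this example, not taken from any library.

```python
from itertools import permutations

def compose(p, q):
    """Product p*q of two permutations: apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    """Inverse permutation: inverse(p)[p[i]] = i."""
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def commutator(g, h):
    """[g, h] = g^-1 h^-1 g h."""
    return compose(compose(inverse(g), inverse(h)), compose(g, h))

S3 = list(permutations(range(3)))  # all 6 elements of S3
e = (0, 1, 2)                      # identity permutation

# [g, h] equals the identity exactly when g and h commute.
assert all((commutator(g, h) == e) == (compose(g, h) == compose(h, g))
           for g in S3 for h in S3)
```

Since S3 is non-abelian, some pairs do produce nontrivial commutators; for instance the two transpositions (1, 0, 2) and (0, 2, 1) yield a 3-cycle.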
However, the notation is somewhat arbitrary and there is a non-equivalent variant definition for the commutator that has the inverses on the right hand side of the equation: [g, h] = ghg⁻¹h⁻¹, in which case gh ≠ hg[g, h] but instead gh = [g, h]hg.
An element of "G" of the form [g, h] for some "g" and "h" is called a commutator. The identity element "e" = ["e", "e"] is always a commutator, and it is the only commutator if and only if "G" is abelian.
Here are some simple but useful commutator identities, true for any elements "s", "g", "h" of a group "G": first, [g, h]⁻¹ = [h, g]; second, s⁻¹[g, h]s = [s⁻¹gs, s⁻¹hs]; third, for any homomorphism f: G → H, f([g, h]) = [f(g), f(h)].
The first and second identities imply that the set of commutators in "G" is closed under inversion and conjugation. If in the third identity we take "H" = "G", we get that the set of commutators is stable under any endomorphism of "G". This is in fact a generalization of the second identity, since we can take "f" to be the conjugation automorphism on "G", x ↦ s⁻¹xs, to get the second identity.
However, the product of two or more commutators need not be a commutator. A generic example is ["a","b"]["c","d"] in the free group on "a","b","c","d". It is known that the least order of a finite group for which there exists two commutators whose product is not a commutator is 96; in fact there are two nonisomorphic groups of order 96 with this property.
This motivates the definition of the commutator subgroup [G, G] (also called the derived subgroup, and denoted G′ or G^(1)) of "G": it is the subgroup generated by all the commutators.
It follows from the properties of commutators that any element of [G, G] is of the form
[g₁, h₁][g₂, h₂] ⋯ [gₙ, hₙ]
for some natural number n, where the "g""i" and "h""i" are elements of "G". Moreover, since for any "s" in "G" we have s⁻¹[g, h]s = [s⁻¹gs, s⁻¹hs], the commutator subgroup is normal in "G". For any homomorphism "f": "G" → "H",
f([g₁, h₁] ⋯ [gₙ, hₙ]) = [f(g₁), f(h₁)] ⋯ [f(gₙ), f(hₙ)],
so that f([G, G]) ≤ [H, H].
This shows that the commutator subgroup can be viewed as a functor on the category of groups, some implications of which are explored below. Moreover, taking "G" = "H" it shows that the commutator subgroup is stable under every endomorphism of "G": that is, ["G","G"] is a fully characteristic subgroup of "G", a property considerably stronger than normality.
The commutator subgroup can also be defined as the set of elements "g" of the group that have an expression as a product "g" = "g"1 "g"2 ... "g""k" that can be rearranged to give the identity.
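For a small finite group the derived subgroup can be computed by brute force, straight from the definition: take all commutators, then close the set under products. A self-contained Python sketch for S3 follows (the helper names are illustrative, not from any library).

```python
from itertools import permutations

def compose(p, q):
    # Apply the permutation q first, then p.
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def commutator(g, h):
    # [g, h] = g^-1 h^-1 g h
    return compose(compose(inverse(g), inverse(h)), compose(g, h))

def derived_subgroup(G):
    """Subgroup generated by all commutators of G, via closure under products."""
    gens = {commutator(g, h) for g in G for h in G}
    H, frontier = set(gens), list(gens)
    while frontier:
        a = frontier.pop()
        for b in gens:
            c = compose(a, b)
            if c not in H:
                H.add(c)
                frontier.append(c)
    return H

S3 = list(permutations(range(3)))
D = derived_subgroup(S3)
# [S3, S3] is the alternating group A3: the identity and the two 3-cycles.
assert D == {(0, 1, 2), (1, 2, 0), (2, 0, 1)}
# A3 is abelian, so one more step reaches the trivial group.
assert derived_subgroup(D) == {(0, 1, 2)}
```

Closing only under products by the generators suffices here because the commutator set already contains inverses ([g, h]⁻¹ = [h, g]) and the groups involved are finite.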
This construction can be iterated: set G^(0) := G and G^(n) := [G^(n−1), G^(n−1)] for n ≥ 1.
The groups G^(2), G^(3), … are called the second derived subgroup, third derived subgroup, and so forth, and the descending normal series
⋯ ⊴ G^(2) ⊴ G^(1) ⊴ G^(0) = G
is called the derived series. This should not be confused with the lower central series, whose terms are G_n := [G_(n−1), G].
For a finite group, the derived series terminates in a perfect group, which may or may not be trivial. For an infinite group, the derived series need not terminate at a finite stage, and one can continue it to infinite ordinal numbers via transfinite recursion, thereby obtaining the transfinite derived series, which eventually terminates at the perfect core of the group.
Given a group "G", a quotient group "G"/"N" is abelian if and only if [G, G] ⊆ N.
The quotient G/[G, G] is an abelian group called the abelianization of "G" or "G" made abelian. It is usually denoted by G^ab or G_ab.
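A toy computation may make this concrete. The self-contained Python sketch below builds the dihedral group of the square from a rotation and a reflection, computes its commutator subgroup, and counts the cosets of that subgroup; the elements of the abelianization are exactly these cosets, of which there are 4 here. All helper names are made up for this example.

```python
def compose(p, q):
    # Apply the permutation q first, then p.
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def commutator(g, h):
    # [g, h] = g^-1 h^-1 g h
    return compose(compose(inverse(g), inverse(h)), compose(g, h))

def generate(gens):
    """Close a set of permutations under products to get the generated group."""
    e = tuple(range(len(next(iter(gens)))))
    G, frontier = {e}, [e]
    while frontier:
        a = frontier.pop()
        for b in gens:
            c = compose(a, b)
            if c not in G:
                G.add(c)
                frontier.append(c)
    return G

def derived_subgroup(G):
    return generate({commutator(g, h) for g in G for h in G})

r = (1, 2, 3, 0)  # rotation of the square by 90 degrees
s = (3, 2, 1, 0)  # a reflection
D4 = generate({r, s})
assert len(D4) == 8

N = derived_subgroup(D4)
assert N == {(0, 1, 2, 3), (2, 3, 0, 1)}  # {e, r^2}

# The cosets of [G, G] are the elements of the abelianization G/[G, G].
cosets = {frozenset(compose(g, n) for n in N) for g in D4}
assert len(cosets) == 4  # the abelianization of D4 is the Klein four-group
```

With a computer algebra system this is a single call (e.g. GAP's DerivedSubgroup), but the brute-force version shows the definition directly.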
There is a useful categorical interpretation of the map φ: G → G^ab. Namely, φ is universal for homomorphisms from "G" to an abelian group "A": for any abelian group "A" and homomorphism of groups f: G → A there exists a unique homomorphism F: G^ab → A such that f = F ∘ φ. As usual for objects defined by universal mapping properties, this shows the uniqueness of the abelianization G^ab up to canonical isomorphism, whereas the explicit construction G^ab = G/[G, G] shows existence.
The abelianization functor is the left adjoint of the inclusion functor from the category of abelian groups to the category of groups. The existence of the abelianization functor Grp → Ab makes the category Ab a reflective subcategory of the category of groups, defined as a full subcategory whose inclusion functor has a left adjoint.
Another important interpretation of G^ab is as H₁(G, Z), the first homology group of "G" with integral coefficients.
A group "G" is an abelian group if and only if the derived group is trivial: ["G","G"] = {"e"}. Equivalently, if and only if the group equals its abelianization. See above for the definition of a group's abelianization.
A group "G" is a perfect group if and only if the derived group equals the group itself: ["G","G"] = "G". Equivalently, if and only if the abelianization of the group is trivial. This is "opposite" to abelian.
A group with G^(n) = {e} for some "n" in N is called a solvable group; this is weaker than abelian, which is the case "n" = 1.
A group with G^(n) ≠ {e} for all "n" in N is called a non-solvable group.
A group with G^(α) = {e} for some ordinal number α, possibly infinite, is called a hypoabelian group; this is weaker than solvable, which is the case where α is finite (a natural number).
Since the derived subgroup is characteristic, any automorphism of "G" induces an automorphism of the abelianization. Since the abelianization is abelian, inner automorphisms act trivially, hence this yields a map | https://en.wikipedia.org/wiki?curid=8847 |
Dr. Seuss
Theodor Seuss "Ted" Geisel (March 2, 1904 – September 24, 1991) was an American children's author, political cartoonist, illustrator, poet, animator, screenwriter, and filmmaker. He is known for his work writing and illustrating more than 60 books under the pen name Dr. Seuss. His work includes many of the most popular children's books of all time, selling over 600 million copies and being translated into more than 20 languages by the time of his death.
Geisel adopted the name "Dr. Seuss" as an undergraduate at Dartmouth College and as a graduate student at Lincoln College, Oxford. He left Oxford in 1927 to begin his career as an illustrator and cartoonist for "Vanity Fair", "Life", and various other publications. He also worked as an illustrator for advertising campaigns, most notably for FLIT and Standard Oil, and as a political cartoonist for the New York newspaper "PM". He published his first children's book "And to Think That I Saw It on Mulberry Street" in 1937. During World War II, he took a brief hiatus from children's literature to illustrate political cartoons, and he also worked in the animation and film department of the United States Army where he wrote, produced or animated many productions – both live-action and animated – including "Design for Death", which later won the 1947 Academy Award for Best Documentary Feature.
After the war, Geisel returned to writing children's books, writing classics like "If I Ran the Zoo" (1950), "Horton Hears a Who!" (1955), "If I Ran the Circus" (1956), "The Cat in the Hat" (1957), "How the Grinch Stole Christmas!" (1957), and "Green Eggs and Ham" (1960). He published over 60 books during his career, which have spawned numerous adaptations, including 11 television specials, five feature films, a Broadway musical, and four television series.
Geisel won the Lewis Carroll Shelf Award in 1958 for "Horton Hatches the Egg" and again in 1961 for "And to Think That I Saw It on Mulberry Street". Geisel's birthday, March 2, has been adopted as the annual date for National Read Across America Day, an initiative on reading created by the National Education Association.
Geisel was born and raised in Springfield, Massachusetts, the son of Henrietta ("née" Seuss) and Theodor Robert Geisel. His father managed the family brewery and was later appointed to supervise Springfield's public park system by Mayor John A. Denison after the brewery closed because of Prohibition. Mulberry Street in Springfield, made famous in his first children's book "And to Think That I Saw It on Mulberry Street", is near his boyhood home on Fairfield Street. The family was of German descent, and Geisel and his sister Marnie experienced anti-German prejudice from other children following the outbreak of World War I in 1914.
Geisel attended Dartmouth College, graduating in 1925. At Dartmouth, he joined the Sigma Phi Epsilon fraternity and the humor magazine "Dartmouth Jack-O-Lantern", eventually rising to the rank of editor-in-chief. While at Dartmouth, he was caught drinking gin with nine friends in his room. At the time, the possession and consumption of alcohol was illegal under Prohibition laws, which remained in place between 1920 and 1933. As a result of this infraction, Dean Craven Laycock insisted that Geisel resign from all extracurricular activities, including the "Jack-O-Lantern". To continue working on the magazine without the administration's knowledge, Geisel began signing his work with the pen name "Seuss". He was encouraged in his writing by professor of rhetoric W. Benfield Pressey, whom he described as his "big inspiration for writing" at Dartmouth.
Upon graduating from Dartmouth, he entered Lincoln College, Oxford, intending to earn a D.Phil. in English literature. At Oxford, he met Helen Palmer, who encouraged him to give up becoming an English teacher in favor of pursuing drawing as a career. She later recalled that "Ted's notebooks were always filled with these fabulous animals. So I set to work diverting him; here was a man who could draw such pictures; he should be earning a living doing that."
Geisel left Oxford without earning a degree and returned to the United States in February 1927, where he immediately began submitting writings and drawings to magazines, book publishers, and advertising agencies. Making use of his time in Europe, he pitched a series of cartoons called "Eminent Europeans" to "Life" magazine, but the magazine passed on it. His first nationally published cartoon appeared in the July 16, 1927, issue of "The Saturday Evening Post". This single $25 sale encouraged Geisel to move from Springfield to New York City. Later that year, Geisel accepted a job as writer and illustrator at the humor magazine "Judge", and he felt financially stable enough to marry Helen. His first cartoon for "Judge" appeared on October 22, 1927, and the Geisels were married on November 29. Geisel's first work signed "Dr. Seuss" was published in "Judge" about six months after he started working there.
In early 1928, one of Geisel's cartoons for "Judge" mentioned Flit, a common bug spray at the time manufactured by Standard Oil of New Jersey. According to Geisel, the wife of an advertising executive in charge of advertising Flit saw Geisel's cartoon at a hairdresser's and urged her husband to sign him. Geisel's first Flit ad appeared on May 31, 1928, and the campaign continued sporadically until 1941. The campaign's catchphrase "Quick, Henry, the Flit!" became a part of popular culture. It spawned a song and was used as a punch line for comedians such as Fred Allen and Jack Benny. As Geisel gained notoriety for the Flit campaign, his work was in demand and began to appear regularly in magazines such as "Life", "Liberty", and "Vanity Fair".
The money Geisel earned from his advertising work and magazine submissions made him wealthier than even his most successful Dartmouth classmates. The increased income allowed the Geisels to move to better quarters and to socialize in higher social circles. They became friends with the wealthy family of banker Frank A. Vanderlip. They also traveled extensively: by 1936, Geisel and his wife had visited 30 countries together. They did not have children, neither kept regular office hours, and they had ample money. Geisel also felt that traveling helped his creativity.
Geisel's success with the Flit campaign led to more advertising work, including for other Standard Oil products like Essomarine boat fuel and Essolube Motor Oil and for other companies like the Ford Motor Company, NBC Radio Network, and Holly Sugar. His first foray into books, "Boners", a collection of children's sayings that he illustrated, was published by Viking Press in 1931. It topped "The New York Times" non-fiction bestseller list and led to a sequel, "More Boners", published the same year. Encouraged by the books' sales and positive critical reception, Geisel wrote and illustrated an ABC book featuring "very strange animals" that failed to interest publishers.
In 1936, Geisel and his wife were returning from an ocean voyage to Europe when the rhythm of the ship's engines inspired the poem that became his first children's book: "And to Think That I Saw It on Mulberry Street". Based on Geisel's varied accounts, the book was rejected by between 20 and 43 publishers. According to Geisel, he was walking home to burn the manuscript when a chance encounter with an old Dartmouth classmate led to its publication by Vanguard Press. Geisel wrote four more books before the US entered World War II. This included "The 500 Hats of Bartholomew Cubbins" in 1938, as well as "The King's Stilts" and "The Seven Lady Godivas" in 1939, all of which were in prose, atypically for him. This was followed by "Horton Hatches the Egg" in 1940, in which Geisel returned to the use of poetry.
As World War II began, Geisel turned to political cartoons, drawing over 400 in two years as editorial cartoonist for the left-leaning New York City daily newspaper, "PM". Geisel's political cartoons, later published in "Dr. Seuss Goes to War", denounced Hitler and Mussolini and were highly critical of non-interventionists ("isolationists"), most notably Charles Lindbergh, who opposed US entry into the war. One cartoon depicted Japanese Americans being handed TNT after a "call from home", while other cartoons deplored the racism at home against Jews and blacks that harmed the war effort. His cartoons were strongly supportive of President Roosevelt's handling of the war, combining the usual exhortations to ration and contribute to the war effort with frequent attacks on Congress (especially the Republican Party), parts of the press (such as the "New York Daily News", "Chicago Tribune", and "Washington Times-Herald"), and others for criticism of Roosevelt, criticism of aid to the Soviet Union, investigation of suspected Communists, and other offences that he depicted as leading to disunity and helping the Nazis, intentionally or inadvertently.
In 1942, Geisel turned his energies to direct support of the U.S. war effort. First, he worked drawing posters for the Treasury Department and the War Production Board. Then, in 1943, he joined the Army as a Captain and was commander of the Animation Department of the First Motion Picture Unit of the United States Army Air Forces, where he wrote films that included "Your Job in Germany", a 1945 propaganda film about peace in Europe after World War II; "Our Job in Japan"; and the "Private Snafu" series of adult army training films. While in the Army, he was awarded the Legion of Merit. "Our Job in Japan" became the basis for the commercially released film "Design for Death" (1947), a study of Japanese culture that won the Academy Award for Best Documentary Feature. "Gerald McBoing-Boing" (1950) was based on an original story by Seuss and won the Academy Award for Best Animated Short Film.
After the war, Geisel and his wife moved to La Jolla, California, where he returned to writing children's books. He published most of his books through Random House in North America and William Collins, Sons (later HarperCollins) internationally. He wrote many, including such favorites as "If I Ran the Zoo" (1950), "Horton Hears a Who!" (1955), "If I Ran the Circus" (1956), "The Cat in the Hat" (1957), "How the Grinch Stole Christmas!" (1957), and "Green Eggs and Ham" (1960). He received numerous awards throughout his career, but he won neither the Caldecott Medal nor the Newbery Medal. Three of his titles from this period were, however, chosen as Caldecott runners-up (now referred to as Caldecott Honor books): "McElligot's Pool" (1947), "Bartholomew and the Oobleck" (1949), and "If I Ran the Zoo" (1950). Dr. Seuss also wrote the musical and fantasy film "The 5,000 Fingers of Dr. T.", which was released in 1953. The movie was a critical and financial failure, and Geisel never attempted another feature film. During the 1950s, he also published a number of illustrated short stories, mostly in "Redbook" Magazine. Some of these were later collected (in volumes such as "The Sneetches and Other Stories") or reworked into independent books ("If I Ran the Zoo"). A number have never been reprinted since their original appearances.
In May 1954, "Life" magazine published a report on illiteracy among school children which concluded that children were not learning to read because their books were boring. William Ellsworth Spaulding was the director of the education division at Houghton Mifflin (he later became its chairman), and he compiled a list of 348 words that he felt were important for first-graders to recognize. He asked Geisel to cut the list to 250 words and to write a book using only those words. Spaulding challenged Geisel to "bring back a book children can't put down". Nine months later, Geisel completed "The Cat in the Hat", using 236 of the words given to him. It retained the drawing style, verse rhythms, and all the imaginative power of Geisel's earlier works but, because of its simplified vocabulary, it could be read by beginning readers. "The Cat in the Hat" and subsequent books written for young children achieved significant international success and they remain very popular today. For example, in 2009, "Green Eggs and Ham" sold 540,000 copies, "The Cat in the Hat" sold 452,000 copies, and "One Fish, Two Fish, Red Fish, Blue Fish" (1960) sold 409,000 copies — all outselling the majority of newly published children's books.
Geisel went on to write many other children's books, both in his new simplified-vocabulary manner (sold as Beginner Books) and in his older, more elaborate style.
In 1956, Dartmouth awarded Geisel with an honorary doctorate, finally legitimizing the "Dr." in his pen name.
On April 28, 1958, Geisel appeared on an episode of the panel game show "To Tell the Truth".
Geisel's wife Helen had a long struggle with illnesses. On October 23, 1967, Helen died by suicide; Geisel married Audrey Dimond on June 21, 1968. Although he devoted most of his life to writing children's books, Geisel had no children of his own, saying of children: "You have 'em; I'll entertain 'em." Dimond added that Geisel "lived his whole life without children and he was very happy without children." Audrey oversaw Geisel's estate until her death on December 19, 2018, at the age of 97.
Geisel was awarded an honorary Doctor of Humane Letters (L.H.D.) from Whittier College in 1980. He also received the Laura Ingalls Wilder Medal from the professional children's librarians in 1980, recognizing his "substantial and lasting contributions to children's literature". At the time, it was awarded every five years. He won a special Pulitzer Prize in 1984 citing his "contribution over nearly half a century to the education and enjoyment of America's children and their parents".
Geisel died of cancer on September 24, 1991, at his home in La Jolla, California, at the age of 87. His ashes were scattered in the Pacific Ocean. On December 1, 1995, four years after his death, University of California, San Diego's University Library Building was renamed Geisel Library in honor of Geisel and Audrey for the generous contributions that they made to the library and their devotion to improving literacy.
While Geisel was living in La Jolla, the United States Postal Service and others frequently confused him with fellow La Jolla resident Dr. Hans Suess, a noted nuclear physicist.
In 2002, the Dr. Seuss National Memorial Sculpture Garden opened in Springfield, Massachusetts, featuring sculptures of Geisel and of many of his characters. In 2008 he was inducted into the California Hall of Fame. On March 2, 2009, the Web search engine Google temporarily changed its logo to commemorate Geisel's birthday (a practice that it often performs for various holidays and events).
In 2004, U.S. children's librarians established the annual Theodor Seuss Geisel Award to recognize "the most distinguished American book for beginning readers published in English in the United States during the preceding year". It should "demonstrate creativity and imagination to engage children in reading" from pre-kindergarten to second grade.
At Geisel's alma mater of Dartmouth, more than 90 percent of incoming first-year students participate in pre-matriculation trips run by the Dartmouth Outing Club into the New Hampshire wilderness. It is traditional for students returning from the trips to stay overnight at Dartmouth's Moosilauke Ravine Lodge, where they are served green eggs for breakfast. On April 4, 2012, the Dartmouth Medical School was renamed the Audrey and Theodor Geisel School of Medicine in honor of their many years of generosity to the college.
Dr. Seuss's honors include two Academy Awards, two Emmy Awards, a Peabody Award, the Laura Ingalls Wilder Medal, and the Pulitzer Prize.
Dr. Seuss has a star on the Hollywood Walk of Fame at the 6500 block of Hollywood Boulevard.
Geisel's most famous pen name is regularly given an anglicized pronunciation inconsistent with his German surname. He himself noted that his own pronunciation of the name rhymed with "voice". Alexander Laing, one of his collaborators on the "Dartmouth Jack-O-Lantern", wrote of it:
Geisel switched to the anglicized pronunciation because it "evoked a figure advantageous for an author of children's books to be associated with—Mother Goose" and because most people used this pronunciation. He added the "Doctor (abbreviated Dr.)" to his pen name because his father had always wanted him to practice medicine.
For books that Geisel wrote and others illustrated, he used the pen name "Theo LeSieg", starting with "I Wish That I Had Duck Feet" published in 1965. "LeSieg" is "Geisel" spelled backward. Geisel also published one book under the name Rosetta Stone, 1975's "Because a Little Bug Went Ka-Choo!!", a collaboration with Michael K. Frith. Frith and Geisel chose the name in honor of Geisel's second wife Audrey, whose maiden name was Stone.
Geisel was a liberal Democrat and a supporter of President Franklin D. Roosevelt and the New Deal. His early political cartoons show a passionate opposition to fascism, and he urged action against it both before and after the United States entered World War II. His cartoons portrayed the fear of communism as overstated, finding greater threats in the House Un-American Activities Committee and those who threatened to cut the United States' "life line" to Stalin and the USSR, whom he once depicted as a porter carrying "our war load".
Geisel supported the internment of Japanese Americans during World War II. On the issue of the Japanese, he is quoted as saying:
After the war, though, Geisel overcame his feelings of animosity, using his book "Horton Hears a Who!" (1954) as an allegory for the American post-war occupation of Japan, as well as dedicating the book to a Japanese friend, though Ron Lamothe noted in an interview that even that book has a sense of "American chauvinism" and doesn't mention the atomic bombings of Hiroshima and Nagasaki.
In 1948, after living and working in Hollywood for years, Geisel moved to La Jolla, California, a predominantly Republican community.
Geisel converted a copy of one of his famous children's books, "Marvin K. Mooney Will You Please Go Now!", into a polemic shortly before the end of the 1972–1974 Watergate scandal, in which United States president Richard Nixon resigned, by replacing the name of the main character everywhere that it occurred. "Richard M. Nixon, Will You Please Go Now!" was published in major newspapers through the column of his friend Art Buchwald.
The line "a person's a person, no matter "how" small!!" from "Horton Hears a Who!" has been used widely as a slogan by the pro-life movement in the United States. Geisel and later his widow Audrey objected to this use; according to her attorney, "She doesn't like people to hijack Dr. Seuss characters or material to front their own points of view." In the 1980s Geisel threatened to sue an anti-abortion group for using this phrase on their stationery, according to his biographer, causing them to remove it. The attorney says he never discussed abortion with either of them, and the biographer says Geisel never expressed a public opinion on the subject. After Seuss' death, Audrey gave financial support to Planned Parenthood.
Geisel made a point of not beginning to write his stories with a moral in mind, stating that "kids can see a moral coming a mile off." He was not against writing about issues, however; he said that "there's an inherent moral in any story", and he remarked that he was "subversive as hell."
Many of Geisel's books express his views on a remarkable variety of social and political issues: "The Lorax" (1971), about environmentalism and anti-consumerism; "The Sneetches" (1961), about racial equality; "The Butter Battle Book" (1984), about the arms race; "Yertle the Turtle" (1958), about Adolf Hitler and anti-authoritarianism; "How the Grinch Stole Christmas!" (1957), criticizing the materialism and consumerism of the Christmas season; and "Horton Hears a Who!" (1954), about anti-isolationism and internationalism.
Geisel wrote most of his books in anapestic tetrameter, a poetic meter employed by many poets of the English literary canon. This is often suggested as one of the reasons that Geisel's writing was so well received.
Anapestic tetrameter consists of four rhythmic units called anapests, each composed of two weak syllables followed by one strong syllable (the beat); often, the first weak syllable is omitted, or an additional weak syllable is added at the end. An example of this meter can be found in Geisel's "Yertle the Turtle", from "Yertle the Turtle and Other Stories":
Some books by Geisel that are written mainly in anapestic tetrameter also contain many lines written in amphibrachic tetrameter wherein each strong syllable is surrounded by a weak syllable on each side. Here is an example from "If I Ran the Circus":
Geisel also wrote verse in trochaic tetrameter, an arrangement of a strong syllable followed by a weak syllable, with four units per line (for example, the title of "One Fish Two Fish Red Fish Blue Fish"). Traditionally, English trochaic meter permits the final weak position in the line to be omitted, which allows both masculine and feminine rhymes.
Geisel generally maintained trochaic meter for only brief passages, and for longer stretches typically mixed it with iambic tetrameter, which consists of a weak syllable followed by a strong, and is generally considered easier to write. Thus, for example, the magicians in "Bartholomew and the Oobleck" make their first appearance chanting in trochees (thus resembling the witches of Shakespeare's "Macbeth"):
They then switch to iambs for the oobleck spell:
Geisel's early artwork often employed the shaded texture of pencil drawings or watercolors, but in his children's books of the postwar period, he generally made use of a starker medium—pen and ink—normally using just black, white, and one or two colors. His later books, such as "The Lorax," used more colors.
Geisel's style was unique – his figures are often "rounded" and somewhat droopy. This is true, for instance, of the faces of the Grinch and the Cat in the Hat. Almost all of his buildings and machinery were drawn without straight lines, even when he was representing real objects. For example, "If I Ran the Circus" shows a droopy hoisting crane and a droopy steam calliope.
Geisel evidently enjoyed drawing architecturally elaborate objects, and a number of his motifs are identifiable with structures in his childhood home of Springfield, including the onion domes of buildings there and of his family's brewery. | https://en.wikipedia.org/wiki?curid=8855 |
Digital compositing
Digital compositing is the process of digitally assembling multiple images to make a final image, typically for print, motion pictures or screen display. It is the digital analogue of optical film compositing.
The basic operation used in digital compositing is known as 'alpha blending', in which an opacity value, 'α', is used to control the proportions of the two input pixel values that are combined into a single output pixel.
As a simple example, suppose two images of the same size are available and they are to be composited. The input images are referred to as the foreground image and the background image. Each image consists of the same number of pixels. Compositing is performed by mathematically combining information from the corresponding pixels from the two input images and recording the result in a third image, which is called the composited image.
Consider three pixels: a foreground pixel 'f', a background pixel 'b', and a composited pixel 'g', together with an opacity value 'α' between 0 and 1. Then, considering all three colour channels, and assuming that the colour channels are expressed in a γ=1 colour space (that is to say, the measured values are proportional to light intensity), we have: g = αf + (1 − α)b for each of the red, green, and blue channels.
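As a minimal sketch of this blend (assuming the standard per-channel formula g = αf + (1 − α)b on linear, γ=1 values; the function name is illustrative):

```python
def alpha_blend(fg, bg, alpha):
    """Blend two RGB pixels (tuples of floats in [0, 1], linear light)
    with opacity alpha: out = alpha*fg + (1 - alpha)*bg per channel."""
    return tuple(alpha * f + (1 - alpha) * b for f, b in zip(fg, bg))

# alpha = 1 keeps only the foreground; alpha = 0 keeps only the background.
opaque = alpha_blend((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), 1.0)
half = alpha_blend((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.5)
```

Blending gamma-encoded values directly in this way produces the non-linear artifacts described below, which is why the γ=1 assumption matters.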
Note that if the operations are performed in a colour space where γ is not equal to 1 then the operation will lead to non-linear effects which can potentially be seen as aliasing artifacts (or 'jaggies') along sharp edges in the matte. More generally, nonlinear compositing can have effects such as "halos" around composited objects, because the influence of the alpha channel is non-linear. It is possible for a compositing artist to compensate for the effects of compositing in non-linear space.
Performing alpha blending is an expensive operation if performed on an entire image or 3D scene. If the operation has to be done in real time, as in video games, there is an easy trick to boost performance: rewriting the expression as αf + (1 − α)b = b + α(f − b) saves 50% of the multiplications required.
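As a hypothetical sketch, the two algebraically equal forms can be compared; the rewritten form needs only one multiplication per channel instead of two (function names are illustrative):

```python
def blend_two_mults(f, b, alpha):
    # Naive form: two multiplications per channel.
    return alpha * f + (1 - alpha) * b

def blend_one_mult(f, b, alpha):
    # Rewritten form b + alpha*(f - b): one multiplication per channel.
    return b + alpha * (f - b)

# Both forms are algebraically identical (up to floating-point rounding).
assert abs(blend_two_mults(0.8, 0.2, 0.25) - blend_one_mult(0.8, 0.2, 0.25)) < 1e-12
```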
When many partially transparent layers need to be composited together, it is worthwhile to consider the algebraic properties of compositing operators used. Specifically, the associativity and commutativity determine when repeated calculation can or cannot be avoided.
Consider the case when we have four layers to blend to produce the final image: F=A*(B*(C*D)) where A, B, C, D are partially transparent image layers and "*" denotes a compositing operator (with the left layer on top of the right layer). If only layer C changes, we should find a way to avoid re-blending all of the layers when computing F. Without any special considerations, four full-image blends would need to occur. For compositing operators that are commutative, such as additive blending, it is safe to re-order the blending operations. In this case, we might compute T=A*(B*D) only once and simply blend T*C to produce F, a single operation. Unfortunately, most operators are not commutative. However, many are associative, suggesting it is safe to re-group operations to F=(A*B)*(C*D), i.e. without changing their order. In this case we may compute S:=A*B once and save this result. To form F with an associative operator, we need only do two additional compositing operations to integrate the new layer S, by computing F:=S*(C*D). Note that this expression indicates compositing C with all of the layers below it in one step and then blending all of the layers on top of it with the previous result to produce the final image in the second step.
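A hedged sketch of this caching idea, using the associative (but not commutative) 'over' operator on premultiplied RGBA tuples as one common choice; the layer values here are made up for illustration:

```python
def over(top, bottom):
    """Porter-Duff 'over' on premultiplied (r, g, b, a) tuples.
    Associative, so adjacent layers can be pre-merged and cached."""
    a = top[3]
    return tuple(t + (1 - a) * b for t, b in zip(top, bottom))

# Four partially transparent layers, top (A) to bottom (D).
A = (0.2, 0.0, 0.0, 0.5)
B = (0.0, 0.3, 0.0, 0.4)
C = (0.0, 0.0, 0.4, 0.6)
D = (0.1, 0.1, 0.1, 1.0)

S = over(A, B)                        # computed once and cached
full = over(A, over(B, over(C, D)))   # three blends from scratch
reused = over(S, over(C, D))          # only two blends when C or D changes

# Associativity guarantees both orderings give the same image.
assert all(abs(x - y) < 1e-9 for x, y in zip(full, reused))
```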
If all layers of an image change regularly but many layers still need to be composited (such as in distributed rendering), the commutativity of a compositing operator can still be exploited to speed up computation through parallelism even when there is no gain from pre-computation. Again, consider the image F=A*(B*(C*D)). Each compositing operation in this expression depends on the next, leading to serial computation. However, associativity can allow us to rewrite F=(A*B)*(C*D) where there are clearly two operations that do not depend on each other that may be executed in parallel. In general, we can build a tree of pair-wise compositing operations with a height that is logarithmic in the number of layers.
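A minimal sketch of such a tree: pairwise reduction over an associative operator has logarithmic depth, so the blends within each level are independent and could run in parallel (here simulated sequentially; `compose_tree` and the additive operator are illustrative assumptions):

```python
def compose_tree(layers, op):
    """Reduce layers pairwise; each pass halves the list, giving a
    blend tree of logarithmic height whose levels are parallelizable."""
    while len(layers) > 1:
        layers = [op(layers[i], layers[i + 1]) if i + 1 < len(layers) else layers[i]
                  for i in range(0, len(layers), 2)]
    return layers[0]

# With an associative operator (plain addition here), the tree order
# matches a left-to-right serial fold.
add = lambda a, b: a + b
layer_values = [0.1, 0.2, 0.3, 0.4, 0.5]
assert abs(compose_tree(layer_values, add) - sum(layer_values)) < 1e-9
```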
The most historically significant nonlinear compositing system was the Cineon, which operated in a logarithmic color space, which more closely mimics the natural light response of film emulsions (the Cineon system, made by Kodak, is no longer in production). Due to the limitations of processing speed and memory, compositing artists did not usually have the luxury of having the system make intermediate conversions to linear space for the compositing steps. Over time, the limitations have become much less significant, and now most compositing is done in a linear color space, even in cases where the source imagery is in a logarithmic color space.
Compositing often also includes scaling, retouching and colour correction of images.
There are two radically different digital compositing workflows: node-based compositing and layer-based compositing.
Node-based compositing represents an entire composite as a directed acyclic graph, linking media objects and effects in a procedural map, intuitively laying out the progression from source input to final output; this is in fact the way all compositing applications internally handle composites. This type of compositing interface allows great flexibility, including the ability to modify the parameters of an earlier image processing step "in context" (while viewing the final composite). Node-based compositing packages often handle keyframing and time effects poorly, as their workflow does not stem directly from a timeline the way that of layer-based packages does. Software that incorporates a node-based interface includes Natron, Apple Shake, Blender, Blackmagic Fusion, and The Foundry's Nuke.
Layer-based compositing represents each media object in a composite as a separate layer within a timeline, each with its own time bounds, effects, and keyframes. All the layers are stacked, one above the next, in any desired order; the bottom layer is usually rendered as a base in the resultant image, with each higher layer being progressively rendered on top of the previously composited layers, moving upward until all layers have been rendered into the final composite. Layer-based compositing is very well suited for rapid 2D and limited 3D effects such as in motion graphics, but becomes awkward for more complex composites entailing numerous layers. Partial solutions include some programs' ability to view the composite order of elements (such as images, effects, or other attributes) in a visual diagram called a flowchart, and to nest compositions, or "comps," directly into other compositions, thereby structuring the render order: layers in the nested composition are composited first, and that resultant image is then combined with the layers of the enclosing composition. | https://en.wikipedia.org/wiki?curid=8858 |
Dandy
A dandy, historically, is a man who places particular importance upon physical appearance, refined language, and leisurely hobbies, pursued with the appearance of nonchalance in a cult of self. A dandy could be a self-made man who strove to imitate an aristocratic lifestyle despite coming from a middle-class background, especially in late 18th- and early 19th-century Britain.
Previous manifestations of the "petit-maître" (French for "small master") and the Muscadin have been noted by John C. Prevost, but the modern practice of dandyism first appeared in the revolutionary 1790s, both in London and in Paris. The dandy cultivated cynical reserve, yet to such extremes that novelist George Meredith, himself no dandy, once defined cynicism as "intellectual dandyism". Some took a more benign view; Thomas Carlyle wrote in "Sartor Resartus" that a dandy was no more than "a clothes-wearing man". Honoré de Balzac introduced the perfectly worldly and unmoved Henri de Marsay in "La fille aux yeux d'or" (1835), a part of "La Comédie Humaine", who fulfils at first the model of a perfect dandy, until an obsessive love-pursuit unravels him in passionate and murderous jealousy.
Charles Baudelaire defined the dandy, in the later "metaphysical" phase of dandyism, as one who elevates æsthetics to a living religion, that the dandy's mere existence reproaches the responsible citizen of the middle class: "Dandyism in certain respects comes close to spirituality and to stoicism" and "These beings have no other status, but that of cultivating the idea of beauty in their own persons, of satisfying their passions, of feeling and thinking ... Dandyism is a form of Romanticism. Contrary to what many thoughtless people seem to believe, dandyism is not even an excessive delight in clothes and material elegance. For the perfect dandy, these things are no more than the symbol of the aristocratic superiority of mind."
The linkage of clothing with political protest had become a particularly English characteristic during the 18th century. Given these connotations, dandyism can be seen as a political protest against the levelling effect of egalitarian principles, often including nostalgic adherence to feudal or pre-industrial values, such as the ideals of "the perfect gentleman" or "the autonomous aristocrat". Paradoxically, the dandy required an audience, as Susann Schmid observed in examining the "successfully marketed lives" of Oscar Wilde and Lord Byron, who exemplify the dandy's roles in the public sphere, both as writers and as "personae" providing sources of gossip and scandal. Nigel Rodgers in "The Dandy: Peacock or Enigma?" questions Wilde's status as a genuine dandy, seeing him as someone who only assumed a dandified stance in passing, not a man dedicated to the exacting ideals of dandyism.
The origin of the word is uncertain. "Eccentricity", defined as taking characteristics such as dress and appearance to extremes, began to be applied generally to human behavior in the 1770s; similarly, the word "dandy" first appears in the late 18th century. In the years immediately preceding the American Revolution, the first verse and chorus of "Yankee Doodle" derided the alleged poverty and rough manners of American colonists: whereas a fine horse and gold-braided clothing ("mac[c]aroni") were required to set a dandy apart from those around him, the average colonist's means were so meager that ownership of a mere pony and a few feathers for personal ornamentation would qualify him as a "dandy" in the minds of his even less sophisticated compatriots. A slightly later Scottish border ballad, circa 1780, also features the word, but probably without all the contextual aspects of its more recent meaning. The original, full form of "dandy" may have been "jack-a-dandy". It was a vogue word during the Napoleonic Wars, when, in contemporary slang, a "dandy" was differentiated from a "fop" in that the dandy's dress was more refined and sober than the fop's.
In the twenty-first century, the word "dandy" is a jocular, often sarcastic adjective meaning "fine" or "great"; when used in the form of a noun, it refers to a well-groomed and well-dressed man, but often to one who is also self-absorbed.
The model dandy in British society was George Bryan "Beau" Brummell (1778–1840), in his early days, an undergraduate student at Oriel College, Oxford and later, an associate of the Prince Regent. Brummell was not from an aristocratic background; indeed, his greatness was "based on nothing at all," as J.A. Barbey d'Aurevilly observed in 1845. Never unpowdered or unperfumed, immaculately bathed and shaved, and dressed in a plain dark blue coat, he was always perfectly brushed, perfectly fitted, showing much perfectly starched linen, all freshly laundered, and composed with an elaborately knotted cravat. From the mid-1790s, Beau Brummell was the early incarnation of "the celebrity", a man chiefly famous for "being" famous.
By the time Pitt taxed hair powder in 1795 to help pay for the war against France and to discourage the use of flour (which had recently increased in both rarity and price, owing to bad harvests) in such a frivolous product, Brummell had already abandoned wearing a wig, and had his hair cut in the Roman fashion, "à la Brutus". Moreover, he led the transition from breeches to snugly tailored dark "pantaloons," which directly led to contemporary trousers, the sartorial mainstay of men's clothes in the Western world for the past two centuries. In 1799, upon coming of age, Beau Brummell inherited from his father a fortune of thirty thousand pounds, which he spent mostly on costume, gambling, and high living. In 1816 he suffered bankruptcy, the dandy's stereotyped fate; he fled his creditors to France, quietly dying in 1840, in a lunatic asylum in Caen, aged 61.
Men of more notable accomplishments than Beau Brummell also adopted the dandiacal pose: Lord Byron occasionally dressed the part, helping reintroduce the frilled, lace-cuffed and lace-collared "poet shirt". In that spirit, he had his portrait painted in Albanian costume.
Another prominent dandy of the period was Alfred Guillaume Gabriel d'Orsay, the Count d'Orsay, who had been friends with Byron and who moved in the highest social circles of London.
In 1836 Thomas Carlyle wrote: A Dandy is a clothes-wearing Man, a Man whose trade, office and existence consists in the wearing of Clothes. Every faculty of his soul, spirit, purse, and person is heroically consecrated to this one object, the wearing of Clothes wisely and well: so that the others dress to live, he lives to dress ... And now, for all this perennial Martyrdom, and Poesy, and even Prophecy, what is it that the Dandy asks in return? Solely, we may say, that you would recognise his existence; would admit him to be a living object; or even failing this, a visual object, or thing that will reflect rays of light...
By the mid-19th century, the English dandy, within the muted palette of male fashion, exhibited minute refinements—"The quality of the fine woollen cloth, the slope of a pocket flap or coat revers, exactly the right colour for the gloves, the correct amount of shine on boots and shoes, and so on. It was an image of a well-dressed man who, while taking infinite pains about his appearance, affected indifference to it. This refined dandyism continued to be regarded as an essential strand of male Englishness."
The beginnings of dandyism in France were bound to the politics of the French revolution; the initial stage of dandyism, the gilded youth, was a political statement of dressing in an aristocratic style in order to distinguish its members from the sans-culottes.
During his heyday, Beau Brummell's "dictat" on both fashion and etiquette reigned supreme. His habits of dress and fashion were much imitated, especially in France, where, in a curious development, they became the rage, especially in bohemian quarters. There, dandies sometimes were celebrated in revolutionary terms: self-created men of consciously designed personality, radically breaking with past traditions. With elaborate dress and idle, decadent styles of life, French bohemian dandies sought to convey contempt for and superiority to bourgeois society. In the latter 19th century, this fancy-dress bohemianism was a major influence on the Symbolist movement in French literature.
Baudelaire was deeply interested in dandyism, and memorably wrote that a dandy aspirant must have "no profession other than elegance... no other status, but that of cultivating the idea of beauty in their own persons... The dandy must aspire to be sublime without interruption; he must live and sleep before a mirror." Other French intellectuals also were interested in the dandies strolling the streets and boulevards of Paris. Jules Amédée Barbey d'Aurevilly wrote "On Dandyism and George Brummell", an essay devoted, in great measure, to examining the career of Beau Brummell.
The literary dandy is a familiar figure in the writings, and sometimes the self-presentation, of Oscar Wilde, H.H. Munro (Clovis and Reginald), P.G. Wodehouse (Bertie Wooster) and Ronald Firbank, writers linked by their subversive air.
The poets Algernon Charles Swinburne and Oscar Wilde, Walter Pater, the American artist James McNeill Whistler, the Spanish artist Salvador Dalí, Joris-Karl Huysmans, and Max Beerbohm were dandies of the Belle Époque, as was Robert de Montesquiou — Marcel Proust's inspiration for the Baron de Charlus. In Italy, Gabriele d'Annunzio and Carlo Bugatti exemplified the artistic bohemian dandyism of the fin de siecle. Wilde wrote that, "One should either be a work of Art, or wear a work of Art."
At the end of the 19th century, American dandies were called dudes. Evander Berry Wall was nicknamed the "King of the Dudes".
George Walden, in the essay "Who's a Dandy?", identifies Noël Coward, Andy Warhol, and Quentin Crisp as modern dandies. The character Psmith in the novels of P. G. Wodehouse is considered a dandy, both physically and intellectually. Agatha Christie's Poirot is said to be a dandy.
The artist Sebastian Horsley described himself as a "dandy in the underworld" in his eponymous autobiography.
In Japan, dandyism has become a tradition with historical roots dating back to the Edo period.
In Spain during the early 19th century, a curious phenomenon linked to the idea of dandyism developed. While in England and France individuals from the middle classes adopted aristocratic manners, the Spanish aristocracy adopted the fashions of the lower classes, called "majos". They were characterized by their elaborate outfits and sense of style, as opposed to the Frenchified "afrancesados", as well as by their cheeky, arrogant attitude.
Some famous dandies in later times were, among others, the Duke of Osuna, Mariano Téllez-Girón; the artist Salvador Dalí; and the poet Luis Cernuda.
Albert Camus said in "L'Homme révolté" (1951) that: The dandy creates his own unity by aesthetic means. But it is an aesthetic of negation. "To live and die before a mirror": that according to Baudelaire, was the dandy's slogan. It is indeed a coherent slogan. The dandy is, by occupation, always in opposition. He can only exist by defiance... The dandy, therefore, is always compelled to astonish. Singularity is his vocation, excess his way to perfection. Perpetually incomplete, always on the fringe of things, he compels others to create him, while denying their values. He plays at life because he is unable to live it.
Jean Baudrillard said that dandyism is "an aesthetic form of nihilism".
The female counterpart is a quaintrelle, a woman who emphasizes a life of passion expressed through personal style, leisurely pastimes, charm, and cultivation of life's pleasures.
In the 12th century, "cointerrels" (male) and "cointrelles" (female) emerged, based upon "coint", a word applied to things skillfully made, later indicating a person of beautiful dress and refined speech. By the 18th century, "coint" became "quaint", indicating elegant speech and beauty. Middle English dictionaries note "quaintrelle" as a beautifully dressed woman (or overly dressed), but do not include the favorable personality elements of grace and charm. The notion of a quaintrelle sharing the major philosophical components of refinement with dandies is a modern development that returns quaintrelles to their historic roots.
Female dandies did overlap with male dandies for a brief period during the early 19th century when "dandy" had a derisive definition of "fop" or "over-the-top fellow"; the female equivalents were "dandyess" or "dandizette". Charles Dickens, in "All the Year Round" (1869) comments, "The dandies and dandizettes of 1819–20 must have been a strange race. "Dandizette" was a term applied to the feminine devotees to dress, and their absurdities were fully equal to those of the dandies." In 1819, "Charms of Dandyism", in three volumes, was published by Olivia Moreland, Chief of the Female Dandies; most likely one of many pseudonyms used by Thomas Ashe. Olivia Moreland may have existed, as Ashe did write several novels about living persons. Throughout the novel, dandyism is associated with "living in style". Later, as the word "dandy" evolved to denote refinement, it became applied solely to men. "Popular Culture and Performance in the Victorian City" (2003) notes this evolution in the latter 19th century: "...or "dandizette", although the term was increasingly reserved for men."
The series featured the further adventures of the title character played by Peter Wyngarde who had first appeared in "Department S" (1969). In that series he was a dilettante, dandy, and author of a series of adventure novels, working as part of a team of investigators. In "Jason King" he had left that service to concentrate on writing the adventures of Mark Caine, who closely resembled Jason King in looks, manner, style, and personality. None of the other regular characters from "Department S" appeared in this series, although Department S itself is occasionally referred to in dialogue.
Dubbing (filmmaking)
Dubbing, mixing or re-recording, is a post-production process used in filmmaking and video production in which additional or supplementary recordings are lip-synced and "mixed" with original production sound to create the finished soundtrack.
The process usually takes place on a dub stage. After sound editors edit and prepare all the necessary tracks – dialogue, automated dialogue replacement (ADR), effects, Foley, music – the dubbing mixers proceed to balance all of the elements and record the finished soundtrack. Dubbing is sometimes confused with ADR, also known as "additional dialogue replacement", "automated dialogue recording" and "looping", in which the original actors re-record and synchronize audio segments.
Outside the film industry, the term "dubbing" commonly refers to the replacement of the actors' voices with those of different performers speaking another language, which is called "revoicing" in the film industry.
Films, videos, and sometimes video games are often dubbed into the local language of a foreign market. In foreign distribution, dubbing is common in theatrically released films, television films, television series, cartoons, and anime.
Dubbing has its origins in propaganda. The first film dubbed after World War II was "Konstantin Zaslonov" (1949), dubbed from Russian into Czech.
Automated dialogue replacement (ADR) is the process of re-recording dialogue by the original actor (or a replacement actor) after the filming process to improve audio quality or reflect dialogue changes (also known as "looping" or a "looping session"). In India, the process is simply known as "dubbing", while in the UK, it is also called "post-synchronization" or "post-sync". The insertion of voice actor performances for animation, such as computer generated imagery or animated cartoons, is often referred to as ADR although it generally does not replace existing dialogue.
The ADR process may be used to:
In conventional film production, a production sound mixer records dialogue during filming. During post-production, a supervising sound editor, or ADR supervisor, reviews all of the dialogue in the film and decides which lines must be re-recorded. ADR is recorded during an ADR session, which takes place in a specialized sound studio. The actor, usually the original actor from the set, views the scene with the original sound, then attempts to recreate the performance. Over the course of multiple takes, the actor performs the lines while watching the scene; the most suitable take becomes the final version. The ADR process does not always take place in a post-production studio. The process may be recorded on location, with mobile equipment. ADR can also be recorded without showing the actor the image they must match, but by having them listen to the performance, since some actors believe that watching themselves act can degrade subsequent performances.
Sometimes, an actor other than the original actor is used during ADR. One famous example is the "Star Wars" character Darth Vader, portrayed by David Prowse; in post-production, James Earl Jones dubbed the voice of Vader.
Other examples include:
The tasks involved are performed by three different agents in the dubbing process:
Sometimes the translator performs all of these tasks. In other cases, the translator just submits a rough translation and a dialogue writer does the rest.
The dialogue writer's role is to make the translation sound natural in the target language, and to make the translation read like credible dialogue rather than merely a translated text.
Another task of dialogue writers is to check whether a translation matches an on-screen character’s mouth movements or not, by reading aloud simultaneously with the character. The dialogue writer often stays in the recording setting with the actors or the voice talents, to ensure that the dialogue is being spoken in the way that it was written to be, and to avoid any ambiguity in the way the dialogue is to be read (focusing on emphasis, intonation, pronunciation, articulation, pronouncing foreign words correctly, etc.). The overall goal is to make sure the script creates the illusion of authenticity of the spoken language.
An alternative method to dubbing, called "rythmo band" (or "lip-sync band"), has historically been used in Canada and France. It provides a more precise guide for the actors, directors, and technicians, and can be used to complement the traditional ADR method. The "band" is actually a clear 35 mm film leader on which the dialogue is hand-written in India ink, together with numerous additional indications for the actor—including laughs, cries, length of syllables, mouth sounds, breaths, and mouth openings and closings. The rythmo band is projected in the studio and scrolls in perfect synchronization with the picture.
Studio time is used more efficiently, since with the aid of scrolling text, picture, and audio cues, actors can read more lines per hour than with ADR alone (only picture and audio). With ADR, actors can average 10–12 lines per hour, while rythmo band can facilitate the reading of 35–50 lines per hour.
However, the preparation of a rythmo band is a time-consuming process involving a series of specialists organized in a production line. This has prevented the technique from being more widely adopted, but software emulations of rythmo band technology overcome the disadvantages of the traditional rythmo band process and significantly reduce the time needed to prepare a dubbing session.
Dub localization, also often simply referred to as localization, is the practice of voice-over translation that alters a film or television series from one region of the world to the local language of another.
The new voice track is usually spoken by a voice actor. In many countries, actors who regularly perform this duty remain little-known, with the exception of particular circles (such as anime fandom) or when their voices have become synonymous with roles or actors whose voices they usually dub. In the United States, many of these voice artists may employ pseudonyms or go uncredited due to Screen Actors Guild regulations or the desire to dissociate themselves from the role.
Dub localization is a contentious issue in cinephilia amongst aficionados of foreign filmmaking and television programs, particularly anime fans. While some localization is virtually inevitable in translation, the controversy surrounding how much localization is "too much" is often discussed in such communities, especially when the final dub product is significantly different from the original. Some fans frown on any extensive localization, while others expect it, and to varying degrees, appreciate it.
In North-West Europe (the UK, the Republic of Ireland, the Netherlands, the Dutch-speaking part of Belgium, the Nordic countries and the Baltic states), Portugal, Poland, usually Ukraine, and the Balkan countries, generally only movies and TV shows intended for children are dubbed, while TV shows and movies for older audiences are subtitled (although animated productions have a tradition of being dubbed). For movies in cinemas with clear target audiences (both below and above 10–11 years of age), both a dubbed and a subtitled version are usually available.
The first movie dubbed into Albanian was "The Great Warrior Skanderbeg" in 1954, and since then thousands of popular titles have been dubbed into Albanian by various dubbing studios. All animated movies and children's programs are dubbed into Albanian, as are many live-action movies. TV series, however, are usually subtitled rather than dubbed, except for a few Mexican, Brazilian and Turkish soap operas, such as "Por Ti", "Celebridade", "A Casa das Sete Mulheres", and "Paramparça". For documentaries, Albania usually uses voice-over.
In the Dutch-speaking part of Belgium (Flanders), movies and TV series are shown in their original language with subtitles, with the exception of most movies made for a young audience. In the latter case, sometimes separate versions are recorded in the Netherlands and in Flanders (for instance, several Walt Disney films and "Harry Potter" films). These dubbed versions only differ from each other in their use of different voice actors and different pronunciation, while the text is almost the same.
In the French-speaking part of Belgium (Wallonia), the range of French-dubbed versions is approximately as wide as the German range, where nearly all movies and TV series are dubbed.
Bosnia and Herzegovina usually uses Serbian and Croatian dubs, but it has dubbed some cartoons into Bosnian itself, for example "". Children's programs (both animated and live-action) air dubbed (in Serbian, Croatian or Bosnian), while every other program is subtitled (in Bosnian).
In Croatia, foreign films and TV series are always subtitled, while most children's programs and animated movies are dubbed into Croatian. The practice of dubbing began in the 1980s with some animated shows and continued through the 1990s and 2000s with other shows and films, the latter being released on home media. Recently, more efforts have been made to introduce dubbing, but public reception has been poor, with some exceptions. Regardless of language, Croatian audiences prefer subtitling to dubbing; however, dubbing remains popular in animated films. Some previously popular shows (such as "Sailor Moon") lost their appeal completely after dubbing began, and the dubbing was eventually removed from the programs, even though most animated shows shown on television and some on home media have been well received in their dubbed versions. The situation is similar with theatrical movies, of which only those intended for children are dubbed (such as "Finding Nemo" and "Shark Tale"). Nova TV also attempted to introduce dubbing with the Mexican telenovela "La Fea Más Bella", translated as "Ružna ljepotica" (literally, "The Ugly Beauty"), but it failed. Some Croatian dubbing is also broadcast in Bosnia and Herzegovina.
In Estonia, only children's animated films are dubbed in cinemas, while live-action films are shown in their original English or Russian with subtitles, usually presented in both Estonian and Russian. Cartoons and animated series are voiced by dubbing or voice-over, while live-action films and television series are shown with Estonian subtitles only. Animated films are commonly shown in both their original and Russian versions and are dubbed into Estonian (or Russian in many cinemas). Most Estonian-language television channels use subtitles for foreign-language films and TV programs. However, Russian-language channels tend to use dubbing more often, especially channels broadcast from Russia (as opposed to Russian-language channels broadcast from Estonia).
In Greece, most cartoon films have Greek dubs. When a movie has a Greek dub, the dub is shown in cinemas, but subtitled versions are shown as well. Foreign TV shows for adults are shown in their original versions with subtitles. Most cartoons, for example "The Flintstones" and "The Jetsons", were always dubbed, while "Family Guy" and "American Dad!" are always subtitled with the original English dialogue, since they are aimed at adults rather than children (although the movie "Space Jam" was subtitled rather than dubbed, even though it is suitable for children). Some Japanese anime series are also dubbed in Greek (such as "Pokémon", "Pichi Pichi Pitch" and "Sailor Moon"). The only other television programs dubbed in Greek are Mexican TV series (like "Rubí" and "La usurpadora") and teen series (like "Hannah Montana" and "The Suite Life of Zack & Cody"). However, when Skai TV was relaunched in April 2006, the network opted to dub almost all foreign shows in Greek, unlike other Greek channels, which had always broadcast most programs in their original language with subtitles.
Ireland usually receives the same film versions as the UK. However some films have been dubbed into Irish by TG4, including the "Harry Potter" film series.
In the Netherlands, for the most part, Dutch versions are only made for children's and family films. Animated movies are shown in theaters with Dutch dubbing, but usually those cinemas with more screening rooms also provide the original subtitled version, such as movies like "Finding Nemo", "Shrek the Third" and "WALL-E".
North Macedonia has dubbed many cartoons into Macedonian, but some Serbian dubs also air there. Children's programs air dubbed (in Macedonian or Serbian), while every other program is subtitled (in Macedonian). Serbian dubs are used for Disney movies, because there are no Macedonian Disney dubs.
In Poland, cinema releases for general audiences are almost exclusively subtitled, with the exception of children's movies, and television screenings of movies, as well as made-for-TV shows. These are usually shown with voice-over, where a voice talent reads a translation over the original soundtrack. This method, called "juxtareading," is similar to the so-called Gavrilov translation in Russia, with one difference—all dialogues are voiced by one off-screen reader (), preferably with a deep and neutral voice which does not interfere with the pitch of voice of the original speakers in the background. To some extent, it resembles live translation. Certain highly qualified voice talents are traditionally assigned to particular kinds of production, such as action or drama. Standard dubbing is not widely popular with most audiences, with the exception of cartoons and children's shows, which are dubbed also for TV releases.
It is claimed that, until around 1951, there were no revoiced foreign movies available in Poland. Instead, they were exclusively subtitled in Polish.
Poland's dubbing traditions began between the two world wars. In 1931, among the first movies dubbed into Polish were "Dangerous Curves" (1929), "The Dance of Life" (1929), "Paramount on Parade" (1930), and "Darling of the Gods" (1930). In 1949, the first dubbing studio opened in Łódź. The first film dubbed that year was "Russkiy Vopros" (filmed 1948).
Polish dubbing in the first post-war years suffered from poor synchronization. Polish dialogues were not always audible and the cinema equipment of that time often made films sound less clear than they were. In the 1950s, Polish publicists discussed the quality of Polish versions of foreign movies.
The number of dubbed movies and their quality improved. Polish dubbing had a golden age between the 1960s and the 1980s, when approximately a third of foreign movies screened in cinemas were dubbed. The "Polish dubbing school" was known for its high quality, and at that time Poland had some of the best dubbing in the world. The person who initiated high-quality dubbing versions was director Zofia Dybowska-Aleksandrowicz. Dubbing in Poland was very popular at the time: Polish television dubbed popular films and TV series such as "Rich Man, Poor Man", "Fawlty Towers", "Forsyte Saga", "Elizabeth R", "I, Claudius", "I'll Take Manhattan", and "Peter the Great".
In the 1980s, due to budget cuts, state-run TV saved on tapes by voicing films over live during transmission.
Overall, during 1948–1998, almost 1,000 films were dubbed in Polish. In the 1990s, dubbing films and TV series continued, although often also for one emission only.
In 1995, Canal+ was launched in Poland. In its first years, it dubbed 30% of its schedule, including popular films and TV series; one of the best-known and most popular dubbings was that of "Friends", but the effort proved unsuccessful. Canal+ stopped dubbing films in 1999, although many people supported the idea and bought access specifically for the dubbed versions of foreign productions. In the 1990s, dubbing was also done by the television channel Wizja Jeden, which mainly dubbed BBC productions such as "The League of Gentlemen", "Absolutely Fabulous" and "Men Behaving Badly". Wizja Jeden closed in 2001. In the same year, TVP stopped dubbing the TV series "Frasier", although that dubbing was very popular.
Currently, dubbing of films and TV series for teenagers is made by Nickelodeon and Disney Channel. One of the major breakthroughs in dubbing was the Polish release of "Shrek", which contained many references to local culture and Polish humor. Since then, people seem to have grown to like dubbed versions more, and pay more attention to the dubbing actors. However, this seems to be the case only with animated films, as live-action dubbing is still considered a bad practice. In the case of DVD releases, most discs contain both the original soundtrack and subtitles, and either voice over or dubbed Polish track. The dubbed version is, in most cases, the one from the theater release, while voice-over is provided for movies that were only subtitled in theaters.
Since theatrical release of "The Avengers" in May 2012, Walt Disney Company Polska dubs all films for cinema releases. Also in 2012, United International Pictures Polska dubbed "The Amazing Spider-Man", while Forum Film Polska – former distributor of Disney's films – decided to dub "", along with its . However, when a dub is produced but the film's target audience is not exclusively children, both dubbed and subtitled versions are usually available in movie theaters. The dubbed versions are more commonly shown in morning and early afternoon hours, with the subtitled version dominating in the evening. Both can be available in parallel at similar hours in multiplexes.
In Portugal, dubbing was banned under a 1948 law as a way of protecting the domestic film industry and reducing access to culture, as most of the population was illiterate. Until 1994, animated movies, as well as other TV series for children shown in Portugal, used imported Brazilian Portuguese dubs, owing to the lack of interest from Portuguese companies in the dubbing industry. This lack of interest was justified, since quality dubbed copies of shows and movies in Portuguese, made by Brazilians, already existed. "The Lion King" was the first feature film to be dubbed in European Portuguese rather than strictly Brazilian Portuguese. Currently, all movies for children are dubbed into European Portuguese. Subtitles are preferred in Portugal and are used in every foreign-language documentary, TV series and film; the exception to this preference is when children are the target audience.
While on TV, children's shows and movies are always dubbed, in cinemas, films with a clear juvenile target can be found in two versions, one dubbed (identified by the letters V.P. for "versão portuguesa" - "Portuguese version") and another subtitled version (V.O. for "versão original" - "original version"). This duality applies only to juvenile films. Others use subtitles only. While the quality of these dubs is recognized (some have already received international recognition and prizes), original versions with subtitles are usually preferred by the adults ("Bee Movie", for example). Dubbing cartoons aimed at adults (such as "The Simpsons" or "South Park") is less common. When "The Simpsons Movie" debuted in Portugal, most cinemas showed both versions (V.O. and V.P.), but in some small cities, cinemas decided to offer only the Portuguese version, a decision that led to public protest. Presently, live action series and movies are always shown in their original language format with Portuguese subtitles. Television programs for young children (such as "Power Rangers", "Goosebumps", "Big Bad Beetleborgs", etc.) are dubbed into European Portuguese. Some video games aimed at adults (such as "God of War III", "Halo 3", "Assassin's Creed III" and "inFamous 2") are dubbed in European Portuguese, although there they provide an option to select the original language.
In Romania, virtually all programs intended for children are dubbed in Romanian, including cartoons, live-action movies and TV series on Disney Channel, Cartoon Network, Minimax, and Nickelodeon, as well as those shown on general television networks, children-focused series (such as "Power Rangers", "Goosebumps", "The New Addams Family", "The Planet's Funniest Animals") and movies screened on children's television. Animated movies are shown in theaters with Romanian dubbing. However, those cinemas with more screening rooms usually also provide the original subtitled version. Such was the case for movies like "Babe", "", "Finding Nemo", "Cars", "Shrek the Third", "Ratatouille", "Kung Fu Panda" and "WALL-E". Other foreign TV shows and movies are shown in the original language with Romanian subtitles. Subtitles are usually preferred in the Romanian market. According to "Special Eurobarometer 243" (graph QA11.8) of the European Commission (research carried out in November and December 2005), 62% of Romanians prefer to watch foreign films and programs with subtitles (rather than dubbed), 22% prefer dubbing, and 16% declined to answer. This preference is driven by the assumption that watching movies in their original versions is very useful for learning foreign languages. However, according to the same Eurobarometer, virtually no Romanian found this method—watching movies in their original version—to be the most efficient way to learn foreign languages, compared to the 53 percent who preferred language lessons at school.
In Romania, foreign language television programs and films are generally subtitled rather than dubbed. This includes programs in non-Western languages, such as Turkish, Korean or Hindi.
Serbian-language dubs are made mainly for Serbia, but they are also broadcast in Montenegro and Bosnia and Herzegovina. Children's animated movies and some live-action movies and TV series are dubbed into Serbian, while live-action films and TV series for adults always air subtitled, because in this region people prefer subtitling for live-action formats. The Turkish soap opera "Lale Devri" started airing dubbed in 2011 on RTV Pink, but because of poor reception, the dub was dropped and the rest of the series aired subtitled. "Married... with Children" was also dubbed.
The dubbing of cartoon series in former Yugoslavia during the 1980s had a twist of its own: famous Serbian actors, such as Nikola Simić, Mića Tatić, Nada Blam and others provided the voices for characters of Disney, Warner Bros., MGM and other companies, frequently using region-specific phrases and sentences and, thus, adding a dose of local humor to the translation of the original lines. These phrases became immensely popular and are still being used for tongue-in-cheek comments in specific situations. These dubs are today considered cult dubs. The only dub made after those of the 1980s and 1990s that is considered cult is the "SpongeBob SquarePants" dub made by B92 between 2002 and 2017, owing to its great popularity and memorable translation with local humor phrases, in the vein of the 1980s dubs.
Some Serbian dubs are also broadcast in North Macedonia, while cult dubs made during Yugoslavia were aired all over the country (today's Croatia, Bosnia and Herzegovina, Montenegro, Slovenia, North Macedonia and Serbia).
In the 21st-century, prominent dubbing/voice actors in Serbia include actors Marko Marković, Vladislava Đorđević, Jelena Gavrilović, Dragan Vujić, Milan Antonić, Boris Milivojević, Radovan Vujović, Goran Jevtić, Ivan Bosiljčić, Gordan Kičić, Slobodan Stefanović, Dubravko Jovanović, Dragan Mićanović, Slobodan Ninković, Branislav Lečić, Jakov Jevtović, Ivan Jevtović, Katarina Žutić, Anica Dobra, Voja Brajović, Nebojša Glogovac and Dejan Lutkić.
In Slovenia, all foreign films and television programs are subtitled with the exception of children's movies and TV shows (both animated or live-action). While dubbed versions are always shown in cinemas and later on TV channels, cinemas will sometimes play subtitled versions of children's movies as well.
In the United Kingdom, the vast majority of foreign language films are subtitled, although mostly animated films are dubbed in English. These dubs usually originate from North America rather than being made locally. Foreign language serials shown on BBC Four are subtitled into English (although open subtitles are dropped during dialogue segments that are already in English). There have, however, been notable examples of films and TV programs successfully dubbed in the UK, such as the Japanese "Monkey" and French "Magic Roundabout" series. When airing films on television, channels in the UK often choose subtitling over dubbing, even if an English dub exists. It is also a fairly common practice for animation aimed at preschool children to be re-dubbed with British voice actors replacing the original voices, such as Spin Master Entertainment's "PAW Patrol" series, although this is not done with shows aimed at older audiences. The off-screen narrated portions of some programs and reality shows that originate from North America are also redone with British English voices.
Some animated films and TV programs are also dubbed into Welsh and Scottish Gaelic.
"Hinterland" displays a less common example of a bilingual production. Each scene is filmed twice, in English and in Welsh, apart from a few scenes where Welsh with subtitles is used in the English version.
In the Nordic countries, dubbing is used only in animated features (except adult animated features) and other films for younger audiences. Some cinemas in the major cities may also screen the original version, usually as the last showing of the day, or in a smaller auditorium in a multiplex.
In television programs with off-screen narration, both the original audio and on-screen voices are usually subtitled in their native languages.
The Nordic countries are often treated as a common market issuing DVD and Blu-ray releases with original audio and user choosable subtitle options in Danish, Finnish, Norwegian and Swedish. The covers often have text in all four languages as well, but are sometimes unique for each country. Some releases may include other European language audio and/or subtitles (i.e. German, Greek, Hungarian or Italian). Children's films typically have Nordic audio tracks in all four languages, as well as original audio in most cases.
In Finland, the dubbed version from Sweden may also be available at certain cinemas for children of the 5% Swedish-speaking minority, but only in cities or towns with a significant percentage of Swedish speakers. Most DVD and Blu-ray releases usually only have the original audio, except for children's films, which have both Finnish and Swedish language tracks, in addition to the original audio and subtitles in both languages.
In Finnish movie theaters, films for adult audiences have both Finnish and Swedish subtitles, the Finnish printed in a basic font and the Swedish printed below the Finnish in a cursive font. In the early days of television, foreign TV shows and movies in Finland were voiced by a narrator; later, subtitling became standard practice on Finnish television. Dubbing of films other than children's films is unpopular in Finland, as in many other countries. A good example is "The Simpsons Movie": while the original version was well received, the Finnish-dubbed version received poor reviews, with some critics even calling it a disaster. On the other hand, many dubs of Disney animated features have been well received, both critically and by the public.
In Iceland, the dubbed versions of films and TV are usually Danish, with some translated into Icelandic. "LazyTown", an Icelandic TV show originally broadcast in English, was dubbed into Icelandic, among thirty-two other languages, and it remains the TV show to have been dubbed into the most languages.
In the Turkish, French, Italian, Spanish, German, Czech, Slovak, Hungarian, Polish, Russian and Ukrainian language-speaking markets of Europe, almost all foreign films and television shows are dubbed (the exception being the majority of theatrical releases of adult-audience movies in the Czech Republic, Slovakia, Poland and Turkey and high-profile videos in Russia). There are few opportunities to watch foreign movies in their original versions. In Spain, Italy, Germany and Austria, even in the largest cities, there are few cinemas that screen original versions with subtitles, or without any translation. However, digital pay-TV programming is often available in the original language, including the latest movies. Prior to the rise of DVDs, which in these countries are mostly issued with multi-language audio tracks, original-language films (those in languages other than the country's official language) were rare, whether in theaters, on TV, or on home video, and subtitled versions were considered a product for small niche markets such as intellectual or art films.
In France, dubbing is the norm. Most movies with a theatrical release, including all those from major distributors, are dubbed. Those that are not, are foreign independent films whose budget for international distribution is limited, or foreign art films with a niche audience.
Almost all theaters show movies with their French dubbing ("VF", short for ). Some of them also offer screenings in the original language ("VO", short for ), generally accompanied by French subtitles ("VOST", short for ). A minority of theaters (usually small ones) screen exclusively in the original language. According to the CNC (National Centre for Cinematography), VOST screenings accounted for 16.4% of tickets sold in France.
In addition, dubbing is required for home entertainment and television screenings. However, since the advent of digital television, foreign programs are broadcast to television viewers in both languages (sometimes, French with audio description is also aired); while the French-language track is selected by default, viewers can switch to the original-language track and enable French subtitles. As a special case, the binational television channel Arte broadcasts both the French and German dubbings and subtitles, in addition to the original-language version.
Some voice actors that have dubbed for celebrities in the European French language are listed below.
In Italy, dubbing is systematic, with a tradition going back to the 1930s in Rome, Milan, Florence and Turin. In Mussolini's fascist Italy, the release of movies in foreign languages was banned in 1938 for political reasons. Rome is the principal base of the dubbing industry, where major productions such as movies, dramas, documentaries and some cartoons are dubbed, while dubbing in Milan is mostly of cartoons and some minor productions. Practically every foreign film (mostly American ones) of every genre, for children or adults, as well as TV shows, are dubbed into Italian. In big cities, original-version movies can also be seen in some theaters, but this is not common. Subtitles may be available on late-night programs on mainstream TV channels, and on pay-TV all movies are available in the original language with Italian subtitles, many shows featuring their original soundtracks.
Early in their careers, actors such as Alberto Sordi or Nino Manfredi worked extensively as dubbing actors. At one point, common practice in Italian cinema was to shoot scenes MOS (motor only sync or motor only shot) and dub the dialogue in post-production. A notable example of this practice is "The Good, the Bad, and the Ugly", in which all actors had to dub in their own voices.
Video games are generally either dubbed into Italian (for instance, the "Assassin's Creed", "Halo", and "Harry Potter" series) or released with the original audio tracks and Italian subtitles.
The most important Italian voice actors and actresses, as well as the main celebrities they have dubbed in their careers, are listed below.
In Spain, practically all foreign television programs are shown dubbed in European Spanish, as are most films. Some dubbing actors have achieved popularity for their voices, such as Constantino Romero (who dubs Clint Eastwood, Darth Vader and Arnold Schwarzenegger's "Terminator", among others) and Óscar Muñoz (the official European Spanish dub-over voice artist for Elijah Wood and Hayden Christensen). Currently, with the spread of digital terrestrial television, viewers can choose between the original and the dubbed soundtracks for most movies and television programs.
In some communities, such as Catalonia, Galicia and the Basque Country, some foreign programs are also dubbed into their own languages, distinct from European Spanish. Films from Spanish-speaking America shown in these communities are shown in their original language, while strong regional accents (from Spanish-speaking America or from Spain) may be subtitled in news and documentaries.
The Germanophone dubbing market is the largest in Europe. Germany has the most foreign-movie-dubbing studios per capita and per given area in the world and, according to the German newspaper Die Welt, 52% of all voice actors currently work in the German dubbing industry. In Germany, Austria, and the German-speaking part of Switzerland, practically all films, shows, television series and foreign soap operas are shown in dubbed versions created for the German market. However, in some of Switzerland's towns and cities (particularly along the language borders), subtitled versions are common. Dubbing films is a traditional and common practice in German-speaking Europe, since subtitles are not accepted and used as much as in other European countries. According to a European study, Austria is the country with the highest rejection rate (more than 70 percent) of subtitles, followed by Italy, Spain and Germany.
In German-speaking markets, computer and video games feature German text menus and are dubbed into the German language if speaking parts exist.
In recent years, Swiss and Austrian television stations have been showing increasing numbers of movies, series and TV-programs in "dual sound," which means the viewer can choose between the original language (e.g. English) and the language of the channel (German, French or Italian, according to the location).
Although German-speaking voice actors play only a secondary role, they are still notable for providing familiar voices to well-known actors. Famous foreign actors are known and recognized for their German voice, and the German audience is used to them, so dubbing is also a matter of authenticity. However, in larger cities, there are theaters where movies can be seen in their original versions, as English has become somewhat more popular among young educated viewers. On German mainstream television, films are never broadcast with subtitles, but pay-per-view programming is often available in the original language. Subtitled niche and art films are sometimes aired on smaller networks.
German-dubbed versions sometimes diverge greatly from the original, especially in adding humorous elements absent from the original. In extreme cases, such as "The Persuaders!", the German-dubbed version was more successful than the English original. The translation often adds sexually explicit gags that the U.S. versions might not be allowed to use. For example, in "Bewitched", the translators changed "The Do Not Disturb sign will hang on the door tonight" to "The only hanging thing tonight will be the Do Not Disturb sign".
Some movies dubbed in Austria diverge from the German Standard version in the way they address other people, but only when the movies are dubbed into certain Austrian dialect versions (Mr. and Mrs. are translated as Herr and Frau, which is usually left untranslated in order to stay in lip-sync).
Sometimes even English first names are translated and pronounced as the corresponding German equivalent (the English name "Bert" became the Southern German name "Bertl", an abbreviation for any name beginning or ending with "bert", e.g. "Berthold" or "Albert").
Some movies dubbed before German reunification exist in different versions for the east and the west. They use different translations, and often differ in the style of dubbing.
Some of the well-known German dubbing voice artists are listed below.
In Slovakia's home media market, Czech-dubbed versions are widely used; only children's films and a few exceptions (for example, Independence Day) that were dubbed for cinema are released with Slovak dubbing. Czech dubbing was also extensively used in the broadcasts of Slovak television channels, but since 2008 Slovak language laws have required any newer shows (understood as the first television broadcast in Slovakia) to be provided with Slovak localization (dubbing or subtitles); since then, television broadcasts of films, TV series and cartoons have been dubbed into Slovak.
In Hungary, dubbing is almost universal. Almost every foreign movie or TV show released in Hungary is dubbed into Hungarian. The history of dubbing dates back to the 1950s, when the country was still under communist rule. One of the most iconic Hungarian dubs was of the American cartoon "The Flintstones", with a local translation by József Romhányi. The Internetes Szinkron Adatbázis (ISzDB) is the largest Hungarian database for film dubs, with information for many live-action and animated films. According to page 59 of the Eurobarometer, 84% of Hungarians said that they prefer dubbing over subtitles.
In the socialist era, every film was dubbed with professional and mostly popular actors. Care was taken to make sure the same voice actor would lend his voice to the same original actor. In the early 1990s, as cinemas tried to keep up with showing newly released films, subtitling became dominant in the cinema. This, in turn, forced TV channels to make their own cheap versions of dubbed soundtracks for the movies they presented, resulting in a constant degradation of dubbing quality. Once this became customary, cinema distributors resumed the habit of dubbing popular productions, albeit at below-average quality. However, every feature is presented with the original soundtrack in at least one cinema in large towns and cities.
However, in Hungary, most documentary films and series (for example, those on Discovery Channel, National Geographic Channel) are made with voiceovers. Some old movies and series, or ones that provide non-translatable jokes and conversations (for example, the "Mr. Bean" television series), are shown only with subtitles.
There is a more recent problem arising from dubbing included on DVD releases. Many generations have grown up with an original (and, by current technological standards, outdated) soundtrack, which is either technologically (mono or bad quality stereo sound) or legally (expired soundtrack license) unsuitable for a DVD release. Many original features are released on DVD with a new soundtrack, which in some cases proves to be extremely unpopular, thus forcing DVD producers to include the original soundtrack. In some rare cases, the Hungarian soundtrack is left out altogether. This happens notably with Warner Home Video Hungary, which ignored the existence of Hungarian soundtracks completely, as they did not want to pay the licenses for the soundtracks to be included on their new DVD releases, which appear with improved picture quality, but very poor subtitling.
In Poland, cinema releases for general audiences are almost exclusively subtitled, with the exception of children's movies, television screenings of movies, and made-for-TV shows. These are usually shown with voice-over, where a voice talent reads a translation over the original soundtrack. This method, called "juxtareading," is similar to the so-called Gavrilov translation in Russia, with one difference—all dialogues are voiced by one off-screen reader (), preferably with a deep and neutral voice that does not interfere with the pitch of voice of the original speakers in the background. To some extent, it resembles live translation. Certain highly qualified voice talents are traditionally assigned to particular kinds of production, such as action or drama. Standard dubbing is not widely popular with most audiences, with the exception of cartoons and children's shows, which are also dubbed for TV releases.
It is claimed that, until around 1951, there were no revoiced foreign movies available in Poland. Instead, they were exclusively subtitled in Polish.
Poland's dubbing traditions began between the two world wars. In 1931, among the first movies dubbed into Polish were "Dangerous Curves" (1929), "The Dance of Life" (1929), "Paramount on Parade" (1930), and "Darling of the Gods" (1930). In 1949, the first dubbing studio opened in Łódź. The first film dubbed that year was "Russkiy Vopros" (filmed 1948).
Polish dubbing in the first post-war years suffered from poor synchronization. Polish dialogues were not always audible and the cinema equipment of that time often made films sound less clear than they were. In the 1950s, Polish publicists discussed the quality of Polish versions of foreign movies.
Both the number and the quality of dubbed movies improved. Polish dubbing had a golden age between the 1960s and the 1980s, when approximately a third of the foreign movies screened in cinemas were dubbed. The "Polish dubbing school" was known for its high quality; at that time, Poland had some of the best dubbing in the world. High-quality dubbed versions were pioneered by director Zofia Dybowska-Aleksandrowicz. Dubbing in Poland was then very popular: Polish television dubbed popular films and TV series such as "Rich Man, Poor Man", "Fawlty Towers", "Forsyte Saga", "Elizabeth R", "I, Claudius", "I'll Take Manhattan", and "Peter the Great".
In the 1980s, due to budget cuts, state-run TV saved on tapes by having films voiced over live during transmission.
Overall, during 1948–1998, almost 1,000 films were dubbed into Polish. In the 1990s, the dubbing of films and TV series continued, although often for a single broadcast only.
In 1995, Canal+ was launched in Poland. In its first years, it dubbed 30% of its schedule, including popular films and TV series; one of the best-known and most popular dubbings was that of "Friends", but the practice proved unsuccessful. Canal+ stopped dubbing films in 1999, although many people had supported the idea of dubbing and bought access solely for the dubbed versions of foreign productions. In the 1990s, dubbing was also done by the television channel Wizja Jeden, which mainly dubbed BBC productions such as "The League of Gentlemen", "Absolutely Fabulous" and "Men Behaving Badly". Wizja Jeden was closed in 2001. In the same year, TVP stopped dubbing the TV series "Frasier", although that dubbing was very popular.
Currently, dubbing of films and TV series for teenagers is made by Nickelodeon and Disney Channel. One of the major breakthroughs in dubbing was the Polish release of "Shrek", which contained many references to local culture and Polish humor. Since then, people seem to have grown to like dubbed versions more, and pay more attention to the dubbing actors. However, this seems to be the case only with animated films, as live-action dubbing is still considered a bad practice. In the case of DVD releases, most discs contain both the original soundtrack and subtitles, and either a voice-over or a dubbed Polish track. The dubbed version is, in most cases, the one from the theatrical release, while voice-over is provided for movies that were only subtitled in theaters.
Since the theatrical release of "The Avengers" in May 2012, Walt Disney Company Polska has dubbed all films for cinema releases. Also in 2012, United International Pictures Polska dubbed "The Amazing Spider-Man", while Forum Film Polska – former distributor of Disney's films – decided to dub "", along with its . However, when a dub is produced but the film's target audience is not exclusively children, both dubbed and subtitled versions are usually available in movie theaters. The dubbed versions are more commonly shown in morning and early afternoon hours, with the subtitled version dominating in the evening. Both can be available in parallel at similar hours in multiplexes.
Russian television is generally dubbed, often using the voice-over technique with only a couple of voice actors, with the original speech still audible underneath. In the Soviet Union, most foreign movies that were officially released were dubbed. Voice-over dubbing was invented in the Soviet Union in the 1980s, when, with the fall of the regime, many popular foreign movies, previously forbidden or at least questionable under communist rule, started to flood in in the form of low-quality home-copied videos. Being unofficial releases, they were dubbed in a very primitive way: for example, the translator spoke the text directly over the audio of a video being copied, using primitive equipment.
The quality of the resulting dub was very low: the translated phrases were off-sync and interfered with the original voices, background sounds leaked into the track, the translation was inaccurate and, most importantly, all dub voices were made by a single person who usually lacked the intonation of the original, making comprehension of some scenes quite difficult. This method of translation exerted a strong influence on Russian pop culture, and the voices of translators became recognizable for generations.
In modern Russia, the overdubbing technique is still used in many cases, although with vastly improved quality, and now with multiple voice actors dubbing different original voices. Video games are generally either dubbed into Russian (such as the "Legend of Spyro" trilogy, the "Skylanders" series, the "Assassin's Creed" saga, the "Halo" series, the "Harry Potter" series, etc.) or released with the original audio tracks but with all the text translated into Russian.
Cinema releases are almost always dubbed into Russian. Television series are shown either with a dubbed translation or with an off-screen voice-over. Subtitles are not used at all.
In Ukraine, since 2006, cinema releases have almost always been dubbed into Ukrainian with the overdubbing technique and multiple voice actors dubbing different original voices, with a small percentage of art-house films and documentaries shown in the original language with Ukrainian subtitles. For television, TV channels usually release movies and TV shows with a Ukrainian voice-over, although certain high-profile films and TV shows are dubbed rather than voice-overed.
In the past, Russian-language films, TV series, cartoons, animated series and TV programs were usually not dubbed but were shown with the original audio and Ukrainian subtitles. However, this practice has been slowly abandoned since the late 2010s: all children's films and cartoons, regardless of the original language (including Russian), are now always dubbed into Ukrainian. Examples of the first Russian cartoons dubbed into Ukrainian for cinematic release are The Snow Queen 2 (2015), A Warrior's Tail (2015), Volki i Ovtsy: Be-e-e-zumnoe prevrashenie (2016), Ivan Tsarevich i Seryy Volk 3 (2016), Bremenskie razboyniki (2016), Fantastic Journey to OZ (2017), Fixies: Top Secret (2017), etc. The same trend is seen among Russian-language feature films for adults, with the first such films dubbed into Ukrainian including Battle for Sevastopol (2015), Hardcore Henry (2016), and The Duelist (2016).
In Latvia and Lithuania, only children's movies get dubbed in the cinema, while many live-action movies for an older audience use voice-over. In recent years, however, many cartoons have been dubbed into Latvian and Lithuanian for TV, though some other kids' shows, like "SpongeBob SquarePants", use voice-over.
In the United States and English-speaking Canada, live-action foreign films are usually shown in theaters with their original languages and English subtitles, because dubbed live-action movies have rarely done well at the United States box office since the 1980s. The 1982 United States theatrical release of Wolfgang Petersen's "Das Boot" was the last major release to go out in both original and English-dubbed versions, and the film's original version actually grossed much higher than the English-dubbed version. Later on, English-dubbed versions of international hits like "Un indien dans la ville", "Godzilla 2000", "Anatomy", "Pinocchio" and "High Tension" flopped at the United States box office. When Miramax planned to release the English-dubbed versions of "Shaolin Soccer" and "Hero" in United States cinemas, those versions scored badly in test screenings, so Miramax ultimately released the films with their original language.
Still, English-dubbed movies have much better commercial potential in the ancillary market; therefore, many distributors release live-action foreign films in theaters with their original languages (with English subtitles), then release both the original and English-dubbed versions in the ancillary market.
On the other hand, anime is almost always released in English-dubbed format, regardless of its content or target age group. The exceptions to this practice are either when an English dub has not been produced for the program (usually in the case of feature films) or when the program is being presented by a network that places importance on presenting it in its original format (as was the case when Turner Classic Movies aired several of Hayao Miyazaki's works, which were presented both dubbed and subtitled). Most anime DVDs contain options for original Japanese, Japanese with subtitles, and English-dubbed, except for a handful of series that have been heavily edited or Americanized. In addition, Disney has a policy that makes its directors undergo stages to perfect alignment of certain lip movements so the movie looks believable.
In addition, a small number of British films have been re-dubbed when released in the United States, due to the usage of dialects which Americans are not familiar with (for example, "Kes" and "Trainspotting"). However, British children's shows (such as "Bob the Builder") are always re-dubbed with American voice actors in order to make the series more understandable for American children. Conversely, British programs shown in Canada are not re-dubbed.
Some television shows shown in the US have Spanish dubs. These are accessible through the SAP (secondary audio program) function of the television unit.
For Spanish-speaking countries, all foreign-language programs, films, cartoons and documentaries shown on free-to-air TV networks are dubbed into Standard Spanish, while broadcasts on cable and satellite pan-regional channels are either dubbed or subtitled. In theaters, children's movies and most blockbuster films are dubbed into Standard Spanish or Mexican Spanish, and are sometimes further dubbed into regional dialects of Spanish where they are released.
In Mexico, by law, films shown in theaters must be shown in their original version. Films in languages other than Spanish are usually subtitled. Only educational documentaries and movies rated for children, as well as some movies that are expected to have a wide audience (for example, "" or "The Avengers"), may be dubbed, but this is not compulsory, and some animated films are shown in theaters in both dubbed and subtitled versions (for instance, some DreamWorks productions). Nonetheless, a recent trend in several cinemas is to offer only the dubbed versions, with a stark decrease in showings of the original ones.
Dubbing must be made in Mexico by Mexican nationals or foreigners residing in Mexico. Still, several programs that are shown on pay TV are dubbed in other countries like Venezuela, Chile or Colombia.
Most movies released on DVD feature neutral Spanish as a language option, and sometimes feature a specific dub for Mexican audiences (for example, "Rio"). Foreign programs are dubbed on broadcast TV, while on pay TV most shows and movies are subtitled. In a similar way to cinemas, in the last few years many channels on pay TV have begun to broadcast programs and films only in their dubbed version.
Dubbing became very popular in the 1990s with the rise in popularity of anime in Mexico. Some voice actors have become celebrities and are always identified with specific characters, such as Mario Castañeda (who became popular by dubbing Goku in "Dragon Ball Z") or Humberto Vélez (who dubbed Homer Simpson in the first 15 seasons of "The Simpsons").
The popularity of pay TV has allowed people to view several series in their original language rather than dubbed. Dubbing has been criticized for the use of TV or movie stars as voice actors (such as Ricky Martin in Disney's "Hercules", or Eugenio Derbez in DreamWorks' "Shrek"), or for the incorrect use of local popular culture that sometimes creates unintentional jokes or breaks the feeling of the original work (such as translating Sheldon Cooper's "Bazinga!" to "¡Vacilón!").
Several video games have been dubbed into neutral Spanish, rather than European Spanish, in Mexico (such as the "Gears of War" series, "Halo 3", "Infamous 2" and others). Sony recently announced that more games (such as "God of War: Ascension") will be dubbed into neutral Spanish.
In Peru, all foreign series, movies, and animated programming are shown dubbed in Latin American Spanish, with dubs imported from Mexico, Chile, Colombia and Venezuela on terrestrial and pay television. Most movies intended for kids are offered as dub-only, while most films aimed at older audiences are offered both dubbed and subtitled in Spanish; at most theaters, subtitled versions of kids' films are on rare occasions shown at nighttime. Most subtitled pay-TV channels show both dubbed and subtitled versions of every film they broadcast, offered with a separate subtitle track and a second audio track in English. Since the late 2000s, more people have come to prefer subtitled films and series over dubbed ones, as Peruvian viewers tend to get used to the original versions.
Peru did not use to produce its own dubs, as no dubbing studios existed in the country until 2016, when the company "Big Bang Films" started to dub movies and series. In addition, since 2014, a group of dubbing actors has run a collective called "Torre A Doblaje", which provides dubbing and voice-over services.
In Brazil, foreign programs are dubbed into Brazilian Portuguese on free-to-air TV, with only a few exceptions. Films shown at cinemas are generally offered with both subtitled and dubbed versions, with dubbing frequently being the only choice for children's movies. Subtitling was primarily for adult-audience movies until 2012; since then, dubbed versions have also become available for all ages. As a result, in recent years, more cinemas have opened in Brazil, attracting new audiences who prefer dubbing. According to a Datafolha survey, 56% of the Brazilian movie-theater audience prefers to watch dubbed movies. Most of the dubbing studios in Brazil are in the cities of Rio de Janeiro and São Paulo.
The first film to be dubbed in Brazil was the Disney animation "Snow White and the Seven Dwarfs" in 1938. By the end of the 1950s, most movies, TV series and cartoons on television in Brazil were shown with their original sound and subtitles. However, in 1961, a decree of President Jânio Quadros ruled that all foreign productions on television should be dubbed. This measure boosted the growth of dubbing in Brazil and has led to several dubbing studios since then. The biggest dubbing studio in Brazil was Herbert Richers, headquartered in Rio de Janeiro and closed in 2009; at its peak in the 1980s and 1990s, the Herbert Richers studios dubbed about 70% of the productions shown in Brazilian cinemas.
In the 1990s, with "Saint Seiya", "Dragon Ball" and other anime shows becoming popular on Brazilian TV, voice actors and the dubbing profession gained a higher profile in Brazilian culture. Actors like Hermes Baroli (Brazilian dubber of Pegasus Seiya in "Saint Seiya" and of actors like Ashton Kutcher), Marco Ribeiro (Brazilian dubber of many actors like Tom Hanks, Jim Carrey and Robert Downey Jr., and of Yusuke Urameshi from the anime "Yu Yu Hakusho") and Wendel Bezerra (Brazilian dubber of Goku in "Dragon Ball Z" and SpongeBob in "SpongeBob SquarePants") are recognized for their most notable roles.
Pay TV commonly offers both dubbed and subtitled movies, with statistics showing that dubbed versions are becoming predominant. Most DVD and Blu-ray releases usually feature Portuguese, Spanish, and the original audio along with subtitles in native languages. Most video games are dubbed in Brazilian Portuguese rather than having European Portuguese dubs alone; games such as "Halo 3", "inFamous 2", "Assassin's Creed III", "World of Warcraft" and others are dubbed in Brazilian Portuguese. This is because, despite the dropping of the dubbing law in Portugal in 1994, most companies in that country use Brazilian Portuguese, both because of traditional usage during the days of the dubbing rule and because these dubbings are more marketable than European Portuguese ones.
A list showcasing the Brazilian Portuguese voice artists who dub for actors and actresses is displayed here. However, there can also be different official dub artists for certain regions within Brazil.
For unknown, probably technical, reasons, the Brazilian Portuguese dub credits of some shows and cartoons on Viacom or Turner/Time Warner channels are shown in Latin America on Spanish-dubbed series.
In Quebec, Canada, most films and TV programs in English are dubbed into Standard French, occasionally with Quebec French idiosyncrasies. Dubbing actors speak with a mixed accent: they pronounce /ɛ̃/ with a Parisian accent, but pronounce "â" and "ê" with a Quebec accent ("grâce" [ɡʁɑːs] and "être" [ɛːtʁ̥]). Occasionally, the dubbing of a series or a movie, such as "The Simpsons", is made using the more widely spoken "joual" variety of Quebec French. Dubbing has the advantage of making children's films and TV series more comprehensible to younger audiences. However, many bilingual Québécois prefer subtitling, since they understand some or all of the original audio. In addition, films are also shown in English in certain theaters (especially in major cities and English-speaking areas such as the West Island), and some theatres, such as the Scotiabank Cinema Montreal, show only movies in English. Most American television series are only available in English on DVD or on English-language channels, but some of the more popular ones have French dubs shown on mainstream networks and are released in French on DVD as well, sometimes separately from an English-only version.
Formerly, all French-language dubbed films in Quebec were imported from France and some still are. Such a practice was criticized by former politician Mario Dumont after he took his children to see the Parisian French dub of "Shrek the Third", which Dumont found incomprehensible. After his complaints and a proposed bill, "Bee Movie", the film from DreamWorks Animation, was dubbed in Quebec, making it the studio's first animated film to have a Quebec French dub, as all DreamWorks Animation films had previously been dubbed in France. In terms of Disney, the first Disney animated film to be dubbed in Quebec was "Oliver and Company." The Disney Renaissance films were also dubbed in Quebec except for "The Rescuers Down Under", "Beauty and the Beast", and "The Lion King".
In addition, because Canadian viewers usually find Quebec French more comprehensible than other dialects of the language, some older film series that had the French-language versions of previous installments dubbed in France have had later ones dubbed in Quebec, often creating inconsistencies within the French version of the series' canon. Lucasfilm's "Star Wars" and "Indiana Jones" series are examples. Both series had films released in the 1970s and 1980s with no Québécois French dubbed versions; instead, the Parisian French versions, with altered character and object names and terms, were distributed in the province. However, later films in both series, released in 1999 and later, were dubbed in Quebec, using different voice actors and "reversing" name changes made in France's dubbings due to the change in studio.
China has a long tradition of dubbing foreign films into Mandarin Chinese, starting in the 1930s. While during the Republic of China era Western motion pictures may have been imported and dubbed into Chinese, since 1950 Soviet movies, dubbed primarily in Shanghai, became the main import. Beginning in the late 1970s, in addition to films, popular TV series from the United States, Japan, Brazil, and Mexico were also dubbed. The Shanghai Film Dubbing Studio has been the most well-known studio in the film dubbing industry in China. In order to generate high-quality products, they divide each film into short segments, each one lasting only a few minutes, and then work on the segments one-by-one. In addition to the correct meaning in translation, they make tremendous effort to match the lips of the actors to the dialogue. As a result, the dubbing in these films generally is not readily detected. The cast of dubbers is acknowledged at the end of a dubbed film. Several dubbing actors and actresses of the Shanghai Film Dubbing Studio have become well-known celebrities, such as Qiu Yuefeng, Bi Ke, Li Zi, and Liu Guangning. In recent years, however, especially in the larger cities on the east and south coasts, it has become increasingly common for movie theaters to show subtitled versions with the original soundtracks intact.
Motion pictures are also dubbed into the languages of some of China's autonomous regions. Notably, the Translation Department of the Tibetan Autonomous Region Movie Company (西藏自治区电影公司译制科) has been dubbing movies into the Tibetan language since the 1960s. In the early decades, it would dub 25 to 30 movies each year, the number rising to 60-75 by the early 2010s.
Motion pictures are dubbed for China's Mongol- and Uyghur-speaking markets as well.
Taiwan dubs some foreign films and TV series in Mandarin Chinese. Until the mid-1990s, the major national terrestrial channels both dubbed and subtitled all foreign programs and films and, for some popular programs, the original voices were offered on a second audio program. Gradually, however, both terrestrial and cable channels stopped dubbing prime-time U.S. shows and films, while subtitling continued.
Since the 2000s, dubbing practice has differed depending on the nature and origin of the program. Animations, children's shows and some educational programs on PTS are mostly dubbed. English live-action movies and shows are not dubbed in theaters or on television. Japanese TV dramas are no longer dubbed, while Korean dramas, Hong Kong dramas and dramas from other Asian countries are still often dubbed. Korean variety shows are not dubbed. Japanese and Korean films on Asian movie channels are still dubbed. In theaters, most foreign films are not dubbed, while animated films and some films meant for children offer a dubbed version. Hong Kong live-action films have a long tradition of being dubbed into Mandarin, while more famous films offer a Cantonese version.
In Hong Kong, foreign television programs, except for English-language and Mandarin television programs, are dubbed in Cantonese. English-language and Mandarin programs are generally shown in their original versions with subtitles. Foreign films, including most live-action and animated films (such as anime and Disney films), are usually dubbed in Cantonese. However, most cinemas also offer subtitled versions of English-language films.
For the most part, foreign films and TV programs, both live-action and animated, are generally dubbed in both Mandarin and Cantonese. For example, in "The Lord of the Rings" film series, Elijah Wood's character Frodo Baggins was dubbed into Mandarin by Jiang Guangtao for China and Taiwan. For the Cantonese localization, there were two dubs for Hong Kong and Macau: in the first Cantonese dub, he was voiced by Leung Wai Tak; in the second, by Bosco Tang.
A list of Mandarin and Cantonese voice artists who dub for particular actors is shown here.
In Israel, only children's movies and TV programming are dubbed in Hebrew. In programs aimed at teenagers and adults, dubbing is rarely considered for translation, not only because of its high costs, but also because the audience is mainly multilingual. Most viewers in Israel speak at least one European language in addition to Hebrew, and a large part of the audience also speaks Arabic. Therefore, most viewers prefer to hear the original soundtrack, aided by Hebrew subtitles. Another problem is that dubbing does not allow for translation into two different languages simultaneously, as is often the case with Israeli television channels that use subtitles in Hebrew and another language (like Russian) at the same time.
In Japan, many television programs appear on Japanese television subtitled or dubbed if they are intended for children. When the American film "Morocco" was released in Japan in 1931, subtitles became the mainstream method of translating TV programs and films in Japan. Later, around the 1950s, foreign television programs and films began to be shown dubbed in Japanese on television. The first ones to be dubbed into Japanese were the 1940s Superman cartoons in 1955.
Due to the lack of domestically produced video software for television, programming was imported from abroad. When such programs were shown on television, they were mostly dubbed. Subtitles were constrained by the character limits of small, low-resolution TV screens and were difficult for elderly and illiterate viewers to follow, problems that audio dubbing avoided. Presently, TV shows and movies (both those aimed at all ages and adults-only) are shown dubbed, with the original language and Japanese subtitles provided as options when the same film is released on VHS, DVD and Blu-ray. Laserdisc releases of Hollywood films were almost always subtitled.
Adult cartoons such as "Family Guy", "South Park", and "The Simpsons" are shown dubbed in Japanese on the WOWOW TV channel. "" was dubbed in Japanese by different actors instead of the same Japanese dubbing-actors from the cartoon because it was handled by a different Japanese dubbing studio, and it was marketed for the Kansai market. In Japanese theaters, foreign-language movies, except those intended for children, are usually shown in their original version with Japanese subtitles. Foreign films usually have multiple Japanese-dub versions, each with different voice actors, depending upon which TV station they are aired on. NHK, Nippon TV, Fuji TV, TV Asahi, and TBS usually follow this practice, as do software releases on VHS, Laserdisc, DVD and Blu-ray. As for recent foreign films being released, there are now some film theaters in Japan that show both dubbed and subtitled editions.
On 22 June 2009, 20th Century Fox's Japanese division opened a Blu-ray lineup known as "Emperor of Dubbing", dedicated to releasing multiple Japanese dubs of popular English-language films (mostly Hollywood films) while retaining the original scripts, putting them out altogether in special Blu-ray releases. These also feature a new dub created exclusively for that release as a director's cut, or a new dub made with a better surround-sound mix to match that of the original English mix (as most older Japanese dubs were made on mono mixes to be aired on TV). Other companies have followed the practice, such as Universal Pictures's Japanese division NBCUniversal Entertainment Japan opening "Reprint of Memories" along with "Power of Dubbing", which act in a similar way by re-packaging all the multiple Japanese dubs of popular films and putting them out as special Blu-ray releases.
"Japanese dub-over artists" provide the voices for certain performers, such as those listed in the following table:
In South Korea, anime that are imported from Japan are generally shown dubbed in Korean on television. However, some anime are censored, with Japanese text or content edited for a suitable Korean audience. Western cartoons are dubbed in Korean as well, such as Nickelodeon cartoons like "SpongeBob SquarePants" and "Danny Phantom". Several English-language (mostly American) live-action films are dubbed in Korean, but they are not shown in theaters. Instead, they are only broadcast on South Korean television networks (KBS, MBC, SBS, EBS), while DVD import releases of these films are shown with Korean subtitles, such as "The Wizard of Oz", "Mary Poppins", the "Star Wars" films, and "Avatar". This may be because the six major American film studios may not own any rights to the Korean dubs of their live-action films that the Korean television networks have dubbed and aired. Even if they don't own the rights, Korean or non-Korean viewers can record Korean-dubbed live-action films from television broadcasts onto DVDs with DVRs.
Sometimes, video games are dubbed in Korean. Examples would be the "Halo" series, the "Jak & Daxter" series, and the "God of War" series. For the "Halo" games, Lee Jeong Gu provides his Korean voice to the main protagonist Master Chief (replacing Steve Downes's voice), while Kim So Hyeong voices Chieftain Tartarus, one of the main antagonists (replacing Kevin Michael Richardson's voice).
The following South Korean voice-over artists are usually identified with the following actors:
In Thailand, foreign television programs are dubbed in Thai, but the original soundtrack is often simultaneously carried on a NICAM audio track on terrestrial broadcast, and alternate audio tracks on satellite broadcast. Previously, terrestrial stations simulcasted the original soundtrack on the radio. On pay-TV, many channels carry foreign-language movies and television programs with subtitles. Movie theaters in Bangkok and some larger cities show both the subtitled version and the dubbed version of English-language movies. In big cities like Bangkok, Thai-language movies have English subtitles.
This list features a collection of Thai voice actors and actresses who have dubbed for these featured performers.
Unlike movie theaters in most Asian countries, those in Indonesia show foreign movies with subtitles. Then, a few months or years later, those movies appear on TV either dubbed in Indonesian or subtitled. Kids' shows are mostly dubbed, though even in cartoon series songs are typically not dubbed; in big movies such as Disney films, however, both speaking and singing voices are cast for the Indonesian dub, even though it may take months or even years for the dubbed movie to come out. Adult films are mostly subtitled, but they are sometimes dubbed as well; because there are not many Indonesian voice actors, three characters in a dubbed movie can have the exact same voice.
Reality shows are never dubbed in Indonesian, because they are not planned interactions like movies and TV shows, so if they appear on TV, they appear with subtitles.
In the Philippines, media practitioners generally have mixed practices regarding whether to dub television programs or films, even within the same kind of medium. In general, the decision whether to dub a video production depends on a variety of factors such as the target audience of the channel or programming bloc on which the feature will be aired, its genre, and/or outlet of transmission (e.g. TV or film, free or pay-TV).
The prevalence of media needing to be dubbed has resulted in a talent pool that is very capable of syncing voice to lip, especially for shows broadcast by the country's three largest networks. It is not uncommon in the Filipino dub industry to have most of the voices in a series dubbed by only a handful of voice talents. Programs originally in English used to usually air in their original language on free-to-air television.
Since the late 1990s/early 2000s, however, more originally English-language programs that air on major free-to-air networks (i.e. 5, ABS-CBN, GMA) have been dubbed into Filipino. Even the former Studio 23 (now S+A), once known for airing programs in English, adopted Filipino-language dubbing for some of its foreign programs. Children's programs from cable networks Nickelodeon, Cartoon Network, and Disney Channel shown on 5, GMA, or ABS-CBN have long been dubbed into Filipino or another Philippine regional language. Animated Disney films are often dubbed in Filipino except for the singing scenes, which are shown in their original language (though in recent years, there has been an increase in the number of Disney musicals having their songs also translated, such as "Frozen"). GMA News TV airs some documentaries, movies, and reality series originally shown in the English language dubbed in Filipino.
Dubbing is less common in smaller free-to-air networks such as ETC and the former RPN 9 (now CNN Philippines) whereby the original-language version of the program is aired. Dramas from Asia (particularly Greater China and Korea) and Latin America (called "Asianovelas", and "Mexicanovelas", respectively) have always been dubbed into Filipino or another Philippine regional language, and each program from these genres feature their unique set of Filipino-speaking voice actors.
The original language-version of TV programs is also usually available on cable/satellite channels such as Fox Life, Fox, and AXN. However, some pay-TV channels specialize in showing foreign shows and films dubbed into Filipino. Cinema One, ABS-CBN's cable movie channel, shows some films originally in non-English language dubbed into Filipino. Nat Geo Wild airs most programs dubbed into Filipino for Philippine audiences, being one of the few cable channels to do so. Tagalized Movie Channel & Tag airs Hollywood and Asian movies dubbed in Filipino. Fox Filipino airs some English, Latin, and Asian series dubbed in Filipino such as "The Walking Dead", "Devious Maids", "La Teniente", "Kdabra", and some selected programs from Channel M. The defunct channel HERO TV, which focuses on anime and tokusatsu shows and now a web portal, dubs all its foreign programs into Filipino. This is in contrast to Animax, where their anime programs are dubbed in English.
Foreign films, especially English films shown in local cinemas, are almost always shown in their original language. Non-English foreign films make use of English subtitles. Unlike other countries, children's films originally in English are not dubbed in cinemas.
A list of voice actors, together with the performers they dub into Filipino, is given here.
In India, where "foreign films" are synonymous with "Hollywood films", dubbing is done mostly in Hindi, Tamil and Telugu. Dubbing is rarely done in the other major Indian languages, namely Malayalam and Bengali, due to the lack of significant market size. Despite this, some Kannada and Malayalam dubs of children's television programs can be seen on the Sun TV channel. The dubbed versions are released into the towns and lower-tier settlements of the respective states (where English penetration is low), often with the English-language originals released in the metropolitan areas. In all other states, the English originals are released along with the dubbed versions, where the dubbed versions often earn more at the box office than the originals. "Spider-Man 3" was also done in the Bhojpuri language, a language popular in eastern India, in addition to Hindi, Tamil and Telugu. "A Good Day to Die Hard", the most recent installment in the "Die Hard" franchise, was the first ever Hollywood film to receive a Punjabi-language dub as well.
Most TV channels mention neither the Indian-language dubbing credits nor the dubbing staff at the end of the original ending credits, since modifying the credits for the original cast or voice actors involves a large budget, making it somewhat difficult to find information on the dubbed versions. The same situation applies to films. Sometimes foreign programs and films receive more than one dub; for example, Jumanji, Dragonheart and Van Helsing each have two Hindi dubs. Information on the Hindi, Tamil and Telugu voice actors who have voiced specific actors and roles in foreign films and television programs is published in local Indian data magazines, for those involved in the dubbing industry in India. On a few occasions, however, foreign productions do credit the dubbing cast, such as animated films like the "Barbie" films and some Disney films. Disney Channel original series released on DVD with their Hindi dubs show a list of the artists in the Hindi dub credits, after the original ending credits. Theatrical releases and VCD releases of foreign films do not credit the dubbing cast or staff. DVD releases, however, do have credits for the dubbing staff if they are released multilingual. Recently, information on the dubbing staff of foreign productions has been expanding due to the high demand from people wanting to know the voice actors behind characters in foreign works. Large dubbing studios in India include Sound & Vision India, Main Frame Software Communications, Visual Reality, ZamZam Productions, Treasure Tower International, Blue Whale Entertainment, Jai Hand Entertainment, Sugar Mediaz and Rudra Sound Solutionz.
In Pakistan "foreign films", and cartoons are not normally dubbed locally. Instead, foreign films, anime and cartoons, such as those shown on Nickelodeon Pakistan and Cartoon Network Pakistan, are dubbed in Hindi in India, as Hindi and Urdu, the national language of Pakistan, are mutually intelligible.
However, soap operas from Turkey are now dubbed in Urdu and have gained increased popularity at the expense of Indian soap operas in Hindi. This has led to protests from local producers, who see these as a threat to Pakistan's television industry, with local productions being moved out of peak viewing time or dropped altogether. Similarly, political leaders have expressed concerns over their content, given Turkey's less conservative culture.
In Vietnam, foreign-language films and programs are subtitled on television in Vietnamese. They were not dubbed until 1985; instead, a speaker briefly translated them before commercial breaks. "Rio" was considered to be the very first American Hollywood film to be entirely dubbed in Vietnamese. Since then, children's films have been released dubbed in theaters. HTV3 has dubbed television programs for children, including "Ben 10" and "Ned's Declassified School Survival Guide", using various voice actors to dub over the character roles.
Soon afterwards, more programs started to get dubbed. HTV3 also offers anime dubbed into Vietnamese. Pokémon got a Vietnamese dub in early 2014 on HTV3, starting with the Best Wishes series. But due to a controversy regarding Pokémon's cries being re-dubbed even though all characters had kept their Japanese names, the show was switched to VTV2 in September 2015, when the XY series debuted. Sailor Moon was also dubbed for HTV3 in early 2015.
In multilingual Singapore, dubbing is rare for western programs. English-language programs on the free-to-air terrestrial channels are usually subtitled in Chinese or Malay. Chinese, Malay and Tamil programs (except for news bulletins), usually have subtitles in English and the original language during the prime time hours. Dual sound programs, such as Korean and Japanese dramas, offer sound in the original languages with subtitles, Mandarin-dubbed and subtitled, or English-dubbed. The deliberate policy to encourage Mandarin among citizens made it required by law for programs in other Chinese dialects (Hokkien, Cantonese and Teochew) to be dubbed into Mandarin, with the exception of traditional operas. Cantonese and Hokkien shows from Hong Kong and Taiwan, respectively, are available on VCD and DVD. In a recent development, news bulletins are subtitled.
In Iran, foreign films and television programs are dubbed in Persian. Dubbing began in 1946 with the advent of movies and cinemas in the country. Since then, foreign movies have always been dubbed for the cinema and TV. Using various voice actors and adding local hints and witticisms to the original contents, dubbing played a major role in attracting people to the cinemas and developing an interest in other cultures. The dubbing art in Iran reached its apex during the 1960s and 1970s with the inflow of American, European and Hindi movies.
The most famous musicals of the time, such as "My Fair Lady" and "The Sound of Music", were translated, adjusted and performed in Persian by the voice artists. Since the 1990s, for political reasons and under pressure from the state, the dubbing industry has declined, with movies dubbed only for the state TV channels. During recent years, DVDs with Persian subtitles have found a market among viewers for the same reason, but most people still prefer the Persian-speaking dubbed versions. Recently, privately operated companies started dubbing TV series by hiring famous dubbers.
A list of Persian voice actors and the actors they are associated with is given here.
In Georgia, original soundtracks are kept in films and TV series, but with voice-over translation. There are exceptions, such as some children's cartoons.
In Azerbaijan, dubbing is rare, as most Azerbaijani channels such as ARB Günəş air voice-overs or Azerbaijan originals.
See below.
In Algeria, Morocco, and Tunisia, most foreign movies (especially Hollywood productions) are shown dubbed in French. These movies are usually imported directly from French film distributors. The choice of movies dubbed into French can be explained by the widespread use of the French language. Another important factor is that local theaters and private media companies do not dub in local languages in order to avoid high costs, but also because of the lack of both expertise and demand.
Beginning in the 1980s, dubbed series and movies for children in Modern Standard Arabic became a popular choice among most TV channels, cinemas and VHS/DVD stores. However, dubbed films are still imported, and dubbing is performed in the Levant countries with a strong tradition of dubbing (mainly Syria, Lebanon and Jordan).
In the Arabic-speaking countries, some children's shows (mainly cartoons) are dubbed in Arabic; otherwise, Arabic subtitles are used. The main exceptions are telenovelas, dubbed in Standard Arabic or in dialects, and Turkish series, most notably Gümüş, dubbed in Syrian Arabic.
An example of Arabic voice actors that dub for certain performers is Safi Mohammed for Elijah Wood.
In Tunisia, the Tunisia National Television (TNT), the public broadcaster of Tunisia, is not allowed to show any content in any language other than Arabic, which forces it to broadcast only dubbed content (this restriction was recently removed for commercials). During the 1970s and 1980s, TNT (known as ERTT at the time) started dubbing famous cartoons in Tunisian and Standard Arabic. However, in the private sector, television channels are not subject to the language rule.
In South Africa, many television programs were dubbed in Afrikaans, with the original soundtrack (usually in English, but sometimes Dutch or German) "simulcast" in FM stereo on Radio 2000. These included US series such as "The Six Million Dollar Man" ("Steve Austin: Die Man van Staal"), "Miami Vice" ("Misdaad in Miami"), "Beverly Hills 90210", and the German detective series "Derrick".
As a result of the boycott by the British actors' union Equity, which banned the sale of most British television programs, the puppet series "The Adventures of Rupert Bear" was dubbed into South African English, as the original voices had been recorded by Equity voice artists.
This practice has declined as a result of the reduction of airtime for the language on SABC TV, and the increase of locally produced material in Afrikaans on other channels like KykNet. Similarly, many programs, such as "The Jeffersons", were dubbed into Zulu, but this has also declined as local drama production has increased. However, some animated films, such as "Maya the Bee", have been dubbed in both Afrikaans and Zulu by local artists. In 2018, eExtra began showing the Turkish drama series "Paramparça" dubbed in Afrikaans as "Gebroke Harte" or "Broken Hearts", the first foreign drama to be dubbed in the language for twenty years.
Uganda's own film industry is fairly small, and foreign movies are commonly watched. The English soundtrack is often accompanied by a Luganda translation and comments, provided by a Ugandan "video jockey" (VJ). The VJ's interpretation and narration may be available in recorded form or live.
In common with other English-speaking countries, there has traditionally been little dubbing in Australia, with foreign language television programs and films being shown (usually on SBS) with subtitles or English dubs produced in other countries. This has also been the case in New Zealand, but the Māori Television Service, launched in 2004, has dubbed animated films into Māori. However, some TV commercials from foreign countries are dubbed, even if the original commercial came from another English-speaking country. Moreover, the off-screen narration portions of some non-fiction programs originating from the UK or North America are re-dubbed by Australian voice talents to relay information in expressions that Australians can understand more easily.
Subtitles can be used instead of dubbing, as different countries have different traditions regarding the choice between dubbing and subtitling. On DVDs with higher translation budgets, the option for both types will often be provided to account for individual preferences; purists often demand subtitles. For small markets (small language area or films for a select audience), subtitling is more suitable, because it is cheaper. In the case of films for small children who cannot yet read, or do not read fast enough, dubbing is necessary.
In most English-speaking countries, dubbing is comparatively rare. In Israel, some programs need to be comprehensible to speakers of both Russian and Hebrew. This cannot be accomplished with dubbing, so subtitling is much more commonplace—sometimes even with subtitles in multiple languages, with the soundtrack remaining in the original language, usually English. The same applies to certain television shows in Finland, where Swedish and Finnish are both official languages.
In the Netherlands, Flanders, Nordic countries, Estonia and Portugal, films and television programs are shown in the original language (usually English) with subtitles, and only cartoons and children's movies and programs are dubbed, such as the "Harry Potter" series, "Finding Nemo", "Shrek", "Charlie and the Chocolate Factory" and others. Cinemas usually show both a dubbed version and one with subtitles for this kind of movie, with the subtitled version shown later in the evening.
In Portugal, one terrestrial channel, TVI, dubbed U.S. series like "Dawson's Creek" into Portuguese. RTP also transmitted "Friends" in a dubbed version, but it was poorly received and later re-aired in a subtitled version. Cartoons, on the other hand, are usually dubbed, sometimes by well-known actors, even on TV. Animated movies are usually released to the cinemas in both subtitled and dubbed versions.
In Argentina and Venezuela, terrestrial channels air films and TV series in a dubbed version, as demanded by law. However, those same series can be seen on cable channels at more accessible time-slots in their subtitled version and usually before they are shown on open TV. In contrast, the series "The Simpsons" is aired in its Mexican Spanish-dubbed version both on terrestrial television and on the cable station Fox, which broadcasts the series for the area. Although the first season of the series appeared with subtitles, this was not continued for the following seasons.
In Bulgaria, television series are dubbed, but most television channels use subtitles for action and drama movies. AXN uses subtitles for its series, but as of 2008 emphasizes dubbing. Only Diema channels dub all programs. Movies in theaters, with the exception of films for children, use dubbing and subtitles. Dubbing of television programs is usually done using voiceovers, usually performed by professional actors, who try to give each character a different voice by using appropriate intonations. Dubbing with synchronized voices is rarely used, mostly for animated films. "Mrs. Doubtfire" is a rare example of a feature film dubbed this way on BNT Channel 1, though a subtitled version is currently shown on other channels.
Walt Disney Television's animated series (such as "DuckTales", "Darkwing Duck", and "Timon & Pumbaa") were only aired with synchronized Bulgarian voices on BNT Channel 1 until 2005, but then the Disney shows were canceled. When airing of Disney series resumed on Nova Television and Jetix in 2008, voiceovers were used, but Disney animated-movie translations still use synchronized voices. Voiceover dubbing is not used in theatrical releases. The Bulgarian film industry law requires all children's films to be dubbed, not subtitled. Nova Television dubbed and aired the "Pokémon" anime with synchronized voices. Now, the show is airing on Disney Channel, also in a synchronized form.
Netflix provides both subtitles and dubbed audio with its foreign-language shows, including Brazil's dystopian "3%" and the German thriller "Dark". Viewer testing indicates that its audience is more likely to finish watching a series if they choose to view it with dubbed audio rather than translated subtitles. Netflix now streams its foreign-language content with dubbed audio as default in an effort to increase viewer retention.
Dubbing is also used in applications and genres other than traditional film, including video games, television, and pornographic films.
Many video games originally produced in North America, Japan, and PAL countries are dubbed into foreign languages for release in areas such as Europe and Australia, especially for video games that place a heavy emphasis on dialogue. Because characters' mouth movements can be part of the game's code, lip sync is sometimes achieved by re-coding the mouth movements to match the dialogue in the new language. The Source engine automatically generates lip-sync data, making it easier for games to be localized.
To achieve synchronization when animations are intended only for the source language, localized content is mostly recorded using techniques borrowed from movie dubbing (such as rythmo band) or, when images are not available, localized dubbing is done using the source audio as a reference. Sound-synch is a method in which localized audio is recorded to match the length and internal pauses of the source content.
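The length-and-pause matching that sound-synch relies on can be sketched as a simple tolerance check. This is a minimal illustration, not any real dubbing toolchain's API; the function name, the representation of phrases as (start, end) timestamps, and the 0.25-second tolerance are all assumptions.

```python
def sound_synch_ok(source_phrases, dub_phrases, tolerance=0.25):
    """Check whether a localized take matches the source take.

    Each phrase list holds (start, end) times in seconds; the internal
    pauses are implied by the gaps between consecutive phrases.
    Hypothetical sketch: names and tolerance are illustrative only.
    """
    if len(source_phrases) != len(dub_phrases):
        return False  # a phrase was dropped or added
    for (s_start, s_end), (d_start, d_end) in zip(source_phrases, dub_phrases):
        # phrase length must match, so the dub fits inside the scene
        if abs((s_end - s_start) - (d_end - d_start)) > tolerance:
            return False
        # phrase position must match, so the pauses stay aligned
        if abs(s_start - d_start) > tolerance:
            return False
    return True
```

A localized take whose phrases drift from the source timings by more than the tolerance, or that drops a phrase entirely, would fail the check and be re-recorded.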
For the European version of a video game, the on-screen text of the game is available in various languages and, in many cases, the dialogue is dubbed into each respective language, as well.
The North American version of any game is always available in English, with translated text and dubbed dialogue, if necessary, in other languages, especially if the North American version of the game contains the same data as the European version. Several Japanese games, such as those in the "Dynasty Warriors", and "Soul" series, are released with both the original Japanese audio and the English dub included.
Dubbing is occasionally used on network television broadcasts of films that contain dialogue that the network executives or censors have decided to replace. This is usually done to remove profanity. In most cases, the original actor does not perform this duty, but an actor with a similar voice reads the changes. The results are sometimes seamless, but, in many cases, the voice of the replacement actor sounds nothing like the original performer, which becomes particularly noticeable when extensive dialogue must be replaced. Another artifact that is often easy to notice is the sudden absence of background sounds in the movie during the dubbed dialogue. Among the films considered notorious for using substitute actors that sound very different from their theatrical counterparts are the "Smokey and the Bandit" and the "Die Hard" film series, as shown on broadcasters such as TBS. In the case of "Smokey and the Bandit", extensive dubbing was done for the first network airing on ABC Television in 1978, especially for Jackie Gleason's character, Buford T. Justice. The dubbing of his phrase "sombitch" (son of a bitch) became "scum bum," which became a catchphrase of the time.
Dubbing is commonly used in science fiction television, as well. Sound generated by effects equipment such as animatronic puppets or by actors' movements on elaborate multi-level plywood sets (for example, starship bridges or other command centers) will quite often make the original character dialogue unusable. "Stargate" and "Farscape" are two prime examples where ADR is used heavily to produce usable audio.
Since some anime series contain profanity, the studios recording the English dubs often re-record certain lines if a series or movie is going to be broadcast on Cartoon Network, removing references to death and hell as well. Some companies will offer both an edited and an uncut version of the series on DVD, so that there is an edited script available in case the series is broadcast. Other companies also edit the full-length version of a series, meaning that even on the uncut DVD characters say things like "Blast!" and "Darn!" in place of the original dialogue's profanity. Bandai Entertainment's English dub of "G Gundam" is infamous for this, among many other things, with such lines as "Bartender, more milk".
Dubbing has also been used for comedic purposes, replacing lines of dialogue to create comedies from footage that was originally another genre. Examples include the Australian shows "The Olden Days" and "Bargearse", re-dubbed from 1970s Australian drama and action series, respectively, the Irish show "Soupy Norman", re-dubbed from "", a Polish soap opera, and "Most Extreme Elimination Challenge", a comedic dub of the Japanese show "Takeshi's Castle".
Dubbing into a foreign language does not always entail the deletion of the original language. In some countries, a performer may read the translated dialogue as a voice-over. This often occurs in Russia and Poland, where "lektories" or "lektors" read the translated dialogue into Russian and Polish. In Poland, a single announcer reads all of the text. However, this is done almost exclusively for the television and home video markets, while theatrical releases are usually subtitled. Recently, however, the number of high-quality, fully dubbed films has increased, especially for children's movies. If a quality dubbed version exists for a film, it is shown in theaters. However, some films, such as "Harry Potter" or "Star Wars", are shown in both dubbed and subtitled versions, varying with the time of the show. Such films are also shown on TV (although some channels drop them and do standard one-narrator translation) and VHS/DVD.
In Russia, the reading of all lines by a single person is referred to as a Gavrilov translation, and is generally found only in illegal copies of films and on cable television. Professional copies always include at least two actors of opposite gender translating the dialogue. Some titles in Poland have been dubbed this way, too, but this method lacks public appeal, so it is very rare now.
On special occasions, such as film festivals, live interpreting is often done by professionals.
As budgets for pornographic films are often small compared to films made by major studios, and there is a need to avoid interrupting filming, it is common for sex scenes to be over-dubbed. The audio for such over-dubbing is generally referred to as "the Ms and Gs", or "the moans and groans."
In the case of languages with large communities (such as English, Chinese, Portuguese, German, Spanish, or French), a single translation may sound foreign to native speakers in a given region. Therefore, a film may be translated into a certain variety of a certain language. For example, the animated movie "The Incredibles" was translated to European Spanish, Mexican Spanish, Neutral Spanish (which is Mexican Spanish but avoids colloquialisms), and Rioplatense Spanish (although people from Chile and Uruguay noticed a strong "porteño" accent from most of the characters of the Rioplatense Spanish translation). In Spanish-speaking regions, most media is dubbed twice: into European Spanish and Neutral Spanish.
Another example is the French dubbing of "The Simpsons", which has two entirely different versions for Quebec and for France. The humor is very different for each audience (see Non-English versions of "The Simpsons"). Audiences in Quebec are generally critical of France's dubbing of "The Simpsons", which they often do not find amusing.
Quebec-French dubbing of films is generally made in accent-free Standard French, but may sound peculiar to audiences in France because of the persistence of some regionally-neutral expressions and because Quebec-French performers pronounce Anglo-Saxon names with an American accent, unlike French performers. Occasionally, budget constraints cause American direct-to-video films, such as the 1995 film "When the Bullet Hits the Bone", to be released in France with a Quebec-French dubbing, sometimes resulting in what some members of French audiences perceive as unintentional humor.
Portugal and Brazil also use different versions of dubbed films and series. Because dubbing has never been very popular in Portugal, for decades, children's films were distributed using the higher-quality Brazilian dub (unlike children's TV series, which are traditionally dubbed in European Portuguese). Only in the 1990s did dubbing begin to gain popularity in Portugal. "The Lion King" became the first Disney feature film to be completely dubbed into European Portuguese, and subsequently all major animation films gained European-Portuguese versions. In recent DVD releases, most Brazilian-Portuguese-dubbed classics were released with new European-Portuguese dubs, eliminating the predominance of Brazilian-Portuguese dubs in Portugal.
Similarly, in the Dutch-speaking region of Flanders, Belgium, cartoons are often dubbed locally by Flemish artists rather than using soundtracks produced in the Netherlands.
The German-speaking region, which includes Germany, Austria, part of Switzerland, and Liechtenstein, shares a common German-dubbed version of films and shows. Although there are some differences among the three major German varieties, all films, shows, and series are dubbed into a single Standard German version that avoids regional variations for the German-speaking audience. Most voice actors are German or Austrian. Switzerland, which has four official languages (German, French, Italian, and Romansh), generally uses dubbed versions made in each respective country (except for Romansh). Liechtenstein uses German-dubbed versions only.
Sometimes, films are also dubbed into several German dialects (Berlinerisch, Kölsch, Saxonian, Austro-Bavarian or Swiss German), especially animated films and Disney films. These dialect versions are included as an additional "special feature" to entice the audience into buying the release. Popular animated films dubbed into a German variety include the "Asterix" films (in addition to the Standard German version, every film has a particular dialect version), "The Little Mermaid", "Shrek 2", "Cars" (+ Austrian German) and "Up" (+ Austrian German).
Some live-action films and TV series have an additional German-variety dubbing: "Babe" and its sequel, "" (German German, Austrian German, Swiss German); "Rehearsal for Murder" and "Framed" (+ Austrian German); and "The Munsters", "Serpico", "Rumpole" (+ Austrian German), and "The Thorn Birds" (only Austrian German dubbing).
Before German reunification, East Germany also produced its own German-dubbed versions. For example, the "Olsen Gang" films and the Hungarian animated series "The Mézga Family" were dubbed separately in West Germany and East Germany.
Two Serbo-Croatian dubbings are usually produced: a Serbian version for Serbia, Montenegro, and Bosnia and Herzegovina, and a Croatian version for Croatia and parts of Bosnia and Herzegovina.
While the voice actors involved usually bear the brunt of criticisms towards poor dubbing, other factors may include inaccurate script translation and poor audio mixing. Dialogue typically contains speech patterns and sentence structure that are natural to the original language but would appear awkward if translated literally. English dubs of Japanese animation, for example, must rewrite the dialogue so that it flows smoothly while following the natural pattern of English speech. On some occasions, voice actors record their dialogue individually instead of with the rest of the cast, and their performances can lack the dynamics gained from performing as a group.
Many martial arts movies from Hong Kong that were imported under the unofficial banner Kung Fu Theater were notorious for seemingly careless dubbing that included poor lip sync and awkward dialogue. Since the results were frequently unintentionally humorous, this style of dubbing has become one of the hallmarks that endear these films to fans of 1980s culture. | https://en.wikipedia.org/wiki?curid=8860 |
Delaunay triangulation
In mathematics and computational geometry, a Delaunay triangulation (also known as a Delone triangulation) for a given set P of discrete points in a plane is a triangulation DT(P) such that no point in P is inside the circumcircle of any triangle in DT(P). Delaunay triangulations maximize the minimum angle of all the angles of the triangles in the triangulation; they tend to avoid sliver triangles. The triangulation is named after Boris Delaunay for his work on this topic from 1934.
For a set of points on the same line there is no Delaunay triangulation (the notion of triangulation is degenerate for this case). For four or more points on the same circle (e.g., the vertices of a rectangle) the Delaunay triangulation is not unique: each of the two possible triangulations that split the quadrangle into two triangles satisfies the "Delaunay condition", i.e., the requirement that the circumcircles of all triangles have empty interiors.
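The empty-circumcircle condition can be turned directly into a deliberately naive algorithm: test every triple of points and keep exactly the triangles whose circumcircles contain no other input point. The following Python sketch is illustrative only (all names are invented for the example) and assumes the points are in general position, so degenerate cases like the cocircular rectangle above are excluded:

```python
from itertools import combinations

def in_circle(a, b, c, d):
    """Returns a value that is positive exactly when d lies strictly
    inside the circumcircle of the counterclockwise triangle (a, b, c)."""
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    return ((ax * ax + ay * ay) * (bx * cy - cx * by)
          - (bx * bx + by * by) * (ax * cy - cx * ay)
          + (cx * cx + cy * cy) * (ax * by - bx * ay))

def brute_force_delaunay(points):
    """O(n^4) Delaunay triangulation straight from the definition:
    keep every triangle whose circumcircle contains no other point.
    Assumes general position (no three collinear, no four cocircular)."""
    tris = []
    for a, b, c in combinations(points, 3):
        orient = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        if orient == 0:          # skip degenerate (collinear) triples
            continue
        # multiplying by the orientation makes the sign test
        # independent of the order in which the triple was generated
        if all(orient * in_circle(a, b, c, d) <= 0
               for d in points if d not in (a, b, c)):
            tris.append((a, b, c))
    return tris
```

Practical implementations replace this quartic scan with the incremental, flip-based, or divide-and-conquer algorithms described later, but the brute-force version makes the "Delaunay condition" itself explicit.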
By considering circumscribed spheres, the notion of Delaunay triangulation extends to three and higher dimensions. Generalizations are possible to metrics other than Euclidean distance. However, in these cases a Delaunay triangulation is not guaranteed to exist or be unique.
The Delaunay triangulation of a discrete point set P in general position corresponds to the dual graph of the Voronoi diagram for P.
The circumcenters of Delaunay triangles are the vertices of the Voronoi diagram.
In the 2D case, the Voronoi vertices are connected via edges that can be derived from the adjacency relationships of the Delaunay triangles: if two triangles share an edge in the Delaunay triangulation, their circumcenters are connected by an edge in the Voronoi tessellation.
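As an illustration of this duality, the circumcenter of each Delaunay triangle can be computed with the standard closed-form formula; the function name below is invented for the example:

```python
def circumcenter(a, b, c):
    """Circumcenter of the triangle (a, b, c); in the dual view this
    is the Voronoi vertex associated with the Delaunay triangle."""
    d = 2 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    ux = ((a[0]**2 + a[1]**2) * (b[1] - c[1])
        + (b[0]**2 + b[1]**2) * (c[1] - a[1])
        + (c[0]**2 + c[1]**2) * (a[1] - b[1])) / d
    uy = ((a[0]**2 + a[1]**2) * (c[0] - b[0])
        + (b[0]**2 + b[1]**2) * (a[0] - c[0])
        + (c[0]**2 + c[1]**2) * (b[0] - a[0])) / d
    return (ux, uy)
```

For two Delaunay triangles that share an edge, connecting the two circumcenters returned by this function yields the dual Voronoi edge.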
Special cases where this relationship does not hold, or is ambiguous, include cases like:
For a set P of points in the ("d"-dimensional) Euclidean space, a Delaunay triangulation is a triangulation DT(P) such that no point in P is inside the circum-hypersphere of any "d"-simplex in DT(P). It is known that there exists a unique Delaunay triangulation for P if P is a set of points in "general position"; that is, the affine hull of P is "d"-dimensional and no set of "d" + 2 points in P lie on the boundary of a ball whose interior does not intersect P.
The problem of finding the Delaunay triangulation of a set of points in "d"-dimensional Euclidean space can be converted to the problem of finding the convex hull of a set of points in ("d" + 1)-dimensional space. This may be done by giving each point "p" an extra coordinate equal to |"p"|2, thus turning it into a hyper-paraboloid (this is termed "lifting"); taking the bottom side of the convex hull (as the top end-cap faces upwards away from the origin, and must be discarded); and mapping back to "d"-dimensional space by deleting the last coordinate. As the convex hull is unique, so is the triangulation, assuming all facets of the convex hull are simplices. Nonsimplicial facets only occur when "d" + 2 of the original points lie on the same "d"-hypersphere, i.e., the points are not in general position.
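The lifting map can be checked on a small example without computing a full convex hull: a fourth point lies inside the circumcircle of a triangle exactly when its lifted image falls below the plane through the three lifted vertices (i.e., on the lower-hull side). The sketch below is illustrative; the function names are invented for it:

```python
def lift(p):
    """Lift a 2D point onto the paraboloid z = x^2 + y^2."""
    x, y = p
    return (x, y, x * x + y * y)

def inside_circumcircle_via_lifting(a, b, c, d):
    """d is inside the circumcircle of (a, b, c) exactly when the
    lifted d falls below the plane through the lifted a, b, c."""
    A, B, C, D = lift(a), lift(b), lift(c), lift(d)
    u = (B[0] - A[0], B[1] - A[1], B[2] - A[2])
    v = (C[0] - A[0], C[1] - A[1], C[2] - A[2])
    # normal of the plane through the lifted triangle
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    # height of that plane above (D_x, D_y); n[2] != 0 whenever
    # a, b, c are not collinear
    z_plane = A[2] - (n[0] * (D[0] - A[0]) + n[1] * (D[1] - A[1])) / n[2]
    return D[2] < z_plane
```

This equivalence is exactly why taking the lower convex hull of the lifted points and projecting it back down recovers the Delaunay triangulation.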
Let "n" be the number of points and "d" the number of dimensions.
From the above properties an important feature arises: Looking at two triangles ABD and BCD with the common edge BD (see figures), if the sum of the angles α and γ is less than or equal to 180°, the triangles meet the Delaunay condition.
This is an important property because it allows the use of a "flipping" technique. If two triangles do not meet the Delaunay condition, switching the common edge BD for the common edge AC produces two triangles that do meet the Delaunay condition:
This operation is called a "flip", and can be generalised to three and higher dimensions.
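In two dimensions, the flip decision for a pair of triangles sharing an edge follows directly from the angle criterion above. The sketch below is illustrative (names invented), using the convention that triangles (a, b, d) and (b, c, d) share edge b-d:

```python
import math

def should_flip(a, b, c, d):
    """Decide whether the shared edge b-d of triangles (a, b, d) and
    (b, c, d) violates the Delaunay condition. Returns True when the
    opposite angles at a and c sum to more than 180 degrees, in which
    case edge b-d should be replaced by edge a-c."""
    def angle(p, q, r):
        # interior angle at vertex q of triangle (p, q, r)
        v1 = (p[0] - q[0], p[1] - q[1])
        v2 = (r[0] - q[0], r[1] - q[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        return math.acos(max(-1.0, min(1.0, dot / norm)))
    return angle(b, a, d) + angle(b, c, d) > math.pi
```

In practice the same decision is usually made with the in-circle determinant described below, which avoids trigonometric functions entirely.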
Many algorithms for computing Delaunay triangulations rely on fast operations for detecting when a point is within a triangle's circumcircle and an efficient data structure for storing triangles and edges. In two dimensions, one way to detect if point "D" lies in the circumcircle of "A", "B", "C" is to evaluate the determinant:
When "A", "B" and "C" are sorted in a counterclockwise order, this determinant is positive if and only if "D" lies inside the circumcircle.
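Written out, the predicate looks as follows; this is a common formulation of the in-circle determinant test, with illustrative names:

```python
def in_circumcircle(a, b, c, d):
    """True when point d lies strictly inside the circumcircle of
    triangle (a, b, c), whose vertices must be given in
    counterclockwise order. Points are (x, y) tuples."""
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
         - (bx * bx + by * by) * (ax * cy - cx * ay)
         + (cx * cx + cy * cy) * (ax * by - bx * ay))
    return det > 0
```

With floating-point coordinates this determinant is sensitive to round-off when d lies near the circle boundary, which is why robust implementations evaluate it with exact or adaptive-precision arithmetic.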
As mentioned above, if a triangle is non-Delaunay, we can flip one of its edges. This leads to a straightforward algorithm: construct any triangulation of the points, and then flip edges until no triangle is non-Delaunay. Unfortunately, this can take Ω("n"2) edge flips. While this algorithm can be generalised to three and higher dimensions, its convergence is not guaranteed in these cases, as it depends on the connectedness of the underlying flip graph: this graph is connected for two-dimensional sets of points, but may be disconnected in higher dimensions.
The most straightforward way of efficiently computing the Delaunay triangulation is to repeatedly add one vertex at a time, retriangulating the affected parts of the graph. When a vertex "v" is added, we split in three the triangle that contains "v", then we apply the flip algorithm. Done naïvely, this will take O("n") time per insertion: we search through all the triangles to find the one that contains "v", then we potentially flip away every triangle. The overall runtime is then O("n"2).
If we insert vertices in random order, it turns out (by a somewhat intricate proof) that each insertion will flip, on average, only O(1) triangles – although sometimes it will flip many more.
This still leaves the point location time to improve. We can store the history of the splits and flips performed: each triangle stores a pointer to the two or three triangles that replaced it. To find the triangle that contains "v", we start at a root triangle and follow the pointers to triangles containing "v", until we find a triangle that has not yet been replaced. On average, this will take O(log "n") time. Over all vertices, then, this takes O("n" log "n") time. While the technique extends to higher dimensions (as proved by Edelsbrunner and Shah), the runtime can be exponential in the dimension even if the final Delaunay triangulation is small.
The Bowyer–Watson algorithm provides another approach for incremental construction. It gives an alternative to edge flipping for computing the Delaunay triangles containing a newly inserted vertex.
Unfortunately the flipping-based algorithms are generally hard to parallelize, since the insertion of certain points (e.g. the center point of a wagon wheel) can lead to up to O("n") consecutive flips. Blelloch et al. proposed another version of the incremental algorithm, based on rip-and-tent, which is practical and highly parallel with polylogarithmic span.
A divide and conquer algorithm for triangulations in two dimensions was developed by Lee and Schachter and improved by Guibas and Stolfi and later by Dwyer. In this algorithm, one recursively draws a line to split the vertices into two sets. The Delaunay triangulation is computed for each set, and then the two sets are merged along the splitting line. Using some clever tricks, the merge operation can be done in time O("n"), so the total running time is O("n" log "n").
For certain types of point sets, such as a uniform random distribution, by intelligently picking the splitting lines the expected time can be reduced to O("n" log log "n") while still maintaining worst-case performance.
A divide and conquer paradigm for performing a triangulation in "d" dimensions is presented in "DeWall: A fast divide and conquer Delaunay triangulation algorithm in E^d" by P. Cignoni, C. Montani, R. Scopigno.
The divide and conquer algorithm has been shown to be the fastest DT generation technique.
Sweephull is a hybrid technique for 2D Delaunay triangulation that uses a radially propagating sweep-hull and a flipping algorithm. The sweep-hull is created sequentially by iterating over a radially sorted set of 2D points and connecting triangles to the visible part of the convex hull, which gives a non-overlapping triangulation. One can build a convex hull in this manner so long as the order of points guarantees that no point falls within a triangle. Radial sorting should minimize the flipping needed, since the initial triangulation is already close to Delaunay. This is then paired with a final iterative triangle-flipping step.
The Euclidean minimum spanning tree of a set of points is a subset of the Delaunay triangulation of the same points, and this can be exploited to compute it efficiently.
For modelling terrain or other objects given a set of sample points, the Delaunay triangulation gives a nice set of triangles to use as polygons in the model. In particular, the Delaunay triangulation avoids narrow triangles (as they have large circumcircles compared to their area). See triangulated irregular network.
Delaunay triangulations can be used to determine the density or intensity of a point sampling by means of the Delaunay tessellation field estimator (DTFE).
Delaunay triangulations are often used to build meshes for space-discretised solvers such as the finite element method and the finite volume method of physics simulation, because of the angle guarantee and because fast triangulation algorithms have been developed. Typically, the domain to be meshed is specified as a coarse simplicial complex; for the mesh to be numerically stable, it must be refined, for instance by using Ruppert's algorithm.
The increasing popularity of finite element method and boundary element method techniques increases the incentive to improve automatic meshing algorithms. However, all of these algorithms can create distorted and even unusable grid elements. Fortunately, several techniques exist which can take an existing mesh and improve its quality. For example, smoothing (also referred to as mesh refinement) is one such method, which repositions nodal locations so as to minimize element distortion. The stretched grid method allows the generation of pseudo-regular meshes that meet the Delaunay criteria easily and quickly in a one-step solution.
Constrained Delaunay triangulation has found applications in path planning in automated driving | https://en.wikipedia.org/wiki?curid=8864 |
Defendant
A defendant is a person accused of committing a crime in criminal prosecution or a person against whom some type of civil relief is being sought in a civil case.
Terminology varies from one jurisdiction to another. For example, Scots law does not use the term "defendant"; the terms "accused" or "panel" are used instead in criminal proceedings, and "defender" in civil proceedings.
In a criminal trial, a defendant is a person accused (charged) of committing an offense (a crime; an act defined as punishable under criminal law). The other party to a criminal trial is usually a public prosecutor, but in some jurisdictions, private prosecutions are allowed.
Criminal defendants are often taken into custody by police and brought before a court under an arrest warrant. Criminal defendants are usually obliged to post bail before being released from custody. For serious cases, such as murder, bail is often refused. Defendants must be present at every stage of the proceedings against them. (There is an exception for very minor cases such as traffic offenses in jurisdictions which treat them as crimes.)
If more than one person is accused, the parties may be referred to as "co-defendants" or "co-conspirators" in British and common-law courts.
In some jurisdictions, vulnerable defendants may be able to access the services of a non-registered intermediary to assist with communication at court.
In a civil lawsuit, a defendant (or a respondent) is also the accused party, although not of an offense, but of a civil wrong (a tort or a breach of contract, for instance). The person who starts the civil action through filing a complaint is referred to as the plaintiff.
Defendants in civil actions usually make their first court appearance voluntarily in response to a summons. Historically, civil defendants could be taken into custody under a writ of "capias ad respondendum". Modern-day civil defendants are usually able to avoid most (if not all) court appearances if represented by a lawyer.
Most often and familiarly, defendants are persons: either natural persons (actual human beings) or juridical persons ("persona ficta") under the legal fiction of treating organizations as persons. But a defendant may be an object, in which case the object itself is the direct subject of the action. When a court has jurisdiction over an object, it is said to have jurisdiction "in rem". An example of an "in rem" case is "United States v. Forty Barrels and Twenty Kegs of Coca-Cola" (1916), where the defendant was not The Coca-Cola Company itself, but rather "Forty Barrels and Twenty Kegs of Coca-Cola". In current US legal practice, "in rem" suits are primarily asset forfeiture cases, based on drug laws, as in "USA v. $124,700" (2006).
Defendants can set up an account to pay for litigation costs and legal expenses. These legal defense funds can have large membership counts where members contribute to the fund. The fund can be public or private and is set up for individuals, organizations, or a particular purpose. These funds are often used by public officials, civil-rights organizations, and public-interest organizations.
Historically, "defendant" was a legal term for a person prosecuted for misdemeanour. It was not applicable to a person prosecuted for felony. | https://en.wikipedia.org/wiki?curid=8865 |
Dan Simmons
Dan Simmons (born April 4, 1948) is an American science fiction and horror writer. He is the author of the Hyperion Cantos and the Ilium/Olympos cycles, among other works which span the science fiction, horror, and fantasy genres, sometimes within a single novel. Simmons' genre-intermingling "Song of Kali" (1985) won the World Fantasy Award. He also writes mysteries and thrillers, some of which feature the continuing character Joe Kurtz.
Born in Peoria, Illinois, Simmons received a B.A. in English from Wabash College in 1970 and, in 1971, a Master's in Education from Washington University in St. Louis.
He soon started writing short stories, although his career did not take off until 1982, when, through Harlan Ellison's help, his short story "The River Styx Runs Upstream" was published and awarded first prize in a "Twilight Zone Magazine" story competition, and he was taken on as a client by Ellison's agent, Richard Curtis. Simmons' first novel, "Song of Kali", was released in 1985.
He worked in elementary education until 1989.
"Summer of Night" (1991) recounts the childhood of a group of pre-teens who band together in the 1960s, to defeat a centuries-old evil that terrorizes their hometown of Elm Haven, Illinois. The novel, which was praised by Stephen King in a cover blurb, is similar to King's "It" (1986) in its focus on small town life, the corruption of innocence, the return of an ancient evil, and the responsibility for others that emerges with the transition from youth to adulthood.
In the sequel to "Summer of Night", "A Winter Haunting" (2002), Dale Stewart (one of the first book's protagonists and now an adult), revisits his boyhood home to come to grips with mysteries that have disrupted his adult life.
Between the publication of "Summer of Night" (1991) and "A Winter Haunting" (2002), several additional characters from "Summer of Night" appeared in: "Children of the Night" (1992), a loose sequel to "Summer of Night", which features Mike O'Rourke, now much older and a Roman Catholic priest, who is sent on a mission to investigate bizarre events in a European city; "Fires of Eden" (1994), in which the adult Cordie Cooke appears; and "Darwin's Blade" (2000), a thriller in which Dale's younger brother, Lawrence Stewart, appears as a minor character.
After "Summer of Night", Simmons focused on writing science fiction until the 2007 work of historical fiction and horror, "The Terror". His 2009 book "Drood" is based on the last years of Charles Dickens' life leading up to the writing of "The Mystery of Edwin Drood", which Dickens had partially completed at the time of his death.
"The Terror" (2007) crosses the bridge between horror and historical fiction. It is a fictionalized account of Sir John Franklin and his expedition to find the Northwest Passage. The two ships, and , become icebound the first winter, and the captains and crew struggle to survive while being stalked across an Arctic landscape by a monster.
"The Abominable" (2013) recounts a mid-1920s attempt on Mount Everest by five climbers—two English, one French, one Sherpa, and one American (the narrator)—to recover the body of one of the English characters' cousin.
Many of Simmons' works have strong ties with classic literature. For example:
Short stories:
Collections:
Uncollected short stories:
In January 2004, it was announced that the screenplay he wrote for his novels "Ilium" and "Olympos" would be made into a film by Digital Domain and Barnet Bain Films, with Simmons acting as executive producer. "Ilium" is described as an "epic tale that spans 5,000 years and sweeps across the entire solar system, including themes and characters from Homer's "The Iliad" and Shakespeare's "The Tempest"."
In 2008, it was announced that Guillermo del Toro would direct a film adaptation of "Drood" for Universal Pictures. As of December 2017, the project was still listed as "in development."
In 2009, Scott Derrickson was set to direct ""Hyperion Cantos"" for Warner Bros. and Graham King, with Trevor Sands penning the script to blend the first two cantos "Hyperion" and "The Fall of Hyperion" into one film. In 2011, actor Bradley Cooper expressed interest in taking over the adaptation. In 2015, it was announced that TV channel Syfy will produce a mini-series based on the Hyperion Cantos with the involvement of Cooper and King. As of May 2017, the project was still "in development" at Syfy.
"The Terror" (2007) was adapted as an AMC TV 10 episode-mini-series in 2018 and received generally positive reviews upon release.
Bram Stoker Award
British Fantasy Society Award
British Science Fiction Award
Hugo Award
International Horror Guild Award
Locus Award
Nocte Award
Seiun Award
World Fantasy Award
Dan Simmons has been nominated on numerous occasions in a range of categories for his fiction, including the Arthur C. Clarke Award, Bram Stoker Award, British Fantasy Society Award, Hugo Award, Nebula Award, and World Fantasy Award. | https://en.wikipedia.org/wiki?curid=8875 |
Denis Leary
Denis Colin Leary (born August 18, 1957) is an American actor, comedian, writer and producer. Leary was the star and co-creator of "Rescue Me". He has had starring roles in many films, including those of Captain George Stacy in Marc Webb's "The Amazing Spider-Man" and Cleveland Browns head coach Vince Penn in Ivan Reitman's "Draft Day". Leary also voiced the character of Francis in "A Bug's Life" and that of Diego in the "Ice Age" franchise.
He and his wife, Ann Leary, were the inspiration for Episode 4 of Amazon's series "Modern Love", "Rallying to Keep the Game Alive".
From 2015 to 2016, Leary wrote and starred in the comedy series "Sex & Drugs & Rock & Roll" on FX.
Denis Colin Leary was born on August 18, 1957, in Worcester, Massachusetts, the son of Catholic immigrant parents from County Kerry, Ireland. His mother, Nora (née Sullivan) (b. 1929), was a maid, and his father, John Leary (1924–1985), was an auto mechanic. Being the son of Irish parents, Leary is a citizen of both the United States and Ireland. Leary is a third cousin of talk show host Conan O'Brien.
Leary attended Saint Peter's High School (now Saint Peter-Marian High School) in Worcester and graduated from Emerson College in Boston. At Emerson, he met fellow comic Mario Cantone, whom Leary considers to be his closest friend. While a student, Leary founded the Emerson Comedy Workshop, a troupe that continues on the campus today.
After graduating from Emerson in 1981, Leary taught comedy-writing classes at the school for five years. In May 2005 he received an honorary doctorate and spoke at his alma mater's undergraduate commencement ceremony; and is credited as Dr. Denis Leary on the cover of his 2009 book "Why We Suck".
Leary began working as a comedian at the Boston underground club Play It Again Sam's. However, his first real gig was at the Rascals Comedy Club as part of the TV show "The Rascals Comedy Hour", on October 18, 1990. He wrote and appeared on a local comedy series, "Lenny Clarke's Late Show", hosted by his friend Lenny Clarke and written by Martin Olson. Leary and Clarke both spoke about their early affiliations and influences in the Boston comedy scene in the documentary film "When Standup Stood Out" (2006). During Leary's time as a Boston-area stand-up comic, he developed his stage persona.
Leary appeared in sketches on the MTV game show "Remote Control", playing characters such as Keith Richards, co-host Colin Quinn's brother and artist Andy Warhol. He earned fame when he ranted about R.E.M. in an early 1990s MTV sketch. Several other commercials for MTV quickly followed, in which Leary would rant at high speeds about a variety of topics, playing off the then-popular and growing alternative scene. One of these rants served as an introduction to the video for "Shamrocks and Shenanigans (Boom Shalock Lock Boom)" by House of Pain. Leary released two records of his stand-up comedy: "No Cure for Cancer" (1993) and "Lock 'n Load" (1997). In late 2004, he released the EP "Merry F#%$in' Christmas", which included a mix of new music, previously unreleased recordings and some tracks from "Lock 'n Load".
In 1993, Leary's sardonic song about the stereotypical American male, "Asshole", achieved much notoriety. However, this bit was allegedly stolen from Louis C.K., as was discussed by C.K. during an interview on the Opie and Anthony Show. The song was voted No. 1 in an Australian radio poll and was used in Holsten Pils ads in the UK, with Leary's participation, and with adapted lyrics criticizing a drunk driver. The single was a minor hit there, peaking at No. 58 in the UK Singles Chart in January 1996.
In 1995, Leary was asked by Boston Bruins legend Cam Neely to help orchestrate a Boston-based comedy benefit show for Neely's cancer charity; this became Comics Come Home, which Leary has hosted annually ever since.
Leary has appeared in many films, including "The Sandlot" as Scott's stepfather Bill, "Monument Ave.", "The Matchmaker", "The Ref", "Draft Day", "Suicide Kings", "Dawg", "Wag the Dog", "Demolition Man", "Judgment Night", "The Thomas Crown Affair" and "Operation Dumbo Drop". He had a role in Oliver Stone's "Natural Born Killers" that was eventually cut. He held the lead role in two television series, "The Job" and "Rescue Me", and he co-created the latter, in which he played Tommy Gavin, a New York City firefighter dealing with alcoholism, family dysfunction and other issues in post-9/11 New York City.
Leary received Emmy Award nominations in 2006 and 2007 for Outstanding Lead Actor in a Drama Series for "Rescue Me", and in 2008 for Outstanding Supporting Actor in a Miniseries or a Movie for the HBO movie "Recount". Leary was offered the role of Dignam in "The Departed" (2006) but turned it down because of scheduling conflicts with "Rescue Me". He provided voices for characters in animated films, such as a fire-breathing dragon named Flame in the series "The Agents", a pugnacious ladybug named Francis in "A Bug's Life" and a prehistoric saber-toothed tiger named Diego in the "Ice Age" film series. He has produced numerous movies, television shows and specials through his production company, Apostle; these include Comedy Central's "Shorties Watchin' Shorties", the stand-up special "Denis Leary's Merry F#$%in' Christmas" and the movie "Blow".
As a Boston Red Sox fan, Leary narrated the official 2004 World Series film. In 2006, Leary and Lenny Clarke appeared on television during a Red Sox telecast and, upon realizing that Red Sox first baseman Kevin Youkilis is Jewish, delivered a criticism of Mel Gibson's antisemitic comments. As an ice hockey fan, Leary hosted the National Hockey League video "NHL's Greatest Goals". In 2003, he was the subject of the "Comedy Central Roast of Denis Leary".
Leary did the TV voiceover for MLB 2K8 advertisements, using his trademark rant style in baseball terms, and ads for the 2009 Ford F-150 pickup truck. He has also appeared in commercials for Hulu and DirecTV's NFL Sunday Ticket package. Leary was a producer of the Fox series "Canterbury's Law", and wrote and directed its pilot episode. "Canterbury's Law" aired in the spring of 2008 and was canceled after eight episodes. On September 9, 2008, Leary hosted the sixth annual "Fashion Rocks" event, which aired on CBS. In December of the year, he appeared in a video on funnyordie.com critiquing a list of some of his "best" films, titled "Denis Leary Remembers Denis Leary Movies". Also in 2008, Leary voiced a guest role as himself on the "Lost Verizon" episode of "The Simpsons".
On March 21, 2009, Leary began the Rescue Me Comedy Tour in Atlantic City, New Jersey. The 11-date tour, featuring "Rescue Me" co-stars Lenny Clarke and Adam Ferrara, was Leary's first stand-up comedy tour in 12 years. The Comedy Central special "Douchebags and Donuts", filmed during the tour, debuted on American television on January 16, 2011, with a DVD release on January 18, 2011.
Leary played Captain George Stacy in the movie "The Amazing Spider-Man", released in July 2012. He wrote the American adaptation of "Sirens". He is an executive producer of the documentary "Burn", which chronicles the struggles of the Detroit Fire Department. "Burn" won the 2012 Tribeca Film Festival Audience Award.
Leary created a television series for FX called "Sex & Drugs & Rock & Roll", taking the starring role himself. A 10-episode first season was ordered by FX, with the premiere on July 16, 2015. The show was renewed for a second season, broadcast in the summer of 2016, but was canceled after the broadcast of the second season.
Leary has been the narrator for NESN's documentary show about the Boston Bruins called "Behind the B" since the show began in 2013.
Leary has been married to author Ann Lembeck Leary since 1989. They met when he was her instructor in an English class at Emerson College. They have two children, son John Joseph "Jack" (born 1990) and daughter Devin (born 1992). Ann Leary published a memoir, "An Innocent, a Broad", about the premature birth of their son on a visit to London. She has also written a novel, "Outtakes From a Marriage", which was published in 2008. Her second novel, "The Good House", was published in 2013. Her essay in a New York Times column about her marriage to Denis inspired the Modern Love series Episode 4: "Rallying to Keep the Game Alive".
Leary is an ice hockey fan and has a backyard rink at his home in Roxbury, Connecticut, with piping installed under the ice surface to help it stay frozen. He is a fan of the Boston Bruins and the Boston Red Sox, as well as the Green Bay Packers.
Leary describes himself as a "Jack Kennedy Democrat" with some conservative ideologies, including support for the military. Leary told Glenn Beck, "I was a life-long Democrat, but now at my age, I've come to realize that the Democrats suck, and the Republicans suck, and basically the entire system sucks. But you have to go within the system to find what you want."
Leary has said of his religious beliefs, "I'm a lapsed Catholic in the best sense of the word. You know, I was raised with Irish parents, Irish immigrant parents. My parents, you know, prayed all the time, took us to Mass. And my father would sometimes swear in Gaelic. It doesn't get more religious than that. But, no, after a while, they taught us wrong. I didn't raise my kids with the fear of God. I raised my kids with the sense of, you know, to me, Jesus was this great guy..."
Leary currently resides in New York City.
On December 3, 1999, six firefighters from Leary's hometown of Worcester were killed in the Worcester Cold Storage Warehouse fire. Among the dead were Leary's cousin Jerry Lucey and his close childhood friend, Lt. Tommy Spencer. In response, the comedian founded the Leary Firefighters Foundation. Since its creation in 2000, the foundation has distributed over $2.5 million (USD) to fire departments in the Worcester, Boston and New York City areas for equipment, training materials, new vehicles and new facilities. Leary won $125,000 for the foundation on the game show "Who Wants to Be a Millionaire". He had close ties with WAAF, which in 2000 released the station album "Survive This!". Part of the proceeds from this album was donated to the Leary Firefighters Foundation.
A separate fund run by Leary's foundation, the Fund for New York's Bravest, has distributed over $2 million to the families of the 343 firemen killed in the September 11 attacks in 2001, in addition to providing funding for necessities such as a new mobile command center, first-responder training, and a high-rise simulator for the New York City Fire Department's training campus. As the foundation's president, Leary has been active in all of the fundraising. In the aftermath of Hurricane Katrina in New Orleans, Leary donated over a dozen boats to the New Orleans Fire Department to aid in rescue efforts in future disasters. The foundation also rebuilt entire NOLA firehouses.
For many years, Leary had been friends with fellow comedian Bill Hicks. But when Leary's comedy album "No Cure for Cancer" was released, Leary was accused of stealing Hicks' act and material, and the friendship ended abruptly. In April 1993, the "Austin Comedy News" remarked on the similarities of Leary's performance: "Watching Leary is like seeing Hicks from two years ago. He smokes with the same mannerisms. (Hicks recently quit.) He sports the same attitude, the same clothes. He touches on almost all of the same themes. Leary even invokes Jim Fixx." When asked about this, Hicks told the magazine, "I have a scoop for you. I stole his [Leary's] act. I camouflaged it with punchlines, and to really throw people off, I did it before he did".
At least three stand-up comedians have gone on the record stating they believe Leary stole Hicks' material, comedic persona and attitude. One similar routine was about the so-called Judas Priest "suicide trial," during which Hicks says, "I don't think we lost a cancer cure."
During Leary's 2003 Comedy Central Roast, comedian Lenny Clarke, a friend of Leary's, said there was a carton of cigarettes backstage from Bill Hicks with the message, "Wish I had gotten these to you sooner." This joke was cut from the final broadcast.
The feud is also mentioned in Cynthia True's biography "American Scream: The Bill Hicks Story":
According to the book, True said that upon hearing a tape of Leary's album "No Cure for Cancer", "Bill was furious. All these years, aside from the occasional jibe, he had pretty much shrugged off Leary's lifting. Comedians borrowed, stole stuff and even bought bits from one another. Milton Berle and Robin Williams were famous for it. This was different. Leary had, practically line for line, taken huge chunks of Bill's act and "recorded" it."
In a 2008 appearance on "The Opie and Anthony Show", comedian Louis CK claimed that Leary stole his "I'm an asshole" routine, which was then expanded upon and turned into a hit song by Leary. On a later episode of the same show, Leary challenged this assertion by claiming to have co-written the song with Chris Phillips.
In his 2008 book "Why We Suck: A Feel Good Guide to Staying Fat, Loud, Lazy and Stupid", Leary wrote:
In response to the controversy, Leary stated that the quote was taken out of context and that in that paragraph he had been talking about what he calls the trend of "unwarranted" over-diagnosis of autism, which he attributed to American parents seeking an excuse for behavioral problems and under-performance. Later, he apologized to parents with autistic children whom he had offended. | https://en.wikipedia.org/wiki?curid=8878 |
Recreational use of dextromethorphan
Dextromethorphan, or DXM, a common active ingredient found in many over-the-counter cough suppressant cold medicines, is used as a recreational drug and entheogen for its dissociative effects. It has almost no psychoactive effects at medically recommended doses. Dextromethorphan has powerful dissociative properties when administered in doses well above those considered therapeutic for cough suppression. Recreational use of DXM is sometimes referred to in slang form as "robo-tripping", whose prefix derives from the Robitussin brand name, or "Triple Cs", which derives from the Coricidin brand. (The pills were printed with "CCC" for "Coricidin Cough and Cold".) However, this brand presents a danger when used at recreational doses due to the presence of chlorpheniramine.
In over-the-counter formulations, DXM is often combined with acetaminophen (paracetamol, APAP) to relieve pain and to prevent recreational use; however, to achieve DXM's dissociative effects, the maximum daily therapeutic dose of 4000 mg of APAP is often exceeded, potentially causing acute or chronic liver failure. This makes abuse of, and the development of tolerance to, products that contain both DXM and APAP potentially fatal.
An online essay first published in 1995 entitled "The DXM FAQ" described dextromethorphan's potential for recreational use, and classified its effects into plateaus.
Owing to its recreational use and theft concerns, many retailers in the US have moved dextromethorphan-containing products behind the counter so that one must ask a pharmacist to receive them or be 18 years (19 in New York and Alabama, 21 in Mississippi) or older to purchase them. Some retailers also give out printed recommendations about the potential for abuse with the purchase of products containing dextromethorphan.
At high doses, dextromethorphan is classified as a dissociative general anesthetic and hallucinogen, similar to the controlled substances ketamine and phencyclidine (PCP). Also like those drugs, dextromethorphan is an NMDA receptor antagonist. It generally does not produce withdrawal symptoms characteristic of physical dependence-inducing substances, but cases of psychological dependence have been reported. Due to dextromethorphan's selective serotonin reuptake inhibitor-like action, the sudden cessation of recreational dosing in tolerant individuals can result in mental and physical withdrawal symptoms similar to the withdrawal from SSRIs. These withdrawal effects can manifest as psychological effects, including depression, irritability, cravings, and as physical effects, including lethargy, body aches, and a sensation of unpleasant tingling, not unlike a mild "electric shock".
Dextromethorphan's effects have been divided into four plateaus. The first plateau (1.5 to 2.5 mg per kg body weight) is described as having euphoria, auditory changes, and change in perception of gravity. The second plateau (2.5 to 7.5 mg/kg) causes intense euphoria, vivid imagination, and closed-eye hallucinations. The third and fourth plateaus (7.5 mg/kg and over) cause profound alterations in consciousness, and users often report out-of-body experiences or temporary psychosis. Flanging (speeding up or slowing down) of sensory input is also a characteristic effect of recreational use.
Also, a marked difference is seen between dextromethorphan hydrobromide, contained in most cough suppressant preparations, and dextromethorphan polistirex, contained in the brand name preparation Delsym. Polistirex is a polymer bonded to the dextromethorphan that requires more time for the stomach to process, as an ion exchange reaction must take place prior to the drug's dissolution into the blood. Because of this, dextromethorphan polistirex takes considerably longer to absorb, resulting in more gradual and longer lasting effects reminiscent of time-release pills. As a cough suppressant, the polistirex version lasts up to 12 hours. This duration also holds true when used recreationally.
In 1981, a paper by Gosselin estimated that the lethal dose is between 50 and 500 mg/kg. Doses as high as 15–20 mg/kg are taken by some recreational users. A single case study suggests that the antidote to dextromethorphan overdose is naloxone, administered intravenously.
In addition to producing PCP-like mental effects, high doses may cause a false-positive result for PCP and opiates in some drug tests.
Dextromethorphan has not been shown to cause vacuolization in animals, also known as Olney's lesions, despite early speculation that it might, due to similarities with PCP. In rats, oral administration of dextromethorphan did not cause vacuolization in laboratory tests. Oral administration of dextromethorphan repeatedly during adolescence, however, has been shown to impair learning in those rats during adulthood. The occurrence of Olney's lesions in humans, however, has not been proven or disproven. William E. White, author of the "DXM FAQ", has compiled informal research from correspondence with dextromethorphan users suggesting that heavy abuse may result in various deficits corresponding to the brain areas affected by Olney's lesions; these include loss of episodic memory, decline in ability to learn, abnormalities in some aspects of visual processing, and deficits of abstract language comprehension. In 2004, however, White retracted the article in which he made these claims.
A formal survey of dextromethorphan users showed that more than half of users reported experience of these withdrawal symptoms individually for the first week after long-term/addictive dextromethorphan use: fatigue, apathy, flashbacks, and constipation. Over a quarter reported insomnia, nightmares, anhedonia, impaired memory, attention deficit, and decreased libido. Rarer side effects included panic attacks, impaired learning, tremor, jaundice, urticaria (hives), and myalgia. DXM has also been "known to increase the frequency of complex partial seizures in epileptics by 25% compared to placebo." Frequent and long-term usage at high doses could possibly lead to toxic psychosis and other permanent psychological problems. Medical DXM use has not been shown to cause the above issues.
Misuse of multisymptom cold medications, rather than using a cough suppressant whose sole active ingredient is dextromethorphan, carries significant risk of fatality or serious illness. Multisymptom cold medicines contain other active ingredients, such as paracetamol (acetaminophen), chlorpheniramine, and phenylephrine, any of which can cause permanent bodily damage such as kidney failure, or even death, if taken on the generally accepted recreational dosing scale of dextromethorphan. Sorbitol, an artificial sweetener found in many cough syrups containing dextromethorphan, can also have negative side effects, including diarrhea and nausea when taken at recreational dosages of dextromethorphan. Guaifenesin, an expectorant commonly accompanying dextromethorphan in cough preparations, can cause unpleasant symptoms including vomiting, nausea, kidney stones, and headache.
Combining dextromethorphan with other substances can compound risks. Central nervous system (CNS) stimulants such as amphetamine and/or cocaine can cause a dangerous rise in blood pressure and heart rate. CNS depressants such as ethanol (drinking alcohol) will have a combined depressant effect, which can cause a decreased respiratory rate. Combining dextromethorphan with other CYP2D6 substrates can cause both drugs to build to dangerous levels in the bloodstream. Combining dextromethorphan with other serotonergic drugs could possibly cause serotonin toxicity, an excess of serotonergic activity in the CNS and peripheral nervous system.
Dextromethorphan's hallucinogenic and dissociative effects can be attributed largely to dextrorphan (DXO), a metabolite produced when dextromethorphan is metabolized by the body. Both dextrorphan and dextromethorphan are NMDA receptor antagonists, like the dissociative hallucinogenic drugs ketamine and PCP, although dextrorphan is more potent than its "parent molecule" dextromethorphan.
As NMDA receptor antagonists, dextrorphan and dextromethorphan inhibit the excitatory amino acid and neurotransmitter glutamate in the brain. This can effectively slow, or even shut down certain neural pathways, preventing areas of the brain from communicating with each other. This leaves the user feeling dissociated or disconnected, experienced as brain fog or derealization.
Dextromethorphan's euphoric effects have sometimes been attributed to an increase in dopamine levels, since such an increase generally correlates with pleasurable responses to drugs, as is observed with some clinical antidepressants, as well as some recreational drugs. However, the effects of dextrorphan and dextromethorphan, and other NMDA receptor antagonists, on dopamine levels are a disputed subject. Studies show that the NMDA receptor antagonists ketamine and PCP do raise dopamine levels, although other studies show that another NMDA receptor antagonist, dizocilpine, has no effect on dopamine levels. Some findings even suggest that dextromethorphan can actually "counter" the dopamine-increasing effect caused by morphine. Due to these conflicting results, the actual effect of dextromethorphan on dopamine levels remains undetermined.
Antitussive preparations containing dextromethorphan are legal to purchase from pharmacies in most countries, with some exceptions being UAE, France, Sweden, Estonia, and Latvia. In Russia, dextromethorphan (commonly sold under the brand names Tussin+ and Glycodin) is a Schedule III controlled substance and is placed in the same list as benzodiazepines and the majority of barbiturates.
No legal distinction currently exists in the United States between medical and recreational use, sale, or purchase. Some states and store chains have implemented restrictions, such as requiring signatures for DXM sale, limiting quantities allowable for purchase, and requiring that purchasers be over the age of majority in their state. The sale of dextromethorphan in its pure powder form may incur penalties, although no explicit law exists prohibiting its sale or possession, other than in Illinois. Cases of individuals being sentenced to time in prison and other penalties for selling pure dextromethorphan in this form have been reported, because of the incidental violation of more general laws for the sale of legitimate drugs – such as resale of a medication without proper warning labels.
Dextromethorphan was excluded from the Controlled Substances Act (CSA) of 1970 and was specifically excluded from the Single Convention on Narcotic Drugs. As of 2010, it was still excluded from U.S. Schedules of Controlled Substances; however, officials have warned that it could still be added if increased abuse warrants its scheduling. The motivation behind its exclusion from the CSA was that under the CSA, all optical isomers of listed Schedule II opiates are automatically Schedule II substances. Since dextromethorphan is an optical isomer of the Schedule II opiate levomethorphan (but does not act like an opiate), an exemption was necessary to keep it an uncontrolled substance. The Federal Analog Act does not apply to dextromethorphan because a new drug application has been filed for it.
Although previously available over the counter, single-component dextromethorphan is now prohibited for sale by the National Agency of Drug and Food Control of the Republic of Indonesia (BPOM-RI), with or without prescription. Indonesia is the only country in the world that makes single-component dextromethorphan illegal even by prescription, and violators may be prosecuted by law. The Indonesian National Narcotic Bureau has even threatened to revoke pharmacies' and drug stores' licenses if they still stock dextromethorphan, and will notify the police for criminal prosecution. As a result of this regulation, 130 drugs have been withdrawn from the market, but drugs containing multicomponent dextromethorphan can be sold over the counter. In its official press release, the bureau also stated that dextromethorphan is often used as a substitute for marijuana, amphetamine, and heroin by drug abusers, and that its use as an antitussive is less beneficial nowadays.
The Director of Narcotics, Psychotropics, and Addictive Substances Control (NAPZA) BPOM-RI, Dr. Danardi Sosrosumihardjo, SpKJ, explains that dextromethorphan, morphine, and heroin are derived from the same tree, and states that the effect of dextromethorphan is equivalent to 1/100 that of morphine and injected heroin.
By contrast, the Deputy of Therapeutic Product and NAPZA Supervision BPOM-RI, Dra. Antonia Retno Tyas Utami, Apt. MEpid., states that dextromethorphan, being chemically similar to morphine, has a much more dangerous and direct effect on the central nervous system, thus causing mental breakdown in the user. She also claimed, without citing any prior scientific study or review, that unlike morphine users, dextromethorphan users cannot be rehabilitated. This claim is contradicted by numerous scientific studies which show that naloxone alone offers effective treatment and promising therapy results in treating dextromethorphan addiction and poisoning. More questionably still, Dra. Antonia Retno Tyas Utami also claimed high rates of dextromethorphan abuse, including fatalities, in Indonesia, and suggested that codeine, despite being a more physically addictive µ-opioid-class antitussive, be made available as an alternative to dextromethorphan. | https://en.wikipedia.org/wiki?curid=8879 |